Effect of Anti-TNF-α on the Development of Offspring and Pregnancy Loss During Pregnancy in Rats
Background: Etanercept binds soluble tumor necrosis factor-alpha (TNF-α) and is classified as pregnancy risk category B. An increase in the TNF-α level causes preterm labour or miscarriage. Lipopolysaccharides trigger preterm birth and abortion via the production of pro-inflammatory cytokines. Cytokines are divided into two groups, pro-inflammatory and anti-inflammatory. TNF-α is a pro-inflammatory cytokine, whereas interleukin (IL)-10 is an anti-inflammatory cytokine. IL-10 predominates in normal pregnancy, while TNF-α characterizes abortion and recurrent abortion. The aim of this study was to determine the effect of etanercept on the development of offspring and lipopolysaccharide-induced pregnancy loss. Materials, Methods & Results: Twenty-eight female and 7 male Wistar rats (5-6 months old) were used in this study. The rats were fed a standard pelleted diet and tap water ad libitum. After female rats were caged with males for 1 day, the presence of a vaginal plug was designated as day 0 of pregnancy. The 28 pregnant Wistar rats were divided into 4 equal groups, as follows: control (0.3 mL of normal saline solution intravenously on day 10 of pregnancy); etanercept (0.8 mg/kg/day intraperitoneally on days 9 and 10 of pregnancy); lipopolysaccharide (160 μg/kg intravenously on day 10 of pregnancy); and etanercept + lipopolysaccharide. Blood samples were obtained from the tail vein on day 10 of pregnancy (3 h after lipopolysaccharide administration). All animals were followed during pregnancy. Pregnancy rates and offspring characteristics were determined. TNF-α and IL-10 levels were measured using an ELISA reader. Etanercept alone did not have any negative effects, but etanercept did not prevent (P < 0.05) lipopolysaccharide-induced pregnancy loss. Higher TNF-α and IL-10 levels were measured (P < 0.05) in the etanercept + lipopolysaccharide group compared to the other groups. Discussion: It is well known that the use of etanercept does not increase pregnancy loss.
In this study, higher pregnancy rates were determined in the control and etanercept groups than in the lipopolysaccharide and etanercept + lipopolysaccharide groups. The use of anti-TNF-α agents has been reported to decrease the proportion of fetal deaths in lipopolysaccharide-administered pregnant animals. While TNF-α concentrations are low at the onset of pregnancy, they increase to a peak level at the onset of labour. Embryonic resorption is induced by Th1 cytokines (TNF-α and IL-2) and low-dose lipopolysaccharide without affecting maternal survival, and in early pregnancy the implantation area of the embryo is extremely sensitive to these molecules. In the current study, etanercept increased the concentrations of TNF-α and IL-10 compared to the lipopolysaccharide group. IL-10 has a protective role, while TNF-α is an abortive factor during pregnancy. Thus, etanercept did not prevent pregnancy loss; this finding may reflect an insufficient dose of etanercept. Adverse effects did not occur in the offspring of the etanercept or control groups, and there was no statistically significant difference between the two groups. Adverse pregnancy outcomes such as stillbirth, low birth weight, spontaneous abortion and hereditary malformations are not associated with TNF-α inhibitors. In conclusion, etanercept does not pose a major teratogenic risk and has no preventive effect with respect to infection-dependent pregnancy loss.
INTRODUCTION
Anti-tumor necrosis factor (anti-TNF) drugs block the functioning of TNF-α and are used for the treatment of immune-mediated diseases, such as psoriasis and rheumatoid arthritis [15,21]. TNF-α receptors have a pivotal role in pregnancy and the effects of TNF-α are neutralized by soluble forms of TNF-α receptors [3].
Pregnancy is a complex process that leads to activation of the maternal cellular and humoral immune systems [2,20]. The maternal immune system recognizes or rejects fetal antigens [27]. The rate of spontaneous abortion is 15-20%, with most unfavorable outcomes occurring during the first trimester of pregnancy [9].
Lipopolysaccharides (LPS) are derived from the cell wall of gram-negative bacteria. Because LPS triggers preterm birth and abortion via pro-inflammatory cytokines, LPS is used in experimental studies involving pregnancy [7,17,23]. Cytokines have an important role in the reproductive immune response and are divided into two groups (pro-inflammatory and anti-inflammatory) [3,13]. TNF-α (a pro-inflammatory cytokine) is a T-helper (Th) 1 cytokine, whereas interleukin (IL)-10 (an anti-inflammatory cytokine) is a Th2 cytokine [3,18]. Normal pregnancy is characterized by the predominant production of Th2 cytokines, while abortion and recurrent abortion are characterized by the predominant production of Th1 cytokines [6].
Since a high TNF-α level occurs in pregnancy loss, it was hypothesized that etanercept, an anti-TNF-α drug, may prevent LPS-induced pregnancy loss.
The aim of this study was to determine the effect of etanercept on LPS-induced pregnancy loss in rats and the development of offspring.
Animals
Twenty-eight female and 7 male Wistar rats (5-6 months old) were used in this study. The rats were fed a standard pelleted diet and tap water ad libitum. The animals were bred in standard cages on a 12-h light/dark cycle at room temperature in a humidity-controlled environment.
After female rats were caged with males for 1 day, the presence of a vaginal plug was designated as day 0 of pregnancy. Days 10-12 of rat pregnancy correspond roughly to the first trimester of human pregnancy [24]. Pregnant rats were randomly divided into 4 groups, as follows: control group, 0.3 mL of normal saline solution administered intravenously on day 10 of pregnancy (n = 7); etanercept group, etanercept administered intraperitoneally at 0.8 mg/kg/day on days 9 and 10 of pregnancy (n = 7); LPS group, LPS administered intravenously via the tail vein at 160 µg/kg on day 10 of pregnancy (n = 7); and etanercept + LPS group, etanercept administered intraperitoneally at 0.8 mg/kg/day on days 9 and 10 of pregnancy plus LPS administered intravenously at 160 µg/kg on day 10 of pregnancy (n = 7). Blood samples were obtained from the tail vein on day 10 of the experiment (3 h after LPS administration) and all animals were followed during pregnancy. Animals that did and did not give birth were recorded, and the development of the offspring was assessed. At the conclusion of the study, all animals were euthanized under thiopental sodium anaesthesia (Pental® sodium 1 g; 70 mg/kg, intraperitoneally).
Development of offspring
The weight and body length of the offspring were determined with a scale and digital calipers, respectively.
Statistical analysis
The pregnancy rates of the groups were evaluated using a chi-square test. The concentrations of TNF-α and IL-10 and the number of offspring in each group were compared with ANOVA and Duncan's test as a post-hoc test. The weights and lengths of the offspring in the control, etanercept, and etanercept + LPS groups were evaluated by ANOVA and the Duncan test on the first day, while the same data from the control and etanercept groups were evaluated by independent t-tests on days 8, 15, and 22 because only 2 groups remained. Analyses were performed with SPSS 19.0. Data are expressed as the mean ± SE. Significance was accepted at the P < 0.05 level.
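The analysis pipeline described above can be sketched in Python with SciPy. The counts and concentrations below are illustrative placeholders, not the study's data, and SciPy has no Duncan test, so Tukey's HSD stands in here as the post-hoc comparison.

```python
import numpy as np
from scipy import stats

# Hypothetical pregnancy outcomes (delivered vs. lost) per group of n = 7;
# illustrative placeholders only, not the study's data.
observed = np.array([
    [7, 0],  # control
    [7, 0],  # etanercept
    [1, 6],  # LPS
    [2, 5],  # etanercept + LPS
])
chi2, p_preg, dof, _ = stats.chi2_contingency(observed)

# One-way ANOVA on a continuous endpoint (e.g. IL-10), again with made-up
# values; Tukey's HSD replaces Duncan's test, which SciPy does not provide.
il10 = {
    "control":        [12, 14, 11, 13, 12, 15, 13],
    "etanercept":     [13, 12, 14, 12, 13, 14, 12],
    "lps":            [25, 28, 24, 27, 26, 29, 25],
    "etanercept_lps": [55, 60, 52, 58, 57, 61, 54],
}
f_stat, p_anova = stats.f_oneway(*il10.values())
posthoc = stats.tukey_hsd(*il10.values())

print(f"chi2 p = {p_preg:.4f}, ANOVA p = {p_anova:.2e}")
```

With such clearly separated placeholder groups, both tests reject the null hypothesis at the P < 0.05 level used in the paper.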
RESULTS
The pregnancy rates of the groups are shown in Table 1, and TNF-α and IL-10 levels are presented in Table 2. All animals were followed during pregnancy and for 22 days after birth. Etanercept did not prevent (P < 0.05) LPS-induced pregnancy loss, and did not exhibit adverse effects on the pregnancy rate (Table 1). The TNF-α and IL-10 levels were higher (P < 0.05) in the etanercept + LPS group compared to the other groups. Although the TNF-α level was not detectable in the etanercept and control groups, the IL-10 level was lower (P < 0.05) in the etanercept and control groups (Table 2). The weights and lengths of the offspring on days 1, 8, 15, and 22 are shown in Table 3. Adverse effects did not occur during the development of the offspring born to rats in the etanercept-alone and control groups. The number of offspring differed significantly (P < 0.05) between the LPS and etanercept + LPS groups and the control and etanercept groups (Figure 1).
DISCUSSION
Etanercept is a dimeric fusion protein that binds only soluble tumor necrosis factor-alpha (TNF-α) and is classified by the Food and Drug Administration as pregnancy risk category B [8,21]. Adverse pregnancy outcomes, such as miscarriage, preterm labour, and pre-eclampsia, result from changes in TNF-α and its receptors [3]. Spontaneous abortions and pre-eclampsia are complications in pregnant women, and these complications result from a shift from a Th2-biased to a Th1-biased cytokine profile in maternal serum [3,6,28].
The aim of this study was to determine the effect of etanercept on cytokine levels and offspring development in LPS-induced abortion. Several inflammatory molecules play important roles in the mechanism underlying early pregnancy loss. Excessive inflammation results in unfavorable outcomes, such as spontaneous abortion and fetal resorption [1,14]. Recurrent abortion is classically defined as three or more pregnancy losses, usually occurring before 20 weeks of gestation. Recently, recurrent spontaneous miscarriage has been redefined as the spontaneous loss of two or more clinical pregnancies [12,29].
Animal models have been used to elucidate pregnancy success [16]. In the current study, etanercept alone did not have a negative effect on the pregnancy rate and did not prevent LPS-induced abortion (Table 1). It has been reported that etanercept, which is classified as pregnancy risk category B, can be used in pregnant women; TNF-α antagonists have no known embryotoxic or teratogenic effects. In addition, there is no evidence of increased pregnancy loss following the use of etanercept [25]. In this study, higher pregnancy rates were determined in the control and etanercept groups than in the LPS and etanercept + LPS groups (Table 1). The use of anti-TNF-α agents decreases the proportion of fetal deaths in LPS-administered pregnant mice [11].
When the TNF-α level increases during pregnancy, placental perfusion is inadequate, thrombotic events increase, and placental and fetal hypoxia occur [22]. In addition, a higher TNF-α level was measured in the etanercept + LPS group in the current study (Table 2). Cytokines play a pivotal role in the pregnancy process [26]. Spontaneous abortion, preterm labour, pre-eclampsia, and intrauterine growth restriction are adverse pregnancy outcomes that can result from dysregulation of cytokine networks [4]. While the TNF-α level is low in the first trimester of pregnancy, it reaches a peak during the onset of labour [25].
Low-dose LPS and Th1 cytokines (TNF-α and IL-2) induce embryonic resorption without affecting maternal survival, and the implantation site is extremely sensitive to these molecules during early pregnancy [1]. It has been reported that TNF-α production in fetal membranes is increased by LPS. TNF-α has an important role as a cytokine in pregnancy and leads to the induction of labour in synergy with other inflammatory cytokines, which cause uterine contractions [3,25]. TNF-α has an effect on blastocyst implantation, endometrial vascular permeability, and uterine decidualization. High serum and amniotic fluid levels of TNF-α are determinants of fetal growth retardation and the onset of labour. It has been reported that recurrent pregnancy loss is closely linked to the LPS-increased TNF-α level. The mechanism underlying fetoplacental resorption caused by LPS may be based on haemorrhage and necrosis resulting from a direct effect of TNF-α on the placental vasculature [10,25].
In the current study, etanercept increased the concentration of TNF-α (14.5-fold) and the concentration of IL-10 (2.2-fold) when compared to the LPS group (Table 2). IL-10 has a protective role, while TNF-α is an abortive factor during pregnancy [5]. Thus, etanercept did not prevent pregnancy loss; this finding may reflect an insufficient dose of etanercept. In addition, IL-10 injections have been reported to prevent LPS-induced abortion and to decrease LPS-induced fetal death [22,23].
Adverse effects did not occur in the offspring of the etanercept or control groups, and there was no statistically significant difference between the two groups. It has been noted that TNF-α inhibitors are not associated with adverse pregnancy outcomes, including spontaneous abortion, stillbirth, low birth weight, and congenital malformations. Based on a study in which cynomolgus monkeys received TNF-α inhibitors at several hundred times the recommended human dose, there was no evidence of teratogenic effects or of adverse pregnancy or maternal outcomes in embryo-fetal and perinatal developmental toxicity studies [19].
The decreased pregnancy rate in the etanercept + LPS and LPS groups (Table 1) may reflect the inability of etanercept to increase the level of IL-10 sufficiently to counteract LPS-stimulated TNF-α production. In conclusion, etanercept may not prevent infection- or endotoxemia-mediated pregnancy loss.
CONCLUSION
Etanercept does not pose a major teratogenic risk and has no preventive effects with respect to infection-dependent pregnancy loss.
Trigonometry of 'complex Hermitian' type homogeneous symmetric spaces
This paper contains a thorough study of the trigonometry of the homogeneous symmetric spaces in the Cayley-Klein-Dickson family of spaces of 'complex Hermitian' type and rank one. The complex Hermitian elliptic CP^N and hyperbolic CH^N spaces, their analogues with indefinite Hermitian metric and some non-compact symmetric spaces associated to SL(N+1,R) are the generic members in this family. The method encapsulates trigonometry for this whole family of spaces into a single "basic trigonometric group equation", and has 'universality' and '(self)-duality' as its distinctive traits. All previously known results on the trigonometry of CP^N and CH^N follow as particular cases of our general equations. The physical Quantum Space of States of any quantum system belongs, as the complex Hermitian space member, to this parametrised family; hence its trigonometry appears as a rather particular case of the equations we obtain.
Introduction
In a previous paper [1] the trigonometry of the complete family of symmetric spaces of rank one and real type was studied. These spaces are also called Cayley-Klein (hereafter CK) real spaces and were first discussed by Klein, extending Cayley's idea of 'projective metrics'. In two dimensions there are nine real spaces with a real quadratic symmetric metric of any (positive, zero, negative) constant curvature and any (positive definite, degenerate, indefinite) signature [2]. Further to this, the paper [1] had the long-run aim of opening an avenue for exploring the trigonometry of general symmetric homogeneous spaces.
Next to the spaces of real type, there are spaces of 'complex' type. In 'complex' dimension N there are 3^{N+1} such geometries [3]; 'complex' type means here that these spaces are coordinatised by elements of a one-step extension of R through a labelled Cayley-Dickson procedure R → R_η which adjoins an imaginary unit i with i^2 = −η to R, producing either the complex, dual or split-complex numbers according as η > 0, = 0, < 0. We will term these spaces Cayley-Klein-Dickson (CKD). They are 'Hermitian', since they are related to a scalar product with 'complex' values and Hermitian-like symmetry. Within this family, only the spaces coordinatised by ordinary complex numbers (where η > 0, which can be rescaled to η = 1 and i^2 = −1) are actually complex spaces; after this restriction, there are only 3^N CK complex-type geometries in complex dimension N [4]. For N = 2 there are nine 2D complex Hermitian CK spaces, with constant holomorphic curvature (K_hol > 0, = 0 or < 0) and a Hermitian metric of either definite, degenerate or indefinite signature. All these are Hermitian symmetric spaces with a complex structure, hence Kählerian, but only the three spaces with a positive definite Hermitian metric belong to the restricted family of the so-called two-point homogeneous spaces [6]. These are the elliptic Hermitian space, i.e. the complex projective space CP^2 with the Fubini-Study metric, the Hermitian hyperbolic space CH^2, which can be realized in the interior of a Hermitian quadric in CP^2 like its real analogue, and the Hermitian euclidean space CR^2, a two-dimensional Hilbert space, as the common 'limiting' space.
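The labelled Cayley-Dickson extension described above can be made concrete with a minimal sketch; the class name and layout are illustrative, not from the paper, but the multiplication rule (a + bi)(c + di) = (ac − η bd) + (ad + bc)i is exactly the one induced by i^2 = −η.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EtaComplex:
    """Number a + b*i in the one-step Cayley-Dickson extension of R,
    with the adjoined unit satisfying i**2 = -eta.  eta > 0 gives the
    ordinary complex numbers, eta = 0 the dual numbers, and eta < 0
    the split-complex numbers."""
    a: float
    b: float
    eta: float

    def __mul__(self, other):
        assert self.eta == other.eta  # numbers must live in the same extension
        return EtaComplex(
            self.a * other.a - self.eta * self.b * other.b,
            self.a * other.b + self.b * other.a,
            self.eta,
        )

    def conj(self):
        return EtaComplex(self.a, -self.b, self.eta)

    def sq_modulus(self):
        # z * conj(z) has vanishing imaginary part; its real part is
        # a^2 + eta * b^2, definite, degenerate or indefinite with eta.
        return self.a ** 2 + self.eta * self.b ** 2

# The adjoined unit in the three regimes of eta:
i_complex = EtaComplex(0.0, 1.0, 1.0)    # i^2 = -1  (complex)
i_dual    = EtaComplex(0.0, 1.0, 0.0)    # i^2 =  0  (dual)
i_split   = EtaComplex(0.0, 1.0, -1.0)   # i^2 = +1  (split-complex)
```

The sign of `sq_modulus` reflects the definite (η > 0), degenerate (η = 0) or indefinite (η < 0) character of the induced 'Hermitian' product mentioned in the text.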
If degenerate and indefinite Hermitian products are allowed, a CK family of nine complex 2D spaces is obtained: the complex elliptic, euclidean and hyperbolic Hermitian planes with a positive definite metric, the complex co-euclidean, galilean and co-minkowskian (or Anti Newton-Hooke, Galilean and Newton-Hooke) Hermitian planes with a degenerate metric, and finally the indefinite-metric complex co-hyperbolic, minkowskian and doubly hyperbolic (or Anti De Sitter, Minkowskian and De Sitter) Hermitian planes. The last six spaces are the complex Hermitian analogues of the three non-relativistic and three relativistic space-times. In group-theoretical terms, all these nine Hermitian spaces appear as four generic cases and five non-generic ones within the complex CK family. The generic cases are SU(3)/(U(1) ⊗ SU(2)), SU(2,1)/(U(1) ⊗ SU(2)), SU(2,1)/(U(1) ⊗ SU(1,1)), the last hosting two different spaces with 'time-like' and 'space-like' complex lines interchanged. Non-generic cases are contractions of these four generic ones, with either curvature vanishing, metric degenerating or both. Within the full complex CK family of spaces of complex type, results on trigonometry are only available, as far as we know, for the Hermitian spaces with positive definite metric and constant (either positive or negative) holomorphic curvature [5,7,8,9,10,11,12,13]; a review of these results is included in Section 2.
The spaces in the subfamily η < 0 are much less known; they do not have a complex structure, but its 'split complex' analogue. Their generic members correspond to the symmetric homogeneous space SL(3,R)/(SO(1,1) ⊗ SL(2,R)), and the non-generic ones to some of its contractions. The trigonometry of such spaces has apparently not been studied. The spaces with η = 0 appear as common contractions of the spaces with η > 0 and η < 0.
In this paper we set out the task of studying in full detail the trigonometry of the complete family of spaces of 'complex' type. It clearly suffices to consider the two-dimensional case, as a triangle in any such CKD 'complex'-type space of 'complex' dimension N is fully contained in a totally geodesic subspace of 'complex' dimension 2.
The approach's distinctive traits are the following. First, it covers at once the trigonometry of the whole family of 3^3 = 27 such geometries, parametrized by three real labels η; κ_1, κ_2. The study of the trigonometry of the family as a whole is in fact easier than its study for just one space at a time. Second, it gives a clear view of several duality relationships between the trigonometry of these different spaces. In particular it explicitly displays the self-duality of the Hermitian elliptic space (analogous to the self-duality of the real sphere S^2), which is completely hidden in the trigonometric equations derived for this space in [9,10,11,13]. Third, it gives more than was previously known: all previously known equations appear as a rather small subset of the equations to be derived here. Fourth, it deals uniformly with the (contracted) non-generic cases, which correspond to vanishing curvature (κ_1 = 0) and/or degenerate ('Fubini-Study') metric (κ_2 = 0 or η = 0). 'Hermitian' analogues of the real angular and lateral excesses vanish in these limits, and are related to the triangle symplectic area and its dual quantity. And fifth, the presence of the additional Cayley-Dickson-type label η makes possible the consideration of a new type of contractions, encompassed by the limit η → 0, whose physical meaning is worth exploring.
The paper is intended to be self-contained, but reference to [1] may be helpful, especially for motivations and general background. A condensed review of already known results on the trigonometry of both the Hermitian elliptic and hyperbolic spaces [7,8,9,10,11,12,13] is given in Section 2. Here we display the basic equations, which in some cases were originally given in terms of angular invariants not considered in this paper (e.g. the Fubini-Study angles and the holomorphy inclinations at each vertex), and will be rewritten here so they can be easily identified with the ones we obtain. The choice of basic invariants adopted here affords the equations in a form which we believe is simpler than any other choice.
Information on CKD spaces of 'complex' type is given in Sections 3 and 4. Section 3 deals with the ordinary complex case, and therefore refers to a much more familiar situation. Section 4 comments on the main new traits that appear when the complex numbers are replaced by their parabolic (dual) and split (hyperbolic) versions.
In Section 5 the approach to trigonometry proposed in [1] is developed in depth for the complete CKD family. The whole of trigonometry for all these spaces is encapsulated in a single basic trigonometric group equation, involving sides, angles, and lateral and angular 'phases', and exhibiting (self)-duality explicitly across the whole family. This is mainly achieved through a choice of triangle invariants as the canonical parameters of two pairs of commuting isometries, a choice which should ring a bell for any physicist educated in Quantum Mechanics. By dealing with many spaces at once, this equation gives a perspective on relationships going far beyond any treatment devoted to the study of a single space. The behaviour of trigonometry when either the curvature vanishes or the metric degenerates is explicitly described through the CKD constants η; κ_1, κ_2. Duality is the main structural backbone of our approach, and the requirement to maintain duality explicitly in all expressions and at all stages acts as a kind of 'fingerprint' of the method. Cartan duality for symmetric spaces [14] appears here as a change of sign in either η, κ_1 or κ_2.
The basic trigonometric group equation is an equation for the parameter-dependent group of motions. By writing it in the fundamental 'complex' representation, a set of nine 'complex' equations follows. With these equations as a starting point, we explore in Section 6 the rather unknown territory of 'complex Hermitian' trigonometric equations. The background provided by real trigonometry makes this exploration easier by deliberately pursuing the analogies, while at the same time the relevant differences stand out clearly. The most interesting difference from real trigonometry is the natural splitting of the equations into two 'sectors'. The first involves quantities linked to Cartan generators of the motion group, where two new triangle invariants appear in a rather natural way; they play a specially important role since they are proportional to the symplectic area and coarea of the triangle. For the Hermitian elliptic space CP^2, these quantities were first found by Blaschke and Terheggen [7,8]. The other 'sector' is the 'complex' analogue of the whole set of real trigonometric equations. Most results in the real case have (sometimes several, and/or partial) analogues in this 'complex' trigonometry. The family form of all previously known equations is obtained here, together with a large number of new ones. In the CP^2 case (η = 1; κ_1 = 1, κ_2 = 1) all trigonometric functions of sides, angles, lateral and angular phases are the ordinary circular ones, and at a first reading the whole paper can be read restricting to this case; this may help to grasp the key ideas, while not losing sight of the increased scope afforded by the possibility of zero or negative η, κ_1 or κ_2.
The basic trigonometric identity for the family of 'complex Hermitian' spaces is also directly linked to other product formulas which we believe are new. They can be considered as a kind of 'complex Hermitian' Gauss-Bonnet formulas, and contain the totality of 'complex Hermitian' trigonometry in a nutshell, as they are equivalent to the basic trigonometric identity. The subject of such 'exponential product formulas' appears as a step in our derivation (Section 5.1), but it can be further developed by itself and affords a number of new identities; this will be discussed elsewhere.
There is actually a strong link between this study, which superficially looks like a work in geometry, and physics: the mathematical structure underlying the quantum space of states belongs to the 'complex Hermitian' CKD family. So as a byproduct of this work we obtain the basic equations of the 'trigonometry of the quantum space of states' [15,16]. Any Hilbert space appears in the CKD family as a Hermitian euclidean space (thus with labels η = 1; κ_1 = 0, κ_2 = κ_3 = ... = 1), and its projective Hilbert space, which plays in any quantum theory the role of the space of states, appears as the Hermitian elliptic space (η = 1; κ_1 = κ > 0, κ_2 = κ_3 = ... = 1). Geometric phases are related to trigonometric quantities: for the simplest 'triangle-type' loop in the quantum space of states, the Anandan-Aharonov phase appears, intriguingly, as one of the triangle invariants introduced by Blaschke and Terheggen sixty years ago. The paper by Sudarshan, Anandan and Govindarajan [17] gives a group-theoretical derivation of the Anandan-Aharonov phase (equal to the triangle symplectic area) for an infinitesimal triangle loop in CP^N; this result appears as a particular case of our exact expressions linking triangle elements for any finite triangle. The role of symplectic area for geodesic triangles in connection with coherent states and geometric phases has also been recently discussed by Berceanu [18] and by Boya, Perelomov and Santander [19]. A separate, more physically oriented paper [20] will be devoted to the trigonometry of the quantum space of states, in relation with geometric phases and, in general, with a view towards a more geometrical formulation of Quantum Mechanics [21].
2 A review on Hermitian trigonometry of CP^2 and CH^2
The Hermitian elliptic space, i.e., CP^N endowed with the natural Fubini-Study (FS) metric induced by the real part of the canonical flat Hermitian product in C^{N+1}, is a homogeneous Hermitian symmetric space. It has a natural complex structure, and the FS metric is kählerian and has constant holomorphic curvature; the Kähler form is induced by the imaginary part of the canonical Hermitian metric in C^{N+1} [22]. The standard choice of scale in the metric makes the maximum distance in CP^N equal to π/2, and the total length of any (closed) geodesic equal to π. With this choice the constant holomorphic curvature of CP^N is K_hol = 4; the ordinary sectional curvature K of the FS metric in CP^N, seen as a riemannian space of real dimension 2N, is not constant and lies in the interval 1 ≤ K ≤ 4. Complex projective geometry was studied by Cartan [23], building on the works of Study [24] and Fubini [25].
For the real projective space RP^N, trigonometry essentially reduces to spherical trigonometry [26]. The homogeneous symmetric character of CP^N also makes possible an explicit study of its trigonometry, which is however much more complicated than for RP^N. A common trait in most of the previous works on Hermitian trigonometry is to introduce a single real invariant for each side (which seems natural as CP^N is a rank-one space), but two real invariants for each vertex, which also seems natural due to the presence of two commuting factors in the group-theoretical description of the Hermitian elliptic space as the homogeneous space SU(N+1)/(U(1) ⊗ SU(N)).
For side invariants the canonical choice are the distances in the FS metric a (resp. b, c) between the vertices B and C (resp. C and A, A and B). To avoid non-generic special cases, all papers quoted before enforce the restrictions a, b, c < π/2; this means that each pair of sides does not meet the cut locus of the common vertex. In both the elliptic Hermitian space CP^N and the Hermitian hyperbolic space CH^N, identified with a suitable bounded domain of CP^N with the hyperbolic FS metric, each point [z] is a ray in the linear ambient space C^{N+1} [11]. Even if we assume the ambient position vectors normalized, ⟨z, z⟩ = 1, every ray in C^{N+1} still contains infinitely many normalized vectors differing only by a phase factor, [z] ≡ {e^{iε} z}. Let z_A, z_B, z_C denote arbitrarily chosen normalized position vectors in C^{N+1} (each defined up to a phase factor) for the three vertices. Then the length a of the side a ≡ BC can be obtained from the Hermitian product of the two (normalized) vectors z_B, z_C in the ambient linear space through cos a exp(iε_a) := ⟨z_B, z_C⟩. The phase ε_a is not a triangle invariant, as the vectors representing the vertices can still be modified by arbitrary phase factors.
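The relation cos a exp(iε_a) = ⟨z_B, z_C⟩ above can be checked numerically with a short sketch; the convention that the Hermitian product is conjugate-linear in its first argument is an assumption made here for definiteness, and the function names are illustrative.

```python
import numpy as np

def side_and_phase(zB, zC):
    """Given normalized representatives zB, zC in C^(N+1), return the
    FS side length a in [0, pi/2] and the (non-invariant) phase eps_a,
    from  <zB, zC> = cos(a) * exp(i * eps_a).
    Convention assumed here: <u, v> = sum(conj(u) * v)."""
    h = np.vdot(zB, zC)  # np.vdot conjugates its first argument
    return np.arccos(np.clip(abs(h), 0.0, 1.0)), np.angle(h)

rng = np.random.default_rng(0)

def random_ray(dim=3):
    # A random normalized representative of a point of CP^(dim-1).
    z = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return z / np.linalg.norm(z)

zB, zC = random_ray(), random_ray()
a, eps_a = side_and_phase(zB, zC)

# Re-phasing the representatives changes eps_a but not the length a:
a2, eps_a2 = side_and_phase(np.exp(0.7j) * zB, np.exp(-1.3j) * zC)
```

This makes explicit the statement in the text: a is a triangle invariant while ε_a is not.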
Vertex invariants are defined in terms of the tangent space to the Hermitian space, which will be considered here as a real vector space with a complex structure. At each point a vector tangent to a geodesic g is defined only up to a nonzero real factor. The tangent space to a complex (projective) line l at a point O is a real 2D subspace of the tangent space invariant under the complex structure, and can thus be identified with a complex 1D subspace; this subspace contains a one-parameter family of real 1D subspaces, corresponding to a one-parameter family of FS geodesics through O contained in l. For vertex invariants several real quantities can be used. In terms of the tangent vectors u, v to two (real one-dimensional) FS geodesic sides at the vertex C, these are: 1) the Hermitian angle between the sides seen as complex projective lines, denoted C; 2) the ordinary or FS angle Λ between u, v, computed as usual in the natural riemannian FS metric in CP^N or CH^N [27], g( , ) := Re⟨ , ⟩; 3) the FS angle between iu, v, denoted here Θ. For unit vectors u, v these are defined as:

cos C := |⟨u, v⟩|,   cos Λ := Re⟨u, v⟩,   cos Θ := Re⟨iu, v⟩.   (2.1)

Note that Ψ = π/2 − Θ is the minimum value of the riemannian FS angle between the tangent vector u and a totally geodesic RP^2 containing the geodesic with tangent vector v.
In addition to, yet not independent from, these, one can also consider 4) the holomorphy 'inclination' Υ of the real 2-flat tangent to the triangle at the vertex C, also called Kähler angle, inclination angle, holomorphy angle, slant angle, etc., between u and v [28]. This quantity depends only on the real 2-flat determined by u and v, and not on u, v separately; thus the names holomorphy inclination or Kähler inclination seem more appropriate. In terms of two unit vectors u, t which span the given 2-plane and are furthermore FS orthogonal, the holomorphy inclination Υ is given by:

cos Υ := Re⟨iu, t⟩.

The holomorphy inclination measures how this real 2-flat separates from the unique real 2-flat C_u containing u and invariant under the complex structure; the FS angle between iu and t is a natural measure of the separation between these two 2-planes, since C_u is spanned by u, iu and the given 2-plane is spanned by u, t (both pairs are FS orthogonal). Finally, 5) another angular invariant Φ of the pair of tangent vectors u, v, its pseudoangle or Kasner angle [28], is:

exp(iΦ) := ⟨u, v⟩ / |⟨u, v⟩|,  i.e.,  tan Φ = Im⟨u, v⟩ / Re⟨u, v⟩.

This angle has not been explicitly used in previous works on trigonometry in CP^N or CH^N; it is generically well defined for any two vectors in the tangent space at each point of CP^N or CH^N (i.e. between two intersecting FS geodesics with tangent vectors u, v at the intersection point), but becomes indeterminate when u, v are FS orthogonal. The angular invariant Φ is obviously meaningless between complex lines in these spaces.
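A minimal numerical sketch of these vertex invariants follows; the conjugate-linear-in-the-first-slot convention for the Hermitian product is an assumption, and the variable names are illustrative. Note that by construction cos Λ and cos Θ are the real and imaginary parts of ⟨u, v⟩, whose modulus is cos C.

```python
import numpy as np

def vertex_invariants(u, v):
    """Angular invariants of two unit tangent vectors u, v at a vertex:
    Hermitian angle C (between complex lines), FS angle Lam, the FS
    angle Th between i*u and v, and the Kasner pseudo-angle Phi."""
    h = np.vdot(u, v)  # Hermitian product, conjugate-linear in first slot
    C   = np.arccos(np.clip(abs(h), 0.0, 1.0))
    Lam = np.arccos(np.clip(h.real, -1.0, 1.0))
    Th  = np.arccos(np.clip(np.vdot(1j * u, v).real, -1.0, 1.0))
    Phi = np.angle(h)  # indeterminate when h = 0 (FS-orthogonal vectors)
    return C, Lam, Th, Phi

rng = np.random.default_rng(1)

def unit(dim=2):
    z = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return z / np.linalg.norm(z)

u, v = unit(), unit()
C, Lam, Th, Phi = vertex_invariants(u, v)
```

Since Re⟨u, v⟩ = cos C cos Φ and Re⟨iu, v⟩ = cos C sin Φ, the invariants computed here satisfy cos²Λ + cos²Θ = cos²C, a consistency check on the definitions.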
To sum up, there are several different choices available for two independent vertex invariants; see the review by Scharnhorst [28]. Authors studying trigonometry have made different choices, and the following relations will be useful for comparing the proposed trigonometric equations (there are of course similar relations for the corresponding 'angular invariants' at vertices A, B):

cos Λ = cos C cos Φ,   cos Θ = cos C sin Φ,   (2.4)
sin C = sin Λ sin Υ,   (2.5)
cos² C = cos² Λ + cos² Θ.   (2.6)

In choosing symbols for these angular invariants, we have tried to conform to the majority usage, but nevertheless we have systematically changed to capital letters, which allows a clear and systematic typographic rendering of the self-duality of the equations we will propose, by means of the change of upper/lower case letters.
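These links between the vertex invariants can be checked numerically. The following sketch is illustrative only: it assumes a hermitian product ⟨u, v⟩ linear in its first argument, a convention chosen here for concreteness; the signs of Θ and Φ depend on this choice, so the relation involving sin Φ is checked up to sign.

```python
import numpy as np

rng = np.random.default_rng(1)

# two generic tangent vectors at a point of CP^2, i.e. vectors in C^2
u = rng.normal(size=2) + 1j * rng.normal(size=2)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
herm = lambda x, y: np.vdot(y, x)              # <x,y> = sum x_i conj(y_i)
norm = lambda x: np.sqrt(herm(x, x).real)

cosC   = abs(herm(u, v)) / (norm(u) * norm(v))        # hermitian angle C
cosLam = herm(u, v).real / (norm(u) * norm(v))        # FS angle Lambda
cosThe = herm(1j * u, v).real / (norm(u) * norm(v))   # angle Theta between iu, v
Phi = np.angle(herm(u, v))                            # Kasner pseudoangle

# Kaehler/holomorphy inclination of span_R{u, v}: FS-orthonormalize v against u
sinLam = np.sqrt(1 - cosLam**2)
t = (v / norm(v) - cosLam * u / norm(u)) / sinLam     # unit, FS-orthogonal to u
cosUps = abs(herm(1j * u, t).real) / norm(u)

assert np.isclose(cosC**2, cosLam**2 + cosThe**2)              # relation (2.6)
assert np.isclose(cosLam, cosC * np.cos(Phi))                  # from (2.4)
assert np.isclose(abs(cosThe), abs(cosC * np.sin(Phi)))        # (2.4), up to sign
assert np.isclose(np.sqrt(1 - cosC**2),
                  sinLam * np.sqrt(1 - cosUps**2))             # relation (2.5)
```

The last assertion is the identity sin C = sin Λ sin Υ, which is the bridge between sine theorems written in terms of C and those written in terms of Λ, Υ.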
Trigonometry in the Hermitian spaces CP 2 and CH 2
The oldest general result is Coolidge's (1921) sine theorem [5]: the sides a, b, c and the angles A, B, C between the sides seen as complex lines are related by:

sin a / sin A = sin b / sin B = sin c / sin C.   (2.7)

The papers by Blaschke and Terheggen (1939) [7,8] (hereafter BT) contained the first complete approach to trigonometry in the elliptic hermitian space, identified with CP 2 . Unlike the phase ǫ a of ⟨z B , z C ⟩, which is meaningless as a quantity in CP 2 , the combination Ω := ǫ a + ǫ b + ǫ c is a triangle invariant, as can be clearly seen in the relation ⟨z A , z B ⟩⟨z B , z C ⟩⟨z C , z A ⟩ = cos a cos b cos c exp(iΩ). BT named this quantity ω, but for the reasons explained below we will change this notation to Ω. Let us now consider the (normalized) position vectors Z a , Z b , Z c of the poles [Z a ], [Z b ], [Z c ] of the three sides a, b, c, defined by BT in the ambient space C 3 through a 'vector product' as Z a = (z B × z C )/sin a, and cyclically, where the vector product is defined exactly as in the real case, without complex conjugation in any factor. Then the dual procedure (cos A exp(iǫ A ) := ⟨Z b , Z c ⟩ and ω := ǫ A + ǫ B + ǫ C ), applied to the poles of the three sides, produces four invariants: three angles A, B, C between sides seen as complex lines and another quantity ω, which was called τ by BT; these four quantities are dual to a, b, c, Ω. BT gave a complete set of equations for the hermitian elliptic space trigonometry. One is Coolidge's sine law (2.7), and there are two new equations, which we will call the Blaschke-Terheggen cosine theorems for sides and angles; the need for four quantities (e.g. a, b, c, Ω) to determine a triangle up to isometry in the elliptic hermitian space follows from these equations:

cos² a = (cos² A + cos² B cos² C − 2 cos A cos B cos C cos ω) / (sin² B sin² C),   (2.8)

cos² A = (cos² a + cos² b cos² c − 2 cos a cos b cos c cos Ω) / (sin² b sin² c).   (2.9)

Another approach was put forward by Shirokov, in a paper published posthumously by Rosenfeld.
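Since the BT construction is fully explicit in the ambient space C³, the theorems (2.7)-(2.9) can be verified numerically on a generic triangle. The sketch below is illustrative; it assumes the product ⟨z, w⟩ = Σ z_i w̄_i and the bilinear vector product without conjugation described above.

```python
import numpy as np

rng = np.random.default_rng(2)
unit = lambda z: z / np.sqrt(np.vdot(z, z).real)
herm = lambda x, y: np.vdot(y, x)        # <x,y> = sum x_i conj(y_i)

# three generic unit position vectors in C^3: vertices of a triangle in CP^2
zA, zB, zC = (unit(rng.normal(size=3) + 1j * rng.normal(size=3)) for _ in range(3))

# sides: cos a e^{i eps_a} := <z_B, z_C>, and cyclically; triangle invariant Omega
ca, cb, cc = abs(herm(zB, zC)), abs(herm(zC, zA)), abs(herm(zA, zB))
sa, sb, sc = (np.sqrt(1 - x * x) for x in (ca, cb, cc))
Omega = np.angle(herm(zA, zB) * herm(zB, zC) * herm(zC, zA))

# poles via the bilinear vector product (no conjugation): Z_a = (z_B x z_C)/sin a
Za, Zb, Zc = np.cross(zB, zC) / sa, np.cross(zC, zA) / sb, np.cross(zA, zB) / sc
cA, cB, cC = abs(herm(Zb, Zc)), abs(herm(Zc, Za)), abs(herm(Za, Zb))
sA, sB, sC = (np.sqrt(1 - x * x) for x in (cA, cB, cC))
omega = np.angle(herm(Za, Zb) * herm(Zb, Zc) * herm(Zc, Za))

# Blaschke-Terheggen cosine theorems (2.8), (2.9)
assert np.isclose(ca**2, (cA**2 + cB**2 * cC**2
                          - 2 * cA * cB * cC * np.cos(omega)) / (sB**2 * sC**2))
assert np.isclose(cA**2, (ca**2 + cb**2 * cc**2
                          - 2 * ca * cb * cc * np.cos(Omega)) / (sb**2 * sc**2))

# Coolidge sine theorem (2.7): the three ratios coincide
ratios = np.array([sa / sA, sb / sB, sc / sC])
assert np.allclose(ratios, ratios[0])
```

The check of (2.8) works because the poles of the pole triangle reproduce the original vertices, so the whole construction is self-dual.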
Shirokov took two angular invariants at each vertex: the riemannian FS angle Λ C between the two sides a, b considered as real-1D FS geodesics, and the holomorphy inclination Υ C of the real 2-flat spanned at the vertex C by the real tangent vectors to the two sides a, b. The equations which we shall call the Shirokov-Rosenfeld (SR) form of the sine theorem, double sine theorem for sides, cosine theorem for sides and double cosine theorem for sides, respectively, include:

cos 2a = cos 2b cos 2c + sin 2b sin 2c cos Λ A − 2 sin² b sin² c sin² Υ A sin² Λ A ,   (2.13)

as well as similar cosine and double cosine equations for the sides b, c. Among all these equations, only five are functionally independent (for instance (2.12) and (2.13) are equivalent). Note that the SR sine theorem (2.10) is equivalent to Coolidge's sine law as a consequence of (2.5).
In 1989 Wu-Yi Hsiang [10] gave a new derivation valid simultaneously for the trigonometry of the two-point homogeneous rank-one spaces of real, complex, quaternionic and Cayley octonionic type, both elliptic and hyperbolic. At each vertex, say C, Hsiang uses the three invariants C, Λ C , Θ C linked by the relation (2.6) (recall Θ C = π/2 − Ψ C ). In the elliptic/hyperbolic case he obtained some equations which, when translated to C, Λ C , Ψ C , are given below in (2.14), (2.15), as well as a rather complicated form of 'cosine theorem', not reproduced here and which should not be considered as a 'basic' equation.
In 1990 Brehm [11] gave a fresh approach to the trigonometry of both the elliptic CP N and hyperbolic CH N hermitian spaces. In terms of three angular invariants Λ C , C, Ψ C , only two of which are independent, Brehm derived the following equations. The first two are Coolidge's sine law and the Hsiang form of the double sine law. The third one turns out to be simply the Shirokov-Rosenfeld double cosine theorem for sides expressed in terms of Brehm's angular variables. The need for four quantities to determine a triangle up to isometry in CP N or CH N was stressed by Brehm, who introduced the shape invariant σ of the triangle, defined in the elliptic case as σ := Re(⟨z A , z B ⟩⟨z B , z C ⟩⟨z C , z A ⟩) = cos a cos b cos c cos Ω, and as σ = − cosh a cosh b cosh c cos Ω in the hyperbolic case; Brehm showed a triangle is completely determined up to isometry by a, b, c, σ (recall in the elliptic case Brehm assumes a, b, c < π/2), and gave inequalities that must be fulfilled in order for the triangle to exist, as well as a careful discussion of congruence theorems.
In 1994 Hangan and Masala [29] gave an interpretation of Ω in the complex projective space CP 2 as equal to twice the symplectic area enclosed by the triangle. The symplectic area comes from the Kähler structure of CP 2 , and is well defined by the triangle 'skeleton' itself, because the closed nature of the Kähler form makes the symplectic area of any surface with given boundary depend only on the boundary.
The existence of two distinguished, non-generic types of triangles is clear. In CP 2 , when Υ A = 0, then Υ B = Υ C = 0 follows and the trigonometry equations reduce to those of a spherical triangle on a sphere of curvature K = 4; this is seen in (2.11) and (2.13) and corresponds to a triangle completely contained in a complex line CP 1 . When Υ A = π/2, then Υ B = Υ C = π/2 follows, and the equations reduce (locally) to those of a spherical triangle in curvature K = 1; this can be seen in (2.10) and (2.12) and corresponds to a triangle completely contained in a real projective subplane RP 2 , whose trigonometry comes from the spherical one after antipodal identification. This case corresponds to real values for exp(iΩ) and exp(iω), as implied by the Blaschke-Terheggen equations (2.8) and (2.9).
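The interpolation between these two extremal curvatures can be checked numerically. The sketch below assumes the standard curvature tensor of a Kähler manifold of constant holomorphic sectional curvature 4 (the FS metric of CP 2 ), which is not reproduced in the text, and verifies that the sectional curvature of a random real 2-flat equals 1 + 3 cos² Υ in terms of its holomorphy inclination Υ.

```python
import numpy as np

rng = np.random.default_rng(3)
g = lambda x, y: np.vdot(x, y).real     # FS metric = real part of hermitian product
J = lambda x: 1j * x                    # complex structure

# random FS-orthonormal pair X, Y spanning a real 2-flat in T(CP^2) ~ C^2
X = rng.normal(size=2) + 1j * rng.normal(size=2)
X /= np.sqrt(g(X, X))
Y = rng.normal(size=2) + 1j * rng.normal(size=2)
Y -= g(X, Y) * X
Y /= np.sqrt(g(Y, Y))

# curvature tensor of constant holomorphic curvature 4 (standard formula, assumed):
# R(X,Y)Z = g(Y,Z)X - g(X,Z)Y + g(JY,Z)JX - g(JX,Z)JY - 2 g(JX,Y) JZ
def R(x, y, z):
    return (g(y, z) * x - g(x, z) * y + g(J(y), z) * J(x)
            - g(J(x), z) * J(y) - 2 * g(J(x), y) * J(z))

K = g(R(X, Y, Y), X)            # sectional curvature of span{X, Y}
cosUps = abs(g(J(X), Y))        # cosine of the holomorphy inclination

assert np.isclose(K, 1 + 3 * cosUps**2)
assert 1.0 - 1e-9 <= K <= 4.0 + 1e-9    # K ranges between the extremal values 1 and 4
```

For CH 2 the same computation with an overall sign change gives curvatures between −4 and −1.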
In these two special cases the triangle is contained in a totally geodesic submanifold, whose sectional curvature attains the extremal values 4 and 1, respectively. In all other situations the triangle is not contained in a totally geodesic submanifold. The sectional curvature of either CP 2 or CH 2 along any real 2-direction depends only on its holomorphy inclination Υ and is:

K(Υ) = ±(1 + 3 cos² Υ),   (2.16)

with the + sign for CP 2 and the − sign for CH 2 .

3 The family of nine complex Hermitian Cayley-Klein 2D geometries and their spaces
The generators B, I span a two-dimensional Cartan subalgebra (regardless of the values of κ i ); further references to the Cartan subalgebra will refer to this fiducial subalgebra. Let us introduce four new Cartan generators, whose representing matrices in the vector fundamental representation are given in (3.3). The Lie commutators of all these generators are given in (4.1) for η = 1. The CK algebras su κ 1 ,κ 2 (3) can be endowed with a Z 2 ⊗ Z 2 group of commuting involutive automorphisms generated by Π (1) and Π (2) . The two remaining involutions are the composition Π (02) = Π (1) · Π (2) and the identity.
All these generators can be represented in a pictorial way in a block-triangular diagram, where each 'block' involves the four generators of the u κ (2) subalgebras listed above; the generator in the u(1) subalgebra inside the center of each unitary subalgebra u κ (2) appears at the lower-left corner of each block. The global block pattern reproduces the pattern P 1 P 2 J made by the three generators P 1 , P 2 , J of a real-type CK algebra so κ 1 ,κ 2 (3). This diagram will be extremely helpful for visualizing most properties discussed below.
There is a single quadratic Lie algebra Casimir in su κ 1 ,κ 2 (3). A good way of writing it, with each group of terms corresponding to one of the three su κ (2)-like subalgebras, is given in (3.6). The elements defining a 2D CK complex hermitian geometry are analogous to the ones in the real case [1,30]. By a two-dimensional complex CK geometry we will understand the set of three symmetric homogeneous spaces of points and lines of first and second kind.
• The plane as the set of points corresponds to the symmetric homogeneous space CS 2 [κ 1 ],κ 2 = SU κ 1 ,κ 2 (3)/(U (1) ⊗ SU κ 2 (2)), whose dimension (over C) is 2. The generators I and J, M, B leave a point O (the origin) invariant, and so generate a direct product U (1) ⊗ SU κ 2 (2) of 'rotations' about O. The involution Π (1) is the reflection around O, and P 1 , Q 1 (resp. P 2 , Q 2 ) move O and generate translations along the (complex) basic direction l 1 (resp. l 2 ). • The set of first-kind complex lines is identified with the symmetric homogeneous space CS 2 κ 1 ,[κ 2 ] , also of dimension 2 over C. The generators T 1 and P 1 , Q 1 , H 1 should be interpreted in CS 2 κ 1 ,[κ 2 ] as the generators of 'rotations' about the 'origin' line l 1 , which is left invariant by them. The point O is moved along two (complex) basic directions by J, M and P 2 , Q 2 . The reflection in l 1 is Π (2) . Complex lines obtained by group motions from the basic fiducial line l 1 will be called first-kind lines.
• There is another set of complex lines, corresponding to the complex-2D symmetric homogeneous space SU κ 1 ,κ 2 (3)/H (02) . In this space T 2 and P 2 , Q 2 , H 2 leave invariant an 'origin' line l 2 , while J, M and P 1 , Q 1 move it. The reflection in l 2 is Π (02) . These lines will be called second-kind. They are actually different from first-kind ones only when κ 2 ≤ 0, since when κ 2 > 0, P 1 , Q 1 and P 2 , Q 2 are conjugate within su κ 1 ,κ 2 (3).
Consideration of the spaces of first- and second-kind lines can be bypassed, since lines can be seen not as 'points' in the spaces CS 2 κ 1 ,[κ 2 ] or SU κ 1 ,κ 2 (3)/H (02) , but alternatively as 1D complex submanifolds of CS 2 [κ 1 ],κ 2 , and all properties of the two spaces of lines can be transcribed in terms of this space, in which l 1 and l 2 should be considered as two hermitian orthogonal complex lines intersecting at O (see figure 4.1 for η = 1). This space CS 2 [κ 1 ],κ 2 has a complex hermitian metric with an associated real 'Fubini-Study' metric ('FS') given by the real part of the hermitian product. This 'FS' metric can also be derived directly from the Casimir (3.6): at the origin O the hermitian product is given by the matrix diag(1, κ 2 ), and the 'FS' metric by diag(1, 1, κ 2 , κ 2 ) (basis ordering P 1 , Q 1 , P 2 , Q 2 ); at other points they are uniquely determined by invariance. This 'FS' metric is positive definite when κ 2 > 0, degenerate for κ 2 = 0 and indefinite of real type (2, 2) for κ 2 < 0; when κ 2 = 1 it is the ordinary FS metric (elliptic or hyperbolic) with holomorphic curvature 4κ 1 . The line l 1 (resp. l 2 ) contains two 'FS'-orthogonal geodesics through O, the orbits of O under the one-parameter subgroups generated by P 1 and Q 1 (resp. P 2 and Q 2 ).
Thus κ 1 is (one fourth of) the constant holomorphic curvature, and κ 2 determines the signature of both the hermitian metric and the 'FS' metric, hereafter denoted FS for any space in the CKD family. The canonical connection of CS 2 [κ 1 ],κ 2 as a homogeneous symmetric space [27] is compatible with the FS metric. A suitable rescaling of the generators allows one to reduce κ 1 (resp. κ 2 ) to ±1 when non-zero. Thus nine 2D-complex hermitian CK geometries are obtained; their motion groups and isotropy subgroups are displayed in table 1.
A fundamental property of the whole scheme of CK geometries is the existence of an 'automorphism' of each family, called ordinary duality D. It is well defined for any dimension, and for the 2D case it is given by the family automorphism (3.10). Duality D leaves the general commutation rules (4.1) invariant while it interchanges the corresponding constants κ 1 ↔ κ 2 and the space of points with the space of first-kind lines, preserving the space of second-kind lines. It relates in general two different geometries placed in symmetrical positions relative to the main diagonal in table 1, just like in the real case. Duality also underlies the introduction of the Cartan generators (3.2): B, I form a natural basis for the fiducial Cartan subalgebra (B is the unique Cartan generator in the SU κ 2 (2) part, and I in the U (1) part, of the isotropy subalgebra of a point in CS 2 [κ 1 ],κ 2 ), H 1 , T 1 appear as their duals, and H 2 , T 2 have the simplest behaviour under duality. In terms of the block-triangular arrangement (3.5), duality corresponds to a 'block reflection' along the secondary diagonal and possibly a sign change. For Cartan generators duality can be depicted as in figure 1, related to the su(3) root diagram. More details on the geometric interpretation of the Cartan subalgebra generators B, I, T 1 , T 2 , H 1 , H 2 will be given later.

Table 1: The nine two-dimensional complex hermitian CK geometries. At each entry the group G and the three subgroups H (1) , H (2) , H (02) are displayed. The geometries include the Hermitian Galilean, Hermitian Co-Minkowskian, Hermitian Oscillating NH, Hermitian Expanding NH, Hermitian Minkowskian, Hermitian Doubly Hyperbolic, Hermitian Anti-de Sitter and Hermitian De Sitter ones.
Realization of the spaces of points in the complex Hermitian Cayley-Klein spaces
Exponentiation of the matrix representation (3.1) and (3.3) of su κ 1 ,κ 2 (3) produces a representation of SU κ 1 ,κ 2 (3) as a group of linear transformations in an ambient linear space C 3 = (z 0 , z 1 , z 2 ). The one-parameter subgroups generated by P 1 , P 2 , Q 1 , Q 2 , J and M are given in (3.11), where the cosine C κ (x) and sine S κ (x) functions with label κ are defined by:

C κ (x) := cos(√κ x) for κ > 0, cosh(√−κ x) for κ < 0, and 1 for κ = 0;
S κ (x) := sin(√κ x)/√κ for κ > 0, sinh(√−κ x)/√−κ for κ < 0, and x for κ = 0.

These functions coincide with the circular and hyperbolic trigonometric ones for κ = 1 and κ = −1; the case κ = 0 provides the so-called 'parabolic' or galilean functions C 0 (x) = 1, S 0 (x) = x. General properties of these functions are given in the Appendix of [1].
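The label functions and the one-parameter subgroup generated by P 1 can be realized concretely. The sketch below assumes the standard CK form of the matrix exp(x P 1 ) (the explicit matrices (3.1), (3.11) are not reproduced in the text), and checks the basic identity C κ ² + κ S κ ² = 1 together with the invariance of the hermitian form |z 0 |² + κ 1 |z 1 |² + κ 1 κ 2 |z 2 |².

```python
import numpy as np

def Ck(k, x):
    # label-k cosine: circular (k>0), parabolic (k=0), hyperbolic (k<0)
    if k > 0: return np.cos(np.sqrt(k) * x)
    if k < 0: return np.cosh(np.sqrt(-k) * x)
    return 1.0

def Sk(k, x):
    # label-k sine
    if k > 0: return np.sin(np.sqrt(k) * x) / np.sqrt(k)
    if k < 0: return np.sinh(np.sqrt(-k) * x) / np.sqrt(-k)
    return x

for k in (1.0, 0.0, -1.0):
    assert np.isclose(Ck(k, 0.7)**2 + k * Sk(k, 0.7)**2, 1.0)   # basic identity

# one-parameter subgroup exp(x P1) in the vector representation
# (standard CK form, assumed here for illustration)
def expP1(k1, x):
    return np.array([[Ck(k1, x), -k1 * Sk(k1, x), 0],
                     [Sk(k1, x),  Ck(k1, x),      0],
                     [0,          0,              1]], dtype=complex)

def herm_form(k1, k2, z):
    return abs(z[0])**2 + k1 * abs(z[1])**2 + k1 * k2 * abs(z[2])**2

rng = np.random.default_rng(4)
z = rng.normal(size=3) + 1j * rng.normal(size=3)
for k1, k2 in [(1, 1), (1, -1), (-1, 1), (0, 1), (-1, -1)]:
    w = expP1(k1, 0.3) @ z
    assert np.isclose(herm_form(k1, k2, w), herm_form(k1, k2, z))
```

Setting κ 1 = 0 in the same code shows the contracted (galilean) subgroup acting by shears, with the form degenerating to |z 0 |².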
The exponentials of the Cartan subalgebra generators B, I, T 1 , T 2 , H 1 , H 2 are given in (3.13); a generic group element can be written as a product of matrices (3.11) and two commuting 'Cartan' transformations in (3.13). The action of SU κ 1 ,κ 2 (3) on C 3 is linear but not transitive, since it conserves the hermitian form |z 0 | 2 + κ 1 |z 1 | 2 + κ 1 κ 2 |z 2 | 2 . The isotropy subgroup of O = (1, 0, 0) is the three-parameter subgroup SU κ 2 (2) generated by J, M, B, while the U (1) subgroup generated by I multiplies O by a phase factor. Hence the homogeneous symmetric space CS 2 [κ 1 ],κ 2 can be identified with the orbit of O. This orbit is the domain of CP 2 determined by |z 0 | 2 + κ 1 |z 1 | 2 + κ 1 κ 2 |z 2 | 2 > 0, and when κ 1 > 0, κ 2 > 0 it is the full complex projective space CP 2 . The coordinates (z 0 , z 1 , z 2 ) can be called Weierstrass coordinates; they are linked by |z 0 | 2 + κ 1 |z 1 | 2 + κ 1 κ 2 |z 2 | 2 = 1 and are still defined up to a common unimodular complex factor, which can be used to make z 0 real and non-negative; these are the natural coordinates in the vector models of the Hermitian CK spaces, since the motion groups act linearly on them. Also in analogy with the real case, (z 1 /z 0 , z 2 /z 0 ) are called Beltrami coordinates; the groups act on these coordinates by complex fractional linear transformations.
The non-generic situations where κ 1 or κ 2 vanishes correspond to Inönü-Wigner contractions [31]. The limit κ 1 → 0 is a local contraction (around a point); it carries the first and third columns of table 1 to the flat middle one. The limit κ 2 → 0 is an axial contraction (around a line), carrying the geometries of the first and third rows to the middle one.
The complete family of 'complex Hermitian' Cayley-Klein-Dickson 2D geometries and their spaces
The previous section has been written so it can be re-read with minimal mutatis mutandis changes to suit the description of the full family of 'complex type' spaces. The new fact is the explicit appearance of a Cayley-Dickson (CD) constant η, the 'complex' unit i now satisfying i 2 = −η. There are three different cases: η > 0 can be rescaled to η = 1 and gives the division algebra of ordinary complex numbers +1 R ≡ C; η = 0 gives the dual or Study numbers 0 R ≡ C 0 ; and η < 0, which can be rescaled to η = −1, gives the split complex numbers −1 R ≡ C −1 , also called double numbers, hyperbolic complex numbers, Lorentz numbers or perplex numbers. These are three instances of a one-parameter system η R ≡ C η , the double notation stressing either the 'complex' nature of the numerical system here obtained (C η ) or its character as a CD extension of R ( η R).
The 'complex Hermitian' Cayley-Klein-Dickson 2D geometries
The groups behind these geometries are the linear isometry groups of a 'complex hermitian' form, with the same Λ as in the complex case. For N = 2 the CKD algebra, denoted η su κ 1 ,κ 2 (3), is eight-dimensional, and its fundamental or vectorial 3D 'complex' representation is given by 3 × 3 matrices (3.1, 3.3), where now the entries are in C η and i stands for the pure imaginary 'complex' unit in C η . This form has 'hermitian' symmetry ⟨w, z⟩ = conj⟨z, w⟩, with 'complex' conjugation in C η : z = a + ib → z̄ = a − ib, for real a, b; the form (z, w) → Im⟨z, w⟩ is still real and antisymmetric in z, w, and therefore is a symplectic form in the real space R 2(N +1) underlying η R N +1 . Rosenfeld [13] uses the word hermitian without qualification, but to prevent misunderstandings we will keep the term hermitian for the truly complex case, and we will put quotes on 'complex' and 'hermitian' when referring to the general 'complex' numbers C η with Cayley-Dickson label η. A simple scale change may simultaneously reduce η and κ 1 , κ 2 to either 1, 0 or −1.
For any value of η the CKD algebras in the family η su κ 1 ,κ 2 (3) can be endowed with a Z 2 ⊗ Z 2 group of commuting involutive automorphisms generated by Π (1) , Π (2) (3.4); denoting everything as in the former section, the three Lie subalgebras h (1) , h (2) and h (02) spanned by the generators with the same names as in the complex case (the Lie algebra elements invariant under the involutions with the same indices) turn out to be of CKD type, η u(1) ⊕ η su κ (2) with κ = κ 2 , κ 1 , κ 1 κ 2 respectively, and we denote H (1) , H (2) , H (02) the groups they generate. In all these expressions, the Lie algebras of CKD 'unitary type' are η u κ (2) ≡ η u(1) ⊕ η su κ (2), and for the groups of 'unimodular complex numbers' η U (1) there are two generic cases. The Lie algebra η su κ 1 ,κ 2 (3) is given by the Lie commutators (4.1). The elements defining a 2D CK 'complex hermitian' geometry can now be described: • The plane as the set of points corresponds to the symmetric homogeneous space (4.2). • The set of (first-kind) 'complex' lines is identified with the symmetric homogeneous space (4.3). Both spaces have again dimension 2 over C η , and all comments made in the complex case can be easily rephrased. The definition of the space of second-kind 'complex' lines can also be suitably adapted.

Figure 2: Generators and their associated labels in a 'complex hermitian'-2D CKD geometry. Lines l 1 and l 2 are 'complex', thus two-dimensional from a real point of view.

By a two-dimensional 'complex hermitian' CKD geometry we will mean the set of three symmetric homogeneous spaces of points, lines of first kind and lines of second kind. The group η SU κ 1 ,κ 2 (3) acts transitively on each of these spaces. The fundamental ordinary duality D (3.10) extends, by simply assuming D : η → η, to an 'automorphism' of the complete CKD family and leaves the general commutation rules (4.1) invariant. In general D relates two different 'complex hermitian' CKD geometries with the same label η but with κ 1 , κ 2 interchanged.
Figure 2 displays the generators (with their labels) as related to the three fiducial elements O, l 1 , l 2 .
The quadratic Lie algebra Casimir in η su κ 1 ,κ 2 (3) can be written by grouping the terms which correspond to the three η su κ (2)-like subalgebras. From this Casimir we can easily derive the invariant FS metric in the space C η S 2 [κ 1 ],κ 2 , which is given, at the origin and in the basis P 1 , Q 1 , P 2 , Q 2 , by the matrix diag(1, η, κ 2 , ηκ 2 ), coming also as the real part of the hermitian product, whose matrix at O is diag(1, κ 2 ).
Realization of spaces of points in the 'complex hermitian' Cayley-Klein-Dickson spaces
When η is present, the fundamental 3D 'complex' matrix representation (3.1, 3.3) exponentiates to a representation of η SU κ 1 ,κ 2 (3) as a group of linear transformations in the ambient linear space C 3 η . One-parameter subgroups corresponding to the generators so far considered are given again by (3.11), (3.13), where now the exponential e ix is related to the sine and cosine with label η by an Euler-like formula:

e ix = C η (x) + i S η (x).

Again the action of η SU κ 1 ,κ 2 (3) on C 3 η is linear but not transitive, since it conserves the 'hermitian' form |z 0 | 2 + κ 1 |z 1 | 2 + κ 1 κ 2 |z 2 | 2 . The isotropy subgroup of the point O whose position vector is O ≡ (1, 0, 0) is easily seen to be the three-parameter subgroup η SU κ 2 (2) generated by J, M, B, while the η U (1) subgroup generated by I multiplies this vector by a unimodular 'complex' phase factor. Hence the homogeneous symmetric space of points can be identified with the orbit of O. The geometry behind the case η < 0 differs greatly from the one in the ordinary complex case: for η < 0, κ 2 ≠ 0 the FS metric is always indefinite and of (2, 2) real type, regardless of the sign of κ 2 . The four spaces of points with η < 0, κ 1 ≠ 0, κ 2 ≠ 0 are essentially the same, though the choices of lines and FS geodesics of first and second kind are interchanged; this is analogous to what happens in the real case for the 1+1 Anti-De Sitter and De Sitter spaces. These four spaces can be realized as spaces of 0-pairs in RP 2 (pairs made from a point and a hyperplane (here a line) in the real projective plane RP 2 ), and the distance between two 0-pairs (X; α), (Y ; β) is related to the cross ratio of the four points X, Y ; Z, T , where Z, T are the intersections of the line determined by X, Y with the hyperplanes α, β (see Rosenfeld's book [13], theorems 2.39 and 4.21).
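The Euler-like formula can be made concrete by representing the 'complex' unit of C η as a real 2 × 2 matrix with i² = −η; this matrix realization is an assumption introduced here purely for illustration, not a construction from the text.

```python
import numpy as np

def C_eta(eta, x):
    # label-eta cosine: circular, parabolic, hyperbolic
    if eta > 0: return np.cos(np.sqrt(eta) * x)
    if eta < 0: return np.cosh(np.sqrt(-eta) * x)
    return 1.0

def S_eta(eta, x):
    # label-eta sine
    if eta > 0: return np.sin(np.sqrt(eta) * x) / np.sqrt(eta)
    if eta < 0: return np.sinh(np.sqrt(-eta) * x) / np.sqrt(-eta)
    return x

def expm(M, terms=30):
    # matrix exponential by truncated power series (enough for these small matrices)
    out, term = np.eye(2), np.eye(2)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

x = 0.6
for eta in (1.0, 0.0, -1.0):
    i_mat = np.array([[0.0, -eta], [1.0, 0.0]])   # 'complex' unit of C_eta
    assert np.allclose(i_mat @ i_mat, -eta * np.eye(2))   # i^2 = -eta
    # Euler-like formula: e^{ix} = C_eta(x) + i S_eta(x)
    lhs = expm(x * i_mat)
    rhs = C_eta(eta, x) * np.eye(2) + S_eta(eta, x) * i_mat
    assert np.allclose(lhs, rhs)
```

For η = 1 this is the ordinary Euler formula (rotation matrices); for η = 0 the 'complex' unit is nilpotent and e^{ix} = 1 + ix (shears); for η = −1 one obtains the hyperbolic version with cosh and sinh (boosts).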
The non-generic situations where one of the coefficients η; κ 1 , κ 2 vanishes correspond to Inönü-Wigner contractions [31]. The limit κ 1 → 0 is a local contraction (around a point). The limit κ 2 → 0 is a line contraction (around a whole 'complex' line). Finally, the limit η → 0 corresponds to a new kind of contraction around a purely real submanifold, the projectivized real κ 1 , κ 2 CK space. Contractions are built into the expressions associated to the 'complex hermitian' CK geometries and groups, simply by setting to zero any of the constants η; κ 1 , κ 2 (which determine the curvature and signature of the space).
The compatibility conditions for a triangular loop
In this section we discuss the approach to the trigonometry of the twenty-seven 'complex hermitian type' CKD spaces, and we introduce the 'complex hermitian' compatibility equations, loop equations and the basic trigonometric identity. Some general comments on this approach are given in [1] and will not be repeated here; especially we refer to the choice of 'external angles' at the vertex A and the fact that the standard angular excess appears without the explicit presence of the measure of twice a quadrant of angle (which equals π when κ 2 = 1).
A triangle in a 'complex hermitian' CKD space can be seen either as a triangle point loop or dually as a triangle line loop (see figure 3). In the first case a point C is considered to move to a different point B by 'translating' either along the geodesic segment CB, or along the two geodesic segments CA and AB. Dually, the geodesic c ≡ AB is considered to move to a different geodesic b ≡ CA by 'rotating' either about the vertex A ≡ bc, or about the vertices B ≡ ca and then C ≡ ab. There exists, though, a very important difference with the real 2D case: the 'translations' along a geodesic are not uniquely defined by the geodesic alone. Thus, to make sense of the idea of a triangle loop, a closer analysis of the geometry is required. Any geodesic g through C determines a well-defined 'complex' line C η g containing g. Thus for the two geodesics a, b intersecting at the vertex C there are two uniquely determined 'complex' lines C η a, C η b through C. These two 'complex' lines will lie on a (generically) well-defined line-geodesic G C (also called a line-chain). This is dual to the determination of a (generically) well-defined geodesic g a through two different points C, B. This subtlety does not arise in the real case, as there the set of all lines through a point C is one-dimensional, while in the complex case the set of 'complex' lines through C is two-dimensional. To start the study of trigonometry we will take as 'sides' and 'angles' the canonical parameters of certain one-parameter subgroup elements associated to some algebra generators.
To explain this choice, we first select a (real) flag O ⊂ g 1 ⊂ l 1 ⊂ G 1 as follows: O is the origin point O = [(1, 0, 0)], g 1 is the orbit of O under the one-dimensional subgroup generated by P 1 (thus the 'complex' line l 1 is the orbit of O under the subgroup generated by P 1 , Q 1 ), and the line-geodesic G 1 is the orbit of l 1 under the subgroup generated by J; this flag is determined by singling out the generators P 1 , Q 1 , J. Now move the triangle to the canonical position where C coincides with O, the side a is on g 1 and the side b lies on a geodesic in G 1 . This 'canonical' position guarantees that the side b is obtained from a by means of two commuting rotations generated by J and I, where the phase rotation generator I is the unique generator in the fiducial Cartan subalgebra commuting with J. As angular invariants we take the canonical parameters C ('Hermitian' or 'pure' angle between 'complex' lines) and Φ C (angular phase between real-1D geodesics within a 'complex' line) of the two 'rotations' whose product carries the side a to coincide with b; these products will be called complete rotations about the vertex C.
Since 'hermitian' CKD spaces are rank-one, and therefore each pair of points has a single invariant, it would seem enough to consider the FS distance a between the points C, B as the unique modulus of a side. This is what was done in the previous works on hermitian trigonometry [9,10,11,13]. But the formal duality requirement prompts the consideration of translation partners to both J; I, and since duality maps J; I into −P 1 ; −T 1 , this suggests the use of the complete translations e aP 1 e φ a T 1 . Besides the duality requirement, there are geometrical reasons for the use of the extra 'translation' e φ a T 1 : the 'pure' translation e aP 1 carries the vertex C to B, but this alone does not carry the unique 'complex line'-geodesic at the vertex C determined by C η a, C η b to the 'complex' line-geodesic at B determined by C η b, C η c; an additional e φ a T 1 is required to bring them into each other.
If we now consider complete rotations at each vertex, and complete translations along each side, duality is manifestly restored: at each vertex a complete rotation is required to bring into coincidence simultaneously the sides seen both as 'complex' lines and as point-geodesic sides. And along each side, a complete translation is required to bring into coincidence simultaneously both the vertices as points and the 'complex' line-geodesics determined at each vertex by the two sides.
This choice of two commuting generators is very natural from a Quantum Mechanics viewpoint and affords six 'vertex' quantities (three Hermitian or pure angles A, B, C and three angular phases Φ A , Φ B , Φ C ), and six 'side' quantities (three lengths a, b, c and three lateral phases φ a , φ b , φ c ). All these invariants appear as canonical parameters of pairs of commuting isometries, respectively generated by J A , J B , J C ; I A , I B , I C and P a , P b , P c ; T a , T b , T c . At each side P a , P b , P c are pure translation generators that perform the canonical parallel transport along their FS geodesic axes, and T a , T b , T c are the only Cartan generators in the isotropy subalgebras of the sides a, b, c commuting with P a , P b , P c respectively.
Cartan generators exponentiate to somewhat 'hybrid' transformations. The Cartan subalgebra is contained in the isotropy subalgebra of O, so its elements generate 'rotations' about O, and their conjugates generate rotations about other points. The phase 'rotation' part e φ x T 1 of the complete fiducial translation e xP 1 e φ x T 1 apparently breaks the scheme's symmetry between rotations and translations. Nevertheless, since any Cartan transformation such as e φT 1 leaves the 'complex' line l 1 pointwise invariant, it should also be considered a 'translation' along l 1 . Thus Cartan transformations are both rotations about a point and translations along a 'complex' line. This could have been read from the diagram (3.5), as the whole Cartan subalgebra is contained in each of the three blocks, the isotropy subalgebras of l 1 , l 2 , O respectively.

Figure 4: A 'Complex Hermitian' triangular loop as a single curve.
From now on everything follows the real pattern [1], and the commutativity between both components of a complete transformation allows the extension of the basic real identities to 'complex' ones: compatibility identities, point loop and side loop equations, and the basic trigonometric identity.
The generators P a , T a , P b , T b , P c , T c ; J A , I A , J B , I B , J C , I C are not independent. They are related by several compatibility conditions (5.3), which can be considered as an implicit group-theoretical definition of the three sides, the three angles, the three lateral phases and the three angular phases.
All the trigonometry of the 'complex' CKD space is completely contained in these equations, which have as a remarkable property their explicit invariance under the duality interchange a, b, c ↔ A, B, C and φ a , φ b , φ c ↔ Φ A , Φ B , Φ C for triangle group-theoretical invariants (sides ↔ angles, lateral phases ↔ angular phases) and P ↔ J, T ↔ I for generators; this duality is a consequence of the fact that D (3.10) is an automorphism of the family of CKD algebras which interchanges P 1 ↔ −J and T 1 ↔ −I. These equations resemble their real analogues: the real rotation e AJ A or translation e aPa is replaced by the 'complete' product e AJ A e Φ A I A or e aPa e φaTa .
Each equation in (5.3) is actually a pair relating both components of each 'complete' translation or rotation. As in [1] we will refer to them as P b (P a ), T b (T a ), etc. (or P a (P b ), T a (T b ) when the equation is read inversely). By cyclic substitution in the three pairs of equations P a (P c ), T a (T c ); P c (P b ), T c (T b ) and P b (P a ), T b (T a ) we find that e BJ B e Φ B I B e −AJ A e −Φ A I A e CJ C e Φ C I C must commute with P a and T a , and that e −aPa e −φaTa e cPc e φcTc e bP b e φ b T b must commute with J C and I C (5.6).
Loop excesses and loop equations
Had we not considered lateral phases, the 'loop' product e −aPa e cPc e bP b would have been a natural object associated with the three 'pure' translations along the triangle sides. This transformation is the ordinary holonomy associated with the triangle, as each factor e aPa is the ordinary parallel transport operator in the canonical connection of the hermitian space C η S 2 [κ 1 ],κ 2 . It moves the base point C along the triangle and returns it to its original position, so it must be a rotation about C. This rotation is a product of an η U (1) phase part and an η SU κ 2 (2) part, but the explicit expression for the η SU κ 2 (2) part is rather involved.
The use of angular and lateral phases in this self-dual approach affords some simple and apparently new results for certain similar 'loop' operators. The guideline is the pattern established in the real case, replacing each translation or rotation generator P / J by its 'complete' version P, T / J, I. We start with the equation which gives (5.7). We introduce J B (J C ), I B (I C ) and trivially simplify and rearrange. Now we use J A (J C ), I A (I C ), simplify, and finally substitute P b (P a ), T b (T a ). This gives (5.8). The three complete translations along the triangle appear in the former relation in a single piece e −aPa e −φaTa e cPc e φcTc e bP b e φ b T b , while the three complete rotations are all about the base point C. Now we can go a bit further: (5.6) implies that e −aPa e −φaTa e cPc e φcTc e bP b e φ b T b must commute with J C and with I C , so it will commute with any complete rotation e −Φ X I C e −XJ C about C, for any values of the complete angle X, Φ X . Then we can commute the whole complete translation piece e −aPa e −φaTa e cPc e φcTc e bP b e φ b T b in (5.8) with the rotations about C and collect these altogether; as both components of the complete rotation do commute, we get:

e −aPa e −φaTa e cPc e φcTc e bP b e φ b T b e (−A+B+C)J C e (−Φ A +Φ B +Φ C )I C must commute with P a and T a . (5.9)

We had already derived that e −aPa e −φaTa e cPc e φcTc e bP b e φ b T b e XJ C e Φ X I C must commute with J C and I C for any 'complete angle' (X, Φ X ) (see (5.6)). Since this expression also commutes with P a , T a for the special values X = −A + B + C, Φ X = −Φ A + Φ B + Φ C , we can conclude:

e −aPa e −φaTa e cPc e φcTc e bP b e φ b T b e (−A+B+C)J C e (−Φ A +Φ B +Φ C )I C = 1, (5.10)

because the identity is the only element of η SU κ 1 ,κ 2 (3) commuting with two such pairs of generators as P a , T a and J C , I C .
This equation can also be written as: A similar procedure (or the direct use of (5.3) in (5.11)) allows us to derive two analogous equations: The quantities defined as The explicit duality of the starting equations (5.3) under the interchanges a, b, c ↔ A, B, C and φ a , φ b , φ c ↔ Φ A , Φ B , Φ C for sides, angles and phases, and P ↔ J, T ↔ I for generators, immediately implies that the dual process leads to the dual partners of (5.11) and (5.12): so that the two quantities play the role of lateral excess and lateral phase excess of the triangle loop. The lateral orientations, 'phased' along (5.14), give the product of the three oriented complete rotations about the three vertices of a triangle as a complete translation along the base line of the loop.
The basic trigonometric identity
Each one of the equations (5.11), (5.12) or (5.14) contains all the relationships between triangle sides, lateral phases, angles and angular phases in any CKD 'complex hermitian' space. However, all twelve elements appear in these equations not only explicitly as canonical parameters, but also implicitly inside the complete translation and rotation generators. This prompts the search for another relation, equivalent to the previous ones but more suitable for displaying the trigonometric equations; this new equation is indeed the bridge between the former equations and the trigonometry of the space.
The idea is to express all the generators as suitable conjugates of one pair of a translation and a phase translation generator and one pair of a rotation generator and a phase rotation generator, which we will take as primitive independent generators. A natural choice is to take the two pairs P a , T a and J C , I C as 'basic' independent generators. Next, by using (5.3), we define the remaining triangle pairs of generators P b , T b ; J A , I A ; P c , T c ; J B , I B in terms of the previous ones and of sides and angles, lateral phases and angular phases as: which after full expansion and simplification gives: (5.17) (Note the highly ordered pattern in these expressions.) By direct substitution in equation (5.11), and after obvious cancellations which, due to the commutativity of each member in the pairs of complete transformations, fully mimic the pattern found in the real case, we find: The same process starting from any equation in (5.12) or (5.14) leads again to the same equation. This justifies calling (5.18) the basic trigonometric equation. We sum up in: Proof: A group motion can be used to move the triangle to a canonical position described before (5.1) for the flag O ⊂ g ⊂ l ⊂ l g . Then the theorem statement is simply (5.18).
Theorem 2. Let us consider a triangle loop in the 'complex hermitian' CKD space C η S 2 [κ 1 ],κ 2 , and let P a , P b , P c ; T a , T b , T c be the generators of translations and phase translations along the three triangle geodesic sides, whose lengths and lateral phases are a, b, c and φ a , φ b , φ c . Let J A , J B , J C ; I A , I B , I C be the generators of rotations and phase rotations about the three vertices of the triangle, whose angles and angular phases are A, B, C and Φ A , Φ B , Φ C . These quantities are related by two sets of identities, (5.11), (5.12) and (5.14), called the 'complex hermitian' point loop and the 'complex hermitian' line loop triangle equations, each equation being equivalent to the identity in Theorem 1.
The basic equations of trigonometry for any 'complex hermitian' two-dimensional Cayley-Klein-Dickson space
To obtain the trigonometric equations for the 'complex hermitian' CKD space we start with the basic trigonometric identity (5.19), for the triangle in its canonical position, so P a , T a and J C , I C can be taken exactly as P 1 , T 1 and J, I. For notational clarity we will omit even the subindex and will denote P 1 , T 1 here simply as P, T . We first write (5.19) in the equivalent form A c B = b C a: $e^{-AJ}e^{-\Phi_A I}e^{cP}e^{\phi_c T}e^{BJ}e^{\Phi_B I} = e^{-bP}e^{-\phi_b T}e^{-CJ}e^{-\Phi_C I}e^{aP}e^{\phi_a T}$ (6.1) By considering this identity in the fundamental 3D vector representation of the motion group (3.11) and (3.13), we obtain an equality between 3 × 3 complex matrices, giving rise to nine 'complex' identities. Each equation in this set either is self-dual or appears in a mutually dual pair; this could have been expected due to the self-duality of the starting equation. We should recall again that all along this section i denotes the imaginary unit of the Cayley-Dickson 'complex' numbers C η , so that i 2 = −η; the labelled sine and cosine with label η are related to the exponential e ix by (4.6).
The association between sides (resp. angles) and the labels κ 1 (resp. κ 2 ) found in the equations of the real space S 2 [κ 1 ],κ 2 = SO κ 1 ,κ 2 (3)/SO κ 2 (2) extends to the Hermitian complex analogues, so the lengths a, b, c are associated to κ 1 and the angles A, B, C to κ 2 . The lateral phases φ a , φ b , φ c and angular phases Φ A , Φ B , Φ C have η as their label. The elements (−a, −φ a , −A, −Φ A ) always appear in the equations with a minus sign as compared with (b, φ b , B, Φ B ), (c, φ c , C, Φ C ); this follows from the structure of the basic equation (6.1), in which the side a and the vertex A are traversed or rotated backwards.
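The labelled cosine and sine C κ (x), S κ (x) used throughout are not redefined in this section; assuming the standard CK convention of [1], C κ (x) = cos(√κ x) and S κ (x) = sin(√κ x)/√κ (this explicit form is a reconstruction from the companion paper's conventions, stated here only to fix notation), their basic properties can be checked symbolically:

```python
import sympy as sp

x = sp.symbols('x', real=True)
k = sp.symbols('kappa')  # the label: elliptic k>0, parabolic k=0, hyperbolic k<0

# Labelled cosine/sine with label kappa (CK convention assumed from [1]):
C = sp.cos(sp.sqrt(k) * x)
S = sp.sin(sp.sqrt(k) * x) / sp.sqrt(k)

# Fundamental identity C^2 + kappa S^2 = 1, valid for any label:
assert sp.simplify(C**2 + k * S**2 - 1) == 0

# Contraction kappa -> 0 reproduces the flat (parabolic) functions 1 and x:
assert sp.limit(C, k, 0) == 1
assert sp.limit(S, k, 0) == x

# For kappa = -1 they become the hyperbolic functions:
assert sp.simplify(C.subs(k, -1) - sp.cosh(x)) == 0
assert sp.simplify(S.subs(k, -1) - sp.sinh(x)) == 0
```

The same convention with label η governs C η , S η for the phases, consistent with i² = −η and e^{ix} = C η (x) + iS η (x) in (4.6).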
The set (6.2) is equivalent to two other similar sets, obtained by starting with the basic identity split in the two equivalent forms symbolically denoted as C b A = c B a and b A c = B a C. In order to present all these equations in a concise way, it proves adequate to introduce a compact notation, following the pattern explained in [1]. The sides and angles, lateral phases and angular phases will be denoted as x i , X I , φ i , Φ I , i, I = 1, 2, 3 according to (6.3). The built-in minus sign in x i , φ i , X I , Φ I when i = I = 1 is natural when the triangle is considered as a point loop with the side a, φ a traversed backwards, or as a side loop with the angle A, Φ A rotated backwards; this choice absorbs the signs related to a, A and confers a uniform appearance on the equations (6.2). In particular, the angular and lateral excesses appear in this notation as ∆ = X I + X J + X K and δ = x i + x j + x k . The basic equation (5.19) now reads: $e^{x_i P}e^{\phi_i T}e^{X_K J}e^{\Phi_K I}\,e^{x_j P}e^{\phi_j T}e^{X_I J}e^{\Phi_I I}\,e^{x_k P}e^{\phi_k T}e^{X_J J}e^{\Phi_J I} = 1$ (6.4) where i = I, j = J, k = K are any cyclic permutation of 123. This basic equation can be very simply recalled: replace in the shorthand iKjIkJ each letter by the associated complete translation or rotation. From now on we will adopt this convention, which makes all equations of trigonometry explicitly invariant under cyclic permutations of the 'oriented' complete sides x i , φ i and angles X I , Φ I . Capital indices will also help in distinguishing between mutually dual pairs x i , X I and φ i , Φ I .
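The compact notation (6.3) is not legible in this copy; from the stated rule that only the a/A elements carry a built-in minus sign, a plausible reconstruction is:

```latex
% Compact notation (6.3), reconstructed: the a/A elements carry a built-in minus sign
\begin{aligned}
(x_1, x_2, x_3) &= (-a,\; b,\; c), &\qquad (\phi_1, \phi_2, \phi_3) &= (-\phi_a,\; \phi_b,\; \phi_c),\\
(X_1, X_2, X_3) &= (-A,\; B,\; C), &\qquad (\Phi_1, \Phi_2, \Phi_3) &= (-\Phi_A,\; \Phi_B,\; \Phi_C).
\end{aligned}
```

With this choice the excesses read ∆ = X I + X J + X K = −A + B + C and δ = x i + x j + x k = −a + b + c, matching their appearances later in the text.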
The trigonometric equations in the 'Cartan sector'
Each equation in (6.2) is 'complex', and the phases, both lateral and angular, appear through unimodular 'complex' factors e iφ , e iΦ , while 'pure' sides and angles x i , X I appear through their labelled sines or cosines, which are always real. The equations in the third line split into an equation for the modulus and another for the argument; this last part is: Writing the same equation for the choice of indices i, j, k → j, k, i and comparing we get: These three equations, only two of which are independent, are self-dual and hold for all the 'complex' CKD spaces. A consequence of these very simple linear relations is: the common value in this formula turns out to be a quantity first introduced for the complex hermitian elliptic space by Blaschke-Terheggen. We will call it Ω: In a similar dual way, starting also from (6.6), we find another linear relation between phases: whose value turns out to be the invariant dual to Ω, which for the complex hermitian elliptic case was also introduced by Blaschke-Terheggen: Other quantities linked to φ i , Φ I are the angular phase excess ∆ Φ (5.13) and the lateral phase excess δ φ (5.15), which appear in the hermitian analogues of the Gauss-Bonnet triangle theorems. In terms of the compact notation (6.3) they are: The invariants Ω (resp. ω) are thus two kinds of mixed phase excesses, with dominance of angular (resp. lateral) phases. The departure from the BT notation ω, τ to our Ω, ω conforms to the typographical upper/lower case convention in order to stress duality and to convey that each mixed excess Ω, ω has dominance of either angular or lateral phases. These invariants and the two phase excesses ∆ Φ , δ φ , which appear in the point and line loop equations, are related by: Thus there is a sector of hermitian trigonometry involving only phases, completely decoupled from the 'pure' sides x i and angles X I .
This sector holds in exactly the same form in the twenty seven 'complex' CKD spaces, as no explicit labels η; κ 1 , κ 2 appear. Since the triangle invariants φ i , Φ I are related to the Cartan subalgebra, we will call these equations the 'Cartan' sector of 'complex hermitian' trigonometry. This 'Cartan sector' has no analogue in the trigonometry of real spaces, and its equations are purely linear, witnessing the abelian character of the Cartan subalgebra.
The complete set of 'complex hermitian' trigonometric equations
Now, by exploiting the 'Cartan' relations between phases (6.6), and introducing explicitly the invariants Ω, ω, it turns out to be possible to simplify the equations (6.2) by multiplying each one of them by a suitably chosen unimodular 'complex' factor. This leads to the full set of trigonometric equations coming from the basic trigonometric group identity: These equations will be referred to by a tag, and are either self-dual (for instance 2ij ≡ 2IJ) or appear in mutually dual pairs (as 1i, 1I). The equations with tag 0 allow the introduction of the 'symmetric' invariants Ω and ω and are in the 'Cartan' sector. The remaining tags are intentionally made to match the ones used in [1]; the equations with tags 1, 2, 3, 4 are in most respects the closest 'complex hermitian' analogues to the equations found in the real case, as far as their mutual relations, dependence or independence, etc. are concerned. Therefore the trigonometry of real spaces provides a rough first guide in the exploration of the whole forest of 'complex hermitian' trigonometric equations.
The 'complex hermitian' trigonometric bestiarium
Taking the equations (6.13) as a starting point, we now perform a fully explicit study of 'complex hermitian' trigonometry, including some comments. As the scheme enjoys self-duality, those equations which are not self-dual will have a dual partner, obtained by exchanging the capitalization of names and indices: x ↔ X, φ ↔ Φ, i ↔ I, and the CK constants κ 1 ↔ κ 2 ; in these cases we will only sketch the derivation of one member of the dual pair, but we will write each pair together, to emphasize self-duality as the main trait of this approach. The label η does not change under duality.
These equations will hold for all twenty seven 'complex' CKD spaces with arbitrary η; κ 1 , κ 2 . In the degenerate cases κ 1 = 0 (flat 'complex hermitian' spaces) and/or η = 0 or κ 2 = 0 (degenerate 'complex' 'Hermitian' metric) some equations may collapse or even reduce to trivial identities; these cases will be discussed later but for the moment we will stay in the general situation where η; κ 1 , κ 2 are assumed to have any values. All equations found in the literature for the elliptic (hyperbolic) complex hermitian spaces will follow from this set after we specialize η = 1; κ 1 = 1 (κ 1 = −1), κ 2 = 1; in those cases, the equations we found will be allocated a suitable name.
• The Cartan sector equations 0IJ ≡ 0ij will be called the 'complex hermitian' phases theorem. They are self-dual and involve only the triangle Cartan invariants. They allow the introduction of two symmetric triangle invariants Ω and ω after (6.8) and (6.10): There are two such independent equations, thus four independent quantities among the six lateral and angular phases. This number equals the number of essential independent triangle invariants; this is not accidental (see the comment at the end of Sect. 6.5).
• The equations 2iJ ≡ 2jI, taken together will be called the hermitian sine theorem.
This self-dual relation has two independent equations linking the six 'pure' sides and angles. The hermitian phases theorem can be written in terms of the phase factors e iφ i , e iΦ I and has the same form as the sine theorem; this is so because the phases theorem (6.14) and the sine theorem (6.15) are the modulus and argument of the same 'complex' equality.
• Each of the 'complex hermitian' cosine theorems 1i and 1I is a 'complex' equation. By splitting the hermitian cosine theorem 1i into real and imaginary parts, we get: called the real and imaginary Hermitian cosine laws (for sides). Their duals are the real and imaginary Hermitian dual cosine laws (for angles): • By equating the moduli of both sides of the hermitian cosine theorem 1i we get: (6.18) For the complex elliptic case this is the Shirokov-Rosenfeld cosine theorem (2.12) [9], but expressed in terms of the angular variables X I and Φ I instead of the ones used in [9].
• By multiplying both sides of (6.15) by 1/S η (Ω) and using the second equation in (6.16) we obtain: called the Shirokov-Rosenfeld double sine theorem, because in the complex elliptic case it reduces to the SR double sine law (2.11) after changing to the angular variables used by SR. Its dual is: (6.25) • By multiplying (6.24) and (6.25) we get the self-dual equation: • By taking the quotient between the double sine theorem (6.24) and the sine theorem (6.15) we get: whose dual is: (6.28) • Further equations derive from the equations with tags 3iJ and 3Ij. In particular, by splitting the equations 3iJ into their real and imaginary parts we obtain: (6.29) whose duals are: • The same splitting for the equations 4ij ≡ 4IJ leads to the pair of self-dual equations: • Starting from the real and imaginary parts of the 'complex hermitian' cosine theorem (6.16), expanding the trigonometric functions of Ω = Φ I + φ j + Φ K by considering it as a sum of two phases, and eliminating the term containing C η (Φ I + φ j ), we get: Its dual is: where we have used the relations Φ J + φ k = Ω − Φ I = ω − φ i , which follow from the equations in the 'Cartan' sector and the definitions of Ω and ω.
• By dividing equation (6.27) by (6.32) we get: whose form for another suitable choice of indices is: The duals of these equations are: • By eliminating the angles X I , X K between (6.34) and (6.35): whose dual is: These equations give the cosine of each side (angle) in terms of the angular (lateral) phases only. They somewhat resemble the Euler equations of real trigonometry for the cosine of half the sides (angles) in terms of the angles (sides). In these hermitian 'Euler-like' equations, however, pure sides (angles) are given in terms of angular phases and Ω (lateral phases and ω).
• By expansion of sines of sums or differences and elementary manipulation, we finally get the expression for the squared sines of the sides: whose dual equation is: As we shall see shortly, and in spite of the presence of κ 1 , κ 2 in denominators, these equations are still meaningful when κ 1 → 0 or κ 2 → 0.
Symplectic area and coarea
For real CK spaces the angular excess ∆ shares three properties: ∆ goes to zero with κ 1 , it is proportional (with coefficient κ 1 ) to the triangle area, and it satisfies Gauss-Bonnet type equations. These three properties split in the 'complex' case: in the hermitian point loop equations (5.11) and (5.12), the 'complete excess' (∆, ∆ Φ ) plays a role partly analogous to the real angular excess, yet it may not vanish with κ 1 . There are two different independent hermitian triangle quantities which vanish with κ 1 . One of them is the Blaschke-Terheggen invariant Ω. This follows directly from the equations already derived. The situation for the other vanishing quantity is not so obvious (see however the comments in the next Section). Dually, while the real excess δ is proportional to the coarea, vanishes with κ 2 , and satisfies dual Gauss-Bonnet type equations, the complete lateral excess (δ, δ φ ) appears in (5.14) and may not vanish with κ 2 , while ω vanishes with κ 2 .
In the real case, the three cosine equations 1i (1I) turned into trivial identities when κ 1 = 0 (κ 2 = 0). In the 'complex hermitian' case, the three 'complex' equations 1i (1I), which are independent when κ 1 ≠ 0 (κ 2 ≠ 0), collapse when κ 1 = 0 (κ 2 = 0) into a single real one in the 'Cartan' sector, and as far as pure sides and angles are concerned become trivial: 1i when κ 1 = 0: $e^{i\Omega} = C_\eta(\Omega) + iS_\eta(\Omega) = 1$, implying $C_\eta(\Omega) = 1$, $S_\eta(\Omega) = 0$ (6.42); 1I when κ 2 = 0: $e^{i\omega} = C_\eta(\omega) + iS_\eta(\omega) = 1$, implying $C_\eta(\omega) = 1$, $S_\eta(\omega) = 0$ (6.43). The behaviour of the quotient $S_\eta(\Omega)/\kappa_1$ as κ 1 → 0 can be derived both from the imaginary part of the Hermitian cosine theorem (6.16) and from equations (6.40): and since this quotient remains finite as κ 1 → 0, Ω behaves like the real-case angular excess ∆ = −A + B + C. Dually, ω behaves as the real pure lateral excess δ = −a + b + c: The real excesses ∆, δ are proportional, with coefficients κ 1 and κ 2 , to the triangle area and coarea respectively. In the elliptic hermitian space CP 2 Hangan and Masala [29] found for the symplectic triangle area S the relation S = −Ω/2 (the inessential minus sign comes from their definition of the symplectic form). For any member of the CKD family of 'complex Hermitian' spaces, the definitions for the triangle symplectic area and coarea: (note the factor 2) are in full agreement with the standard definition of symplectic area as the integral of the symplectic form over any surface dressing the triangle; this form is closed, so by Stokes' theorem the integral depends only on the boundary.
Therefore all appearances of Ω or ω in trigonometric the equations could be rewritten in terms of trigonometric functions of the symplectic area S with label ηκ 2 1 (the symplectic area goes like the product of lengths along the geodesics generated by P 1 and Q 1 , whose labels are κ 1 and ηκ 1 ), and symplectic coarea s, with label ηκ 2 2 .
When κ 1 = 0, Ω vanishes but S keeps some finite value, a kind of 'residue' of the generically non-vanishing mixed phase excess Ω. Dually, the same happens for ω and s as κ 2 → 0.
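This 'residue' behaviour can be made concrete: taking Ω = 2κ 1 S from Theorem 3 and the labelled sine S η (x) = sin(√η x)/√η (a convention assumed from [1]), Ω vanishes linearly with κ 1 while the quotient S η (Ω)/κ 1 tends to the finite value 2S. A minimal symbolic sketch:

```python
import sympy as sp

k1, eta, S = sp.symbols('kappa1 eta S', positive=True)

# Labelled sine with label eta (CK convention assumed from [1]):
S_eta = lambda u: sp.sin(sp.sqrt(eta) * u) / sp.sqrt(eta)

# Omega = 2*kappa1*S (Theorem 3); Omega itself vanishes as kappa1 -> 0 ...
Omega = 2 * k1 * S
assert sp.limit(Omega, k1, 0) == 0

# ... but the quotient S_eta(Omega)/kappa1 has the finite limit 2S,
# so the symplectic area S survives the contraction kappa1 -> 0:
assert sp.limit(S_eta(Omega) / k1, k1, 0) == 2 * S
```

The dual computation with ω = 2κ 2 s and κ 2 → 0 gives the residue 2s for the symplectic coarea.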
Dependence and basic equations
In the complex hermitian elliptic and hyperbolic spaces a triangle is known to be determined by four independent quantities. Since we have found eight generic independent relations between the twelve sides, angles and phases, this is still true in the generic CKD space C η S 2 [κ 1 ],κ 2 . Two such relations are the hermitian phases theorems (6.14). The other six happen to be exactly twice as many independent equations as in the real case, due to their 'complex' nature. This allows us to split the dependence discussion into the Cartan sector and the equations with a real analogue.
The Cartan sector includes the six phases, to which we add the symplectic area and coarea. For any value of the labels κ 1 , κ 2 , η, there are four independent equations between the eight quantities φ i , Φ I , S, s: so in any case there are always four such independent Cartan sector quantities. Generically the triangle is almost determined by these quantities (see (6.40), (6.41)).
In order to discuss the dependence of the remaining equations, let us consider: (6.49) The quantity △ g is the determinant of the Gram matrix whose elements are the 'hermitian' products of the vectors corresponding to the vertices in the linear ambient space, and △ G is its dual quantity. From (6.42) and (3.12) it follows that △ g vanishes when κ 1 → 0; dually, the same happens for △ G when κ 2 → 0. Further, the quotient △ g /κ 1 2 (resp. △ G /κ 2 2 ) tends to a well defined finite limit when κ 1 → 0 (resp. κ 2 → 0), although it still goes to zero when κ 2 → 0 (resp. when κ 1 → 0). To see this, simplify (6.49) by using (6.22) or (6.23) to obtain: This suggests introducing two new 'renormalized' quantities γ, Γ in a way similar to (6.46): Relations between Γ, γ and the symplectic area and coarea S, s, holding for any η; κ 1 , κ 2 , follow by expressing the sines of sides and angles in (6.51) by means of (6.40) and (6.41): By direct substitution using (6.52) and (6.53) we can also derive the following relations between Γ, γ and the pure sides and angles: Let us now discuss the dependence issue in the generic case κ 1 ≠ 0, κ 2 ≠ 0, where all the equations with tags 2, 3 and 4 follow from 1, exactly as in the real case. This means that a triangle is completely determined by the three sides a, b, c and Ω, or by the three angles A, B, C and ω. The proofs are also a verbatim translation of the real ones, with a single hermitian caveat: sometimes the 'complex' conjugate of an equation 1I or 1i should be used. For instance, let us derive the hermitian sine theorem from the dual cosine theorems 1I in the case κ 2 ≠ 0. Start from the identity for S 2 , replace one factor C κ 2 (X J ) by its expression taken from 1J, and the other C κ 2 (X J ) by its complex conjugate; then expand:
(6.55) which by using (6.49) and (6.51) can be rewritten as: (6.56) As κ 2 ≠ 0, the r.h.s. of (6.56) is well defined and is clearly symmetric in the indices ijk (this was also clear from (6.51)). Therefore $S^2_{\kappa_1}(x_i)S^2_{\kappa_2}(X_J)S^2_{\kappa_1}(x_k) = S^2_{\kappa_1}(x_j)S^2_{\kappa_2}(X_K)S^2_{\kappa_1}(x_i)$ leads to the sine theorem. Dually, when κ 1 ≠ 0 the sine theorem follows also from the three cosine theorems 1i. By following the real pattern, the dual hermitian cosine theorem and the equations with tags 3 and 4 can also be derived. Therefore, in the generic κ 1 ≠ 0, κ 2 ≠ 0 case, the three 'complex hermitian' cosine theorems 1i, seen as six independent real equations, are a set of basic equations. By duality, the same applies to the three dual cosine theorems 1I. By adding to either choice the four independent 'Cartan sector' phase equations relating the six phases, S, s, we get a complete set of ten equations relating fourteen quantities. A triangle in the hermitian spaces with κ 1 ≠ 0, κ 2 ≠ 0 is characterized by four independent quantities, for instance either a, b, c, Ω or A, B, C, ω, which are a dual pair; this was already known for the hermitian elliptic or hyperbolic spaces, but holds for the complete family of 'complex' CKD spaces, as we shall see in the next section. Another choice is the six phases, linked by the two relations (6.14).
Alternative forms of Hermitian Cosine Equations when
The collapse of the Hermitian cosine equations (6.16) to (6.42) when κ 1 = 0 can be circumvented by writing (6.16) in an alternative form. The imaginary part can be rewritten in terms of the symplectic area S, by using (6.47). For the real part, the procedure mimics the real one [1]: write all cosines of the sides and of Ω in terms of versed sines, substitute in 1i, expand, cancel a common factor κ 1 and use the identities for the versed sines of a sum. Thus the pair of equations (6.16) can be rewritten as: (6.58) a form meaningful for any value of κ 1 (even if all other labels are equal to zero). When κ 1 = 0 they reduce to (6.59). The dual cosine equations 1I allow a similar reformulation: (6.60) which is meaningful for any value of κ 2 . The quantities γ (Γ) can also be given in terms of the sides and symplectic area (angles and symplectic coarea) by expressions which are still meaningful when κ 1 = 0 (κ 2 = 0): Thus when κ 1 = 0 but κ 2 ≠ 0, the six real equations 1i ′ are independent and all the remaining equations follow from the Cartan sector equations (6.48) and from the three pairs 1i ′ , just as everything followed from 1i in the case κ 1 ≠ 0. Dually, mutatis mutandis, from 1I ′ when κ 2 = 0 but κ 1 ≠ 0.
We finally discuss the situation when κ 1 = κ 2 = 0. The lateral and angular phases are equal: Φ I = φ i , and this provides three independent equations. Two further equations are the sine theorem, which in this case cannot be derived from 1i ′ or 1I ′ . This makes five independent equations: The remaining details depend on whether η is zero or not. If η ≠ 0, the equations 1i ′ /1I ′ read: (6.63) Taking into account (6.62), the groups of three equations Re1i ′ and Re1I ′ are equivalent; either of them can be taken as three further equations. Any of these sets implies the relation (whose general form is (6.27)) which shows that the three equations either Im1i ′ or Im1I ′ collapse to a single equation. Taken altogether, these provide another five independent equations in (6.63). When η = 0, i.e., in the most contracted case, the three equations Re1i ′ collapse to a single equation, better written as x i + x j + x k = 0, and likewise Re1I ′ collapse to X I + X J + X K = 0; these two equations are however not independent in view of the sine theorem. In this case the most contracted form of (6.27) cannot be derived from the previous equations, and two further independent equations have to be added, in either of the two forms: Using these equations, each group of three equations Im1i ′ or Im1I ′ collapses to a single equation. This makes again five independent additional equations altogether in:

Theorem 3. The full set of equations of 'complex Hermitian' trigonometry linking the fourteen quantities x i , X I , φ i , Φ I , S, s contains, for any value of η, κ 1 , κ 2 , exactly ten independent equations. Any other equation in the set is a consequence of them. When κ 1 or κ 2 are different from zero, four such equations are the two phases equations 0ij ≡ 0IJ and the two relations Ω = 2κ 1 S, ω = 2κ 2 s. The remaining six independent equations are: • When κ 1 ≠ 0 and κ 2 ≠ 0, any η: either the equations 1i or 1I.
• When η = 0, the ten independent equations in (6.62) and (6.66). Table 2: Complex Hermitian sine theorems and relations between symplectic area S, coarea s and mixed phase excesses Ω, ω for the twenty seven 'complex Hermitian' ('CH') CKD spaces. The Table is arranged with columns labeled by κ 1 = 1, 0, −1 and rows by κ 2 = 1, 0, −1; (η; κ 1 , κ 2 ) are explicitly displayed at each entry. All relations in this Table hold in the same form regardless of the value of η. The group description of the homogeneous spaces is given in the CKD type notation.
Symmetric invariants and existence conditions
Several Hermitian trigonometric equations are, or can be rewritten as, a relation belonging to one of two types. The first type has a structure similar to the sine theorem: a 'one-element' expression involving only one index (vertex, opposite side) has the same value for the two remaining ones: Under duality τ ↔ 1/τ and ξ ↔ Ξ. Other such 'one-element' type equations have values which can be expressed in terms of the three triangle invariants τ, ξ, Ξ: The second type has a structure like the formulas allowing the introduction of Ω, ω: a 'cyclic' expression invariant under any cyclic permutation of the three indices it involves: There is no essential difference between the 'one-element' and 'cyclic' types of equations, and it turns out to be possible to express the 'one-element' invariants in an explicitly 'cyclic' form:
The two quantities γ/κ 2 and Γ/κ 1 must be non-negative for the triangle to exist. Therefore, any triangle must satisfy the inequalities: which apply to any member of the CKD family of 'complex Hermitian' spaces, irrespective of any further restriction on the sides and angles. Brehm's inequalities, under which a triangle with prescribed values for the sides and the shape invariant exists in the elliptic and hyperbolic complex hermitian spaces, are simply the transcription of the condition γ/κ 2 ≥ 0 to the complex CK spaces with η = 1; κ 1 ≠ 0, κ 2 = 1. Thus γ ≥ 0, or, equivalently, $\triangle_g = \kappa_1^2\,\gamma \ge 0$; by using (6.49) this gives: which covers simultaneously the inequalities given by Brehm for the elliptic (κ 1 > 0) and hyperbolic (κ 1 < 0) hermitian spaces (note that Brehm denotes our Ω by ω). It is also worth highlighting the translation of the inequalities (6.71) by using (6.38); this brings them in terms of angular and lateral phases, and symplectic area and coarea: The same translation can be done in the expressions (6.70), thereby expressing the values τ, ξ, Ξ in terms of lateral and angular phases, symplectic area and coarea:
The inequalities (6.71) are analogous to the existence conditions E κ 1 ≤ 0, e κ 2 ≤ 0 for the half-excesses E = ∆/2, e = δ/2 appearing in the trigonometry of real spaces. A way to derive such real inequalities, alternative to the one used in [1], is to introduce in the real case the determinants △ g , △ G of the Gram matrices built up from the real symmetric scalar products of the vectors corresponding to vertices or to poles of sides; these are given by (6.49) with Ω = 0, ω = 0, and also vanish when κ 1 , κ 2 → 0. If we introduce again γ, Γ by (6.51), in the absence of the factors C η (Ω), C η (ω), the identity A.30 in the appendix of [1] allows a factorization of γ, Γ, translating the conditions γ/κ 2 ≥ 0, Γ/κ 1 ≥ 0 into the inequalities E κ 1 ≤ 0, e κ 2 ≤ 0 for the angular and lateral excesses. This last step cannot be done in the Hermitian case and the inequalities stay in the form (6.71).
6.7
The three special cases: collinear triangles, concurrent triangles and purely real triangles

Browsing through the equations we have given, we find several pairs of equations which can be stated in two similar variant forms, one involving the sides (angles) and the other involving twice the sides (angles): examples of such pairs are (6.18, 6.19) or (6.15, 6.24) and their duals. This fact suggests the existence of two special non-generic types of triangles, for which the appropriate generic equation reduces to a (known) simpler form.
Complex Collinear triangles
The first special case corresponds to a triangle determined by three 'complex' collinear vertices, hence collapsing from the 'complex'-2D CK hermitian space η SU κ 1 ,κ 2 (3)/( η U (1) ⊗ η SU κ 2 (2)) to a 'complex'-1D subspace, which can be identified with a space η SU κ 1 (2)/ η U (1). Depending on whether κ 1 > 0, = 0, < 0, this space is the elliptic, euclidean or hyperbolic hermitian 'complex' line. The sides are all different from zero, but the angles X I , X J , X K must be zero (or straight), and thus satisfy S κ 2 (X I ) = S κ 2 (X J ) = S κ 2 (X K ) = 0 (hence γ = Γ = 0). For these values the equations 1J reduce to e iω = 1, thus ω = 0, and from 0iI we get: The SR double cosine theorem for sides (6.19) and the double sine theorem for sides (6.24) become in this case: so all hermitian trigonometric equations reduce in this case to the trigonometry of a triangle with sides 2x i , 2x j , 2x k and angles Φ I , Φ J , Φ K in an auxiliary real CK space with labels κ 1 for sides and η for angles, or equivalently, to that of a triangle with sides x i , x j , x k and angles Φ I , Φ J , Φ K in a real CK space with labels 4κ 1 for sides and η for angles, for which (6.76, 6.77) are the real cosine and sine theorems. The Lie algebra isomorphism η su κ 1 (2) ≃ so κ 1 ,η (3) lies behind this. By using (6.12), and recalling ω = 0, the angular excess Φ I + Φ J + Φ K of the auxiliary triangle turns out to be equal to 2Ω, and thus Ω plays the role of the angular half-excess denoted E in the previous paper on real-type trigonometry [1]. It is worth remarking that the area A of this auxiliary triangle is related to its angular excess as A = 2Ω/(4κ 1 ) = Ω/(2κ 1 ), thus coinciding with the original triangle's symplectic area S. The lateral phases φ i turn out to coincide with the three auxiliary angles denoted E I in [1]. In terms of the symmetric invariants, the 'collinear' case corresponds to:
Concurrent triangles
The second special case, dual to the previous one, corresponds to a triangle determined by three different concurrent geodesic sides. Then the sines of the sides are equal to zero: Here Ω = 0, and the SR dual double cosine equation (6.21) and the SR dual double sine equation (6.25) become the cosine and sine theorems for a triangle with sides X I , X J , X K and angles φ i , φ j , φ k in a real CK space with labels 4κ 2 for sides and η for angles. For the angular excess of this auxiliary triangle we have φ i + φ j + φ k = 2ω, and thus ω plays the role of the angular half-excess (E in [1]). In terms of the symmetric invariants, this case corresponds to:
Purely Real triangles
The third special case corresponds to a triangle for which the lateral and angular phase factors e iφ i and e iΦ I are real, and sides and angles are different from zero; this purely real triangle is contained in a purely real totally geodesic submanifold, isometric to SO κ 1 ,κ 2 (3)/(O(1)⊗SO κ 2 (2)), and locally isometric (as O(1) ≡ Z 2 ) to SO κ 1 ,κ 2 (3)/SO κ 2 (2); for κ 1 = 1, κ 2 = 1 this is the real projective space RP 2 . The sines of either set of phases vanish whenever those of the other do (see (6.30)); this reflects the self-dual nature of this case. The angular and lateral phase excesses Ω and ω, as well as the symplectic area and coarea, have vanishing sines. Each individual phase φ i or Φ I can thus take only two values, either 0 or twice a quadrant of label η, whose cosines are opposite, ±1. In terms of the symmetric invariants, this case corresponds to the values

This reduction also provides an approach to the trigonometry of real projective planes, requiring as triangle elements, in addition to sides and angles, a set of discrete phases entering the equations only through their cosines ε i = C η (φ i ) = ±1; ε I = C η (Φ I ) = ±1. Thus the Hermitian trigonometry of the 'complex' spaces simultaneously affords, if we restrict phases to these two possible discrete values, the trigonometry of the family of real 'projective' CK spaces (to which RP 2 belongs). The distinction between the trigonometry of the sphere and that of the real projective plane is well known (e.g. in Coxeter [26]).
Overview and Concluding remarks
The most direct physical application of Hermitian trigonometry is to the trigonometry of the Quantum space of states, which is the elliptic member (κ 1 > 0, κ 2 > 0) of the family of complex (η > 0) CKD Hermitian spaces; geometric phases appear directly as trigonometric invariants from this point of view. This will be discussed in the companion paper [20].
There are also other potentially interesting applications of an explicit knowledge of the trigonometry of this family of spaces. The real space-time models with zero or constant space-time curvature (Minkowskian and de Sitter space-times) are superseded by a variable-curvature pseudo-Riemannian space-time; this is the essence of the Einsteinian interpretation of gravitation. The possibility of a kind of 'Riemannian' Quantum space of states, whose curvature might not be constant, cannot be precluded a priori. A good understanding of the geometry of the Hermitian constant-curvature cases might help to explore what consequences would follow from this idea, and familiarity with their trigonometry is a first-order tool to this end.
Another physical problem where the results we have obtained could apply lies in the use of pseudo-Hilbert spaces with an indefinite Hermitian scalar product (of Gupta-Bleuler type). These indefinite Quantum spaces of states are those corresponding to κ 2 < 0; their Hermitian trigonometry should provide the basic elementary relations in the geometry of these spaces, just as the corresponding real relations are the basic space-time relations in the de Sitter and anti-de Sitter space-times.
The identification of the Quantum space of states as a member of this complete CKD family of spaces makes it also natural to ask whether or not the labels η, κ 1 , κ 2 may have any sensible physical meaning. Within the 'kinematical' (κ 2 ≤ 0) interpretation of the real CK spaces, κ 1 is the curvature of space-time and κ 2 = −1/c 2 is related to the relativistic constant. A natural query is: are the limits η → 0, κ 1 → 0 (and N → ∞) somehow related to a 'classical' limit ℏ → 0 within some sensible 'quantum' interpretation of the 'complex hermitian' spaces? This is worth exploring.
Real hyperbolic trigonometry, deeply involved in manifold classification problems, knot theory, etc., is merely a particular case of real CK trigonometry. It is not unreasonable to expect that some instances, at least, of the generic Hermitian trigonometry may be similarly relevant in the corresponding 'complexified' problems [32]. The intriguing indications of an essentially complex nature of space-time at some deep level also make it worthwhile to study complex spaces in a way as explicit and visual as possible.
Aside from the physical interest of particular results, another potential of the method proposed in [1] and developed in the present paper lies in the possibility of opening an avenue for studying the trigonometry of other symmetric homogeneous spaces, most of whose trigonometries are still unknown. Very few results are known in this area: a general sine theorem is derived by Leuzinger [33] for non-compact spaces, and the trigonometry of the rank-two spaces SU (3) and SL(3, C)/SU (3) is discussed in [34,35], relying heavily on the Weyl theorems on invariant theory and the characterization of invariants by means of traces of products of matrices.
The trigonometry of the rank-one 'quaternionic hyper-hermitian' spaces (Sp(3)/(Sp(1)⊗Sp(2)), Sp(2, 1)/(Sp(1)⊗Sp(2)), Sp(2, 1)/(Sp(1)⊗Sp(1, 1)) or Sp(6, R)/(SO(2, 1)⊗Sp(4, R))), which correspond to further Cayley-Dickson extensions with a new CD label η 2 , and also of the 'octonionic type' analogues of the Cayley plane, with yet another CD label η 3 , reduces in some sense to the 'complex' two-dimensional case, since any triangle in these spaces lies on a 'complex' chain; thus the study of trigonometry in rank-one spaces is essentially complete with the spaces of real quadratic and 'complex Hermitian' type. This reduction is not natural, however, from a purely quaternionic or octonionic viewpoint, and perhaps quaternionic (and also the exceptional octonionic) trigonometry should be better understood. In any case, this kind of approach in an 'R, C, H spirit' fits into V.I. Arnold's idea of mathematical trinities; hopefully it may provide a way towards the quest [32] for the quaternionic analogue of Berry's phase.
A next natural objective along this line is the study of the trigonometry of higher-rank grassmannians, either real or complex. This is still largely unknown (see however [36]). Should the method outlined in this paper be able to produce in a direct form the equations of trigonometry for grassmannians, which are also very relevant spaces in many physical applications, this would be a further step towards a general approach to the trigonometry of any symmetric homogeneous space. This goal will require first grouping all symmetric homogeneous spaces into CKD families, and then studying the trigonometry of each family. Work in progress along this line [37,3,38] opens the possibility of realizing all simple Lie algebras (even SL(N, R), SL(N, C), SO * (2n), SU * (2n) and the exceptional ones) as 'unitary' algebras, leaving invariant a 'hermitian' (relative to some antiinvolution) form over a tensor product of two pseudo-division algebras. This realization should allow a test of whether or not some extension of the ideas outlined here affords the equations of trigonometry for any homogeneous space in an explicit and simple enough way.

…the same, but the corresponding geometries differ by the interchange of first- and second-kind lines generated by either P 1 or P 2 . Notice the sign difference between the equations involving a, A and those involving b, B; c, C, and see the relevant comments in the main text.

Table 4 (η = 0). The Table is arranged after the values of the pair κ 1 , κ 2 , and the three labels (η; κ 1 , κ 2 ) are explicitly displayed at each entry. The group description G/H of the homogeneous space is not shown since, when η = 0, the CKD groups are not simple and do not have a standard name. The fiducial role of the trigonometry of the space (η = 0; κ 1 = 0, κ 2 = 0) at the center of this Table is clear. All the trigonometries in Tables 3, 4 and 5 are deformations of this 'purely linear' one.

Table 5 (η < 0).
The Table is arranged after the values κ 1 , κ 2 , and the labels (η; κ 1 , κ 2 ) are explicitly displayed at each entry. The group description G/H of the homogeneous space is shown only when G has a standard name. The spaces at the four corners are equal, but the trigonometric equations in these geometries are different, as they correspond to triangles with geodesic sides of the four possible non-conjugate types.

[Table body not reproduced: the cosine and sine theorems and their duals for the nine spaces, among them IU (2)/(U (1)⊗SU (2)) and SU (2, 1)/(U (1)⊗SU (2)), arranged in elliptic (circular functions of sides), euclidean (quadratic limits) and hyperbolic (hyperbolic functions of sides) columns.]

Table 4: 'Complex Hermitian' cosine theorems and their duals for the nine parabolic complex (dual) 'Hermitian' CKD spaces (η = 0).
[Table body not reproduced: entries for the nine 'Parabolic Complex Hermitian' CKD spaces, namely Elliptic (0; +1, +1), Euclidean (0; 0, +1), Hyperbolic (0; −1, +1), Co-Euclidean (0; +1, 0), Galilean (0; 0, 0), Co-Minkowskian (0; −1, 0), and the Oscillating and Expanding Newton-Hooke cases, giving the cosine and sine theorems and their duals, together with the corresponding equations in which the phases Ψ, ψ enter through hyperbolic functions.]
CD1d-mediated Recognition of an α-Galactosylceramide by Natural Killer T Cells Is Highly Conserved through Mammalian Evolution
Natural killer (NK) T cells are a lymphocyte subset with a distinct surface phenotype, an invariant T cell receptor (TCR), and reactivity to CD1. Here we show that mouse NK T cells can recognize human CD1d as well as mouse CD1, and human NK T cells also recognize both CD1 homologues. The unprecedented degree of conservation of this T cell recognition system suggests that it is fundamentally important. Mouse or human CD1 molecules can present the glycolipid α-galactosylceramide (α-GalCer) to NK T cells from either species. Human T cells, preselected for invariant Vα24 TCR expression, uniformly recognize α-GalCer presented by either human CD1d or mouse CD1. In addition, culture of human peripheral blood cells with α-GalCer led to the dramatic expansion of NK T cells with an invariant (Vα24+) TCR and the release of large amounts of cytokines. Because invariant Vα14+ and Vα24+ NK T cells have been implicated both in the control of autoimmune disease and the response to tumors, our data suggest that α-GalCer could be a useful agent for modulating human immune responses by activation of the highly conserved NK T cell subset.
CD1 molecules are β2-microglobulin (β2m)-associated transmembrane proteins that are related to MHC-encoded antigen presenting molecules. Despite their association with β2m, comparisons of primary sequences demonstrate that CD1 molecules are almost as closely related to class II molecules as they are to class I molecules (1). Therefore, CD1 molecules probably diverged from these other antigen presenting molecules early in vertebrate evolution, around the time of the class I-class II divergence. CD1 molecules are distinguished from the MHC-encoded classical class I and class II molecules by their lack of polymorphism. Of the five human CD1 genes, protein products have been identified for the CD1a, -b, -c, and -d isoforms (2). The CD1 isoforms are themselves quite divergent, although CD1a, -b, and -c are more closely related to one another in their amino acid sequences than to CD1d (1,3). Only two CD1 genes, CD1d1 and CD1d2, have been identified in mice. They are highly related to one another and are most similar to human CD1d in sequence (4). However, there is not an extraordinarily high degree of conservation between the mouse CD1 (mCD1) and human CD1d (hCD1d) homologues. The overall percentage of sequence identity between the hCD1d and mCD1 polypeptides in the antigen binding region is 60.4% for the α1 domain and 62.4% for the α2 domain (1,5).
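Identity figures like the 60.4% and 62.4% quoted above come from pairwise alignments of the two polypeptides. As an illustration of how such a number is computed, here is a minimal Python sketch; the sequences are short hypothetical stand-ins, not the actual hCD1d/mCD1 domain sequences, and a real comparison would start from a proper pairwise alignment (gaps marked '-'):

```python
# Minimal sketch of percent amino acid identity between two pre-aligned
# sequences. The toy sequences below are hypothetical placeholders.

def percent_identity(seq1: str, seq2: str) -> float:
    """Identity over aligned (equal-length) sequences, skipping
    positions where either sequence has a gap ('-')."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be pre-aligned to equal length")
    compared = matches = 0
    for a, b in zip(seq1, seq2):
        if a == '-' or b == '-':
            continue  # gap positions are not counted
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared

# toy example: 7 identities over 9 compared (non-gap) positions
print(round(percent_identity("MKTAYIAK-R", "MKSAYLAKQR"), 1))  # 77.8
```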
One of the properties shared by mCD1 and hCD1d molecules is their ability to be recognized by NK T cells (6)(7)(8)(9)(10)(11). Recently, much interest has been focused upon the NK T cell subpopulation on account of its ability to quickly produce large amounts of cytokines, suggesting a potential for these T cells to regulate immune responses. The majority of mouse NK T cells use mainly the Vβ8.2, Vβ7, or Vβ2 chain paired with an invariant Vα14Jα281 rearrangement (12). The human counterpart of the mouse NK T cells also expresses a restricted TCR repertoire, including a homologous, invariant Vα24 rearrangement paired with Vβ11, the human homologue of mouse Vβ8 (13)(14)(15). NK T cells in both species are autoreactive in vitro for CD1 molecules, in the absence of exogenous antigen. In addition to TCR repertoire and CD1 autoreactivity, NK T cells from both species resemble each other in several additional ways, including expression of intermediate TCR levels, expression of CD4 or the absence of both CD4 and CD8, expression of cell surface proteins characteristic of memory or activated T cells (16), and the presence of NK receptors, particularly NK1.1 (CD161) in mice (17,18) and its homologue, NKRP1, in humans (11,16).
Despite their relatively restricted TCR repertoire, a minority of NK T cells lack Vα14 expression. Moreover, Vα14 can be paired in the mouse with several different Vβs (19), and the Vβ rearrangements expressed by NK T cells have junctional diversity (12). The combination of these three factors allows for some diversity in the antigen receptors expressed by the NK T cell population, and the results from recent studies in the mouse indicate that mCD1-autoreactive T cells are in fact heterogeneous in their ability to recognize different mCD1+ cell lines and transfectants (20,21). These data suggest that a diverse set of autologous ligands may be presented by mCD1, although other interpretations are not ruled out. Consistent with the requirement for a diverse set of ligands, it has been shown recently that mouse Vα14+ T cells can recognize the lipoglycan α-galactosylceramide (α-GalCer) presented by mCD1, whereas Vα14− but mCD1-autoreactive T cells are not responsive to this antigen (22,23).
In this study, we have uncovered a surprising degree of conservation in the interaction of invariant NK TCRs from different species with CD1 molecules, despite the extensive divergence in the primary sequences of these molecules between mice and humans. We also demonstrate here that α-GalCer is presented by hCD1d to human NK T cells, providing the first evidence both for the presentation of a defined antigen by hCD1d and for a requirement of human NK T cells for a lipoglycan antigen in addition to hCD1d. As α-GalCer can induce a dramatic hCD1d-dependent expansion of human NK T cells, as well as a strong release of cytokines by these cells, our data further suggest that this lipoglycan could be a useful agent for the modulation of human immune responses.
Materials and Methods
Gene Cloning and Transfection. pSRα-neo-hCD1d (a gift from Dr. S. Balk, Beth Israel Hospital, Boston, MA), which contains a full-length hCD1d cDNA, was used as a template for PCR. After amplification, the hCD1d cDNA was sequenced and ligated into the TA cloning vector (InVitrogen, Carlsbad, CA). For expression in mammalian cells, the hCD1d cDNA was inserted into the BamHI and SalI sites of the pHβApr-neo vector, which contains the human β-actin promoter. 20 μg of plasmid was linearized and electroporated (Gene Transfector 300, BTX Corp., San Diego, CA) into A20 B lymphoma, C1R, and HeLa cells. Stable transfectants were selected at 3-4 wk and stained with biotinylated anti-hCD1d mAb 42.2 (see below).
T Cell Hybridomas. The derivation and characterization of the mCD1-autoreactive T cell hybridomas have been described previously (21,23,24). For the stimulation assays, 5 × 10 4 T hybridoma cells per well were cultured in the presence of 10 5 mCD1+, hCD1d+, or control stimulator cells. After 16 h, IL-2 release was evaluated in a sandwich ELISA using rat anti-mouse IL-2 mAbs (PharMingen, San Diego, CA).
Cloning of Invariant Vα24+ T Cells. PBMCs of healthy donors were stained with purified anti-Vα24 (IgG1) and anti-Vβ11 (IgG2a) mAbs, followed by human-adsorbed FITC-conjugated goat anti-mouse IgG1 and PE-conjugated goat anti-mouse IgG2a antibodies (Southern Biotechnology Associates, Inc., Birmingham, AL). Double-positive cells were sorted and either immediately cloned by limiting dilution or activated as a primary bulk culture and then cloned by limiting dilution (14). Clones that coexpressed Vα24 and Vβ11 by cytofluorimetric analysis were further expanded and used for this study. Molecular typing of the TCRs expressed by Vα24/Vβ11+ T cell clones was performed as previously described (25).
Activation of Vα24/Vβ11+ T Cell Clones by CD1 Transfectants. Activations were performed in 96-well plates in 200 μl total volume containing 1 ng/ml of PMA. T cell clones were added to wells at 5 × 10 4 per well, along with 10 5 CD1 transfectants. HeLa and HeLa-hCD1d transfectants were fixed for 30 s in 0.05% glutaraldehyde in PBS, immediately diluted, and washed three times in complete medium. Antigen-pulsed APCs were generated by culturing hCD1d transfectants for 2 h with 100 ng/ml of α-GalCer, followed by three washes and irradiation at 8,500 rads. To block recognition of hCD1d, the hCD1d-specific mAb 51.1, provided by Dr. S. Porcelli (Brigham and Women's Hospital, Boston, MA), or an irrelevant control mouse IgG2b was added to cultures at a final concentration of 20 μg/ml.
Generation of α-GalCer-reactive Cell Lines. Total human PBMCs were cultured in 24-well plates in the presence of 50 U/ml of IL-2 and 100 ng/ml of α-GalCer. Expansion of the Vα24+ cells was determined by staining with a combination of anti-CD3, anti-CD4, anti-CD8, anti-Vα24, and anti-Vβ11 mAbs.
Flow Cytometric Analysis. Biotinylated anti-hCD1d mAb 42.2 was provided by Dr. S. Porcelli. Secondary reagents for mCD1 detection were streptavidin-PE conjugates (Caltag, South San Francisco, CA). For staining, cells were suspended in buffer comprised of PBS, pH 7.3, containing 2% BSA (wt/vol) and 0.02% NaN3 (wt/vol) and incubated at 4°C for 20-30 min with the primary Ab, washed twice, and then further incubated with secondary reagents for another 20-30 min at 4°C. After two washes, the cells were fixed and analyzed on a FACScan® 440 flow cytometer (Becton Dickinson, San Jose, CA).
Cytokine Determination by ELISA. Supernatants were quantified for IL-4 or IFN-γ by ELISA using commercial pairs of mAbs: BAF285 and MAB285 for IFN-γ, or BAF204 and MAB604 for IL-4 (R&D Systems, Minneapolis, MN).
Results
A Subset of mCD1-autoreactive Hybridomas Responds to hCD1d. The parallels between mouse and human NK T cells led us to test for a possible cross-reactivity of mouse T cell hybridomas with hCD1d. Transfectants in several different cell types were generated and selected for approximately similar levels of surface hCD1d expression (Fig. 1 A). Each of these three transfected cell lines was used as APCs for T cell hybridomas that are mCD1 autoreactive. As shown in Table 1, two of the seven mCD1-autoreactive hybridomas also react with the three different hCD1d transfectants, whereas the other five do not. The hCD1d reactivity of these two NK T cell hybridomas was confirmed using a blocking anti-hCD1d mAb, which inhibited the reactivity to hCD1d A20 transfectants but not to mCD1 transfectants of the same cell line (Fig. 1 B). Similar antibody blocking data have been obtained using hCD1d-transfected HeLa and C1R cells (data not shown).
It is noteworthy that the two hCD1d-reactive mouse T cell hybridomas express the canonical Vα14/Vβ8 NK TCR, whereas the nonreactive ones either express Vα14/Vβ8 or Vα14/Vβ10 or, in three instances, lack Vα14 expression. The lack of hCD1d reactivity of the Vα14/Vβ8+ 3C3 hybridoma is not due to a lower level of TCR expression, implicating either Vβ junctional sequences or some other factor that distinguishes this hybridoma. Surprisingly, one of the two cross-reactive T cell hybridomas, DN3A4-1-2, responds much more strongly to hCD1d than to mCD1 transfectants of A20 cells (Table 1 and Fig. 1 B). Although the amount of IL-2 this cell releases in response to mCD1 transfectants is well above background, the ∼21-fold greater increase in IL-2 release obtained with hCD1d was typical of the results from nine different experiments. By contrast, hybridoma 2C12 reacts equally to the two CD1 homologues (Table 1 and Fig. 1 B).
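As an aside on how fold increases like the one above are obtained: the figure is simply the cytokine release measured with one stimulus taken as a ratio over the release with the control stimulus. A minimal Python sketch; all pg/ml values below are hypothetical placeholders, not data from the paper:

```python
# Illustrative fold-change calculation for IL-2 release, of the kind
# summarized in the text. The pg/ml values are hypothetical.

def fold_change(stimulated: float, control: float) -> float:
    """IL-2 release with one stimulus relative to a control stimulus."""
    if control <= 0:
        raise ValueError("control value must be positive")
    return stimulated / control

# hypothetical IL-2 readouts (pg/ml) for one hybridoma across APC lines
il2 = {
    "C1R":  {"control": 100.0, "test": 170.0},
    "A20":  {"control": 150.0, "test": 450.0},
    "HeLa": {"control": 120.0, "test": 264.0},
}
for apc, vals in il2.items():
    print(apc, round(fold_change(vals["test"], vals["control"]), 1))
```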
A Subset of Human NK T Cell Clones Is Reactive to mCD1 Molecules. We then asked whether the interspecies cross-reactivity of NK T cells for CD1 molecules would also be observed using human Vα24/Vβ11+ clones as responder cells. The presence or absence of the invariant Vα24-JαQ rearrangement in the T cell clones was confirmed by both heteroduplex and oligotyping analysis, and the diversity in the Vβ11 junctions was determined by sequencing (data not shown). A total of six clones from three different donors were analyzed, four of which expressed the invariant Vα24 TCR paired with Vβ11 and two of which expressed a noninvariant Vα24 also paired with Vβ11. The ability of these clones to release IFN-γ (Table 2) and IL-4 (data not shown) in response to CD1 transfectants was tested. Data from five experiments are combined and averaged. Consistent with previous data (11), we found that recognition of hCD1d by human NK T cells required PMA, and in some cases also required mild aldehyde fixation of the target cells. Therefore, all the experiments shown in Table 2 had PMA in the cultures, and the HeLa cell APCs also were fixed with glutaraldehyde.
In agreement with the previous results, all of the clones with an invariant Vα24 TCR could respond either to one or both hCD1d transfectants (Table 2). As we and others have described for mouse NK T cells (20,21), the reactivity of different human NK T cell clones is somewhat heterogeneous, and is more dependent on the type of hCD1d-expressing APC than on the level of hCD1d surface expression. Indeed, clone LG2D3 responds to hCD1d-transfected HeLa cells but not to A20-transfected cells, whereas the other three clones respond similarly to both types of cells (Table 2). Neither of the two clones that lack the invariant Vα24 responded significantly to the hCD1d transfectants, although both showed some background reactivity against all the APCs when PMA was added. When reactive, each of the clones could synthesize significant quantities of both IFN-γ and IL-4 in response to hCD1d stimulation. Three out of the four clones with an invariant Vα24 also were stimulated significantly by transfectants expressing heterologous mCD1 in the presence of PMA (Table 2), and this reactivity could be inhibited with an anti-mCD1 mAb (data not shown).
In summary, we conclude not only that NK T cells have been conserved through evolution, but also that the ability of the invariant TCR expressed by NK T cells to recognize the CD1d-like molecules mCD1 and hCD1d has been strictly conserved. Furthermore, if self-ligands are required for this CD1 recognition, these ligands are likely to be similar in mice and humans.
Mouse Vα14/Vβ8+ Hybridomas Respond to α-GalCer-pulsed hCD1d Molecules. We tested the ability of mouse NK T cell hybridomas to respond to the lipoglycan α-GalCer presented by the heterologous hCD1d molecule. It has been shown recently that mouse NK T cells with an invariant Vα14 TCR can respond in an mCD1-mediated fashion to α-GalCer (22,23). We therefore used this compound to determine whether hCD1d and mCD1 can present similar compounds to NK T cells.
Using α-GalCer-pulsed CD1d transfectants as APCs, we analyzed the reactivity of the panel of seven mouse T cell hybridomas described in Table 1. Only three hybridomas responded to the pulsed APCs, regardless of whether the hCD1d-expressing cell line used was a transfected C1R, A20, or Hela cell. The responses of Vα14/Vβ8+ hybridomas 2C12 and DN3A4-1-2 to hCD1d plus α-GalCer were significantly increased above the level of hCD1d reactivity in the presence of the vehicle alone. Depending upon the APC used, the ratio of the IL-2 production in the presence of α-GalCer compared with the vehicle increased from 1.7- to 3-fold for hybridoma 2C12, and from 1.5- to 9.6-fold for hybridoma DN3A4-1-2. Interestingly, although the third Vα14/Vβ8+ hybridoma, 3C3, was unresponsive to hCD1d transfectants in the absence of antigen, it was strongly stimulated by α-GalCer-pulsed hCD1d cells (Fig. 2 A). Complete inhibition of the IL-2 release induced by α-GalCer-pulsed APCs was obtained using an anti-hCD1d mAb, confirming the hCD1d-mediated presentation of α-GalCer (Fig. 2 A). Therefore, all three Vα14/Vβ8+ T cell hybridomas tested respond to hCD1d plus α-GalCer. By contrast, the Vα14/Vβ10+ hybridoma DN3A4-1-4 responded to α-GalCer-pulsed mCD1-expressing APCs but not to α-GalCer-pulsed hCD1d-expressing cells (Fig. 2 B). In summary, these data demonstrate that hCD1d can present the same lipoglycan antigen as its mouse counterpart. Our previous studies demonstrated that mCD1-mediated recognition of α-GalCer by mouse NK T cells was effective when Vα14 was expressed with any one of these different TCR-β chains (23). The data presented here, in contrast, suggest that when presented by the heterologous hCD1d molecule, α-GalCer responsiveness by mouse NK T cells may be more highly dependent upon the TCR-β chain.
Human Invariant Vα24/Vβ11+ Clones Respond to α-GalCer-pulsed hCD1d Molecules. We next tested if human NK T cell clones are responsive to α-GalCer presented by hCD1d. Cultures containing lipoglycan antigen-pulsed APCs were carried out in the absence of either PMA or glutaraldehyde fixation. Three NK T cell clones with an invariant Vα24 TCR (15.21.2, LG2D3, and PD9) and one of the Vα24/Vβ11 clones without the invariant junction (PD18) were tested. In the absence of lipoglycan antigen there was no stimulation by hCD1d because of the absence of PMA. The results from one representative experiment of three carried out with clone 15.21.2 are shown in Fig. 2 C. α-GalCer is a potent inducer of both IFN-γ and IL-4 release by this T cell clone. The cytokine release was inhibited by an anti-hCD1d mAb (Fig. 2 C) as well as an anti-Vα24 mAb (data not shown), confirming the TCR-mediated recognition of α-GalCer-pulsed hCD1d+ APCs. Similar data were obtained with clones LG2D3 and PD9. The reactivity of the 15.21.2 and PD9 human T cell clones for heterologous mCD1 also could be detected by the addition of α-GalCer. Clone LG2D3, which did not respond to mCD1+ APCs in the presence of PMA, likewise did not respond to α-GalCer presented by mCD1 (data not shown). α-GalCer-induced Expansion of Human Primary NK T Cells. The results described above demonstrate that long-term T cell clones, selected only on the basis of invariant Vα24 and Vβ11 expression, are specifically activated by α-GalCer presented by hCD1d. To analyze the response of T cells that have not been subject to long-term culture, we asked whether α-GalCer would induce the in vitro proliferation of human NK T cells. In these experiments, either the lipoglycan antigen or the vehicle control was added directly to cultures of unfractionated, fresh PBMCs. The percentage of Vα24-positive T cells was determined by flow cytometry at days 0, 7, 9, 11, and 12.
By day 12, we systematically obtained a significant expansion in both the percentage and number of T cells expressing the Vα24/Vβ11 TCR. A compilation of all the data from a series of experiments done with PBMCs from eight different donors is shown in Table 3. Although on day 0 the Vα24/Vβ11+ T cells were <1% of the total cell number, by day 7-12, the percentage of Vα24+ cells had expanded from 18.5- to 82-fold (Table 3). This increase in the percentage of Vα24+ cells is due to an expansion in cell number, because the total number of cells in the cultures generally was not decreased compared with day 0. This expansion required the presence of IL-2 and α-GalCer, as the number of Vα24/Vβ11+ T cells did not augment significantly when either one of these reagents was omitted (data not shown). PCR-heteroduplex analysis and oligotyping performed on aliquots of PBMCs, drawn at different time points from the different culture conditions, confirmed the expansion of Vα24/Vβ11+ T cells expressing the invariant Vα24-JαQ rearrangement (data not shown). Several of these lines were maintained in culture beyond day 12, and restimulated once with α-GalCer-pulsed autologous APCs, and further with α-GalCer-pulsed hCD1d-transfected C1R cells. These two further recalls allowed us to enrich the percentage of Vα24-positive T cells to 64% for line XC and 53% for line PB (Fig. 3). As shown in Fig. 3, lipoglycan-specific invariant Vα24/Vβ11+ T cells are either CD4+ or CD4−, consistent with previous reports on the surface phenotype of this subset (12,14,16). A 72% inhibition of the expansion of Vα24+ T cells was obtained for line PB when parallel cultures were set up from day 0 in the presence of the 51.1 anti-hCD1d mAb. This inhibition is consistent with the view that the α-GalCer-induced expansion of fresh T cells is hCD1d dependent.
Similarly, a 62 and 60% reduction of the Vα24+ T cell expansion was obtained for lines MP and PD, respectively, when an anti-Vα24 mAb was added to the culture from day 0. These data suggest that the α-GalCer-induced T cell expansion was dependent upon engagement of the TCR. Interestingly, when cultured in the presence of α-GalCer-pulsed transfected APCs, these two lines released IFN-γ but not IL-4 in an hCD1d- and Vα24-dependent manner (data not shown).
Discussion
The hCD1d molecule is of particular interest on account of its recognition by human NK T cells and on account of having a broader expression pattern than do other human CD1 molecules (26). In this report, we provide the first evidence for the ability of hCD1d to present antigen. The antigen identified here is α-GalCer, a structurally well-defined glycolipid. In the presence of α-GalCer, neither the addition of PMA nor fixation of the APCs is required to stimulate NK T cell reactivity to hCD1d. Furthermore, preliminary data indicate that α-GalCer can bind in vitro to purified, soluble hCD1d and mCD1 molecules (Maher, J., O. Naidenko, W. Ernst, R. Modlin, and M. Kronenberg, manuscript in preparation). Together with the antigen presentation and antibody inhibition studies presented here, these data indicate that the effect of α-GalCer is not likely to be an indirect one, but rather that α-GalCer forms a complex with hCD1d that is recognized by the TCR on human NK T cells. It should be noted that α-GalCer was purified from an extract from a marine sponge, based upon the positive results obtained with this extract during a screen for compounds that prevent tumor metastases (27). Ceramides with an α-linked sugar are not abundant in most microbes, and they are much less abundant than those with a β-linked sugar in mammalian cells. However, β-galactosylceramide is not an antigen for CD1d presentation (22,23). It therefore remains possible that α-GalCer is a mimic for a natural ligand that normally stimulates NK T cells, although there are no data to exclude the possibility of a low level of α-GalCer expression in mammalian cells. In mice, a glycosylphosphatidylinositol has been the only natural ligand bound to mCD1 identified so far (28), although compounds of this type apparently do not stimulate NK T cells in a CD1-dependent fashion (23).
The lack of polymorphism of nonclassical antigen-presenting molecules has led to the proposal that they may carry out some conserved and essential antigen-presenting function. Although this is a plausible hypothesis, with the possible exception of mouse Qa-1b and human HLA-E (29), there is surprisingly little evidence for interspecies conservation of nonclassical class I molecules at either the sequence level or the functional level. In contrast, the CD1 family is demonstrated here to show an extremely high degree of conservation with regard to its interaction with the invariant TCRs that are expressed by NK T cells. This conservation is observed either in the absence of exogenous antigen or together with a lipoglycan antigen. From a functional viewpoint, the mouse and human invariant TCRs and CD1 molecules are nearly equivalent, indicating a strong selection upon this TCR interaction with CD1. These data are consistent with previous reports demonstrating that human and mouse NK T cells are similar in phenotype and function (11,12,16). This degree of conservation and cross-reactivity at the antigen recognition level is particularly striking in light of the divergence in primary sequence of both the CD1 molecules and the TCRs expressed by NK T cells (5). We conclude that despite these significant changes in primary structure, the amino acids important for either lipoglycan binding to CD1 or the interaction between the TCR and CD1 must not have diverged significantly.
The results from immune assays of mouse and human T cells permit the identification of candidate regions of the TCR that are necessary for the recognition of CD1 plus lipoglycan. Data from a previous study of TCR transgenic mice expressing Vβ8 and Vα14 and of mice deficient for Jα281 as a result of targeted mutation implicated a role for the invariant Vα14 TCR in α-GalCer recognition (22). Although several analyses of mCD1-reactive T cell hybridomas and cell lines established that Vα14 is not absolutely required for mCD1 autorecognition (20,24,30), only mCD1-autoreactive hybridomas with a Vα14 TCR could respond to α-GalCer plus mCD1 (23). The conservation of the Vα-Jα junction in the invariant TCR (14,15,18,31) implicates this region of the TCR in contacting the CD1 molecule or a combination of CD1 and lipoglycan bound to it. Consistent with this hypothesis, we found that human T cells that express a Vα24 without the invariant V-J junctional sequence were unreactive to hCD1d transfectants in either the absence or presence of α-GalCer. Furthermore, there may be a more stringent selection upon the NK T cell β chain in humans than in mice, as the invariant Vα24 is mainly paired with Vβ11 (16,32,33), whereas in mice there are several predominant Vβs (12,19,20).
It is unprecedented to find complete cross-reactivity between mice and humans of the interaction of a TCR with a peptide-MHC complex. However, such a high degree of conservation is consistent with a superantigen type of recognition mechanism (34). In addition, the high frequency of α-GalCer-reactive cells in humans is consistent with a superantigen-like mode of action. However, there are two reasons why the data presented here do not favor the possibility that α-GalCer is recognized as a superantigen. First, the reactive T cells have a restricted V-J junctional diversity of their α chains, as well as restricted Vα gene use. Second, the reactive T cells are selected for both Vα and for Vβ expression, particularly in humans. These findings favor a more conventional mechanism for the interaction of the α-GalCer plus CD1 complex with the TCR expressed by NK T cells.
At the functional level, NK T cells are distinguished by their ability to produce large amounts of the cytokines characteristic of memory T cells within a few hours of stimulation. It was proposed originally that the early expression of IL-4 by NK T cells could be required for Th2-type immune responses (35,36). Although NK T cells normally may contribute to the induction of Th2 responses, it is now clear that in many cases they are not required (37,38). It also is clear from our studies and those of other investigators that NK T cells can secrete large amounts of IFN-γ in addition to IL-4 under some circumstances (39,40). Based upon their relatively high frequency in some sites and their ability to rapidly secrete large amounts of cytokines, functions for NK T cells in the regulation of autoimmune diseases (33,41,42), the response to infectious agents (43), and the surveillance for tumors (44) have been proposed. With regard to autoimmune disease progression, the strongest evidence currently favors a connection between NK T cells and diabetes, in which the results from mice and humans show a striking degree of similarity (33,42,45). The dramatic expansion and activation of human NK T cells upon stimulation in vitro with α-GalCer is therefore of considerable interest. Because we generally observed both IL-4 and IFN-γ in the culture supernatants of the T cell clones, but only IFN-γ from the lines, it should be possible to identify conditions in which a more polarized cytokine response develops. Therefore, one might envision the use of this molecule, or perhaps analogues (22) that might give a more polarized Th1 or Th2 response, as an attractive strategy to modulate human immune responses in vivo.
"year": 1998,
"sha1": "a33d848f2f411ad78688ced2169df35120980247",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/jem/188/8/1521.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "a33d848f2f411ad78688ced2169df35120980247",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Race, Ethnicity and National Origin-based Discrimination in Social Media and Hate Crimes Across 100 U.S. Cities
We study malicious online content via a specific type of hate speech: race, ethnicity and national-origin based discrimination in social media, alongside hate crimes motivated by those characteristics, in 100 cities across the United States. We develop a spatially-diverse training dataset and classification pipeline to delineate targeted and self-narration of discrimination on social media, accounting for language across geographies. Controlling for census parameters, we find that the proportion of discrimination that is targeted is associated with the number of hate crimes. Finally, we explore the linguistic features of discrimination Tweets in relation to hate crimes by city, features used by users who Tweet different amounts of discrimination, and features of discrimination compared to non-discrimination Tweets. Findings from this spatial study can inform future studies of how discrimination in physical and virtual worlds vary by place, or how physical and virtual world discrimination may synergize.
Introduction
Race, ethnicity or national-origin based discrimination (hereafter referred to as "discrimination") is a type of hate speech that systemically and unfairly assigns value based on race, ethnicity, or national origin and affects the daily realities of many communities. Researchers have used a variety of proxy measures to assess discrimination at scale, such as policies (Kawachi and Berkman 2003), or bias-motivated crimes (Sharkey 2010). However, policies usually have large spatial resolution, not all discrimination escalates to a crime, and as a measure, crimes do not describe any details regarding specific issues, antecedents or motivations that can be used to better illuminate and mitigate discrimination. This need for a better understanding of discrimination is compounded, and brought to attention, by recent increases in hate crimes in the United States (U.S.) (Farivar 2018). Simultaneously, a very compelling area of recent social media research is the exploration of types of hate speech and potential links to physical events (ElSherief et al. 2018a; ElSherief et al. 2018b; Olteanu et al. 2018; Müller and Schwarz 2018b; Müller and Schwarz 2018a). In particular, an examination of race, ethnicity and national-origin based discrimination on social media and its spatial characteristics is warranted, given that hate crimes motivated by these biases are the largest category of hate crimes in the United States (Federal Bureau of Investigation 2018).
Some prior social media research has focused on race-based discrimination experienced, and specifically on describing the racist concepts (e.g. appearance- or accent-related) most experienced by people of different races (Yang and Counts 2018). For the United States specifically, research has shown that anti-Muslim hate crimes since Donald Trump's presidential campaign have been concentrated in counties with high Twitter usage (specific content was not parsed) (Müller and Schwarz 2018b). Further, some social media research has identified both self-narration and targeted hate speech as important (Yang and Counts 2018; ElSherief et al. 2018a), but a gap remains in examining these linguistically and comparatively, and in their possible association with physical events. Self-narration is important to consider, as 86% of 18- to 29-year-olds have witnessed harassing behaviors online, as have 60% of those ages 30 and older, and 24% of 18- to 29-year-olds have experienced mental or emotional stress as a result of their online harassment (Duggan 2014). Simultaneously, it is estimated there are approximately 10,000 uses per day of racist and ethnic slur terms in English on Twitter (Bartlett et al. 2014). For targeted discrimination, research has examined the scope of targets (personal or towards groups) (ElSherief et al. 2018a). Finally, research has not examined how different forms of race, ethnicity and/or national origin-based discrimination on social media vary across the country, nor systematically assessed spatial differences in how people produce or discuss hate, or discrimination, online.
We address these important gaps and build upon prior social media research by examining a specific type of hate speech: race, ethnicity or national-origin based discrimination, enabling us to study it alongside hate crimes (as we can filter those by this same group of biases) across 100 different cities in the United States. This examination across the entire country allows us to assess the relationship across varying levels of urbanization, across different constituent properties of cities, and across different levels of social media usage in different places. While understanding the relationship between the two does not necessitate a causal pathway (nor do we aim to show one), this analysis helps to identify the way(s) in which social media may be relevant as a source for understanding structural discrimination, and helps illuminate how discrimination on social media may vary by place, in comparison to hate crimes. As well, our spatial analysis allows us to incorporate and assess linguistic differences associated with race-based discrimination on Twitter across cities in the United States. Specific contributions of this work are:
• Creation of a spatially-diverse training data set to account for local variations in race-based hate speech
• Development of a multi-level classifier to automatically identify self-narration versus targeted race, ethnicity or national-origin based discrimination on social media
• Assessment of the relationship between social media measures of targeted and self-narration of race, ethnicity or national-origin based discrimination and hate crimes motivated by the same biases in 100 cities across the United States, controlling for demographic and other city-level attributes.
Related Work
Social Media and Hate Speech
There is a recent and growing literature in the social media research community on the characteristics of hate speech. Beyond just detecting hate speech, research has gone further, for example, in analyzing the differences between personally-targeted and broadly-targeted online hate speech, showing linguistic and substantive differences (ElSherief et al. 2018a). Also, a comparative study of hate speech instigators and targeted users on Twitter found personality differences in both, distinct from the general Twitter population (ElSherief et al. 2018b). The above work was focused comprehensively on any hate speech, which is defined as speech that attacks a person or group on the basis of attributes such as race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity (ElSherief et al. 2018a). Given that the research here aims to describe the association between social media discrimination and hate crimes, we specifically narrow this research to race, ethnicity and national-origin based hate speech, as hate crimes are described by the different biases that motivate them, this being one of the categories (race, ethnicity and national-origin motivated hate crimes are often grouped, so we could not separate these out for all of the years and cities considered; moreover, these are often grouped together in studies of the implications of discrimination (Williams, Neighbors, and Jackson 2003)). In regards to race-specific hate speech, there is research distinguishing self-narration of racial discrimination and identifying which types of support are provided and valued in subsequent replies (Yang and Counts 2018). A user-level analysis characterizing those who display hate speech on Twitter was performed by annotating users' entire profiles, showing differences in activity patterns, word usage and network structure (Ribeiro et al. 2018).
Comparing Online Hate Speech to Offline Events
To date there have been a few efforts examining hate speech in social media alongside offline events. One analysis found that extremist violence leads to an increase in online hate speech, using a counterfactual time series method to estimate the impact of the offline events on hate speech (Olteanu et al. 2018). The social media data and crimes examined in this work were based on Arabs and Muslims specifically. Research by Müller et al. showed that right-wing anti-refugee sentiment on Facebook predicts violent crimes against refugees in otherwise similar municipalities with higher social media usage. Essentially, in this work they compare how social media posts affect crimes within the same municipality compared to other locations in the same week. Though this link was found, this paper was focused on Germany, the content examined was manually collected from one Facebook group, and the social media data and crimes examined were based on anti-refugee sentiment specifically (Müller and Schwarz 2018a). In the United States, another study has shown that the rise in anti-Muslim hate crimes since Donald Trump's presidential campaign has been concentrated in counties with high Twitter usage (specific content was not parsed) (Müller and Schwarz 2018b). In sum, related work on social media hate speech suggests that hate crimes are likely to have many fundamental drivers; social media can help illuminate some of these local differences (such as variation in xenophobic ideology or a higher salience of immigrants). To advance the validation of social media for this work, we focus on hate crimes biased by race, ethnicity and national origin, and appropriately parse social media to understand the same type of discrimination. This enables us to compare discrimination in social media to offline events at scale across the United States.
Data
Hate Crime Data
The Federal Bureau of Investigation (FBI) aggregates hate crime data under Congressional mandate. The biases that motivated the crimes are also recorded, broken down into specific categories (e.g. sexual-orientation or religious biases) (Federal Bureau of Investigation 2018). Agencies (generally metropolitan) of varying sizes contribute data. We used data from 2011-2016, which, at the time of analysis (9th November 2018), were the latest full years of data available, overlapping with our available Twitter data. Race, ethnicity and national origin are combined as the motivating biases in some of the years, so for our study we focused on this entire group, aggregating hate crime data across these biases for years where they were delineated. One hundred cities which spanned a range of total numbers of hate crimes and locations were chosen. Cities in the high and medium hate crime categories were selected solely based on their hate crime numbers. Then, geographic regions which were under-represented in that group (e.g., Florida in the southeast, and Utah and Idaho in the midwest) were added in the low hate crime category to balance the geographical distribution of cities. The geographic distribution of the cities, shaded based on the number of race, ethnicity or national-origin based hate crimes by city, is illustrated in Figure 1a.
All 48 contiguous states, as well as Washington, D.C. are represented.
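The quartile-based tiering step above can be sketched as follows (a stdlib-only illustration; the city names and hate-crime counts are made up, and the sketch omits the manual geographic rebalancing the selection also used):

```python
# Illustrative sketch of grouping cities into low/medium/high hate-crime
# tiers using the 25th and 75th percentiles, as in the selection above.
from statistics import quantiles

def tier_cities(crime_counts):
    """crime_counts: dict of city -> hate-crime count.
    Returns dict of city -> 'low' / 'medium' / 'high'."""
    q1, _, q3 = quantiles(crime_counts.values(), n=4)
    tiers = {}
    for city, n in crime_counts.items():
        if n <= q1:
            tiers[city] = "low"      # lowest 25%
        elif n >= q3:
            tiers[city] = "high"     # top 25%
        else:
            tiers[city] = "medium"   # middle 50%
    return tiers
```

In the study, cities from under-represented regions were then added to the low tier by hand, which this sketch does not attempt to reproduce.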
Social media data
We used Twitter's Streaming Application Programming Interface (API) to procure a 1% sample of Twitter's public stream from January 1st, 2011 to December 31st, 2016. From this data set we selected Tweets made in the specified 100 cities using the "place" attribute. The place attribute contains the name of the city where a Tweet was made, determined using both the point and polygon coordinates associated with a Tweet. We manually accounted for changes in the way cities are described by name over time. Using place is computationally faster for matching city names than mapping coordinates to cities with a polygon mapping algorithm for each Twitter JSON object. In total this resulted in 532 million Tweets. The text, time-stamp and location of the Tweets were used in discrimination classification; user IDs were used in the bot analysis.
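The place-based city filter described above can be sketched as follows. The field layout follows the Twitter Streaming API's Tweet JSON (`place.name`, `place.place_type`); the target-city set and alias map here are illustrative, not the study's actual configuration:

```python
# Sketch of filtering streamed Tweets by the "place" attribute.
# CITY_ALIASES handles changes in how a city is named over time
# (the alias shown is hypothetical).
CITY_ALIASES = {"New York City": "New York"}
TARGET_CITIES = {"New York", "Los Angeles", "Chicago"}

def city_of(tweet):
    """Return the normalized city name for a parsed Tweet dict, or None."""
    place = tweet.get("place")
    if not place or place.get("place_type") != "city":
        return None
    name = place.get("name", "")
    return CITY_ALIASES.get(name, name)

def in_target_cities(tweet):
    city = city_of(tweet)
    return city is not None and city in TARGET_CITIES
```

Matching on the pre-resolved place name, as here, avoids running a point-in-polygon test against city boundaries for every Tweet, which is the speed advantage noted above.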
Census data
Census data for demographic and other city attributes which, from domain knowledge, may relate to discrimination were included to control for their effect on the relationship between the online prevalence of discrimination and the number of race, ethnicity and national-origin based hate crimes. These included: the percentage of white, black, Asian, Hispanic/Latino, foreign born, female and ages 18-64 residents in the city. As well, population density (population per square mile) and median income (dollars) were included (US Census Bureau 2018a). We used data from Census Quick Facts, as it combines statistics from the American Community Survey (ACS) with other surveys to give a broader view of a particular geography (it includes population, density and income variables) (US Census Bureau 2018b). As well, Quick Facts uses ACS 5-year estimates, which have increased statistical reliability compared with single-year estimates, particularly for the small geographic areas and small population subgroups present in this study. Further, the five-year estimates capture information across a large portion of our study years (2012-2016).
Social Media Classification
The classification pipeline to identify discrimination Tweets, and then delineate those into self-narration of discrimination or targeted, is illustrated in Figure 2. To classify Tweets we used shallow neural networks, which have shown improved performance over traditional classifiers (dos Santos and Gatti 2014; Tang et al. 2014), especially for short texts such as Twitter messages, which contain limited contextual information. The n-gram based approach and classifier parameters were the same as in previous work on discrimination classification that showed good performance (Relia et al. 2018). As well, the overall approach followed from this same previous work in order to ensure that the resulting classified Tweets indicate discrimination and not colloquial uses of keywords and phrases. For example, not all text that contains the "n-word" is motivated by racist attitudes (Relia et al. 2018). More details are in the following sections.

Figure 1. a) Color of labels assigned based on number of race, ethnicity and national-origin based hate crimes (green: lowest 25%, yellow: between 25-75%, red: top 25%). Size of dots is based on the proportion of Tweets in that city that exhibit discrimination (self-narration or targeted). b) Color of the labels is assigned based on proportion of Tweets that exhibit discrimination (self-narration or targeted) (green: lowest 25% of cities, yellow: 25-75%, red: top 25%). Size of dots is based on the ratio of number of discrimination Tweets to the number of unique users who produce them. Underlined cities have a targeted proportion of discrimination greater than half.
Spatially-diverse Training Data
To improve classification performance, and account for possible language/terms specific to different locations across the United States, we developed a spatially diverse training data set. To do so, we used Tweets made in the top 11 cities ranked by total hate crimes. We specifically chose these cities, as more Tweets (245 million) were made in them than in the next 39 ranked cities combined (206 million). As well, these cities provided geographic diversity, as we aim to capture language used in different parts of the country. In order to create a balanced training data set that represents each of these 11 cities and each of the 6 years (2011 to 2016), we searched for hate speech keywords through a total of 73.42 million Tweets.
Labelling Data Procedure
We used the services of Figure Eight (https://www.figure-eight.com/; formerly known as Crowdflower) to label the training data, to capture and expand the keywords and phrases that are used in a discriminatory context. We clearly defined the criteria for labelling a Tweet (only the text of the Tweet was provided to the annotators) as indicating "discrimination" (versus "no discrimination"): a Tweet against a person, property, or society which is motivated, in whole or in part, by bias against race, ethnicity or national origin. Initial trial experiments and annotators' review scores on trial annotations confirmed the clarity of our instructions. As this project involved exposure of the annotators to potentially sensitive content, we clearly indicated the task is about Tweets that discuss discrimination, and created each Tweet as an individual task, giving annotators a chance to discontinue at any point without losing payment if they felt uncomfortable. Each Tweet was labeled by at least two independent Figure Eight annotators, and all annotators were required to maintain at least an 80% accuracy based on their performance on five test tasks. Annotators falling below this accuracy were automatically removed from the task (ElSherief et al. 2018a). Out of the 1988 Tweets labeled, 1698 were labeled as discussing discrimination and the remaining as no discrimination. The label was chosen based on the response with the greatest confidence. The confidence score (between 0 and 1) is calculated based on the level of agreement between multiple contributors, weighted by the contributors' trust scores (Figure Eight Inc.). A high average confidence score of 0.92 resulted for the task, and our team manually labeled Tweets where there was a conflict of labels between the annotators.
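The confidence score described above can be approximated as trust-weighted agreement. Figure Eight's exact formula is not public, so the following is only an illustrative sketch of the idea:

```python
# Hedged sketch of trust-weighted label aggregation: each annotator's
# judgment is weighted by their trust score, and the winning label's
# share of the total weight serves as the confidence.
def aggregate(judgments):
    """judgments: list of (label, trust) pairs.
    Returns (winning_label, confidence in [0, 1])."""
    weight = {}
    for label, trust in judgments:
        weight[label] = weight.get(label, 0.0) + trust
    total = sum(weight.values())
    label = max(weight, key=weight.get)
    return label, weight[label] / total
```

Under this approximation, two high-trust annotators agreeing against one dissenter yields a confidence around two-thirds, which would be a candidate for the manual conflict review described above.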
Finally, appending 14,012 Tweets not containing any discrimination keywords, chosen equally from across the 11 cities and 6 years, resulted in a training data set of 16,000 Tweets (1698 discrimination and 14,302 non-discrimination). This proportion is consistent with the 10-13% positive labels in a training dataset of 16,000 Tweets used in prior work (Le and Mikolov 2014; Dai, Olah, and Le 2015; ElSherief et al. 2018b).
Selection of Decision Boundary and Active learning for Classification
For the shallow neural net classifier, selection of the decision boundary (threshold), 0.623, was made by optimizing the balance between precision and recall (Wulczyn, Thain, and Dixon 2017). The F1 score and AUC for this decision boundary, calculated by averaging the scores from a k-fold cross-validation (k = 10), were 0.85 and 0.89 respectively. Further, we used active learning to improve classification at the decision boundary. The entire active learning procedure involved first randomly sampling 10,000 Tweets from the top 11 cities, ranked by hate crimes, and classifying them using the shallow neural network. We then manually labelled 1000 Tweets (5% of Tweets on each side of the decision boundary) and appended these Tweets into the training data (active learning portion). We did this iteratively until performance of the classifier plateaued (Relia et al. 2018). Finally, out of a total of 17,000 Tweets in the training data, this resulted in 1987 discrimination and 15,013 non-discrimination Tweets. The average F1 score of the classifier improved to 0.86 and average AUC improved to 0.90 after this procedure.
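The two steps above — choosing a threshold that balances precision and recall, and sampling Tweets near the decision boundary for manual labeling — can be sketched as follows. The scores and labels are illustrative, and where the paper takes the 5% of Tweets on each side of the boundary, this sketch uses a fixed score band for simplicity:

```python
# Sketch of (1) threshold selection by maximizing F1 over a grid of
# candidate thresholds, and (2) active-learning selection of examples
# whose classifier score falls near the chosen boundary.
def best_threshold(scores, labels, grid=None):
    """Return the threshold in `grid` that maximizes F1 on (scores, labels)."""
    grid = grid if grid is not None else [i / 100 for i in range(1, 100)]
    def f1(t):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return max(grid, key=f1)

def boundary_sample(scores, threshold, band=0.05):
    """Indices of examples scoring within +/- band of the threshold;
    these are the most uncertain cases, sent for manual labeling."""
    return [i for i, s in enumerate(scores) if abs(s - threshold) <= band]
```

The newly labeled boundary cases are then appended to the training data and the classifier retrained, iterating until performance plateaus, as described above.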
Delineating Targeted and Self-Narration Discrimination
Previous social media research has discussed both targeted (ElSherief et al. 2018a) and self-narration (Yang and Counts 2018) aspects of hate speech. Targeted discrimination is defined as someone being discriminatory, as compared to self-narration of discrimination, where someone is sharing their exposure to discrimination: either a direct experience or witnessing someone experience it. In line with existing work which used simple first-person pronoun filtering to identify self-narration of discrimination (racism) in Reddit posts (Yang and Counts 2018), we used a similar first-person pronoun filtering approach. To categorize Tweets as self-narration, we selected Tweets that contain any of the first-person pronouns: I, me, mine, my, we, us, our, ours. As simple first-person pronoun filtering resulted in a high false positive rate (e.g., "I think so-called white trash turns to white supremacy because, a., they're victimized by minority criminals, and b., they're uppity whites"), we additionally required that first-person pronouns strictly outnumber second- and third-person pronouns in a Tweet for it to be categorized as self-narration of discrimination. This condition decreased the false positive rate by 75%, as measured by our team on a randomly chosen sample of 1000 Tweets.
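The pronoun-majority rule can be sketched as follows; note that the text enumerates only the first-person pronoun set, so the second/third-person list below is our assumption:

```python
import re

FIRST_PERSON = {"i", "me", "mine", "my", "we", "us", "our", "ours"}
# The second/third-person list below is an assumption for illustration.
OTHER_PERSON = {"you", "your", "yours", "he", "him", "his", "she", "her",
                "hers", "it", "its", "they", "them", "their", "theirs"}

def is_self_narration(text):
    """A Tweet is self-narration only if it contains a first-person pronoun
    and first-person pronouns strictly outnumber second/third-person ones."""
    words = re.findall(r"[a-z]+", text.lower())
    first = sum(w in FIRST_PERSON for w in words)
    other = sum(w in OTHER_PERSON for w in words)
    return first > 0 and first > other
```

The false-positive example above ("I think so-called white trash ... they're ...") fails the majority condition (one first-person pronoun versus two third-person ones) and is correctly excluded.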
Hate Crimes and Social Media Relationship
To examine the relationship between race, ethnicity or national-origin based hate crimes and discrimination on social media by city, we use a regression modeling approach where the number of race, ethnicity or national-origin based hate crimes across all years, for each city, is the dependent variable, while accounting for other potential covariates. We caution that this approach does not imply that the social media measures directly cause hate crimes in different cities. Assessing this relationship is a first step towards assessing any potential causal pathway, or possible uses of social media discrimination as a complement to hate crime data, for example to study issues at higher spatial resolution than is available through hate crime reports, such as at the neighborhood level. Each model controlled for the census attributes discussed in the Census Data section. All analyses were conducted in R v3.5.2.
Linguistic Analysis
To assess affect and linguistic-related features of discrimination on social media at the city, user and Tweet level, we used EMPATH. EMPATH is a tool that can generate and validate new lexical categories on demand from a small set of seed terms, capturing aspects of affective expression, linguistic style, behavior, and psychological state of individuals from content shared on social media, by deep learning a neural embedding across more than 1.8 billion words. The performance of EMPATH has been found to be similar to LIWC (considered a gold standard for lexical analysis); EMPATH is freely available and provides a broader set of categories to choose from than LIWC (Fast, Chen, and Bernstein 2016). We selected several affect-related features based on prior work in understanding self-narration of discrimination and discussion of racial equity (Yang and Counts 2018; De Choudhury et al. 2016): positive emotion, negative emotion, disappointment, sadness, aggression, violence. Motivated by studies regarding risk factors for racial/ethnic discrimination, we also selected EMPATH features potentially related to socio-economic status (work, money) and culture (night, e.g. nightlife) (Williams, Neighbors, and Jackson 2003).
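At analysis time, EMPATH-style scoring reduces to counting category-term hits per document, optionally normalized by document length. A toy stand-in is below; the mini-lexicons are hypothetical, since EMPATH derives its real categories from a learned neural embedding:

```python
# Hypothetical mini-lexicons standing in for EMPATH's learned categories.
LEXICON = {
    "negative_emotion": {"hate", "angry", "sad", "awful"},
    "aggression": {"attack", "fight", "destroy", "hit"},
}

def category_counts(text, normalize=True):
    """Count lexicon hits per category, optionally normalized by length."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    out = {}
    for category, terms in LEXICON.items():
        hits = sum(w in terms for w in words)
        out[category] = hits / len(words) if normalize and words else hits
    return out
```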
City-level To understand the relationship between these linguistic features and hate crimes in a city, we used a regression modeling approach, where the number of race, ethnicity or national-origin based hate crimes in a city is the dependent variable, and the per-city normalized proportions of each linguistic feature are the independent variables. Negative binomial regression is used to model the count data and account for over-dispersion. We also present iterative model selection results (backwards step-wise model selection by exact Akaike information criterion) to protect against over-optimism and to assess predictors that contribute a significant part of the explained variance.
User-level Here we examine the features used by users who discuss increased amounts of discrimination (> 21 discrimination Tweets total) versus those who only had one discrimination Tweet. Those users with > 21 discrimination Tweets represent those (0.01% of all users with any discrimination Tweets) with the very highest number of discrimination Tweets in our dataset (more description of this range in the Descriptive Analyses of Discrimination Results section).
Tweet-level At the Tweet level, we examined the distribution of all EMPATH linguistic features in discrimination versus non-discrimination Tweets. Using normalized means of the category counts for each group, we compute the correlation across these two types of Tweets and examine the odds of EMPATH feature categories likely to appear in discrimination Tweets.
Discrimination Content by Bots
Increasingly, there has been recognition of bot accounts on Twitter that spread malware and unsolicited content, including public health related and antagonistic content aimed at eroding public consensus (Broniatowski et al. 2018). Given the antagonistic nature of the content examined in this study, we also assessed and analyzed the prevalence of bots and bot-generated Tweets in our data. We used seven available lists of bot accounts (used in (Broniatowski et al. 2018)) to identify bot accounts. These lists (Lee, Eoff, and Caverlee 2011; Cresci et al. 2017; Varol et al. 2017; Frommer 2017; Cresci et al. 2015; Popken 2018), which contain the ids of 8076 bots, generally overlap in time with our study period (except for (Lee, Eoff, and Caverlee 2011), in which the data was collected from December 30, 2009 to August 2, 2010, so very few users from this list are likely relevant to our data). We assessed i) the total proportion of bot accounts in our data, and those responsible for discrimination content, and ii) the spatial distribution of those accounts.
Classification
Training Data Representation We found that the top 20 features that were most predictive of a Tweet being classified as containing discrimination occurred in more than 25% of the discrimination Tweets made in the top 11 overall hate crime cities. We studied the spatial and temporal distribution of these top 20 features to assess the spatial (and temporal) balance of the training data. We first assessed the temporal changes in the use of features and found no significant difference between the % use of any of the top features in each city between all pairs of consecutive years (two-tailed Student's t-test, p >0.05). We then compared the distribution of these top 20 features in the 11 high overall hate crime cities with the distribution in the remaining 89 cities (medium and low hate crime), and found high correlation between the use of features in the two groups, determined using Spearman's rank correlation (ρ=0.87, p <0.05). We also performed the same analysis for the entire keyword list, and found no significant difference between the 2 sets of cities in any consecutive years (p >0.05) and high spatial correlation (ρ=0.84, p <0.05). Therefore, as the overall feature and keyword distributions were similar spatially and over consecutive years, we found it sufficient to use the training data generated from the top 11 hate crime cities to classify Tweets made in the additional 89 cities.
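This spatial-balance check reduces to a rank correlation between the per-feature usage vectors of the two city groups; a sketch with SciPy (the usage values in the example are illustrative, not from the study):

```python
from scipy.stats import spearmanr

def feature_usage_correlation(usage_high, usage_rest):
    """Spearman rank correlation between per-feature usage proportions in
    the high hate-crime cities versus the remaining cities."""
    rho, p_value = spearmanr(usage_high, usage_rest)
    return rho, p_value
```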
Spatial Distribution
Crime distribution Phoenix, AZ had the highest number of race, ethnicity or national-origin based hate crimes over the 6 years (566). Conversely, Castleton, VT and Riverton, WY had 0 race, ethnicity or national-origin based hate crimes, and 26 cities had 9 total or fewer. The biggest differential in rank between race, ethnicity or national-origin based hate crimes and total discrimination Tweets was in Castleton, VT (lowest number of race, ethnicity or national-origin based hate crimes and 17th highest proportion of discrimination Tweets), and Montpelier, ID (19th lowest by number of race, ethnicity or national-origin based hate crimes, and 5th highest proportion of discrimination Tweets).
Social Media Feature Distribution Overall, we found that most of the top features identified were used consistently across cities, but notably, some features were found only in particular cities. For studying the top discrimination feature distribution we used the top 10 features in each of the top 21 cities ranked by race, ethnicity or national-origin based hate crimes (the 21st was San Francisco, which added geographic diversity; San Francisco also ranked highly for hate crimes in general (13th)). Figure 3 illustrates the distribution of the top discrimination features across a) targeted and b) self-narration Tweets. In terms of top features, three were used in all of the top 21 cities ranked by race, ethnicity or national-origin based hate crimes: f*cking n*ggers (* added to censor offensive language), which was the top feature in 10 of the top discrimination hate crime cities; most racist person, which appears in all top 21 cities (and is the top 1 or 2 feature in 9 cities); and white trash, which is the top 1 or 2 feature in 4 cities: Indianapolis, IN, Cincinnati, OH, Las Vegas, NV and San Diego, CA. Three features were found only in single cities: f*cking wiggers (Seattle, WA), insane redskin trash (Washington, DC), and n*gger is like (Seattle, WA).
The consistency of most of the top features, but the appearance of some features only in specific places, indicates there is some geographic variation that may not have been discovered had we not ensured the training data was well distributed spatially. The most common feature was used in targeted discrimination, and 15 of the 23 top features were used in targeted discrimination Tweets and the rest in self-narration discrimination Tweets, except white trash, which we found is used in both contexts (Figure 3).
In 36 of the 100 cities, the proportion of discrimination that is targeted was higher than that of self-narration of discrimination experiences. The top and bottom ranked cities for targeted to self-narration discrimination ratio were
Descriptive Analyses of Discrimination
The overall number of users by city who discuss any discrimination on Twitter has a long tail distribution, with a
Hate Crimes and Social Media Relationship
Upon visual inspection, we noticed that a few cities had a very high number of hate crimes (in general, and specifically race, ethnicity or national-origin based hate crimes): Phoenix, AZ, Boston, MA, Columbus, OH, Los Angeles, CA, New York, NY, Seattle, WA and Kansas City, MO. These cities are known for various reasons to have high numbers of hate crimes. Phoenix has a bias crimes unit, which most major cities do not have, and Phoenix police say they aim to be as thorough as possible when it comes to hate crimes, in contrast with other cities which take a more relaxed approach to personal attacks (Crenshaw 2018). Seattle also records precinct-level hate crime data (unlike other cities except NYC). New York City and Los Angeles are the largest metropolitan areas in the United States, and Boston (and the state of Massachusetts) has more agencies contributing information to the FBI than most other states (Jarmanning 2018). The higher number of reported hate crime incidents in Columbus, OH has been recognized and attributed to the state's exclusion of heightened punishments for crimes such as assault or murder, and its lack of protections for sexual orientation, gender identity, age, disability, or military status (Kocut 2018). Finally, Kansas City, MO is also known to be one of the top crime (in general) cities in the country (Alcock 2018). We thus performed the regression analysis both with and without these outlier cities.
Regression results show that the ratio of discrimination Tweets that are targeted to those that are self-narrations has a positive relationship (β > 0) with the number of race, ethnicity or national-origin based hate crimes in a city, and this variable is significant when controlling for all of the demographic and other city attributes (Tables 1-4). The stepwise model shows that percentage black and percentage foreign born also contribute a significant portion of the explained variance. When outliers are removed, the main difference in model results is that the stepwise model shows the total percentage of discrimination Tweets that are self-narration, percentage black, and population density of cities to also contribute to explained variance along with the targeted to self-narration ratio (all with positive coefficients).
Linguistic Results
City-level Results Of the EMPATH features selected based on their potential relation to our outcome, both positive and negative emotion in discrimination Tweets were significant predictors of hate crimes. Surprisingly, positive emotion had a significant positive relationship, while negative emotion had a negative relationship, with the number of race, ethnicity or national-origin based hate crimes. Disappointment, money, and night all had a positive, significant relationship with the number of hate crimes. Work-related features had a significant negative relationship.
User-level Results
In assessing the linguistic characteristics of users who discuss relatively high amounts of discrimination (>21 discrimination Tweets), we found that the correlation of the resulting EMPATH categories in their discrimination Tweets was very high both with the discrimination Tweets of users who made only 1 discrimination Tweet, and with those who made between 1 and 21 (ρ = 0.99, p <0.05, for both correlations). This indicates that linguistic characteristics are consistent between those who post a lot of discrimination and those who post a little. The Pearson correlation between the linguistic characteristics common to non-discrimination and discrimination Tweets was fairly high, but not as high (ρ = 0.80, p <0.05).
Tweet-level Results
We further examined the specific EMPATH characteristics that did not fall in line with the correlation, via the normalized mean of each feature count. Table 5 shows the top 10 features by the ratio of their normalized mean in discrimination Tweets to the normalized mean for the same feature in non-discrimination Tweets. In general, negative features that all have some intuitive relation with discrimination are most common. The average normalized mean ratio is 3.6; these features are thus at least twice as likely to appear in discrimination versus non-discrimination text (minimum is 7.2 times in the table).
Table 6: Empath regression results, predicting race, ethnicity or national-origin discrimination-motivated hate crimes.
Discrimination Content by Bots
We found a minimum of 0% (18 cities) and a maximum of 31.6% (Washington, D.C.) of discrimination-posting users classified as bots based on the lists of known bots, with a mean of 7.8% (sd: 6%), fairly consistently distributed spatially, with only 11 cities above 15%. Some example Tweets by identified bots are: "Giants playing terribly against a terrible franchise. Enjoy gloating skins fans, you're still white trash. #Giants #redskins" (New York, NY), and "@Usernameredacted it a proven fact that #blackpeople are the most racist people out there" (San Francisco, CA). Overall, there were many themes in the bot posts that were classified as indicating discrimination. We decided to keep Tweets from the bots in our analysis, as these Tweets would have been visible to followers as they were posted, though we note that they should be further investigated or flagged in any further analyses of the causal reasons for, or implications of, discrimination on social media.
Discussion
Summary of Contributions to Social Media Research In this work, we study the characteristics of race, ethnicity and national-origin based discrimination on social media spatially, as well as hate crimes motivated by these biases across the United States. In creating the spatially diverse training data set of social media discrimination, we found that most of the features predictive of discrimination were common across cities, but there are examples of less common features that appear only in select cities. As well, we showed that there is a larger distribution of features in discrimination Tweets that are targeted compared to those that are self-narration of discrimination in the cities with the most race, ethnicity and national-origin based hate crimes. In terms of the relationship between social media discrimination and race, ethnicity and national-origin based hate crimes, the proportion of social media discrimination that is targeted was significantly related to the number of hate crimes. When not considering specific cities with outlier numbers of crimes, the proportion of social media discrimination that is self-narration was also significant. Linguistically, we identified features more common in discrimination Tweets versus non-discrimination Tweets, and also showed that positive and negative emotion, as well as disappointment, money, night and work, were significantly related to race, ethnicity or national-origin based hate crimes by city. The surprising significance of positive emotion may be related to high levels of emotion in general in discrimination, or to positivity in response to self-narration of discrimination experiences (Tynes et al. 2012). The ubiquity of emotion in discrimination is also supported by the increased frequency of the EMPATH feature sadness in discrimination Tweets (Table 5).
Finally, we also showed that there was race-based discrimination from existing, recognized lists of Twitter bots, linked to most of the 100 cities considered, and most predominantly in Washington, D.C.
Implications of Analysis
We stress that while our work makes no direct causal claims between discussion on social media and crimes, findings from this study are pertinent for better discrimination surveillance and mitigation efforts. In particular, as this work shows that social media may significantly explain some of the variation in hate crimes, discrimination on social media should be studied further to understand, contrast and assess the possible synergies of online and physical-world discrimination. The linguistic analysis highlights the opportunities of social media to dissect and understand more about different types of discrimination in the country. These opportunities are discussed further in the Future Work section below. As there has been some concern regarding the criteria and consistency with which hate crimes are reported in different cities, social media also provides a different measure of systemic discrimination by which to augment our understanding of this phenomenon; for example, discrimination on social media encompasses that which does not necessarily rise to the level of a crime, or for which there are no laws mandating reporting in a particular region (e.g. sexual-orientation based discrimination) or location (e.g. sub-city level regions), but such day-to-day negative insults can be internalized, still impact communities, and should be ascertained (Williams, Neighbors, and Jackson 2003).
Future Work Beyond this spatial analysis, and given newly released statistics from the FBI that show a 17% increase in hate crimes nationwide in 2017 (Farivar 2018), a temporal analysis of both discrimination on social media and hate crimes could be of relevance. It should be noted that changes in Twitter's (or any company's) policies around hate speech should be carefully considered when attempting to unpack temporal changes or causal mechanisms. In generating the spatially balanced training data, we did find a sudden drop in the number of Tweets containing discrimination keywords across all cities in 2016 as compared to previous years. This drop was likely caused by Twitter's strategy for decreasing hate speech (e.g. using e-mail and phone verification) announced in December 2015 (Cristina 2015). This change, coupled with the aggregation of hate crimes motivated by biases in different ways across the years, would have to be accounted for in any temporal analysis or assessment of discrimination based on more specific biases. Though the drop from Twitter's actions was spatially consistent across all Tweets, and therefore not a concern in the context of our spatial analysis, it is the type of mitigation that this work can potentially inform (via the types of features or the need for spatially-different linguistic features and differences to be considered). Further, our finding that the proportion of race, ethnicity or national-origin discrimination online that is targeted is a significant predictor in the regression model indicates a focus on this measure. Further analyses should also consider unpacking differences in the relative prevalence of hate crimes and social media discrimination in different cities, discrimination against different groups based on the language/terms used, and the incorporation of non-English languages and communication through emojis (Barbieri et al. 2016).
Musical Mix Clarity Prediction using Decomposition and Perceptual Masking Thresholds
Objective measurement of perceptually motivated music attributes has application in both target driven mixing and mastering methodologies and music information retrieval. This work proposes a perceptual model of mix clarity which decomposes a mixed input signal into transient, steady-state, and residual components. Masking thresholds are calculated for each component and their relative relationship is used to determine an overall masking score as the model's output. Three variants of the model were tested against subjective mix clarity scores gathered from a controlled listening test. The best performing variant achieved a Spearman's rank correlation of rho = 0.8382 (p<0.01). Furthermore, the model output was analysed using an independent dataset generated by progressively applying degradation effects to the test stimuli. Analysis of the model suggested a close relationship between the proposed model and the subjective mix clarity scores particularly when masking was measured using linearly spaced analysis bands. Moreover, the presence of noise-like residual signals was shown to have a negative effect on the perceived mix clarity.
Introduction
Terms such as 'clarity', 'punch', 'warmth' and 'brightness' are semantics often used to describe perceptual features found in musical mixes. These features are often subconsciously combined by a listener when assessing the overall quality of a musical mix. Whilst some of the perceptual features represented by the semantics outlined have an objective counterpart [1][2][3][4][5], clarity does not.
Clarity in the context of this work is related to Pedersen and Zacharov's sound wheel term 'clean' [6], which is defined as: "It is easy to listen into the music, which is clear and distinct. Instruments and vocals are reproduced accurately and distinctly. The opposite of clean: dull, muddy." Other similar definitions can be found in the literature, for example [7][8][9][10][11]. Although none of these definitions are uniform in wording, they share a focus on the separability of the component parts of the mix, such that each part is distinctly audible. Potential links between the perception of single instrument clarity and brightness, measured using centroid and harmonic centroid, have been suggested [7,12]. This work evaluates a model-based approach to objective mix clarity prediction. Perceptually motivated metrics have their use in metering and control applications, as well as automatic mixing [10,[13][14][15]. Additionally, understanding the underlying related signal characteristics of these perceptual features facilitates the proposal of formalised definitions.
arXiv.org Preprint Parker and Fenton
Considering the acoustic characteristics of a space, it is possible to objectively determine the clarity and intelligibility that can be achieved. Measures such as C50 (the ratio between early and late arriving reflections) and early decay time (EDT) can be combined for this purpose. The direct to reverberant ratio (D/R) has also been linked to acoustic clarity [16]. Furthermore, in a proposal for objective measurement of loudspeaker quality [17], 'clearness' is considered to be a perceptual dimension. This is calculated based on the perceived degradation imparted on a signal by the loudspeaker under test, in reference to an 'ideal' loudspeaker signal. This approach is similar to established standards for objectively measuring music and speech quality, PESQ [18] and PEAQ [19].
These compare encoded signals with their unencoded counterparts to estimate a perceived error signal caused by encoding, which can then be measured using a number of features to determine how disturbing it is. What is common between these measures is that they all compare a signal with an effected version of itself, for example altered by a room or a loudspeaker's response. In the proposed model, there is no processed version of the signal under test to compare against. Instead, the signal is decomposed, and a comparison is made between its desirable and less-desirable attributes.
In previous research [20,21], it has been suggested that masking between signals constituting a multitrack mix could be related to the perception of clarity of the overall mix. Auditory masking is a phenomenon of hearing in which energy present at the ear is not perceived, due to stronger neighbouring energy in frequency or time, known as frequency masking and temporal masking respectively [22]. The model proposed for mix clarity prediction is based on an analysis of the masking relationship between transient, steady-state, and residual components of the signal, utilising the MPEG Psychoacoustic Model II [23,24]. The model's performance is assessed using a correlation test against subjective mix clarity scores elicited from a controlled listening test. In addition, further analysis of the model response to an independent dataset consisting of musical excerpts with varying degrees of controlled degradation is undertaken.
Transient, Steady-State and Residual Masking Model
The proposed model has some similarities to a cross-adaptive approach employed to minimise masking between composite signals of a multitrack mix in automatic mixing systems [10,14]. The automatic mixing system employed in [10] was shown to increase the perceived clarity of the mixes where the amount of masking had been reduced. To achieve this, a cross-adaptive masking metric was used to calculate the level by which a given signal in the multitrack was masked by the sum of all other signals of the multitrack mix. The signals contributing to the given multitrack mix are then processed in such a way to minimise the amount they are masked, thus lowering the amount of masking occurring in the multitrack mix overall. A similar approach was also proposed as a Hierarchical Perceptual Mixing (HPM) system [25], which first determines the most important signals present in the mix as a function of time based on a user parameter, then calculates the perceptual masking threshold of these dominant signals and removes the masked energy of the non-dominant signals present in the mix. It is suggested this approach may improve the clarity of the resulting mix. However, this was not related to any subjective testing.
The MPEG Psychoacoustic Model II's output, employed in the cross-adaptive automatic mixing system [10], is a Signal-to-Mask Ratio (SMR). It indicates the ratio of the energy of the input signal to a masking threshold, which is calculated as a function of frequency and time [23,24]. The masking threshold is calculated by grouping spectral lines into threshold calculation partitions, which represent approximately a third of a critical bandwidth. Energy in these partitions is spread and then weighted based on a tonality measure. The tonality measure is a sliding scale indicating how noise-like or tone-like the input is. More noise-like partitions result in an increased masking threshold, as they are more effective maskers. This weighted masking threshold is then compared with the threshold in quiet and the larger of the two is taken as the final masking threshold in each partition. By calculating the masking threshold of an external signal, the SMR reflects the level at which the external signal masks the input signal.
In typical MIR applications this multitrack based approach cannot be used directly as it assumes access to all the signals present which make up the multitrack mix. Additionally, it is unable to calculate masking occurring contained within a single signal, for example if multiple instruments were captured by a single microphone. Whilst there is an emerging field of source separation methods utilising deep learning techniques to 'unmix' mixed signals, such as Spleeter [26]; these systems are unsuited in this case as they are only able to classify a limited number of instruments, resulting in the cross-masking being poorly represented in cases where the given mix is split into only 1 or 2 instrument categories.
Model Details
The proposed model assumes no knowledge of the individual signals making up the multitrack mix. It utilises a novel approach which calculates the level of perceived masking between the separated transient, steady-state, and residual components of the multitrack mix.
When considering a spectrogram representation these components are characterised as follows:
• Transient components: Spectrally broadband and temporally transient bursts of energy which form strong vertical beams on the magnitude spectrogram [27], with unpredictable changes in phase [28].
• Steady-state components: Slowly evolving harmonic partials with predictably evolving phase [28], forming strong horizontal beams on the magnitude spectrogram [27].
• Residual components: Cannot be classified as transient or steady-state. These are noise-like components of the signal, described as the 'texture' of the sound [29], which are generally broadband and stochastic in terms of magnitude and phase.
The TSR model gives a more generalised description of signal components which does not require specific instrument classification. Following the suggested negative impact of noise-like signals on mix clarity [20,21], greater masking of the residual (R) component would indicate a less audible residual component and therefore a potentially higher perceived mix clarity. Additionally, and perhaps more importantly, less masking of the transient and steady-state (TSS) components would indicate greater audibility of the percussive (rhythmic) and harmonic (pitched) parts of the signal. Transient onsets have also been linked to instrument identification [30], suggesting potentially greater mix clarity perception where TSS components are less masked. Considering the HPM system [25], masked residual energy bears some resemblance to the idea of masked non-dominant signals, whose presence may be unnecessary or even detrimental to the perceived clarity of the mixed signal. Moreover, the importance of TSS and R components can be thought of as a simple hierarchy, where the presence of TSS components takes priority over R components. A block diagram of the proposed clarity model is given in Figure 6. The TSR separation system is a median filter based approach [29], chosen for its prior good performance in the perceptual measure of punch [1,31] and its relative simplicity. In this case, a single layer of filtering is applied to a short time Fourier transform (STFT) spectrogram with a 2048 sample window and 50% overlap, where the separation parameters were a percussive threshold of 1.75, a harmonic threshold of 1, and a median filter order of 17. These parameter values were chosen based on previous works, namely the perceptual punch meter [35]. The separated transient and steady-state components can simply be combined by addition to construct the TSS signal without the residual component.
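A self-contained sketch of the median-filter split on a magnitude spectrogram is given below, using SciPy rather than the authors' implementation; the hard-mask assignment is an assumption, simplified from the masking variants described in [29]:

```python
import numpy as np
from scipy.signal import medfilt

def tsr_separate(S, order=17, harmonic_thresh=1.0, percussive_thresh=1.75):
    """Split a magnitude spectrogram S (freq x time) into steady-state,
    transient, and residual parts via median filtering [29]. Uses hard
    masks with the paper's thresholds; the exact rule is an assumption."""
    H = medfilt(S, kernel_size=(1, order))  # smooth along time: steady-state
    P = medfilt(S, kernel_size=(order, 1))  # smooth along freq: transient
    steady_mask = H >= harmonic_thresh * P
    transient_mask = P >= percussive_thresh * H
    residual_mask = ~(steady_mask | transient_mask)
    return S * steady_mask, S * transient_mask, S * residual_mask
```

A sustained tone (a horizontal ridge in S) survives the time-direction median filter but not the frequency-direction one, so it is assigned to the steady-state output; a broadband click behaves the opposite way; energy failing both threshold tests falls into the residual.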
A signal-to-mask ratio (SMR) is calculated by the MPEG Psychoacoustic Model II, measuring the level of masking present as a function of frequency and time [24,32]. This is the ratio of the energy in each scale-factor band (E) to a calculated masking threshold (MT). Thus, the SMR is defined as:

SMR = 10 log10(E / MT)

In the case of the cross-adaptive masking metric [10], the energy and threshold calculation for the scale-factor bands is kept the same, though they define a masker-to-signal ratio (MSR) given in decibels as:

MSR = 10 log10(MT′ / E)

where MT′ is the threshold calculated for the sum of accompanying signals. They assume masking occurs in any band where MT′ > E, and scale the outcome by a predefined Tmax value of 20 dB, giving the final masking metric as:

M = min(MSR, Tmax) / Tmax

The model proposed in this paper incorporates the MPEG Psychoacoustic Model II to calculate the energy and masking thresholds of the TSS and R components of the separated input signal. Masking metrics are then calculated in parallel for these signals, based on the cross-adaptive metric [10], indicating the amount the TSS and R components are masked. A statistical representation of each metric is taken to represent an overall score for the given analysis window (in this case 10 seconds, as this was the length of the stimuli included in the listening test). These are: the 5th percentile of MTSS, giving the points where the TSS component is minimally masked, and the 95th percentile of MR, giving the points where the R component is maximally masked. The overall masking metric is defined as the ratio between these statistical representations of masking, expressed in decibels:

Moverall = 10 log10(P5(MTSS) / P95(MR))

As such, where MR is low and MTSS is high a large overall score is given, and in the opposing case a low overall score is given; thus the score is negatively correlated with mix clarity perception.
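The per-band metric and the overall score can be sketched as below. The clipping of MSR to [0, Tmax] and the percentile statistics follow the description in the text; the function names and the exact handling of numerical edge cases are illustrative assumptions.

```python
import numpy as np

TMAX_DB = 20.0  # predefined scaling constant from the cross-adaptive metric

def band_masking(e, mt_prime):
    """Per-band metric: MSR = 10*log10(MT'/E), clipped to [0, Tmax] and
    normalised by Tmax so each band scores in [0, 1]."""
    msr = 10 * np.log10(np.maximum(mt_prime, 1e-12) / np.maximum(e, 1e-12))
    return np.clip(msr, 0.0, TMAX_DB) / TMAX_DB

def overall_masking(m_tss, m_r, eps=1e-12):
    """Overall score: ratio (in dB) of the 5th percentile of TSS masking
    to the 95th percentile of residual masking across analysis frames."""
    return 10 * np.log10((np.percentile(m_tss, 5) + eps) /
                         (np.percentile(m_r, 95) + eps))
```

A heavily masked TSS component with a barely masked residual yields a large positive score (low predicted clarity); the opposite case yields a negative score.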
The layer II (L2PM) and layer III (L3PM) implementations of the MPEG Psychoacoustic Model II differ in the calculation of the masking threshold and SMR, though both are based on the same principles [23]. Both implementations, along with a modified L3PM variant, were employed in the model; each was tested and is examined in this work.
It is worth noting that, while ideally the separation system incorporated in the model would leave no residual energy in the TSS component and vice versa, this is not the case in practice. With a purely white noise input, the energy output to the TSS and R components is approximately even, whereas ideal separation would place all of the signal energy in the R component. However, this is not an issue in the present application, as the white noise example simply represents the upper bound of overall masking scores; more structured signals are better separated, and this results in lower overall masking scores.
Test Stimuli
The model outputs were evaluated against subjective scores collected in a controlled listening test. The stimuli used in testing were widely stylistically varied in order to determine features of clarity that applied regardless of instrumentation or arrangement, such that any resulting metrics may be generally applicable across a wide range of music. In addition, an independent dataset was synthesised to investigate the effect of specific forms of signal degradation on the model's output.
arXiv.org Preprint Parker and Fenton
A controlled listening test was conducted to elicit perceived mix clarity scores across a selection of stylistically different musical stimuli as presented in previous work [20]. 18 listeners took part in the listening test, of which 11 were undergraduate students enrolled on a music technology course, 4 were postgraduate researchers, and 3 were Doctors in the field of Psychoacoustics.
The test stimuli were taken from the Free Music Archive (FMA) 'FMA small' dataset [33], which consists of 8000 30-second long musical excerpts from 8 different parent genres. The dataset was processed, and a random selection method was used to select sixteen 10-second long, 44.1 kHz, 16-bit wave file stimuli, which were monophonic and loudness normalised to -23 LU [5]. This process allowed the selection of 16 stimuli that were widely stylistically varied without the need for any personal selection. Details of the selected stimuli are given in Table 1.
Procedure
Listeners were asked to rate the stimuli for perceived mix clarity. This was a standalone and absolute judgement of mix clarity without reference or comparison to any other piece of music, similar to how mix clarity would be judged were the listener to come across a piece of music on the radio or a music playlist. The custom test interface used was designed in MAX [34] by modifying an existing HULTIGEN [35] interface. Each stimulus was presented individually along with a slider on which the listener was asked to rate the mix clarity they perceived between 0 and 100 in steps of 1. A score of 0 indicated the stimulus was perceived to be unclear (no clarity), and a score of 100 indicated the stimulus was perceived to be clear (highest clarity). Labels were given according to a standard 5-point category rating scale [36]. 6 repeats were performed where the order of the stimuli was randomised each time.
Results
The listeners' results were screened based on the consistency of their repeat scores, as a lack of consistency between repeats showed the listener may not have had a consistent perception of mix clarity or may not have understood the task at hand. Repeat consistency was measured using the intraclass correlation coefficient (ICC) between a given listener's repeats. Estimated ICCs and their 95% confidence intervals were calculated using SPSS [37]. The ICC was a mean-rating, absolute agreement, 2-way mixed effects model, as recommended in [38]. Ratings from listeners whose repeat scores had an ICC below 0.75 were excluded from the final analysis, as it is suggested that ratings with an ICC of 0.75 or greater have 'good' reliability [38]. In this case, the lower bound of each ICC estimate's 95% confidence interval was taken as the value which needed to exceed the 0.75 threshold, as this ensured with 95% confidence that the listener had 'good' reliability. After post-screening, only ratings from the 15 most consistent listeners remained.
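The screening statistic can be sketched as a direct implementation of ICC(A,k) (two-way model, absolute agreement, average of k ratings, per the McGraw and Wong conventions referenced in [38]). The paper used SPSS, which additionally provides the 95% confidence interval used for thresholding; this sketch covers only the point estimate.

```python
import numpy as np

def icc_a_k(y):
    """ICC(A,k) for an (n_targets, k_raters) matrix of scores:
    two-way model, absolute agreement, average measures."""
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)          # per-stimulus means
    col_means = y.mean(axis=0)          # per-repeat means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between targets
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between raters
    sst = np.sum((y - grand) ** 2)
    sse = sst - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)
```

Identical repeats over stimuli with non-zero spread give an ICC of 1; adding rating noise pulls the estimate below 1, and listeners falling below 0.75 would be screened out.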
The mean of each listener's repeat ratings was taken as that listener's subjective clarity score for a given stimulus. The median of all listeners' mean ratings for each stimulus was then taken as its median clarity score (MCS). The median clarity scores, along with their approximated 95% confidence intervals [39], are shown in Figure 1.
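The aggregation of ratings into an MCS can be sketched as below. The confidence-interval formula assumes [39] refers to the common notched-boxplot approximation (median ± 1.58·IQR/√n); that choice, and the function name, are assumptions on our part.

```python
import numpy as np

def median_clarity_score(ratings):
    """ratings: (n_listeners, n_repeats) array for one stimulus.
    Returns the MCS and an approximate 95% CI on the median."""
    listener_means = ratings.mean(axis=1)        # mean over repeats
    mcs = np.median(listener_means)
    n = len(listener_means)
    q1, q3 = np.percentile(listener_means, [25, 75])
    half = 1.58 * (q3 - q1) / np.sqrt(n)         # notched-boxplot heuristic
    return mcs, (mcs - half, mcs + half)
```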
The assumption of homogeneity of variances for ANOVA was not met, indicated by both Levene's and Bartlett's tests showing p < 0.05 for all stimuli. As such, multiple pairwise Wilcoxon tests were performed to identify significantly different MCSs, with Holm correction applied to account for multiple comparisons. The results affirm what is indicated by the confidence intervals shown in Figure 1: stimuli whose confidence intervals do not overlap are also shown to be significantly different (p < 0.05) in the pairwise Wilcoxon analysis.
Independent Dataset
An independent dataset was synthesised to investigate the effect of specific signal degradation and how it would impact the predicted clarity score. For this, the stimuli used in the subjective listening test (see Section 3.1) were processed in 3 different ways, each with 7 levels of severity, creating 3 test datasets. The processing was intended to gradually decrease the level of perceptual clarity at each level of severity by gradually introducing masking from noise-like signals common in music production. By forming the dataset from the stimuli which had been rated subjectively by listeners, the effect of the degradation on the proposed model could be evaluated relative to the MCSs the stimuli received.
The 3 sets included:
• Set 1 - Addition of broadband pink noise in 6 dB steps (-36 dBFS, -30 dBFS, -24 dBFS, -18 dBFS, -12 dBFS, -6 dBFS, 0 dBFS). This degradation simulates a gradually rising noise floor or an extremely 'busy' mix where many conflicting signals become noise-like.
• Set 2 - Addition of reverberation in 6 dB steps of wet mix level (-36 dBFS, -30 dBFS, -24 dBFS, -18 dBFS, -12 dBFS, -6 dBFS, 0 dBFS). In this case, the reverberation was meant to simulate a large hall, and thus had a diffuse tail with a decay time of 3 seconds, more pronounced for low-frequency energy than for high-frequency energy. Artificial reverberation is commonly added in music production to enhance the sense of space and listener envelopment. However, it also disperses diffuse energy over time, adding to the signal's noise floor, and can smear transient energy over time, making transients less clearly defined.
• Set 3 - Clipping applied at a ceiling calculated as a percentile range of amplitude values about the fiftieth percentile (90%, 80%, 70%, 60%, 50%, 40%, 30%), such that the clipping applied is uniform regardless of signal amplitude. Clipping is often used as a creative effect, though it also occurs in signals recorded without an appropriate level of headroom. It can reduce dynamic range, decreasing the energy difference between the signal peaks and the noise floor. In addition, clipping can produce additional harmonic content, making the signal more spectrally dense and thus increasing the potential for masking to occur. However, increasing the spectral density and brightness of transient and steady-state components may be beneficial to the perception of clarity in some cases, where the masking potential of the transient and steady-state components is increased more than that of the residual component.
Each of the test signals generated were loudness normalised to -23LU after the signal degradation had been applied.
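The noise and clipping degradations above can be sketched as follows. The pink-noise synthesis (1/√f spectral shaping of white noise) and the normalisation details are plausible assumptions rather than the authors' exact processing chain, and the final loudness-normalisation step is omitted here.

```python
import numpy as np

def add_pink_noise(x, level_dbfs, rng=None):
    """Mix pink noise, peak-normalised then scaled to level_dbfs, into x."""
    rng = np.random.default_rng(0) if rng is None else rng
    spec = np.fft.rfft(rng.standard_normal(len(x)))
    f = np.fft.rfftfreq(len(x))
    f[0] = f[1]                              # avoid divide-by-zero at DC
    pink = np.fft.irfft(spec / np.sqrt(f), n=len(x))
    pink /= np.max(np.abs(pink))             # peak-normalise to full scale
    return x + 10 ** (level_dbfs / 20) * pink

def percentile_clip(x, pct_range):
    """Clip to the amplitude range spanning pct_range percent of sample
    values about the fiftieth percentile, e.g. pct_range=90 keeps the
    5th-95th percentile range."""
    lo, hi = np.percentile(x, [50 - pct_range / 2, 50 + pct_range / 2])
    return np.clip(x, lo, hi)
```

Because the clipping ceiling is derived from the signal's own amplitude distribution, the degradation is uniform regardless of the absolute signal level, as described for Set 3.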
Results
To evaluate the model's performance, the outputs of the clarity model variants were correlated against the MCSs obtained from the controlled listening test detailed in Section 3.3. Both Pearson's correlation coefficient r and Spearman's rank correlation coefficient rho were calculated, as a lack of a bivariate normal distribution or linear relationship between the two variables can cause Pearson's coefficient to provide an inaccurate measure of association. Pearson's coefficient is still provided, as a non-linear relationship is suggested in cases where rho is greater than r, and a linear relationship where r is greater than rho. The Spearman's rank coefficient is non-parametric, and therefore more robust, so it is the focus of the following analysis. Both correlation coefficients for each model variation are given in Table 2 for the sake of comparison.
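This evaluation step can be sketched with SciPy as a generic helper; this is not the authors' code, and the dictionary layout is illustrative.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate_model(masking_scores, mcs):
    """Correlate model masking scores against median clarity scores.
    Returns Pearson r and Spearman rho with their p-values."""
    r, p_r = pearsonr(masking_scores, mcs)
    rho, p_rho = spearmanr(masking_scores, mcs)
    return {"r": r, "p_r": p_r, "rho": rho, "p_rho": p_rho}
```

Since the masking score is expected to be negatively correlated with perceived clarity, a well-performing variant yields strongly negative r and rho values.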
When correlating the L2PM clarity model masking scores against the MCSs, a strong negative Spearman rank correlation of rho = -0.8382, p < 0.01 was achieved. This correlation is shown in Figure 2. This was the highest Spearman correlation achieved by any implementation of the system, with most deviations from the line of best fit still within the 95% confidence bounds of the MCSs. The least well ranked stimulus, '126410', was also poorly predicted by a different mix clarity model suggested in previous work [20]. This implementation also achieved r = -0.7884, p < 0.01, which suggests a strong and slightly non-linear parametric relationship between the model output and the subjective scores.
The model was also implemented using the L3PM, forming the L3PM clarity model. This achieved a significant but weaker correlation of rho = -0.6882, p < 0.01, shown in Figure 3. Again, a lesser Pearson correlation (r = -0.6882, p < 0.01) was seen, indicating a somewhat non-linear relationship. This correlation shows some heteroscedasticity, where stimuli given higher masking scores were predicted less accurately than those given low masking scores, though many of the 95% confidence intervals cross the line of best fit. The weaker correlation of this model compared to the L2PM clarity model was somewhat unexpected, as the L3PM is more efficient in coding applications, and its SMR values are calculated in scale-factor bands which approximate critical bands. These are more reflective of perception than the L2PM scale-factor bands, which are linearly spaced. The better performance of the L2PM clarity model suggests a greater importance of masking occurring between higher frequency energy, due to the model calculating the overall masking of a given frame in each component as the sum of masking within the scale-factor bands (see Section 2, Equation 5). Other work has suggested an importance of high frequency energy in clarity perception for some single instrument sounds [7,12]. The L3PM also employs window switching, which the L2PM implementation does not [24], whereby shorter analysis windows and fewer scale-factor bands are used in determining the energy and masking thresholds of highly transient frames. For the stimuli used in the present testing, the perceptual entropy threshold was not crossed by any frames of any of the stimuli; therefore, short analysis frames were not used in any of the masking calculations and thus could not be responsible for the difference in performance between the L2PM and L3PM clarity model variants.
To confirm that the difference in scale-factor bands was largely responsible for the weaker correlation of the L3PM clarity model, a modified version of the L3PM was devised. This calculated the energy and masking thresholds as normal [23,24]; however, rather than calculating SMR directly in scale-factor bands which approximate critical bandwidth, the masking thresholds were spread back across the FFT bins and the SMR was calculated for the linearly distributed bands specified in the L2PM [23]. This saw an increase in performance to a level similar to, but lesser than, that of the L2PM clarity model, achieving rho = -0.8088, p < 0.01, shown in Figure 4.
The modified L3PM clarity model improves the position of the L3PM clarity model's most severe outlier, '94414', though a number of outliers remain. The remaining differences between the L2PM and modified L3PM clarity models were due to differences in how the models calculate the masking threshold. Whilst this implementation of the clarity model had a slightly weaker Spearman and Pearson correlation (r = -0.7868, p < 0.01) than the L2PM clarity model, the Spearman and Pearson correlation coefficient values are similar, suggesting a more linear relationship between the modified L3PM clarity model and the subjective scores.
Further to testing the clarity model variations' correlations to the subjective data, they were also evaluated using the independent dataset (see Section 3.4). Figure 10 shows the scores calculated for audio examples at the various levels of degradation for each of the 3 degradation methods. Box plots show the median and interquartile range as well as outliers at each degradation level; the scores are also tied with lines to show the progressive change in model output as increasing levels of degradation were applied. The addition of pink noise sees the addition of broadband residual energy, resulting in the masking of transient and steady-state components of the stimuli. As such, the masking scores progressively rise as more noise is added, until they reach a ceiling level where the TSS component is maximally masked. This ceiling causes a convergence of the clarity scores for the tested stimuli, with the tracks scoring a lower MCS in the subjective test tending to see less change when degraded by pink noise. This suggests how noise-like the stimuli are may have had an influence on the MCSs they received from listeners.
The addition of reverberation had a similar though less extreme effect than adding pink noise. Unlike pink noise, the reverberated signal is still derived from, and therefore correlated to, the original signal. Viewed on a spectrogram, reverberation is somewhat akin to blurring an image, where the energy is smeared over time and, to a lesser extent, frequency. This smeared energy is largely characterised by the separation system as residual energy, increasing the level to which the TSS component is masked by the R component and vice versa. The addition of reverberation tended to increase the masking score of the stimuli; however, as with the addition of pink noise, a greater effect was seen for stimuli which received a higher MCS. Moreover, masking scores for stimuli which did not contain strong transient onsets from things such as drum hits, such as '137167', '15541', and '94414', also showed less difference than those which did, suggesting the masking effect of reverberation is more severe for transient sounds. This is in line with the greater effect reverberation has on the temporal axis of the spectrogram.

Clipping appears to cause two different responses in the model output, whereby some stimuli's masking scores increase with degradation level and others decrease. The effect of this is almost symmetrical, keeping the median masking score relatively constant throughout the degradation levels whilst the interquartile range increases. Clipping had the effect of reducing MR (see Equation 5), as the dynamic range was decreased and the difference between the R and TSS components was reduced. However, in cases where there was very little residual energy, MTSS (see Equation 5) was also reduced. Clipping can increase the harmonic density of transients, which emphasises their spectrally broadband characteristic and thus increases the level of transient energy separated by the separation system.
In cases where the present signal is largely steady-state energy, such as a sustained Rhodes piano chord, clipping can emphasise the temporally constant and more slowly evolving nature of steady-state energy and increase the level of steady-state energy separated by the separation system. Moreover, additional harmonics generated from clipping such a signal are spectrally narrowband and temporally constant, and as such are considered steady-state by the separation system, further contributing to the level of steady-state energy extracted. Therefore, stimuli that were more noise-like at the reference level, receiving high masking scores, received even higher masking scores as more degradation was applied. Conversely, stimuli that had a low level of residual energy at the reference level received even lower masking scores at higher degradation levels, as the reduction in R-component masking was smaller than the reduction in TSS-component masking. Whilst the effects of clipping may potentially increase the perception of clarity, given its more complex effect compared to the addition of pink noise or reverberation, the clipping applied at the highest levels of degradation is extreme. The resulting signals are very noise-like, and as such were expected to receive high masking scores; they did not in the case of stimuli that scored middle or low masking scores at the reference level.
The L3PM clarity model responded similarly to the L2PM clarity model for all degradation types. Figure 6 indicates that there was an increase of median masking score and convergence with increasing levels of degradation in the case of added pink noise and reverberation, and somewhat diverging masking scores with a relatively constant median in the case of increasing levels of clipping. However, this response was muted in comparison to that seen from the L2PM clarity model. Application of the signal degradation had a greater effect on the separated higher-frequency energy than on the separated low-frequency energy; this was a result of the STFT-based separation system employed, whose frequency resolution increases with bin index. When measured using linearly spaced scale-factor bands, the high-frequency bins represent a larger proportion of the scale-factor bands than when measured with logarithmically spaced scale-factor bands. Thus, less change in masking score is seen when degradation is applied in the case of the L3PM clarity model's logarithmically spaced bands, as fewer of these scale-factor bands correspond to the high-frequency bins, which have the greatest difference in separated energy between degradation levels. While the logarithmic scale-factor bands are more aligned with human perception, in this case a negative effect on correlation to the subjective scores was seen (see Figures 1 & 2). Figure 7 shows that the modified L3PM clarity model, using linearly grouped scale-factor bands like those used in the L2PM clarity model, responded to the degradation similarly to the L2PM clarity model. The similarity of these results, and their difference to the L3PM clarity model results (see Figure 6), suggests that the differences between the models' responses could have been caused by the different scale-factor bands employed.
The remaining differences then are due to the difference in masking threshold calculation between the L2PM and L3PM clarity models discussed previously [23].
Discussion
All tested variants showed a significant and good correlation (rho < -0.6) to the subjective scores, suggesting a potential link between the underlying concept of transient, steady-state, and residual masking and mix clarity perception. However, as the tested subjective dataset contained only 16 stimuli, the model may be overfit, and a larger test containing more participants and stimuli should be performed to validate these results. The L2PM clarity model showed the strongest rho correlation, though the relationship was non-linear, with the Spearman's rank correlation showing a stronger relationship than the Pearson correlation. This implementation of the model is also the least computationally expensive, as the L2PM is less complex than the L3PM. During independent testing, the model responded as expected to the addition of pink noise and reverberation, with the masking score increasing (clarity decreasing) as the degradation levels increased. The model's response to clipping was less uniform than that to the addition of pink noise or reverberation, showing a more complex effect on the relationship of the R and TSS components of the signal. While the response is understandable in terms of how the model operates, it is not expected to be congruent with perception: low perceived clarity scores, corresponding to high masking scores, would uniformly be expected for all stimuli at the highest levels of clipping degradation, revealing a potential shortcoming of the model. However, given the strong correlation to the subjective scores, this shortcoming did not seem to have a greatly detrimental effect for the tested stimuli in this case. Improving the response to this kind of degradation would nonetheless improve the robustness of the model in extreme cases.
While the L3PM has a more complex calculation of masking threshold and provides more efficient encoding in the context of MPEG compression compared to the L2PM, the L3PM clarity model unexpectedly had the weakest correlation to the subjective scores. It is suggested this was largely due to L3PM's logarithmically spaced scale-factor bands not providing emphasis on masking occurring in high frequency bands like in the L2PM and modified L3PM variations of the clarity model. This could indicate a greater importance of masking occurring between high frequency energy to the perception of clarity. In terms of independent testing, the L3PM clarity model had a similar but less extreme response to all three degradation types than the L2PM clarity model. Additionally, the L3PM is capable of calculating masking thresholds using shorter windows when transients occur, providing a higher time resolution and reducing pre-ringing artifacts in MPEG coding. Similarly, this window switching could be used in the clarity model to provide greater temporal resolution for masking occurring relating to transient passages in the signal under test. In the present testing only long windows were used, as none of the stimuli under test had transient content capable of triggering the short frame calculation. If a more appropriate onset detection method was used, the application of shorter windows may improve the clarity model's performance.
To confirm the difference between the L2PM and L3PM clarity model variations' performance was largely due to the scale-factor band grouping, a modified version of the L3PM was devised to employ linearly grouped scale-factor bands like the L2PM. The performance of the modified L3PM clarity model response was similar to the L2PM clarity model in both correlation with the subjective scores, and to the independent dataset, affirming the difference in scale-factor bands was largely responsible for the difference in performance between the L2PM and L3PM clarity models. While the rho and r coefficients showed a slightly weaker correlation than that of the L2PM clarity model, the coefficients were more similar in value, which indicates a somewhat more linear relationship to the subjective data. Additionally, being based on the L3PM clarity model, this model may also benefit from improving the onset detection system used for window switching.
Conclusion
A new model for prediction of mix clarity has been proposed, based on the masking relationship between residual, transient and steady-state components of a musical signal. The model consisted of a median filter-based separation system, which feeds the MPEG Psychoacoustic Model II used to calculate signal-to-mask ratios of the component parts which are then compared. Both layer 2 and layer 3 implementations of the MPEG Psychoacoustic Model II were tested, along with a modified version of the layer 3 implementation, forming L2PM, L3PM, and modified L3PM variants, respectively.
Each variation was evaluated through both Pearson and Spearman's rank correlation to subjective scores gathered in a controlled listening test, and through their response to an independent dataset of stimuli degraded through the addition of pink noise, reverberation, and clipping. The L2PM clarity model showed the strongest correlation to the subjective scores, followed by the modified L3PM, with the L3PM clarity model showing the weakest relationship. Although the L3PM is most efficient in coding applications, the stronger correlation achieved by the modified L3PM clarity model showed that the linearly grouped scale-factor bands of the L2PM were advantageous to performance in this case. All variations of the model responded similarly to the degradation introduced in the independent dataset; the L3PM clarity model's response was less extreme than that of the modified L3PM and L2PM clarity models, whose responses were very similar. The addition of pink noise and reverberation caused an increase in masking score, reflecting a decrease in clarity. Clipping caused a somewhat more complex response, where stimuli which were noise-like and received high masking scores at their reference level gained higher masking scores when clipped, and stimuli which had a low level of residual energy and a low masking score at their reference level received even lower masking scores when clipped.
Further work is ongoing to validate the proposed model's performance against a larger subjective dataset.
220324571 | pes2o/s2orc | v3-fos-license | Wuchang Fangcang Shelter Hospital: Practices, Experiences, and Lessons Learned in Controlling COVID-19
In early January 2020, the outbreak of the new corona virus pneumonia (Corona Virus Disease 2019, COVID-19) occurred. Wuhan, the capital city of Hubei province, became the epicenter of the disease in China. The rapid growth of patients had exceeded the maximum affordability of local medical resources. A large comprehensive gymnasium was converted into Wuchang Fangcang Shelter Hospital in order to provide adequate medical beds and appropriate care for the confirmed patients with mild to moderate symptoms. For these hospitalized patients with COVID-19, medication became the mainstay of therapy. From 5th February to 10th March, a team of pharmacists successfully completed drug supplies and pharmaceutical services for 1124 patients and approximately 800 medical staff, and, while doing so, received zero complaint, and experienced zero disputes and zero pharmacist infection. This paper summarizes the development and construction of the pharmacy, human resource allocation of pharmacists, pharmacy administration, and pharmaceutical services. It aims to review a 34-day period of pharmaceutical practice and serve as a reference for other health professionals working on COVID-19 prevention and treatment in other regions. Electronic supplementary material The online version of this article (10.1007/s42399-020-00382-1) contains supplementary material, which is available to authorized users.
On 8th December 2019, a confirmed case of the new coronavirus infection of pneumonia, termed Corona Virus Disease 2019 (COVID-19), was detected in Wuhan City [1]. In a short period of time, the virus spread quickly throughout the country, and the number of infected patients increased rapidly. At the beginning of February 2020, available hospital beds soon reached full occupancy in those hospitals designated for antivirus treatment. To complicate matters further, some medical workers were infected due to occupational exposure, which forced the medical team into quarantine for medical observation. Based on clinical manifestations, confirmed patients are divided into mild, moderate, severe, and critical types [2,3]. Since more than 80% of COVID-19 patients were mild or moderate types [4][5][6], a novel public health measure, Fangcang Shelter Hospitals, was conceived [7]. In case of emergency, these temporary hospitals have been able to provide extra beds capacity at short notice and provide classified treatments. All confirmed patients with mild and moderate symptoms could be admitted to the Fangcang Shelter Hospital for free medical treatment. During the worst epidemic period in Wuhan, a total of 16 Fangcang Shelter Hospital were established. Wuchang Fangcang Shelter Hospital was developed from the Hongshan Gymnasium and was one of the first three hospitals accepting patients and was the last one to be closed. It covered an area of 14,800 m 2 and housed a total of 800 beds, which were separated across three independent regions in order to optimize management and treatment efficiency. During this major public health emergency, pharmacists, as a member of the medical team, have been responsible for providing professional and superior pharmaceutical services. This paper looks back at the pharmacy construction, occupational protection, pharmacy administration, and pharmaceutical services at Wuchang Fangcang Shelter Hospital. 
These practices and lessons at the forefront of containing the virus may help others in their efforts around the world.
Location, Arrangement, and Allocation of Human Resources
The pharmacy of Wuchang Fangcang Shelter Hospital was a conversion from two referee meeting rooms in Hongshan Gymnasium ( Fig. 1 a and b). The two rooms were adjacent and connected by a shuttle door. One room was used as pharmacy, the other served as a level 2 warehouse (Fig. 1 c and d). Due to space constraints, there was no room for a level 2 warehouse for medications requiring refrigeration. All central air conditioners were turned off to prevent the virus spreading through the ventilation systems. Several household heaters, humidifiers, air purifiers, and refrigerators were used to maintain the appropriate temperature, humidity, and clean air for the storage of medicines.
Ten pharmacists from Renmin Hospital of Wuhan University provided strong support and formed a professional pharmaceutical team (Table 1). The pharmacy was open 24/7 (pharmacists working 12-h shifts), enabling the constant availability of pharmaceutical services. Considering the long hours, high intensity, and complexity of the work, most pharmacists working in Wuchang Fangcang Shelter Hospital were young to middle-aged, well educated with a master's degree or higher, and had at least 5 years of pharmacy experience.
Occupational Protection
The team of pharmacists was monitored regularly. Body temperature was measured twice a day, and nucleic acid and specific antibodies of COVID-19 were tested twice per month. Attention was also paid to the staff's mental and emotional health; if necessary, psychological evaluations were carried out and psychological counseling was provided. The correct use of personal protective equipment (PPE) and regular, thorough hand hygiene were key measures in preventing and controlling infection through contact transmission, droplet transmission, and airborne virus particles. According to the risk of exposure in different working areas, pharmacists took different precautions (Table 2) [8]. Various protective measures were taken to ensure that pharmacists could provide professional services in a responsible manner.
Drug Supply and Pharmacy Administration
All medications used in Wuchang Fangcang Shelter Hospital were obtained from three sources:
1. Renmin Hospital of Wuhan University, which acted as the national general coordinating agency for all medical teams, provided medicines for chronic diseases and first aid.
2. Some medicines recommended in the diagnosis and treatment guidelines were obtained from the Wuhan epidemic prevention and control headquarters [9].
3. Medicines were donated by pharmaceutical companies or charitable organizations.
The diverse sources of medications added to the need for efficient and accurate pharmacy work and caused many difficulties for pharmacy administration. The corresponding work processes, including the receipt of medicines, selection, inspection, prescription checking, dispensing, and distribution, were established under the unified leadership of Wuchang Fangcang Shelter Hospital (Fig. 2).
According to the characteristics of mild and moderate cases, the diagnosis and treatment guidelines and other regulatory documents [9,10], combined with expert clinical opinion, a list of medicines was determined for infected patients with mild and moderate symptoms. This covered symptomatic treatment, prevention of complications, treatment of underlying diseases, emergency rescue medications, and so on. Based on the actual clinical situation, the variety and quantity of medications were regularly adjusted and supplemented. A catalog containing 116 medications was finally revised on 4 March 2020 and is shown in Table S1.
The categories of medicines in the pharmacy were compatible with the function of the Fangcang Shelter Hospital and covered antiviral, antibacterial, antipyretic, antitussive, anxiolytic, expectorant, and chronic disease medications. The stock conformed to the treatment protocols for COVID-19, met the needs of patients with underlying diseases, and included first-aid medicines for dealing with any sudden incidents. To integrate traditional Chinese and Western medicine in treating infected patients, 16 kinds of Traditional Chinese Medicine, including preventive and therapeutic decoctions for medical staff and patients, respectively (Fig. S1), were also available.
Pharmaceutical Services
In addition to maintaining a constant and regular supply of drugs, the provision of pharmaceutical services was another important and indispensable duty during the pandemic [11][12][13]. Relying on the 5G network and medical information systems, the team of pharmacists accomplished pharmaceutical services smoothly, helping to reduce the risk of occupational exposure in the Shelter Hospital.
Pharmaceutical services focused on the following six aspects:
1. Checking medical orders related to drug therapy
2. Paying attention to the prescriptions of patients with underlying diseases
3. Verifying the suitability of the usage and dosage of medications
4. Monitoring adverse drug reactions and drug-drug interactions
5. Summarizing and sharing the latest drug information
6. Providing medication-related consultation and education
In some cases, patients with chronic diseases took duplicate medications or overdosed. They were admitted to the hospital with their own chronic disease medicines; not being clearly aware of this, doctors prescribed the same medication or other drugs with the same pharmacological effect. To overcome this problem, patients taking chronic disease medicines (such as antihypertensive drugs and hypoglycemic agents) were screened out by the pharmacy information system, so that pharmacists could intervene in time on medical orders and prescriptions. The pharmacists also
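The duplicate-prescription screening described above can be sketched as a small routine. The drug names and the class map below are hypothetical illustrations, not the hospital's actual formulary or information system:

```python
# Hypothetical sketch of duplicate-prescription screening by pharmacological
# class. The drug-to-class map is illustrative, not a real formulary.
DRUG_CLASS = {
    "amlodipine": "antihypertensive",
    "nifedipine": "antihypertensive",
    "metformin": "hypoglycemic",
    "gliclazide": "hypoglycemic",
}

def duplicate_classes(own_meds, prescribed_meds):
    """Return pharmacological classes present in both the patient's own
    chronic-disease medicines and the newly prescribed orders."""
    own = {DRUG_CLASS[d] for d in own_meds if d in DRUG_CLASS}
    new = {DRUG_CLASS[d] for d in prescribed_meds if d in DRUG_CLASS}
    return sorted(own & new)

# A patient already taking amlodipine who is then prescribed nifedipine
# shares the "antihypertensive" class and would be flagged for review.
flags = duplicate_classes(["amlodipine", "metformin"], ["nifedipine"])
```

A match in `flags` would prompt the pharmacist to intervene on the order before dispensing.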
Conclusion
While our understanding of the virus deepens and with the constant improvement of diagnosis, treatment, and strategies for prevention and control, pharmacists should continue to actively collect the latest information and improve their services accordingly. For instance, it has recently been debated whether angiotensin-converting enzyme inhibitors (ACEIs) and angiotensin receptor blockers (ARBs) increase susceptibility to COVID-19. In the pharmaceutical services provided at Wuchang Fangcang Shelter Hospital, ACEIs and ARBs were not recommended, but the latest joint viewpoint from three U.S. heart groups now states that patients with COVID-19 should continue to take ACE inhibitors and ARBs [14].
Not to be neglected, the mental health of pharmacists also requires close attention. Having worked in this shelter hospital for 23 days, one pharmacist felt anxious and uncomfortable. Tests for viral nucleic acid and specific antibodies were negative, and computed tomography showed normal results. After psychological counseling and a brief period of rest, the pharmacist's physical and mental state greatly improved, and the pharmacist gradually recovered. Psychological research shows that during the peak of the COVID-19 epidemic in China, more than one-third of medical staff suffered from insomnia, which may progress to depression, anxiety, and stress trauma [15].
On March 10, 2020, Wuchang Fangcang Shelter Hospital was closed, indicating that all of these large-scale temporary hospitals in Wuhan had completed their missions. The team of pharmacists made their utmost efforts to perform their professional duties, ensuring the supply of medicines and providing high-level pharmaceutical services. Currently, the COVID-19 outbreak is spreading worldwide, and the situation awaits a vaccine. Pharmacists should unite globally to contribute their expertise and strength to help prevent the spread of the virus.
Acknowledgments In this concerted effort against the COVID-19 virus, thousands of medical staff in China (doctors, pharmacists, nurses, inspection, and imaging technicians) put their hearts and souls into curing infected patients, subjecting themselves to a huge risk of infection. We thank all pharmacists for their valuable suggestions and great contributions in fighting this epidemic disease. Thanks also to all logistics support workers for delivering invaluable protective equipment and living supplies to medical staff and patients.
Authors' Contributions BS and BZ initiated the topic. BS, BZ, LC, LZ, MZ, JL, JW, KC, YX, and WS participated in discussions. BS wrote the first draft of the manuscript. All authors read and approved the final manuscript.
Compliance with Ethical Standards
Conflict of Interest The authors declared that they have no conflict of interest.
Ethics Statements Not applicable. This is a descriptive and retrospective study and does not undermine the principles set by the ethical standards of the institutional and/or national research committee and/or the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Exploration of In-Medium Hyperon-Nucleon Interactions
The study focuses on exploring the changes in the hyperon-baryon interaction at various nuclear densities. The approach starts by building a vacuum hyperon-nucleon interaction model based on boson exchange, maintaining SU(3) flavor symmetry. The Bethe-Goldstone equation is then used to investigate medium effects on top of the bare interaction. A detailed investigation of the density dependence reveals clear changes in the low-energy parameters as the medium density is varied, shown for different strangeness channels.
Introduction
Nuclear physics has entered an exciting phase since the discovery of strange particles, which introduced several unexplained phenomena such as the 'hyperon puzzle', charge symmetry breaking, the hypertriton lifetime puzzle, the possible NNΛ resonant state, and so on [1]. All of these require knowledge of the hyperon-baryon interaction, with a special focus on medium properties.
With this motivation, this study focuses on understanding the hyperon-nucleon interaction in the presence of a medium, building on an approach developed some time ago at Giessen [2]. For that purpose, a microscopic approach is used: first the bare interaction is considered, and then the medium effects are gradually added to obtain a thorough understanding of the subject. The major goal of this work is to study the density effects qualitatively first, so that with improved experimental inputs the results can later be quantified to a great extent.
Here in this paper, first the methodology for studying the vacuum interaction is discussed, followed by the in-medium effective interaction. The results for each are then discussed.
Methodology
In this work, the meson-exchange approach is used to develop a hyperon-nucleon (YN) interaction applicable to the full baryon and meson octets, for the latter including also the meson singlets. In the past, that strategy has been used extensively, see e.g. [4,5], although large uncertainties remain in view of the scarcity of hyperon-nucleon scattering data. In our approach, the main focus is on in-medium interactions, aiming finally at using hypernuclear data as independent additional constraints on interactions among octet baryons.
Vacuum Hyperon-Baryon Interaction
Being mainly focused on studying the density effects of the interaction, a revived version of the one-boson-exchange potential (OBEP) was developed following SU(3) flavor symmetry to compensate for the scarcity of experimental data [3]. The interaction Lagrangian is defined by a superposition of pseudo-scalar (P), scalar (S), and vector (V) SU(3) scalars, with octet coupling constants g_8, α_8 = F/(F+D), the singlet coupling g_1, and SU(3)-flavor invariant baryon and meson matrices B, φ. The mixing of octet and singlet mesons is taken into account by mixing angles θ.
In order to overcome the uncertainties due to the lack of YN data, the baryon-meson coupling constants are derived here from the elementary g and α couplings by using strictly the fundamental SU(3)-flavor relations [3]. Hence, the three sets (g_8, α_8, g_1)_{P,S,V} remain as free parameters. The mixing angles were chosen at their ideal-mixing values, having been found not to be a crucial determining factor for the interaction when changed from the ideal case. For g_8^P, the value fixed by pion-nucleon scattering is used. Accordingly, the octet strengths (g_8^{S,V}) for scalar and vector, the singlet strengths (g_1^{P,S,V}), and α_8^{P,S,V} for each of the three meson nonets are fitted in this model.
With this construction, and including dipole vertex form factors, the parameters were fitted to the available data set by solving a set of coupled 3-D Lippmann-Schwinger equations for the octet T-matrix (Eq. 2), serving to determine phase shifts, cross sections, and other observables like the low-energy parameters, with (q, q', k) denoting the initial, final, and intermediate relative momenta. The OBEP and the Green's function are denoted by V and G, respectively.
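The coupled-channel equation referred to here as Eq. 2 has the standard momentum-space Lippmann-Schwinger form. The sketch below suppresses channel and partial-wave indices and is a reconstruction based on the symbols defined in the text (V, G, and the momenta q, q', k), not the paper's exact typeset equation:

```latex
T(q', q) \;=\; V(q', q) \;+\; \int_0^{\infty} \frac{k^2\, dk}{(2\pi)^3}\; V(q', k)\; G(k; \sqrt{s})\; T(k, q)
```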
Vacuum Interaction Results
The constructed potential was first fitted to the available set of Σ⁺p and Λp cross section data, using the ¹S₀ partial wave only at this stage, which is sufficient for preliminary studies in this energy sector. The resulting fit, with χ² = 6.68, is shown in Fig. 1 (left), and the obtained parameter set is given in Table 1 (left). As an application, the obtained parameter set was then used to evaluate the Σ⁻p → Σ⁰n cross section, as shown in Fig. 1 (right), producing a satisfactory result considering the large error bars of the scattering data. For the present study, only S = -1, -2 results were calculated for a few channels as a starting point. Thus, having a satisfactory vacuum interaction model, the next step is to apply the medium effect, as discussed in the next section.
Medium Effect
To investigate medium effects, infinite nuclear matter serves as a rich laboratory for the strong interaction, crucial for systems ranging from hypernuclei to dense matter and heavy-ion experiments, to name a few. In this work, medium effects are incorporated in terms of the Pauli projector Q_F = Θ(k₁ − k_{F₁})Θ(k₂ − k_{F₂}), where k_F denotes the nucleon Fermi momentum; Q_F = 1 in free space. The Bethe-Goldstone equation in momentum space is then given by
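In its standard momentum-space form, the Bethe-Goldstone equation mirrors the Lippmann-Schwinger equation with the Pauli projector inserted into the intermediate states. The following is a sketch of that standard form (ω denotes the starting energy and E₁, E₂ the single-particle energies), not the paper's exact typeset equation:

```latex
G(q', q; \omega) \;=\; V(q', q) \;+\; \int \frac{d^3 k}{(2\pi)^3}\; V(q', k)\; \frac{Q_F(k)}{\omega - E_1(k) - E_2(k)}\; G(k, q; \omega)
```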
In-Medium Results
When the effect of the medium on the cross sections was studied, a clear weakening of the strength was found, as shown in Fig. 2 (left). A significant effect of the dense medium is the suppression of channel mixing, evident from the weakening in the sharpness of the 'cusp' (a signature of channel mixing) in the phase shift plot of Fig. 2 (right). In order to obtain direct information on this kind of low-energy scattering interaction, the effective range parameters are suitable for gaining further insight. The low-energy behavior stands as a convenient measure for this baryon sector, where the core information is limited (Eq. 4).
The scattering length (a_s) and effective range (r_e) provide information about the nature of the interaction. It is important to keep in mind that precise quantitative results are not yet available for this sector; hence a qualitative measure can serve as a theoretical prediction.
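The low-energy parameters a_s and r_e are defined through the standard effective-range expansion of the S-wave phase shift δ(k), presumably the content of the Eq. 4 referenced above:

```latex
k \cot \delta(k) \;=\; -\frac{1}{a_s} \;+\; \frac{1}{2}\, r_e\, k^2 \;+\; \mathcal{O}(k^4)
```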
To explore the density effect on the hyperon interaction, the nuclear density was varied and the effect on the scattering length and effective range was studied for the S = -1 and S = -2 channels. The results obtained are reported in Fig. 3, which shows that the scattering length saturates around the nuclear saturation density, that is, at high densities. The calculated values are given in Table 1 (right). The obtained results are in line with existing predictions reported by other groups (Extended Soft Core (ESC) [4], Juelich [5], and chiral effective field theory (χEFT) [6] models).
Summary and Outlook
The work reported a revived version of the one-boson-exchange potential for the hyperon-nucleon interaction, aimed primarily at in-medium studies. As a preliminary step, the vacuum interaction showed good agreement with S = -1 scattering data. With the vacuum input, the Bethe-Goldstone formalism then successfully describes the dense-medium behavior of the YN interaction. The investigation also revealed channel-dependent behavior deserving further exploration. As a future step, the approach can be extended to higher strangeness channels. The G-matrix formalism can also be extended to various in-medium studies of hypernuclei, for example to study hypernuclear structure properties within the Hartree-Fock formalism. As a working qualitative model for hyperons, promising future scattering data will help to quantify the model parameters and thus provide more input on dense-matter behavior as well. The inclusion of three-body forces and a comparison to the interactions derived independently by the DBHF energy density functional approach in [8] are in preparation.
Results on Charmonium(-like) and Bottomonium(-like) States from Belle and BaBar
Spectroscopy results from Belle and BaBar are reported. A particular focus is put on new results on the X(3872) state: its radiative decays to $J$/$\psi$$\gamma$ and $\psi'$$\gamma$, its decay into $J$/$\psi$3$\pi$, and the search for its production in radiative Upsilon decays. Another focus is $L$=2 mesons, in particular a possible $D$-wave assignment for the X(3872) and the confirmation of an Upsilon $D$-wave state.
INTRODUCTION
In this paper, spectroscopy results from the two B factories, BaBar [1] at PEP-II and Belle [2] at KEKB, are reported. The data samples of the two experiments are summarized in Tab. 1. There are quite a number of unresolved, interesting questions in charmonium spectroscopy, such as:
• The charmonium potential is usually regarded as a static quark-antiquark potential (Cornell ansatz) [3] V(r) = −(4/3)(α_S/r) + kr, with a Coulomb-like term and a confinement term. As can be seen, α_S is assumed fixed over the total range 0 < r < 1 fm, which is an approximation. Also, as the mechanism of confinement is still one of the unanswered questions in QCD, it is still unproven whether the linear approximation (corresponding to a constant string force) in the confinement term is (a) valid to all orders of α_S and (b) valid even in the far long range r > 1.3 fm (i.e. in the string-breaking regime). In fact, studies of potentials constructed with two-gluon exchanges [4][5] lead to a number of additional terms with different r dependences. In addition, the high-lying states (e.g. L≤2, n≥3) are sensitive to the string constant k ≈ 1 GeV/fm, which is the slope of the confinement term, and mass measurements can provide a precise measurement of k.
• The strong coupling constant in the charmonium system has a quite high value, α_S = 0.54 [6], so non-perturbative effects might become visible, e.g. in the hyperfine splittings.
• What is the nature of the newly observed narrow states near thresholds, which do not fit into potential model calculations? Are they molecular states? Or tetraquarks? Or threshold effects? As an example, molecular potentials might contain r⁻² and r⁻³ terms [7], not present in the Cornell potential, leading to eigenstates with masses different from those of the quark-antiquark potential.
• There are as yet unobserved states, some of them expected to be narrow, e.g. the ³D₂ (J^PC = 2⁻⁻) state.
Similar questions apply to the bottomonium system. At B factories, studies of bottomonium require the change of the beam energies. A few results are reported down below.
[Table 1: on-resonance data samples of Belle and BaBar]

Although the η_c is the ground state of the charmonium system (J^PC = 0⁻⁺, ¹S₀) and was already discovered in 1980 [11][12], its width has been of particular interest recently. Previous width measurements [17] showed values around Γ ≈ 15 MeV from radiative J/ψ and ψ′ decays, and values around Γ ≈ 30 MeV from B meson decays. However, this is probably not surprising, as in the radiative decays the cross section varies as E_γ^a with an exponent a = 3-7. This energy dependence modifies the lineshape, and the width determination becomes non-trivial. On the other hand, in the reaction γγ→η_c the Breit-Wigner lineshape is an appropriate approximation. In a new high-statistics measurement by BaBar with a data set of 469 fb⁻¹ and 14090 η_c signal events, a high-precision measurement of the mass m = 2982.2±0.4±1.6 MeV and the width Γ = 31.7±1.2±0.8 MeV of the η_c was obtained. This measurement represents a factor 3 improvement in both statistical and systematic errors compared to the BaBar measurement of 2008 in B meson decays [9] and the Belle measurement of 2008 in γγ collisions [10].
Decays of X(3872) to DD*

The X(3872) state was discovered in B meson decays in the channel X(3872)→J/ψπ⁺π⁻ by Belle [13]. It was confirmed in the same process by BaBar [14] and confirmed in inclusive production in pp̄ collisions at √s = 1.8 TeV at CDF-II [15] and D0 [16]. Among the newly observed charmonium-like states (sometimes referred to as XYZ states), the X(3872) is the only one for which several decay channels have been observed. It has a surprisingly narrow width, Γ < 2.3 MeV [13], although its mass is above the open-charm threshold. As its mass of 3871.56±0.22 MeV [17] is very close (within 1 MeV) to the D⁰D⁰* threshold, it was discussed as a possible S-wave [D⁰D⁰*] molecule [18][19]. The decay into D±D∓* is kinematically forbidden, but the decay into D⁰D⁰* is a strong decay and, among the decays observed so far, it represents the dominant one, i.e. the branching fraction is a factor 9 higher than for the decay into J/ψπ⁺π⁻. In this decay channel, BaBar surprisingly measured a high mass of the X(3872), m = 3875.1 +0.7 −0.5 (stat.)±0.5(syst.) MeV [20]. This high value initiated discussion that there might be two different X states, namely X(3872) and X(3875), which would fit a tetraquark hypothesis [21] with the two different states [ccuu] and [ccdd]. On the other hand, Belle measured in the same decay channel a mass of m = 3872.9 +0.6 −0.4 (stat.) +0.4 −0.5 (syst.) MeV [22], consistent with the world average [17]. A possible explanation of the discrepancy is the difficulty of performing fits to signals close to threshold. In fact, the two fits used two very different approaches: • BaBar used a 1-dimensional binned maximum likelihood fit [20] with the D*D invariant mass as the only variable, where the signal probability density function was extracted from MC simulations and an exponential function was used for the background parametrization. • Belle used a 2-dimensional unbinned maximum likelihood fit [22], i.e.
on the one hand the beam-constrained mass, with a Gaussian signal and an ARGUS function for the background, and on the other hand the D*D invariant mass, with a Breit-Wigner signal and a square-root function for the background.
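The difference between the two strategies is essentially one of likelihood construction. As a toy illustration (not the experiments' actual fit code; all numbers below are invented), a 1-dimensional unbinned maximum-likelihood fit of a narrow Breit-Wigner signal over a flat background near threshold can be sketched as:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import cauchy, uniform

# Toy data: a narrow Breit-Wigner (Cauchy) "signal" at 3.872 GeV on a flat
# background over a 3.85-3.90 GeV window. All numbers are illustrative.
rng = np.random.default_rng(0)
sig = cauchy.rvs(loc=3.872, scale=0.002, size=300, random_state=rng)
bkg = uniform.rvs(loc=3.85, scale=0.05, size=700, random_state=rng)
data = np.concatenate([sig, bkg])
data = data[(data > 3.85) & (data < 3.90)]

def nll(params):
    """Unbinned negative log-likelihood: signal fraction f, mass m, width g."""
    f, m, g = params
    pdf = f * cauchy.pdf(data, loc=m, scale=g) + (1.0 - f) / 0.05
    return -np.sum(np.log(pdf))

res = minimize(nll, x0=[0.3, 3.87, 0.003], method="L-BFGS-B",
               bounds=[(0.01, 0.99), (3.86, 3.89), (1e-4, 0.02)])
f_fit, m_fit, g_fit = res.x
```

An unbinned fit uses each event's mass directly, which matters most near thresholds where binning can wash out the lineshape; a binned fit instead compares histogram contents to a template, as in the BaBar approach.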
Radiative Decays of X(3872)
The branching fraction of the rare decay X(3872)→J/ψγ is a factor 6 smaller than that for X(3872)→J/ψπ⁺π⁻. There was a priori no understanding of why the transition of the X(3872) to an n=2 charmonium state should be stronger than to n=1; in fact, quite the opposite behaviour was expected.
In the case of X(3872)→J/ψγ, the photon energy is E_γ = 775 MeV, and thus, due to vector meson dominance, ρ and ω can contribute to the amplitudes. However, in the case of X(3872)→ψ′γ, with the smaller E_γ = 186 MeV, the transition can only proceed through light-quark annihilation with an expected small amplitude. A new measurement by Belle of both radiative channels was based upon a data set of 711 fb⁻¹ [26]. The background was studied in MC simulations and revealed peaking behaviour in some background components close to the signal region. The signal X(3872)→J/ψγ was clearly re-established, with 30.0 +8.2 −7.4 signal events (4.9σ significance) for B⁺→K⁺X(3872) and 5.7 +3.5 −2.8 signal events (2.4σ significance) for B⁰→K⁰X(3872). For X(3872)→ψ′γ, four decay channels were analyzed: B⁺→K⁺X(3872) and B⁰→K⁰X(3872), with X(3872)→ψ′γ and ψ′→l⁺l⁻ or ψ′→J/ψπ⁺π⁻. Charged and neutral B modes were treated separately, but the two ψ′ subdecay modes were fitted simultaneously because of their different background shapes. The signal was modeled as a double Gaussian, and the combinatorial background was parameterized by a threshold function. The shape of the ψ′K* and ψ′K background, and in particular its peaking structures, was modeled as a sum of bifurcated Gaussians using a large MC sample. The signal yields were determined as 5.0 +11.9 −11.0 events (0.4σ significance) for B⁺→K⁺X(3872) and 1.5 +4.8 −3.9 events (0.2σ significance) for B⁰→K⁰X(3872). Thus, contrary to BaBar, Belle observed no signal, implying that there is no indication that the radiative transition from the X(3872) to n=2 charmonium is stronger than to n=1 charmonium. In the same analysis, the decay χ_c1,2→J/ψγ was used as a reference channel, with signal yields of 32.8 +10.9 −10.2 (3.6σ significance) for B⁺→K⁺χ_c2 and 2.8 +4.7 −3.9 (0.7σ significance) for B⁰→K⁰χ_c2.
In the charged mode, this represents the first observation of a J P =2 + state in a rare exclusive final state in a B meson decay, and thus a transition 0 − →0 − 2 + .
Decays of X(3872) and Y(3940) into J/ψ3π
As the ρ meson carries isospin I=1, the X(3872) seems to violate isospin conservation in the decay X(3872)→J/ψρ(→π⁺π⁻). One of the proposed explanations [27] is ρ/ω mixing, and therefore the investigation of the decay X(3872)→J/ψω(→π⁺π⁻π⁰) is of importance. The difficulty here is the nearby Y(3940) state, which is also known to decay into the same final state. The latter decay was investigated by Belle with a data set of 275×10⁶ B meson pairs. The mass of the Y(3940) was determined as 3943±11(stat.)±13(syst.) MeV with a width of 87±22(stat.)±26(syst.) MeV. Belle also observed a signal for X(3872)→J/ψω(→π⁺π⁻π⁰), based upon a data set of 256 fb⁻¹ [29]. The measured efficiency-corrected ratio X(3872)→J/ψπ⁺π⁻π⁰ / X(3872)→J/ψπ⁺π⁻ = 1.0±0.4(stat.)±0.3(syst.) indicated another case of isospin violation due to the additional π⁰. BaBar published slightly different values for the Y(3940) with a data set of 383×10⁶ B meson pairs [30], namely a mass of 3914.6 +3.8 −3.4 (stat.)±2.0(syst.) MeV and a width of 34 +12 −8 (stat.)±5(syst.) MeV. However, a signal for X(3872)→J/ψω(→π⁺π⁻π⁰) was not observed. In a recent re-analysis with 433 fb⁻¹ by BaBar, a requirement on the 3-pion mass was adjusted, i.e. the lower bound was extended from 0.7695 GeV to 0.7400 GeV. With this change in the analysis technique, BaBar was able to confirm the Belle signal for X(3872)→J/ψω(→π⁺π⁻π⁰) and to confirm the large isospin violation, with the ratio X(3872)→J/ψπ⁺π⁻π⁰ / X(3872)→J/ψπ⁺π⁻ measured as 0.7±0.3(stat.) and 1.7±1.3(stat.) for B⁺ and B⁰ decays, respectively. In the re-analysis, BaBar also investigated the shape of the 3π mass distribution in order to determine the quantum numbers of the X(3872). A similar analysis of the 2π mass distribution in the case of X(3872)→J/ψπ⁺π⁻ had been performed before by Belle [32] and CDF-II [33]. The result in both cases was that S-wave is preferred.
However, in the new BaBar analysis, the shape of the 3π mass distribution seems to indicate that P-wave is preferred. For the 2π case and S-wave, a parity of +1 for the X(3872) is preferred, leading to a tentative assignment of J^PC = 1⁺⁺. This quantum number assignment is also supported by angular analyses [29][34] and leads to a possible charmonium state assignment of χ_c1(³P₁), which is an n=2 state with a mass of 3953 MeV as predicted by potential models [6], and thus 70 MeV too high compared to the observation. For the 3π case and P-wave, a parity of −1 for the X(3872) is preferred, leading to a possible assignment of J^PC = 2⁻⁺. Then a possible charmonium assignment is η_c2(¹D₂), which is an n=1 state. The predicted mass is 100 MeV lower than for the χ_c1. This state would be an L=2 meson. In a different analysis, Belle investigated the J/ψω final state in γγ collisions based upon 694 fb⁻¹ [35]. This analysis uses not only ϒ(4S), but also ϒ(3S) and ϒ(5S) data (see Tab. 1). The event selection uses a p_T < 0.1 GeV/c balance requirement. The final state in γγ collisions is required to have isospin I=0. Fig. 2 shows the W distribution of the final candidate events, where W is defined as W = m₅ − m(l⁺l⁻) + m_J/ψ, with m₅ the invariant mass of the system constructed from four charged tracks and a neutral pion candidate. A clear enhancement is seen just above the J/ψω threshold, with 49±14(stat.)±4(syst.) events (7.7σ statistical significance). The fitted mass is 3915±3(stat.)±2(syst.) MeV, so this might be the observation of the Y(3940) state in a second production mode. However, the fitted width is Γ = 17±10(stat.)±3(syst.) MeV, which is narrower than the width of the Y(3940) as measured in B decays. The production mode allows the charge parity C=+1 to be established for this state, the same as for the X(3872), but the determination of the other quantum numbers would require more statistics.
ϒ(1S) Radiative Decays to X(3872)
As shown in Tab. 1, Belle recorded an extensive data set with the beam energies adjusted to the ϒ(1S) resonance, the n=1 ³S₁ bb̄ state with J^P = 1⁻ and a mass of 9.46 GeV. With this data set, radiative transitions bb̄→cc̄γ can be investigated. These are rare events, with an expected branching fraction of ≤10⁻⁵ [36], involving interfering QED and QCD amplitudes. The transition may proceed from a 1⁻⁻ state, such as the ϒ(1S), to a 1⁺⁺ state, which is one of the most probable quantum number assignments for the X(3872). Belle searched for the process ϒ(1S)→γX(3872)(→J/ψπ⁺π⁻) with a data set of 5.712 fb⁻¹ [37], corresponding to 88×10⁶ ϒ(1S) decays. The photon detection required E_γ^lab > 3.5 GeV, and the squared recoil mass of the four charged tracks was required to be consistent with zero, i.e. −2 < m²_recoil < 2 GeV². Initial state radiation (ISR) was treated in two different ways. On the one hand, ISR events were rejected by a requirement on the cms polar angle of the photon, |cos ϑ*_γ| < 0.9. On the other hand, ISR events for ψ′ production, with the same J/ψπ⁺π⁻ final state as the X(3872), were used as a cross-check, and the cross section for this ISR process was determined as 20.2±1.1(stat.) pb. For the X(3872), one event in the signal region was observed, resulting in an upper limit on the product branching fraction BR(ϒ(1S)→γX(3872))×BR(X(3872)→J/ψπ⁺π⁻) < 2.2×10⁻⁶ at 90% CL.

Fig. 2 caption: The W distribution of the final candidate events (dots with error bars) for γγ→J/ψω at Belle [35]. The shaded histogram is the distribution of non-J/ψ background estimated from the sideband distribution. The bold solid, thinner solid, and dashed curves are the total, resonance, and background contributions, respectively. The dot-dashed curve is the fit without a resonance.
SUMMARY
The B factories continue to provide exciting results. Charmonium spectroscopy is studied in B meson decays, γγ collisions, and ϒ(nS) decays; bottomonium spectroscopy is studied in ϒ(nS) decays. Highly excited states such as L=2 states are clearly identified and provide accurate tests for potential models. States which are not consistent with any potential model, such as the X(3872), are studied in new ways, such as radiative decays or production in radiative decays. Surprising properties such as large isospin violation are confirmed.
Concurrent Signals and Behavioral Plasticity in Blue Crab (Callinectes sapidus Rathbun) Courtship
Behavioral flexibility and behavioral regulation through courtship signals may both contribute to mating success. Blue crabs (Callinectes sapidus) form precopulatory pairs after courtship periods that are influenced by female and perhaps male urine-based chemical signals. In this study, male and female crabs were observed in 1.5-m circular outdoor pools for 45 min while the occurrence and sequence of courtship behaviors and pairing outcomes were recorded. These results were then compared with trials in which males or females were blindfolded; lateral antennule (outer flagellum) ablated; blindfolded and lateral antennule ablated; or had received nephropore blocks. The relative importance of the visual and chemical sensory systems during blue crab courtship was then determined, and urine- and non-urine-based chemical signals for both males and females were examined. Courtship behaviors varied considerably in occurrence and sequence; no measured behavior was necessary for pairing success. Male or female blindfolding had no effect on any measured behavior. Males and females required chemical information for normal courtship behaviors, yet blocking male or female urine release did not affect courtship behaviors. Males required chemical information to initiate pairing or to maintain stable pairs. Male urine release was necessary for stable pairing, suggesting that male urine signals may be involved in pair maintenance rather than pair formation. Females that could not receive chemical information paired faster and elicited fewer male agonistic behaviors. The results demonstrate a great variability and flexibility in blue crab courtship, with no evidence for stereotyped behavioral sequences. However, these behaviors appear regulated by urine- and non-urine-based redundant chemical signals emanating from both males and females. Although urine-based signals play roles in blue crab courtship, chemical signals from other sites appear to carry sufficient information to elicit a full range of behavioral responses in males and females. Received 30 March 1998; accepted 1 June 1999.

* Current address: Anne Arundel Community College, 101 College Parkway, Arnold, MD 21012. E-mail: pjbushman@mai1.aacc.cc.md.us
Introduction
Courtship and mating success depend upon correct behavioral responses by both males and females. One might expect a degree of plasticity in these behaviors (Hazlett, 1995). Because behavior can quickly track changes in environmental conditions (West-Eberhard, 1989), flexibility in the occurrence and timing of reproductive behaviors might help insure successful mating. Many invertebrates do exhibit plasticity in their behaviors (Carlson and Copeland, 1978; Dejean, 1987; Elner and Beninger, 1995) and this variability may be the rule for most animal species (Lott, 1991).
Conversely, one might also expect courtship and reproductive behaviors to be controlled and regulated by conspecific communication signals. By eliciting appropriate behavioral responses, these signals could enhance mating success and help to prevent interspecies mating. Courtship and mating in a fluctuating environment could be aided by multiple or redundant signals, which would make the transmission of adequate and correct information more likely. Multiple or redundant signals have been found in both invertebrate and vertebrate species (van den Hurk and Lambert, 1983; Linn et al., 1984; Rand et al., 1992).
Like many crustaceans (Hartnoll, 1969), the blue crab Callinectes sapidus Rathbun practices a polygynous mating system involving a complex coordination of female ecdysis, maturation, and copulation. The mating process has been well described (Hay, 1905; Churchill, 1921; Van Engel, 1958; Gleeson, 1980). Immature females nearing their final maturational molt, termed prepubertal females, are approached and courted by mature males. Pairing success results in females being held beneath males in a "cradle carry" posture for a period of precopulatory guarding. They are released for their molt, mated while still soft, and carried again for a period of postcopulatory guarding. This latter guarding protects the female while she is soft and prevents subsequent inseminations by other males (Jivoff, 1997a). Females are thought to receive only one copulation in their lifetime while males mate repeatedly (Van Engel, 1958), although multiple inseminations are possible and occur occasionally (Jivoff, 1997a).
Blue crab courtship can be divided into three phases: mate attraction, pair formation, and pair maintenance. In each phase a precise signaling system would seem important to help insure mating success. The coupling of molt and reproductive condition requires individuals to ascertain the physiological state of prospective partners. Signals can function in the reduction of agonistic behaviors (Tinbergen, 1953; Bastock, 1967), and during mating female blue crabs must in some way guard against injury or death by aggressive, cannibalistic males. Reproductive behaviors and sequences might, therefore, be tightly regulated by communication signals, making appropriate responses more likely and increasing the eventual mating success of the participants (Ryan, 1990; Reynolds, 1993).
Chemoreception and vision are the two best studied sensory modalities in blue crab courtship. Teytaud (1971) reported a role for visual signals in male recognition by prepubertal females. However, Gleeson (1980) showed that males did not respond to female visual stimuli alone, and pairing could proceed in darkness. Chemical signals are important for both male (Gleeson, 1980) and female (Teytaud, 1971; Gibbs, 1996) mate recognition. Some mature males respond with a courtship display to chemical compounds in prepubertal female urine (Gleeson, 1980; Gleeson et al., 1984), and reception of these chemical signals occurs via the aesthetasc sensilla on the lateral filament (outer flagellum) of the male antennules (Gleeson, 1982). This signaling theme appears common in crustaceans: urine carries chemical courtship signals (Ryan, 1966; Bushmann and Atema, 1997; Bamber and Naylor, 1997) and the antennules appear to be the site of distance chemoreception (Ache, 1975; Ameyaw-Akumfi and Hazlett, 1975; Devine and Atema, 1982; Cowan, 1991). The presence of a male chemical signal has not been firmly established, although Gleeson (1991) showed female attraction to water that contained males and Gibbs (1996) demonstrated disruption of pairing with male antennule ablation.
In this study, the occurrence and variability of courtship behaviors observed during blue crab pair formation were examined. These behaviors were then compared with those generated by male and female pairs with vision, distance chemoreception, both senses, or urine release impaired. This allowed a determination of the relative importance of visual and chemical sensory systems during blue crab courtship and an examination of urine- and non-urine-based chemical signals for both males and females.
Materials and Methods
Adult male crabs (125-170 mm carapace width) were collected from the Rhode River, an upper Chesapeake Bay subestuary, with baited commercial crab traps. Premolt prepubertal females (96-127 mm carapace width) were purchased from two local businesses that hold molting females for the soft crab industry. Females ranged in molt stage from late D0 to D3 (Drach, 1939). Animals were held in floating cages in the Rhode River or flow-through seawater tanks for no more than 48 h before participation in the study.
Behavioral interactions were observed in outdoor circular pools (150 cm diameter × 20 cm height) with three centimeters of washed river sand as substrate. Prior to a trial, pools were filled with 15 cm of new river water filtered through a felt bag with 10-µm mesh. A trial began by randomly selecting a male crab and placing him into a pool. Ten minutes later, a randomly selected prepubertal female was placed into the middle of the pool, inside an opaque plastic cylinder designed to prevent interactions prior to the start of the trial. After 10 min acclimation, the cylinder was removed, allowing the animals to freely interact. Three pools were started and watched simultaneously, and the ensuing behaviors were recorded by hand for 45 min. Carapace width and molt stage were recorded for each animal.
Prior to trials either a male or a female from each pair was subjected to an experimental treatment. They were as follows:
1. Nephropore Occlusion: blue crabs possess bilateral nephropores, located anteriorly and just ventral to the eye stalks. Each opening is found in a pit in the carapace, and a chitinous flap opens to allow urine to exit. A modification of a successful cannulation technique was used to prevent urine release. Each pit was first dried by blotting and a drop of acetone, then filled with a viscous cyanoacrylate glue. The glue was immediately hardened with a catalytic accelerator, sealing the nephropore flap shut. Animals were occluded 30 min prior to a trial. The blocks were checked for a tight bond with the carapace immediately before and after a trial. n = 12 males (M:URINE), 14 females (F:URINE).
3. Antennule Ablation: the distal lateral filament (outer flagellum), containing the aesthetasc sensilla, of both antennules was removed. n = 12 males (M:ANTENN), 12 females (F:ANTENN).
4. Blindfolding: two strips of black plastic (50 × 10 mm) were fastened with cyanoacrylate glue to the dorsal and ventral carapace so that each wrapped over and covered an eye stalk. n = 13 males (M:BLIND), 12 females (F:BLIND).
5. Antennule ablation and blindfolding: animals received both antennule ablation and blindfolding treatments. n = 12 males (M:ANT-BLIND), 12 females (F:ANT-BLIND).
6. Sham treatment: both animals in a pair were subjected to sham operations. Antennules were held with forceps without ablation, nephropores were treated with acetone and accelerator but not glued, and blindfolds were attached similarly, but lateral to the eye stalks so that vision was not impaired. n = 10.
7. Intact: no treatments or sham operations were performed on either animal. n = 12.
Comparisons of the intact and sham-treated groups showed no differences in the frequency of occurrence of any measured behavior or pairing outcome. These two groups thus appeared to represent samples of the same population and their data were pooled to yield 22 intact control trials. Behaviors of these pairs were examined to determine a normal range of behavioral variability and sequence. Behaviors were scored once if they occurred in a given trial. The number of trials in which behaviors occurred for the intact control group was then compared with those generated by the treatment groups. Overall differences between treatment and control groups were evaluated with a Chi-square test for multiple independent samples (Siegel and Castellan, 1988). Where significance was found, differences between specific treatment groups and the control were evaluated with a Fisher exact test (FAT) (Siegel and Castellan, 1988). The mean times between trial start and both the first behavioral interaction and Initiation of Pair Formation were also compared between the control and treatment groups. Overall differences were evaluated with analysis of variance (Jaccard, 1983), while mean differences between specific treatments and the control were evaluated with a non-directional t-test (Jaccard, 1983).
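The two-stage screening described above (an overall Chi-square test across groups, followed by pairwise Fisher exact tests where significance is found) can be sketched as follows. The occurrence counts below are hypothetical, chosen only to illustrate the computation; the cutoff 5.991 is the standard Chi-square critical value for 2 degrees of freedom at α = 0.05.

```python
# Hypothetical (occurred, did-not-occur) counts for one behavior, laid out as a
# 2 x k contingency table with one column per group. Illustrative only -- these
# are not the paper's data.
counts = {
    "intact control": (18, 4),
    "treatment A": (5, 7),
    "treatment B": (10, 2),
}

def chi_square_2xk(groups):
    """Pearson chi-square statistic for a 2 x k table of (occurred, not) pairs."""
    n = sum(a + b for a, b in groups)
    occ_total = sum(a for a, _ in groups)
    chi2 = 0.0
    for occ, not_occ in groups:
        col_total = occ + not_occ
        for observed, row_total in ((occ, occ_total), (not_occ, n - occ_total)):
            expected = col_total * row_total / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

chi2 = chi_square_2xk(list(counts.values()))
dof = (2 - 1) * (len(counts) - 1)   # = 2 for three groups
significant = chi2 > 5.991          # critical value for dof = 2 at alpha = 0.05
```

Only when this overall test is significant would the pairwise treatment-versus-control comparisons proceed, mirroring the sequence the authors describe.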
Behaviors
Blue crab reproductive and agonistic behaviors have been well described over the years (Churchill, 1921; Van Engel, 1958; Teytaud, 1971; Jachowski, 1974; Gleeson, 1980). This study analyzed one agonistic and five reproductive behaviors. These behaviors were common, unmistakable, and reliable indicators of the nature of the interaction occurring. They were: 1. Male Strike: an agonistic behavior in which the male strikes or seizes any female body part with either chela without subsequent attempts at cradle carry. 2. Male Display: a courtship behavior in which the male raises high on his walking legs, spreads his chelae laterally, and raises and rotates his 5th walking legs (periopods) laterally. 3. Female Present: a courtship behavior in which the female faces away from the male and holds her body in a cradle carry posture, with or without spread chelae. 4. Female Rock: a courtship behavior in which the female rocks her body from side to side. 5. Initiation of Pair Formation: the male seizes the female and attempts to pull her into a cradle carry position. Females often resist, males may make many attempts, and pairing may or may not become established. 6. Stable Pair Formation: this was scored at the end of a trial. Pairs were in stable cradle carry if both female and male struggling had ceased and the animals had been paired for at least 10 min.
Table I. Frequency of courtship and agonistic behaviors in intact blue crab pairs. The number of trials in which each behavior occurred is shown for all trials, those trials in which Initiation of Pair Formation occurred, and those trials in which a stable pair was formed.
Male and female blue crabs in intact control pairs showed great variability in the occurrence of their behaviors. During courtship, no behavior occurred with a high frequency (Table I). Male Strike, Male Display, Female Present, and Female Rock occurred in only 41, 41, 56, and 36 percent of intact control trials, respectively. Pairing was initiated at a high rate, however (82% of trials), with 50% of trials resulting in Stable Pair Formation. No single behavior more likely led to the initiation of pairing or stable pairing, nor did the exhibition of any behavior preclude these outcomes (Table I). There was no single sequence of behaviors that predominated, nor any single sequence that invariably led to greater or lesser pairing success. Neither male nor female courtship behaviors were correlated with female molt stage (early premolt D0 vs. late premolt D3) or the relative sizes of males and females.
However, some general trends emerge from courtship sequences examined together with male agonistic behavior (Fig. 1). Most pairs (18 of 22) exhibited some sequence of courtship behaviors prior to pair formation (χ² = 8.91, P = 0.003). The presence of male agonistic behavior significantly reduced the likelihood of stable pairing (FAT, P = 0.040). Of the nine pairs in which males exhibited Male Strike, only two (22%) formed stable pairs. Of the remaining 13 pairs in which males did not exhibit Male Strike, nine (69%) formed stable pairs (Fig. 1).
When the behaviors Female Present and Female Rock were examined, there were significant overall differences between treatment and control groups (χ² = 45.78, P < 0.05; χ² = 20.2, P < 0.05). The incidence of Female Present was reduced when females were antennule ablated (FAT, P = 0.035) or antennule ablated and blindfolded (FAT, P = 0.009) (Fig. 2C). This behavior was also reduced by male antennule ablation (FAT, P = 0.001). Female Rock (Fig. 2D) was reduced in incidence when females were antennule ablated and blindfolded (FAT, P = 0.009); female antennule ablation alone did not significantly reduce the occurrence of this behavior (P = 0.083). Female Rock also occurred less frequently when males were antennule ablated and blindfolded (FAT, P = 0.009). Male or female nephropore occlusions or blindfolding had no significant effect on either female courtship behavior.
Initiation of Pair Formation occurred frequently (80% of trials) in the intact control group (Fig. 2E). There were significant overall differences between groups in the occurrence of this behavior (χ² = 34.8, P < 0.05). It occurred significantly less often than in the control group when males were antennule ablated (FAT, P = 0.007), while the reduction for antennule-ablated and blindfolded males approached statistical significance (P = 0.062). Examination of stable pairing at the trials' conclusions showed significant overall differences between treatment groups (χ² = 31.36, P < 0.05). Fewer pairs were stable (Fig. 2F) if the males were antennule ablated (FAT, P = 0.016) or antennule ablated and blindfolded (FAT, P = 0.002). The incidence of stable pairing was also reduced when male nephropores were occluded (FAT, P = 0.016). This was the only significant effect observed with any nephropore occlusion.
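The pairwise comparisons reported here rest on Fisher exact tests of 2 × 2 tables. As a check, the Male Strike result reported earlier (2 of 9 striking males vs. 9 of 13 non-striking males formed stable pairs, FAT P = 0.040) can be recomputed directly from the hypergeometric distribution; this is a sketch, not the authors' code.

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """Left-tail Fisher exact test for the 2 x 2 table [[a, b], [c, d]]:
    the probability of observing a count <= a in the top-left cell under
    the hypergeometric null with fixed row and column totals."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    lo = max(0, col1 - (n - row1))  # smallest feasible top-left count
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(lo, a + 1)) / denom

# Male Strike x stable pairing:
# [[strike & stable, strike & not stable], [no strike & stable, no strike & not stable]]
p = fisher_exact_one_sided(2, 7, 9, 4)
```

The one-sided left-tail value comes out near 0.0402, matching the reported P = 0.040, which suggests the published value corresponds to a one-sided test; a two-sided version would also accumulate the opposite tail and be roughly twice as large here.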
An examination of the mean time between a trial's start and the first observed behavior (Fig. 3A) showed significant differences between treatment groups (ANOVA F = 2.73, P = 0.009). The mean time to first behavior was significantly less than the control group when males were blindfolded (t = 2.97, P = 0.026), when males were blindfolded and antennule ablated (t = 2.28, P = 0.032), and when females were antennule ablated (t = 3.69, P = 0.001). Overall differences were found (ANOVA F = 2.29, P = 0.030) when the time between trial start and Initiation of Pair Formation was evaluated (Fig. 3B). In this comparison only the female antennule-ablated trials showed a significant reduction in time (t = 3.90, P = 0.001). Time differences between the male blindfolded group and the intact controls closely approached significance (t = 2.01, P = 0.06), while those for the male blindfolded and antennule ablated group were not significant (t = 1.46, P = 0.170).
Discussion
Arthropod behavior has generally been considered stereotyped. Studies of some insects, such as many moth species, have demonstrated stereotypic courtship behavior: specific chemical signals elicit specific and predictable responses (Kaissling, 1979; Charlton and Carde, 1990). Other insect species have shown greater flexibility, with individuals basing their behavioral responses upon current conditions and context (Carlson and Copeland, 1978; Dejean, 1987).
Similarly, the behavior of many crustacean species is not based upon stereotyped responses but instead shows great plasticity and can be modified as context changes (Ra'anan and Cohen, 1984; Elner and Beninger, 1995; Hazlett, 1995). The current study demonstrates such flexibility in Callinectes sapidus courtship behavior. Courtship is variable in that no single behavior must occur, nor does any behavior invariably lead to successful pairing. No single behavior occurred more than approximately half the time, yet the odds of successful pairing remained high. This suggests that courtship follows multiple behavioral pathways, all potentially leading to successful pair formation. Such flexible courtship would be useful for both males and females in a species that mates in a fluctuating estuarine environment. With intense male competition for females (Jivoff, 1997b) and only one chance for females to receive sperm, it maximizes the chances of an encounter producing pair formation, with eventual mating and reproductive success.
However, blue crab mating behavior is not without constraints and regulation. In the intact control group most pairs displayed some courtship behaviors prior to pair formation, and male agonistic behavior reduced the likelihood of stable pairing. This demonstrates the importance of controlling male aggression during courtship and, together with the treatment trials, illustrates the role that communication signals often serve in this regard (Tinbergen, 1953). For blue crabs, the most likely path to successful pairing, and therefore successful reproduction, involves courtship and reduced male aggression.
The treatment trials suggest behavioral regulation through chemical communication signals and that both female and male chemical signals play important roles in courtship and pairing. Males with ablated antennules showed reduced instances of Male Display, Initiation of Pair Formation, and Stable Pair Formation. For the male, loss of distance chemoreception affected behavioral expression and directly reduced courtship success. The relevant chemical information did not seem to reside solely in female urine, however, because females with occluded nephropores induced male behaviors at frequencies similar to intact controls. Although the results were less clear, females also appeared to exhibit fewer instances of courtship behaviors when their antennules were ablated, while pairing initiation or stability was unaffected. The physical act of pairing is initiated by the male, and evidently an antennule-ablated female is still attractive to males. However, an unreceptive female can likely flee and decline pairing in the wild. Blocking male urine release had no effect on female courtship behaviors, again suggesting that the relevant chemical compounds are not restricted to urine.
It is now generally recognized that many chemical signals are mixtures or blends and thus can serve as multiple or redundant signals (van den Hurk and Lambert, 1983; Vetter and Baker, 1983; Linn et al., 1984). In blue crabs and other brachyurans, a chemical signal in female urine that induces male courtship behavior has been well described (Ryan, 1966; Gleeson, 1980; Seifert, 1982; Bamber and Naylor, 1997). The present study does not refute the existence of this signal, but rather suggests urine is only one source of courtship signals and is not obligatory for the initiation of male or female courtship behaviors. There appears to be chemical information from non-urine sources capable of eliciting the same behaviors when nephropores are occluded. It is only when all chemical signals are lost through antennule ablation that behavior is negatively affected. These statements appear at odds with Ryan's (1966) work showing no male responses to seawater that had contained nephropore-blocked premolt Portunus sanguinolentus females. It may be that the relevant female P. sanguinolentus signal is sent only in urine. In addition, the females in Ryan's study were isolated in 8-L buckets during signal release, while females in the current study were placed in larger tanks in the presence of a male. This more naturalistic behavioral context may have elicited female non-urine signal release and male responses not seen in the earlier study. Lastly, Ryan used molten paraffin rather than glue as blocks; this may have affected the animals differently from the blocks used here. These apparent interspecific differences in behaviors and signals should be more closely examined.
Blue crab courtship thus appears regulated by female and male concurrent chemical signals emanating from multiple sources. It is unknown if the concurrent signals demonstrated here are different compounds or if they are the same compound released at different sites. This knowledge awaits the purification and structural description of these chemical courtship signals. The release sites of the non-urine chemical compounds are likewise unknown. In lobsters (Homarus americanus), the gill current has been implicated as a method for transporting chemical signals to a receiver (Atema, 1985). Because blue crabs possess a similar current, it is possible that the gills themselves or structures within the gill cavity are sources of chemical signals. Tegumental glands, found in blue crabs and other arthropods (Johnson, 1980; Talbot and Demers, 1993), have been suggested as chemical signal sources in several crustacean species (Berry, 1970; Kamiguchi, 1972; Bushmann and Atema, 1996) and also may play a role here.
Loss of chemical signals in some instances had indirect effects on behavior. Males were less aggressive toward antennule-ablated females. Ablation evidently alters either female behavior or her signaling patterns in a way that affects male agonistic behavior. Similarly, female courtship behaviors were reduced when male chemical reception was impaired. Male antennule ablations must alter male behaviors or communication signals in a way that makes them less attractive to females and less capable of inducing female courtship behavior. This is consistent with field work (Gibbs, 1996) demonstrating that antennule-ablated males in crab traps are less able to attract prepubertal females.
There is evidence for an obligatory male urine-based signal involved in pair maintenance during precopulatory guarding. When male nephropores were occluded, initiation of pair formation was not affected yet there was reduced incidence of stable pairing. This was the only evidence for a urine-based signal in this study. However, female antennule ablation did not reduce the incidence of stable pair formation. It is possible that the direct contact involved in a cradle carry produces other avenues for signal reception, such as contact chemoreceptors on the dactyls or elsewhere on the exoskeleton (Fuzessery and Childress, 1975). Although the observed reduction in stable pairing could have resulted from some male trauma associated with the occlusion procedure, occluding females produced no such pattern, and blue crabs and lobsters appear capable of suspending urine release for periods of several hours without ill effect (Bushmann, unpub. data; Breithaupt and Atema, 1993).
Visual signals seem to play no role in influencing courtship behaviors or outcomes. Blindfolded males and females courted, received courtship, and paired with success rates equal to the intact controls. This is consistent with previous observations for blue crabs and lobsters that visual signals are of secondary importance during social interactions (Gleeson, 1980; Snyder et al., 1993; Kaplan et al., 1993). Thus, the primary function of the male courtship display is likely not transmission of a visual signal. However, it may be an excellent method for transmitting both chemical and hydrodynamic signals to a potential partner. Rotation of the periopods causes a strong and highly turbulent flow of water directed forward of the animal (Gleeson, 1991; Bushmann, unpub. data). This flow would likely entrain any chemical signal emanating from the gills or nephropores. In addition, some crustaceans use hydrodynamic information during agonistic interactions and prey capture (Barron and Hazlett, 1989; Breithaupt et al., 1995). The highly turbulent, directed flow generated by male paddle waving could provide directional or other information to females.
Many aspects of the male courtship display remain unclear. It must have some energetic cost and may draw attention from predators, yet it need not occur for successful pairing and occurred in less than half the observed encounters. In this study its occurrence was not correlated with female premolt stage, the relative sizes of males and females, or pairing success during the encounter. The function of this rather spectacular behavior and the stimuli leading to its initiation require further investigation.
Loss of female chemoreception appeared to accelerate rather than retard pairing. When females were antennule ablated, males showed little agonistic behavior, females exhibited fewer courtship behaviors, and pairs formed more quickly than in the intact control group (Fig. 3B) and remained stable. This is at odds with Gibbs (1996), who found males to be more aggressive toward antennule-ablated females and the time required for pairing to be unaffected. The present study suggests that females use chemical information and courtship behaviors to lengthen courtship periods, perhaps as a way of better evaluating potential partners. Loss of chemical information through female antennule ablation would then result in less female evaluation and faster pairing.
The significant reduction in time until first behavior seen in the male blindfolded group was probably a general behavioral rather than a specific communication effect. Blindfolded males, without visual stimuli, may have been less wary and more likely to begin moving about the pool after trial start. This male movement would result in more rapid encounters with females. The time until Initiation of Pair Formation was not significantly shortened, however (Fig. 3B), and blindfolding had no effect on any measured behavior.
Several studies have shown that lateral antennule ablation affects behavior by interfering with chemical reception (Ache, 1975; Ameyaw-Akumfi and Hazlett, 1975; Gleeson, 1980; Cowan, 1991). However, in any ablation experiment there is always a question of false-negative responses due to a general dampening of behavior caused by the procedure itself (Dunham, 1978). In the present study, while ablated males showed reduced reproductive behaviors, agonistic responses were unaltered. Antennule-ablated females, while not exhibiting many courtship behaviors, were nonetheless courted and carried by males. These ablations appeared to affect certain reproductive behaviors, presumably those dependent upon chemical signals, rather than causing a general reduction in behavioral responses.
A second potential problem concerns the blocks applied to the nephropores to prevent urine release. Correct interpretation of results depends upon an effective block. Several lines of evidence suggest that these blocks prevented urine release. First, they are the initial step in the attachment of a urine cannula; this cannula can collect urine from blue crabs for several days without leaking (Bushmann, unpub. data). Second, three urine-blocked animals were held after their trials. These individuals were swollen from fluid retention within 6 h and died within 12 h. Lastly, the water from four blocked animals held individually in 2-L tanks showed reduced ammonia levels compared to water from four unblocked crabs (Bushmann, unpub. data). Ammonia levels from blocked crab water were not zero, because ammonia is also excreted across the gills (Mantel and Farmer, 1983). Taken together, these observations suggest that the blocks used in this experiment were effective in preventing urine release.
In summary, Callinectes sapidus courtship illustrates both behavioral plasticity and the importance of behavioral regulation through a signaling system. The concurrent and seemingly redundant chemical signals discussed here may be different compounds or the same compound released from different sites. Chemical rather than visual signals from both male and female seem to play crucial roles in courtship and pairing. Although these signals influence the initiation of behaviors and pairing success, there appear to be many different pathways leading to pairing success, and no single behavior and perhaps no single signal is necessary for pairing success. Courtship behaviors and chemical signaling may operate in a more complex and flexible manner than previously demonstrated.
Figure 2. The percentage of trials in which Male Strike (2A), Male Display (2B), Female Present (2C), Female Rock (2D), Initiation of Pair Formation (2E), and Stable Pair Formation (2F) occurred for the intact control and treatment groups. Differences between intact control and treatment groups were evaluated with a Fisher exact test. Stars indicate statistical significance at α = 0.05.
Figure 1. Flow chart showing behavioral pathways from first encounter, through courtship and/or male agonistic behavior, to stable pairing success or failure. The circled numbers represent the number of trials following that particular pathway.
Figure 3. Mean time to first observed behavior (3A) and Initiation of Pair Formation (3B) for the intact control and treatment groups. Bars represent mean ± standard error. Differences between intact control and treatment groups were evaluated with a non-directional t-test. Stars indicate statistical significance at α = 0.05.
DC Power Control Strategy of MMC for Commutation Failure Prevention in Hybrid Multi-Terminal HVDC System
This article presents a control strategy for a modular multilevel converter (MMC) to prevent commutation failure of a line-commutated converter (LCC) in a three-terminal hybrid HVDC transmission system, where one LCC sending end is connected to large generation and two receiving ends (an LCC inverter and an MMC) are located near the load center. This configuration, one of the potential options, has been proposed to strengthen the Korean electric power transmission system through the optimized use of existing assets and rights-of-way, which are extremely challenging to secure. The MMC power control strategy has been developed to regulate the AC voltage and the extinction angle of the LCC inverter. This indirect yet effective active and reactive power control of the LCC inverter terminal helps prevent commutation failure (CF) of the LCC in emergencies and maximizes the benefits of this costly planning option. By establishing a theoretical foundation for this power control problem and the relationships among the control parameters, we quantify the active power reference for the MMC that secures the desired LCC extinction angle. A coordinated strategy has been developed for the AC filter and the on-load tap changer of a transformer, along with the MMC control, to lower the risk of CF and its catastrophic impact on the whole power system. The validity and performance of the proposed control methods are demonstrated for real Korean electric power planning cases using a real-time power system simulator.
I. INTRODUCTION
In modern power systems, the line-commutated converter (LCC) high voltage DC (HVDC) system has been widely employed around the world for long-distance and bulk power transmission. Special attention has been paid to the LCC HVDC interface with weak AC systems, as LCC HVDC consumes massive reactive power when operating with high active power. The reactive power consumption of LCC HVDC may adversely affect AC voltage security [1]-[8]. In this case, where the LCC HVDC operates with large reactive power consumption and the grid is not strong enough to tightly regulate the AC voltage, the risk of commutation failure (CF) increases and threatens the whole system security [2]. In terms of CF, an increase in DC current and a decrease in the extinction angle may lead to CF due to the thyristor turn-off characteristics [2], and this CF can result in a shutdown of the LCC HVDC system and voltage instability on the AC side [1]. Several indices for risk evaluation have been developed and reported to assess the stability of AC grids in the planning stage of LCC HVDC systems [9]-[16]. Besides, cascaded CF concerns were investigated for large-scale multi-infeed HVDC systems [13]. (The associate editor coordinating the review of this manuscript and approving it for publication was Dragan Jovcic.)
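The sensitivity of CF risk to DC current and AC voltage can be illustrated with the standard quasi-steady-state commutation relation cos γ = cos β + 2·Xc·Id/(√2·V_LL), where γ is the extinction angle, β the extinction advance angle, Xc the commutating reactance, Id the DC current, and V_LL the line-to-line commutation voltage. The sketch below is illustrative only; the per-unit values (β = 38°, Xc chosen so that the nominal γ is about 17°) are assumptions, not parameters from this article.

```python
import math

def extinction_angle(beta_deg, xc, i_d, v_ll):
    """Quasi-steady-state inverter extinction angle (degrees):
    cos(gamma) = cos(beta) + 2 * xc * i_d / (sqrt(2) * v_ll)."""
    cos_g = math.cos(math.radians(beta_deg)) + 2.0 * xc * i_d / (math.sqrt(2) * v_ll)
    if cos_g >= 1.0:
        return 0.0  # commutation margin fully consumed
    return math.degrees(math.acos(cos_g))

# Illustrative per-unit operating point (assumed values).
BETA, XC = 38.0, 0.1188
gamma_nominal = extinction_angle(BETA, XC, i_d=1.0, v_ll=1.0)   # ~17 deg
# A 10% AC voltage dip combined with a 10% DC current rise erodes the margin
# below a typical ~7 deg thyristor turn-off requirement -> high CF risk.
gamma_stressed = extinction_angle(BETA, XC, i_d=1.1, v_ll=0.9)  # ~6.6 deg
```

Raising V_LL (for example, through reactive support or OLTC action) or lowering Id restores γ, which is the mechanism that extinction-angle-based CF countermeasures exploit.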
To handle the CF, various solutions have been developed for the safe operation of LCC HVDC systems [17]-[30]. Voltage-dependent current-order limit strategies are introduced to mitigate the CF by reducing the magnitude of the DC current [22]-[29]. In addition, the capacitor-commutated converter topology can be applied to prevent the CF [19]-[21]. In general, the method of increasing the extinction angle is widely adopted for CF prevention [30]. This method commonly includes setting a minimum limit on the extinction angle to secure a margin and adjusting the on-load tap changer (OLTC) in a transformer for a high extinction angle.
These solutions focus on increasing the extinction angle to mitigate the risk of CF; however, they may inevitably face adverse impacts such as slow response time (slow switching behavior of AC filters and the OLTC) or an AC voltage drop caused by the reactive power consumption of the LCC HVDC system [2]. Furthermore, these solutions are limited in that they may need to reduce the amount of power transmission in consideration of AC grid stability. In this article, we thus present a method of controlling the extinction angle to mitigate the risk of CF without reducing power transmission to the load center.
A coordinated operation among multiple HVDC lines, called a multi-infeed system, has been developed to improve transmission capacity and system stability [9]-[12], [15], [16], [31], [32]. Recently, several utilities have carried out preliminary technical investigations to cost-effectively upgrade and exploit existing LCC HVDC systems by tapping a new MMC station onto the LCC HVDC system, which is called a hybrid MTDC system [33]-[39]. The major advantage of this hybrid MTDC system is that it takes advantage of both the LCC and the MMC [40], resulting in a completely different set of characteristics from the LCC HVDC system itself due to the MMC terminal [33], i.e., rapid and flexible operation and power redirection [41]. This hybrid MTDC system can also be a cost-saving solution for large power transmission from a wind farm in a remote area, compared to adding LCC HVDC lines for a multi-infeed HVDC system [33], [34]. The hybrid MTDC topology is considered feasible, but practical issues, including DC faults, remain to be further investigated for implementation. As adopted in this study, a full-bridge MMC with DC fault-blocking capability would be preferred [42], [43]. Studies on, for example, the DC fault ride-through strategy and a proper fault clearing and recovery process for the hybrid MTDC system in [35], and on the performance of the same hybrid MTDC topology under DC faults in [44], should be beneficial and useful. These research efforts on the hybrid MTDC have also investigated operation strategies, which, however, do not directly address the critical CF problems.
This article thus proposes a strategy to mitigate the CF risk on the LCC inverter side using MMC power control in a hybrid MTDC structure, in cases where the MMC station is in the same area as the LCC inverter. Since the MMC is jointly connected with the LCC inverter on the AC and DC side, the MMC can flexibly control both the AC voltage drop and the DC current of the LCC inverter, implying control of the firing and extinction angle. The technical advantages and contributions of the proposed control scheme are summarized as follows.
• Regulating reactive power consumption by indirectly adjusting the active power of the LCC inverter through the fast control of the MMC
• Establishing a theoretical and practical framework for power flow control in the hybrid MTDC: a link between the extinction angle and the DC power to the MMC, and an effective control scheme in emergencies
• Lowering the risk of commutation failure and stabilizing the grid voltage and the MTDC by coordinating the OLTC and the AC filters along with the MMC control

The paper is organized as follows. Section II describes the background and operation of the topology. Section III introduces how to regulate the extinction angle and the AC voltage through DC power control of the MMC. In Section IV, the simulation results verify the efficacy of the proposed method.
A. CONFIGURATION OF HYBRID MULTI-TERMINAL HVDC SYSTEM
Tapping a new MMC station onto the existing LCC HVDC system forms the multi-terminal DC system shown in Fig. 1. In South Korea, the existing transmission network, including high voltage AC (HVAC) lines and LCC HVDC lines, delivers tremendous power from the East Coast area to massive load areas. The HVAC and LCC HVDC lines are prepared to perform ramp up/down operations under contingency conditions of the network [45]. For instance, the LCC HVDC lines immediately increase the amount of transmitted power when one of the HVAC lines is tripped. In addition, a Special Protection Scheme (SPS) plays an important role in these conditions. Since the generators on the East Coast are critical to supplying the overall power demand of South Korea, the SPS is essential to prevent damage from the acceleration of these generators in case of a transmission network failure.
According to the announced Korean government's operation plan, a new nuclear power plant and thermal power plant will be added to the East Coast. As such, there is a growing need to enhance the transmission network for reliable power delivery. South Korea's transmission system operator (TSO) has pursued the construction of a new HVAC line. However, this project is suspended due to social responsibility issues. Instead, the installation of a new DC transmission system is currently ongoing.
As an encouraging solution, the MTDC topology is being prepared to improve the connectivity with existing HVDC lines and the flexibility of the network. Instead of building a new DC line including two MMC stations, tapping an MMC onto the DC side is investigated to upgrade and exploit the existing LCC HVDC system cost-effectively. This topology features two receiving ends, consisting of an LCC inverter and an MMC located in the load center, so part of the transmitted power from the LCC rectifier can be bypassed to the HVAC network through the MMC. As shown in Fig. 2, the LCC inverter and the MMC divide the power transmitted from the LCC rectifier:

P_demand = P_inv + P_mmc    (1)

where P_demand is the transmitted power from the LCC rectifier, P_inv is the absorbed power in the LCC inverter, and P_mmc is the absorbed power through the tapped MMC station. The receiving power of the LCC inverter is defined as

P_inv = v_di(t) · i_di(t)    (2)

where v_di(t) is the DC voltage and i_di(t) is the DC current of the LCC inverter. Most importantly, if the power from the LCC rectifier P_demand is constant in (1), the higher the power received through the MMC, the less power is delivered to the LCC inverter. Accordingly, the DC power flow of the MMC is determined as follows:

P_mmc = P_demand − P_inv    (3)

It should be noted that the amount of receiving power of the LCC inverter affects the AC voltage drop and the reactive power consumption on the LCC inverter side. Also, since the reactive power consumption in the LCC inverter varies depending on the extinction angle, the investigation is carried out considering the operating point of the LCC HVDC [46].
C. OPERATING POINT OF HYBRID MULTI-TERMINAL HVDC SYSTEM
Regarding the operation of the LCC HVDC system, the most common control mode of the LCC rectifier is the constant DC current (CC) mode, and the DC voltage of the LCC rectifier is then determined as follows:

v_dr(t) = v_ac,r cos α(t) − I_dr R_cr    (4)

where v_ac,r is the rectifier-side line-to-line AC voltage, α(t) is the ignition angle of the LCC rectifier, R_cr is the equivalent commutating resistance, and I_dr is the DC current reference for CC mode. The CC mode is illustrated as the gray line parallel to the y-axis in Fig. 3. The second mode for the LCC rectifier is to maintain a constant ignition angle (CIA):

v_dr(t) = v_ac,r cos α_a − i_dr(t) R_cr    (5)

where α_a is the reference ignition angle for the LCC rectifier and i_dr is the DC current of the LCC rectifier. The CIA mode is illustrated as the gray line with a slope in Fig. 3. The most common control mode for the LCC inverter is constant DC voltage (CV) control, and the DC voltage of the LCC inverter is obtained as follows [2], [6]:

v_di(t) = v_ac cos γ(t) − i_di(t) R_ci = V_dc    (6)

where v_ac is the inverter-side line-to-line AC voltage, γ(t) is the extinction angle, R_ci is the equivalent commutating resistance, and V_dc is the DC voltage reference for CV mode. Under CV mode, the extinction angle decreases as the DC current increases so as to regulate the DC voltage constantly. This mode is illustrated as the blue line parallel to the x-axis in Fig. 3.
The second mode aims to maintain a constant extinction angle (CEA). Under the CEA mode, the DC voltage decreases as the DC current increases so as to regulate the extinction angle constantly:

v_di(t) = v_ac cos γ_ref − i_di(t) R_ci    (7)

This mode is illustrated as the blue line with a slope in Fig. 3. As the extinction angle increases (or decreases), the CEA line moves downward (or upward). Consequently, the DC current flowing into the LCC inverter is defined as

i_di(t) = (v_ac cos γ(t) − V_dc) / R_ci    (8)

The CV mode is widely adopted among LCC HVDC operators. Note that the extinction angle increases as the DC current decreases under CV mode, as manifested in (8). The DC current of the LCC inverter thus decreases when the MMC delivers more power; the MMC can indirectly control the extinction angle.
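Relation (8) ties the extinction angle to the DC current under CV mode. As a quick numerical check (all values below are assumed per-unit figures for illustration, not parameters taken from this paper), the implied extinction angle can be computed and confirmed to grow as the DC current falls:

```python
import math

def extinction_angle_deg(v_ac, V_dc, i_di, R_ci):
    # CV mode: V_dc = v_ac*cos(gamma) - i_di*R_ci
    # =>  gamma = acos((V_dc + i_di*R_ci) / v_ac)
    return math.degrees(math.acos((V_dc + i_di * R_ci) / v_ac))

# Assumed per-unit operating point (illustrative only)
v_ac, V_dc, R_ci = 1.0, 0.9, 0.05
g_full = extinction_angle_deg(v_ac, V_dc, 1.0, R_ci)  # rated DC current
g_low = extinction_angle_deg(v_ac, V_dc, 0.8, R_ci)   # reduced DC current
```

With these numbers, reducing the DC current from 1.0 to 0.8 p.u. enlarges the extinction angle, which is exactly the mechanism the MMC exploits by absorbing part of the rectifier power.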
Based on (2) and (3), the operating point of MMC is determined according to the DC current of LCC inverter in (8). The detailed MMC operating characteristics for implementing the proposed control are described in Section III.
III. PROPOSED EXTINCTION ANGLE AND AC VOLTAGE CONTROL METHOD

A. EXTINCTION ANGLE CONTROL
Regarding the risk of CF, a higher extinction angle provides a more stable condition. The extinction angle can be extended as the DC current of the LCC decreases, as in (8). Based on (1) and (2), the DC power flow control of the MMC can reduce the active power of the LCC inverter. In other words, with increased power through the MMC, the LCC inverter receives less DC current. Thus, the reduced DC current of the LCC inverter extends the extinction angle as in (8). Consequently, we can obtain a margin on the extinction angle through the DC power flow control of the MMC. The reference for the extinction angle is as follows:

γ_ref = γ_min + γ_margin    (9)

where γ_ref is the reference value for extinction angle management, γ_min is the minimum extinction angle including the preset margin in consideration of AC disturbances, and γ_margin is the margin angle obtained by the proposed DC power control of the MMC, as shown in Fig. 4(a). Therefore, the DC current reference including (9) is

i_di,ref = (v_ac cos γ_ref − V_dc) / R_ci    (10)

where i_di,ref is the reference value for the DC current to control the extinction angle following the reference γ_ref, including the margin γ_margin in (9).
Combining (1) and (3), the receiving power and DC current of the LCC inverter decrease when the MMC receives as much power as P_mmc. Therefore, the receiving power of the LCC inverter can be represented as the blue rectangle, and the receiving power of the MMC as the purple rectangle, in Fig. 4(a).
Also, the extinction angle is controlled to γ_ref when the DC current of the LCC inverter is i_di,ref, as depicted in Fig. 4(a). As a consequence, the operating point of the MMC can be obtained as follows:

P_mmc = P_demand − V_dc · i_di,ref    (11)

We can thus quantify the active power order for the MMC to control the extinction angle to γ_ref, including the margin γ_margin. Also, the operating point of the LCC HVDC considering the proposed MMC power control can be determined by (10) and (11), as shown in Fig. 4(b). Note that the characteristic curve of the LCC inverter in Fig. 4(b) moves right along the x-axis by as much as the MMC takes, and the intersection with the curve of the LCC rectifier becomes the operating point of the hybrid MTDC system.
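To make the chain (10)-(11) concrete, the sketch below quantifies the DC current reference for a desired extinction angle under CV mode and the MMC power order that realizes it. The per-unit parameters are illustrative assumptions, not values from the paper:

```python
import math

# Assumed per-unit system parameters (illustrative only)
v_ac, V_dc, R_ci = 1.0, 0.9, 0.05
P_demand = 0.9  # power sent by the LCC rectifier, p.u.

def i_di_ref(gamma_ref_deg):
    # (10): DC current that yields the desired extinction angle under CV mode
    return (v_ac * math.cos(math.radians(gamma_ref_deg)) - V_dc) / R_ci

def p_mmc_order(gamma_ref_deg):
    # (11): the MMC absorbs whatever the LCC inverter must give up
    return P_demand - V_dc * i_di_ref(gamma_ref_deg)
```

Raising the extinction angle reference lowers the LCC inverter current and therefore raises the MMC power order, matching the monotonic relationship described in the text.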
B. MMC-ASSISTED GAMMA-KICK FUNCTION
This method can be applied during the AC filter switching. When a step-wise operation occurs in a capacitor bank of AC filters, an instantaneous fluctuation occurs on the extinction angle of the LCC inverter [47], [48]. For instance, when the steady-state AC voltage is higher than the rated value, a capacitor bank is turned off to lower the AC voltage. Following this, the AC voltage drops immediately at the moment of switching. The extinction angle can temporarily decrease below the minimum angle, which may lead to CF in severe cases.
To counter this undesired phenomenon, the conventional gamma-kick function averts the negative impact of the AC filter switching in advance [18]. The gamma-kick function pre-adjusts the extinction angle before switching to ensure that the angle remains in the normal range during the switching, as shown in Fig. 5. This function should be coordinated with a well-planned sequence of the mechanical switching operations of the AC filter and the OLTC. A thorough investigation of communication delay and preparation time to ensure an exact process for safe switching is required in practice. Before the AC filter switching, the receiving end power (LCC inverter power) decreases to adjust the extinction angle. After the AC filter switching, the receiving end power returns to the nominal operating point.
The limitations of the conventional gamma-kick function are that the power delivery from the sending end to the receiving end power must be reduced while the function is being performed and that the extinction angle is maintained near the minimum value with a higher risk of CF until the OLTC is operated as shown in Fig. 5.
Basically, the OLTC is activated to regulate the extinction angle. The discrete operation of the OLTC can be represented through T(n) in the inverter-side line-to-line voltage v_ac(t) as follows [3], [47]:

v_ac(t) = B · T(n) · E_LL    (12)

where B is the number of bridges in the LCC inverter (a four-bridge converter is considered in this research), T(n) is the converter transformer tap ratio, which depends on the discrete tap position n, and E_LL is the line-to-line voltage at the AC grid side. In this research, the step size of the tap is 0.0125 with ±16 positions, and the ratio ranges between 0.8 and 1.2 p.u. The discrete tap position changes to maintain the extinction angle in the normal range with a certain dead-time.
The tap position is updated according to

n = { n_0 + 1,  γ(t) < γ−_normal
      n_0 − 1,  γ(t) > γ+_normal
      n_0,      otherwise }    (13)

where n_0 is the previous position of the OLTC, and γ−_normal and γ+_normal are the minimum and the maximum limits of the normal range, respectively.
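The dead-banded tap update can be sketched as below. The step direction here assumes that raising the tap ratio raises the converter-side AC voltage and hence, under CV mode, the extinction angle; the thresholds are illustrative values:

```python
def tap_ratio(n):
    # Tap step of 0.0125 over +/-16 positions, so the ratio spans 0.8-1.2 p.u.
    return 1.0 + 0.0125 * n

def update_tap(n0, gamma_deg, g_lo=17.0, g_hi=21.0):
    # Dead-banded discrete update: move one position only when the
    # extinction angle leaves the normal range [g_lo, g_hi].
    if gamma_deg < g_lo:
        return min(n0 + 1, 16)   # raise tap ratio -> raise gamma (assumed sign)
    if gamma_deg > g_hi:
        return max(n0 - 1, -16)  # lower tap ratio -> lower gamma (assumed sign)
    return n0
```

The dead-band keeps the mechanism idle while the angle is in the normal range, which is what limits wear and tear on the gear in steady state.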
The extinction angle can thus be controlled by adjusting T(n), the ratio of the OLTC in (12). However, the adjustment of the OLTC is performed by the mechanical movement of gears, so stress (including wear and tear) on the equipment is inevitable. Besides, since the operation delay of the OLTC amounts to a few seconds, it cannot finely adjust the extinction angle immediately.
In this regard, we propose an MMC-assisted gamma-kick function based on the extinction angle control method utilizing the active power of the MMC as in (11). As shown in Fig. 6, it aims to pre-adjust the extinction angle before the AC filter switching through active power control of the MMC to prevent the transient phenomenon. After the AC filter switching and before the OLTC operation, which typically takes a few seconds, the MMC gradually reduces its active power to maintain the extinction angle above the minimum angle and then returns to the nominal operating point, as shown in Fig. 6. The proposed scheme does not compromise the desired power delivery. In addition, regulating the extinction angle in the normal range helps avoid unnecessary OLTC operations. The proposed MMC-assisted gamma-kick function thus helps coordinate operations among the AC filter, the OLTC, and the MMC control. Successful operation indeed depends on a sufficiently fast and reliable communication and control architecture: upon detecting AC voltage variations, new orders for the MMC and the LCC are calculated and then dispatched simultaneously. For practical implementation, rigorous investigation beyond the scope of this article is required to synchronize the MMC and the LCC in a complementary way.
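The sequencing of the MMC-assisted gamma-kick can be sketched as a simple power-order schedule. The timing constants and power levels below are placeholders for illustration; in practice they depend on the AC filter and OLTC delays:

```python
def mmc_kick_order(t, t_switch, p_nom, p_kick, t_pre=1.0, t_hold=2.0):
    # Assumed timeline: raise the MMC power order t_pre seconds before the
    # AC filter switching at t_switch, hold it through the OLTC delay
    # (t_hold), then return to the nominal operating point.
    if t_switch - t_pre <= t < t_switch + t_hold:
        return p_kick
    return p_nom
```

Because the extra power is absorbed by the MMC rather than curtailed at the sending end, total delivery to the load center is unchanged throughout the kick, which is the key difference from the conventional gamma-kick.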
C. AC VOLTAGE REGULATION
The reactive power consumption in the LCC inverter is represented as follows [2]:

Q_inv = P_inv tan φ    (14)

where Q_inv is the reactive power consumption in the LCC inverter and φ is the power factor angle of the LCC inverter. Based on this, the reactive power consumption of the LCC inverter is derived in (15)-(17). The proposed method can control the active power of the LCC inverter through the power control of the MMC and, accordingly, the reactive power of the LCC inverter as in (15) and (17). In other words, the AC voltage on the LCC inverter side can be regulated by controlling the active power of the MMC with the proposed method.
Considering (3) and (17), the reactive power consumption in the LCC inverter station can be represented as in (18). It follows that the reactive power of the LCC inverter can be regulated by the DC power flow control through the MMC, as in (18).
The DC current reference for the LCC inverter is derived as in (19) based on (8). Combining (2) and (19) yields (20)-(21), where K_p is the proportional coefficient. We present a control method in which the AC voltage on the LCC inverter side is regulated by adjusting the active power of the MMC as in (20). In order to verify the efficacy of the proposed method, we empirically obtain the steady-state relationship between the active power of the MMC and the AC voltage on the LCC inverter side through numerous repeated simulations, as shown in Fig. 7. Note that the AC filters' operation to compensate for the reactive power of the LCC inverter is disabled in order to investigate the effects of the DC power flow control of the MMC only. As shown in Fig. 7(a), the extinction angle tends to increase as the active power of the MMC increases, as in (11); however, it cannot change linearly due to the discrete operation of the OLTC, represented as T(n) in (12). Note that the term given by the product of the inverter-side line-to-line voltage and the cosine of the extinction angle responds linearly to the variation in the MMC active power, as shown in Fig. 7(b). It should also be noted that the slope of the lines in Fig. 7(b) is determined by K_p in (21). Therefore, the magnitude of the AC voltage varies depending on R_ci, which is pre-determined by the reactance components of the AC network and the converter transformers. Accordingly, it is shown that proportional control by DC power control of the MMC can be applied to this hybrid MTDC system as in (20) and (21).
Comprehensively, the relationship between the MMC active power, the extinction angle, and the AC voltage on the LCC inverter side is shown in Fig. 8. It is verified that as the active power of the MMC increases, the extinction angle and the AC voltage on the LCC inverter side increase.
Since the deviation of the cosine value of the extinction angle is negligible compared to the magnitude of the AC voltage in (20), the proportional controller can be designed as follows:

P_mmc = K_p (v*_ac − v_ac)    (22)

where v*_ac is the reference for the inverter-side line-to-line voltage.
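A minimal sketch of this proportional correction, with an assumed gain and setpoints: when the inverter-side AC voltage sags below its reference, the MMC power order is raised, which by the steady-state relationship of Fig. 7 lifts the voltage back:

```python
def mmc_power_correction(p_nom, v_ac_meas, v_ac_ref, K_p=2.0):
    # Proportional law in the spirit of (22): the MMC power deviation is
    # proportional to the AC voltage error (K_p is an assumed gain, p.u.).
    return p_nom + K_p * (v_ac_ref - v_ac_meas)
```

The sign convention follows the empirical result of Fig. 7: more MMC power raises the inverter-side voltage, so a positive voltage error produces a positive power correction.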
Including (11) and (20), the overall control system for the MMC can be designed as shown in Fig. 9. The active power control loop of the MMC can be composed of the extinction angle control and remote (LCC inverter side) AC voltage control. The proposed MMC-assisted gamma-kick function described in Fig. 6 will be performed at the operators' option.
It is worth noting that the active power of the MMC should be coordinated with the LCC HVDC system so as not to be faster than the response of the LCC HVDC system. This limit is predetermined in the planning stage according to the national reliability performance standards, particularly to ensure the transient stability of the whole system. Therefore, the rate of change of active power is limited as

ΔP_mmc/Δt ≤ ΔP_LCC/Δt    (23)

where ΔP_LCC/Δt is the rate of change of active power of the existing LCC HVDC system. In addition, since the rated capacities of the LCC HVDC system and the MMC are not identical in this research, hard limiters are needed in the current control unit as follows:

i_di,min ≤ i_d,ref ≤ i_di,max    (24)

where i_d,ref is the reference for the d-axis current of the MMC controller, and i_di,min and i_di,max are the minimum and maximum current limits of the LCC inverter, respectively. The reactive power control loop, the local AC voltage control loop, and the inner controller are set identically to the existing control system of the MMC, as shown in Fig. 9.
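The coordination limits just described amount to a slew-rate limiter on the MMC power order plus a hard clamp on the current reference; a minimal sketch with assumed limits:

```python
def slew_limit(p_prev, p_order, dp_dt_max, dt):
    # Rate-of-change limit: the MMC power order must not ramp faster
    # than the response of the LCC HVDC system (dp_dt_max assumed, p.u./s).
    max_step = dp_dt_max * dt
    return p_prev + max(-max_step, min(max_step, p_order - p_prev))

def clamp_current(i_ref, i_min, i_max):
    # Hard limiter on the d-axis current reference of the MMC controller.
    return max(i_min, min(i_max, i_ref))
```

Applying the slew limit before the clamp keeps the MMC inside the operating envelope of the existing LCC link while still tracking the power order.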
A. COMMUTATION FAILURE PREVENTION
This section demonstrates the effect of the proposed method in the hybrid MTDC system using a real-time power system simulator. The system parameters are summarized in Table 1.
To verify the effect of mitigating CF, we evaluate the hybrid MTDC system with the proposed method through two indices. The commutation failure immunity index (CFII) is an indicator of how robust the system is against CF [15], [16]. The CFII is represented as

CFII = (v_ac² / Z_fault) / P_dc × 100%    (25)

where Z_fault is the fault impedance and P_dc is the rated DC power. It indicates that the larger the CFII, the more robust the system is against CF. The CF mitigation effect of the proposed method is verified by comparing the CFII of the hybrid MTDC system. For a three-phase resistive fault, the CFII of the hybrid MTDC with the proposed method is higher than that without the method, as shown in Fig. 10. Based on (22), the proposed method indirectly regulates the AC voltage on the LCC inverter side through DC power control of the MMC. Therefore, it is confirmed that the hybrid MTDC system becomes more robust against CF with the proposed method.
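The CFII calculation can be reproduced numerically; the sketch below uses an assumed bus voltage, fault impedance, and DC rating, following the common definition of CFII as the critical fault level divided by the rated DC power:

```python
def cfii_percent(v_ac_kV, z_fault_ohm, p_dc_MW):
    # CFII (%): (v^2 / Z_fault) gives the fault level in MW when v is in kV
    # and Z in ohms; dividing by the rated DC power and scaling by 100
    # yields the index as a percentage.
    return (v_ac_kV ** 2 / z_fault_ohm) / p_dc_MW * 100.0
```

A system that withstands only faults of larger impedance (i.e., more remote faults) has a lower CFII, so the index decreases as the critical fault impedance grows.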
Critical voltage drop (CVD) is another indicator, representing the maximum AC voltage drop without CF. In other words, CVD is the maximum allowable (three-phase) voltage drop that does not cause CF. In this research, the CVD values are obtained through numerous simulations by applying AC voltage reductions at different points on the wave, similarly to [15], [16], [49], [50]. CF occurs for a voltage drop more severe than the CVD. Therefore, operating points where the AC voltage drop is more severe than the CVD lie in the commutation failure region, as shown in Fig. 11. Most importantly, as the active power of the MMC increases, the CVD increases and the safe region is extended. Besides, it is noted that the higher the short-circuit ratio (SCR), the larger the CVD.
A dynamic simulation is performed to demonstrate the CF mitigation effect of the proposed method using a real-time power system simulator. Fig. 12 shows the response of the hybrid MTDC when an AC voltage drop of 0.1 p.u. occurs at the instant of 0.2 s and is cleared after five cycles. When the proposed method is not utilized, CF occurs, as indicated by the dotted line in Fig. 12. Under the proposed method, the hybrid MTDC system can avoid the CF by increasing the active power of the MMC according to the AC voltage drop and simultaneously reducing the active power and DC current of the LCC inverter. Fig. 13 shows the results of the dynamic simulation in Fig. 12 together with the CVD curve (for an SCR of 4) from Fig. 11. Without the proposed method, the operating point reaches the commutation failure region, as shown in Fig. 13. On the other hand, the proposed method increases the active power through the MMC and consequently allows the operating point to remain in the safe region. Therefore, the dynamic simulation results in Fig. 12 correspond with the analysis in Fig. 13, which shows that CF can be avoided through the proposed method. (Fig. 13 caption: the CVD curve of Fig. 11 and the traces of MMC power (Fig. 12(a)) and voltage drop (Fig. 12(d)) for the base case (black line) and the proposed method (blue line).)
B. MMC-ASSISTED GAMMA-KICK FOR AC FILTER SWITCHING
This section demonstrates the application of the proposed MMC-assisted gamma-kick function in the hybrid MTDC system. Two AC filter switching cases for regulating AC voltage under normal operating conditions are investigated using a real-time power system simulator. Fig. 6 shows the principle of the proposed MMC-assisted gamma-kick function based on (11) and (14). Note that the 34 Mvar capacitor bank in the AC filters is switched out at the instant of 1 s, as shown in Fig. 14.
Without the proposed MMC-assisted gamma-kick function, switching out a capacitor bank causes an undesired oscillation in the extinction angle of the LCC inverter, as shown in Fig. 14(d). The extinction angle immediately decreases below the minimum angle value of 17°. In addition, two operating cycles of the OLTC are required to regulate the extinction angle within the normal range after the AC filter switching, as shown in Fig. 14(c). Note that the OLTC operation takes more than a few seconds, though we assume it takes two seconds in this section.
On the other hand, the proposed MMC-assisted gamma-kick function can prevent unnecessary transients when switching AC filters. Before switching out a capacitor bank, the extinction angle is pre-adjusted to 21° through the active power control of the MMC to ensure the safe commutation of the LCC inverter during the AC filter switching. Based on (11), the power order for regulating the extinction angle to 21° is calculated to be 343 MW. Furthermore, the number of OLTC switchings is reduced from two to one with the proposed function, as shown in Fig. 14(c). It is confirmed that this reduces the mechanical stress on the OLTC, supporting stable LCC HVDC system operation.
C. GENERATOR TRIP
When a 100 MW generator near the LCC inverter area is tripped, the impact of the proposed method on AC voltage regulation is investigated. After the generator is tripped (at 0.5 s), a significant fluctuation occurs in the AC voltage on the LCC inverter side, as given in Fig. 15(c). The extinction angle also falls below the minimum angle, as shown in Fig. 15(d). The fluctuation of the extinction angle also causes undesired transients in the active power transfer, as shown in Fig. 15(b).
On the other hand, the proposed control method prevents unnecessary transients when the generator is tripped. The proposed method immediately adjusts the active power of the MMC depending on the AC voltage deviation according to (22). As a result, the oscillation of the AC voltage of the LCC inverter is attenuated, as shown in Fig. 15(c), and the deviation of the extinction angle is alleviated, as shown in Fig. 15(d). The extinction angle remains within the normal range, in contrast to the response without the proposed method, where the angle drops below the minimum angle.
V. CONCLUSION
As a potential option to enhance the transmission network in Korea, tapping an MMC station onto an existing LCC HVDC system creates a hybrid MTDC system. While offering flexibility in controlling power flows via the MMC, the hybrid MTDC still has the same reactive power and AC voltage stability concerns on the LCC inverter side, possibly causing commutation failure (CF). To reduce the risk of CF and ensure the safe operation of the LCC inverter, this article presents an MMC power flow control to regulate the extinction angle of the LCC inverter and the AC voltage under contingency conditions. As demonstrated through rigorous simulation studies, the proposed control strategy can significantly improve the CF immunity, and the MMC-assisted gamma-kick helps the MMC perform harmoniously with the AC filter and the OLTC operations. The findings of this study should be beneficial for ongoing and future hybrid MTDC projects.
Sporadic Endolymphatic Sac Tumor - A Very Rare Cause of Hearing Loss, Tinnitus, and Dizziness
INTRODUCTION
Sporadic endolymphatic sac tumor is a very rare neoplasm derived from the endolymphatic sac, a part of the inner ear located in the dural duplicature in the posterior cranial fossa behind and medial to the labyrinth. The tumor has low malignant potential and is locally destructive and expansive, but non-metastatic. The first case was described in 1984 by Hassard et al. [1], and in 1989, Heffner described 20 cases with different morphologies [2]. The tumor invades adjacent bony and soft tissue structures of the temporal bone (the mastoid, inner ear, and middle ear) and the cerebellopontine angle, including the cranial nerves [3]. It is very rare in the sporadic form, but is more often associated with Von Hippel-Lindau disease, an inherited autosomal dominant disease [4]. This case report describes a sporadic case causing dizziness and, subsequently, hearing loss and tinnitus.
CASE PRESENTATION
A 65-year-old man presented to our tertiary referral center with left-sided tinnitus and hearing loss of several months' duration. Several years earlier, he had suffered from periods of dizziness, which were diagnosed as "vestibular neuritis." Dizziness attacks returned several times during the following years. Audiometry showed an asymmetrical sensorineural hearing loss on the left side, with pure tone hearing thresholds around 30 dB HL at 125-500 Hz, increasing to 70 dB HL at 3 kHz and 90 dB HL at 8 kHz. The speech discrimination score was 46% and stapedial reflexes were absent. Hearing on the right side was age-equivalent. Magnetic resonance imaging (MRI) showed a destructive and locally invasive tumor in the peripheral vestibular system on the left side, expanding into the cerebellopontine angle and inferiorly toward the jugular foramen. The tumor measured 30 mm in the transverse diameter and had characteristic cystic and nodular components, dislocating the vestibulocochlear nerve (Figure 1). MR angiography excluded paraganglioma, and no major feeding vessels were found. Von Hippel-Lindau disease was excluded by a normal eye examination and normal MRI of the spinal cord. Vestibular examination showed no function of the vestibular organ on the left side. The tumor was resected radically by a translabyrinthine approach. Per-operative frozen-section microscopy showed inflammatory tissue, whereas subsequent microscopy showed a mostly cystic, partially papillary tumor invading and destroying adjacent bony tissue structures (Figure 2). Cystic walls displayed several small capillaries and signs of hemorrhage caused by tissue destruction. Epithelial components were stained by immunohistochemistry for cytokeratin 7, EMA, and vimentin, and less so for CK5, GFAP, NSE, and S-100 (Figure 2). Based on these findings, the final diagnosis was papillary cystic endolymphatic sac tumor. The post-operative period was uneventful and uncomplicated, without dizziness or facial nerve paresis. As expected from the approach and accepted by the patient, post-operative deafness occurred on the operated side. The patient was discharged 4 days after surgery. Post-operative MRI performed 2.5 years after surgery showed complete tumor removal and no recurrence; fat had been placed in the drilled mastoid/temporal bone (Figure 3).
DISCUSSION
Sporadic endolymphatic sac tumor is a very rare tumor. Whereas Von Hippel-Lindau's disease is associated with bilateral endolymphatic sac tumors in around 30% of cases, the sporadic form is unilateral [4]. The mean age at diagnosis is 52 years for the sporadic form and 31 years for Von Hippel-Lindau's disease. No sex preference is seen for the sporadic tumor, whereas women are at double the risk in patients with Von Hippel-Lindau's disease [3]. The tumor may present with asymmetrical sensorineural hearing loss with or without tinnitus, dizziness, and facial nerve paresis. An MRI scan is the appropriate diagnostic tool. Although endolymphatic sac tumor has a characteristic appearance on MRI, the final diagnosis is made based on the post-operative histopathology, including immunohistochemical examination. This case report illustrates that dizziness can be the first sign of a tumor in this area, emphasizing the relevance of MRI when this symptom occurs. The patient had attacks and periods of dizziness several years earlier and was erroneously diagnosed with vestibular neuritis, despite recurrent symptoms. As in the present case, long intervals between symptom onset and established diagnosis are not uncommon. Although some tumors may bleed excessively during removal, the preferred treatment is surgery, with the option of supplementary radiotherapy if complete tumor removal is not possible. Early intervention, when the tumor is relatively small, may allow preservation of hearing and balance function.
Figure 1. a, b. Magnetic resonance imaging showed a destructive and locally invasive tumor in the peripheral vestibular system on the left side that was expanding into the cerebellopontine angle, inferiorly toward the jugular foramen. The tumor measured 30 mm in the transverse diameter and had characteristic cystic and nodular components, dislocating the vestibulocochlear nerve.
Figure 3. Post-operative magnetic resonance imaging performed 2.5 years after surgery, showing complete tumor removal and no recurrence; fat has been placed in the drilled mastoid/temporal bone.
Figure 2. a, b. Histopathology showed a mostly cystic, partially papillary tumor invading and destroying adjacent bony tissue structures. Cystic walls displayed several small capillaries and signs of hemorrhage caused by tissue destruction. Epithelial components were stained by immunohistochemistry for cytokeratin 7, EMA, and vimentin, and less so for CK5, GFAP, NSE, and S-100. The left micrograph shows HE-stained tumor tissue, whereas the right micrograph shows immunohistochemical staining for cytokeratin 7.
Optimal Stopping under Nonlinear Expectation
Let $X$ be a bounded c\`adl\`ag process with positive jumps defined on the canonical space of continuous paths. We consider the problem of optimal stopping the process $X$ under a nonlinear expectation operator $\cE$ defined as the supremum of expectations over a weakly compact family of nondominated measures. We introduce the corresponding nonlinear Snell envelope. Our main objective is to extend the Snell envelope characterization to the present context. Namely, we prove that the nonlinear Snell envelope is an $\cE-$supermartingale, and an $\cE-$martingale up to its first hitting time of the obstacle $X$. This result is obtained under an additional uniform continuity property of $X$. We also extend the result in the context of a random horizon optimal stopping problem. This result is crucial for the newly developed theory of viscosity solutions of path-dependent PDEs as introduced in Ekren et al., in the semilinear case, and extended to the fully nonlinear case in the accompanying papers (Ekren, Touzi, and Zhang, parts I and II).
Introduction
On the canonical space of continuous paths, we consider a bounded càdlàg process X, with positive jumps, and satisfying some uniform continuity condition. Let h 0 be the first exit time of the canonical process from some convex domain, and h := h 0 ∧ t 0 for some t 0 > 0. T is the collection of all stopping times, relative to the natural filtration of the canonical process, and P is a weakly compact non-dominated family of singular measures.
Our main result is the following. Similar to the standard theory of optimal stopping, we introduce the corresponding nonlinear Snell envelope Y , and we show that the classical Snell envelope characterization holds true in the present context. More precisely, we prove that the Snell envelope Y is an E−supermartingale, and an E−martingale up to its first hitting time τ * of the obstacle. Consequently, τ * is an optimal stopping time for our problem of optimal stopping under nonlinear expectation. This result is proved by adapting the classical arguments available in the context of the standard optimal stopping problem under linear expectation. However, such an extension turns out to be highly technical. The first step is to derive the dynamic programming principle in the present context, implying the E−supermartingale property of the Snell envelope Y . To establish the E−martingale property on [0, τ * ], we need to use some limiting argument for a sequence Y τn , where the τ n 's are stopping times increasing to τ * . However, we face one major difficulty related to the fact that in a nonlinear expectation framework the dominated convergence theorem fails in general. It was observed in Denis, Hu and Peng [3] that the monotone convergence theorem holds in this framework if the decreasing sequence of random variables is quasi-continuous. Therefore, one main contribution of this paper is to construct convenient quasi-continuous approximations of the sequence Y τn . This allows us to apply the arguments in [3] to Y τn , which is decreasing under expectation (but not pointwise!) due to the supermartingale property. The weak compactness of the class P is crucial for the limiting arguments.
We note that in a one-dimensional Markov model with uniformly non-degenerate diffusion, Krylov [10] studied a similar optimal stopping problem in the language of stochastic control (instead of nonlinear expectation). However, his approach relies heavily on the smoothness of the (deterministic) value function, which we do not have here. Indeed, one of the main technical difficulties in our situation is to obtain the locally uniform regularity of the value process.
Our interest in this problem is motivated by the recent notion of viscosity solutions of path-dependent partial differential equations, as developed in [5] and the accompanying papers [6,7]. Our definition is in the spirit of Crandall, Ishii and Lions [2], see also Fleming and Soner [9], but avoids the difficulties related to the fact that our canonical space fails to be locally compact. The key point is that the pointwise maximality condition, in the standard theory of viscosity solutions, is replaced by a problem of optimal stopping under nonlinear expectation.
Our previous paper [5] was restricted to the context of semilinear path-dependent partial differential equations. In this special case, our definition of viscosity solutions can be restricted to the context where P consists of absolutely continuous measures on the canonical space. Consequently, the Snell envelope characterization of the optimal stopping problem under nonlinear expectation is available in the existing literature on reflected backward stochastic differential equations, see e.g. El Karoui et al [8], Bayraktar, Karatzas and Yao [1]. However, the extension of our definition to the fully nonlinear case requires to consider a nondominated family of singular measures.
The paper is organized as follows. Section 2 introduces the probabilistic framework. Section 3 formulates the problem of optimal stopping under nonlinear expectation, and contains the statement of our main results. The proof of the Snell envelope characterization in the deterministic maturity case is reported in Section 4. The more involved case of a random maturity is addressed in Section 5.
2 Nondominated family of measures on the canonical space

2.1 The canonical spaces
Let Ω := {ω ∈ C([0, T ], R d ) : ω 0 = 0} be the set of continuous paths starting from the origin, B the canonical process, F the natural filtration generated by B, P 0 the Wiener measure, T the set of F-stopping times, and Λ := [0, T ] × Ω. Here and in the sequel, for notational simplicity, we use 0 to denote vectors or matrices with appropriate dimensions whose components are all equal to 0. We define a seminorm on Ω and a pseudometric on Λ as follows: for any (t, ω), (t ′ , ω ′ ) ∈ Λ, Then (Ω, ‖ · ‖ T ) is a Banach space and (Λ, d ∞ ) is a complete pseudometric space. In fact, the subspace {(t, ω ·∧t ) : (t, ω) ∈ Λ} is a complete metric space under d ∞ .
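The displayed definitions of the seminorm and the pseudometric were lost in extraction; in this literature they are standard, and presumably read as follows (an assumption, reconstructed from the surrounding claims about (Ω, ‖·‖ T ) and (Λ, d ∞ )):

```latex
\|\omega\|_t := \sup_{0 \le s \le t} |\omega_s|,
\qquad
d_\infty\big((t,\omega),(t',\omega')\big) :=
  |t - t'| + \big\| \omega_{\cdot \wedge t} - \omega'_{\cdot \wedge t'} \big\|_T .
```

These formulas are consistent with the statements that (Ω, ‖·‖ T ) is a Banach space and that (Λ, d ∞ ) is a complete pseudometric space.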
We next introduce the shifted spaces. Let 0 ≤ s ≤ t ≤ T .
-Let Ω t := ω ∈ C([t, T ], R d ) : ω t = 0 be the shifted canonical space; B t the shifted canonical process on Ω t ; F t the shifted filtration generated by B t , P t 0 the Wiener measure on Ω t , T t the set of F t -stopping times, and Λ t := [t, T ] × Ω t .
-For ω ∈ Ω s and ω ′ ∈ Ω t , define the concatenation path ω ⊗ t ω ′ ∈ Ω s by: -Let s ∈ [0, T ) and ω ∈ Ω s . For an F s T -measurable random variable ξ, an F sprogressively measurable process X on Ω s , and t ∈ (s, T ], define the shifted F t T -measurable random variable ξ t,ω and F t -progressively measurable process X t,ω on Ω t by:
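The elided definitions of the concatenation and of the shifted random variables and processes are presumably the standard ones used in this literature (an assumption; here ω ∈ Ω s and ω ′ ∈ Ω t ):

```latex
(\omega \otimes_t \omega')_r :=
  \omega_r \,\mathbf{1}_{[s,t]}(r)
  + \big(\omega_t + \omega'_r\big)\,\mathbf{1}_{(t,T]}(r),
  \qquad r \in [s,T],
\\[4pt]
\xi^{t,\omega}(\omega') := \xi(\omega \otimes_t \omega'),
\qquad
X^{t,\omega}_r(\omega') := X_r(\omega \otimes_t \omega'),
\qquad r \in [t,T].
```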
Capacity and nonlinear expectation
A probability measure P on Ω t is called a semimartingale measure if the canonical process B t is a semimartingale under P. For every constant L > 0, we denote by P L t the collection of all semimartingale measures P on Ω t such that there exist an F t -progressively measurable R d -valued process α P , a process β P ≥ 0 with d×d-symmetric matrix values, and a d-dimensional P-Brownian motion W P satisfying: dB t = β P t dW P t + α P t dt, P-a.s. and |α P | ≤ L, tr ((β P ) 2 ) ≤ 2L. (2.2) Throughout this paper, we shall consider a family {P t , t ∈ [0, T ]} of semimartingale measures on Ω t satisfying: (P1) there exists some L 0 such that, for all t, P t is a weakly compact subset of P L 0 t .
and P i ∈ P t , the following P̂ is also in P s : Here (2.3) means, for any event E ∈ F s T and denoting E t,ω := {ω ′ ∈ Ω t : ω ⊗ t ω ′ ∈ E}: We refer to the seminal work of Stroock and Varadhan [18] for the introduction of the r.c.p.d., which is a convenient tool for proving dynamic programming principles, see e.g. Peng [12] and Soner, Touzi, and Zhang [15].
We observe that for all L > 0, the family {P L t , t ∈ [0, T ]} satisfies conditions (P1-P2-P3). In particular, the weak compactness follows from standard arguments, see e.g. Zheng [19] Theorem 3. The following are some other typical examples of such a family {P t , t ∈ [0, T ]}.
Example 2.1 (iv) (Relaxed bounds, uniformly elliptic) P ue . The set P t induces the following capacity and nonlinear expectation: When t = 0, we shall omit t and abbreviate them as P, C, E. Clearly E is a G-expectation, in the sense of Peng [13]. We remark that, when ξ satisfies a certain regularity condition, E t [ξ t,ω ] can be viewed as the conditional G-expectation of ξ, and as a process it is the solution of a Second Order BSDE, as introduced by Soner, Touzi and Zhang [16].
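The display defining the capacity and the nonlinear expectation is missing from the extracted text; it is presumably the usual supremum over the family P t (an assumption, consistent with E being a G-expectation):

```latex
\mathcal{C}_t[A] := \sup_{\mathbb{P} \in \mathcal{P}_t} \mathbb{P}[A],
\qquad
\mathcal{E}_t[\xi] := \sup_{\mathbb{P} \in \mathcal{P}_t} \mathbb{E}^{\mathbb{P}}[\xi].
```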
Abusing the terminology of Denis and Martini [4], we say that a property holds P-q.s.
(quasi-surely) if it holds P−a.s. for all P ∈ P. A random variable ξ : Ω → R is P-quasi-continuous if for any ε > 0, there exists a closed set Ω ε ⊂ Ω such that C(Ω c ε ) < ε and ξ is continuous in Ω ε . Since P is weakly compact, by Denis, Hu and Peng [3] Lemma 4 and Theorems 22 and 28, we have: be a sequence of P-quasi-continuous and P-uniformly integrable maps from We finally recall the notion of martingales under nonlinear expectation. Definition 2.3 Let X be an F-progressively measurable process with X τ ∈ L 1 (F τ , P) for all τ ∈ T . We say that X is an E−supermartingale (resp. submartingale, martingale) if, for any (t, ω) ∈ Λ and any τ ∈ T t , E t [X t,ω τ ] ≤ (resp. ≥, =) X t (ω) for P-q.s. ω ∈ Ω.
We remark that we require the E-supermartingale property to hold for stopping times. Under a linear expectation P, this is equivalent to the P-supermartingale property for deterministic times, due to Doob's optional sampling theorem. However, under nonlinear expectation, they are in general not equivalent.
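Purely as an illustrative numeric sketch (not part of the paper), the nonlinear expectation E[ξ] = sup over P in P of E^P[ξ] can be approximated by Monte Carlo over a finite sub-family of constant-coefficient laws dB = β dW + α dt from the class P L of (2.2); the function name and the payoff below are hypothetical choices:

```python
import numpy as np

def nonlinear_expectation(xi, alphas, betas, n_paths=20000, n_steps=100,
                          horizon=1.0, seed=0):
    """Monte Carlo estimate of max over a finite grid of constant-coefficient
    semimartingale laws dB_t = beta dW_t + alpha dt of E^P[xi(B)].
    This only lower-bounds the true sup over the whole (path-dependent) class."""
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    # common Brownian increments, reused for every (alpha, beta) pair
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    values = []
    for a in alphas:
        for b in betas:
            # Euler paths of B on [0, horizon]; B_0 = 0 on the canonical space
            B = np.cumsum(b * dW + a * dt, axis=1)
            values.append(xi(B).mean())
    return max(values)

# bounded payoff: running maximum of the path, capped at 1
xi = lambda B: np.minimum(B.max(axis=1), 1.0)

# enlarging the family of measures can only increase the estimate
small = nonlinear_expectation(xi, alphas=[0.0], betas=[0.5])
large = nonlinear_expectation(xi, alphas=[-0.5, 0.0, 0.5], betas=[0.5, 1.0])
```

Because the same seed is used, the single-law estimate is among the candidates of the larger grid, so the estimate is monotone in the family of measures, mirroring the monotonicity of E in P.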
Optimal stopping under nonlinear expectations
We now fix an F-progressively measurable process X.
Assumption 3.1 X is a bounded càdlàg process with positive jumps, and there exists a modulus of continuity function ρ 0 such that for any (t, ω), (t ′ , ω ′ ) ∈ Λ: Remark 3.2 There is some redundancy in the above assumption. Indeed, it is shown at the end of this section that (3.1) implies that X has left-limits and X t− ≤ X t for all t ∈ (0, T ]. Moreover, the fact that X has only positive jumps is important to ensure that the random times τ * in (3.2),τ * in (3.5), and τ n in (4.7) and (5.15) are F-stopping times.
We define the nonlinear Snell envelope and the corresponding obstacle first hitting time: Our first result is the following nonlinear Snell envelope characterization of the deterministic maturity optimal stopping problem Y 0 .
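The elided display (3.2) presumably reads as follows, following the standard formulation of the Snell envelope (an assumption, reconstructed from the surrounding text):

```latex
Y_t(\omega) := \sup_{\tau \in \mathcal{T}_t} \mathcal{E}_t\big[X^{t,\omega}_\tau\big],
\qquad
\tau^* := \inf\{ t \ge 0 : Y_t = X_t \}.
```

As noted in Remark 3.2, the fact that X is càdlàg with positive jumps is what makes τ * an F-stopping time.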
Theorem 3.3 Under Assumption 3.1, the Snell envelope Y is an E−supermartingale, and an E−martingale on [0, τ * ]. In particular, τ * is an optimal stopping time for the problem Y 0 .
To prove the partial comparison principle for viscosity solutions of path-dependent partial differential equations in our accompanying paper [7], we need to consider optimal stopping problems with random maturity time h ∈ T of the form for some t 0 ∈ (0, T ] and some open convex set O ⊂ R d containing the origin. We shall extend the previous result to the following stopped process: The corresponding Snell envelope and obstacle first hitting time are denoted: Our second main result requires the following additional assumption. (ii) For any 0 ≤ t < t + δ ≤ T , P t ⊂ P t+δ in the following sense: for any P ∈ P t we have P̃ ∈ P t+δ , where P̃ is the probability measure on Ω t+δ such that the P̃-distribution of B t+δ is equal to the P-distribution of {B t s , t ≤ s ≤ T − δ}.
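From the introduction (h is the first exit time of the canonical process from the convex domain, capped at t 0 ), the elided displays (3.3)-(3.5) for the random maturity and the stopped data are presumably:

```latex
\mathrm{h} := \mathrm{h}_0 \wedge t_0,
\qquad
\mathrm{h}_0 := \inf\{ t \ge 0 : B_t \notin O \},
\\[4pt]
\hat X := X_{\cdot \wedge \mathrm{h}},
\qquad
\hat Y_t(\omega) := \sup_{\tau \in \mathcal{T}_t}
  \mathcal{E}_t\big[\hat X^{t,\omega}_\tau\big],
\qquad
\hat\tau^* := \inf\{ t \ge 0 : \hat Y_t = \hat X_t \}.
```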
In particular, τ * is an optimal stopping time for the problem Y h 0 .
where τ n is defined by (5.15) below and increases to τ * . However, we face a major difficulty that the dominated convergence theorem fails in our nonlinear expectation framework. Notice that Y is an E-supermartingale and thus Y τn are decreasing under expectation (but not pointwise!). We shall extend the arguments of [3] for the monotone convergence theorem, Proposition 2.2, to our case. For this purpose, we need to construct certain continuous approximations of the stopping times τ n , and the requirement that the random maturity h is of the form (3.3) is crucial. We remark that, in his Markov model, Krylov [10] also considers this type of hitting times. We also remark that, in a special case, Song [17] proved that h is quasicontinuous.
(ii) Assumption 3.4 is a technical condition used to prove the dynamic programming principle in Subsection 5.1 below. By slightly more involved arguments, we may prove the results by replacing Assumption 3.4 (i) with for some constants L, L 1 , L 2 , where P ue t is defined in Example 2.1 (iv).
We conclude this section with the Proof of Remark 3.2 Fix ω ∈ Ω, and let {t n } and {s n } be two sequences such that t n ↑ t, s n ↑ t, and X tn −→ lim s↑t X s , X sn −→ lim s↑t X s . Here and in the sequel, in lim s↑t we take the notational convention that s < t. Without loss of generality, we may assume t n < s n < t n+1 for n = 1, 2, .... Then for the ρ 0 defined in (3.1) we have This implies the existence of X t− (ω). Moreover, completing the proof.
Deterministic maturity optimal stopping
We now prove Theorem 3.3. Throughout this section, Assumption 3.1 is always in force, and we consider the nonlinear Snell envelope Y together with the first obstacle hitting time τ * , as defined in (3.2). Assume |X| ≤ C 0 , and without loss of generality that ρ 0 ≤ 2C 0 . It is obvious that Throughout this section, we shall use the following modulus of continuity function: and we shall use a generic constant C which depends only on C 0 , T , d, and the L 0 in Property (P1), and it may vary from line to line.
Dynamic Programming Principle
Similar to the standard Snell envelope characterization under linear expectation, our first step is to establish the dynamic programming principle. We start with the case of deterministic times.
Lemma 4.1
The process Y is uniformly continuous in ω, with the modulus of continuity function ρ 0 , and satisfies Proof (i) First, for any t, any ω, ω ′ ∈ Ω, and any τ ∈ T t , by (3.1) we have Since τ is arbitrary, this proves uniform continuity of Y in ω.
Step 1. We first prove "≤". For any τ ∈ T and P ∈ P: where the inequality follows from Property (P2) of the family {P t } that P t,ω ∈ P t . Then: By taking the sup over τ and P, it follows that: Step 2. We next prove "≥". Fix arbitrary τ ∈ T and P ∈ P, we shall prove By (3.1) and the uniform continuity of Y , proved in (i), we have Thanks to Property (P3) of the family {P t }, we may define the following pair (τ ,P) ∈ T ×P: It is obvious that {τ < t} = {τ < t}. Then, by (4.5), which provides (4.4) by sending ε → 0.
We now derive the regularity of Y in t.
On the other hand, by the inequality X ≤ Y , Lemma 4.1, and (3.1), we have We are now ready to prove the dynamic programming principle for stopping times.
Proof First, following the arguments in Lemma 4.1 (ii) Step 1 and noting that Property (P2) of the family {P t } holds for stopping times, one can prove straightforwardly that On the other hand, let τ k ↓ τ be such that τ k takes only finitely many values. By Lemma 4.1 one can easily show that Theorem 4.3 holds for τ k . Then for any P ∈ P t and τ̃ ∈ T t , by Sending k → ∞, by Lemma 4.2 and the dominated convergence theorem (under P): Since the process X is right continuous in t, we obtain by sending m → ∞: which provides the required result by the arbitrariness of P and τ̃ .
Preparation for the E−martingale property
If Y 0 = X 0 , then τ * = 0 and obviously all the statements of Theorem 3.3 hold true.
Therefore, we focus on the non-trivial case Y 0 > X 0 .
We continue following the proof of the Snell envelope characterization in the standard linear expectation context. Let Proof By the dynamic programming principle of Theorem 4.3, For any ε > 0, there exist τ ε ∈ T and P ε ∈ P such that where we used the fact that Y t − X t > 1 n for t < τ n , by the definition of τ n . On the other hand, it follows from the E−supermartingale property of Y in Theorem 4.3 that .
Continuous approximation
The following lemma can be viewed as a Lusin theorem under nonlinear expectation and is crucial for us.
on Ω, with values in a compact interval I ⊂ R, such that for some Ω 0 ⊂ Ω and δ > 0: Then for any ε > 0, there exists a uniformly continuous functionθ : Ω → I and an open subset Ω ε ⊂ Ω such that Proof If I is a single point set, then θ is a constant and the result is obviously true. Thus at below we assume the length |I| > 0. Let {ω j } j≥1 be a dense sequence in Ω. Denote Then clearly θ n is uniformly continuous and takes values in I. For each ω ∈ Ω n ∩ Ω 0 , the Then, by our assumption, Similarly one can show that θ − 1 n ≤ θ n in Ω n ∩ Ω 0 . Finally, since Ω n ↑ Ω as n → ∞, it follows from Proposition 2.2 (i) that lim n→∞ C[Ω c n ] = 0.
Proof of Theorem 3.3
We proceed in two steps.
Step 2. By Lemma 4.4, for each n large, there exists P n ∈ P such that By Property (P1), P is weakly compact. Then, there exists a subsequence {n j } and P * ∈ P such that P n j converges weakly to P * . Now for any n large and any n j ≥ n, note that τ n j ≥ τ n . Since Y is an E-supermartingale and thus a P n j -supermartingale, we have By the boundedness of Y in (4.1) and the uniform continuity of Y in Lemma 4.2, we have Then (4.10) together with the estimate C[Ω c n ] ≤ 2 −n lead to Notice that Y andτ n−1 ,τ n ,τ n+1 are continuous. Send j → ∞, we obtain Since n P * |τ n − τ n | ≥ 2 −n ≤ n C |τ n − τ n | ≥ 2 −n ≤ n 2 −n < ∞ and τ n ↑ τ * , by the Borel-Cantelli lemma under P * we see thatτ n → τ * , P * -a.s. Send n → ∞ in (4.11) and apply the dominated convergence theorem under P * , we obtain Similarly Y t (ω) ≤ E t [Y t,ω τ * ] for t < τ * (ω). By the E-supermartingale property of Y established in Theorem 4.3, this implies that Y is an E-martingale on [0, τ * ].
Random maturity optimal stopping
In this section, we prove Theorem 3.5. The main idea follows that of Theorem 3.3. However, since X h is not continuous in ω, the estimates become much more involved.
Throughout this section, let X, h, O, t 0 , X := X h , Y := Y h , and τ * be as in Theorem 3.5. Assumptions 3.1 and 3.4 will always be in force. We shall emphasize when the additional Assumption 3.4 is needed, and we fix the constant L as in Assumption 3.4 (i). Assume |X| ≤ C 0 , and without loss of generality that ρ 0 ≤ 2C 0 and L ≤ 1. It is clear that By (3.1) and the fact that X has positive jumps, one can check straightforwardly that, In particular, Moreover, we define 4) and in this section, the generic constant C may depend on L as well.
Dynamic programming principle
We start with the regularity in ω.
To motivate our proof, we first follow the arguments in Lemma 4.1 (i) and see why it does not work here. Indeed, note that Since we do not have h t,ω ≤ h t,ω ′ , we cannot apply (5.2) to obtain the required estimate.
Proof Let τ ∈ T t and P ∈ P t . Denote δ : Moreover, by Assumption 3.4 and Property (P3), we may choose P ′ ∈ P t defined as follows: α P ′ := 1 δ (ω t −ω ′ t ), β P ′ := 0 on [t, t δ ], and the P ′ -distribution ofB t δ is equal to the P-distribution of B t . We claim that , and it follows from the arbitrariness of P ∈ P t and τ ∈ T t that Y t (ω) − Y t (ω ′ ) ≤ Cρ 1 (Lδ). By exchanging the roles of ω and ω ′ , we obtain the required estimate.
It remains to prove (5.5). Denotẽ This excludes the exceptional case in (5.2). Then it follows from (5.6) and Since L ≤ 1, we have If δ ≥ 1 8 , then I ≤ 2C 0 ≤ Cρ 1 (Lδ). We then continue assuming δ ≤ 1 8 , and thus 3δ + 1 4 δ 1 3 ≤ δ 1 3 . Therefore, Thus I ≤ ρ 0 (δ We next show that the dynamic programming principle holds along deterministic times. Lemma 5.2 Let t 1 < h(ω) and t 2 ∈ [t 1 , t 0 ]. We have: Proof When t 2 = t 0 , the lemma coincides with the definition of Y . Without loss of generality we assume (t 1 , ω) = (0, 0) and t := t 2 < t 0 . First, follow the arguments in To show that equality holds in the above inequality, fix arbitrary P ∈ P and τ ∈ T satisfying τ ≤ h (otherwise reset τ as τ ∧ h), we shall prove Since Y h = X h , this amounts to show that: We adapt the arguments in Lemma 4.1 (ii) Step 2 to the present situation. Fix 0 < δ ≤ t 0 −t.
It remains to prove (5.12). For any ε > 0 and each i ≥ 1, there exists a partition Fix an ω ij ∈ E i j for each (i, j). By Property (P3) we may defineP ε ∈ P by: By Property (P1), P is weakly compact. ThenP ε has a weak limitP ∈ P as ε → 0.
We now prove the regularity in the t-variable. Recall the ρ 2 defined in (5.4).
. So we assume in the rest of this proof that δ < 1 8 . First, by Assumption 3.4, we may consider the measure P ∈ P t 1 such that α P t := 0, β P t := 0, t ∈ [t 1 , t 2 ]. Then, by setting τ := t 0 in Lemma 5.2, we see that Thus, by Lemma 5.1, Next, for arbitrary τ ∈ T t 1 , noting that X ≤ Y we have By (5.3) and Lemma 5.1 we have Since δ ≤ 1 8 , following the proof of (4.6) we have By the arbitrariness of τ and the dynamic programming principle of Theorem 5.4, we obtain , and the proof is complete by (5.14).
Applying Lemmas 5.1, 5.2, and 5.3, and following the same arguments as those of Theorem 4.3, we establish the dynamic programming principle in the present context.
Theorem 5.4 Let t < h(ω) and τ ∈ T t . Then
we see that Ŷ h− exists. However, the following example shows that in general Y may be discontinuous at h. and Y t (ω) ≤ t 0 . However, for any t < h(ω), set τ := t 0 and P ∈ P t such that α P = 0, β P = 0, This issue is crucial for our purpose, and we will discuss it more in Subsection 5.4 below.
Continuous approximation of the hitting times
Similar to the proof of Theorem 3.3, we need to apply some limiting arguments. We therefore assume without loss of generality that Y 0 > X 0 and introduce the stopping times: for any Here we abuse the notation slightly by using the same notation τ n as in (4.7). Our main task in this subsection is to build an approximation of h m and τ n by continuous random variables. This will be obtained by a repeated use of Lemma 4.5.
We start by a continuous approximation of the sequence (h m ) m≥1 defined in (5.15).
We shall prove only the right inequality of (5.18). The left one can be proved similarly.
Proof This is a direct combination of Lemmas 5.6 and 5.7.
Proof of Theorem 3.5
We first prove the E-martingale property under an additional condition.
Proof If Y 0 = X 0 , then τ * = 0 and obviously the statement is true. We then assume Y 0 > X 0 , and prove the lemma in several steps.
Step 1 Let n be sufficiently large so that 1 n < Y 0 − X 0 . Follow the same arguments as that of Lemma 4.4 , one can easily prove: (5.22) Step 2 Recall the sequence of stopping times (τ n ) n≥1 introduced in (5.20). By Step 1 we Then for any ε > 0, there exists P n ∈ P such that Y 0 −ε < E Pn [ Yτ n ]. Since P is weakly compact, there exists subsequence {n j } and P * ∈ P such that P n j converges weakly to P * . Now for any n and n j ≥ n, since Y is a supermartingale under each P n j and Our next objective is to send j ր ∞, for fixed n, and use the weak convergence of P n j towards P * . To do this, we need to approximate Yτ n with continuous random variables. Denote Then ψ n is continuous in ω, and In particular, this implies that Y θ * n ψ n and Y θ * n ψ n are continuous in ω. We now decompose the right hand-side term of (5.23) into: Note that θ * n ≤τ n ≤ θ * n on Ω * n . Then Send j → ∞, we obtain Step 3. In this step we show that (i) First, by the definition of Ω * n in (5.21) together with Lemmas 5.6 (iii) and 5.7, it follows that C (Ω * n ) c ≤ C2 −n −→ 0 as n → ∞. (ii) Next, notice that Moreover, by (5.20) and Lemma 5.8, Then Then one can easily see that C[ψ n < 1] → 0, as n → ∞.
Step 4. By the dominated convergence theorem under P * we obtain lim n→∞ This, together with (5.26) and (5.27), implies that Note that Y is an P * -supermartingale and τ ≤ τ * , then Since ε is arbitrary, we obtain Y 0 ≤ E[ Y τ − ], and thus by the assumption This, together with the fact that Y is a E-supermartingale, implies that In light of Lemma 5.9, the following result is obviously important for us.
We recall again that Y τ * − = Y τ * whenever τ * < h. So the only possible discontinuity is at h. The proof of Proposition 5.10 is reported in Subsection 5.4 below. Let us first show how it allows us to complete the Proof of Theorem 3.5. By Lemma 5.9 and Proposition 5.10, Y is an E-martingale on [0, τ * ]. Moreover, since X τ * = Y τ * , we have Y 0 = E[ X τ * ], and thus τ * is an optimal stopping time.
The first result corresponds to Theorem 5.4.
Next result corresponds to Lemma 5.9.
Lemma 5.12 Let P ∈ P, τ ∈ T , and E ∈ F τ such that τ ≤ τ * on E. Then for all ε > 0: Proof We proceed in three steps.
Step 1. We first assume τ = t < τ * on E. We shall prove the result following the arguments in Lemma 5.9. Recall the notations in Subsection 5.2 and the ψ n defined in (5.24), and let ρ n denote the modulus of continuity functions of θ * n , θ * n , and ψ n . Denoteτ n := 0 for n ≤ ( Y 0 − X 0 ) −1 . For any n and δ > 0, let {E n,δ i , i ≥ 1} ⊂ F t be a partition of E ∩ {τ n−1 ≤ t <τ n } such that ω − ω ′ t ≤ δ for any ω, ω ′ ∈ E n,δ i . For each (n, i), fix ω n,i := ω n,δ,i ∈ E n,δ i . By Lemma 5.9, Recall the h δ defined by (5.16). We claim that, for any N ≥ n, CnE ρ 2 δ + ρ n (δ) + 2η n (δ) + Cρ n (δ) + ε + C2 −n + CC(ψ n < 1) where η n (δ) := sup Moreover, one can easily find F t -measurable continuous random variables ϕ k such that Send δ → 0. First note that [δ + ρ n (δ) + 2η n (δ)] ↓ 0 and h δ ↓ 1 {0} , then by Proposition 2.2 Moreover, for each N , by the weak compactness assumption (P1) we see that P N,δ has a weak limit P N ∈ P. It is straightforward to check that P N ∈ P(P, t, E). Note that the random variables Y t∨θ * n ψ n ϕ k and sup θ * n ≤s≤θ * n | Y s − Y θ * n |ψ n ϕ k are continuous. Then Again by the weak compactness assumption (P1), P N has a weak limit P * ∈ P(P, t, E) as N → ∞. Now send N → ∞, by the continuity of the random variables we obtain Send k → ∞ and recall that P * = P on F t , we have Finally send n → ∞, by (5.27) and applying the dominated convergence theorem under P and P * we have That is, P ε := P * satisfies the requirement in the case τ = t < τ * on E.
Step 3. Finally we prove the lemma for general stopping time τ . We follow the arguments in Lemma 5.11. Let τ n be a sequence of stopping times such that τ n ↓ τ and each τ n takes only finitely many values. By applying the dominated convergence Theorem under P, we may fix n such that Assume τ n takes values {t i , i = 1, · · · , m}, and for each i, denote E i := E ∩ {τ n = t i < τ * } ∈ F t i . Then {E i , 1 ≤ i ≤ m} form a partition ofẼ := E ∩ {τ n < τ * }. For each i, by Step 1 there exists P i ∈ P(P, t i , E i ) such that Now define P ε := m i=1 P i 1 E i + P1Ẽ c ∈ P(P, τ n ,Ẽ) ⊂ P(P, τ, E). Recall thatẼ ∈ F τ n and note that Y τ * ≤ Y τ * − , thanks to the supermartingale property of Y . Then The proof is complete now.
We need one more lemma.
Lemma 5.13 Let P ∈ P, τ ∈ T , and E ∈ F τ such that τ ≤ h on E. For any ε > 0, there exists P ε ∈ P(P, τ, E) such that h ≤ τ + 1 L d(ω τ , O c ) + 3ε + sup τ ≤t≤τ +ε |ω t − ω τ |, P ε -a.s. on E Proof First, there exists τ̃ ∈ T such that τ ≤ τ̃ ≤ τ + ε and τ̃ takes only finitely many values 0 ≤ t 1 < · · · < t n = t 0 . Denote E i := E ∩ {τ̃ = t i < h} ∈ F t i . For any i, there exists a partition (E i j ) j≥1 of E i such that |ω t i − ω ′ t i | ≤ Lε for any ω, ω ′ ∈ E i j . For each (i, j), fix an ω ij ∈ E i j and a unit vector α ij pointing to the direction from ω ij t i to O c . Now for any ω ∈ E i j , define P i,j,ω ∈ P t i as follows: We see that Similar to the proof of (5.12), there exists P ε ∈ P(P, τ̃ , E) ⊂ P(P, τ, E) such that the r.c.p.d.
Serum and urinary angiotensinogen levels as prognostic indicators in acute kidney injury: a prospective study
SUMMARY

OBJECTIVE: The delayed increase in serum creatinine levels poses challenges in the timely diagnosis of acute kidney injury. This study aimed to investigate the relationship between serum angiotensinogen and urinary angiotensinogen levels and the prognosis of renal function in patients diagnosed with acute kidney injury.

METHODS: A total of 79 newly diagnosed acute kidney injury patients aged 18 years and older were enrolled. Serum angiotensinogen and urinary angiotensinogen levels were measured at the onset of the disease, as well as on the 15th and 30th days of follow-up. After 3 months, renal function was evaluated by measuring serum creatinine levels.

RESULTS: Among the acute kidney injury patients, those in Kidney Disease: Improving Global Outcomes stage 3 exhibited significantly higher urinary angiotensinogen/urine creatinine levels compared with stages 1 and 2 patients at the time of diagnosis (p<0.05). Furthermore, a positive correlation was observed between the urinary angiotensinogen/urine creatinine level at the time of diagnosis and the serum creatinine level at the third month (r=0.408, p=0.048).

CONCLUSION: The findings suggest that urinary angiotensinogen levels can serve as an indicator of the severity of acute kidney injury. Monitoring urinary angiotensinogen levels could potentially contribute to the prognosis assessment and management of acute kidney injury patients.
INTRODUCTION
The diagnosis and staging of acute kidney injury (AKI) are based on the criteria provided by Kidney Disease: Improving Global Outcomes (KDIGO). The parameters used for diagnosing AKI include an increase in serum creatinine (sCr) levels and a decrease in urine output. However, delayed increases in creatinine levels can result in delayed diagnosis of AKI. Furthermore, the severity of the disease may not be accurately reflected by creatinine levels, leading to delays in disease management.
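The KDIGO creatinine criteria referenced here can be sketched as a simple staging function. This is an illustrative simplification, not the full guideline: it uses only the serum-creatinine criteria (thresholds in mg/dL) and deliberately omits the urine-output criteria and the 48-hour/7-day time windows.

```python
def kdigo_stage(scr, baseline, on_rrt=False):
    """Simplified KDIGO AKI stage from serum creatinine (mg/dL).

    Returns 0 when the creatinine criteria for AKI are not met.
    Urine-output criteria and time windows are omitted for brevity.
    """
    ratio = scr / baseline
    # AKI is present if sCr rose to >= 1.5x baseline or by >= 0.3 mg/dL
    has_aki = ratio >= 1.5 or scr - baseline >= 0.3
    if not (has_aki or on_rrt):
        return 0
    if on_rrt or ratio >= 3.0 or scr >= 4.0:
        return 3  # >= 3.0x baseline, sCr >= 4.0 mg/dL, or renal replacement therapy
    if ratio >= 2.0:
        return 2  # 2.0-2.9x baseline
    return 1      # 1.5-1.9x baseline or absolute rise >= 0.3 mg/dL
```

Such a function makes the staging used for the group comparisons in this study reproducible from raw creatinine values.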
Acute kidney injury occurs in approximately 25-30% of patients hospitalized in intensive care units and 3-7% of general hospitalized patients 1. While the mortality rate in uncomplicated AKI cases is approximately 5%, it exceeds 40% in intensive care unit patients. AKI often coexists with multiple organ failure rather than isolated organ failure 2.
Many AKI patients regain their kidney function during follow-up, which is evidenced by increased urine production and a gradual decrease in sCr levels. However, some patients do not fully recover and may develop chronic kidney disease (CKD) 3. Emerging evidence suggests that timely and effective treatment initiation is crucial for AKI outcomes 4. Therefore, early identification of AKI severity can facilitate prompt intervention and improve patient outcomes. The current diagnostic markers, such as sCr levels and urine output, have limited prognostic value for potential complications. Insufficient prognostic information regarding AKI is a significant barrier to improving patient outcomes.
Animal studies investigating AKI have demonstrated that activation of the renin-angiotensin system (RAS) can contribute to the development of AKI. Urinary angiotensinogen (uAGT) levels serve as an indicator of RAS activation and hold promise as a biomarker for assessing AKI progression in patients with acute decompensated heart failure 5. uAGT has the potential to be a novel biomarker that can aid in determining the intrarenal RAS status and the severity of acute tubular necrosis (ATN).
The objective of this study was to explore the association between serum angiotensinogen (sAGT) or uAGT levels and AKI-related complications. Consequently, patients with elevated sAGT and uAGT levels will be closely monitored for the development of potential complications.
Study population
A total of 79 AKI patients were included in this study. These patients were recruited from the Internal Medicine Clinic of Gaziosmanpaşa University Hospital, where they were initially diagnosed with AKI and received treatment. Pregnant women, patients who used angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, or aldosterone receptor antagonist drugs at the time of diagnosis, patients with postrenal AKI, and patients with a diagnosis of malignancy, chronic liver disease, multiorgan failure, or sepsis were excluded from the study.
Data collection
At the time of diagnosis, a detailed patient history including age, gender, presence of chronic diseases, medications used, recent exposure to nephrotoxic agents, and history of surgery was recorded. For patients admitted to the intensive care unit, the reasons for hospitalization and Acute Physiology and Chronic Health Evaluation II (APACHE II) scores were documented. Information on the application of hemodialysis, time of death for deceased patients, and discharge time for surviving patients was also recorded.
Biochemical measurements
Blood samples were collected from the patients at the beginning of AKI and on the 15th and 30th days of follow-up. The levels of serum blood urea nitrogen (BUN), sCr, and urine creatinine (uCr) were measured. The renal function of the patients who attended follow-up visits was monitored up to the 90th day.
Ethical considerations
Approval for this study was obtained from the ethics committee of Gaziosmanpaşa University, Faculty of Medicine (19-KAEK-052). Written informed consent was obtained from all patients in accordance with the principles of the Helsinki Declaration, and all relevant documents were recorded.
Biochemical analysis
For the measurement of sAGT levels, the blood samples were centrifuged at 3500 rpm for 10 min. The serum was separated and stored at -80°C in Eppendorf tubes. For uAGT levels, urine samples collected in gel-free tubes were centrifuged at 3500 rpm for 10 min, and the supernatant was transferred to Eppendorf tubes and stored at -80°C. Before the analysis, the frozen samples were thawed at 2-8°C for 8 h. The sAGT and uAGT levels were measured using Human Angiotensinogen kits (Elabscience Biotechnology Co., Wuhan, PRC) and the enzyme-linked immunosorbent assay (ELISA) method in the biochemistry research laboratory of our hospital.
The absorbance values were measured at 450 nm using an ELISA reader (Organon Teknika Reader 230S). Based on the absorbance values, the concentrations were calculated in ng/L using Microsoft Excel. uCr levels were measured using the Jaffé kinetic colorimetric method on the Roche Cobas 6000 device, specifically the Cobas C501 module. According to the uCr values, uAGT levels were corrected and reported as ng/mg.
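As an illustration of this workflow, the sketch below converts a 450 nm absorbance to a concentration and then creatinine-corrects it. The standards, values, and linear interpolation are hypothetical (the kit's actual calibration curve may be nonlinear, e.g. four-parameter logistic, and is not given in the text); the unit assumption is that uCr has been converted to mg/L so that ng/L divided by mg/L yields ng/mg.

```python
def conc_from_absorbance(absorbance, standards):
    """Interpolate a concentration (ng/L) from a 450 nm absorbance
    using (absorbance, concentration) calibration standards."""
    pts = sorted(standards)
    for (a0, c0), (a1, c1) in zip(pts, pts[1:]):
        if a0 <= absorbance <= a1:
            # linear interpolation between the two bracketing standards
            return c0 + (c1 - c0) * (absorbance - a0) / (a1 - a0)
    raise ValueError("absorbance outside the standard curve")

def creatinine_corrected(uagt_ng_per_l, ucr_mg_per_l):
    """Creatinine-corrected uAGT/uCr ratio in ng/mg."""
    return uagt_ng_per_l / ucr_mg_per_l
```

Correcting by uCr in this way compensates for variation in urine dilution between samples.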
Statistical analysis
The data were analyzed using the SPSS 19.0 package program for Windows. The Pearson chi-square test was employed to compare the frequency of gender between groups. The Mann-Whitney U test was used to compare the mean age, APACHE II scores, uAGT levels, and sAGT values. According to primary diagnoses, the distribution of biomarkers was evaluated using Kruskal-Wallis analysis. Furthermore, correlation analysis was conducted to assess the relationship between biomarkers in the case and control groups.
Descriptive statistics were used to present the general characteristics of the study groups, and the data were expressed as mean±standard deviation. A p<0.05 was considered statistically significant. Repeated-measures analysis of variance (ANOVA) was performed to evaluate the differences in uAGT levels and kidney function at the beginning and on the 15th and 30th days of the study.
Categorical variables were presented as n (%), while continuous variables were reported as mean±standard deviation. Independent sample t-tests or one-way ANOVA were utilized to assess quantitative differences between groups. Chi-square tests were employed to examine qualitative differences between groups. Pearson correlation analysis was performed to investigate the relationship between quantitative variables. All statistical calculations were conducted using the IBM SPSS Statistics 19 software (SPSS Inc., an IBM Co., Somers, NY).
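The Pearson coefficient used for these correlation analyses is the standard r = Σ(xᵢ−x̄)(yᵢ−ȳ) / √(Σ(xᵢ−x̄)² Σ(yᵢ−ȳ)²); a minimal standard-library sketch:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance numerator
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)
```

In practice, statistical packages such as SPSS (used here) also report the p-value for the null hypothesis r = 0, which the sketch omits.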
RESULTS
The study included a total of 79 AKI patients, with a mean age of 73.8 years (range: 33-96 years). Among them, 35 (44%) were females and 44 (56%) were males. The most common etiological reason for AKI was cerebrovascular damage (25%), followed by primary AKI (20%), pneumonia (12%), and other causes. The most prevalent comorbidities among the patients were hypertension (58%), coronary artery disease (48%), and diabetes mellitus (30%). During the follow-up period, 58% of the patients died, and hemodialysis was required in 58% of the cases.
In our study, the uAGT/uCr values of the patients were corrected according to uCr. There was no significant relationship between uAGT/uCr and sAGT levels (r=0.063, p=0.579). However, there was a positive correlation between uAGT/uCr and sCr levels (r=0.289, p=0.01). As expected, there was an inverse correlation between sCr and glomerular filtration rate (GFR) (r=-0.74, p≤0.001). Further details can be found in Table 1.
When comparing uAGT/uCr ratios at different stages based on the KDIGO criteria upon admission, the ratios were 9.93±10.82 in stage 1, 5.03±5.04 in stage 2, and 32.15±37.13 in stage 3. As shown in Table 2, stage 3 patients had significantly higher uAGT/uCr values compared with stage 1 and stage 2 patients.
Table 3 examines the relationship between uAGT/uCr values and the development of hemodialysis, mortality, and morbidity. There was no significant correlation observed between uAGT/uCr ratios and the need for hemodialysis, mortality, or morbidity (p>0.05).
A weak positive correlation was found between uAGT/uCr on day 0 and sCr on day 90 in the patients who were followed up (r=0.408). However, no significant correlation was observed between sCr levels on the 90th day and uAGT/uCr on the 15th day of follow-up. There was a strong correlation between uAGT/uCr on the 30th day and sCr on the 90th day, but these values did not reach statistical significance.
DISCUSSION
The diagnostic criteria for AKI are based on a decrease in urine output and an acute increase in sCr levels. However, sCr levels typically rise within 2-3 days due to skeletal muscle release, making interventions based solely on high sCr levels potentially incomplete 6. An early and accurate diagnosis of AKI is crucial to prevent renal dysfunction and damage 7. Recognizing the decline in GFR at an early stage is vital, and several biomarkers have shown promising results in providing early detection of AKI 8.
The kidney contains all components of the RAS. Growing evidence suggests that locally produced angiotensin II (Ang II), which is the principal effector peptide of the RAS, may contribute to AKI pathogenesis by upregulating pro-inflammatory and pro-fibrotic cytokines such as TNF and TGF-beta 9. Ang II expression is observed in the proximal tubule, where it stimulates TGF-beta synthesis 10,11. Experimental studies have demonstrated increased levels of TGF-beta mRNA and protein in rats following acute ischemic injury 12. In male Sprague-Dawley rats, renal Ang II levels in the proximal tubule were found to increase 53.5-fold in association with decreased renal perfusion pressure 10.
In a study assessing uAGT and sAGT levels, higher uAGT levels were observed in patients with ATN compared with healthy subjects 13. This elevation in uAGT levels was found to be correlated with increased intrarenal RAS expression, suggesting that enhanced uAGT synthesis within the kidney may contribute to ATN pathogenesis 13. Another study identified intrarenal angiotensinogen in structures close to the apical membrane of proximal tubular cells, facilitating its secretion into urine 14. The study results also revealed no association between sAGT levels and intrarenal RAS status, supporting the notion that intrarenal RAS operates independently of circulating RAS 14. Consistent with the existing literature, our study found no correlation between sAGT levels and uAGT/uCr values, further supporting the idea of independent regulation of intrarenal RAS 14.
In our study, although patients receiving hemodialysis exhibited higher uAGT/uCr values, the difference was not statistically significant. Similarly, no significant difference was observed between uAGT/uCr values and mortality rates. When patients were classified according to the KDIGO stages, no significant correlation was found between uAGT/uCr values and mortality among AKI patients. While higher uAGT/uCr values were observed in patients undergoing hemodialysis and those who died, the lack of statistical significance could be attributed to the small sample size and high standard deviation in our study.
A separate investigation 15 examined uAGT as a potential clinical biomarker for identifying individuals at high risk of CKD. Both 24-h uAGT excretion and the uAGT/uCr ratio were found to be higher in CKD patients compared with controls, whereas sAGT levels were similar between the two groups. Notably, uAGT excretion and the uAGT/uCr ratio exhibited a strong correlation. These findings suggest that intrarenal RAS may play a significant role in CKD risk, and uAGT levels could aid in stratifying and predicting CKD risk. In our study, a positive correlation was found between uAGT/uCr values obtained at the time of AKI diagnosis and sCr levels on the 90th day. A strong correlation was also observed between uAGT/uCr values on the 30th day and sCr levels on the 90th day, although no statistically significant relationship was found.
Analyzing uAGT/uCr values on the 0th, 15th, and 30th days based on the development of CKD during patient follow-up revealed no significant differences between the groups. However, this could be attributed to the limited number of cases included in the study and a high mean standard deviation.
The limitations of this study include the small sample size and high standard deviation, which may have contributed to the lack of statistically significant findings in some analyses. Additionally, the study focused on a specific population and did not explore other potential confounding factors. Furthermore, the study did not assess long-term outcomes or evaluate the predictive value of uAGT/uCr values in terms of disease progression. On the other hand, the study contributes to the existing literature by examining the correlation between uAGT/uCr values and renal function in AKI patients. It also adds to the understanding of the independent regulation of intrarenal RAS. Further research with larger cohorts and a comprehensive evaluation of clinical outcomes is necessary to confirm and build upon these findings.
CONCLUSION
The uAGT levels have emerged as potential biomarkers for assessing the intrarenal RAS activity in AKI and CKD. Further research with larger sample sizes is warranted to validate their clinical utility and establish their role in risk stratification and prognosis prediction for renal diseases.
Table 1 .
Correlation between urinary angiotensinogen/urine creatinine and serum angiotensinogen values.
Table 2 .
Serum angiotensinogen and urinary angiotensinogen/urine creatinine ratio by stage of Kidney Disease: Improving Global Outcomes.
Table 3 .
Relationship between urinary angiotensinogen/urine creatinine and hemodialysis and mortality. | 2023-11-15T16:12:02.946Z | 2023-11-13T00:00:00.000 | {
"year": 2023,
"sha1": "46d68d077c08113ac87ca260031a1841d84d06c4",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/ramb/a/Xm3SqnKF3RwpZkz7zGwJL8t/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8280847852c2ce82d5bc93f7296e15708025e691",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
89652368 | pes2o/s2orc | v3-fos-license | ULTRASTRUCTURE AND MORPHOLOGY OF THE RESORPTIVE MARGIN OF BOVINE OSTEOCLASTS
Cerny H.: Ultrastructure and Morphology of the Resorptive Margin of Bovine Osteoclasts. Acta vet. Brno, 52, 1983: 3-13. The ultrastructure of the resorptive margin of the osteoclast was studied in the osteoid zone of endochondral ossification of the growth cartilage in the metaphyseal rim of the tuber coxae using 4 bovine fetuses aged 246 to 271 days. The material was routinely processed for electron microscopy. The multinucleated osteoclast forms a specialized and polar structure, a resorptive margin, at sites of its contact with the mineralized ground substance of the cartilage. This resorptive margin is analogous with the ruffled border of the osteoclasts. In the electron-microscopic field typical structures present in the resorptive margin include: shallow invaginations of the plasmalemma and the modified cytoplasm containing electron-dense, strongly osmiophilic material and no other organelles. In the cytoplasm of the resorptive margin only minute vesicles limited by a unit membrane are found, which contain finely granular material of various densities. The resorptive margin has been generally described as a concave structure. In our material, however, morphologically variable shapes of the resorptive margin were observed in the two-dimensional profiles, e.g. open-ring or circular, completely closed structures situated intracellularly. The morphology of the resorptive margin is determined to a certain degree by the phagocytic activity in this functionally very active area. Among the organelles present in the osteoclast, only those presumed to be involved in the intracellular transport of the resorbed substances into the extracellular space and into the blood capillaries were investigated. Cattle, endochondral ossification, osteoclast, resorptive margin, cartilage resorption. In the course of endochondral ossification of the growth cartilage the cartilaginous model is gradually replaced by newly formed bone tissue.
Only a small portion of the ground substance of the cartilage model serves as a scaffolding for the forming bone tissue. The mineralized tissue is destroyed by the activity of mononucleated (Horn and Dvorak 1977, 1980) and multinucleated cells (Hancox 1972; Holtrop et al. 1979; Kallio et al. 1972; Knese 1972; Lucht 1972; Malkani et al. 1973). The cartilage model is thus resorbed by the direct action of clastic cellular elements present in the resorptive margin of the cell, which in turn are in direct contact with the mineralized tissue. Multinucleated cells are but one part of a complex mechanism of resorption of the mineralized cartilaginous ground substance. At the contact site they form a specialized structure, the so-called resorptive margin. It has been described in detail for guinea pig osteoclasts by Malkani et al. (1973). Structural details of this modified cytoplasm, this "ruffled border", have been reported in detail by Schenk et al. (1967) and Scott (1967). Formation of the ruffled border by cytoplasmic invaginations enlarges substantially the resorptive surface, the plasmalemma (Scott and Pease 1956). The "ruffled border", including the specialized cytoplasm, is the centre of enhanced cellular activity, and many substances are taken up from the extracellular space into cytoplasmic, phagocytic vesicles of the osteoclast (Kallio et al. 1972; Lucht 1972).
The structure of the osteoclast and its cytoplasmic contact with the mineralized matrix, and its function in the mechanism of mineralized tissue resorption, has been described by Knese (1972).
Materials and Methods
Material for this electron microscopic study was collected from the growth cartilage of the tuber coxae of four bovine fetuses aged 246 to 271 days.The samples from the osteoid zone of endochondral ossification were fixed in 4 % glutaraldehyde in 0.1 M phosphate buffer at pH 7.4 for 4 h.
The samples were then decalcified in 0.1 M EDTA with 4% glutaraldehyde at pH 7.4 for 14 -16 h.
Sections were cut with a Tesla BS 490 ultramicrotome. Semithin sections (250-300 nm) were stained with 1% methylene blue for 1 min and 1% Azur II solution for 1 min, both at 37 °C.
Ultrathin sections were counterstained with uranyl acetate and lead citrate according to Reynolds and viewed and photographed with a Tesla BS 613 electron microscope.
Semithin sections were viewed and photographed with a UNIVAR Reichert optical microscope.
Results
Ultrastructure of the cytoplasm in contact with the matrix of osteoclasts and mineralized ground substance of the cartilage was investigated in longitudinal sections of the metaphyseal margin of the growth cartilage using samples of the tuber coxae from bovine fetuses.
A morphologically variable resorptive margin forms at the resorptive pole of the cell. This zone is concave or an incomplete ring in profile and it encroaches upon the ground matrix. Thus, the shape of the adjacent ground matrix determines to a significant degree the morphology of this structure. With complete envelopment of a larger fragment by an osteoclast, a closed, ring-like resorptive margin forms around the mineralized tissue (see Fig. 5). Significantly, a single cell may form more than one resorptive surface, as indicated in Fig. 4. At the site of bone and mineralized cartilage contact, minute and shallow invaginations penetrating into the ground matrix are formed by the plasmalemma of the osteoclast. By these invaginations the actual surface of the plasmalemma exposed to the bone is enlarged considerably. Between these invaginations, phagocytosis of small fragments of the cartilaginous ground substance may be observed. At such contact sites a close relationship exists between the cell and the cartilage, while in other parts of the same resorptive margin a space between the cytoplasmic membrane and the mineralized ground substance of the cartilage may be found. We designated this space as the intermediate space. It appears without structure or filled with fine granular, slightly osmiophilic material. At sites of seemingly newly formed, narrow spaces the surface of the mineralized cartilage presents a fine dense lamina, as if it were covered by a lamina limitans.
Fig. 2. Osteoclast interposed between the mineralized ground substance of the cartilage and a blood capillary (semithin section): Oc - osteoclast, Rz - its resorptive zone, bv - blood capillary, ll - lamina limitans, gs - mineralized ground substance of the cartilage. × 650.
The cytoplasm of the resorptive margin contains a dense, strongly osmiophilic material with no apparent organelles. This dense cytoplasm is arranged into trabeculae running parallel to one another and oriented radially to the plasmalemma, or it is observed as a dense, net-like structure. In this modified cytoplasm minute vesicles and/or larger ones may be observed. They are limited by a smooth membrane and they contain material of various electron densities.
The cytoplasm adjacent to this margin contains numerous mitochondria arranged in an irregular pattern. Some of them have few inner membrane projections; others contain tubular or vesicular structures in the centre, especially near the basal portion of the cell.
The most frequently observed cytoplasmic organelles are the small vesicles coated by smooth membranes. Their number is larger in the resorptive portion of the osteoclast, where they populate a large field. However, they are also rather abundant in the basal portion of the cell. Minute vesicles may join to form larger ones occupying a major portion of the cytoplasm (Fig. 6). These organelles are especially conspicuous when fixed with glutaraldehyde (Scott 1967; Schenk et al. 1967). Similar to the small vesicles, the vacuoles also contain material of various electron densities. They contain primarily a fine granular, slightly osmiophilic material, but in some larger vacuoles, on the contrary, large, quite dense granules are visible. Some vacuoles demonstrate no structure resolvable by these procedures.
The basal portion of the cell is in close contact with endothelial cells of blood capillaries. Here it appears that intracellularly transported substances are being extruded into the extracellular space, i.e. between the osteoclast and the endothelial cell. Between the plasmalemma of the basal portion of the osteoclast and the basal margin of the endothelial cell numerous small vesicles appear.
Discussion
Destruction of the mineralized tissue by multinucleated cells, osteoclasts, has been described as osteoclastic resorption. Mostly the huge multinucleated cell is called an osteoclast with no consideration of the type of mineralized tissue destroyed by this cell. Only Schenk et al. (1967) and Savostin and Asling (1975) described the huge multinucleated cell as a chondroclast.
In our opinion, from the functional point of view there is only one cell type whose main function is degradation of the mineralized matrix. Therefore we call them osteoclasts for the sake of simplicity. However, osteoclasts/chondroclasts destroying the cartilage are described as being simpler than osteoclasts degrading the bone tissue (Lucht 1972). These multinucleated cells appeared at sites of seemingly intensive resorption. Their appearance was limited to the osteoid zone; only rarely were they detected in close contact with the cartilage in the erosion line. This situation was mostly found in terminal parts of the trabeculae. Through osteoclastic activity an intensive resorption of the cartilage ground substance occurred, resulting in gradual shortening of the trabeculae of the osteoid zone in the direction of the erosion line.
The osteoclast is characteristically found between the mineralized matrix and the blood capillary. With the resorptive surface or pole of the cell adjacent to the cartilage and the basal portion in contact with the endothelium, such a position of the cell would facilitate the transport of substances by the cell more directly to the blood capillary.
At the site of cell contact with the mineralized matrix the osteoclast forms a morphologically specialized structure designated the ruffled border. Formation of the resorptive margin is believed to be a prerequisite for resorption according to several authors (Knese 1972; Kallio et al. 1979; Holtrop et al. 1979).
To the contrary, our results would suggest that the cytoplasmic contact of an osteoclast with the mineralized matrix of the cartilage does not form this characteristic ruffled border, but a resorptive surface consisting of small and rather inconspicuous invaginations and densely stained cytoplasm. Despite this fact (the cytoplasmic invaginations penetrate into the ground substance), destruction of the ground matrix does occur. Between the invaginations fragments of cartilage are visible. These, it would seem, are phagocytized, destroyed, and transported to the basal portion of the cell as has been described for other species.
The osteoclast which brings about resorption of the mineralized cartilage is not morphologically identical with that cell of similar function found resorbing bone tissue. In the latter cell the cellular contact is made through a ruffled border characterized by numerous long cytoplasmic projections. The functional activity of the cell related to active resorption determines the morphology of this ruffled border. At sites with less functional involvement of the osteoclast another type of cytoplasmic contact is formed, with short, broad projections. In this case, the dense cytoplasmic modification is apposed to the cytoplasmic membrane (Malkani et al. 1973). This type of ruffled border is called the "clear zone" by Cameron (1963), Malkani et al. (1973) and Holtrop et al. (1979), the "ectoplasmic layer" by Scott (1967), and the "transitional zone" by Lucht (1972).
In our material, the cytoplasmic contact of the multinucleated cells with the mineralized matrix in the osteoid zone was formed exclusively by this type of ruffled border, designated by us as a resorptive margin. It seems that the type of mineralized tissue and the degree of its mineralization is a decisive factor conditioning the formation of a specialized cellular structure, i.e. the ruffled border characteristic of the osteoclast or a resorptive margin characterized by short evaginations and dense cytoplasm.
The ground matrix of cartilage is less mineralized than the bone tissue. Therefore, its osteoclastic resorption might not occur in conjunction with a functionally more demanding resorptive activity.
In spite of a different arrangement of the ruffled border, we still regard this structure as a functional and rather exclusive one providing the cell-to-matrix contact of the osteoclast in the osteoid zone. In addition to the generally described concave shape of the structure, we observed a variety of shapes, including open or closed ring-like or circular zones with intracellular positions.
A single osteoclast may form more than one plasmalemmal contact with the mineralized matrix, as demonstrated by Lucht (1972) and by our data.
The complement of organelles of a bovine osteoclast does not differ from that of other species. So far we confirm the conclusions of Lucht (1972), who found no substantial species differences in the structure of osteoclasts. Again, however, it seems that the structure of the multinucleated cell resorbing the cartilage is simpler than that of a bone-destroying osteoclast.
This pertains especially to the specific dense granules in the cytoplasm, described by Scott (1967) as dense granules and by Lucht (1972) as cytoplasmic bodies. In our study they were observed as vesicular structures limited by a non-coated membrane with contents of various electron densities.
In our view, the cytoplasmic vesicles may originate in two ways. They form as membrane derivatives of the modified ruffled border through endocytosis of matrix, transporting the resorbed material. In the course of this transport they destroy the material, forming larger vesicles. The largest vesicles are found chiefly in the basal portion of the cell. Lucht (1972) demonstrated that vacuoles originating from the cell membrane yield a positive acid phosphatase reaction. In our opinion, the intravacuolar material is degraded in the course of transport. Vesicles with dense contents we regard as lysosomal structures present not only in the close vicinity of the Golgi complex but also in the peripheral cytoplasm. The descriptive nomenclature of these structures varies considerably: "vesicles with dense contents" (Schenk et al. 1967), "specific granules" (Scott 1967), "dense bodies" (Lucht 1972), and "lysosomes" (Lucht 1972) are equally numerous in the osteoclast of the osteoid zone as in other more typical osteoclasts engaged in resorption of bone tissue. The finding of numerous cytoplasmic vesicles is indicative of an intensive resorptive activity of the multinucleated cell, although it does not demonstrate a typical ruffled border.
According to the degree of development of the ruffled border, Knese (1972) differentiated two basic phases of resorption: the synthetic and the resorptive phase. The synthetic phase is characterized by a moderately developed ruffled border, whereas in the resorptive phase itself a conspicuous ruffled border is formed along with large numbers of secretory vesicles in the cytoplasm of, in this case, a mononucleated cell.
In our material, the above-mentioned phases could not be distinguished. Furthermore, in the so-called resorptive phase, characterized by large numbers of minute vesicles and vacuoles in the cytoplasm of the osteoclast, no detectable changes of the ruffled border per se were observed.
Mitochondria with few inner membrane projections or few vesicular or tubular profiles were observed in our material. We are inclined, however, to interpret these as artifacts resulting from the processing.
Our material suggests that there is considerable variability in the arrangement of the resorptive margin. Destruction of the mineralized ground substance of the cartilage in the osteoid zone of ossification by the multinucleated cell is a complex process with both the extracellular breakdown of matrix and its phagocytosis involved. | 2018-12-26T20:07:58.295Z | 1983-01-01T00:00:00.000 | {
"year": 1983,
"sha1": "97c4f094254d35d5653a73008a7b12615c53d716",
"oa_license": "CCBY",
"oa_url": "https://actavet.vfu.cz/media/pdf/avb_1983052010003.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "97c4f094254d35d5653a73008a7b12615c53d716",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
The role of transformational leadership, servant leadership, digital transformation on organizational performance and work innovation capabilities in digital era
Received Apr 18, 2021; Revised Jul 06, 2021; Accepted Aug 26, 2021

The purpose of this study is to analyze the relationships among Transformational Leadership, Servant Leadership, Digital Transformation, Organizational Performance, and Work Innovation Capabilities. The study uses quantitative methods, with Structural Equation Modeling (SEM) as the data analysis technique, performed in SmartPLS 3.0. The population comprised all 41,155 SMEs in Tangerang City; based on Morgan's method for determining sample size, 380 SMEs were sampled. The data analysis shows that transformational leadership has a significant effect on organizational performance but no significant effect on work innovation capabilities; servant leadership has a significant effect on organizational performance but no significant effect on work innovation capabilities; digital transformation has no significant effect on either organizational performance or work innovation capabilities; organizational performance has no significant effect on work innovation capabilities; and none of transformational leadership, servant leadership, or digital transformation has a significant effect on organizational performance through work innovation capabilities.
Introduction
The global Covid-19 pandemic, which has spread to every country in the world, has affected all sectors of people's lives. In Indonesia, almost all sectors are affected, especially the economy, which has long been the focus of society. The pandemic has caused a slowdown in the Indonesian economy and its various derivatives, and the Micro, Small and Medium Enterprises (MSME) sector, the most important part of the economy, is greatly affected. This worries all parties, because it has caused the MSME sector a significant setback. Many MSMEs are experiencing problems such as declining sales, capital shortages, hampered distribution, difficulty obtaining raw materials, declining production, and widespread layoffs of workers, which then become a threat to the national economy. According to Maroukian (2020), MSMEs, as drivers of the domestic economy and absorbers of labor, are facing a decline in productivity that results in a significant decrease in profits. A survey by the Asian Development Bank (ADB) on the impact of the pandemic on MSMEs in Indonesia found that 88% of micro-enterprises have run out of cash or savings, and more than 60% of these micro and small businesses have reduced their workforce. It must be admitted that the Covid-19 pandemic has reduced people's purchasing power. According to Quddus (2020), many consumers then keep their distance and switch to purchasing digitally. As a result, many MSMEs that still depended on offline sales have had to close their businesses due to declining purchases, so several MSME sectors that had not adapted digitally were severely affected and closed their outlets. Even so, the Covid-19 pandemic has indirectly prompted a new change in Indonesia's business style: the shift from offline business to digital business, also known as the phenomenon of digital entrepreneurship.
According to Fayaz (2017) and Quddus (2020), social media and marketplaces (intermediaries) can make it easier for MSME actors to get wider marketing access (Purwanto, 2019). It should be noted that MSMEs have become the most important pillar of the Indonesian economic ecosystem: 99% of business actors in Indonesia are in the MSME sector. According to El Toufaili (2017), Fayaz (2017), and Quddus (2020), MSMEs contribute 60% of the national gross domestic product and 97% of the absorption of labor affected by the pandemic, yet only 16 percent of existing MSMEs have entered the digital economy ecosystem. In the Indonesian context, the MSME sector is one of the main pillars of the country's economic fundamentals; during the 1998 economic crisis, it made a very positive contribution to saving the Indonesian economy. The same holds during the Covid-19 pandemic, where the MSME sector has great potential to accelerate national economic recovery. Therefore, an entrepreneurial model is needed that can adapt to technological advances. This is what gave birth to the digital entrepreneurship model, a business model that combines digital technology and entrepreneurship and produces a new kind of business phenomenon.
According to Fayaz (2017) and Quddus (2020), emerging technological paradigms harness the potential of collaboration and collective intelligence to design and launch stronger and more sustainable entrepreneurial initiatives. Four dimensions are related to digital entrepreneurship: digital actors (who), digital activities (what), digital motivation (why), and digital organization (how). According to the records of the Ministry of Cooperatives and SMEs, 10.25 million MSME players are currently connected to digital platforms. According to El Toufaili (2017), Fayaz (2017), and Quddus (2020), the Covid-19 pandemic has made Micro, Small and Medium Enterprises (MSMEs) stagnate, to the point that some have had to close their outlets. This is due to new habits under health protocols, which have reduced people's purchasing power. In the context of the Indonesian economy, the Covid-19 pandemic has ultimately encouraged the creation of a new ecosystem, digital entrepreneurship; in other words, this ecosystem has pushed MSME actors to start transforming into the digital realm. The government has also been trying to encourage the digitization of MSMEs in Indonesia, which can be traced in MSME actors' adoption of marketplaces and social media for digital marketing.
Social media platforms have also become a main focus of MSME actors, who have begun to adopt various supporting applications such as digital financial platforms. According to Quddus (2020), the digital transformation of MSMEs during the Covid-19 pandemic can help MSMEs re-develop their businesses; developing digital MSMEs during the pandemic can thus be an alternative for keeping the MSME sector alive. However, efforts to develop digital MSMEs must also be supported by the government and the Ministry of Cooperatives and SMEs, because MSME actors still need substantial support, guidance, and capital during the pandemic. If MSMEs, the government, and other supporting stakeholders work in synergy, the MSME digital transformation process can run well, and the government's target of increasing digital-based MSMEs can be realized quickly. According to Gui (2021), Hernández (2020), and Quddus (2020), the development of digital MSMEs after the Covid-19 pandemic must be a top priority for the government and all stakeholders so that the digital economy ecosystem in Indonesia continues to run well, since the development of digital MSMEs will also strengthen the digital entrepreneurship ecosystem in Indonesia.
According to Hernández (2020) and Kim (2020), the large number of MSME business actors illustrates that this sector has good potential to support the economy. The fairly good performance of MSMEs contributes to Gross Domestic Product (GDP) and employment, which underlies the push to increase MSME capacity, especially in facing the Industry 4.0 era. The industrial revolution 4.0, which has echoed in recent years, has changed ways of working in various fields, especially business: business people are starting to use information and telecommunications technology to run and support their business activities. The increasingly rapid movement toward digitalization is forcing business people to adapt to these changes. According to Fayaz (2017) and Quddus (2020), for large companies, changes in business patterns that lead to digitization are not too constrained, because large companies have sufficient resources; for MSMEs, however, this digitization process requires much preparation. To encourage digitization and make it easier for MSMEs to deal with these changes, the government has increased ease of access and transferred technology to MSME actors so that they can survive business competition. Mastery of digital devices and the internet is an absolute requirement for MSMEs that want to survive the competition. Research by Deloitte Access Economics (2015) states that consumers are increasingly accustomed to making decisions based on digital content and buying goods online. This is a challenge, but also a promising business opportunity, for MSMEs in Indonesia.
Based on this, the present research seeks to formulate a strategy for developing the digitization of MSMEs, both to support MSME development and to serve as input for MSME actors in implementing digitalization in their business processes.
Relationship between Transformational Leadership and Organizational Performance
Transformational Leadership Theory by Adwan et al. (2019) states that Transformational Leadership will have a positive influence on increasing Organizational Performance. Research by El Toufaili (2017), Fayaz (2017), and Quddus (2020) states that an increase in Transformational Leadership will encourage an increase in the Organizational Performance variable, and that Transformational Leadership has a significant influence on the Organizational Performance variable. Based on theoretical studies and previous studies, the following hypothesis is formulated: Hypothesis 1: There is a positive influence between Transformational Leadership and Organizational Performance.

Relationship between Servant Leadership and Work Innovation Capabilities

Gui (2021) and Hernández (2020) state that increasing Servant Leadership will encourage an increase in the Work Innovation Capabilities variable. According to research by El Toufaili (2017), Fayaz (2017), and Quddus (2020), Servant Leadership has a significant influence on the Work Innovation Capabilities variable. Based on theoretical studies and previous studies, a corresponding hypothesis is formulated.

Vol. 7, No. 2, 2021, pp. 225-238. Journal homepage: https://jurnal.iicet.org/index.php/jppi
Relationship between Digital Transformation and Organizational Performance
Digital Transformation Theory by Adwan et al. (2019) states that Digital Transformation will have a positive influence on increasing Organizational Performance. Research by Cheng et al. (2013), Christopher (2021), and El-Gohary (2013) states that an increase in Digital Transformation will encourage an increase in the Organizational Performance variable. According to Adwan et al. (2019), Bazazo et al. (2017), and Bui et al. (2006), Digital Transformation has a significant influence on the Organizational Performance variable. Based on theoretical studies and previous studies, the following hypothesis is formulated: Hypothesis 5: There is a positive influence between Digital Transformation and Organizational Performance.

The population of this study was MSMEs, and the sample was selected with non-probability sampling methods. Data were collected with online questionnaires, the main instrument for collecting primary data. Of the 400 questionnaires sent to respondents, 380 were returned and 20 were not.
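The sample of 380 drawn from a population of 41,155 SMEs is attributed to "Morgan's method", which is commonly formalized as the Krejcie-Morgan (1970) finite-population formula. The sketch below is our own illustration, not the paper's code; the function name and default parameters (95% confidence, 5% margin of error, P = 0.5) are assumptions:

```python
import math

def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Required sample size n for a finite population of size N.
    chi2: chi-square value for 1 df at 95% confidence; P: assumed
    population proportion (0.5 maximizes n); d: margin of error."""
    return math.ceil(chi2 * N * P * (1 - P) / (d * d * (N - 1) + chi2 * P * (1 - P)))

print(krejcie_morgan(41155))  # 381, close to the 380 SMEs sampled in the text
```

With very large populations the value plateaus near 384, which is why roughly 380 respondents is a typical sample size for surveys at this scale.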
Research Model
Based on theoretical studies and previous studies, the research model is structured as follows. Reliability is a measure of the internal consistency of the indicators of a construct, showing the degree to which each indicator reflects a common latent construct. According to Purwanto et al. (2020), the reliability requirement is a measure of the stability and consistency of the results (data) at different times. To test construct reliability, this study used the composite reliability value: a variable meets construct reliability if it has a composite reliability value > 0.7, and a Cronbach's alpha value > 0.7 indicates a good level of reliability for a variable (Purwanto et al., 2019).
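The two reliability thresholds quoted here (composite reliability > 0.7, Cronbach's alpha > 0.7) can be made concrete with a short sketch. The formulas are the standard ones; the toy item scores and loadings below are our own illustrative values, not the study's data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of per-indicator score lists,
    one entry per indicator, aligned by respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

def composite_reliability(loadings):
    """Composite reliability from standardized outer loadings:
    CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2))."""
    s = sum(loadings) ** 2
    e = sum(1 - l * l for l in loadings)
    return s / (s + e)

items = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [1, 2, 3, 4, 5]]  # toy indicator scores
print(round(cronbach_alpha(items), 3))                       # 1.0 for perfectly correlated items
print(round(composite_reliability([0.80, 0.75, 0.90]), 2))   # 0.86, above the 0.7 cut-off
```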
Validity Test
After the data are declared reliable, the next step is to test validity, including the loading factors, AVE, the Fornell-Larcker criterion, and cross loadings. The steps are to select the outer loading menu to see the loading factor test results, then the discriminant validity menu to see the Fornell-Larcker criterion and cross loading results. According to Purwanto et al. (2020), the validity test measures the extent to which a measuring instrument performs its function accurately, or provides appropriate measurement results, by calculating the correlation between each statement and the total score. In this study, the measurement validity test consisted of convergent validity and discriminant validity.
Convergent Validity
Convergent validity is used to measure the correlation between item scores and construct scores; the higher the correlation, the better the data validity (Purwanto, 2019). A measurement can be categorized as having convergent validity if the loading factor value is > 0.7. If all loading factors have values > 0.7, it can be concluded that all indicators meet the criteria for convergent validity and that no indicators need to be eliminated from the model.
Discriminant validity
Discriminant validity is a test of construct validity by predicting the size of the indicators of each block (Purwanto et al., 2019). Discriminant validity can be assessed by comparing the AVE value with the correlations between other constructs in the model: if the square root of the AVE is > 0.50, discriminant validity is reached (Purwanto et al., 2020). Discriminant validity is also assessed based on the Fornell-Larcker criterion measurement across constructs. In addition to the AVE value, another method for determining discriminant validity is the cross loading value: an indicator is said to meet discriminant validity if its cross loading value is 0.70 or more (Purwanto, 2020).
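The two discriminant-validity checks described here (AVE, and the Fornell-Larcker comparison of the square root of AVE against inter-construct correlations) can be sketched in a few lines. The construct names, loadings, and correlations below are invented for illustration:

```python
import math

def ave(loadings):
    """Average Variance Extracted: mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def fornell_larcker_ok(ave_by_construct, corr):
    """Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed
    its correlation with every other construct. corr is a dict of dicts."""
    for c, a in ave_by_construct.items():
        for other, r in corr[c].items():
            if other != c and math.sqrt(a) <= abs(r):
                return False
    return True

aves = {"TL": ave([0.80, 0.85, 0.90]), "OP": ave([0.75, 0.80])}
corr = {"TL": {"TL": 1.0, "OP": 0.6}, "OP": {"TL": 0.6, "OP": 1.0}}
print(all(a > 0.5 for a in aves.values()), fornell_larcker_ok(aves, corr))  # True True
```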
Structural model (inner model)
The structural model (inner model) is a pattern of relationships among the research variables. The structural model is evaluated by looking at the coefficients between variables and the coefficient of determination (R2). The coefficient of determination (R2) essentially measures how far the model can explain variation in the dependent variable; a value close to 1 means that the independent variables provide almost all the information needed to predict the variation of the dependent variable (Purwanto, 2021). This test aims to determine how well the independent variables explain the dependent variable. The R-square (R2) value is a measure of the proportion of variation in the affected variable that can be explained by the variables that influence it. According to Purwanto et al. (2020), if a study uses more than two independent variables, the adjusted R-square (adjusted R2) is used; the adjusted R-square is always smaller than the R-square. As the R2 value approaches 1, the limiting criteria fall into three classifications: if R2 = 0.67, the model is substantial (strong); if R2 = 0.33, the model is moderate (medium); if R2 = 0.19, the model is weak (bad). This study uses the adjusted R-square (adjusted R2), because it has more than two independent variables.
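The adjusted R2 formula and the 0.67/0.33/0.19 rule of thumb quoted in this passage can be written out directly. The sample values fed in below are illustrative, not the study's actual results:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1); penalizes R^2
    for the number of predictors k given n observations, so it is
    smaller than the raw R^2 whenever k >= 1 and R^2 < 1."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def classify_r2(r2):
    """Rule of thumb used in the text: 0.67 strong, 0.33 moderate, 0.19 weak."""
    if r2 >= 0.67:
        return "strong"
    if r2 >= 0.33:
        return "moderate"
    if r2 >= 0.19:
        return "weak"
    return "negligible"

print(classify_r2(0.45))                    # moderate
print(round(adjusted_r2(0.45, 380, 3), 3))  # 0.446 with n=380 respondents, k=3 predictors
```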
Hypothesis Testing
After a research model is believed to fit, hypothesis testing can be performed. The next step is to test the hypotheses built in this study. The bootstrapping method is applied to the sample; testing with bootstrapping is intended to minimize the problem of non-normal research data. The last step of testing with the SmartPLS application is hypothesis testing, carried out by looking at the bootstrapping values: select the calculate menu, then, from the menu options that appear, select bootstrapping, and the desired data will appear. Hypothesis testing in this study is based on the regression weights, comparing the p-value with a significance level of 5% (α = 5%). A hypothesis is said to be significant if it has a probability value (p-value) < 5%.
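The bootstrapping decision rule described here (resample the cases, re-estimate the coefficient each time, and compare the resulting statistic against the 5% level) can be sketched with a simple OLS slope standing in for a PLS path coefficient. The data and seed below are toy values of ours, not output from SmartPLS:

```python
import random

def ols_slope(xs, ys):
    """OLS slope for a single predictor (stand-in for a PLS path coefficient)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx

def bootstrap_t(xs, ys, reps=1000, seed=42):
    """Resample cases with replacement, re-estimate the slope each time,
    and return t = original estimate / bootstrap standard error."""
    rng = random.Random(seed)
    n = len(xs)
    est = ols_slope(xs, ys)
    boots = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        boots.append(ols_slope([xs[i] for i in idx], [ys[i] for i in idx]))
    mean = sum(boots) / reps
    se = (sum((b - mean) ** 2 for b in boots) / (reps - 1)) ** 0.5
    return est / se

xs = list(range(20))                                # toy predictor
ys = [2 * x + 0.3 * ((x * 7) % 5 - 2) for x in xs]  # strong linear relation plus noise
t = bootstrap_t(xs, ys)
print(t > 1.96)  # |t| > 1.96 corresponds to p < 0.05, two-tailed
```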
Reliability Test
As described above, construct reliability was tested with the composite reliability value; a variable meets construct reliability if its composite reliability is > 0.7, and a Cronbach's alpha value > 0.6 indicates a good level of reliability (Purwanto et al., 2019). Table 1 shows the results of the reliability test using SmartPLS: all composite reliability values are greater than 0.7, which means that all variables are reliable and meet the test criteria. Furthermore, all Cronbach's alpha values are more than 0.6, indicating that the reliability of the variables also meets the criteria.
Validity test
After the data were declared reliable, validity was tested next, covering the loading factors, AVE, the Fornell-Larcker criterion, and cross loadings, with the measurement validity test consisting of convergent validity and discriminant validity (Purwanto et al., 2020).
Convergent Validity
Convergent validity is used to measure the correlation between item scores and construct scores; the higher the correlation, the better the data validity (Purwanto, 2019). A measurement can be categorized as having convergent validity if the loading factor value is > 0.7. Figure 2 shows that all loading factors have values > 0.7, so it can be concluded that all indicators meet the criteria for convergent validity, and no indicators were eliminated from the model.
Discriminant validity
As described in the methods, discriminant validity can be assessed by comparing the AVE value with the correlations between other constructs in the model: if the square root of the AVE is > 0.50, discriminant validity is reached (Purwanto et al., 2020).
Based on Table 2, the AVE values for all variables are > 0.50, so the measurement model is valid in terms of discriminant validity. In addition, discriminant validity was assessed with the Fornell-Larcker criterion: if the correlation of a construct with each of its indicators is greater than its correlation with other constructs, the latent construct predicts its indicators better than other constructs do (Purwanto et al., 2019).
Structural model (inner model)
As described in the methods, the structural model was evaluated by looking at the coefficients between variables and the coefficient of determination (R2); because this study uses more than two independent variables, the adjusted R-square (adjusted R2) is reported.
Relationship between Transformational Leadership and Organizational Performance
Based on the results of data analysis using SmartPLS, the p-value is 0.000 < 0.050, so it can be concluded that Transformational Leadership has a significant effect on Organizational Performance: an increase in the transformational leadership variable will significantly increase the Organizational Performance variable, and a decrease will significantly decrease it. This result is in line with research by El Toufaili (2017), Fayaz (2017), and Quddus (2020) showing that Transformational Leadership has a positive and significant effect on Organizational Performance.
Relationship between Transformational Leadership and Work Innovation Capabilities
Based on the results of data analysis using SmartPLS, the p-value is 0.684 > 0.050, so it can be concluded that Transformational Leadership has no significant effect on Work Innovation Capabilities: changes in the transformational leadership variable have no significant effect on the Work Innovation Capabilities variable.
Relationship between Servant Leadership and Organizational Performance
Based on the results of data analysis using SmartPLS, the p-value is 0.003 < 0.050, so it can be concluded that Servant Leadership has a significant effect on Organizational Performance: an increase in the Servant Leadership variable will significantly increase the Organizational Performance variable, and a decrease will significantly decrease it. This result is in line with research by Maroukian (2020) and Gandolfi (2018) showing that Servant Leadership has a positive and significant effect on Organizational Performance.
Relationship between Servant Leadership and Work Innovation Capabilities
Based on the results of data analysis using SmartPLS, the p-value is 0.297 > 0.050, so it can be concluded that Servant Leadership has no significant effect on Work Innovation Capabilities: changes in the Servant Leadership variable have no significant effect on the Work Innovation Capabilities variable. These results are not in line with research by Hernández (2020), Kim (2020), and El Toufaili (2017) showing that Servant Leadership has a positive and significant effect on Work Innovation Capabilities.
Relationship between Digital Transformation and Organizational Performance
Based on the results of data analysis using SmartPLS, the p-value is 0.345 > 0.050, so it can be concluded that Digital Transformation has no significant effect on Organizational Performance: changes in the Digital Transformation variable have no significant effect on the Organizational Performance variable. This result is not in line with research by Maroukian (2020), Gandolfi (2018), and Burawat (2019) showing that Digital Transformation has a positive and significant effect on Organizational Performance.
Relationship between Digital Transformation and Work Innovation Capabilities
Based on the results of data analysis using SmartPLS, the p-value is 0.345 > 0.050, so it can be concluded that Digital Transformation has no significant effect on Work Innovation Capabilities: changes in the Digital Transformation variable have no significant effect on the Work Innovation Capabilities variable. This result is not in line with research by Kim (2020), El Toufaili (2017), Fayaz (2017), and Quddus (2020) showing that Digital Transformation has a positive and significant effect on Work Innovation Capabilities.
Relationship between Organizational Performance and Work Innovation Capabilities
Based on the results of data analysis using SmartPLS, the p-value is 0.392 > 0.050, so it can be concluded that Organizational Performance has no significant effect on Work Innovation Capabilities: changes in the Organizational Performance variable have no significant effect on the Work Innovation Capabilities variable. This result is not in line with research by El Toufaili (2017), Fayaz (2017), and Quddus (2020) showing that Organizational Performance has a positive and significant effect on Work Innovation Capabilities.
Relationship between Transformational Leadership and Organizational Performance through Work Innovation Capabilities
Based on the results of data analysis using SmartPLS, the p-value is 0.813 > 0.050, so it can be concluded that Transformational Leadership has no significant effect on Organizational Performance through Work Innovation Capabilities: changes in the transformational leadership variable have no significant indirect effect on the Organizational Performance variable through Work Innovation Capabilities. This result is not in line with research by Kim (2020), El Toufaili (2017), Fayaz (2017), and Quddus (2020) showing that Transformational Leadership has a positive and significant effect on Organizational Performance through Work Innovation Capabilities.
Relationship between Servant Leadership and Organizational Performance through Work Innovation Capabilities
Based on the results of data analysis using SmartPLS, the p-value is 0.582 > 0.050, so it can be concluded that Servant Leadership has no significant effect on Organizational Performance through Work Innovation Capabilities: changes in the Servant Leadership variable have no significant indirect effect on the Organizational Performance variable through Work Innovation Capabilities. This result is not in line with research by Maroukian (2020) and Gandolfi (2018) showing that Servant Leadership has a positive and significant effect on Organizational Performance through Work Innovation Capabilities.
Crisis of the Asian gut: associations among diet, microbiota, and metabolic diseases
The increase of lifestyle-related diseases in Asia has recently become remarkably serious. This has been associated with a change in dietary habits that may alter the complex gut microbiota and its metabolic function in Asian people. Notably, the penetration of modern Western diets into Asia, which has been accompanied by an increase in fat content and decrease in plant-derived dietary fiber, is restructuring the Asian gut microbiome. In this review, we introduce the current status of obesity and diabetes in Asia and discuss the links of changes in dietary style with gut microbiota alterations which may predispose Asian people to metabolic diseases.
INTRODUCTION
Several hundred microbial species form a distinctive complex ecological community, namely the gut microbiota, in the digestive tract of a person. It influences the host's physiology and susceptibility to disease via direct contact with host cells [1] or via its collective metabolic activities [2]. Multiple intrinsic and extrinsic factors, such as diet, host genetics and physiology, drugs and disease, and living environments, are involved in shaping the gut microbiota and its metabolic performance [3]. Notably, diet is considered one of the key drivers of the gut microbial community, as it supplies nutrition and alters the environment for the microbes [4][5][6][7].
Asian diets vary remarkably within the continent and differ significantly from those on other continents; their main characteristics are that they are high in carbohydrates, fiber, vitamins, and antioxidants but low in concentrated fat [8]. Traditional Asian diets are generally considered to contain foods with beneficial effects against metabolic diseases, and some are reported to help promote the colonization of beneficial gut bacteria and inhibit that of non-beneficial ones [7,[9][10][11][12]. However, contemporary diets have recently influenced the dietary lifestyles of Asian people, as the rapid development of global food-service chains has distorted local eating habits. For instance, a trend toward increased consumption of calorie-dense diets by Asian people, such as diets containing refined carbohydrates, fat, red meat, and little fiber, has been confirmed by several studies [13][14][15]. This occasionally disturbs the gut microbiota of Asian people, eventually leading to dysfunction of their gut microbial communities [16][17][18].
The development of next-generation sequencing (NGS) technology and progress of computer- and database-assisted bioinformatics has revolutionized the genomic research fields in the past two decades [19]. This includes studies on the microbiome of the human digestive system, and we have gained insight into its variation among the peoples of the world, including Asian people. The enterotype has been proposed as a general concept to type the human gut microbiome throughout the world and was first introduced by Arumugam et al. [20], although it remains controversial owing to the inconsistency and lack of discreteness of the enterotyping [21]. Three enterotypes were originally characterized with high abundances of Bacteroides, Prevotella, and Ruminococcus, respectively, which are present regardless of ethnicity, gender, age, and body mass index (BMI) [20]. Following this first report, a number of studies addressed the links of various microbiome markers and enterotypes with host phenotypes [22]. At present, we understand that the gut microbiota plays an indispensable role as an interface between foods and host health. Asia is the world's most populous continent, accounting for approximately 60% of the world population, and contains a great diversity of ethnicities with large variations in culture and lifestyle, especially in diet [23]. It can be said that each ethnic group has its own dietary culture. In this context, Asia is an attractive field to study the interplay of the gut microbial community and diet together with their effects on host health. Notably, Asian people have specific physiological aspects involved in the vulnerability to metabolic diseases. This therefore warrants capturing of the current status of the Asian gut in association with host metabolic disorders.
In this review, we introduce the current status of metabolic diseases, such as obesity and type 2 diabetes (T2D), and the risk factors for them that are increasing and becoming recent social problems in Asia. We also discuss the associations between diet and gut microbiota that cause metabolic diseases in Asian people by drawing upon the latest studies, including our own data.
Prevalence of obesity and diabetes in Asia
Economic development and the ubiquity of low-priced, highly processed diets in the past half century in Asian countries has been critically associated with an epidemic of obesity and diabetes among Asian people. Indeed, people in Asia have tended to shift away from traditional plant-based diets to calorie-dense foods rich in fat, animal protein, and simple sugars that lead to excessive weight gain and unhealthiness [13][14][15]. A study released by the Asian Development Bank Institute (ADBI) indicated that the obese population (BMI >25) in Asia has reached about one billion, corresponding to two out of every five adults [24]. The obese populations differ among the regions of Asia, with the population being extremely high in the Pacific region, exceeding 50%, and the rate in Central Asia following closely behind this. On the other hand, the rate was originally low in Southeast Asia (19% of the total population in 1990), but it increased from 1990 to 2013 to 38.6%, the highest rate of increase in Asia. In particular, Malaysia is the fattest country in Asia at present (46% of the total population has a BMI >25), with the nation consuming particularly high amounts of sugar in the form of various sugary drinks, an average of about 3 kg per person per year as reported by the World Health Organization (WHO) (https://www.who.int/malaysia/news/commentaries/detail/sugary-drinks-tax-important-first-step-but-obesity-in-malaysia-demands-further-action). It should also be noted that Asia has recently become the epicenter of a diabetes epidemic [25]. The International Diabetes Federation stated that at least 463 million people among the total world population in 2019 were suffering from diabetes and that more than half of the affected people lived in Asia [26]. Moreover, five Asian countries are ranked among the world's top ten countries for the number of adults (20-79 years) with diabetes, and six Asian countries are among the top ten for impaired glucose tolerance (IGT; Table 1). 
The abovementioned points indicate that metabolic diseases are a serious concern in Asian people.
Specific phenotypes of Asian people leading to the risk of obesity and T2D
Abdominal obesity is one of the common phenotypes among Asian people. Normally, Asian people have lower BMIs compared with other ancestry groups but have higher body fat distributions, which causes susceptibility to metabolic abnormalities related to obesity, such as metabolic syndrome (MetS), cardiovascular diseases (CVDs), and T2D [27]. A study by Deurenberg et al. indicated that the body fat percentage of Asian people is around 3-5% higher than that of Caucasian people with the same BMI [28], and this condition is termed the Y-Y paradox [29]. Comparison between populations of the same sex, age, and BMI suggested that Filipino people with a higher body fat percentage show higher risks for T2D and MetS than Caucasians [30]. People from China and India are highly predisposed to abdominal obesity. Jia et al. reported that one-third of Chinese adults are overweight or obese and that 10-20% of all adults are affected by MetS [31]. Moreover, Chinese people tend to have an apple-shaped body rather than pear-shaped body, which represent abdominal obesity and generalized obesity, respectively [32]. Similar to China, obesity in India generally results from abdominal obesity. Ahirwar and Mondal presented a study released by the Indian Council of Medical Research-India Diabetes (ICMR-INDIAB) in 2015 stating that the prevalence rate of abdominal obesity in India is higher than that of general obesity, with the rates varying among the regions from 16.9% to 36.3% and 11.8% to 31.3%, respectively [33]. Moreover, abdominal obesity is one of the critical risk factors for the development of CVDs in Indian people [33]. Obesity can be linked with an increased risk of T2D, namely diabesity [34]. Accumulation of body fat is associated with inflammation and is one of the major contributing factors to T2D [35]. 
In obese individuals, excessive calorie intake results in fat accumulation in adipose tissues and lipotoxicity in non-adipose tissues, which activates the production of nonesterified fatty acids (NEFAs), glycerol, and pro-inflammatory cytokines; antidiabetic hormones, such as leptin and adiponectin, are also secreted from adipose tissues. The former action impairs insulin function and causes low-grade inflammation, resulting in a loss of insulin sensitivity, referred to as insulin resistance; long-term insulin resistance leads to a constantly elevated systemic glucose concentration and ultimately drives the development of T2D. It should be noted that not all obese individuals develop T2D; this is possibly explained by specific anti-T2D metabolic phenotypes, such as an increased adipose tissue capacity for lipogenesis in metabolically normal obese people [36,37]. In Asia, diabesity has been increasing gradually, although lean diabetes is still highly prevalent [26,[38][39][40].
Asian people are known to have specific physiological aspects involved in vulnerability to glucose homeostasis. For example, Asian people have a high risk of insulin resistance caused by dysfunctional pancreatic insulin secretion [41][42][43]. A number of studies have shown that slight defects in insulin secretion capacity are indicated in healthy Asian people when assessed by their glucose tolerance [41,42]. Reduced pancreatic β-cell mass is normally found in Asian people, particularly East Asians. A study by Yoon et al. suggested that the impaired insulin secretion of T2D patients results from an inadequate pancreatic β-cell mass and/or functional defects within the β-cells themselves [43]. Moreover, Kodama et al. speculated that even a small decrease in insulin secretory function in East Asians leads to a rapid decrease in the threshold level of insulin resistance and the development of T2D and that this instability and the vulnerability of glucose homeostasis due to their lower β-cell function has increased the prevalence of diabetes in East Asia in recent decades [44].
Gut microbiota of Asian children with a change in their dietary lifestyle
We investigated the gut microbiomes of school-aged children in five Asian countries, including Japan, China, Taiwan, Indonesia, and Thailand [45]. Urban and rural cities were chosen in each country; the microbiome profiles highly reflected the respective countries and residences, which we thought might reflect, at least in part, differences in dietary habits [45]. Among the subjects, two enterotype-like clusters were observed, which were defined by a high abundance of Prevotella (P type) or high abundances of Bacteroides and Bifidobacterium (BB type; Fig. 1A and 1B). Whole shotgun metagenomics data for each microbiome type indicated that the P type microbiome is enriched with genes encoding plant-polysaccharide-degrading enzymes, such as amylase and pectinase, while the BB type microbiome is enriched with genes involved in bile acid metabolism (Fig. 1C). Children from East Asian countries primarily harbored the BB type, whereas those from Southeast Asian countries primarily harbored the P type, except for children from Bangkok, Thailand (Fig. 1D). The gut microbiota of Thai children likely reflects a shift of dietary habits from a traditional to a modern style that commenced in urban areas.
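The two enterotype-like clusters above are defined by their dominant genera, which can be illustrated with a minimal classifier. This is a toy sketch assuming a simple dominance rule with made-up abundance values; the study itself typed samples from whole community profiles, not this heuristic.

```python
# Toy sketch: label a sample P type or BB type from genus-level relative
# abundances. The dominance rule and the numbers are illustrative
# assumptions, not the clustering method used in the study.

def classify_enterotype(abundances):
    """abundances: dict mapping genus name -> relative abundance (0-1)."""
    p_score = abundances.get("Prevotella", 0.0)
    bb_score = (abundances.get("Bacteroides", 0.0)
                + abundances.get("Bifidobacterium", 0.0))
    return "P" if p_score > bb_score else "BB"

samples = {
    "child_A": {"Prevotella": 0.45, "Bacteroides": 0.10, "Bifidobacterium": 0.05},
    "child_B": {"Prevotella": 0.05, "Bacteroides": 0.30, "Bifidobacterium": 0.20},
}
labels = {name: classify_enterotype(ab) for name, ab in samples.items()}
# labels -> {"child_A": "P", "child_B": "BB"}
```

In practice, enterotyping is done on full community profiles (e.g., distance-based clustering), but the dominant-genus intuition above matches how the P and BB types are described in the text.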
To address the impact of changes in dietary style, we performed two cross-sectional studies comparing the gut microbiota of urban and rural children: one was conducted on Leyte island in the Philippines ( Fig. 2A) [16], and the other was conducted in the capital and a rural city in Thailand (Fig. 3A) [17]. Interestingly, Rohrer's index was significantly higher in urban children in both countries; notably, the index in urban cities was around 145, at the border between normal and obesity, whereas in rural cities it was in the middle of the normal range, suggesting that urban children in these developing countries tend to suffer from obesity ( Fig. 2B and 3B). This raises the question of how the gut microbiota of urban children compares with that of rural children, who maintain a standard body mass under a traditional diet.
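Rohrer's index used above is a ponderal index: weight divided by the cube of height, conventionally scaled so that child values land in roughly the 100-160 range. A minimal sketch follows; the 145 border is the value quoted above for these cohorts, not a universal cutoff.

```python
def rohrer_index(weight_kg, height_cm):
    """Rohrer's (ponderal) index: weight / height^3, scaled by 10^7
    so typical values for children fall roughly in the 100-160 range."""
    return weight_kg / height_cm**3 * 1e7

def tends_toward_obesity(weight_kg, height_cm, border=145.0):
    # 'border' uses the ~145 normal/obesity boundary cited in the text;
    # actual clinical cutoffs vary by age and population.
    return rohrer_index(weight_kg, height_cm) >= border

# e.g., a 30 kg child at 130 cm scores ~136.5 (mid-normal range)
mid_normal = rohrer_index(30, 130)
```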
To begin, we compared the daily diets and fecal microbiota of children living in an urban city (Ormoc) and a rural city (Baybay) on Leyte island [16]. The results of a dietary survey and examination of fecal microbiota revealed that the children in Ormoc consumed modern high-fat foods, such as snacks and fast foods, with the associated total fat consumption accounting for 27.2% of their total energy intake, which corresponded closely to the levels seen in children in Western countries. On the other hand, the children in Baybay maintained a traditional dietary style, including the daily consumption of regional fruits, green mangos and bananas, and their total fat consumption rate was 18.1% (Fig. 2C). Regarding microbiota, the children in these two cities harbored two distinctive gut microbiotas, namely BB type and P type microbiotas, respectively. A redundancy analysis indicated that the BB type microbiota in Ormoc is driven by high fat consumption, reflecting the introduction of Western foods (Fig. 2D). It is striking that the populations of these two cities, which were of the same ethnicity and only 60 km apart on the same island, had opposing enterotypes that appeared to be driven by the introduction of a Western diet. Interestingly, the fat consumption of the children in both cities on Leyte island showed a positive correlation with the Firmicutes-to-Bacteroidetes (F/B) ratio, which is known as a gut biomarker of obesity (Fig. 2E). In our Thai study, different aspects were observed [17]. Comparison of the daily diet profiles of Thai children who lived in a rural city, Buriram, and an urban city, Bangkok, clearly indicated that an urban dietary lifestyle has penetrated Bangkok. Dietary records showed that Bangkok children consumed more fat and simple sugars and far fewer vegetables, whereas Buriram children maintained a traditional Thai plant-based diet (Fig. 3C and 3D). 
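The two quantities behind the Fig. 2E correlation can be sketched in a few lines: an F/B ratio per sample and a Spearman rank correlation against fat intake. All numbers below are illustrative placeholders (not data from the study), and the rank function omits tie handling for brevity.

```python
# Sketch of the F/B-ratio vs. fat-intake correlation; placeholder data.

def fb_ratio(firmicutes_count, bacteroidetes_count):
    """Firmicutes-to-Bacteroidetes ratio from phylum-level counts."""
    return firmicutes_count / bacteroidetes_count

def _ranks(values):
    # Simple ranking without tie handling (fine for this toy example).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = float(rank)
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

fat_pct = [15.0, 18.1, 22.0, 27.2]   # % of energy from fat (toy values)
ratios = [fb_ratio(f, b) for f, b in [(40, 50), (45, 48), (55, 40), (70, 30)]]
rho = spearman(fat_pct, ratios)       # 1.0 for this monotone toy data
```

For real data, library routines with proper tie handling (e.g., `scipy.stats.spearmanr`) would be preferred over this sketch.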
Comparative microbiomics did not show an enterotype shift like the Leyte children but did show slight changes in the abundance of Peptostreptococcus, which increased in the Buriram children. On the other hand, comparative metabolomics showed some distinct types among the children; one type showed a higher level of short-chain fatty acids (SCFAs) in a cluster mainly consisting of Buriram children, and the other type showed a higher level of amino acids and lower level of SCFAs in a cluster mainly consisting of Bangkok children. Taken together, the fecal butyrate and propionate concentrations were significantly lower in Bangkok children than Buriram children (Fig. 3E). This suggests that urban dietary habits with lower consumption of vegetables results in a reduction in colonic SCFA fermentation in Thai children.
These two cross-sectional studies may indicate a gut microbiota crisis in Asian children. Loss of SCFA fermentation in Bangkok children due to the shift away from traditional Thai foods suggests the advantage of Thai traditional foods and disadvantage of urban foods. Thailand is known as the "kitchen of the world", and traditional Thai foods may be representative of foods in Southeast Asia. The loss of benefits from this tradition and the impact of this on the gut microbiota of children may not be a problem only in Thailand but may also affect the whole area of Southeast Asia. The enterotype shift in children on Leyte island represents a strong impact of the introduction of Western foods to Asian people. The penetration of Western diets may drive the enterotype shift continuously throughout Southeast Asia. We should keep an eye on the impact of the enterotype shift on the health of Asian people.
Impact of the penetration of modern diets on gut microbiota in Asian people in developing areas
There is a clear difference in terms of dietary patterns between Western and Asian countries. Asian diets generally integrate several tastes, including sweet, sour, salty, spicy, and bitter tastes, and their main characteristics are known to be high fiber, vitamin, mineral, and antioxidant contents together with a high carbohydrate content but low concentrated and total fat contents [8]. Western diets, notably urban diets, tend to be short on fiber and to contain high-fat dairy products and excessive amounts of refined and processed foods, alcohol, salt, red meats, sugary drinks, snacks, eggs, and butter, meaning that they are enriched in terms of total fat, animal proteins, and refined sugars [8]. Noticeably, the Western diet is penetrating into traditional diets in developing areas as a part of urban lifestyles introduced in conjunction with ongoing economic growth.
Asian diets are generally considered to contain foods that protect against chronic metabolic disease and to promote beneficial gut bacteria while inhibiting non-beneficial ones. For example, studies have reported that a Japanese diet containing high levels of fiber promotes high numbers of Bifidobacterium but low numbers of gut pathogenic Clostridium spp. [10,11]. Moreover, endosperm protein extracted from a Japanese rice cultivar, Koshihikari, alters the gut microbiota diversity and is associated with the suppression of high-fat diet (HFD)-induced obesity progression by suppressing the growth of endotoxin-related chronic inflammatory Escherichia coli in mice [9]. Another major component of the Asian diet is vegetables. The Thai diet is known to be rich in vegetables (FAO, United Nations, http://www.fao.org/3/ac145e/ac145e02.htm). As mentioned above, the study of diets in relation to the gut microbiome and metabolome in the Thai cohort indicated that children who consume a traditional vegetable-based diet have a greater bacterial diversity with significantly higher levels of fecal SCFAs, mainly butyrate and propionate, compared with children who consume a modern high-fat diet (Fig. 3E) [17]. Another study investigating healthy Thai vegetarians indicated that their microbial communities are mainly driven by Prevotella but that they have a low abundance of potential pathogen varieties [12]. A Western diet with an urban lifestyle is generally considered to be harmful to health. One of the major characteristics of Western diets is that they are rich in total fat, so-called HFDs, and the consumption of a HFD has been linked to low-grade inflammation related to metabolic disease [46][47][48]. An animal study by Bortolin et al. 
found that a diet formulated based on a Western style generates excessive fat accumulation in mice and results in metabolic dysfunctions, as evaluated by significantly higher levels of several metabolic biomarkers associated with obesity-related diseases, mainly hepatic steatosis, inflammation, and insulin resistance, compared with a dietary control group [49]. Moreover, it alters the microbial community, resulting in gut dysbiosis [49]. In humans, a randomized crossover clinical trial was performed by Shin et al. to investigate the differences between a Korean diet (more plant-derived and fewer animal components) and two American diets (more animal and fewer plant-derived components) [7]. The results of four-week dietary interventions in overweight adults indicated that the Korean diet promotes gut microbial diversity and decreases the level of branched-chain amino acids, increased circulating levels of which are known to induce insulin resistance and aggravate glucose intolerance [50], which was the opposite of the American diets [7]. A cross-sectional study indicated that gut microbiota of European children, who ordinarily consume high-calorie diets, harbor increased abundances of Proteobacteria and decreased abundances of Prevotella [6].
As mentioned in the previous subsection regarding the changes in the gut microbiotas of Asian children with a change in their dietary lifestyle, regional comparative studies of Asian children suggested that their gut microbiotas are recently being affected by the modernization of their diets. This leads to the question of how adults are affected. To discuss this point, we present here a dataset we obtained through the Asian Microbiome Project (http://www.agr.kyushu-u.ac.jp/lab/microbt/AMP/). A principal component analysis (PCA) was performed by using gut microbial compositions at the genus level of adults from six Asian countries, including Japan, China, South Korea, Mongolia, Indonesia, and Thailand (Fig. 4A). As shown in Fig. 4B, three enterotype-like clusters, namely a Ruminococcus type (R type) in addition to the P and BB types, were observed, although the borders between the clusters were rather unclear. Differences in the distribution of enterotypes were present among the Asian countries; for example, Japanese samples were mostly typed as the BB type, and Mongolian and Indonesian samples were highly classified as the P type (Fig. 4C). Although it is known that the P type preferentially colonizes the intestines of people who favor plant-based diets, Mongolian people have dietary habits that mainly consist of the consumption of meats and dairy products, with less consumption of vegetables [51]. However, it is known that Mongolian people consume high amounts of whole-wheat products [52], which contain high contents of arabinoxylan, which is known to promote the colonization of Prevotella in the intestine [53]. Korean, Chinese (Beijing), and especially Thai samples were highly localized in the R-type cluster, namely the Ruminococcus-rich microbiome cluster (Fig. 4C), probably as a result of the fiber-rich diets in these countries [54].
To address the relationships between consumed foods and gut microbiota more directly, we performed cross-sectional studies in two countries, Mongolia [55] and the Philippines [56,57]. In each country, we collected food consumption data as well as gut microbiome data in urban and rural sites. In Mongolia, we collected samples in the capital city, Ulaanbaatar, and a rural city, Bulgan. The food consumption data showed contrasting dietary habits: people in Bulgan mainly consumed a traditional Mongolian diet, whereas people in Ulaanbaatar consumed far fewer traditional foods (Fig. 5A). In the Philippines study, we collected samples in the capital city, Manila, and a rural area, Albay. The food consumption data indicated that people in Manila consumed a high ratio of fat, while people in Albay consumed a high ratio of carbohydrates (Fig. 5A). The microbiome data for these two countries similarly showed the tradeoff of Prevotella and Bacteroides between urban and rural areas in association with the penetration of urban diets (Fig. 5B). Figure 5B also shows Japanese data indicating that Prevotella is no longer present in most people. Japan has undergone drastic development that began in the second half of the twentieth century, and people in cities now live completely urban lifestyles. Taken together, the results of these studies suggest that dietary urbanization has been a strong driving force for the shift in enterotype from the P to BB type in Asian people.
Impact of tradeoff between Prevotella and Bacteroides on the health of Asians
Wu et al. investigated the links of long-term dietary patterns with enterotypes [5]. They found that Bacteroides was associated with protein and animal fat diets, whereas Prevotella was associated with plant-based carbohydrate diets. Similar results were revealed in the Asian cohort studies presented in the previous subsection regarding the impact of the penetration of modern diets on gut microbiota in Asian people in developing areas, suggesting that a tradeoff between Prevotella-type and Bacteroides-type microbiomes is ongoing due to the penetration of modern Western-type diets [16,45]. Recently, the Bacteroides enterotype was reported to be associated with a high prevalence of T2D. A study conducted by Wang et al. indicated a high prevalence of T2D in Chinese subjects with Bacteroides-type microbiomes and elevated levels of blood lipopolysaccharide (LPS), diamine oxidase (DAO), and tumor necrosis factor-alpha [58]. Their study suggests that these T2D patients suffer from endotoxemia and low-grade inflammation, causing impaired insulin sensitivity [58]. On the other hand, beneficial effects of Prevotella colonization have been reported in some studies. The response to a barley kernel diet with improved glucose tolerance is dependent on a high ratio of Prevotella to Bacteroides in the human intestine [59]. Mice administered Prevotella copri by gavage showed improved glucose tolerance and an increase in hepatic glycogen storage via the modulation of intestinal gluconeogenesis and systemic energy homeostasis, with succinate serving as a source of intestinal glucose [60]. Furthermore, a subsequent study indicated that subjects with a high Prevotella level displayed an overall lower insulin response, lower IL-6 concentrations, and hunger sensations compared with a low Prevotella group, suggesting the benefit of a higher Prevotella/Bacteroides ratio in host metabolic regulation [61]. 
However, it should be noted that the opposite results have also been reported, with mice administered P. copri by gavage having significantly higher serum glucose levels after a three-week challenge compared with controls administered a sham gavage and the fecal P. copri abundance being positively correlated with homeostasis model assessment-insulin resistance (HOMA-IR) at two weeks post bacterial challenge [62].
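HOMA-IR, the insulin-resistance index used in the P. copri study above, is a simple arithmetic quantity: fasting glucose times fasting insulin divided by a normalizing constant (22.5 with glucose in mmol/L, or 405 with glucose in mg/dL). A minimal sketch; the example values are illustrative, not data from the cited study.

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """HOMA-IR = glucose (mmol/L) x insulin (uU/mL) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def homa_ir_mgdl(fasting_glucose_mg_dl, fasting_insulin_uU_ml):
    """Same index with glucose in mg/dL; the constant becomes 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# 5.0 mmol/L is 90 mg/dL, so both forms give the same value here:
x = homa_ir(5.0, 10.0)        # ~2.22
y = homa_ir_mgdl(90.0, 10.0)  # ~2.22
```

Higher values indicate greater insulin resistance; interpretive cutoffs vary by population and are not given here.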
We summarize the action of the enterotype shift from the Prevotella type to the Bacteroides type in Fig. 6A. Bacteroides appears to be a new player related to the promotion of T2D. A study by Sun et al. found that over-representation of Bacteroides fragilis in Chinese patients is associated with T2D via bile acid biotransformation [63]. Bile acids (BAs) are the end products of cholesterol catabolism and are synthesized only in the liver [64]. Produced BAs are pooled in the gallbladder and, after a meal, are secreted into the ileum. Most BAs are reabsorbed and recycled via the enterohepatic circulation; the remaining BAs enter the colon, are metabolized there by certain groups of gut bacteria, and are eventually voided in feces [64]. BAs regulate metabolism and pathophysiology in the liver as signaling molecules that activate several nuclear receptors. Among these receptors, farnesoid X receptor (FXR) plays an important role in the metabolic regulation related to obesity and diabetes [64]. Inhibition of intestinal FXR is suggested to have beneficial effects on glucose homeostasis [65][66][67]. Sun et al. elucidated that B. fragilis is involved in T2D through its bile salt hydrolase (BSH) function, which causes the loss of conjugated BAs, notably glycoursodeoxycholic acid (GUDCA) and tauroursodeoxycholic acid (TUDCA), which function as intestinal FXR antagonists and improve glucose homeostasis [63]. Moreover, the administration of an antidiabetic drug, metformin, in T2D patients has been suggested to suppress B. fragilis, causing a decrease in BSH activity with a consequent increase in GUDCA and TUDCA levels. Eventually, the recovered levels of these conjugated BAs improve the blood glucose level. A similar phenomenon was found in Indonesian T2D patients: a study by Therdtatha et al. found that a T2D group of subjects harboring a high abundance of B. fragilis showed low levels of conjugated BAs, especially TUDCA, the level of which was restored by the administration of metformin [18].
It can be inferred that the key mechanism of the abovementioned T2D promotion is BSH activity, the inhibition of which leads to the accumulation of conjugated BAs, which have an FXR-antagonistic effect. BSH, which is widely present in gut bacteria, hydrolyzes and deconjugates glycine or taurine from the sterol core of the primary BAs, and this activity is known to affect various aspects of health [68]. A number of studies have suggested that BSH activity is commonly found in strains of Gram-positive probiotic candidates, such as Lactobacillus spp. and Bifidobacterium spp., and other Gram-positive genera, such as Clostridium and Enterococcus [69,70]. In contrast, the activity in Gram-negative bacteria has been largely unexplored, and the characterization of BSH in members of the Bacteroidetes has only recently begun [71]. The underlying modes of action in the microbiota-BSH axis involved in T2D and other metabolic diseases require further investigation.
Finally, we show a model of T2D promotion by B. fragilis in Fig. 6B. B. fragilis synthesizes folate, which mediates a broader set of biotransformations known as one-carbon (C1) metabolism that serve as biosynthetic processes for amino acids, including glycine, serine, and methionine, required for bacterial growth and survival. Metformin is known to suppress folate and methionine production in gut bacteria (Fig. 6B (a)) [72]. Collectively, this indicates that metformin inhibits the growth of B. fragilis via the modification of folate and methionine metabolism, resulting in a decrease in BSH activity and a subsequent increase in the levels of TUDCA and GUDCA, which are intestinal FXR antagonists (Fig. 6B (b)) [63]. Inhibition of intestinal FXR is suggested to have beneficial effects on glucose homeostasis via the induction of glucagon-like peptide-1 (GLP-1) production (Fig. 6B (c)) [65]. Induction of GLP-1 production is controlled by Takeda-G-protein-receptor-5 (TGR5) [73,74], whereas FXR activation has been found to suppress the transcription and secretion of GLP-1 [66]. Subsequently, GLP-1 induces insulin secretion in β-cells (Fig. 6B (d)) and downregulates hepatic gluconeogenesis (Fig. 6B (e)) [66]. It is also known that intestinal FXR inhibition attenuates hepatic gluconeogenesis through suppression of the expression of genes involved in ceramide synthesis in the intestine (Fig. 6B (f)) [67]. Ceramides are lipid molecules that are known to disturb glucose homeostasis via inhibition of insulin signaling, leading to insulin resistance (Fig. 6B (g)) [75]. Moreover, they induce pancreatic β-cell apoptosis, which impairs insulin production (Fig. 6B (h)) [76]. Ceramides also impair adipose function through increased endoplasmic reticulum stress, resulting in a decrease in the ratio of beige to white adipocytes, leading to obesity and possibly also inflammation and insulin resistance (Fig. 6B (i)) [77]. 
It should also be noted that metformin controls the blood glucose level via the inhibition of gluconeogenesis by inhibiting mitochondrial glycerophosphate dehydrogenase (Fig. 6B (j)) [78].
CONCLUDING REMARKS
The rates of metabolic diseases have been increasing in Asia over the past few decades in conjunction with rapid socioeconomic growth, changing lifestyles, and the specific phenotypes of Asian people. A shift in dietary habits from traditional Asian to modern styles distorts the gut microbiota and metabolome, resulting in a worsening of health. Particularly in developing Asian countries, where diets have gradually modernized in urban areas, the enterotypes show a clear trend of change from the P type to the BB type. Diabetes is a metabolic disease that has recently become a serious problem among Asian people. In terms of gut microbiota-related diabetes, the Bacteroides enterotype can serve as a marker for a high risk of T2D in people in developing countries in Asia. In particular, an Indonesian study has indicated that B. fragilis, together with its BSH activity, shows a strong association with T2D. Collectively, this review warns of the common risks that the modernization of dietary habits poses to the health of Asian people.
CONFLICTS OF INTEREST
The authors declare that there are no conflicts of interest relevant to this article.
"year": 2022,
"sha1": "8615cf49ab2b4389a76aa229b36ee343ac4cdb5b",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/bmfh/advpub/0/advpub_2021-085/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8615cf49ab2b4389a76aa229b36ee343ac4cdb5b",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237576700 | pes2o/s2orc | v3-fos-license | Antiviral medicinal plants found in Lanna traditional medicine
Traditional medicine uses a multitude of plants to create medicinal formulations, some of which show antiviral properties that may be of benefit in treating emerging viral diseases, including Covid-19. Lanna, an ancient Kingdom in Northern Thailand, with a thriving culture that continues to this day and has a rich history of traditional medicine using local plants that is still practiced today. To find potential antiviral medicinal candidates, we examined ancient manuscripts, interviewed traditional healers practicing today, and inventoried current traditional medicines to catalogue 1400 medicinal formulations used in Lanna traditional medicine. We then narrowed this list to find those traditionally used to treat diseases that in their original use and descriptions most likely map to those we know today to be viral diseases. We identified the plants used in these formulations to create a list of 64 potential antiviral herbal candidates drawn from this ancient Lanna wisdom and matched these to the scientific literature to see which of these plants had already been shown to possess antiviral properties, generating a list of 64 potential antiviral medicinal candidates from Lanna traditional medicine worth further investigation for treating emerging viral diseases.
Introduction
Thai traditional medicines and pharmacopoeia are documented in traditional reference works and textbooks. Ethnomedicine, or indigenous medicine, continues to play an important role and is still practiced throughout Thailand, including in the private sector and through local healers and monks. The knowledge and skills of traditional healers were not usually recorded in writing but transferred from generation to generation, father to son and teacher to students. The healers and herbalists were usually the same people. The medicaments were usually compound medicines, characterized as having hot, cold, or equally hot and cold properties. Hot property medicines are derived from herbs that produce heat in the body, such as fresh ginger (Zingiber officinale Roscoe). Ginger has a sweet-hot property and releases the air element. Consuming ginger warms the body and can relieve fever-related insomnia and flatulence. Cold property medicines, such as Ficus racemosa L., can reduce the heat in the body and relieve fever and flu. Some herbs, such as Leucaena leucocephala (Lam.) de Wit, contain equal parts of hot and cold properties; medicines derived from these are used to stabilize and normalize the body temperature. The Royal Thai Government's Department of Thai Traditional and Alternative Medicine, Ministry of Public Health, restores and conserves traditional Thai medicine and knowledge, conducts pharmaceutical research on the traditional herbs, and promotes their use.
Lanna was an ancient kingdom in Northern Thailand, covering eight provinces-Chiang Mai, Chiang Rai, Phayao, Phrae, Nan, Lamphun, Lampang, and Mae Hong Son (Fig. 1). Chiang Mai was the center of the Lanna Kingdom. While the kingdom is gone, much of its culture remains in the people of the region. They retained a close relationship to nature and many of their traditional beliefs, which together provided continued support for traditional healers and the application of medicinal plants as used for centuries in treating those in the Lanna community.
Traditionally, two types of medicines, YaKae and YaTheep, are combined to treat these diseases. YaKae medicines help cure the patient, while YaTheep medicines drive off the toxin or lessen the course of the illness. If patients are treated with only YaTheep or only YaKae medicine, they will take longer to recover, and the disease will recur sooner. The most common YaKae and YaTheep medicines that we found were referenced as Ya Kae Ha Ton and Ya Sri Munluang (Table 1).
In the Lanna context, antiviral medicines typically use names associated with the wind and hot elements. The wind element is involved with the respiratory system; several symptoms were mentioned, including Khikoo (asthma) and Khaang khare (severe asthma). The hot element is involved with fever, such as fever with toxic substances, which refers to fever accompanied by inflammation. Fever with cough refers to prolonged fever affecting the lungs. Other fevers are E-suk-e-sai (chicken pox), Ngu-sawad (herpes zoster), Phee kuer (skin rash scattered with pustules) and Hadd (measles rash). The traditional preparations were first documented on palm leaves (Fig. 2) and mulberry bark (Fig. 3).
This paper aimed to gather the formulations of antiviral medicaments used in traditional healing from a variety of sources in the Lanna region of Northern Thailand: palm leaf manuscripts, mulberry bark manuscripts, translations, research studies, documents written by healers, interviews with folk healers, and an inventory of herbal plants and medicines in the region.
Antiviral medicinal plants in Lanna
We analyzed palm leaf and mulberry bark manuscripts from the eight-province Lanna region of Northern Thailand to find traditional medicinal formulas that might offer antiviral properties. We found and analyzed 1400 formulas, mapping their traditional uses to infections that are known today to be viral, such as influenza, several symptoms of toxic fever with or without cough, fever with asthma, chicken pox, herpes zoster, skin rashes with abscesses or pustules, and measles.
About 1400 formulas were selected and their constituent herbs were cross-referenced with current antiviral research; for example, anti-Herpes simplex virus type 1 (HSV-1) and type 2 (HSV-2) activity has been reported, with IC50 values of an ethanol extract of 0.022 and 0.1 µg/mL against HSV-1 and HSV-2, respectively (Omer et al., 2014; Sand et al., 2021; Fukuchi et al., 2016; Badam, 1997; Baltina et al., 2015). C. nutans, Pha Ya Yo, belongs to the family Acanthaceae. In Thailand, this plant has been used to treat skin ailments. Leaves of C. nutans are extracted with ethanol and used to prepare topical formulations to treat Herpes simplex virus and Varicella-zoster virus. The plant has been tested for several antiviral properties, including anti-Cyprinid herpesvirus 3, anti-HSV type 1, anti-HSV type 2, and anti-Dengue virus activity (Haetrakul et al., 2017; Sakdarat et al., 2009; Yoosook et al., 1999; Tu et al., 2014). Chlorophyll derivatives (phaeophytins) extracted from leaves of C. nutans showed anti-Herpes simplex virus type 1 (HSV-1) activity at subtoxic concentrations; these compounds could prevent entry of the virus into cells (Sakdarat et al., 2009).
B. balsamifera (Compositae), Nat Yai, has been used widely in Thai traditional medicine, including as a carminative, for relieving sinusitis pain, and for preparing bath water for mothers after giving birth. The main compounds have been identified and include essential oil, steroids, flavonoids, and coumarin (Ruangrungsi et al., 1985). Many antiviral properties have been tested, including anti-HIV-1 integrase and anti-Zika virus (ZIKV) activity. Antibacterial and antifungal activities against Bacillus cereus, Staphylococcus aureus, Candida albicans, and Enterobacter cloacae have also been reported (Sakee et al., 2011). C. sappan (Fabaceae) is distributed throughout Southeast Asia, Africa, and America. It is small to medium in size. The heartwood of C. sappan has been used to treat inflammatory disease, arthritis, and cancer. Chemical compounds extracted from Caesalpinia sappan L. include brazilein, brazilin, protosappanin A, 3-deoxysappanchalcone, sappanchalcone, and rhamnetin; these compounds showed neuraminidase (NA) inhibitory activity, i.e. anti-influenza activity (Liu et al., 2009). Other biological activities of this plant have also been investigated, including antioxidant activities and protective effects against DNA damage (Saenjum et al., 2010).
Licorice, Glycyrrhiza glabra L., has been used as a traditional medicine, particularly as an expectorant to treat sore throats and coughs. The licorice plant has a sweet taste due to glycyrrhizin and its derivatives. It has been tested for its anti-viral properties against several viruses, including Newcastle disease virus, SARS-CoV-2, human immunodeficiency virus (HIV), Herpes simplex virus (HSV), Japanese encephalitis virus (JEV), West Nile virus, Sindbis, adenoviruses, Coxsackie viruses, and Influenza A/H1N1/pdm09 virus. Moreover, licorice extract has been shown to inhibit Candida albicans, Lactobacillus casei, and Lactobacillus acidophilus (Sirilun et al., 2018).
Conclusion
We analyzed over one thousand traditional formulations made from various medicinal plants to find those that might offer antiviral properties. Previous scientific research supported the antiviral properties of many of these formulations. As some of the documents studied were over 100 years old, we were unable to identify some of the plants referenced, as old names could not always be matched to their scientific names. Only a few of the plants we identified based on their traditional uses as possibly offering antiviral properties have been investigated both in vitro and in vivo. Further studies are needed to identify their active components and mechanisms of action.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
The formation of disc galaxies in a LCDM universe
We study the formation of disc galaxies in a fully cosmological framework using adaptive mesh refinement simulations. We perform an extensive parameter study of the main subgrid processes that control how gas is converted into stars and the coupled effect of supernovae feedback. We argue that previous attempts to form disc galaxies have been unsuccessful because of the universal adoption of strong feedback combined with high star formation efficiencies. Unless extreme amounts of energy are injected into the interstellar medium during supernovae events, these star formation parameters result in bulge dominated S0/Sa galaxies as star formation is too efficient at z~3. We show that a low efficiency of star-formation more closely models the subparsec physical processes, especially at high redshift. We highlight the successful formation of extended disc galaxies with scale lengths r_d=4-5 kpc, flat rotation curves and bulge to disc ratios of B/D~1/4. Not only do we resolve the formation of a Milky Way-like spiral galaxy, we also observe the secular evolution of the disc as it forms a pseudo-bulge. The disc properties agree well with observations and are compatible with the photometric and baryonic Tully-Fisher relations, the Kennicutt-Schmidt relation and the observed angular momentum content of spiral galaxies. We conclude that underlying small-scale star formation physics plays a larger role than previously considered in simulations of galaxy formation.
INTRODUCTION
The prevailing picture of galaxy formation emerged more than 30 yr ago (White & Rees 1978; Fall & Efstathiou 1980). Within the framework of the broadly accepted Λ Cold Dark Matter (ΛCDM) scenario (Komatsu et al. 2009), gravity assembles structures in a bottom-up fashion. Haloes of dark matter acquire angular momentum via tidal torques (Peebles 1969; Fall & Efstathiou 1980) from interacting structures, and as gas cools and condenses into their central parts, star-forming galaxies form. A realistic angular momentum content can be accounted for if most of the angular momentum is retained in the assembly process. In this picture, the host halo is responsible for the final galaxy characteristics (e.g. Mo et al. 1998). While several aspects of the theory of galaxy formation are still being developed, e.g. the underlying physics of the missing satellite problem (Klypin et al. 1999; Moore et al. 1999) and the role of cold stream accretion (Kereš et al. 2005, 2009; Dekel et al. 2009), the model has proven successful for understanding global properties of galaxy assembly. Given the complexity and non-linearity of the involved processes, computer simulations have become the ideal tool for studying the formation of structure. The formation of a late-type spiral galaxy, such as our own Milky Way, has been studied numerically in a fully ΛCDM cosmological context by many authors (e.g. Abadi et al. 2003b; Sommer-Larsen et al. 2003; Governato et al. 2004; Robertson et al. 2004; Okamoto et al. 2005; Governato et al. 2007; Croft et al. 2009; Scannapieco et al. 2009; Piontek & Steinmetz 2009b). To date, no attempt has yielded a realistic candidate. The dominant reason for this is the so-called "angular momentum problem", which leads to small, centrally concentrated discs dominated by large bulges (Navarro & Benz 1991; Navarro & White 1994).
Merging substructures lose angular momentum to the outer halo via dynamical friction, forcing the associated baryons to end up in the central parts of the proto-galaxy as a spheroid rather than a disc. This poses a problem for the theoretical understanding of extended late-type galaxies. It might in part stem from numerical issues: the commonly used Smoothed Particle Hydrodynamics (SPH) technique (Gingold & Monaghan 1977; Lucy 1977) is known to treat fluid boundaries incorrectly, and hence to handle multiphase fluids poorly (e.g. Agertz et al. 2007; Read et al. 2010). This can lead to artificial angular momentum transfer at the interface between a cold disc and a hot halo (Okamoto et al. 2005).
Many proposed solutions exist to the angular momentum problem, all amounting to the same process: keep the gas from cooling and forming stars too efficiently in the merging dark matter satellites at high redshift. One natural source is the cosmological UV background, responsible for reionization at z ≳ 6, which heats the gas, preventing it from cooling efficiently into star-forming dwarf galaxies (Thoul & Weinberg 1996; Quinn et al. 1996; Gnedin 2000; Hoeft et al. 2006). However, the impact on objects with circular velocities larger than vcirc ∼ 10 km s^−1 is unclear due to e.g. self-shielding and efficient collisional cooling (Dijkstra et al. 2004).
Gas in low-mass haloes can also be blown out by supernova-driven winds (Dekel & Silk 1986; Efstathiou 2000), hence lowering the resulting star formation efficiency (SFE) and enriching the intergalactic medium (IGM) in the process. Mac Low & Ferrara (1999) demonstrated that while dwarf galaxies of mass 10^6−10^9 M⊙ can efficiently expel metals in supernova-driven winds, virtually no mass is lost for systems of mass ≳ 10^7 M⊙ (see also Dubois & Teyssier 2008). The inefficiency in driving winds from dwarfs was also reported by Marcolini et al. (2006), who attributed this to the extended dark matter halo and efficient metal cooling. In this scenario, mass-loss and IGM enrichment will occur due to tidal and ram-pressure stripping (e.g. Mori & Burkert 2000). Phenomenological models of e.g. momentum-driven winds have proven successful in reproducing the high-z IGM (Oppenheimer & Davé 2006), but it is uncertain how this regulates star formation and in what manner the expelled gas is re-accreted at later times (Oppenheimer et al. 2010).
Various recipes of supernova feedback have been developed for numerical simulations (e.g. Navarro & White 1993; Kay et al. 2002; Scannapieco et al. 2006), and the methods have proven successful in removing low angular momentum material from the central parts of galaxies (e.g. Sommer-Larsen et al. 2003; Okamoto et al. 2005; Governato et al. 2007), yielding more extended galaxies in comparison to models without feedback. However, it is unclear to what extent this way of reducing star formation can account for disc-dominated spiral galaxies like the Milky Way. Recently, Scannapieco et al. (2009) demonstrated numerically, in a fully cosmological setting, how a set of 8 Milky Way-sized haloes failed to form significant discs. While half of the sample were early-type galaxies resulting from late-time mergers, the other half of the sample had less than 20 per cent of their stellar mass in discs. This can be a result of the inability of the adopted feedback to remove or redistribute low angular momentum material, but is also a strong indication that something else might regulate star formation at high redshift. On the same topic, Sawala et al. (2010) argue that modern simulations of dwarf galaxy formation (Valcke et al. 2008; Stinson et al. 2009; Governato et al. 2010) all yield much larger stellar masses than expected from observations, as well as gas-to-star conversion efficiencies almost an order of magnitude too large. Dutton & van den Bosch (2009) found that, for SN feedback to yield realistic galaxies, it must be very efficient, converting 25 per cent of the SN energy into outflows. If too strong feedback is employed, the discs can be destroyed by internal processes as too much material is ejected into the halo, preventing efficient disc re-formation from cold gas, and possibly violating the upper bounds on halo gas found in X-ray surveys [see Bregman (2007) and references therein].
In light of these studies, it is unclear if supernovae feedback is the sole agent in regulating star formation. Note that SNe explosions can regulate star formation in galaxies without expelling gas, being a driver of galactic turbulence (Mac Low & Klessen 2004).
Fundamentally, star formation is regulated by the availability of H2. The observed Kennicutt-Schmidt (from now on K-S) relation (Kennicutt 1998), which relates ΣSFR to Σgas, varies strongly among individual spiral galaxies and cannot be fit with a single power law. ΣSFR behaves very differently for Σgas greater or smaller than ≈ 9 M⊙ pc^−2, marking the transition from atomic to fully molecular star-forming gas, and is dependent on gas metallicity, dust content, turbulence, small-scale clumpiness and the local dissociating UV field (McKee & Ostriker 2007). The inclusion of these processes and their impact on global star formation in discs has recently been studied both numerically (Robertson & Kravtsov 2008; Gnedin et al. 2009; Pelupessy & Papadopoulos 2009) and analytically (e.g. Krumholz et al. 2009). A natural outcome of this treatment is an order of magnitude lower amplitude of the K-S relation at high redshifts (z ∼ 3) (Gnedin & Kravtsov 2010b). This agrees well with observations of damped Lyα systems (DLA: Wolfe & Chen 2006) as well as Lyman break galaxies (Rafelski et al. 2009). This indicates that star formation can be made inefficient at high redshift, leaving gas for late-time star formation in a disc-like environment, but not necessarily by expelling gas in supernova-driven winds. In addition, Murray et al. (2010) argue that the disruption time-scale of giant molecular clouds (GMCs) due to jets, H II gas pressure, and radiation pressure also serves to regulate the SFE in galaxies. The disruption occurs well before the most massive stars exit the main sequence, meaning that supernovae in principle have little effect on GMC lifetimes.
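As a numerical illustration of the slope argument above, here is a short sketch of the single power-law Kennicutt (1998) fit. The normalization a = 2.5e-4 and index n = 1.4 are the commonly quoted global best-fit values; they are assumptions of this sketch, not taken from the text.

```python
# Sketch of the single power-law Kennicutt (1998) fit:
# Sigma_SFR = a * Sigma_gas^n, with Sigma_gas in Msun/pc^2 and
# Sigma_SFR in Msun/yr/kpc^2 (a and n assumed, see lead-in).

def sigma_sfr_kennicutt(sigma_gas, a=2.5e-4, n=1.4):
    """Star formation rate surface density for a given gas surface density."""
    return a * sigma_gas ** n

# Evaluate either side of the ~9 Msun/pc^2 atomic-to-molecular transition:
below = sigma_sfr_kennicutt(3.0)
above = sigma_sfr_kennicutt(30.0)
# A tenfold increase in Sigma_gas raises Sigma_SFR by 10^1.4 ~ 25x;
# a single slope cannot also capture the sharp drop below the transition.
print(below, above)
```

This makes the text's point concrete: one power law forces the same scaling on both sides of the transition, whereas observed discs steepen sharply in the atomic-dominated regime.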
In this paper we investigate to what extent supernovae feedback and the underlying small scale star-forming physics can affect the formation and evolution of realistic spiral galaxies in a fully cosmological setting. The former effect is studied via well tested numerical implementations of SNII, SNIa feedback coupled to metal enrichment, as well as stellar mass-loss. The latter influence is achieved by considering different normalizations of the Schmidt-law star formation efficiency. We conduct a comprehensive analysis of the resulting z = 0 discs and compare them to observational relations.
The paper is organized as follows. In Section 2, we describe the numerical method used in this work, including the adopted feedback and star formation prescriptions. In Section 3, we present the cosmological initial conditions and discuss the free parameters of this work. Section 4 outlines the disc analysis and summarizes the final properties of the simulation suite. In Section 5 and Section 6, we present a detailed analysis of the impact of small-scale SFE and supernova feedback respectively. In Section 7 we compare our simulations to modern observations. Finally, Section 8 summarizes and discusses our conclusions.
NUMERICAL FRAMEWORK
We use the Adaptive Mesh Refinement (AMR) code RAMSES (Teyssier 2002) to simulate the formation of a massive disc galaxy in a cosmological context including dark matter, gas and stars. The gas dynamics is calculated using a second-order unsplit Godunov method, while collisionless particles (including stars) are evolved using the particle-mesh technique. The equation of state of the gas is that of a perfect mono-atomic gas with an adiabatic index γ = 5/3. Self-gravity of the gas is calculated by solving the Poisson equation using the multi-grid method (Brandt 1977) on the coarse grid and by the conjugate gradient method on finer ones. The modelling includes realistic recipes for star formation (Rasera & Teyssier 2006), supernova feedback and enrichment (Dubois & Teyssier 2008). Details on these implementations are given below. Metals are advected as a passive scalar and are incorporated self-consistently in the cooling and heating routine. The code adopts the cooling function of Sutherland & Dopita (1993) for cooling at temperatures 10^4−10^8.5 K. We extend cooling down to 300 K using rates from Rosen & Bregman (1995). Gas metallicity is also accounted for in the cooling routines. A UV background is considered using the prescription of Haardt & Madau (1996). In order to model a subgrid gaseous equation of state, hence avoiding artificial gas fragmentation, the gas is given a polytropic equation of state for densities larger than ρ0. Throughout this paper we adopt T0 = 1000 K and γ0 = 2.0. In this work, the polytrope density is set equal to the star formation threshold n0. We adopt an initial metallicity of Z = 10^−4 Z⊙ in the high-resolution region. This also serves as a flag for allowed regions of refinement. The refinement strategy is based on a quasi-Lagrangian approach, so that the number of particles per cell remains roughly constant, avoiding discreteness effects (e.g. Romeo et al. 2008).
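The polytropic pressure floor described above can be sketched as a simple per-cell temperature floor. Only T0 = 1000 K and γ0 = 2 come from the text; the functional form T = T0 (n/n0)^(γ0−1) and the threshold value n0 = 0.1 cm^-3 are assumptions of this sketch, and the exact RAMSES form may differ.

```python
def temperature_floor(n_h, n0=0.1, t0=1000.0, gamma0=2.0):
    """Polytropic temperature floor T = T0 * (n/n0)^(gamma0 - 1) [K],
    active only above the star formation threshold n0 [cm^-3].
    T0 and gamma0 are from the text; n0 and the functional form are
    illustrative assumptions."""
    if n_h <= n0:
        return 0.0   # no floor imposed in diffuse gas
    return t0 * (n_h / n0) ** (gamma0 - 1.0)
```

With γ0 = 2 the floor temperature grows linearly with density, so the Jeans length stays roughly constant with density and artificial fragmentation below the grid scale is suppressed.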
Star formation
To model the conversion of gas into stars we adopt a Schmidt law (Schmidt 1959) of the form

$$\dot{\rho}_* = \epsilon_{\rm ff}\,\frac{\rho_{\rm g}}{t_{\rm ff}} \quad {\rm for}\ \rho_{\rm g} > \rho_0,$$

where ρg is the gas density, $t_{\rm ff} = \sqrt{3\pi/32G\rho}$ is the local free-fall time, ǫff is the star formation efficiency per free-fall time and ρ0 is the threshold for star formation. As soon as a cell is eligible for star formation, particles are spawned using a Poisson process where the stellar mass, m*, is chosen to be a multiple of ρ0∆x^3. Each formed star particle is treated as one stellar population with an associated initial mass function (IMF). This is a relevant approximation as the star particle masses are orders of magnitude larger than the average stellar masses. We also ensure that no more than 90 per cent of the gas in a cell is depleted by the star formation process.
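A minimal sketch of the recipe just described: a Schmidt-law rate above a density threshold, Poisson sampling in units of ρ0∆x^3, and the 90 per cent depletion cap. Units are abstract (G = 1 by assumption); this illustrates the recipe, not the actual RAMSES implementation.

```python
import math
import random

def free_fall_time(rho_g, g=1.0):
    """Local free-fall time t_ff = sqrt(3*pi / (32*G*rho)); G = 1 here,
    i.e. abstract code units (an assumption of this sketch)."""
    return math.sqrt(3.0 * math.pi / (32.0 * g * rho_g))

def poisson_draw(lam, rng):
    """Knuth's algorithm; adequate for the small means of one cell/step."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def star_formation_step(rho_g, dt, dx, rho0, eps_ff, rng):
    """Stellar mass formed in one cell of size dx over a timestep dt,
    following rho_dot_star = eps_ff * rho_g / t_ff above the threshold
    rho0. Star particles come in multiples of m_star = rho0 * dx**3 via
    a Poisson process, and at most 90 per cent of the cell gas is used."""
    if rho_g < rho0:
        return 0.0
    m_unit = rho0 * dx**3                 # star particle mass quantum
    mean_mass = eps_ff * rho_g / free_fall_time(rho_g) * dt * dx**3
    n = poisson_draw(mean_mass / m_unit, rng)
    return min(n * m_unit, 0.9 * rho_g * dx**3)
```

With ǫff of a few per cent and densities just above the threshold, most timesteps form no particle at all, which is the point of the Poisson quantization.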
The ρ0 and ǫ ff parameters in Eq. 2 are, in addition to being unconstrained physical parameters, resolution and hence scale dependent. There are in a sense two regimes of star formation in global simulations of disc galaxies as follows.
(i) The ISM is resolved: at parsec-scale resolution, star formation occurs in its natural sites, i.e. massive clouds such as GMCs. Modern estimates of star formation efficiencies by Krumholz & Tan (2007) point towards values of ǫff = 1−2 per cent at densities of n ∼ 10^2−10^5 cm^−3. Allowing star formation to take place only in the actual physical star formation sites, hence tracing the formation of H2, allows for more accurate predictions of e.g. the Kennicutt-Schmidt star formation relation with less need to tune numerical parameters (see e.g. Gnedin et al. 2009). This treatment leads to ρg → ρH2 in Eq. 2, which is equivalent to ǫff being dependent on the environment, due in part to the local H2 fraction. On galactic scales, this means that the scale height of all ISM components is resolved using at least 10 resolution elements (Romeo 1994). If this is not satisfied, the true disc stability will not be modelled accurately. This treatment is the goal of most simulations but, owing to the computational load, is beyond the capabilities of modern simulations attempting to study the assembly and evolution of large spiral galaxies to z = 0. Isolated simulations of large spiral galaxies in a non-cosmological setting have successfully reached this resolution (e.g. Tasker & Tan 2009), albeit with simplified physics. As the star formation sites become resolved, new physics becomes important, e.g. radiative feedback, in order to accurately treat the lifetimes of GMC structures (Murray et al. 2010).
(ii) The ISM is under-resolved: To radially resolve a Milky Way like galactic disc, i.e. sampling the scale radius with at least 10 resolution elements, a force and hydro resolution of a few 100 pc is necessary. At this resolution the scale height is captured with more or less one resolution element. The true disc stability can be affected as both the density and velocity structure (gas and stellar dispersion) are influenced numerically. This still allows for the disc to have the correct global properties such as gas and stellar mass compositions, thin and thick disc, and even to develop realistic spiral structure. In this case a statistical star formation recipe based on the local gas density and free-fall time is well motivated both theoretically and observationally.
As we will describe in Section 3.1, we are targeting the latter regime of subgrid star formation and will investigate some of the numerical caveats related to it. At resolutions of several 100 pc, the ǫff parameter absorbs the small-scale physics regulating star formation, allowing for a qualitative influence on galaxy formation. We note that there are alternative formulations of gas-to-star conversion laws in the literature (see e.g. Leroy et al. (2008) for a comprehensive summary). However, as they are all designed to fit an observed relation, and all include a normalization constant similar to ǫff, we believe Eq. 2 is a representative choice for this study.
Supernovae and stellar feedback
The standard recipe for supernova feedback in RAMSES involves only Type II supernovae events (SNII). We have also implemented additional treatment of Type Ia events (SNIa) as well as mass-loss via stellar winds. Including all of these effects is important as a single stellar population can return up to 30 − 40 per cent of its mass to the ISM during its lifetime. The implementations are as follows.
Type II
Type II SN events arise from stars of mass 8−40 M⊙, which represent ∼10 per cent of the mass of a stellar population, regardless of IMF. We assume that 10 Myr after a star particle is formed, 10 per cent of the star particle's mass is injected into the nearest gas cell together with a total energy of ESNII = 10^51 (m_ejecta/10 M⊙) erg in thermal form. At low resolution, this energy would quickly radiate away (Katz 1992) in the dense gas, without allowing for an adiabatic expansion of the supernova blast wave (McKee & Ostriker 1977). To remedy this, we turn off cooling in cells containing young stars to allow the blast wave to grow and be resolved by a few cells, hence converting thermal energy into PdV work (see e.g. Gerritsen 1997). In detail, for every star formation event, the inverse of the birth time, 1/t_bt, is stored in the computational grid, overwriting any previous value. This passive scalar field is advected with the hydro flow (the conserved quantity is ρ_gas/t_bt). Gas cooling at a simulation time t_sim is only allowed if t_sim − t_bt > ∆t_off, where ∆t_off is the cooling shut-off time-scale. Calculations of relevant time-scales and numerical tests using the SPH formalism were carried out by Stinson et al. (2006). Relevant time-scales are of the order of tens of Myr and we adopt ∆t_off = 50 Myr.
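The delayed-cooling bookkeeping above reduces to a simple per-cell check. The function below is a sketch of that logic (times in Myr, 50 Myr shut-off as in the text), not the actual RAMSES code; representing "no star formed yet" with None is this sketch's choice.

```python
def cooling_allowed(t_sim, t_bt, dt_off=50.0):
    """Per-cell delayed-cooling check: radiative cooling is suppressed
    until dt_off has elapsed since the most recent local star formation
    event. t_bt is the stored birth time of the youngest star affecting
    the cell, or None if no star has formed there."""
    if t_bt is None:
        return True
    return t_sim - t_bt > dt_off
```

Because the stored quantity is advected with the flow, gas that moves away from a young star carries its suppression flag with it until the shut-off time expires.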
Type Ia
We treat SNIa and stellar mass-loss using the prescription outlined in Raiteri et al. (1996). The assumed IMF is the parametrization by Kroupa et al. (1993), which for a star particle of mass m* reads

$$\Phi(M) = A\,M^{-\alpha}, \qquad \alpha = \begin{cases} 1.3, & 0.08 \leq M < 0.5, \\ 2.2, & 0.5 \leq M < 1.0, \\ 2.7, & M \geq 1.0, \end{cases}$$

where M is here the stellar mass in units of M⊙ and the normalization constant A ≈ 0.3029. The adopted lower and upper limits are 0.08 M⊙ and 100 M⊙, respectively. At each simulation time-step, we calculate the mass fraction of each star particle ending its H and He burning phase, i.e. leaving the main sequence, using the fit

$$\log t_* = a_0(Z) + a_1(Z)\log M + a_2(Z)(\log M)^2,$$

where t* is the lifetime of the star and Z the metallicity. The adopted coefficients and references to the original data can be found in Raiteri et al. (1996). Progenitors of SNIa are carbon plus oxygen white dwarfs that accrete mass from binary companions. Stellar evolution theory predicts the binary masses to be in the range of ∼3−16 M⊙. The number of SNIa events within a star particle, at a given simulation time with an associated time-step ∆t, is

$$N_{\rm SNIa} = \int_{m_{t+\Delta t}}^{m_t} \hat{\Phi}(M_2)\,{\rm d}M_2,$$

where m_t and m_{t+∆t} delimit the mass interval of stars ending their life during the computational time-step and $\hat{\Phi}(M_2)$ is the IMF of the secondary star, i.e.

$$\hat{\Phi}(M_2) = \int_{M_{\rm inf}}^{M_{\rm sup}} f\!\left(\frac{M_2}{M_B}\right)\Phi_{A'}(M_B)\,\frac{{\rm d}M_B}{M_B}, \qquad f(\mu) = 24\,\mu^2,$$

where MB is the mass of the binary, M_inf = max(2M2, 3 M⊙) and M_sup = M2 + 8 M⊙, and $\Phi_{A'}$ denotes the IMF above with A replaced by A' = 0.16A, a value calibrated against SNIa events in our Galaxy (van den Bergh & McClure 1994). Each explosion is assumed to release 10^51 erg (released as thermal energy in the nearest gas cell) and 0.76 M⊙ of metal-enriched material (0.13 M⊙ of ^{16}O and 0.63 M⊙ of ^{56}Fe) (Thielemann et al. 1986).
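The IMF ingredients above can be sketched numerically. The multi-slope power law below uses the Kroupa et al. (1993) slopes (1.3, 2.2, 2.7); the continuity factors at the mass breaks are this sketch's reconstruction, with only the limits and A ≈ 0.3029 taken from the text. A crude integral then checks the quoted ∼10 per cent mass fraction in SNII progenitors.

```python
def kroupa_imf(m, a=0.3029):
    """Multi-slope power-law IMF Phi(M) = A * M^-alpha (number per unit
    mass, M in Msun) with the Kroupa et al. (1993) slopes. Continuity
    factors at 0.5 and 1.0 Msun are assumptions of this sketch."""
    if m < 0.08 or m > 100.0:
        return 0.0
    c2 = 0.5 ** (2.2 - 1.3)    # continuity at M = 0.5 Msun
    if m < 0.5:
        return a * m ** -1.3
    if m < 1.0:
        return a * c2 * m ** -2.2
    return a * c2 * m ** -2.7  # continuous at M = 1.0 since both pieces match there

def mass_fraction(lo, hi, n=20000):
    """Trapezoidal integral of M * Phi(M) over [lo, hi], normalized by
    the full 0.08-100 Msun range: the mass fraction of a stellar
    population born in that interval."""
    def integral(x0, x1):
        h = (x1 - x0) / n
        s = 0.5 * (x0 * kroupa_imf(x0) + x1 * kroupa_imf(x1))
        for i in range(1, n):
            x = x0 + i * h
            s += x * kroupa_imf(x)
        return s * h
    return integral(lo, hi) / integral(0.08, 100.0)

# Mass fraction locked in 8-40 Msun SNII progenitors: about 0.07 for
# these assumed slopes, of the order of the ~10 per cent quoted above.
print(mass_fraction(8.0, 40.0))
```

The exact fraction depends on the assumed break coefficients and mass limits, but any IMF of this family puts of order ten per cent of the stellar mass into SNII progenitors, as the text states.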
Stellar mass-loss
For each time-step ∆t and star particle, we calculate the average stellar mass, $\bar{M}$, exiting the main sequence using Eq. 3. The mass-loss during the ∆t time-span is calculated using the best-fit initial-final mass relation of Kalirai et al. (2008),

$$w(\bar{M}) = 0.394 + 0.109\,\bar{M},$$

with masses in units of M⊙. At each time interval ∆t, the total mass-loss in winds is

$$\Delta m_{\rm loss} = \left(1 - \frac{w(\bar{M})}{\bar{M}}\right)\Delta m_{\rm MS},$$

where ∆m_MS is the mass in stars leaving the main sequence during the time-step. The lost stellar mass enters the gaseous mass in the nearest cell and the gas metallicity is updated consistently with the star particle's metallicity.
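A sketch of the mass-return step, assuming the Kalirai et al. (2008) linear initial-final mass relation m_f = 0.109 m_i + 0.394 M⊙ (the published best-fit coefficients, stated here as an assumption); the per-timestep bookkeeping over the full mass interval leaving the main sequence is omitted.

```python
def final_mass(m_init):
    """Kalirai et al. (2008) best-fit initial-final mass relation for
    white dwarf remnants: m_f = 0.109 * m_i + 0.394 (masses in Msun).
    Coefficients assumed, not quoted from the text."""
    return 0.109 * m_init + 0.394

def wind_mass_loss(m_exit):
    """Mass returned to the ISM by a star of initial mass m_exit leaving
    the main sequence: everything above the remnant mass."""
    return m_exit - final_mass(m_exit)
```

A 1 M⊙ star leaves a ∼0.5 M⊙ remnant, while more massive stars return most of their mass, which is why a population can return 30−40 per cent of its mass over its lifetime.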
INITIAL CONDITIONS AND SIMULATION SUITE
The initial conditions used in this work are a subset of the Silver River simulation suite (Potter et al., in preparation), aimed at studying the pure dark matter assembly history of a Milky Way-sized halo in much greater detail than here. We adopt a WMAP5 (Komatsu et al. 2009) compatible cosmology, i.e. a ΛCDM Universe with ΩΛ = 0.73, Ωm = 0.27, Ωb = 0.045, σ8 = 0.8 and H0 = 70 km s^−1 Mpc^−1. The upcoming work by Potter et al. will present the details concerning the initial condition generation. Briefly, a pure dark matter simulation was performed using a simulation cube of size Lbox = 179 Mpc. At z = 0, a halo of mass M200,c ≈ 9.7 × 10^11 M⊙ was selected for re-simulation at high resolution, and traced back to the initial conditions at z = 133. M200,c is the virial mass of the halo, defined as the mass enclosed in a sphere with mean density 200 times the critical value. The corresponding virial radius is r200,c = 205 kpc. Using a definition based on 200 times the background density we obtain M200,bg = 1.25 × 10^12 M⊙ and r200,bg = 340 kpc. When baryons are included in the simulations, the final total halo mass remains roughly the same.
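The quoted virial quantities can be cross-checked: with ρ_crit = 3H0²/8πG and the stated H0, the radius of a sphere enclosing M200,c at 200 times the critical density should come out near the quoted 205 kpc. A sketch (the value of G in these units is an assumption of the sketch):

```python
import math

# Assumed constants: G in Mpc (km/s)^2 / Msun and the stated H0.
G = 4.301e-9
H0 = 70.0
RHO_CRIT = 3.0 * H0**2 / (8.0 * math.pi * G)   # critical density [Msun / Mpc^3]

def r_overdensity(m, delta, rho_ref):
    """Radius of the sphere of mass m whose mean density is delta * rho_ref."""
    return (3.0 * m / (4.0 * math.pi * delta * rho_ref)) ** (1.0 / 3.0)

r200c_kpc = 1e3 * r_overdensity(9.7e11, 200.0, RHO_CRIT)
print(round(r200c_kpc))  # ~204, consistent with the quoted r200,c = 205 kpc
```

The same formula with 200 Ωm ρ_crit as the reference density reproduces the background-density radius r200,bg ≈ 340 kpc from M200,bg.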
The halo has a quiet merger history, i.e. it undergoes no major merger after z = 1, which favours the formation of a late-type galaxy. A nested hierarchy of initial conditions for the dark matter and baryons was generated using the GRAFIC++ code, where we allow the high-resolution particles to extend to 3 virial radii from the centre of the halo at z = 0. This avoids mixing of different-mass dark matter particles in the inner parts of the domain. In this work, we focus on two sets of resolutions from the Silver River suite, referred to as SR5 and SR6. The simulations are identical apart from the number of dark matter particles, and hence the particle mass, as well as the maximal AMR refinement. In SR6 the dark matter particle mass is mDM = 2.5 × 10^6 M⊙ and in SR5 mDM = 3.2 × 10^5 M⊙. The mesh is refined if a cell contains more than eight dark matter particles, and a similar criterion is employed for the baryonic component.
At the maximum level of refinement, the simulations reach a physical resolution of ∆x = 170 pc and ∆x = 340 pc in SR5 and SR6, respectively.
The free parameters
The goal of this work is to study how the characteristics of disc galaxies change when standard numerical parameters governing star formation are modified. Following the discussion in Section 1, we consider star formation regulation in two different ways: small scale (∼ 100 pc) physics such as H2 abundance, GMC turbulence, metallicity, radiative effects etc., or via energy injection from supernovae explosions leading to gas expulsion in galactic winds. The first mechanism is modelled by varying the Schmidt-law (Eq. 2) SFE, ǫ ff , which acts on a cell-by-cell basis. The latter is studied by increasing the injected SNII energy, ESNII. In addition we study the impact of the star formation threshold n0, but in lesser detail.
The traditional way of treating star formation in simulations of galaxy formation (e.g. Governato et al. 2007; Piontek & Steinmetz 2009b) is to tune the SFE parameter using an isolated disc model to match the observed K-S relation, most commonly the fitting formula for z = 0 galaxies given by Kennicutt (1998). In addition, the recipe for energy injection via supernovae, and its efficiency, is tuned simultaneously. These parameters are then used in fully cosmological simulations of galaxy formation. This scheme assumes that supernova explosions are the main source of star formation regulation at high redshift. As argued in Section 1, the numerically assumed constant efficiency is strongly redshift dependent, and a z = 0 tuning is likely to over-predict star formation in the more metal-poor environments at higher redshift. We treat ǫ ff as a free, but constant, parameter and adopt ǫ ff = 1, 2 or 5 per cent in the fully cosmological context. These values are in agreement with GMC estimates from Krumholz & Tan (2007). As we will demonstrate below, values lower than those traditionally adopted are preferred in order to form late-type galaxies. Note that ǫ ff ≈ 2c*, where c* is the efficiency parameter used in e.g. Governato et al. (2007) and Scannapieco et al. (2009). Values of c* = 0.05−0.1 are commonly employed, i.e. a few times, up to an order of magnitude, larger than what we consider here. The standard SNII feedback described in Section 2.2.1 is the baseline feedback in all of our simulations. In a subset of simulations we add the additional stellar mass-loss and SNIa treatment. The high-efficiency simulation (ǫ ff = 5 per cent) is used as a template for the impact of the injected SNII feedback energy, which we set to ESNII = 1, 2 and 5 × 10^51 erg. Using energies several times larger than the canonical 10^51 erg might be perceived as unrealistic, but we believe it is illustrative to study the extreme cases of this type of feedback.
In addition, the amount of SNII energy dissipated in cooling, after the shut-off time has passed, is complicated to measure. As a control set we also run the simulations without feedback, both with and without metal enrichment.
The philosophy of the star formation threshold is as follows. In reality, stars form in molecular clouds with average densities of n > 10^2 cm−3. Imposing a threshold of this magnitude would require a resolution of the order of parsecs to resolve the formation of the star-forming clouds, something that is beyond the scope of fully cosmological hydro+N-body simulations today (but see Gnedin et al. 2009). We adopt n0 = 0.1 and 1 cm−3 for each setting of ǫ ff, but the appropriate choice is fundamentally tied to resolution and can lead to spurious results. The ISM has been shown to be well represented by a lognormal density probability distribution function (PDF) (e.g. Kravtsov 2003; Wada & Norman 2007), or even a superposition of several lognormally distributed ISM phases (Robertson & Kravtsov 2008). The amount of gas eligible for star formation is represented by the high-density part of the PDF, which in turn is a function of the total disc gas mass and turbulence. A density threshold should be picked such that the high-density star-forming part of the PDF is well resolved or, at least, contains a converged amount of star-forming mass given the adopted numerical resolution. If not, the chosen threshold will affect the numerical efficiency of global star formation. We will demonstrate this effect below.
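The argument about resolving the high-density tail can be made quantitative with a toy model. The sketch below (not from the paper) computes the mass fraction of a lognormal density PDF lying above a threshold, assuming the common convention that the volume-weighted PDF of s = ln(n/n̄) is Gaussian with mean −σ²/2, so that mass weighting shifts the mean to +σ²/2; the width σ and the mean density are free illustrative inputs.

```python
import math

def mass_fraction_above(n_thresh, n_mean, sigma_s):
    """
    Mass fraction of gas above density n_thresh for a lognormal density PDF.
    The volume-weighted PDF of s = ln(n/n_mean) is assumed Gaussian with mean
    -sigma_s^2/2; mass weighting shifts the mean to +sigma_s^2/2, and the
    fraction above s_t = ln(n_thresh/n_mean) is the Gaussian tail integral.
    """
    s_t = math.log(n_thresh / n_mean)
    return 0.5 * math.erfc((s_t - 0.5 * sigma_s ** 2) / (math.sqrt(2.0) * sigma_s))
```

With this toy PDF, lowering the threshold from 1 cm−3 to 0.1 cm−3 at a fixed mean density sharply increases the mass fraction eligible for star formation, mimicking the resolution-dependent behaviour discussed above.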
In summary, the varied parameters of interest here are the star formation threshold (n0), the star formation efficiency per free-fall time (ǫ ff) and the form of the supernova feedback and injected energy (ESNII). Our main focus is the impact of these parameters at the SR6 level of resolution, and we present a brief resolution study in Appendix A. We summarize our complete test suite in Table 1.
THE DISCS
In this work, we focus primarily on the disc properties in the SR6 simulations at z = 0. Details of the satellite galaxies and halo properties will be considered in future work. Figs 1 and 2 show projected face-on and edge-on stellar and gas density maps at z = 0 for the galaxies in the star formation efficiency and feedback test suites, respectively. The discs show a wide range of spiral galaxy morphologies, and we will return to this point in Section 5.4. We decompose the resulting stellar discs into a bulge, bar and disc component and fit these simultaneously to the stellar surface density profile. The latter is calculated using all stars out to a height of |z| = 2.5 kpc. For the bulge and disc components we assume exponential profiles, i.e.
Σ(r) = Σ0 exp(−r/r_d), (9)

where we fit for Σ0 and the scale radius r_d. The bar component is modelled using a simple Gaussian,

Σ_bar(r) = A0 exp[−(r − r0)^2 / (2σ^2)], (10)

where we fit for the width σ, the central point r0 and the amplitude A0. We consider this a conservative estimate of the bar mass, as the Gaussian contribution falls off towards the centre of the disc, leaving more mass to be accounted for by the bulge. An example of the fitting procedure can be seen in Fig. 3. The necessity of a separate bar component is there clearly illustrated. In the more bulge-dominated cases the bar amplitude is decreased considerably, owing to the weaker disc self-gravity.

Table 1. Summary of the numerical parameters. The simulations use a maximum physical cell resolution of ∆x = 340 pc (SR6) or ∆x = 170 pc (SR5), and the high-resolution region is occupied by dark matter particles of mass mDM = 2.5 × 10^6 M⊙ (SR6) or mDM = 3.25 × 10^5 M⊙ (SR5). All simulations use delayed cooling in regions of young stars, unless specified. When SNII feedback is used, ESNII = 10^51 erg, unless other values are indicated.

Table 2. Summary of disc characteristics at z = 0. The masses of the components are obtained by fitting the stellar surface density (see text), and are in units of 10^10 M⊙. Note that we consider all gas phases for the gas mass and all stars for the stellar masses. (1) Fitted scalelength of the stellar disc. Large uncertainties exist for r_d > 10 kpc as the stellar discs are small and feature almost flat stellar surface density profiles. (3) Total measured specific angular momentum of the baryons in the disc and bulge in units of km s−1 kpc.

Figure 1. Projected face-on and edge-on surface density maps of the stars (top) and gas (bottom) of the z = 0 discs, where each panel is 60 kpc across. As the SFE is lowered and mass-loss is employed, spiral structure becomes more pronounced due to a less massive bulge. The Hubble type of the disc changes from an early-type (S0 or Sa) disc to a late-type spiral galaxy (Sb or Sbc) as we decrease ǫ ff.
The bulge mass, M_bulge = 2πΣ_bulge r_bulge^2, is obtained by integrating Eq. 9. A similar relation holds for the disc, where we also include the stellar disc mass past the break radius in the quoted disc stellar mass, M_disc,s, but only use the data within the break when fitting (see Fig. 3). The bar mass is simply the integrated mass from Eq. 10, and we consider the bar part of the disc component and include it in the quoted M_disc,s. Doing this or not modifies the disc mass only slightly, especially in the bulge-dominated galaxies. The gas is treated as a single component, and we simply consider the mass within r = 15 kpc and |z| = 2.5 kpc as the gaseous disc mass, M_disc,g. We consider only the stars when calculating the bulge-to-disc (B/D) and bulge-to-total (B/T) ratios. All measured and derived quantities are summarized in Table 2. We note that this method of defining galactic components in simulations, as well as others, e.g. via angular momentum (Okamoto et al. 2005; Scannapieco et al. 2009), carries uncertainties.
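As a concrete illustration of the decomposition, the sketch below encodes the exponential (Eq. 9) and Gaussian bar (Eq. 10) components together with the analytic component mass M = 2πΣ0 r_d^2 quoted above, and cross-checks the analytic mass against a direct numerical integration. The profile parameters are made up for illustration, not fitted values from the paper.

```python
import numpy as np

def sigma_exp(r, sigma0, rd):
    """Exponential surface density (Eq. 9): Sigma(r) = Sigma0 * exp(-r/rd)."""
    return sigma0 * np.exp(-r / rd)

def sigma_bar(r, a0, r0, width):
    """Gaussian bar component (Eq. 10), with amplitude a0, centre r0 and width."""
    return a0 * np.exp(-0.5 * ((r - r0) / width) ** 2)

def exp_component_mass(sigma0, rd):
    """Integrating Eq. 9 over the plane: M = int 2*pi*r*Sigma(r) dr = 2*pi*Sigma0*rd^2."""
    return 2.0 * np.pi * sigma0 * rd ** 2

# Numerical cross-check with illustrative parameters:
# Sigma0 = 1e9 Msun/kpc^2 and rd = 4 kpc, roughly in the regime of the fitted discs.
r = np.linspace(0.0, 60.0, 60001)                       # kpc
integrand = 2.0 * np.pi * r * sigma_exp(r, 1.0e9, 4.0)  # Msun/kpc
m_numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
```

For these illustrative numbers the integral gives ≈ 10^11 M⊙, of the same order as the stellar disc masses quoted in Table 2.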
Our simulated discs span a large range of characteristics: stellar disc masses are in the range M disc,s = 5 − 9 × 10 10 M⊙, bulge masses of M bulge,s = 2 − 7 × 10 10 M⊙, B/D ∼ 0.23−1.2 and gas fractions fg = 0.05−0.28. The scale radii of the discs, r d , vary from typically 4 − 5 kpc to > 10 kpc in the bulge-dominated systems. As we demonstrate below, extended disc galaxies of Sb, or even Sbc type, form only when star formation is numerically resolved in the whole disc (n0 = 0.1 cm −3 ), and a low efficiency of ǫ ff ∼ 1 per cent (or very strong feedback) is adopted. At larger efficiencies we observe how the B/D ratios increase, the discs are less extended, the rotational velocities peak at very large values and the spiral patterns become more tightly wound and less pronounced. This indicates a shift towards early type discs like Sa or even S0.
EFFECT OF STAR FORMATION PARAMETERS
In this section we study the influence of the star formation parameters, i.e. in essence the small-scale physics, on disc properties at z = 0. The resulting stellar surface densities (Σs), gas surface densities (Σgas) and rotational velocities measured from the gas (vrot) are presented in Figs 4 and 5 for the first 11 simulations in the SR6 suite at z = 0 (see Table 1).
The star formation density threshold, n0
We start by focusing on the data presented in Fig. 4. By keeping ǫ ff fixed at 1 per cent while varying n0, we observe a strong change in the ability to form stars at large radii. The galaxies adopting a large threshold have a more concentrated distribution of stars, smaller stellar disc scalelengths and larger Σgas at all radii. The scalelengths are r_d > 4 kpc for n0 = 0.1 cm−3, but only r_d ∼ 2.5 kpc for n0 = 1 cm−3. The latter values are on the low side when compared to observations of late-type spirals in this mass range (Courteau et al. 1996; Courteau 1997; Gnedin et al. 2007). The systematically lower Σs signals an under-resolved or 'missing' star formation throughout the disc: the average physical gas density does not efficiently cross the targeted n0, even at intermediate radii. This is also reflected in the gas fractions of ∼ 25 per cent, which are much larger than observed average values for galaxies of this size (Garnett 2002; Zhang et al. 2009). The rotational velocities are large at small radii for n0 = 1 cm−3, regardless of the choice of ǫ ff. Naively, one would expect this numerically induced star formation deficiency to alleviate the angular momentum loss at high redshift, hence forming a less concentrated galaxy. However, due to secular evolution in the disc, this is not the case: disc instabilities drive gaseous flows to the centre of the disc where, as the gas crosses the threshold, star formation can proceed. We conclude that, given our numerical resolution and simulated system, n0 = 0.1 cm−3 yields more realistic (average) disc galaxies when compared to observations. We discuss this numerical effect and its relationship to the adopted mesh resolution further in Appendix B. Note that this is not a fundamental result of galaxy formation; it serves only as a tuning given our numerical resolution, and is the basis for the subsequent tests. As discussed in Section 2.1, n0 should be increased as the resolution is increased.
The star formation efficiency, ǫ ff
We now turn to the data presented in Fig. 5, where we keep the threshold fixed at n0 = 0.1 cm−3 and adopt ǫ ff = 1, 2 or 5 per cent. As ǫ ff is increased, Σs increases at small radii, i.e. the bulge mass increases, signalling a lower disc angular momentum. The bulge-to-disc ratio increases from B/D = 0.25 to 1.25 as ǫ ff increases from 1 to 5 per cent. Σgas roughly follows a 1/r-profile and its magnitude is lowered at all radii by approximately the relative change in efficiency. The stellar disc is less extended, yet the fitted exponential scalelength increases for larger efficiencies (see Table 2). For ǫ ff = 1 per cent, the disc scalelength is measured to be r_d ∼ 4−5 kpc, in good agreement with observed average values from the SDSS (Gnedin et al. 2007), while for ǫ ff = 5 per cent, r_d ≈ 15 kpc is a > 2σ outlier. Large uncertainties exist for r_d > 10 kpc as the stellar discs are small and feature almost flat stellar surface density profiles. At large radii in all simulations, r_d shifts to ∼ 2 kpc. Disc breaking is a well observed phenomenon (Pohlen & Trujillo 2006), correlated with a dip in the star formation rate, and has been studied numerically by Roškar et al. (2008). As larger star formation efficiencies lead to less extended discs, the disc breaks occur at smaller radii: r_break ≈ 16, 14, 10 kpc for ǫ ff = 1, 2 and 5 per cent, respectively. The breaks can also be seen from the average stellar ages t*, shown in Fig. 6.

Figure 5. The effect of the Schmidt-law star formation efficiency. The panels show the stellar surface density (left), gas surface density (middle) and rotational velocity measured from the gas (right). We consider all material within a height of |z| < 2.5 kpc for all components. The Schmidt-law density threshold is n0 = 0.1 cm−3 in all simulations. The different colours are described in the first row, and a dashed line indicates that we use the extended feedback model (see text).
The central parts of the discs generally consist of older stars formed at z > 1, and t* decreases with radius, reaching t* ≈ 6 Gyr. Past the disc break, older stars appear, which can in part be attributed to stellar migration as well as to pollution by old halo stars with t* ≈ 11−12 Gyr. We note that t* flattens, or even declines, towards the centre of the discs as ǫ ff is lowered. This is due to secular evolution: as spiral structure becomes more pronounced (as in n01e1), gas is transported towards the centre more efficiently and late-time star formation occurs; see Fig. 1.
The efficiency has a strong impact on the rotational velocity. The rotation curve in the n01e5 simulation features a strong peak in the inner parts of the disc. As ǫ ff is lowered, B/D decreases and the velocity profile flattens. Only when ǫ ff < 2 per cent can a flat rotational velocity profile be produced! In n01e1, the rotational velocity reaches vrot ≈ 275 km s −1 and stays roughly flat. For n01e2 and n01e5, vrot peaks at ≈ 310 and 360 km s −1 , but converges at 275 km s −1 close to r = 20 kpc.
In the left-hand panel of Fig. 7, we plot the circular velocities, i.e. vc(r) = [GM(< r)/r]^1/2, for the dark matter, disc + bulge baryons and total mass of the galaxies. We note that the vc-profiles are well traced by the cold gas rotation curve (the stellar rotational velocities are lower than expected from vc due to a larger velocity dispersion). While all simulations converge at large radii, and have equal dark matter and baryon contributions within r ∼ 17 kpc, the mass distribution (and angular momentum distribution) differs dramatically, leading to large differences in circular velocities. As we will demonstrate in the next section, a majority of the mass within the bulge component originates from the intense star formation epoch at z ∼ 2−3, where the value of ǫ ff matters the most. We also observe a significantly enhanced dark matter contraction at large ǫ ff. This effect in our simulation suite, and its relevance for direct dark matter detection, has recently been analysed by Pato et al. (2010).

Figure 6. Average stellar ages as a function of radius. The colours represent different star formation efficiencies, ǫ ff = 1 (black), 2 (red) and 5 per cent (blue). Dashed lines mark the measured stellar surface density breaks.
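The circular velocity curve of Fig. 7 follows directly from the enclosed mass. A minimal sketch (with made-up particle data, not the simulation output) of vc(r) = sqrt(GM(<r)/r) in convenient galactic units:

```python
import numpy as np

G = 4.302e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ(r_kpc, m_enclosed):
    """Circular velocity v_c(r) = sqrt(G * M(<r) / r), in km/s."""
    return np.sqrt(G * m_enclosed / r_kpc)

def enclosed_mass(radii, masses, r):
    """Total mass of particles/cells with radius < r [kpc], masses in Msun."""
    return masses[radii < r].sum()
```

For example, 10^11 M⊙ enclosed within 10 kpc gives vc ≈ 207 km s−1, of the same order as the rotation velocities quoted for these discs.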
Star formation histories
The left-hand panel of Fig. 8 shows the star formation histories for all stars belonging to the discs in n01e1, n01e2 and n01e5 at z = 0. The average SFR at the current epoch is ∼ 3−4 M⊙ yr−1 in all simulations, regardless of numerical setting. Moreover, the star formation history during the quiescent phase of disc evolution, i.e. after z ∼ 1, is relatively flat and roughly the same in all simulations. Significant differences occur at intense epochs of star formation, especially at z = 3 where the proto-disc is assembled via cold streams, satellite mergers and gas accretion from the hot halo, as seen in the simulation snapshot in Fig. 9². Here the SFR peak changes dramatically from ∼ 43 M⊙ yr−1 in n01e5 to ∼ 23 M⊙ yr−1 in n01e1.
In n01e5, stars form efficiently everywhere, even in satellites. The gas is quickly consumed locally during the high-redshift assembly, and merging systems lose angular momentum to the dark halo, ending up in the central part of the galaxy. Accretion via cold streams and from the hot gaseous halo still supplies the galaxy with unprocessed gas (Dekel et al. 2009), but now in a more bulge-dominated environment. In n01e1, star formation is less efficient and a significant portion of the mass in the merging clumps is in gaseous form. This material is lost to the hot gaseous halo via ram-pressure and tidal stripping, or expelled during SNe events, and later cools down to join the disc. Hence, the material that is not consumed by star formation at z = 3 is processed at a later epoch, closer to z = 1−2 (see Fig. 8), but now in a more disc-like, higher angular momentum configuration. In fact, the trend at z = 3 is reversed at this later epoch, and the largest SFR is found for ǫ ff = 1 per cent. These two modes of star formation are related to the classical angular momentum problem (Navarro & White 1994), and they lead to fundamentally different modes of disc assembly and morphology. A confirmation of the above discussion is shown in Fig. 10, where contours of formed stellar mass are outlined in the star formation time vs. disc radius plane. Note that this mass refers to all the stars at z = 0 contained in the disc, and is hence the sum of the stars formed in merging satellites as well as in situ.

² The image is an RGB composite where red is temperature, green is metals and blue is density. Each quantity is a mass-weighted average along the line of sight. For each image pixel, we calculate the RGB triplet as [...]
While the formed stellar mass in n01e1 is smoothly distributed in a roughly exponential profile across the disc at all times, without a clear sign of extreme star formation bursts, the n01e5 simulation shows a strong central concentration of stars formed at t = 11.5 Gyr (z ∼ 3). This analysis confirms the notion of efficient star-forming satellites losing angular momentum and being dragged into the central parts of the galaxy. Fig. 1 shows mass-weighted projections of the stellar and gas surface densities for n01e1ML, n01e1, n01e2ML, n01e2, n01e5ML and n01e5. The gaseous discs are thin and extended in all simulations, and are surrounded by a warped layer of cold/warm gas, probably associated with misaligned accretion events (Shen & Sellwood 2006). A hot gaseous halo surrounds the discs, and a temperature projection (not shown) reveals an extended disc-halo interface of warm/hot gas. We will explore this in future work.
Hubble types
As discussed in Section 5.2, we find a very strong trend in disc and bulge mass with increasing star formation efficiency. The n01e1 simulation features an 8.6 × 10^10 M⊙ stellar disc with a 2 × 10^10 M⊙ bulge, hence B/D ∼ 1/4. In n01e5 the disc is 35 per cent less massive and the bulge 3.5 times more massive, with B/D ∼ 1.25. Roughly the same scaling holds when including the additional SNIa feedback and stellar mass-loss.
All discs show spiral patterns in the gas component, with a larger amplitude in the more gas-rich discs having lower ǫ ff. We also observe spiral structure in the stellar component, which is most pronounced in the n01e1, n01e1ML and n01e2ML simulations. As ǫ ff is increased, B/D increases and the spiral arms become more tightly wound, as marginal gravitational instabilities can no longer excite pronounced open spiral arm structure. All discs feature a stellar bar and, viewed edge-on, we observe how the inner stellar distribution flattens as ǫ ff is decreased. The flattened central parts of n01e1 and n01e1ML, and the fact that the bulge is well fitted using an exponential, are indicative of a bulge formed via secular processes (Kormendy & Kennicutt 2004), e.g. bar buckling (Debattista et al. 2006). The gaseous bar strengthens at lower ǫ ff, and in n01e1 and n01e1ML gas is transported towards the disc centre, triggering star formation. In these simulations, close to 50 per cent of the stars at z = 0 associated with the bulge formed in situ in the disc at z ≲ 1, and only ∼ 25 per cent formed at the intense star formation peak at z ∼ 3. This indicates that a significant portion of the flattened bulge has formed via secular evolution, leading to a pseudo-bulge.

Figure 8. [...] regardless of star formation parameters. Using a high efficiency leads to central galaxies and dwarfs burning their fuel quickly at high redshift during galaxy assembly, resulting in excess angular momentum loss and a prominent central spheroid. A lower efficiency avoids this issue, leaving more gas for star formation at lower redshifts in a more disc-like configuration. (Right) Star formation histories for a set of simulations of increasing supernova feedback strength (ESNII). We note that the large SFR peak at z = 3 is only lowered when a very large amount of energy is injected into the ISM.
This is in stark contrast to the bulge formation epoch seen in the central parts of the n01e5 simulations (right-hand panel of Fig. 10), where essentially all bulge stars form at z ∼ 3. The measured B/D ratio (see Table 2) suggests that the final disc in n01e5 is of S0/a type, in n01e2 of Sa/Sab type, and in both n01e1 and n01e1ML of Sb/Sbc type. We consider this agreement only as indicative, as each Hubble type spans a wide range of B/D and B/T values. Graham & Worley (2008) presented B/D and B/T flux ratios using a sample of over 400 galaxies observed in the K band. Their B/D estimates for different Hubble types confirm the classification of our simulated discs. There is no doubt that we are measuring a transformation along the Hubble sequence.

Figure 9. A large scale view of the assembling spiral galaxy from the SR5 simulation at z ∼ 3, the most intense epoch of star formation for this system. The RGB image² shows the gas component using temperature (red), metals (green) and density (blue). We can clearly distinguish accretion via streams of cold pristine gas (in blue) penetrating the shock-heated gas (in red), reaching the heart of the halo. Dwarf galaxies outside of the large gaseous halo are surrounded by puffy distributions of enriched gas originating from stellar outflows. Gas is efficiently lost via tidal and ram-pressure stripping as the dwarfs interact with the main galaxy and its hot gaseous halo. The distance measure is in physical units.

Figure 10. Formation time of stars and their radial distribution for the discs in n01e1 (left) and n01e5 (right) at z = 0. The contours trace regions of binned mass using bins of size ∆t = 0.5 Gyr and ∆x = 0.5 kpc. The contour lines trace, from thin lines with light shades to thick lines with dark shades, the formed stellar masses from log(M*) = 6.5 to log(M*) = 9.5 in steps of 0.25 dex.
While the formed stellar mass in n01e1 is smoothly distributed across the disc at all times, the n01e5 simulation shows a strong central concentration of stars formed at t = 11.5 Gyr (z ∼ 3).
EFFECT OF SUPERNOVA FEEDBACK
As we have demonstrated in the previous sections, a high SFE overproduces the central stellar mass of the galaxy. The inclusion of additional SNIa feedback and stellar mass-loss did not drastically change the galaxy properties, even though differences can be seen in Fig. 1 (the discs in n01e1 and n01e2 feature much stronger spiral structure) and Fig. 4, and more late-time star formation is made possible (see Section 7.2). The effect of stellar mass-loss was studied by Martig & Bournaud (2010), who found a stronger effect on the bulge mass in a similar setting, perhaps due to implementation differences.
In Figs 7 and 11 we present vc and Σs for the n01e5 simulation, but with different amounts of injected SNII energy; see Table 2. Note that we still enrich the ISM with metals in the simulation with ESNII = 0; without this, the effect of metal cooling would not be present in all simulations. We find that the bulge mass is lowered as we increase ESNII, but only for a very large injected value of 5 × 10^51 erg can the disc rotation curve peak at a reasonable vc < 300 km s−1, resembling that of the n01e1 simulation. As for the standard feedback runs in the previous section, the dark matter halo is more contracted when star formation is less regulated. The difference in vc at r = 20 kpc between n01e5SN5 and the other simulations, corresponding to a few 10^10 M⊙, is due to the gas expelled during galaxy assembly, which can be accounted for in the more massive gas halo. This effect can also be seen in Σs: the central values are decreased, the disc scale radius decreases and the break radius is shifted to larger radii. As seen in Table 2, a massive disc still forms. The effect on the star formation histories is shown in the right-hand panel of Fig. 8; we find no significant difference among the simulations, apart from the very energetic n01e5SN5 simulation. Its SFH now resembles that of n01e1, where the z = 3 amplitude is lowered to ≈ 20 M⊙ yr−1 and more gas is left to form stars in a disc-like environment at z ∼ 1−2.

Figure 11. Stellar surface densities of the z = 0 discs in the supernova feedback test suite. As the SNII feedback energy input is increased, the disc becomes more extended and the bulge component less massive.
The projected gas density and stellar maps were shown in Fig. 2. While the standard feedback simulations shown in Fig. 1 showed a clear Hubble sequence of open to tightly wound spiral structure as B/D was lowered, this is not the case for the feedback test suite. In n01e5SN5, the gaseous disc is heavily distorted, warped and puffed up by the large SNII energy injections. Star formation is here very different compared to n01e1 as stars form in filaments and shells from SNe explosions rather than in gas-rich spiral arms.
The effect of metal cooling is not always accounted for in cosmological simulations. Piontek & Steinmetz (2009a) included this effect and reported difficulties in suppressing the initial high-z peak, even with sophisticated feedback models. In our simulations, metal cooling is roughly counteracted by the standard SNII feedback. If metal enrichment is turned off together with the feedback, we do not find a significant modification to our discs. For example, the n01e1NFB simulation shows a surprisingly successful set of characteristics when compared to n01e1 (see Table 2). As zero-metallicity gas cools inefficiently below 10^4 K, as well as in the range 10^5 K to 10^7 K, the n01e1NFB disc essentially behaves as its higher metallicity counterpart, but with SNII heating balancing cooling. This is the philosophy behind subgrid multiphase models (e.g. Springel & Hernquist 2003), in which feedback is implicitly treated via a stiff gas equation of state. Note that our polytropic EOS is slightly stiffer than what is usually adopted (γ = 2 instead of γ = 5/3).
Angular momentum of the baryons
For each galaxy we calculate the cumulative specific angular momentum vector, defined as

j_bar(≤ r) = (1/M(≤ r)) Σ_{i=1}^{N} m_i (x_i × v_i), (11)

including all bulge and disc baryons. Here x_i and v_i are the positions and velocities of the gas cells and star particles of the N elements within a radius r encapsulating the mass M(≤ r). The resulting j_bar = |j_bar| are presented in Table 2.
Using the sample of Courteau (1997) and Mathewson et al. (1992), Navarro & Steinmetz (2000) calculated the specific angular momenta vs. rotational velocity for late-type spiral galaxies and compared them to numerical simulations. The discs were assumed to follow an exponential profile, for which the peak rotational velocity, vrot,2.2, occurs at r = 2.2 r_d, and it follows that j_bar = 2 r_d vrot,2.2. This assumption can be misleading when comparing to simulated galaxies, as the true vrot peak can be significantly underestimated in the case of bulge-dominated disc galaxies. The sample of Courteau (1997) concerned Sb-Sc galaxies, for which B/D is low and a dominating exponential disc assumption is roughly valid. The difference between measured and estimated angular momentum content makes it difficult to compare simulated and observed galaxies, as discussed in Abadi et al. (2003a) and Piontek & Steinmetz (2009b). A simulated galaxy can be considered a successful realization of a late-type (Sb-Sc/Sd) galaxy if the estimated and measured angular momenta are in agreement.
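The two angular momentum measures being compared, the direct particle sum of Eq. 11 and the exponential-disc estimate j = 2 r_d vrot,2.2, can be sketched as follows; the test particles below are synthetic, placed on a circular ring where the specific angular momentum is known analytically (j = R v).

```python
import numpy as np

def j_measured(pos, vel, mass):
    """Specific angular momentum |sum_i m_i (x_i x v_i)| / sum_i m_i (Eq. 11),
    for positions [kpc], velocities [km/s] and masses [Msun]."""
    l_tot = np.sum(mass[:, None] * np.cross(pos, vel), axis=0)
    return np.linalg.norm(l_tot) / np.sum(mass)

def j_exponential_disc(rd, v_rot22):
    """Estimate for a pure exponential disc: j = 2 * rd * v_rot(2.2 rd)."""
    return 2.0 * rd * v_rot22

# A thin ring of particles on circular orbits in the x-y plane: j should equal R * v.
n = 1000
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
R, v = 8.0, 250.0                                          # kpc, km/s
pos = np.column_stack([R * np.cos(phi), R * np.sin(phi), np.zeros(n)])
vel = np.column_stack([-v * np.sin(phi), v * np.cos(phi), np.zeros(n)])
mass = np.full(n, 1.0e6)                                   # Msun
```

With r_d = 5 kpc and vrot,2.2 = 275 km s−1, the exponential-disc estimate gives 2750 km s−1 kpc, the value quoted above for the n01e1-like discs, while the ring recovers j = R v = 2000 km s−1 kpc from Eq. 11 directly.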
Focusing on the n0 = 0.1 cm−3 suite, we find that the n01e1 and n01e1ML simulations are in good agreement with the observed galaxies, both when analysed using Eq. 11 and with the exponential disc approximation. Typical measured and estimated values are here j_bar ∼ 2000 km s−1 kpc and j_bar ∼ 2750 km s−1 kpc, respectively. As ǫ ff is increased, the calculated angular momentum decreases. The ǫ ff = 2 per cent simulations are still part of the observed scatter, while higher values create more significant outliers in the observed distribution. When using the exponential disc approximation, all simulated galaxies are in good agreement with the observed data, as the velocities are quite comparable at larger radii, and because the discs, although less extended, have larger fitted r_d in the higher efficiency cases (see Fig. 5). We conclude that an angular momentum reservoir comparable to that of Sb/Sc galaxies has been reproduced for the baryons in the case of low SFE (i.e. ǫ ff = 1 per cent).
The lack of correlation between ESNII and the baryonic angular momentum content might come as a surprise. However, while the B/D ratio decreases for large supernova energy injections, the actual disc mass changes little, and is ∼ 6 × 10 10 M⊙ for all ǫ ff = 5 per cent simulations. As the net contribution of the bulge to the angular momentum content is roughly zero, similar j bar is to be expected. All ǫ ff = 5 per cent simulations have measured j bar ∼ 1300−1450 km s −1 kpc which is close to the estimated j bar ∼ 1600 km s −1 kpc in n01e5SN5.
In summary, the largest measured baryonic specific angular momentum reservoir is found in the simulations using ǫ ff = 1 per cent, due to a massive disc component, regardless of whether feedback is included or not. At higher efficiencies, j_bar decreases, again regardless of feedback.

Figure 12. The i-band Tully-Fisher relationship from the SDSS (Pizagno et al. 2007). We show the observed average (solid line), 1σ (dashed line) and 2σ (dotted line) relations. The symbols are results from our simulated galaxies, which use n0 = 0.1 cm−3 and ǫ ff = 1 (black symbols), 2 (red symbols) and 5 (blue symbols) per cent.
The Tully-Fisher relationship
The photometric Tully-Fisher (TF) relation (Tully & Fisher 1977) links the characteristic rotational velocity of a galaxy with its total absolute magnitude. This correlation holds in all typical photometric bands, but with variation in functional form (e.g. Pizagno et al. 2007). Early attempts at forming realistic galaxies (e.g. Abadi et al. 2003a) showed offsets from the observed relation, owing to the formation of very concentrated bulge-dominated galaxies with low star formation activity at late times. Their velocity-magnitude relation had more in common with S0 galaxies (Mathieu et al. 2002). Recent work seems to have improved on these results via SN feedback-regulated star formation (Piontek & Steinmetz 2009b). These studies place galaxies closer to the observed relation, but this is in part achieved by circumventing the large measured vrot (caused by the dominant bulge) through the exponential disc assumption discussed above (but see Governato et al. 2009), i.e. using vrot,2.2. Observationally, the measured quantity is often half of the HI velocity width at 20 (W20) or 50 (W50) per cent of the peak intensity.
In Fig. 12 we present the measured i-band magnitudes of several of our simulated galaxies as a function of their peak rotational velocities measured from the gas component. These are compared to the observed TF relation from the SDSS (Pizagno et al. 2007), who measured the velocity at a radius containing 80 per cent of the i-band flux. This measure (V80) is equivalent to measuring vrot at ∼ 3 r_d for a pure exponential disc. By using the true peak of vrot, we provide an absolute lower limit to the agreement with observations, and can clearly separate disc- and bulge-dominated galaxies. We note that the low-efficiency models agree well with the average data, regardless of the adopted feedback scheme, and even without feedback. At higher efficiencies, the discs are offset by more than 2σ, mostly due to their peaked rotation curves. In these circumstances, the inclusion of additional recycling via SNIa and stellar mass-loss increases the magnitudes by ∼ 0.5 dex in the n01e2ML and n01e5ML simulations. The n01e5NFB simulation is brighter than the corresponding simulations including feedback due to the exclusion of metal enrichment, leading to less efficient cooling and more gas left to form stars at later times. Allowing for enrichment without any energy deposition demonstrates this fact (see figure). From a photometric TF point of view, the ǫ ff = 5 per cent discs correspond to S0 systems or early-type spirals (Mathieu et al. 2002).
As described in the previous sections, vrot and Σs in the n01e5SN5 simulation agree fairly well with the disc values found in n01e1. The strong feedback brings the galaxy closer to the observed values, but the absolute magnitude is still lower than in the ε_ff = 1 per cent simulations. The SFH in Fig. 8 tells us why: after z = 1 the SFR is lower in n01e5SN5 than in n01e1 by almost a factor of 2 (even though the z = 0 values agree) due to strong gas expulsion, resulting in a disc less bright by ∼ 1/3 dex in i-band magnitude.
As for the specific angular momentum analysis, adopting the vrot,2.2 measure (or V80), all discs would agree statistically with the observed TF relation, especially when including SNIa feedback and stellar mass-loss.
Similar to the photometric TF is the 'baryonic TF' relation (McGaugh et al. 2000, 2010), which links characteristic rotation velocity with total galaxy baryonic mass. The baryonic TF relation therefore accounts for the fact that less massive galaxies are more gas rich, and their stars only account for a small fraction of the total disc mass. The same conclusion as above holds for the baryonic TF: the low-efficiency simulations agree well with the observations. As the baryonic masses of the discs are not strongly affected, even in the case of extreme feedback (n01e5SN5) the data points shift only with the increase of the vrot peak. As for the photometric TF, using vrot,2.2 puts all galaxies on the observed relation.
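The baryonic TF is commonly written as M_b = A v^4; the normalization below (A = 50 M⊙ km⁻⁴ s⁴) is an illustrative round value, not one adopted in this work, and the function names are ours:

```python
def baryonic_mass_btf(v_rot_kms, A=50.0):
    """Baryonic TF relation M_b = A * v^4.
    A is in Msun km^-4 s^4; the default is an assumed, illustrative value."""
    return A * v_rot_kms**4

def gas_fraction(m_star, m_gas):
    """Gas mass fraction of the total baryonic (stars + gas) disc mass."""
    return m_gas / (m_star + m_gas)

# A Milky Way-like rotator at v = 200 km/s:
mb = baryonic_mass_btf(200.0)   # ~8e10 Msun under the assumed normalization
```

With such a steep v^4 scaling, modest shifts in the measured rotation velocity (e.g. peak vrot vs. vrot,2.2) translate into large shifts in inferred baryonic mass, which is why the choice of velocity measure matters so much in the comparison above.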
The ΣSFR-Σgas relation
The most famous study of the globally averaged relationship between the star formation rate and gas surface density is from Kennicutt (1998) (from now on K98), where a sample of 61 nearby normal spiral galaxies and 36 infrared-selected starburst galaxies was considered. Assuming a Schmidt-law of the form Σ_SFR = A Σ_gas^N, a range of slopes (N ≈ 1−3) has been found, suggesting that either different SF laws exist in different galaxies or that N is very sensitive to systematic differences in methodology. Bigiel et al. (2008) presented a comprehensive analysis of the ΣSFR-Σgas relationship using multifrequency data of 7 spiral galaxies and 11 late-type and dwarf galaxies. The analysis pointed to a great variation within the sample and a markedly different functional behaviour in atomic- and molecular-dominated gas.

Figure 13. Σ_SFR vs. Σ_g for the resulting discs using ε_ff = 1 (n01e1), 2 (n01e2) and 5 (n01e5) per cent at z = 0 (left-hand panel) and at z = 3 (right-hand panel). The filled circles are radial data for 7 spiral galaxies from the THINGS survey, where Σ_gas includes the contribution from helium (Σ_gas = 1.36 Σ_HI+H2). The data points represent, from lightest to darkest, > 1, > 5, > 10, > 20 and > 30 detections. The vertical dotted lines separate regions where different star formation laws are conjectured to apply (see text). Diagonal dotted lines show lines of constant SFE = Σ_SFR/Σ_gas, indicating the level of Σ_SFR needed to consume 1, 10 and 100 per cent of the gas reservoir in 10^8 years. The solid black line is the average relation from Kennicutt (1998). The z = 3 observations approximately populate the region of the Wolfe & Chen (2006) observations. THINGS data courtesy of F. Bigiel.

Figure 14. Σ_SFR vs. Σ_g for the resulting discs using a high star formation efficiency (ε_ff = 5 per cent), but with different SN feedback strengths: E_SNII = 10^51 erg (n01e5), 2 × 10^51 erg (n01e5SN2) and 5 × 10^51 erg (n01e5SN5), as well as with zero SNII feedback energy but metal enrichment (n01e5NFBmet). The panels show the results at z = 0 (left) and at z = 3 (right). The lines and symbols are described in the caption of Fig. 13. SN feedback has little effect on the Σ_SFR-Σ_g relation at z = 0, but does affect the high-redshift relation, although only for very large energy injections (E_SNII ≥ 2 × 10^51 erg).
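The K98 composite law has the form Σ_SFR = 2.5 × 10⁻⁴ (Σ_gas/M⊙ pc⁻²)^1.4 M⊙ yr⁻¹ kpc⁻², and the diagonal SFE lines in Fig. 13 correspond to fixed gas-depletion times. A minimal sketch (function names are ours):

```python
def sigma_sfr_k98(sigma_gas):
    """K98 composite Schmidt law: Sigma_SFR = 2.5e-4 * Sigma_gas^1.4,
    with Sigma_gas in Msun pc^-2 and Sigma_SFR in Msun yr^-1 kpc^-2."""
    return 2.5e-4 * sigma_gas**1.4

def depletion_time_yr(sigma_gas):
    """Time (yr) to consume the local gas reservoir at the K98 rate.
    Sigma_gas is converted to Msun kpc^-2 (1 Msun pc^-2 = 1e6 Msun kpc^-2)."""
    return sigma_gas * 1.0e6 / sigma_sfr_k98(sigma_gas)

# At Sigma_gas = 10 Msun/pc^2, K98 gives ~6.3e-3 Msun/yr/kpc^2,
# i.e. a depletion time of ~1.6 Gyr.
```

Under this law a disc at Σ_gas ∼ 10 M⊙ pc⁻² consumes its gas in a few Gyr, which is why the z ∼ 3 DLA measurements, an order of magnitude below K98, imply much less efficient star formation at high redshift.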
The THINGS data of Bigiel et al., relevant for spirals, as well as the K98 law (Eq. 12), is reproduced in the left-hand panel of Fig. 13 together with the azimuthally averaged (∆r = 540 pc) data from n01e1, n01e2 and n01e5. For the calculation of ΣSFR we only consider stars younger than 50 Myr. At a given value of Σgas we find a clear trend of higher ΣSFR values for higher ε_ff. All simulations are compatible with the range of observed values, having the same functional behaviour but with an offset. We note that only the disc in the n01e5 simulations is compatible with the K98 relation. The n01e1 simulation is on the low side but can still statistically be associated with one of the THINGS spiral galaxies. However, at high redshift the argument can be reversed, as can be seen in the right-hand panel of Fig. 13. The observations of DLAs at z ∼ 3 by Wolfe & Chen (2006) are typically an order of magnitude lower than the K98 relation, agreeing only with measurements of the low-density environment of the discs in our low-efficiency simulations. This trend is also predicted by simulations including treatment of H2 formation (Gnedin & Kravtsov 2010a,b). In essence, while n01e1 is on the low side at z = 0, it is consistent with high-redshift observations, and the reverse argument is valid for n01e5. A higher efficiency is acceptable at lower redshift, and is predicted due to e.g. higher gas metallicity. As the bulge component is assembled at high redshift, the efficiency of star formation during this epoch is crucial in setting the morphology of the galaxy.
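The azimuthally averaged ΣSFR profiles described above are built from stars younger than 50 Myr in annuli of width ∆r = 540 pc. Schematically (a sketch assuming per-particle radius, mass and age arrays; not the actual analysis pipeline):

```python
import math

def sigma_sfr_profile(r_kpc, mass_msun, age_myr, dr_kpc=0.54, age_cut_myr=50.0):
    """Azimuthally averaged Sigma_SFR (Msun yr^-1 kpc^-2): mass of stars
    younger than age_cut in annuli of width dr, divided by the annulus
    area and by the age window."""
    n_bins = int(max(r_kpc) / dr_kpc) + 1
    mass_in_bin = [0.0] * n_bins
    for r, m, a in zip(r_kpc, mass_msun, age_myr):
        if a < age_cut_myr:
            mass_in_bin[int(r / dr_kpc)] += m
    profile = []
    for i, m in enumerate(mass_in_bin):
        area = math.pi * ((i + 1) ** 2 - i ** 2) * dr_kpc ** 2   # kpc^2
        profile.append(m / area / (age_cut_myr * 1.0e6))         # per yr
    return profile
```

Restricting the estimate to a fixed young-age window makes the simulated ΣSFR directly comparable to observational tracers that are sensitive to recent star formation.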
The same analysis is performed for the feedback test suite in Section 6, and shown in Fig. 14. At z = 0, all simulations show a similar functional behaviour, but with a weak trend of lower ΣSFR as ESNII is increased, while remaining comparable to the K98 law. At z = 3, a slightly greater effect is found, but only for very large energy injections (ESNII ≥ 2 × 10^51 erg). The extreme case of ESNII = 5 × 10^51 erg (n01e5SN5) is comparable to lowering the star formation efficiency to ε_ff = 2 per cent (n01e2). None of the strong feedback simulations regulates star formation enough to reproduce the low ΣSFR values found for ε_ff = 1 per cent (n01e1). A similar z ∼ 3 insensitivity of the K-S relation to feedback was found by Kravtsov (2003).
DISCUSSION AND CONCLUSIONS
In this paper we have presented a set of AMR simulations studying the assembly of large Milky Way-like disc galaxies. The self-consistent formation of a late-type disc galaxy has remained elusive in the field of numerical galaxy formation, mainly due to the strong loss of angular momentum in the galaxy assembly process. A popular solution to this problem is to regulate star formation at high redshift via supernova explosions that drive galactic winds, transporting material out of star-forming regions hence lowering the local star formation rate.
We have investigated the plausibility of this mechanism in comparison to a small-scale (∼ 100 pc) physical approach where star formation is made inefficient by modifying the Schmidt-law star formation normalization. In a very crude way, this mimics unresolved physics such as H2 formation, small-scale turbulence and radiative effects. We find that the Schmidt-law efficiency of star formation is a far more successful way of regulating star formation towards realistic galaxies than what can be achieved via supernova feedback. Our most successful models reproduce Milky Way-like galaxies with flat rotation curves, where the small bulge component is formed via secular processes. The main conclusions of this work can be summarized as follows.
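The Schmidt-law normalization being varied here is the star formation efficiency per free-fall time, in the usual form ρ_SFR = ε_ff ρ_gas/t_ff with t_ff = sqrt(3π/(32Gρ)). A sketch of this recipe (constants in cgs; the mean molecular weight is an assumed value):

```python
import math

G_CGS = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M_H   = 1.6726e-24      # hydrogen mass, g
YR_S  = 3.156e7         # seconds per year

def free_fall_time_s(n_h, mu=1.36):
    """Local free-fall time t_ff = sqrt(3*pi / (32 G rho)) for gas with
    hydrogen number density n_h (cm^-3); mu = 1.36 is an assumed mean
    molecular weight accounting for helium."""
    rho = mu * M_H * n_h
    return math.sqrt(3.0 * math.pi / (32.0 * G_CGS * rho))

def sfr_density(rho_gas, eps_ff, t_ff):
    """Schmidt-law star formation rate density: rho_SFR = eps_ff * rho_gas / t_ff."""
    return eps_ff * rho_gas / t_ff

t_ff = free_fall_time_s(100.0)      # ~4.4 Myr at n_H = 100 cm^-3
# Local gas consumption time is t_ff / eps_ff: ~0.44 Gyr at eps_ff = 1 per cent.
```

Because the local consumption time scales as t_ff/ε_ff, dropping ε_ff from 5 to 1 per cent lengthens the gas consumption time fivefold at fixed density, which is the lever used in this work to suppress high-redshift star formation.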
(i) Disc characteristics such as Σ*(r), Σgas(r), vrot(r) and B/D depend strongly on the choice of star formation efficiency per free-fall time, ε_ff. This parameter essentially sets the mode of global star formation, hence governing the final spiral Hubble type: low efficiencies of ε_ff ∼ 1 per cent render discs of Sb or Sbc type, while ε_ff = 5 per cent moves the discs closer to Sa/S0 types. Simulations at low efficiencies agree well with observational constraints on disc characteristics (Courteau 1997; Gnedin et al. 2007), as well as the angular momentum content of disc galaxies (Navarro & Steinmetz 2000), the Tully-Fisher relationship (Pizagno et al. 2007) and the ΣSFR-Σgas relation (Kennicutt 1998; Bigiel et al. 2008). The origin of the successful Milky Way-like galaxy formation is a well motivated suppression of star formation at z ∼ 3, the epoch at which the violent assembly process would form a slowly rotating bulge in the case of efficient star formation.
(ii) Supernova feedback does not regulate star formation efficiently at low input energies. Only when the injected energy per supernova event is five times the canonical value, i.e. 5 × 10^51 erg, do we find lower and more realistic B/D ratios in the simulations tuned to the standard Kennicutt (1998) star formation law, leading to a flatter rotational velocity profile, hence resembling the galaxies formed without strong feedback but with a low Schmidt-law efficiency. This comes at the cost of a significantly distorted gas disc at z = 0, as well as a less bright stellar disc, as gas is expelled into the halo, leaving less fuel for star formation at late times. In essence, we find that changes in ε_ff can play a much greater role in shaping a spiral galaxy than gas redistribution via supernovae-driven winds.
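To give a feel for the energy budget being varied: with the canonical 10^51 erg per SNII event and an assumed (IMF-dependent) yield of ~0.01 SNII per M⊙ of stars formed, the total injection per star particle scales as follows (a back-of-the-envelope sketch, not the simulation code):

```python
E_SNII_ERG = 1.0e51      # canonical energy per SNII event
N_SNII_PER_MSUN = 0.01   # SNII per Msun of stars formed (IMF-dependent, assumed)

def feedback_energy_erg(stellar_mass_msun, boost=1.0):
    """Total SNII energy injected by a star particle of the given mass;
    boost = 2 or 5 mimics the n01e5SN2 / n01e5SN5 energy scalings."""
    return boost * E_SNII_ERG * N_SNII_PER_MSUN * stellar_mass_msun

# A 10^5 Msun star particle injects ~10^54 erg at the canonical energy,
# and ~5e54 erg in the extreme n01e5SN5-like case.
```

Even a fivefold boost changes only the normalization of this budget, which is consistent with the finding above that realistic B/D ratios require the extreme, non-canonical value.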
It is plausible that at very high resolution, or using a drastically different recipe for supernova feedback, lower values of ESNII may be successful in regulating the SFE. If so, it will still need to mimic the low efficiency on scales of a few 100 pc which, as argued in this work, can be absorbed by the ε_ff term.
(iii) If the star formation efficiency parameter is tuned to match the standard z = 0 K-S data (Kennicutt 1998), i.e. requiring on the order of ε_ff ≥ 5 per cent (e.g. Stinson et al. 2006), star formation is likely to be overestimated at high redshift (z = 3), where the amplitudes of ΣSFR are an order of magnitude lower (Wolfe & Chen 2006; Gnedin & Kravtsov 2010b). All efficiencies studied in this work (ε_ff = 1−5 per cent) are compatible with modern data from the THINGS survey (Bigiel et al. 2008), but only when ε_ff ∼ 1 per cent can the constraints from z = 3 data be met and late-type, disc-dominated systems form. As the true SFE varies in space and time, being dependent on small-scale physics governing H2 formation (see e.g. Gnedin et al. 2009), present-day simulations based on a single-valued efficiency parameter have little predictive power.
We argue (see also Gnedin et al. 2009) that the results presented in this paper indicate that other processes in the ISM, in addition to or in conjunction with supernova feedback, are important in explaining the evolution of the galaxy population, as well as in regulating observed disc sizes. Some form of outflow process must be responsible for enriching the IGM (Oppenheimer & Davé 2006), which together with inefficient star formation might explain the faint end of the stellar mass function (Somerville & Primack 1999; Kereš et al. 2009). The same argument can be used for the mass-metallicity relationship, although Tassis et al. (2008) demonstrated that it could be reproduced without supernova-driven outflows. Galaxies of the masses considered in this work are situated at the knee of the stellar mass function, where the observed and simulated functions (even without feedback; see Kereš et al. 2009) are in closest agreement. This circumstance might explain why even our simulations without feedback resulted in realistic discs. At this galaxy mass, supernova-driven winds cannot escape the deep potential well, and are impeded by the hot halo. On the other hand, AGN feedback, which has recently been introduced into galaxy formation simulations (Di Matteo et al. 2005), is probably not relevant for the Milky Way, since the black hole might not be massive enough for efficient AGN radio-heating. At higher masses, and/or at high redshift, the inclusion of AGN is probably necessary to correctly reproduce the observed abundances and stellar masses. This is the greatest uncertainty of our work, which we leave for a future study.
The way in which galaxies populate dark matter haloes is an important topic; see e.g. Dutton et al. (2010) and references therein for a compilation of recent observational data and theoretical work. Recently, Guo et al. (2010) [see also Moster et al. (2010) and Behroozi et al. (2010)] matched the dark matter halo mass function from cosmological N-body simulations to the stellar mass function of galaxies from the SDSS (Li & White 2009). This analysis yields the required galaxy formation efficiency, η = (M*/M_halo)(Ωm/Ωb), i.e. the fraction of the universal baryons that must have condensed into stars at a given halo mass. In our "best-case" model (n01e1ML, see Table 1), the total stellar and dark matter halo virial masses are ∼ 10^11 M⊙ and ∼ 10^12 M⊙ respectively. This results in a stellar fraction of 10 per cent, which corresponds to almost 60 per cent of the cosmic baryon fraction. The rest of the baryons reside in the stellar halo, gaseous disc and ionized gas halo. At this halo mass, abundance matching requires that the stellar disc accounts for only ∼ 20 per cent of the cosmic baryon fraction, i.e. a factor of three lower. Similar discrepancies exist in all modern work on numerical galaxy formation (Abadi et al. 2003a; Okamoto et al. 2005; Governato et al. 2007; Scannapieco et al. 2009; Piontek & Steinmetz 2009b), and their origin is not yet known, although AGN feedback is a compelling mechanism at the high-mass end, as discussed above. This issue is the topic of a follow-up paper in preparation. Behroozi et al. (2010) performed a comprehensive analysis of abundance matching, accounting for systematic errors in e.g. the stellar mass estimates, the halo mass function, cosmology, etc. Our simulated galaxy formation efficiencies would be in ∼ 2σ agreement with their result (see their Fig. 11). We note that our own Galaxy and M31 might also be strong outliers in this analysis, considering the inferred η from mass modelling (Klypin et al. 2002; Seigar et al. 2008) as well as from recent MW halo mass estimates (Xue et al. 2008).
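The galaxy formation efficiency quoted above follows directly from the definition. With illustrative cosmological parameters (Ωm ≈ 0.27, Ωb ≈ 0.045; not necessarily those of the simulations), the n01e1ML-like numbers give η ≈ 0.6:

```python
OMEGA_M = 0.27    # assumed matter density parameter (illustrative)
OMEGA_B = 0.045   # assumed baryon density parameter (illustrative)

def formation_efficiency(m_star, m_halo, omega_m=OMEGA_M, omega_b=OMEGA_B):
    """eta = (M*/M_halo) * (Omega_m/Omega_b): the fraction of a halo's
    universal baryon budget that has condensed into stars."""
    return (m_star / m_halo) * (omega_m / omega_b)

eta = formation_efficiency(1.0e11, 1.0e12)   # best-case model numbers
print(f"eta = {eta:.2f}")                    # ~0.6, i.e. ~60 per cent
```

This reproduces the mismatch discussed in the text: the simulated η ≈ 0.6 is roughly three times the ∼ 20 per cent value required by abundance matching at this halo mass.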
Abundance matching is insensitive to the actual Hubble types of the galaxies, forcing all galaxies of a specific mass to be linked to only one halo mass. At the stellar mass scale of the Milky Way (5−7 × 10^10 M⊙), only ∼ 25 per cent of galaxies are of Sb/Sbc type (Nair & Abraham 2010). It is plausible that the more active merger histories associated with ellipticals and early-type disc galaxies have led to a stronger mass expulsion, via e.g. AGN, in comparison to their more disc-dominated counterparts. In this scenario, late-type discs are expected to be outliers in the galaxy formation efficiency vs. stellar mass relation, considering the strong bias towards early-type systems. A detailed sub-division into Hubble types has not yet been performed when matching galaxies to haloes, although colour separations into red and blue systems have been made in studies using weak lensing (Mandelbaum et al. 2006) and satellite kinematics. These studies indicate a different galaxy formation efficiency for galaxies similar in mass to the Milky Way; a late-type galaxy is associated with a halo of ∼ 0.5 dex lower mass compared to an equally massive early type (see e.g. fig. 11 in More et al. 2010).
Understanding, from a numerical perspective, the spread of baryon fractions across dark matter haloes of different masses, accretion histories and environments is a complicated problem, and will require a large sample of high-resolution simulations, which we leave for a future investigation.
ACKNOWLEDGMENTS
We are very grateful to D. Potter for generating the initial conditions used in this paper. We thank F. Bigiel for providing a copy of his data. We thank Andrey Kravtsov, Joseph Silk, Simon White, Joop Schaye, Francois Hammer and Alister Graham for valuable comments. This work was granted access to the HPC resources of CINES and CCRT under the allocation 2009-SAP2191 made by GENCI (Grand Equipement National de Calcul Intensif). We have also made use of the zBox3 and Schrödinger supercomputers at the University of Zürich.

Figure A1. Stellar surface densities (top) and circular velocities (bottom) in SR6-n1e1ML (black line) and SR5-n1e1ML (red line).
Structural and Functional Variations of the Macrobenthic Community of the Adige Basin along the River Continuum
Abstract: Since the publication of the River Continuum Concept (RCC), the capacity of the longitudinal dimension to predict the distribution of species and ecological functions in river networks has been discussed by different river theories. The taxonomic structures and functional attributes of macrobenthic communities were investigated along the river continuum in the river Adige network (Northern Italy), with the aim of testing the reliability of the RCC theory and clarifying the relation between structural and functional features in lotic systems. Distance from the spring was found to be the most representative proxy among the environmental parameters. The analysis highlighted the decrease of biodiversity levels along the river continuum. The decrease of taxonomic diversity corresponded to the loss in functional richness. The abundances of predator and walker taxa, as well as of semelparous organisms, declined along the longitudinal gradient, suggesting variations in community complexity and granulometry. Regression models also depicted the presence of disturbed communities in the central section of the basin, where intensive agricultural activities occur and affect environmental gradients. Overall, the results offered evidence that the river continuum may predict macrobenthic community structures in terms of taxonomic diversity, thus confirming the general validity of the RCC. Nonetheless, the functional analysis did not provide equally clear evidence to support the theory. Four decades after its postulation, the RCC is still a reliable model to predict the general distribution of macroinvertebrates. However, community functions may respond to a number of local factors not considered in the RCC, which could find a declination in other theories. The relations between structural and functional features were confirmed to be complex and sensitive to disturbances and local conditions.
Introduction
Sustainable development strictly depends on the good ecological status of aquatic ecosystems, which encompasses both structural and functional features. This concept is also stated by the EU Water Framework Directive (WFD) [1], which defined the ecological status as "an expression of the quality of the structure and functioning of aquatic ecosystems associated with surface waters". The understanding of both taxa distribution patterns and the ecological functioning of lotic ecosystems is therefore fundamental for the management of aquatic environments and the achievement of sustainability goals.
Nonetheless, the relations between community structures and functions in river networks are not fully understood. Since the publication of the River Continuum Concept (RCC) [2], river ecologists have investigated the spatial patterns of taxa distribution and ecological processes in lotic systems. The RCC theory represents a milestone in aquatic science, providing the first unified synthesis of the distribution of structures and processes along the spring-mouth gradient. The theory postulates that aquatic communities are structured differently to optimize energy use, according to the variation of physical attributes and the availability of food resources that occur longitudinally along the river continuum. After its publication, several efforts were made to discuss its general validity and to provide new river theories. Doretto et al. [3] recently revised the role of the RCC in shaping river ecology along its 40-year history and illustrated how other river theories and approaches were developed to overcome its main limitations. In general, the latter can be synthesized as the lack of consideration for local heterogeneity. For instance, the metacommunity approach was proposed to include species dispersal effects, thus considering both species sorting and mass effects to predict taxa distributions [4]. Since the RCC is focused on the main stem and omits to frame the river in the context of the river network, including its interruptions and disturbances, many studies tried to model the river system within new frameworks. While some theories, such as the Flood Pulse Concept (FPC) [5] and the Serial Discontinuity Concept (SDC) [6], were developed to describe more specific contexts, other recent models provide more comprehensive attempts to unify the overview of the river network.
The Riverine Ecosystem Synthesis (RES) [7] frames the river as a longitudinal arrangement of functionally and structurally similar functional process zones, defined by shifts in hydrological and geomorphological conditions. Rivers are thus viewed as downstream mosaics of large hydrogeomorphic patches, whose features determine the delivery of ecological processes and the occurrence of taxa. The Network Position Hypothesis (NPH) [8,9] further implemented the metacommunity theory with respect to the position within the river network. The authors state that headwater communities are mainly influenced by species sorting, while downstream assemblages are driven by dispersal-related dynamics. Besides the debate on the validity of the RCC, the role of the river continuum in shaping biotic communities is still steadily considered in river ecology [10-12]. Most of the other abovementioned river theories are reconcilable with the RCC, introducing exceptions and adjustments to the general framework of the river continuum and extending the concept beyond the original model of Vannote et al. [2].
Since macrobenthic communities are fundamental for ecosystem functioning, they represent ideal models for the study of river systems. In fact, macrobenthic invertebrates process a significant amount of organic matter, and they constitute food resources for crustaceans, fishes and birds, thus transferring relevant amounts of energy from primary producers to higher trophic levels [13,14]. They are also suitable indicators of local environmental conditions, due to their reduced capacity to move actively in aquatic ecosystems. Therefore, macroinvertebrate assemblages can be used to investigate variations in ecosystem functioning and environmental conditions [15,16].
Alpine rivers are good examples of impacted lotic systems, where actions for environmental conservation are urgently needed [17]. Alpine rivers and streams suffer from a variety of stressors related to human activities and climate change, which lead to major losses of freshwater biodiversity and ecosystem services [18,19]. An ideal model for such investigations is the Adige river [20], which is among the longest rivers of the alpine area. In fact, due to its importance and the presence of both natural and anthropic features in its catchment, it has long been one of the most important areas for studying macroinvertebrate assemblages [21]. The first ecological studies [22,23] found that the macrobenthic fauna accumulated pollutants from surrounding industries and other human activities. More recently, Giulivo et al. [24] analyzed the structural response of the macrobenthic community of the Adige river to seasonality and environmental stressors. They found that human stressors, such as streamflow alteration and pollutants, affect the community composition but not its diversity. Functional attributes were investigated by De Castro-Català et al. [25] in the Adige and two other European rivers, where common trends were observed. In particular, functional and structural indices were significantly correlated, and taxa richness was found to be the best predictor for pesticide concentrations. Larsen et al. [11] studied the functional feeding habits of macrobenthic assemblages using a geostatistical method and observed that, even following a heterogeneous pattern along the longitudinal gradient, the distribution of feeding functional groups was generally consistent with the RCC. Similar outcomes were observed by Pollice et al. [18] for the structural composition of the macrobenthic communities of the Adige and other alpine streams and rivers, using asymmetric eigenvector maps to detect the influence of directional spatial processes on taxa distribution.
The aim of this study is to test the reliability of the RCC theory by investigating structural and functional attributes of the macrobenthic communities of the river Adige basin along the river continuum. The study will also help clarify the relation between structural and functional features in lotic systems. Unlike the other abovementioned works carried out in the Adige river and other nearby systems, the present study considers taxonomic structures together with a more comprehensive range of functional and biological traits. The results will contribute by shedding light on the relation between structural and functional features of the macrobenthic community along an important alpine lotic system, as well as by re-evaluating the predictive capacity of the RCC theory for this specific river.
Study Area
The Adige river flows for 410 km; it is the second longest river in Italy and is hosted in the country's third largest watershed (12,200 km²). The spring is near Lake Resia at 1586 m a.s.l., and the river reaches the Adriatic Sea south of the Venice lagoon. The hydrological regime follows the typical pattern of alpine rivers, with higher discharges in summer due to snow melt. The highest values of mean monthly discharge at the lowest gauge station (located in Boara Pisani) occur in June, with a decreasing trend (373 m³/s for the period 1928-1990 and 292 m³/s for the period 2004-2016). The Adige basin (Figure 1) includes 8 tributaries, mainly flowing in the upper section, characterized by a forest-dominated landscape. The upper section of the river is surrounded by typical alpine landscape features, and it is mainly impacted by hydropower dams, while its mid and lower sections are stressed by the leaching of nutrients from intensive agricultural activities and livestock farming. The dramatic rise of fertilizer rates in the basin in the last decades has severely harmed ecosystem services and water quality [19], representing a stressor of increasing relevance.
Sampling Methodology, Community Indices and Environmental Descriptors
Macrobenthic assemblages are ideal models for the investigation of the effects of environmental features on living communities and were therefore chosen to test the predictions of the RCC in river networks, as well as to study the relations between community structures and functions. Macrobenthic samples were collected during the summer season (2009-2013) from 15 sites along the main course of the Adige river and 9 sites from tributaries (24 sampling sites in total) (Figure 1). Benthic macrofauna was collected by sweeping a 40 cm wide D-frame hand net (mesh size = 500 µm) over an area of 1 m², after suspending the sediment by kicking 1 m upstream. Five replicates per site were sampled. After being fixed in 4% formalin solution, the animals were brought to the laboratory to be classified. A total of 63 taxa were recorded (Table 1). Both structural and functional indices were computed to study macrobenthic community attributes and test RCC predictions with respect to environmental conditions. Individual abundances were expressed as individuals per unit area (ind. m⁻²). The taxonomic indices considered for structural composition were the following: taxa richness (S), individuals' abundance (N), Shannon-Wiener index (H'), Pielou index (J), Margalef index (d) and Simpson index (lambda). Functional analysis was based on six biological and functional traits attributed to each taxon: functional feeding group, mobility, adult life habitat, life span, reproductive frequency and habitat choice. These attributes were used to calculate three functional indices using the "FD" package for R [26], which takes into account multidimensional (i.e., multiple-trait) functional diversity: functional richness (FRic), functional evenness (FEve) and the Rao quadratic entropy index (RaoQ). The assignment of biological and functional attributes was carried out at genus level based on databases available in the literature [27,28].
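The structural indices listed above can be computed directly from per-taxon abundance counts. A minimal sketch (natural-log Shannon, as commonly used; the study's exact software conventions may differ):

```python
import math

def diversity_indices(abundances):
    """Structural indices from per-taxon counts: taxa richness S, total
    abundance N, Shannon-Wiener H' (natural log), Pielou evenness J,
    Margalef richness d and Simpson dominance lambda."""
    n = [a for a in abundances if a > 0]
    S, N = len(n), sum(n)
    p = [a / N for a in n]                             # relative abundances
    H = -sum(pi * math.log(pi) for pi in p)            # Shannon-Wiener
    J = H / math.log(S) if S > 1 else 0.0              # Pielou: H'/ln(S)
    d = (S - 1) / math.log(N) if N > 1 else 0.0        # Margalef
    lam = sum(pi * pi for pi in p)                     # Simpson dominance
    return {"S": S, "N": N, "H": H, "J": J, "d": d, "lambda": lam}

# A perfectly even 4-taxon sample has J = 1 and lambda = 1/4:
r = diversity_indices([10, 10, 10, 10])
```

Note that H', J and λ depend only on the relative abundances, while d scales richness against total abundance, which is why they can respond differently along the same environmental gradient.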
Nine environmental factors were measured to characterize the river continuum and local variability (Table 2). The distance of the sampling stations from the river spring, used as the main descriptor of the river continuum, was measured using Google Earth images. Altitude was recorded using a GPS device (Garmin 72 H, Garmin Ltd., Schaffhausen, Switzerland). Water temperature and dissolved oxygen concentration were measured using a multiparameter probe (YSI Model 85, YSI Incorporated, Yellow Springs, OH, USA), while mean water depth and streambed width were measured with a metric cord.
Granulometry was measured using an analytical sieve shaker (Fritsch Analysensieb DIN 4188, FRITSCH GmbH-Milling and Sizing, Idar-Oberstein, Germany) and expressed on the Krumbein (φ) scale [29]. The latter expresses the sediment size range according to a logarithmic modification of the Wentworth scale. Finally, NO3− and NH4+ were measured following the automatic colorimetric method using a Technicon AutoAnalyser II (SEAL Analytical Ltd., Southampton, UK) [30,31] and the Bower and Holm-Hansen protocol [32], respectively.
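The Krumbein scale is the base-2 logarithmic transform φ = −log₂(D/D₀), with D the grain diameter in mm and D₀ = 1 mm, so finer sediments take larger φ values:

```python
import math

def krumbein_phi(d_mm, d0_mm=1.0):
    """Krumbein phi scale: phi = -log2(D / D0), with D0 = 1 mm.
    Larger phi means finer sediment."""
    return -math.log2(d_mm / d0_mm)

# Medium sand (0.25 mm) -> phi = 2; a 16 mm pebble -> phi = -4.
```

Because each unit of φ corresponds to halving or doubling the grain diameter, the scale matches the factor-of-two class boundaries of the Wentworth classification.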
Methods of Analysis
Since the RCC assumes the co-presence of different environmental gradients along the longitudinal dimension, the distance between the spring and the sample sites was selected as the ideal descriptor to study the prediction capacity of the RCC. Spearman's rank correlation coefficient was used to investigate the correlations among environmental variables and to evaluate whether the distance from the source (Dist) adequately captures the effects of the remaining environmental variables. Then, its relation to biological traits and community indices was investigated by selecting the most appropriate regression models, using Excel to select the model form and the R language to assess the statistical significance of the regression models via the "nls.lm" function of the {minpack.lm} package. Regression models were included because of the presence of unimodal responses.
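The proxy-selection criterion described above (the variable with the highest mean absolute Spearman correlation against the others) can be sketched as follows; spearman_rho is a plain standard-library implementation (Pearson correlation of tie-averaged ranks), not the R routine used in the study:

```python
import math

def _ranks(x):
    """1-based ranks with ties replaced by their average rank."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

def most_representative(variables):
    """Variable with the highest mean |rho| against all others; `variables`
    maps variable name -> list of per-site values (a sketch of the
    criterion used above to select Dist)."""
    scores = {a: sum(abs(spearman_rho(v, variables[b]))
                     for b in variables if b != a) / (len(variables) - 1)
              for a, v in variables.items()}
    return max(scores, key=scores.get)
```

Applied to the measured parameters, this criterion singles out the variable that best summarizes the shared longitudinal gradient, which is how Dist emerges as the river-continuum proxy.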
Results
The Spearman correlations among the environmental variables of Table 2 are given in Table 3, where the very low and non-significant correlations of NO₃⁻ and NH₄⁺ with all the other environmental parameters can be observed. On the other hand, the remaining parameters Alt, Gran, Dist, Width, Depth, Temp and O₂ all present statistically significant correlations. Among them, Dist presents the highest mean absolute Spearman correlation with the other parameters, excluding NO₃⁻ and NH₄⁺ (Figure 2). These results show that Dist adequately describes the effects of topography, hydromorphology and physical water parameters, but not those associated with chemical water parameters, which are closely linked to human activities such as agriculture. Therefore, the distance from the spring (Dist) was confirmed to be an optimal descriptor of data variability and can be robustly contrasted with community indices and biological/functional traits. The best-fitting regression models for the community indices are presented in Table 4 and Figure 3. Statistically significant regression models were found for all the taxonomic indices except N, and for only one functional index (FRic). Negative exponential monotonic models were chosen for S, d and FRic, while polynomial relations were selected for H', J and λ, with the latter two showing a clear decreasing trend at central values. Table 5 shows the statistically significant regression models selected for the biological traits. Among the feeding groups, "predators" is the only attribute with a significant relation. Other statistically significant results were observed for adult life (both aquatic and aeric life habits), mobility (walker taxa) and reproductive frequency (semelparous taxa). No significant variations along the Dist gradient were found for life span and habitat choice.
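A minimal sketch of the descriptor-selection step described above: compute a Spearman matrix (Pearson correlation of ranks, ignoring ties) on toy stand-ins for the environmental variables, then pick the variable with the highest mean absolute correlation with the others. The variable names and generated values are illustrative only, not the study's data:

```python
import numpy as np

def spearman_matrix(data):
    """Spearman rank correlation: Pearson correlation of the ranks
    (no tie correction; adequate for continuous measurements)."""
    ranks = np.argsort(np.argsort(data, axis=0), axis=0).astype(float)
    return np.corrcoef(ranks, rowvar=False)

# Toy stand-ins: Dist drives topography and temperature, while NO3
# varies independently, mimicking the pattern found for the Adige.
rng = np.random.default_rng(1)
n_sites = 24
dist = np.sort(rng.uniform(0, 400, n_sites))
env = {
    "Dist": dist,
    "Alt": 2000 - 4.5 * dist + rng.normal(0, 50, n_sites),
    "Temp": 4 + 0.03 * dist + rng.normal(0, 0.5, n_sites),
    "NO3": rng.uniform(0.1, 2.0, n_sites),
}
names = list(env)
rho = spearman_matrix(np.column_stack([env[k] for k in names]))

# Mean absolute correlation of each variable with all the others
mean_abs = {names[i]: float(np.mean(np.abs(np.delete(rho[i], i))))
            for i in range(len(names))}
best = max(mean_abs, key=mean_abs.get)
```

On data of this shape, the longitudinal variables score high mean absolute correlations while the nutrient stand-in scores low, which is the criterion by which Dist was retained as the continuum descriptor.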
Aquatic adult life habit is the only significant attribute showing a polynomial/unimodal relation with Dist, with higher individual densities at central Dist values, while a negative exponential relation was found for the other traits (Figure 4).
Discussion
The presence of environmental gradients along the river continuum is the central concept at the basis of the RCC, and their role in shaping biotic communities still stimulates the scientific debate around river theories. The first result of this study is the observed relevance of Dist as the most representative proxy of all the most important parameters of topography, hydromorphology and physical water properties, with the exception of nitrogen species. This confirms the presence of a continuum along the longitudinal gradient and, at least in the case of the river Adige, reinforces the basic assumption of the RCC. On the other hand, the low correlation of Dist with nitrogen species indicates the limitations of this parameter, and consequently of the RCC, in capturing the effects of human activities scattered across the basin. This could be the reason why the regression analysis showed unimodal trends with higher or lower peaks at Dist values of approximately 200 km, which alter the river continuum and may reduce the capacity of the RCC to predict some functional attributes (e.g., shredder abundances). This section of the basin is characterized by intensive agricultural activities and urban centers, which are documented pollution sources threatening the water quality of the Adige river [19,20]. The decreasing trend of the Pielou evenness (J) and Simpson (λ) indices in this longitudinal range indicates macrobenthic communities dominated by few taxa, as commonly observed in agriculture-impacted aquatic ecosystems, e.g., [33,34]. The analysis of biological traits did not identify any other response of these organisms, except for an increase at the middle section of the aquatic adult life habit due to increased abundances of Oligochaeta. Nevertheless, nutrients alone may not properly capture the wide range of effects of agricultural activities on the river network.
In fact, agriculture is also responsible for the runoff of chemical products (e.g., pesticides) in the middle section of river Adige [20].
The results for both structural and functional features of macrobenthic communities can be compared with the findings of other studies carried out in the river Adige or in similar systems. For example, the reliability of Dist for describing the taxonomic composition of macrobenthic communities was also demonstrated by Pollice et al. [18] in the river systems of Northern Italy (including the river Adige basin). The decrease of predator taxa along the longitudinal gradient of the river Adige network was also detected by Larsen et al. [11], who additionally found significant patterns for other functional groups that were not confirmed in this study.
It must also be mentioned that flow regimes and disturbance history are additional factors affecting macrobenthic communities. Although samplings were carried out during the summer season, when river discharges reach their maximum levels, extreme low- and high-flow events that occurred in the past could strongly influence macrobenthic communities and ecological processes. At the same time, such extreme events seem not to exert significant influences on geochemical gradients [35], and therefore on the river continuum, thus potentially reducing the dependence of macrobenthic communities on the latter.
The results also demonstrated the complex relations between community structures and functions. In particular, the analysis suggested that the decrease of species richness, and more generally of taxonomic diversity, corresponds to a loss in functional richness (i.e., the number of biological/functional traits observed), as also found by De Castro-Català et al. [25], but not in the other functional indices. While it is intuitive that a larger number of taxa delivers a larger range of functions, a loss in species richness does not necessarily imply an uneven distribution of functional attributes, a dimension computed in both the functional evenness (FEve) and Rao quadratic entropy (RaoQ) indices. In fact, RaoQ values may tend to be negatively correlated with species richness, as described by Botta-Dukat [36]. Moreover, Pakeman [37] warns that functional indices are highly sensitive to errors in trait measurement. The attribution of biological/functional traits using literature data may introduce some errors and overlook the ecological plasticity of some taxa.
As expected, biodiversity levels decreased along the river continuum. Headwater systems present heterogeneous mosaics of habitats and more pristine conditions that support higher biodiversity levels, while lower river sections host more disturbed communities [33,38,39]. As a consequence, the decreasing trend of predator abundances may reflect the loss of structural complexity. The analysis of functional feeding groups in the river Adige network does not provide any evidence to support the RCC. Local heterogeneity and disturbances may be the cause of the lack of significant patterns for the other feeding groups, an explanation that may instead be coherent with other theories. Variation in mobility strategy is an adaptive response to granulometry variations along the continuum in accordance with the RCC theory, as demonstrated by the observed decrease of walker taxa along the longitudinal gradient. Nevertheless, some caveats have to be considered with respect to these conclusions. Taxonomic identification relies on the investigators' taxonomic skills across the different groups. For this reason, the biodiversity analyzed here may not exhaustively represent the whole diversity of macrobenthic assemblages, nor the biodiversity that would be found by specialist taxonomists of the different groups. Similarly, the attribution of functional and biological traits to each taxonomic group may be affected by the abovementioned limitations.
Overall, in the case of the Adige basin, the results offered evidence that the river continuum may predict macrobenthic community structures in terms of taxonomic diversity, thus confirming the general validity of the RCC theory. However, their functional organization may be driven by a number of factors not considered in the RCC, such as local patchy habitats, disturbances at different scales and surrounding land cover, which find expression in other theories, such as the Riverine Ecosystem Synthesis and the Network Position Hypothesis. Ecological processes such as species sorting and source-sink dynamics in non-pristine river networks are likely the result of the combined effects of other environmental gradients that accompany the longitudinal gradient (e.g., land use and cover, hydrological alterations and nutrient pollution) [40,41] and that were not directly captured by the present analysis. The use of geospatial statistics may harbor the potential to detect such complex relations. However, while demonstrating the validity of theories based on the river continuum, the present analysis suggests that future studies should consider both structural and functional indicators, as well as a comprehensive set of biological traits, to capture and describe the complex relations underpinning organisms' distributions in river networks. Finally, the fact that some sampling sites receive water from different springs within the same basin does not reduce the significance of the results but rather extends it, since the analysis then reflects not only the distance from a single source but the distance from multiple sources of the same river.
Data Availability Statement:
The data presented in this study are available on request from the first author. The data are not publicly available due to their current use in preparing a forthcoming publication.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2021,
"sha1": "0396d4a615c14f0c39ae7a6f21ce8f583f1e57cf",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/13/4/451/pdf?version=1613538892",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d315438824730f6940ba7b85e3219ab96191b60d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
Model-Agnostic Graph Regularization for Few-Shot Learning
In many domains, relationships between categories are encoded in a knowledge graph. Recently, promising results have been achieved by incorporating the knowledge graph as side information in hard classification tasks with severely limited data. However, prior models consist of highly complex architectures with many sub-components that all seem to impact performance. In this paper, we present a comprehensive empirical study of graph embedded few-shot learning. We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels. Our proposed regularization is widely applicable and model-agnostic, and boosts the performance of any few-shot learning model, including fine-tuning, metric-based, and optimization-based meta-learning. Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS, outperforming state-of-the-art graph embedded methods. Additional analyses reveal that graph regularized models result in a lower loss for more difficult tasks, such as those with fewer shots and less informative support examples.
Introduction
Few-shot learning refers to the task of generalizing from a very few examples, an ability that humans have but machines lack. Recently, major breakthroughs have been achieved with meta-learning, which leverages prior experience from many related tasks to effectively learn to adapt to unseen tasks [2,30]. At a high level, meta-learning has been divided into metric-based approaches that learn a transferable metric across tasks [31,32,36], and optimization-based approaches that learn initializations for fast adaptation on new tasks [7,28]. Beyond meta-learning, transfer learning by pretraining and fine-tuning on novel tasks has achieved surprisingly competitive performance on few-shot tasks [4,6,38].
In many domains, external knowledge about the class labels can be used. For example, this information is crucial in the zero-shot learning paradigm, which seeks to generalize to novel classes without seeing any training examples [14,16,40]. Prior knowledge often takes the form of a knowledge graph [37], such as the WordNet hierarchy [23] in computer vision tasks, or Gene Ontology [1] in biology. In such cases, relationships between categories in the graph are used to transfer knowledge from base to novel classes. This idea dates back to hierarchical classification [15,29].
Recently, few-shot learning methods have been enhanced with graph information, achieving state-of-the-art performance on benchmark image classification tasks [3,17,18,19,33]. Proposed methods typically employ sophisticated and highly parameterized graph models on top of convolutional feature extractors. However, the complexity of these methods prevents deeper understanding of the impact of incorporating graph information. Furthermore, these models are inflexible and incompatible with other approaches in the rapidly-improving field of meta-learning, demonstrating the need for a model-agnostic graph augmentation method.
Here, we conduct a comprehensive empirical study of incorporating knowledge graph information into few-shot learning. First, we introduce a graph regularization approach for incorporating graph relationships between labels that is applicable to any few-shot learning method. Motivated by node embedding [10] and graph regularization principles [11], our proposed regularization enforces category-level representations to preserve neighborhood similarities in a graph. By design, it allows us to directly measure the benefits of enhancing few-shot learners with graph information. We incorporate our proposed regularization into three major approaches to few-shot learning: (i) metric learning, represented by Prototypical Networks [31], (ii) optimization-based learning, represented by LEO [28], and (iii) fine-tuning, represented by SGM [25] and S2M2 R [21]. We demonstrate that graph regularization consistently improves each method and can be widely applied whenever category relations are available. Next, we compare our approach to state-of-the-art methods, including those that utilize the same category hierarchy, on the standard benchmark Mini-ImageNet and large-scale ImageNet-FS datasets. Remarkably, we find that our approach improves the performance of strong base learners by as much as 6.7% and outperforms graph embedded baselines, even though it is simple, easy to tune, and introduces minimal additional parameters. Finally, we explore the behavior of incorporating graph information in controlled synthetic experiments. Our analysis shows that graph regularization yields better decision boundaries in lower-shot learning and achieves significantly higher gains on more difficult few-shot episodes.
Model-Agnostic Graph Regularization
Our approach is a model-agnostic graph regularization objective based on the idea that the graph structure over class labels can guide the learning of model parameters. The graph regularization objective ensures that labels in the same graph neighborhood have similar parameters. The regularization is combined with a classification loss to form the overall objective. The classification loss is flexible and depends on the base learner; for instance, it can correspond to cross-entropy loss [4], or a distance-based loss between example embeddings and class prototypes [31].
Problem Setup
We assume that we are given a dataset defined as a pair of examples X with corresponding labels Y, where point x_i ∈ X has the label y_i ∈ Y. For each episode, we learn from a support set D_s = {(x_1, y_1), (x_2, y_2), ..., (x_K, y_K)} and evaluate on a held-out query set D_q. For each dataset, we split all classes into C_train and C_test, with C_train ∩ C_test = ∅. During evaluation, we sample N classes from the larger set C_test and K examples from each class. During training, we use the disjoint set of classes C_train to train the model. Non-episodic training approaches treat C_train as a standard supervised learning problem, while episodic training approaches match the conditions under which the model is trained and evaluated by sampling episodes from C_train. More details on the problem setup can be found in Appendix A. Additionally, we assume that there exists side information about the labels in the form of a graph G(Y, E), where Y is the set of all nodes in the label graph and E is the set of edges.
Regularization
We incorporate graph information using the random walk-based node2vec objective [10]. Random walk methods for graph embedding [24] are fit by maximizing the probability of predicting the neighborhood of each target node in the graph. Node2vec performs biased random walks by introducing hyperparameters that balance between breadth-first search (BFS) and depth-first search (DFS) to capture local structures and global communities. We formulate the node2vec loss below:

L_graph = − Σ_{y ∈ Y} Σ_{v ∈ N(y)} [ (1/T) sim(θ_y, θ_v) − log Z_y ],

where θ are node representations, sim is a similarity function between nodes, N(y) is the set of neighbor nodes of node y, T is a temperature hyperparameter, and Z_y is the partition function defined as Z_y = Σ_{v ∈ Y} exp((1/T) sim(θ_y, θ_v)). The partition function is approximated using negative sampling [22]. We obtain the neighborhood N(y) by performing a random walk starting from a source node y. The similarity function sim depends on the base learner, as outlined in Section 2.3.
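A minimal numerical sketch of this regularization term, assuming inner-product similarity and uniform negative sampling; the neighborhoods would come from node2vec random walks, which are omitted here for brevity:

```python
import numpy as np

def graph_reg_loss(theta, neighborhoods, T=2.0, n_neg=5, rng=None):
    """Node2vec-style regularizer: negative log-probability of each
    node's random-walk neighbors under a temperature-scaled softmax,
    with the partition function approximated by sampled negatives."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_nodes = theta.shape[0]
    loss = 0.0
    for y, neigh in neighborhoods.items():
        for v in neigh:
            pos = theta[y] @ theta[v] / T
            negs = rng.integers(0, n_nodes, n_neg)
            neg = theta[y] @ theta[negs].T / T
            log_z = np.log(np.exp(pos) + np.exp(neg).sum())  # approx. log Z_y
            loss += log_z - pos
    return loss
```

As expected for a regularizer of this form, the loss is lower when a node's declared neighbor already has a similar representation than when it has a dissimilar one.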
Augmentation Strategies
Our graph-regularization framework is model-agnostic and intuitively applicable to a wide variety of few-shot approaches. Here, we describe augmentation strategies for high-performing learners from metric-based meta-learning, optimization-based meta-learning and fine-tuning by formulating each as a joint learning objective.
Augmenting Metric-Based Models
Metric-based approaches learn an embedding function to compare query set examples. Prototypical Networks are a high-performing learner of this class, especially when controlling for model complexity [4,35]. Prototypical Networks construct a prototype p_j of the j-th class by taking the mean of the support set examples, and compare query examples using Euclidean distance. We regularize these prototypes so that they respect class similarities, giving the joint objective

L = L_proto + λ L_graph(p),

where L_proto is the prototypical classification loss and λ scales the graph term. We set the graph similarity function to negative Euclidean distance, sim(p_i, p_j) = −||p_i − p_j||_2^2. Note that our approach can easily be extended to other metric-based learners, for example by regularizing the output of the relation module in Relation Networks [32].
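The prototype construction and a joint objective of this shape can be sketched as follows; the graph term here keeps only the attractive part of the regularizer (squared distance between prototypes of neighboring classes), a deliberate simplification of the full node2vec loss:

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    """Class prototypes: the mean of the support embeddings per class."""
    return np.stack([support_x[support_y == j].mean(axis=0)
                     for j in range(n_classes)])

def proto_graph_loss(query_x, query_y, protos, edges, lam=1.0, T=2.0):
    """Joint objective: prototypical cross-entropy over negative squared
    Euclidean distances, plus a simplified graph term that pulls the
    prototypes of neighboring classes together."""
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -d
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    cls_loss = -logp[np.arange(len(query_y)), query_y].mean()
    graph_loss = np.mean([((protos[i] - protos[j]) ** 2).sum() / T
                          for i, j in edges])
    return cls_loss + lam * graph_loss
```

The `edges` argument lists class pairs that are neighbors in the label graph; λ would be tuned per shot count as described in Appendix B.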
Augmenting Optimization-Based Models
Optimization-based meta-learners such as MAML [7] and LEO [28] consist of two optimization loops: the outer loop updates the neural network parameters to an initialization that enables fast adaptation, while the inner loop performs a few gradient updates over the support set to adapt to the new task. Graph regularization enforces class similarities among parameters during inner-loop adaptation.
Specifically for LEO, we pass support set examples through an encoder to produce latent class encodings z, which are decoded to generate classifier parameters θ. Given instantiated model parameters learned in the outer loop, gradient steps are taken in the latent space to obtain adapted encodings z′ while freezing all other parameters, producing final adapted parameters θ′. For more details, please refer to [28]. Concretely, we obtain the joint regularized objective for the inner-loop adaptation

L = L_LEO + λ L_graph(z),

where L_LEO is LEO's inner-loop classification loss. We set the graph similarity function to the inner product, sim(z_i, z_j) = z_i^T z_j, though in practice cosine similarity, sim(z_i, z_j) = z_i^T z_j / (||z_i|| ||z_j||), results in more stable learning.
Augmenting Fine-tuning Models
Recent approaches such as Baseline++ [4] and S2M2 R [21] have demonstrated remarkable performance by pre-training a model on the training set and fine-tuning the classifier parameters θ on the support set of each task. We follow [4] and freeze the feature embedding model during fine-tuning, though the model can be fine-tuned as well [6]. We perform graph regularization on the classifiers in the last layer of the network, which are learned for novel classes during fine-tuning. This results in the objective

L = L_cls + λ L_graph(θ),

where L_cls is the fine-tuning classification loss. We set the graph similarity to cosine similarity, sim(θ_i, θ_j) = θ_i^T θ_j / (||θ_i|| ||θ_j||).
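A sketch of a cosine-similarity regularizer applied to fine-tuned classifier weights; `edges` is a hypothetical list of class pairs adjacent in the label graph (the WordNet extraction itself is not shown):

```python
import numpy as np

def cosine_graph_reg(theta, edges):
    """Graph regularizer for classifier weights: penalize low cosine
    similarity between the weight vectors of neighboring classes."""
    loss = 0.0
    for i, j in edges:
        cos = theta[i] @ theta[j] / (np.linalg.norm(theta[i])
                                     * np.linalg.norm(theta[j]))
        loss += 1.0 - cos          # zero when the two weights align
    return loss / len(edges)
```

During fine-tuning this term would be added to the classification loss with weight λ, pulling the classifiers of sibling WordNet classes toward one another.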
Experimental Results
For all ImageNet experiments, we use the associated WordNet [23] category hierarchy to define graph relationships between classes. Details of the experimental setup are given in Appendix B. On the synthetic dataset, we analyze the effect of graph regularizing few-shot methods.
Mini-ImageNet Experiments
We compare performance to few-shot baselines and the graph embedded approach KGTN [3] in the Mini-ImageNet experiment. We enhance S2M2 R [21], a strong baseline fine-tuning model. Table 1 shows graph regularization results on Mini-ImageNet compared to the results of state-of-the-art models. We find that S2M2 R enhanced with the proposed graph regularization outperforms all other methods on both 1- and 5-shot tasks.
As an additional baseline, we consider KGTN, which also utilizes the WordNet hierarchy for better generalization. To ensure that our improvements are not caused by the embedding function, we pre-train the KGTN feature extractor using S2M2 R. Even when controlling for improvements in the feature extractor, we find that our simple graph regularization method outperforms complex graph-embedded models.
Graph Regularization is Model-Agnostic
We augment the ProtoNet [31], LEO [28], and S2M2 R [21] approaches with graph regularization and evaluate the effectiveness of our approach on the Mini-ImageNet dataset. These few-shot learning models are fundamentally different and vary in both optimization and training procedures. For example, ProtoNet and LEO are both trained episodically, while S2M2 R is trained non-episodically. However, the flexibility of our graph regularization loss allows us to easily extend each method. Table 2 shows the results of graph enhanced few-shot baselines. The results demonstrate that graph regularization consistently improves the performance of few-shot baselines, with larger gains in the 1-shot setup.
Large-Scale Few-Shot Classification
We next evaluate our graph regularization approach on the large-scale ImageNet-FS dataset, which includes 1000 classes. Notably, this task is more challenging because it requires choosing among all novel classes, an arguably more realistic evaluation procedure. We sample K images per category, repeat the experiments 5 times, and report mean accuracy with 95% confidence intervals. Results demonstrate that our graph regularization method boosts the performance of the SGM baseline [12] by as much as 6.7%. Remarkably, augmenting SGM with graph regularization outperforms all few-shot baselines, as well as models that benefit from class semantic information and label hierarchy such as KTCH [20] and KGTN [3]. We include further experimental details in Appendix B and explore additional ablations to justify design choices in Appendix C.
Experiments on Synthetic Dataset
To analyze the benefits of graph regularization, we devise a few-shot classification problem on a synthetic dataset. We first embed a balanced binary tree of height h in d-dimensions using node2vec [10]. We set all leaf nodes as classes, and assign half as base and half as novel. For each task, we sample k support and q query examples from a Gaussian with mean centered at each class embedding and standard deviation σ. Given k support examples, the task is to predict the correct class for query examples among novel classes. In these experiments, we set d = 4, h ∈ {4, 5, 6, 7}, k ∈ {1, 2, ..., 10}, q = 50, and σ ∈ {0.1, 0.2, 0.4}. The baseline model is a linear classifier layer with cross-entropy loss, and we apply graph regularization to this baseline. We learn using SGD with learning rate 0.1 for 100 iterations.
We first visualize the learned decision boundaries on identical tasks with and without graph regularization in Figure 1. To measure the relationship between few-shot task difficulty and performance, we adopt the hardness metric proposed in [6]. Intuitively, few-shot task hardness depends on the relative location of labeled and unlabeled examples. If labeled examples are close to the unlabeled examples of the same class, then learned classifiers will result in good decision boundaries and consequently accuracy will be high. Given a support set D_s and query set D_q, the hardness Ω_φ is defined as the average log-odds of a query example being classified incorrectly:

Ω_φ = (1 / |D_q|) Σ_{(x_i, y_i) ∈ D_q} log[(1 − p(y_i|x_i)) / p(y_i|x_i)],

where p(·|x_i) is a softmax distribution over sim(x_i, p_j) = −||x_i − p_j||_2^2, the similarity scores between query examples x_i and the means p_j of the support examples from the j-th class in D_s.
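The hardness metric can be computed directly from support and query sets using nearest-prototype softmax scores as defined above; a minimal sketch:

```python
import numpy as np

def hardness(support_x, support_y, query_x, query_y, n_classes):
    """Average log-odds of a query example being classified incorrectly
    by a softmax over negative squared distances to class prototypes."""
    protos = np.stack([support_x[support_y == j].mean(axis=0)
                       for j in range(n_classes)])
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -d
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    p_true = p[np.arange(len(query_y)), query_y]
    return np.mean(np.log((1.0 - p_true) / p_true))
```

Lower (more negative) values indicate easier tasks; queries that sit between two prototypes push Ω_φ toward zero, matching the intuition that ambiguous support/query geometry makes a task hard.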
We show the average loss with shaded 95% confidence intervals across shots in Figure 2 (left), confirming our observations on real-world datasets that graph regularization improves the baseline model most for tasks with lower shots. Furthermore, using our synthetic dataset, we artificially create more difficult few-shot tasks by increasing the tree height h and increasing σ, the spread of sampled examples. We plot loss with respect to the proposed hardness metric of each task in Figure 2 (right). The results demonstrate that graph regularization achieves higher performance gains on more difficult tasks. Figure 2: Quantified results of classification loss across shots (left) and task hardness metric (right). Each point is a sampled task. Red denotes the graph regularized method and gray the method without graph regularization.
Conclusion
We have introduced a graph regularization method for incorporating label graph side-information into few-shot learning. Our approach is simple and effective, model-agnostic and boosts performance of a wide range of few-shot learners. We further showed that introduced graph regularization outperforms more complex state-of-the-art graph embedded models.
Appendix A Problem Statement and Related Work
Episodic Training A common approach is to learn a few-shot model on C_train in an episodic manner, so that training and evaluation conditions are matched [35]. Note that training on support set examples during episode evaluation is distinct from training on C_train. Many metric-based and optimization-based meta-learners use this training method, including Matching Networks [36], Prototypical Networks [31], Relation Networks [32], and MAML [7].
Non-episodic Baselines Inspired by the transfer learning paradigm of pre-training and fine-tuning, a natural non-episodic approach is to train a classifier on all examples in C_train at once. After training, the final classification layer is removed, and this neural network is used as an embedding function f that maps images x_i to feature representations f(x_i) ∈ R^d, including those from novel classes. The final classifier layer is then fine-tuned using support set examples from the novel classes. The models are a function of the parameters of a softmax layer, θ ⊂ R^d. The softmax layer is formulated as the similarity between image feature embeddings and the classifier parameters, where θ_j is the parameter vector for the j-th class and sim is the cosine similarity function.
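A sketch of such a cosine softmax layer; the temperature-style scale factor `tau` is an assumption (commonly used in cosine classifiers) rather than a detail given in the text:

```python
import numpy as np

def cosine_softmax(features, theta, tau=10.0):
    """Softmax over scaled cosine similarity between feature embeddings
    (rows of `features`) and per-class weight vectors (rows of `theta`)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = theta / np.linalg.norm(theta, axis=1, keepdims=True)
    logits = tau * f @ w.T
    z = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return z / z.sum(axis=1, keepdims=True)
```

Because both features and weights are L2-normalized, only their directions matter, which is why the cosine graph regularizer on θ composes naturally with this layer.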
A.1 Related work
Few-Shot Learning Canonical approaches to few-shot learning include memory-based [9,12,25], metric learning [27,31,32,36], and optimization-based methods [7,28]. However, recent studies have shown that simple baseline learning techniques (i.e., simply training a backbone, then fine-tuning the output layer on a few labeled examples) outperform or match performance of many meta-learning methods [4,6], prompting a closer look at the tasks [35] and contexts in which meta-learning is helpful for few-shot learning [26,34].
Few-Shot Learning with Graphs
Beyond the canonical few-shot literature, studies have explored learning GNNs over episodes as partially observed graphical models [8] and using GCNs to transfer knowledge of semantic labels and categorical relationships to unseen classes in zero-shot learning [37]. Recently, Chen et al. presented a knowledge graph transfer network (KGTN), which uses a Gated Graph Neural Network (GGNN) to propagate information from base categories to novel categories for few-shot learning [3]. Other works use domain knowledge graphs to provide task specific customization [33], and propagate prototypes [18,19]. However, these models have highly complex architectures and consist of multiple sub-modules that all seem to impact performance.
B.1 Mini-ImageNet
Dataset The Mini-ImageNet dataset is a subset of ILSVRC-2012 [5]. The classes are randomly split into 64, 16 and 20 classes for meta-training, meta-validation, and meta-testing respectively. Each class contains 600 images. We use the commonly-used split proposed in [36].
Training details We pre-train the feature extractor on C_train using the method proposed by [21]. Activations in the penultimate layer are pre-computed and saved as feature embeddings of 640 dimensions to simplify the fine-tuning process.

B.2 ImageNet-FS

Training details We follow the procedure of [12] to pre-train the ResNet-50 feature extractor and adopt the Square Gradient Magnitude loss to regularize representation learning, which we scale by 0.005. The model is trained using SGD with a batch size of 256, momentum of 0.9 and weight decay of 0.0005. The learning rate is initialized to 0.1 and divided by 10 every 30 epochs. During fine-tuning, we train for 10,000 iterations using SGD with a batch size of 256, momentum of 0.9, weight decay of 0.005, and a learning rate of 0.01.
B.3 Label Graph
WordNet ontology ImageNet comprises 82,115 synsets, which are based on the WordNet ontology. For both the Mini-ImageNet and ImageNet-FS experiments, we first choose the synsets corresponding to the output classes of each task: 100 for Mini-ImageNet and 1000 for ImageNet-FS. ImageNet provides IS-A relationships over the synsets, defining a DAG over the classes. We only consider the sub-graph consisting of the chosen classes and their ancestors. The classes are all leaves of the DAG.
Training details The hyperparameter settings used for the node2vec-based graph regularization objective are in line with the values published in [10]. For all experiments, we set p = 1, q = 1 and temperature T = 2. We set the batch size to 128 for Mini-ImageNet and 256 for ImageNet-FS. Empirically, we find that setting the regularization scale λ higher for lower shots results in better performance, and set λ = 5, 3, 1 for 1-, 2-, and 5-shot tasks respectively.
C.1.1 Model re-implementations with adaptation
For episodically-evaluated few-shot models, it is common practice to disregard base classes during evaluation. To implement graph regularization, we include both base and novel classes during test time and perform a further adaptation step per task. We show that the boost in performance is not due to these modifications. Recent works have shown that good parameter initialization is important for few-shot adaptations [26]. For example, Dhillon et al. [6] showed that initializing novel classifiers with the mean of the support set improves few-shot performance.
Here, we explore various methods of incorporating graph relations to improve parameter initialization for novel classes. We compare our proposed method with simpler methods to show that our graph regularization method boosts performance in a non-trivial manner. For each method, we keep the adaptation procedure the same, namely, the fine-tuning procedure described by Baseline++ [4].
We then vary parameter initialization using the following methods: (A) random initialization, (B) initializing novel classes with the weights of the closest training class in graph distance in the knowledge graph, (C) our method.
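For concreteness, initialization methods (A) and (B) can be sketched as follows. This is a minimal NumPy sketch with hypothetical names; the actual implementation may differ:

```python
import numpy as np

def init_novel_classifier(base_weights, graph_dist, n_novel, dim,
                          mode="nearest", rng=None):
    """Initialize the classifier rows for novel classes.

    mode="random"  -> small Gaussian init (method A).
    mode="nearest" -> copy the weight row of the base class that is
                      closest in graph distance (method B).
    base_weights: (n_base, dim); graph_dist: (n_novel, n_base).
    """
    if mode == "random":
        rng = rng or np.random.default_rng(0)
        return 0.01 * rng.standard_normal((n_novel, dim))
    nearest = graph_dist.argmin(axis=1)  # closest base class per novel class
    return base_weights[nearest].copy()
```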
C.2 ImageNet-FS Ablations
Here, we justify our model design decisions by considering alternatives. We first probe the benefits of using random walk neighborhoods by defining N (y) as only nodes that have direct edges with y ("child-parent loss"). We try separately learning label graph embeddings, and passing the information to the classifier layer via "soft target" classification loss ("Independent graph w/ soft targets"). Results show that computing the graph loss directly on the classifier parameters is important for performance. Finally, we show that the quality of the label graph affects performance by removing layers of internal nodes of the WordNet hierarchy, starting from the bottom-most nodes ("Remove last 5, 10 layers"). | 2021-02-07T15:04:52.564Z | 2021-02-14T00:00:00.000 | {
"year": 2021,
"sha1": "c0dfef0f0c390ddd175cb4e5a3f9b6c8ed8a88dc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a2dd3d755ffb661304bdf5e8eb24126816828f1c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
264887235 | pes2o/s2orc | v3-fos-license | Predicting the weld zones size in FSSW of 304L stainless steel plates by mathematical model based on RSM
The 300 series austenitic stainless steels are widely used in industry due to their special properties. The high heat of fusion welding degrades the properties of these steels and causes many problems. Therefore, friction stir spot welding, a type of solid state welding, is useful and widely used in high-tech industries. In this paper, a 3D dynamic explicit finite element model is developed to simulate the friction stir spot welding of 304L stainless steel plates. Using this model, the temperature distribution and the size of the weld zones (thickness of the weld zones) are obtained. Then, by experimental study, results for the temperature and the size of the weld zones were obtained to serve as a criterion for comparing and validating the numerical results. The microstructure and hardness of these zones are determined experimentally. Finally, a mathematical model based on the response surface methodology is proposed to predict the size of the weld zones. Good agreement is observed between the numerical results produced by the finite element simulation, the proposed model and the experimental data. The results show that the maximum temperature appears in the stir zone and decreases moving away from the weld center. Also, by increasing the rotational speed, plunging depth and dwell time of the tool, the sizes of both the stir zone and the heat affected zone increase to a peak value, after which the size of the latter zone decreases.
INTRODUCTION
AISI-304L stainless steel is the most widely used alloy among the austenitic stainless steel commercial grades (Kondapalli et al., 2014). It is used in mechanical equipment such as boilers and heat exchangers in petrochemical and power plants. 304L stainless steel (304L SS) is a low-carbon (~0.03 wt.% maximum carbon) type of the 304 series in which carbide precipitations are eliminated during welding (Kondapalli et al., 2014; Vakili Tahami et al., 2010).
Friction stir welding (FSW) was invented at The Welding Institute of the UK in 1991 and was primarily used for joining aluminum alloy plates (Mishra and Ma, 2005; Mohan and Wu, 2021). This method can be considered a promising welding method for joining different types of metals and alloys for which fusion welding is difficult (Heidarzadeh and Saeid, 2013; Verma et al., 2022; Kheder et al., 2023). It is fundamentally a solid state process without large distortion, solidification cracking, porosity, oxidation, and other defects that are outcomes of conventional fusion welding methods. Thus, FSW can produce joints with better mechanical properties, such as ultimate tensile strength, ductility, and hardness, compared with joints welded by conventional fusion welding processes (Mishra, 2008; Mohan and Wu, 2021; Sharabeyani and Daei Sorkhabi, 2022). Therefore, FSW is nowadays widely used in industry (Heidarzadeh et al., 2021; Verma et al., 2022). During FSW, a non-consumable rotating tool plunges into the plates and traverses along the weld line. The heat generated by the FSW tool and the plastic deformation cause the work-pieces to join (Salih et al., 2023). Furthermore, friction stir spot welding (FSSW), as a new spot welding process, can be used to join overlapping work-pieces and is a good replacement for resistance spot welding (Reilly et al., 2015). FSSW was initially limited to joining aluminum plates due to the difficulty in selecting appropriate tool materials capable of withstanding the high temperature during the joining of steels. However, with the development of new technology and tool materials, this process can now even be applied to weld stainless steel (Aota and Ikeuchi, 2009; Lakshminarayanan et al., 2015; Ahmed et al., 2016). Hence the application of FSW/FSSW to stainless steels is the subject of recent studies, which provide important knowledge about the properties and microstructure of welded stainless steel
parts by investigating various parameters (Reynolds et al., 2003; Kokawa et al., 2005; Park et al., 2003; Siddiquee et al., 2015; Ragab et al., 2021; Siddiquee et al., 2020). For example, Reynolds et al. (2003) have examined an initial assessment of the tensile properties, optical microstructure, and residual stress state of two friction stir welds in 304L SS. Kokawa et al. (2005) have investigated the details of the microstructural features and the relationship between the hardness profile and the microstructure in friction stir welded 304 SS plates. In other work, Park et al. (2003) have shown that FSW of 304 SS using a polycrystalline cubic boron nitride tool causes the formation of sigma phase along the advancing side of the welding tool, and they proposed a theory for the evolution of sigma phase during FSW. Ragab et al. (2021) have developed a 3D thermo-mechanical finite element model based on the Coupled Eulerian Lagrangian approach to investigate FSW of martensitic stainless steel. Also, Siddiquee et al. (2020) have focused on butt-welding of AISI 304 stainless steel by friction stir welding. In this research, the effects of the shoulder diameter, tool rotational speed and traverse speed parameters have been investigated using Taguchi's L27 orthogonal array.
Due to the generated heat and severe plastic deformation during FSW, the method can be considered a thermo-mechanical process. Therefore, welding parameters such as tool rotational speed, travel speed, plunge depth and dwell time control the final microstructure and mechanical properties of the welded joints. Vinayak et al. (2014) have investigated FSW and FSSW using ABAQUS/Explicit based on the finite element method (FEM). They have also studied the effect of parameters and tool geometry on the temperature and morphology of the weld region. They have shown that the shape of the tool has an effect on the generated heat: square and triangular pin profiles of the tool increase the process temperature compared to circular pin profiles. Also, the solution time is reduced to a greater extent by using the Coupled Eulerian-Lagrangian method when compared to Lagrangian and Eulerian ones. Hirasawa et al. (2010) used a practical method to assess the effect of tool geometry on the plastic flow and material mixing during FSSW. They have shown that for high strength spot welds, a triangular pin with a concave shoulder is the preferred tool geometry. Ravi Sekhar et al. (2018) studied the effect of the tool's rotational speed on the FSSW of AA5052-H38 aluminum alloy. They reached a maximum shear load of 4.215 kN. Bang et al. (2018) have investigated the mechanical properties of dissimilar A356/SAPH440 lap joints made with FSSW and self-piercing riveting. They found maximum shear load values of approximately 3.5 kN in the FSSW joints and 7.9 kN in the self-piercing riveting joints. Avinash et al. (2014) investigated the feasibility of FSW of AA7075 T6 and AA2024 T3 dissimilar aluminum alloys. They also studied the mechanical properties of the weldment. The effects of the tool rotational speed and the welding speed on the joint performance were analysed in this paper. The FSW of EH46 steel was investigated by Al-Moussawi et al.
(2017). They studied the impact of the welding parameters, including the tool rotational speed and plunge depth in the dwell stage, on the weld zone microstructure. They found that a small increase in the plunge depth causes a significant change in the microstructure, and also that increasing the tool rotational speed leads to a significant difference in the microstructure.
The literature review reveals that there is a need to develop relationships relating the mechanical properties of FSSW joints to the welding parameters. Such relationships would be an ideal tool in industry to predict the properties of a joint beforehand. Relationships of this type are developed in this paper, relating the rotational speed, plunge depth and dwell time to the thickness of the weld zones, which in turn affects the mechanical properties of the joint. Considering the extensive use of austenitic 304L SS in different industries, the effect of the welding parameters on the thickness of the weld zones, here called the size of the weld zones, and on the temperature distribution in FSSW of 304L SS plates is studied in this paper. The FSSW process is modeled based on the FEM numerical solution, and a three-dimensional dynamic thermo-mechanical model has been used to predict the temperature history in the welded plates and the size of the weld zones. Then, a set of experimental data is obtained for comparing and validating the numerical results. The FSSW tests have been conducted based on the design of experiments (DOE), and then, in order to optimize and study the influence of the tool rotational speed, plunging depth and dwell time on the temperature distribution and the size of the weld zones, the Response Surface Methodology (RSM) has been applied to propose a mathematical model.
Material specification
The chemical composition of 304L SS is given in Table 1 and its temperature-dependent physical and mechanical properties are shown in Table 2.
Response Surface Methodology (RSM)
The main aim of DOE is to identify the points/conditions where the experiments should be carried out or evaluated. After collecting a large set of experimental data, several techniques can be used to fit these data; and then, they may also be used in the numerical solutions. The RSM is a mathematical and statistical technique for DOE. An important feature of the RSM is the design of experiments. The aim of this technique is to optimize test results that are affected by several independent variables (input variables). In fact, RSM assists in the selection of the appropriate experimental design and the definition of the tests. The application of the RSM also covers the mathematic-statistical treatment of the test data by fitting a polynomial function to them. In this line, Box et al. developed the RSM in the early 1950s (Khuri and Mukhopadhyay, 2010). This method uses the fit of predefined models to the experimental results. For this purpose, linear or square polynomial relationships are implemented to explain the system studied and, subsequently, to identify (modeling and displacing) experimental parameters to obtain an optimum condition (Khuri and Mukhopadhyay, 2010). Usually a second-order model together with a two-level factorial design is used, but this method may fail if extra effects, such as second-order effects, are significant. In this case, a central point in two-level factorial designs is implemented to evaluate the relationship. The next step is to obtain the additional terms that explain the interaction between the different test parameters. Using the above techniques, the following polynomial, known as the Box-Behnken model (Khuri and Mukhopadhyay, 2010), can be suggested:

y = β₀ + Σᵢ₌₁ᵏ βᵢxᵢ + Σᵢ₌₁ᵏ βᵢᵢxᵢ² + Σᵢ₍ⱼ βᵢⱼxᵢxⱼ + ε (1)

where k is the number of variables, β₀ is the constant term, βᵢ, βᵢᵢ and βᵢⱼ are the polynomial coefficients, xᵢ refers to the variables, the cross terms xᵢxⱼ introduce the interaction between these variables and ε is the residual associated with the error in the test results.
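A second-order response surface of this form can be fitted by ordinary least squares. The sketch below (NumPy, illustrative only, with hypothetical function names) builds a design matrix with intercept, linear, pure-quadratic and pairwise-interaction columns:

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Columns: intercept, linear, pure quadratic, and pairwise
    interaction terms of the k input variables (second-order RSM)."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

def fit_rsm(X, y):
    """Least-squares fit of the second-order response surface."""
    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict_rsm(beta, X):
    """Evaluate the fitted polynomial at new design points."""
    return quadratic_design_matrix(X) @ beta
```

With k = 3 variables (rotational speed, plunging depth, dwell time) the model has 10 coefficients, so the 15 Box-Behnken runs used in the paper are sufficient to fit it.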
Numerical method
To capture the thermo-mechanical response under the given system and process parameters, coupled temperature and displacement numerical formulations are used along with the heat generation factor. The frictional heat generation formulation involves the calculation of the heat flux at the interface elements located on the contacting parts (tool and work-piece). The heat flux is applied as a thermal load to the volume of elements on each part. The thermal distributions to each surface are given in Equations (2) and (3):

q_tool = f_tool (q_g − q_r − q_k) (2)

q_WP = f_WP (q_g − q_r − q_k) (3)

where q_tool and q_WP are the thermal loads, i.e. the heat flux to the tool and the work-piece respectively, q_g is the total heat generated by the interface element due to friction, q_r is the heat flux due to radiation, q_k is the heat flux due to conduction, and f_tool and f_WP stand for the fractions of the total generated heat flux (q_g) going to the tool and the work-piece (f_tool + f_WP = 1).
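In code, the partition of the interface heat might look like the following sketch. The exact bookkeeping of the radiative and conductive losses is an assumption, since only the variable definitions survive in the text:

```python
def interface_heat_fluxes(q_g, q_r, q_k, f_tool):
    """Partition the net interface heat between tool and work-piece
    (sketch of the roles of the quantities in Eqs. 2-3; f_tool + f_wp = 1).
    Losses by radiation (q_r) and conduction (q_k) are assumed to be
    subtracted from the generated frictional flux q_g."""
    q_net = q_g - q_r - q_k
    q_tool = f_tool * q_net
    q_wp = (1.0 - f_tool) * q_net
    return q_tool, q_wp
```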
In order to incorporate the hardening effect due to plastic deformation (as a function of the applied temperature), the Johnson-Cook material constitutive model is employed. This model is given mathematically in Eq. (4), and is extensively used for extrusion, forging and impact analyses (Al-Moussawi et al., 2017; Zhu and Chao, 2004; Khuri and Mukhopadhyay, 2010; Johnson and Cook, 1983):

σ = (A + B ε̄ᵖⁿ)(1 + C ln(ε̄̇ᵖ/ε̇₀))(1 − θ*ᵐ) (4)

where σ is the effective yield strength, ε̄ᵖ is the equivalent plastic strain, ε̄̇ᵖ is the equivalent plastic strain rate, ε̇₀ is a coefficient to normalize the strain rate, A is the yield stress constant, B is the strain hardening constant, n is the strain hardening exponent, C is the strain rate hardening constant and m is the temperature dependence coefficient. Also, θ* represents the non-dimensional temperature, which is given as below (Al-Moussawi et al., 2017; Zhu and Chao, 2004; Khuri and Mukhopadhyay, 2010; Johnson and Cook, 1983):

θ* = (θ − θ_ref)/(θ_melt − θ_ref) (5)

where θ, θ_ref and θ_melt are the current, reference and melting temperatures. By employing the arbitrary Lagrangian-Eulerian (ALE) formulation, large deformations may be simulated by re-meshing the FE model in ABAQUS with the 8-node "C3D8RT" element (Abaqus Version 6.14, 2014). The main point is to control the distortion of the elements by improving the aspect ratio of the distorted elements. This is done by the inter-step conversion of the results at the integration points and re-adjustment of the nodal positions as close as possible to the original coordinates. As compared to pure Eulerian formulations, the advantage of using the ALE formulation is that in the ALE approach the "free" surfaces have Lagrangian properties in the normal direction, in a way that surface tracking and partially filled elements are avoided. Hence, the position of the surface of the domain is found directly by solving the governing equations and iterations are not required (Al-Moussawi et al., 2017). The FSSW geometry consists of three components: a rigid tool and two 304L SS plates. The dimensions of the plates and the tool
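A direct transcription of the Johnson-Cook flow stress with homologous temperature is sketched below. The material constants used in the example are placeholders, not the 304L values tabulated in the paper:

```python
import math

def johnson_cook_stress(eps_p, eps_rate, T, A, B, n, C, m,
                        eps0=1.0, T_ref=25.0, T_melt=1450.0):
    """Johnson-Cook effective yield strength:
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_rate/eps0)) * (1 - T_star**m),
    with homologous temperature T_star = (T - T_ref)/(T_melt - T_ref).
    Temperatures in deg C; T_melt default is a rough 304L placeholder."""
    T_star = (T - T_ref) / (T_melt - T_ref)
    return ((A + B * eps_p ** n)
            * (1.0 + C * math.log(eps_rate / eps0))
            * (1.0 - T_star ** m))
```

At the reference strain rate and reference temperature the expression reduces to the static hardening curve A + B·εⁿ, which is a convenient sanity check.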
location are given in Fig. 1. The tool, made of tungsten carbide, and its dimensions are shown in Fig. 3a and Fig. 3b, respectively. According to Fig. 2, the boundary conditions are applied in the model as follows: the upper and lower edges of the work-pieces are restrained and fixed in the horizontal direction, while the welding tool has translational and rotational movement. A convection coefficient of 30 W/m²·°C at 25 °C is applied to all surfaces that are exposed to the surrounding air (Awang, 2007; Jiji, 2006).
Experimental method
Two similar 304L SS plates are joined by the FSSW process; the temperature levels at selected points around the weld zone are measured during the welding process, and then the microstructure, hardness and thickness of the weld zones are determined. For the welding process, a universal milling machine, shown in Fig. 4a, is used, and Fig. 4b shows the plates fixed by a set of fixtures during welding. Figs. 4a and 4b show that the welding tool is installed on the spindle and rotates around the axis perpendicular to the surface of the plate. The welded plates are shown in Fig. 4c. An axial force exerted by the milling machine head is applied to the material, and welding is made possible by the extreme plastic deformation in the solid phase, which includes recrystallization of the base material (BM); eventually, a strong metallurgical connection is created. In this way, the stir zone (SZ), thermo-mechanically affected zone (TMAZ) and heat affected zone (HAZ) are formed around the weld spot. In order to study the microstructure of the welded specimens, macrographs are taken from the mid-plane perpendicular cross section of the weld zone according to ASTM E3-01 (2001). A marble solution with 50 ml hydrochloric acid, 50 ml distilled water and 10 g copper sulfate is used to etch the specimen surface, and the macro-graphic images of the specimens are taken using an optical microscope. Then, the surface hardness of the specimens at different points of the weld cross section is determined based on ASTM E384 (2017) using the Micro Vickers method.
The temperatures of the selected points have been measured by K-type thermocouples during the FSSW process. These thermocouples are linked to a PC equipped with a data acquisition system through digital thermometers (TM-747D) or analogue-to-digital converters, recording the temperature change/history during the process.
Design of experiment conditions by response surface method
The rotational speed of the tool, the plunging depth and the dwell time are selected as the process variables in this research, as they are the major parameters affecting the nugget of the FSSW and the structure of the weld zones. In order to study the effect of these parameters on the size of the weld zones, the tests are conducted based on the DOE method, and the RSM has been used to propose a model to predict the zone sizes. The mentioned parameters are considered as input data and three levels of them (given in Table 4) are selected.
Hardness
In the present research, the Vickers hardness at different points on the plate surface along a radial line starting from the outer surface of the weld nugget has been investigated. Considering that the trend of the hardness-distance diagram is almost the same for all case studies, this trend is shown in Fig. 6 for the case study with tool rotational speed 750 rpm, dwell time 4 s, and plunging depth 0.1 mm. The vertical axis of this figure is located at the outer surface of the weld nugget. As shown in this figure, the SZ hardness is maximum (250 Vickers) and it decreases with distance from this area. In other words, the SZ has the highest hardness. The reason for this is its fine-grained microstructure, which can be clearly seen in this figure. It can also be caused by the plastic deformation resulting from the pressure of the tool pin and the work hardening phenomenon. On entering the TMAZ, the hardness is reduced due to the lower amount of plastic deformation or stirring; in other words, the reduced mixing diminishes the grain-refinement effect of welding in this zone. In the HAZ, the decreasing trend of the hardness continues until it reaches the base metal hardness, which is equal to 192 Vickers. The SZ hardness numbers for three case studies are presented in Table 5.
These levels are selected based on previous experience producing appropriate joints and on the values reported for similar materials (Lakshminarayanan et al., 2015; Ahmed et al., 2016). This set of input data is the foundation of the numerical and experimental analyses in this research.
Microstructures
The cross section images of the FSSW of two 304L SS plates at magnification factor 50 (Mag. = 50x) are shown in Fig. 5a. In this case, the tool rotational speed is 750 rpm, the plunging depth is 0.1 mm and the dwell time is 4 s. For better resolution, macrographs at Mag. = 200x are shown in Fig. 5b. Considering the structural characteristics, three weld zones, the stir zone (SZ), the thermo-mechanically affected zone (TMAZ) and the heat affected zone (HAZ), can be seen in this figure. The procedure is repeated for all other case studies with the different welding parameters listed in Table 4.
The temperature level histories of the different weld zones obtained from the FE simulation are shown in Fig. 8. In the FEM simulation for this case study, the rotational speed of the tool, the plunging depth and the dwell time are 750 rpm, 0.1 mm and 4 s, respectively. The difference in temperature level also affects the microstructure of the zones. This figure also shows that the maximum temperature difference between the various regions is approximately 200 °C. For validating the FEM results, the temperature levels at point "D" (see Fig. 7) have been measured. Fig. 9 shows the temperature history from both the FE solution and the experimental measurements for the selected point ("D"). In addition, the experimentally measured temperature levels are compared with those obtained using the FE solutions in Table 6. As can be seen, there is a good agreement between the experimental data and the numerical results, with a maximum deviation of 40 °C (3.39%).
Developing the mathematical model
As mentioned before, to study the impact of the rotational speed (R), plunging depth (P) and dwell time (D) on the size of the weld zones, the DOE conditions are determined by the RSM. Based on this method, fifteen tests are defined and conducted, and their conditions are presented in Table 7.
Since carrying out these tests is expensive, the numerical model validated in Section 3.3 is used to obtain the temperature distribution and the weld zone size for each case, and these results are employed to develop a mathematical model by the RSM. The size of each zone is approximated based on its temperature level: the region between 1200 °C and 1000 °C is taken as SZ+TMAZ, and that between 1000 °C and 850 °C as HAZ. Based on these data, second-order polynomials have been developed using the RSM to predict the weld zone sizes for FSSW of 304L SS plates according to the selected operating conditions, given as Equations (6) and (7).
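The temperature-threshold rule used to delimit the zones can be expressed as a tiny classifier. This is a sketch; the treatment of the boundary at exactly 1000 °C is an assumption:

```python
def classify_zone(peak_temp_c):
    """Map a point's peak temperature (deg C) to a weld zone using the
    thresholds from the text: SZ+TMAZ between 1000 and 1200 deg C,
    HAZ between 850 and 1000 deg C, otherwise base metal (BM)."""
    if 1000.0 <= peak_temp_c <= 1200.0:
        return "SZ+TMAZ"
    if 850.0 <= peak_temp_c < 1000.0:
        return "HAZ"
    return "BM"
```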
Validation of the mathematical model
In Fig. 10 and Fig. 11 the results obtained from the finite element model and the mathematical model for the sizes of the HAZ and SZ+TMAZ are compared. According to the R² values (>95%), there is an acceptable agreement between the results.
The experimental data are used to validate the results of the mathematical model obtained by the RSM. For this purpose, three different cases are randomly selected within the range of the experiments, and the results of the mathematical model are compared with those obtained by the measurements. The comparisons for the SZ+TMAZ and HAZ sizes are presented in Table 8 and Table 9, respectively. As can be seen from the error values given in these tables, the maximum error is 0.1 mm (9%), which indicates a good agreement between the results.
Effect of the welding parameters on the size of the weld zones
Figure 12 shows the direct impact of the welding parameters (rotational speed, dwell time, and plunging depth) on the sizes of the SZ+TMAZ and HAZ. As can be seen in Fig. 12a, the size of the SZ+TMAZ increases with increasing rotational speed, plunging depth and dwell time. By increasing these parameters, the heat input to the work-piece increases and the stirring effect around the pin intensifies, leading to a larger SZ+TMAZ. It is clear that the impact of the rotational speed on the size of the SZ+TMAZ at very low speeds (from 500 rpm to 550 rpm) is insignificant. This figure also shows that the increase of the SZ+TMAZ size with increasing plunging depth is smaller than the increase produced by the other parameters. In Fig. 12b it is evident that as the rotational speed increases, the size of the HAZ increases to a maximum value and then decreases. With increasing rotational speed, the temperature rises and causes the HAZ to enlarge, but with further increase of this parameter, the HAZ size decreases because the amount of stirring increases and the SZ+TMAZ moves outward and replaces the HAZ.
The impact of the dwell time on the HAZ size is similar to that of the rotational speed: initially, the size of the HAZ rises to a maximum and then is reduced due to the enlargement of the SZ+TMAZ. The trend of change of the HAZ size with increasing plunging depth is similar to that of the other parameters.
In addition to the direct effect of the parameters on the size of the different zones, the interference effect of the parameters is also very important. Figure 13 shows the interference effect of the parameters on the SZ+TMAZ size. According to Fig. 13a, at low rotational speeds, the size of the SZ+TMAZ rises with increasing plunging depth. The rate of increase of the SZ+TMAZ size is reduced by raising the rotational speed; however, on reaching a speed of 750 rpm, the rate of increase of this zone's size rises again. In fact, at speeds greater than 750 rpm, the effect of increasing the plunging depth is greater and causes a faster growth of this zone. Fig. 13(b-c) show the interaction between the dwell time and the rotational speed. As can be seen, by increasing these two parameters at the same time, the size of the SZ+TMAZ increases. This rate is low at speeds lower than 750 rpm and high at speeds greater than 850 rpm; in fact, the simultaneous effect of these two parameters is much greater at higher values. Since the rotational speed and the dwell time have the same direct effect on the size of the SZ+TMAZ, the interference effect of the dwell time and the plunging depth is similar to the interference effect of the rotational speed and the plunging depth.
Figure 14a shows the interference effect of the plunging depth and the rotational speed on the HAZ size. The maximum value for this zone occurs when the rotational speed is between 750 and 1000 rpm and the plunging depth is between 0.06 and 0.14 mm. At values lower than these ranges, the HAZ size is reduced due to the lower heat input; at values higher than these ranges, the HAZ size is reduced because of the SZ expansion. Figures 14(b-c) show that the highest value of the HAZ size occurs when the rotational speed is between 750 and 1000 rpm and the dwell time is between 3.5 and 7 s, due to the interference effect between the rotational speed and the dwell time. Because the rotational speed and the dwell time have the same direct effect, the interference effect of the dwell time and the plunging depth on the size of the HAZ is similar to that of the rotational speed and the plunging depth.
CONCLUSIONS
In this paper the effects of the rotational speed, the plunging depth and the dwell time on the size of the weld zones in FSSW of 304L SS plates are studied. The impact of these parameters is assessed via DOE by the RSM. Using the analysis of variance, the direct, second-degree and interaction effects of the parameters on the output of the process are considered. The test conditions are defined by the RSM and, to reduce the number of tests and the consequent expenses, FE-based numerical solutions are employed after validation. The results are summarized as follows:
- Based on the RSM, two relationships are developed that can be used to predict the weld zone sizes as a function of the welding parameters.
- It has been shown that the size of the SZ increases by increasing the magnitude of the welding parameters.
- When the plunging depth is constant and equal to 0.1 mm, the rotational speed is in the range of 950 to 1000 rpm and the dwell time is in the range of 6 to 7 s, the largest HAZ size is obtained.
- When the dwell time is constant and equal to 4 s, the rotational speed is in the range of 950 to 1000 rpm and the plunging depth is in the range of 0.1 to 0.2 mm, the largest SZ+TMAZ size is achieved.
- The largest HAZ size occurs in the ranges of 750 to 1000 rpm and 0.06 to 0.14 mm for the rotational speed and the plunging depth, respectively.
Figure 2. Schematic view of a part of the model and the boundary conditions.
Figure 3. a) Photograph of the welding tool b) Dimensions of the welding tool (mm).
Figure 4. a) Welding equipment b) Plates fixed while being welded c) Specimen of welded plates.
Figure 6. Vickers hardness at different points on the plate surface along a radial line. The vertical axis is located at the outer surface of the weld nugget.
Figure 7.
Figure 8. Temperature level history for the different zones of the FSS welded piece for case study 2.
Figure 9. Temperature level history obtained by the experiment and the FEM for point D (see Fig. 7) in case study 2.
Figure 10. Comparison of the results obtained from the finite element model and the mathematical model for the HAZ size.
Figure 11. Comparison of the results obtained from the finite element model and the mathematical model for the SZ+TMAZ size.
Figure 12. The impact of the parameters rotational speed, dwell time and plunging depth on the size of the weld zones for a) SZ+TMAZ b) HAZ.
Figure 13. Interference effect of the parameters on the SZ+TMAZ size: a) plunging depth and rotational speed, b) dwell time and rotational speed, and c) plunging depth and dwell time.
Figure 14. Interference effect of the parameters on the HAZ size: a) plunging depth and rotational speed, b) dwell time and rotational speed, and c) plunging depth and dwell time.
Table 1. The chemical composition of 304L SS.
Table 4. Welding parameters and their levels.
Table 6. FEM results and experimentally measured data for the maximum temperature of the weld zones.
Table 7. Input and output parameters in the DOE.
Table 9. Experimental and mathematical model results for the thickness of the HAZ.
Table 8. Experimental and mathematical model results for the thickness of the SZ+TMAZ. | 2023-11-02T15:16:27.289Z | 2023-10-31T00:00:00.000 | {
Systematics of the ant genus Proceratium Roger (Hymenoptera, Formicidae, Proceratiinae) in China – with descriptions of three new species based on micro-CT enhanced next-generation-morphology
Abstract. The genus Proceratium Roger, 1863 contains cryptic, subterranean ants that are seldom sampled and rare in natural history collections. Furthermore, most Proceratium specimens are extremely hairy and, due to their enlarged and curved gaster, often mounted suboptimally. As a consequence, the poorly observable physical characteristics of the material and its scarcity make the alpha taxonomy of this group rather challenging. In this study, the taxonomy of the Chinese Proceratium fauna is reviewed and updated by combining examinations under traditional light microscopy with x-ray microtomography (micro-CT). Based on micro-CT scans of seven out of eight species, virtual 3D surface models were generated that permit in-depth comparative analyses of specimen morphology and thus overcome the difficulties of examining physical Proceratium material. Eight Chinese species are recognized, of which three are newly described: Proceratium bruelheidei Staab, Xu & Hita Garcia, sp. n. and P. kepingmai sp. n. belong to the P. itoi clade and have been collected in the subtropical forests of southeast China, whereas P. shohei sp. n. belongs to the P. stictum clade and is only known from a tropical forest of Yunnan Province. Proceratium nujiangense Xu, 2006 syn. n. is proposed as a junior synonym of P. zhaoi Xu, 2000. These taxonomic acts raise the number of known Chinese Proceratium species to eight. In order to integrate the new species into the existing taxonomic system and to facilitate identifications, an illustrated key to the worker caste of all Chinese species is provided, supplemented by species accounts with high-resolution montage images and still images of volume renderings of 3D models based on micro-CT. Moreover, cybertype datasets are provided for the new species, as well as digital datasets for the remaining species, that include the raw micro-CT scan data, 3D surface models, 3D rotation videos, and all light photography and micro-CT still images.
These datasets are available online (Dryad, Staab et al. 2018, http://dx.doi.org/10.5061/dryad.h6j0g4p).
Introduction
Recent phylogenetic studies have clarified the evolutionary history of ant subfamilies and genera. One higher-level taxon consistently recovered is the subfamily Proceratiinae, which belongs to the poneroid clade (e.g. Brady et al. 2006, Moreau et al. 2006, Blanchard and Moreau 2017, Borowiec et al. 2017). This subfamily currently contains three valid extant genera with eight fossil and 144 valid extant species, and one fossil genus with four species (Bolton 2018). Proceratium Roger, 1863 is the largest genus in the subfamily with 83 extant and six fossil species. However, based on recent molecular phylogenetic results, the monophyly of the genus appears doubtful (Borowiec et al. 2017). While globally distributed, with the majority of species occurring in warm and sufficiently wet climates, the geographic record is very patchy (Baroni Urbani and de Andrade 2003, Hita Garcia et al. 2014). Specimens are only rarely collected, usually in leaf litter or soil samples. Colonies typically occur in low densities (but see Masuko 2010) and are small, usually having fewer than 100 workers (but see Onoyama and Yoshimura 2002, Fisher 2005). Proceratium have a cryptobiotic lifestyle with hypogeic foraging habits, nesting in leaf litter, rotting wood, top soil, or below stones (Baroni Urbani and de Andrade 2003). As far as is known, they are specialized predators of the eggs of spiders and other arthropods, which can be stored in large quantities in the nest (e.g. Brown 1957, Brown 1979, Fisher 2005). Notably, some Japanese Proceratium species also display larval haemolymph feeding, a behavior otherwise only known from the 'dracula ant' subfamily Amblyoponinae (Masuko 1986). However, whether this is a typical feature of the whole genus or restricted to a few congeners remains unknown and requires more in-depth natural history data than currently available.
The genus has been comprehensively revised on a global scale by Baroni Urbani and de Andrade (2003). The authors also refined the internal species clades originally erected by Brown (1958) and grouped the genus into eight internal clades that reflect the relationships of a morphology-based phylogeny (Baroni Urbani and de Andrade 2003). Nevertheless, the account of the genus is far from complete, as can be seen from the few single species descriptions (Fisher 2005, Liu et al. 2015a) and regional revisions published since then (Xu 2006, Hita Garcia et al. 2014). Considering the cryptic lifestyle and extreme rarity in collections, it is very likely that many more species await discovery and formal taxonomic treatment. In China, seven Proceratium species from three clades (P. itoi clade, P. silaceum clade, P. stictum clade) have been recorded so far (Xu 2000, Xu 2006, Liu et al. 2015b), although the geographic coverage within the country is poor (Guénard and Dunn 2012). The genus is only known from the provinces of Yunnan (six species), Hunan (two species), Zhejiang (one species), and the island of Taiwan (two species). There are no records from the other provinces in south and southeast China where Proceratium populations almost inevitably occur.
In the last decade, X-ray microtomography (micro-CT) technology has gained popularity among systematicists and is being increasingly employed in arthropod taxonomy. Micro-CT is a state-of-the-art imaging technology that facilitates the generation of high-resolution, virtual, and interactive three-dimensional (3D) reconstructions of whole specimens or of particular body parts (Hörnschemeyer et al. 2002). The virtual nature of such reconstructions enables non-destructive and comprehensive 3D analyses of anatomy and morphology (Friedrich et al. 2014). Another crucial benefit of micro-CT is its application for virtual dissections and identification of new diagnostic characters (Deans et al. 2012), an approach that has been successfully applied to lepidopterans (Simonsen and Kitching 2014), mayflies (Sartori et al. 2016), and recently ants (Hita Garcia et al. 2017a).
Despite its common usage in invertebrate paleontology, as well as in functional and comparative morphology (e.g. Beutel et al. 2008, Berry and Ibbotson 2010, Barden and Grimaldi 2012), until very recently micro-CT was not applied to alpha taxonomy. In recent years, this situation has been changing and micro-CT has become a powerful tool to visually enhance and support diagnostic species delimitations, from single species descriptions to revisions. While initially used for polychaetes, myriapods (Stoev et al. 2013, Akkari et al. 2015), spiders (Michalik and Ramírez 2013), earthworms (Fernández et al. 2014), and flatworms (Carbayo and Lenihan 2016, Carbayo et al. 2016), micro-CT has evolved into a cutting-edge tool increasingly applied in ant taxonomy (Csősz 2012, Agavekar et al. 2017, Hita Garcia et al. 2017a, 2017b). A detailed and critical assessment of the technology and its applications for ant taxonomy was provided by Hita Garcia et al. (2017b). Another key advantage of applying micro-CT in invertebrate taxonomy is the use of openly available cybertype datasets linked to the original, physical type material (e.g. Akkari et al. 2015, Hita Garcia et al. 2017b).
In this study, we provide a review of the genus Proceratium in China, in which we describe three new species: P. bruelheidei sp. n. and P. kepingmai sp. n. from the P. itoi clade from subtropical southeast China and P. shohei sp. n. from the P. stictum clade from the tropical south of Yunnan Province. The newly available insights from this study suggest that P. nujiangense Xu, 2006 is conspecific with P. zhaoi Xu, 2000. Thus, we treat P. nujiangense syn. n. as a junior synonym of P. zhaoi. To distinguish the new species from morphologically similar species, particularly in the P. itoi clade, and to ease future identifications, we provide an illustrated key to the Chinese fauna. We also give species accounts for all other valid species and add a locality record for Proceratium longigaster Karavaiev, 1935. As in previous studies (e.g. Agavekar et al. 2017, Hita Garcia et al. 2017a, 2017b), we continue using and exploring microtomography for ant taxonomy. In order to visually enhance the taxonomic descriptions, we provide still images and 3D videos based on surface volume renderings of micro-CT scans from all Chinese species (except for P. longmenense Xu, 2006). Since the treated species are rather hairy, often dirty, and too scarce for any physical specimen manipulations, we also use the 3D reconstructions for virtual in-depth examinations of surface morphology. Furthermore, the complete micro-CT datasets containing the scan raw data, 3D rotation videos, still images of 3D models, and 3D surfaces supplemented by color montage photos are made freely available online (Staab et al. 2018, http://dx.doi.org/10.5061/dryad.h6j0g4p) as cybertypes.
Abbreviations of depositories
The collection abbreviations follow Evenhuis (2018). The material upon which this study is based is located or will be deposited at the following institutions:
Specimens and imaging
The material of the new species was collected during recent ecological field work activities of the first author (see e.g. Staab et al. 2014, Staab et al. 2017) and others (see Liu et al. 2016). All available worker specimens were mounted and measured with a Leica M125 stereo microscope under magnification of 80-100×. To compose montage images, we took raw image stacks with a Leica M205C microscope equipped with a Leica DFC450 camera and then assembled montage images with Helicon Focus (version 6) software. Additional material of previously described Proceratium species known to occur in China and of Asian species from the three Proceratium species clades containing Chinese species (P. itoi clade, P. silaceum clade, P. stictum clade) was also examined (see species and specimen data in Suppl. material 1: Table S1 for non-Chinese species). The other distributional data used for map generation was extracted from Antmaps.org (Janicki et al. 2016).
Measurements and indices
The following measurements (all expressed in mm) and indices are based on Hita Garcia et al. (2014, 2015):
EL Eye length: maximum length of eye measured in oblique lateral view.
HL Head length: maximum measurable distance from the mid-point of the anterior clypeal margin to the mid-point of the posterior margin of head, measured in full-face view. Impressions on anterior clypeal margin and posterior head margin reduce head length.
HLM Head length with mandibles: maximum head length in full-face view including closed mandibles.
HW Head width: maximum head width directly above the eyes, measured in full-face view.
MFeL Metafemur length: maximum length of metafemur measured along its external face.
MTiL Metatibia length: maximum length of metatibia measured along its external face.
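The measurements above are combined into ratio indices that appear throughout the species accounts (CI, SI, OI, ASI, etc.). As a minimal sketch of the arithmetic: each index is a ratio of two measurements multiplied by 100, so, for example, CI relates head width to head length (CI below 100 means the head is longer than broad), and ASI relates the lengths of abdominal terga IV and III (a tergum IV 1.5 times as long as tergum III gives ASI 150, matching the ranges cited in the descriptions). The exact measurement pairs follow Hita Garcia et al. (2014, 2015); the function and the specimen values below are illustrative assumptions, not published code or data:

```python
def ratio_index(numerator_mm: float, denominator_mm: float) -> int:
    """Generic morphometric index: (numerator / denominator) * 100, rounded."""
    return round(numerator_mm / denominator_mm * 100)

# Cephalic index (CI): head width relative to head length.
# Hypothetical specimen with HW 0.92 mm and HL 1.00 mm:
ci = ratio_index(0.92, 1.00)   # 92, i.e. head slightly longer than broad

# Abdominal segment index (ASI): length of abdominal tergum IV relative
# to tergum III; a tergum IV 1.5x as long as tergum III gives ASI 150.
asi = ratio_index(0.60, 0.40)  # 150

print(ci, asi)
```

Reporting indices as rounded whole numbers mirrors the convention used in the diagnoses (e.g. "CI 92-93", "ASI 145-159").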
X-ray micro computed tomography and 3D images
We scanned all Chinese Proceratium species except for P. longmenense, from which no material was available for micro-CT analysis. For each of the new species, we scanned the holotype worker specimen, whereas for the remainder of the species we either scanned a paratype or a non-type specimen if no type material was available. An overview of scanning parameters and specimens used is provided in Table 1. All micro-CT scans were performed using a Zeiss Xradia 510 Versa 3D X-ray microscope operated with the Zeiss Scout-and-Scan Control System software (version 11.1.6411.17883). 3D reconstructions of the resulting scan raw data were done with the Zeiss Scout-and-Scan Control System Reconstructor (version 11.1.6411.17883) and saved in DICOM file format. Volume renderings, surface mesh generations, virtual examinations, and dissections were performed with Amira software (version 6.3.0). Post-processing of mesh data in order to generate clean surfaces was done with Meshlab (version 1.3.3). The methodology for the virtual examinations of 3D surface models, generation of 3D rotation videos, and virtual dissections follows Hita Garcia et al. (2017a). For more details on the micro-CT scanning and post-processing workflow pipeline, we refer to the exhaustive descriptions in Hita Garcia et al. (2017a, 2017b).
Data availability
All specimens used in this study have been databased and the data are freely accessible on AntWeb (http://www.antweb.org). Each specimen can be traced by a unique specimen identifier attached to its pin. The Cybertype datasets provided in this study consist of the full micro-CT original volumetric datasets (in DICOM format), 3D surface models (in STL and PLY formats), 3D rotation video files (in .mp4 format, see Suppl. material), all light photography montage images, and all image plates including all important images of 3D models for each species. All data have been archived and are freely available from the Dryad Digital Repository (Staab et al. 2018, http://dx.doi.org/10.5061/dryad.h6j0g4p). In addition to the cybertype data at Dryad, we also provide freely accessible 3D surface models of all treated species on Sketchfab (https://sketchfab.com/arilab/collections/proceratium).
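Because the 3D surface models are distributed in open formats (PLY and STL), basic sanity checks on a downloaded file require no specialized software. As a minimal sketch, the snippet below parses the element declarations of an ASCII PLY header to report vertex and face counts; the header layout follows the standard PLY specification, and the example header values are hypothetical, not taken from the actual Dryad files:

```python
def ply_counts(header_lines):
    """Parse 'element vertex N' / 'element face N' lines from a PLY header."""
    counts = {}
    for line in header_lines:
        parts = line.split()
        if len(parts) == 3 and parts[0] == "element":
            counts[parts[1]] = int(parts[2])
        if line.strip() == "end_header":
            break
    return counts

# Hypothetical header fragment of a downloaded surface model:
header = [
    "ply",
    "format ascii 1.0",
    "element vertex 120000",
    "property float x",
    "element face 240000",
    "end_header",
]
print(ply_counts(header))  # {'vertex': 120000, 'face': 240000}
```

A mismatch between the counts declared in the header and the data actually present is a quick indicator of a truncated download.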
Identification key to Chinese Proceratium species (workers)
This key is partly derived from Baroni Urbani and de Andrade (2003) and Xu (2006).
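The couplet logic of such a key can also be expressed in code. The sketch below encodes only a few of the separating characters for the P. itoi clade, taken from the taxonomic notes in the species accounts that follow (presence of erect hairs on the body dorsum and on the scapes, shape of the posterodorsal propodeal corners); it is a didactic simplification, not a substitute for the full illustrated key:

```python
def itoi_clade_key(erect_hairs_on_dorsum: bool,
                   erect_hairs_on_scape: bool,
                   propodeal_corner: str) -> str:
    """Simplified couplets for the P. itoi clade (characters from the text).

    propodeal_corner: 'rounded' or 'angular' (posterodorsal corners of the
    propodeum). Returns a species or species pair; pairs encoded here are
    separated further by characters not included in this sketch.
    """
    if not erect_hairs_on_dorsum:
        return "P. williamsi or P. zhaoi"     # dorsum with pubescence only
    if not erect_hairs_on_scape:
        return "P. longmenense"               # scape without erect hairs
    if propodeal_corner == "rounded":
        return "P. itoi or P. malesianum"     # separated further by size
    return "P. bruelheidei or P. kepingmai"   # angular corners; see notes

print(itoi_clade_key(True, True, "angular"))  # P. bruelheidei or P. kepingmai
```

Each branch corresponds to one couplet; unresolved pairs would be split by additional characters (e.g. Weber's length, hair length relative to the metafemur) in the same way.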
Proceratium bruelheidei
Cybertype. Volumetric raw data (in DICOM format), 3D rotation video (in .mp4 format, see Suppl. material 3: Video 1), still images of surface volume rendering, and 3D surface (in PLY format) of the physical holotype (CASENT0790023) in addition to montage photos illustrating head in full-face view, profile and dorsal views of the body. The data is deposited at Dryad (Staab et al. 2018, http://dx.doi.org/10.5061/dryad.h6j0g4p) and can be freely accessed as virtual representation of the type. In addition to the cybertype data at Dryad, we also provide a freely accessible 3D surface model of the holotype at Sketchfab (https://skfb.ly/6txMz).
Diagnosis. Proceratium bruelheidei differs from the other members of the P. itoi clade by the following character combination: relatively large species (TL 3.61-4.00); sides of head straight to very weakly convex, posterior sides only narrowing dorsally, vertex convex; frontal carinae well developed, with large lamellae that extend laterally above the antennal insertions; frontal furrow inconspicuous and of the same color as the surrounding anterior cephalic dorsum; posterodorsal corners of the propodeum broadly angular; propodeal declivity superficially punctured, but shiny; posterior face of petiolar node in profile as steep as anterior face and less than half as long as anterior face; apex of the petiolar node almost as long as broad in dorsal view; subpetiolar process roughly trapezoid and well developed (albeit with variable ventral outline); abdominal segment IV very strongly recurved (IGR 0.24-0.26); in addition to dense pubescence, abundant erect hairs present on scapes and dorsal surface of body, longest of those hairs longer than maximum dorsoventral diameter of metafemur.
Worker description. In full-face view, head slightly longer than broad (CI 89-94), anterior sides straight to very weakly convex, posterior sides narrowing dorsally, vertex convex. Clypeus reduced and narrow, with a broadly triangular median anterior projection. Frontal carinae relatively short, moderately separated, slightly covering antennal insertions, constantly diverging posteriorly, lateral expansions of anterior part of frontal carinae developed as broad lamellae, raised, conspicuously and broadly extending laterally above antennal insertions; frontal area convex; frontal furrow developed as a raised carina, starting at the clypeal projection and extending over the anterior 2/5 of the cephalic dorsum, with a short gap at the level where the lamellae of frontal carinae are broadest, frontal furrow less conspicuous after the gap. Eyes reduced, minute (OI 4-5), consisting of one to four ommatidia and located on midline of head. Antennae 12-segmented, scapes short (SI 59-63), not reaching posterior head margin and thickening apically. Mandibles elongate and triangular, relatively slender, masticatory margin with four teeth in total, apical tooth long and acute, the other teeth smaller and decreasing in size from second to fourth tooth, gap between second and third tooth larger than between other teeth. Mesosoma in profile slightly convex and as long as maximum head length including mandibles (WL 1.03-1.10 vs HLM 0.96-1.09).
Lower mesopleurae (katepisterna) with well-demarcated sutures, upper mesopleurae (anepisterna) with inconspicuous sutures, no other sutures developed on lateral and dorsal mesosoma; lower mesopleurae weakly inflated posteriorly; posterodorsal corner of propodeum broadly angular, propodeal lobes weakly developed as bluntly rounded lamellae; propodeal declivity almost vertical, slightly inclined anteriorly; in posterodorsal view, sides of propodeum separated from declivity by distinct lamellate margins; in profile view, propodeal spiracle rounded, at mid height, opening of spiracle slightly facing posteriorly. Legs moderately long (MFeL 0.63-0.74, MTiL 0.54-0.58, MBaL 0.39-0.41); all tibiae with a pectinate spur; calcar of strigil without a basal spine; pretarsal claws simple; arolia present. Petiolar node in profile high, nodiform, with a straight and sloping anterior face, dorsum of node broadly rounded, posterior face as steep as anterior face and relatively short, less than half as long as anterior face; petiole in dorsal view longer than broad, apex of node almost as long as broad; ventral process of petiole well developed, with a roughly trapezoid projection of varying shape and ventral outline (see 'variation').
In dorsal view abdominal segment III anteriorly much broader than petiole; its sides convex; abdominal sternite III anteromedially with a conspicuous depression marked by a thin rim. Constriction between abdominal segments III and IV deep. Abdominal segment IV very large, strongly recurved (IGR 0.24-0.26) and posteriorly rounded, with a lamella on its anterior border around the constriction to abdominal segment III, this lamella thicker ventrally than dorsally; abdominal tergum IV 1.5-1.6× longer than abdominal tergum III (ASI 145-159); remaining abdominal tergites and sternites inconspicuous and projecting anteriorly. Sting large and extended.
Whole body covered with dense mat of short, decumbent to suberect pubescent hairs; additionally, dorsal surfaces of body with abundant significantly longer suberect and erect hairs; such hairs also present on abdominal sterna III + IV, scapes (anterior faces of scapes with many hairs, posterior faces with fewer hairs) and legs (ventral faces of femora and tibiae with many hairs, dorsal faces with fewer hairs), the longest hairs on dorsal surface of body longer than the maximum dorsoventral diameter of metafemur. Mandibles striate; entire body densely punctate; on sides of pronotum punctures aligned in diffuse lines, appearing striate; punctures on antennae, legs, and abdominal segment IV finer than on rest of body; propodeal declivity shiny and at most superficially punctate; abdominal segments V-VII very superficially reticulate and shiny. Body color uniformly orange brown to reddish brown, vertex of head slightly darker, legs, antennal funiculus, and abdominal segments V-VII yellowish brown.
Etymology. The species epithet is a patronym in honor of the German botanist Prof. Helge Bruelheide and his efforts in establishing and promoting the BEF-China project. All specimens of this species were collected on BEF-China field sites.
Distribution and ecology. Most of the type series was collected during a leaf litter ant survey (Noack 2016) in the experimental tree plantations of the BEF-China Main Experiment. No direct observations of biology and natural history are available. The trees under which the Winkler samples yielding seven of eight type specimens were collected were just six years old and had a mean diameter at breast height of 5.6 ± 2.5 cm (n=7) (Noack 2016). This may indicate that P. bruelheidei could prefer early successional forests with relatively open soil, as the ground from which leaf litter was taken had a mean litter cover of 55 ± 24% (n=7). The single specimen (CASENT0790027) from the Gutianshan National Nature Reserve was likewise collected from an early successional forest stand that was clear-cut less than 20 years prior to the collection of the specimen. However, further sampling will be necessary to draw quantitative conclusions on habitat preferences.
Taxonomic notes. Proceratium bruelheidei is most similar to P. kepingmai. From the other species of the P. itoi clade, P. bruelheidei can be separated by using the characters given in the 'taxonomic notes' of P. kepingmai below. From this species, P. bruelheidei differs by the shape of the head in full-face view with straight sides and a convex vertex (sides convex, broadest at level of eyes and vertex almost straight in P. kepingmai), the shiny propodeal declivity that is only superficially punctured (densely punctured and mostly opaque in P. kepingmai), the inconspicuous frontal furrow that has the same color as the surrounding anterior cephalic dorsum (frontal furrow conspicuous and dark in P. kepingmai), the posterior face of the petiolar node as steep as the anterior face of the node and less than half as long as the anterior face (posterior face steeper than anterior face and about half as long in P. kepingmai), the apex of the petiolar node that is little broader than long (clearly broader than long in P. kepingmai), and the more strongly recurved abdominal segment IV (IGR 0.24-0.26, versus 0.30-0.32 in P. kepingmai). Additionally, P. bruelheidei has distinctly more and longer erect hairs protruding from the dense pubescence on the dorsum of the body and the ventral abdomen. While the number of hairs may be a treacherous character, as hairs can break during specimen processing, the length of hairs can reliably be quantified. In P. bruelheidei the longest erect hairs on the dorsum of the petiole and on abdominal sternum III are longer than the maximum dorsoventral diameter of the metafemur (as long as or shorter than maximum diameter of metafemur in P. kepingmai).
Variation. The variation in body size is within the normal limits of other Proceratium species and the type specimens of P. bruelheidei show, with the notable exception of the subpetiolar process, no observable intraspecific differences. While the process is well developed and roughly trapezoid in all available specimens, its size, exact shape, and ventral outline vary. In the holotype (CASENT0790023) and several paratypes (CASENT0790025, CASENT0790026, CASENT0790029) the subpetiolar process has a distinct notch, so that it almost looks like an upside-down volcano. This notch is absent in other specimens (CASENT0790027, CASENT0790028, CASENT0790030), where the ventral outline of the process is straight. In one specimen (CASENT0790024) the ventral outline is also straight but with a row of minute denticles. It thus appears that this character, which is often used to delimit Proceratium species (e.g. Baroni Urbani and de Andrade 2003, Hita Garcia et al. 2014), may be less suitable for species in the P. itoi clade, as also indicated by the variation in the subpetiolar process within the type series of P. zhaoi (Xu 2000).
Proceratium itoi (Forel, 1918)
Virtual dataset. Volumetric raw data (in DICOM format), 3D rotation video (in .mp4 format, see Suppl. material 4: Video 2), still images of surface volume rendering, and 3D surface (in PLY format) of a non-type specimen (OKENT0016142) in addition to montage photos illustrating head in full-face view, profile and dorsal views of the body. The data is deposited at Dryad (Staab et al. 2018, http://dx.doi.org/10.5061/dryad.h6j0g4p) and can be freely accessed as virtual representation of the species. In addition to the data at Dryad, we also provide a freely accessible 3D surface model at Sketchfab (https://skfb.ly/6txMM).
Diagnosis. Proceratium itoi differs from the other members of the P. itoi clade by the following character combination: medium-sized species (TL 3.46-3.82); sides of head very weakly convex, almost straight, broadest at level of eyes and gently narrowing anteriorly and posteriorly, vertex weakly convex, almost straight; frontal carinae well developed, with large lamellae that extend laterally above the antennal insertions; frontal furrow inconspicuous; posterodorsal corners of propodeum rounded, propodeal declivity superficially punctured (more so dorsally) but largely shiny; posterior face of petiolar node in profile steeper than anterior face; petiole almost as broad as long (DPeI 86-93), apex of petiolar node broader than long in dorsal view; subpetiolar process developed and triangular (but may be small); in addition to dense pubescence, abundant erect hairs present on scapes and dorsal surface of body, longest of those hairs shorter than maximum dorsoventral diameter of metafemur.
Distribution and ecology. This species is widely distributed, occurring from Japan (except Hokkaido) and South Korea to Vietnam. It has been recorded from Taiwan and the Chinese provinces Zhejiang and Hunan. Thus, we expect that it will be collected in the geographically intermediate provinces in the future. No direct biological observations from China are available, but the Japanese populations are comparatively well studied (Onoyama and Yoshimura 2002). Nests are found in the soil or rotting wood of various deciduous or evergreen forest types, and workers forage hypogeically or in leaf litter. Mature colonies have 100-200 workers and densities can reach 0.3 colonies per m² (Masuko 2010). Larval hemolymph feeding has been observed (Masuko 1986).
Taxonomic notes. Proceratium itoi is a typical member of its clade of intermediate size (WL 0.96-1.04) and is similar to most other species in body proportions and indices. Proceratium itoi can be separated from P. williamsi and P. zhaoi by the presence of erect hairs on the dorsal body surface (absent in P. williamsi and P. zhaoi); from P. longmenense by the presence of erect hairs on the scape (absent in P. longmenense) and by the frontal carinae separated at their anteriormost level (touching each other at their anteriormost level in P. longmenense). In P. itoi the posterodorsal corners of the propodeum are rounded, and this character distinguishes this species from P. bruelheidei and P. kepingmai (posterodorsal corners of the propodeum angular), which are also larger species (WL 1.03-1.10 and 1.14-1.24). The rounded posterodorsal corners of the propodeum are shared between P. malesianum and P. itoi, but P. malesianum is a smaller species.
Proceratium kepingmai
Cybertype. Volumetric raw data (in DICOM format), 3D rotation video (in .mp4 format, see Suppl. material 5: Video 3), still images of surface volume rendering, and 3D surface (in PLY format) of the physical holotype (CASENT0790031) in addition to montage photos illustrating head in full-face view, profile and dorsal views of the body. The data is deposited at Dryad (Staab et al. 2018, http://dx.doi.org/10.5061/dryad.h6j0g4p) and can be freely accessed as virtual representation of the type. In addition to the cybertype data at Dryad, we also provide a freely accessible 3D surface model of the holotype at Sketchfab (https://skfb.ly/6txMy).
Diagnosis. Proceratium kepingmai differs from the other members of the P. itoi clade by the following character combination: large species (TL 4.39-4.54); sides of head weakly convex, broadest at level of eyes and gently narrowing anteriorly and stronger posteriorly; vertex almost straight; very reduced eyes (OI 2-3) consisting of a single minute ommatidium; frontal carinae well developed, with large lamellae that extend laterally above the antennal insertions; frontal furrow darker than the surrounding anterior cephalic dorsum; posterodorsal corners of the propodeum broadly angular; propodeal declivity densely punctured, mostly opaque; posterior face of petiolar node in profile steeper than anterior face and about half as long as anterior face; apex of petiolar node distinctly broader than long in dorsal view; in addition to dense pubescence, erect hairs present on scapes and dorsal surface of body, longest of those hairs at most as long as the maximum dorsoventral diameter of metafemur.
Worker description. In full-face view, head slightly longer than broad (CI 92-93), sides weakly convex, broadest at the eye level and gently narrowing anteriorly and more strongly posteriorly, vertex weakly convex, almost straight. Clypeus reduced and narrow, with a broadly triangular median anterior projection. Frontal carinae relatively short, moderately separated, slightly covering antennal insertions, constantly diverging posteriorly, lateral expansions of anterior part of frontal carinae developed as broad lamellae, raised, conspicuously and broadly extending laterally above antennal insertions; frontal area convex; frontal furrow well developed as a raised carina, starting at the clypeal projection and extending over the anterior 2/5 of the cephalic dorsum, with a short gap at the level where the lamellae of frontal carinae are broadest. Eyes reduced, minute (OI 2-3), consisting of a single ommatidium and located on midline of head. Antennae 12-segmented, scapes short (SI 60-62), not reaching posterior head margin and thickening apically. Mandibles elongate and triangular, masticatory margin with four teeth in total, apical tooth long and acute, the other teeth smaller and decreasing in size from second to fourth tooth, gap between second and third tooth larger than between other teeth.
Petiolar node in profile high, nodiform, with a straight and sloping anterior face, dorsum of node broadly rounded, posterior face half as long and steeper than anterior face; petiole in dorsal view longer than broad but apex of node clearly broader than long; ventral process moderately developed on anterior petiole, with a relatively indistinct rectangular projection.
In dorsal view abdominal segment III anteriorly much broader than petiole; its sides convex; abdominal sternite III anteromedially with a conspicuous depression marked by a thin rim. Constriction between abdominal segments III and IV deep. Abdominal segment IV very large, recurved (IGR 0.30-0.33) and posteriorly strongly rounded, with a lamella on its anterior border around the constriction to abdominal segment III, this lamella thicker ventrally than dorsally; abdominal tergum IV 1.6-1.7× longer than abdominal tergum III (ASI 161-169); remaining abdominal tergites and sternites inconspicuous and projecting anteriorly. Sting large and extended.
Whole body covered with dense mat of short, decumbent to suberect pubescent hairs; additionally, the dorsal surfaces of body interspersed with significantly longer suberect and erect hairs, such hairs also present on abdominal sterna III + IV, scapes (anterior faces of scapes with many hairs, posterior faces with single hairs), and legs (ventral faces of femora and tibiae with many hairs, dorsal faces with single hairs); the longest hairs on dorsal surface of body at most as long as the maximum dorsoventral diameter of metafemur. Mandibles striate; entire body including propodeal declivity densely punctate; on sides of pronotum punctures aligned in diffuse lines, appearing striate; punctures on antennae, legs, and abdominal segment IV finer than on rest of body, abdominal segments V-VII very superficially punctured and shiny. Body color uniformly orange brown to reddish brown, vertex of head slightly darker, frontal furrow conspicuously darker than surrounding cephalic dorsum, legs, antennal funiculus, and abdominal segments V-VII yellowish brown.
Etymology. The species epithet is a patronym in honor of the Chinese botanist Prof. Keping Ma and his efforts in establishing the BEF-China project and promoting biodiversity research and nature conservation in China. All specimens of this species were collected in old-growth subtropical forest, an ecosystem Prof. Ma has investigated in detail.
Distribution and ecology. Both specimens were collected in secondary mixed evergreen broadleaved forest of relatively advanced age, as indicated by the presence of large trees. The paratype was collected within the Gutianshan National Nature Reserve (Yu et al. 2001, Bruelheide et al. 2011, Staab 2014), one of the larger remaining fragments of subtropical broadleaved forest in southeast China. The forest at this locality (the type locality is a similar but much smaller forest fragment) is on sloped land and rich in plant species; more than 250 woody species have been recorded on about 8000 ha. Approximately 50% of the woody species are deciduous, but the tree layer is dominated by evergreen species including Castanopsis eyrei (Champ. ex Benth.) Tutch. (Fagaceae), Cyclobalanopsis glauca (Thunb.) Oerst. (Fagaceae), Machilus thunbergii Sieb. et Zucc. (Lauraceae), and Schima superba Gardn. et Champ. (Theaceae). No direct observations of biology and natural history are available for P. kepingmai.
Taxonomic notes. Proceratium kepingmai is the largest (WL 1.14-1.24) member of the P. itoi clade and has, even for eye-bearing Proceratium, very minute eyes (OI 2-3). From each of the species in the clade with very similar body proportions (particularly indices) that also have erect hairs on the dorsal surface of the body (P. itoi, P. longmenense, P. malesianum, P. bruelheidei; no erect hairs, only dense pubescence in P. williamsi and P. zhaoi) it can safely be separated by one or more characters. In P. kepingmai the posterodorsal corner of the propodeum is angular (rounded in P. itoi and P. malesianum), which is also the case for P. longmenense and P. bruelheidei. However, P. longmenense lacks erect hairs on the scape (at least some erect hairs present in P. kepingmai and P. bruelheidei), has a relatively narrower head (CI 85) with longer scapes (SI 68) (CI 92-93 and SI 60-62 in P. kepingmai), and frontal carinae that touch each other at their anteriormost level (clearly separated in P. kepingmai and P. bruelheidei). With P. bruelheidei, the most similar species, P. kepingmai also shares the broad frontal carinae that have large lamellae and are conspicuously extended laterally above the antennal insertions (not extended and narrower in P. longmenense). In contrast, P. kepingmai differs from P. bruelheidei by the shape of the head in full-face view, which has convex sides that are broadest at the level of the eyes and narrow weakly anteriorly and more strongly posteriorly towards the almost straight vertex (sides straight, not narrowing anteriorly, and vertex convex in P. bruelheidei), the densely punctured and mostly opaque propodeal declivity (sparsely and superficially punctured and very shiny in P. bruelheidei), the conspicuous frontal furrow that is darker than the rest of the surrounding anterior cephalic dorsum (inconspicuous and of the same color in P. bruelheidei), the posterior face of the petiolar node in profile steeper than the anterior face and about half as long as the anterior face (posterior face as steep as the anterior face and less than half as long in P. bruelheidei), the apex of the petiolar node that is clearly broader than long in dorsal view (less broad than long in P. bruelheidei), and relatively fewer and shorter erect hairs (see P. bruelheidei for details).
Variation. Apart from the small difference in body size (WL 1.14 vs. 1.24) there is no observable variation between the two specimens.

Proceratium longmenense

Diagnosis. Proceratium longmenense differs from the other members of the P. itoi clade by the following character combination: medium-sized species (TL 3.2); sides of head and vertex weakly convex, almost straight; head (CI 85) and scapes (SI 68) relatively long; frontal carinae developed, their lateral lamellae relatively narrow, touching each other at their anteriormost level, not conspicuously broader above antennal insertions; posterodorsal corners of the propodeum broadly angular; posterior face of petiolar node in profile shorter and steeper than anterior face; petiole almost as broad as long (DPeI 91); subpetiolar process developed, roughly trapezoid; in addition to dense pubescence, erect hairs present on dorsal surface of body, but only sparsely on head; scapes without erect hairs.
Distribution and ecology. This species is only known from the holotype that was collected in subtropical evergreen broadleaved forest at 2050 m asl. No direct observations of biology and natural history are available for P. longmenense.
Taxonomic notes. The unique hair patterns separate P. longmenense from the other species of the P. itoi clade. Proceratium williamsi and P. zhaoi have no erect hairs that protrude from the dense pubescence on the dorsal surface of the body (hairs present in P. longmenense, but relatively sparse, especially on the head). All other species (P. bruelheidei, P. itoi, P. kepingmai, P. malesianum) also have such hairs on the scapes (absent on the scapes in P. longmenense). In addition to hairs, which may be worn down in old specimens, P. longmenense is unique by the relatively long scapes (SI 68) combined with the relatively narrow head (CI 85). Among the other Chinese P. itoi clade species, it differs furthermore from P. zhaoi in size (WL 0.97; WL 0.66-0.80 in P. zhaoi), from P. itoi by the shape of the posterodorsal corners of the propodeum (broadly angular; rounded in P. itoi), and from P. bruelheidei, P. itoi, and P. kepingmai by the lamellae of the frontal carinae (touching each other at their anteriormost level; separated in the other three species).

Proceratium zhaoi Xu, 2000
Figs 4A, 15, 16, 17, 25
Proceratium zhaoi Xu, 2000: 435 (w.q.), China.
Proceratium nujiangense Xu, 2006: 153 (w.q.), China, syn. n.

Type material. Of P. zhaoi: Holotype. Pinned worker from CHINA, Yunnan Province, Menghai County, Meng'a Town, Papo Village, 1280 m asl, deciduous broadleaved forest, soil sample, 10-IX-1997, leg. Zheng-Hui Xu, No. A97-2338 [examined].
Paratypes. Six pinned workers and 24 alate females; one worker with same data as holotype; all other paratypes with same data as holotype but No. A97-2380 (CASENT0235334 in CASC; CASENT0790671 and all other paratypes in SWFU) [all examined].
Virtual dataset. Volumetric raw data (in DICOM format), 3D rotation videos (in .mp4 format, see Suppl. material 6: Video 4 for P. zhaoi and Suppl. material 7: Video 5 for P. nujiangense), still images of surface volume rendering, and 3D surfaces (in PLY format) of a paratype of P. zhaoi (CASENT0790671) and a paratype of P. nujiangense (CASENT0790672), in addition to montage photos illustrating head in full-face view, profile and dorsal views of the body. The data is deposited at Dryad (Staab et al. 2018, http://dx.doi.org/10.5061/dryad.h6j0g4p) and can be freely accessed as virtual representations of the species. In addition to the data at Dryad, we also provide freely accessible 3D surface models at Sketchfab (https://skfb.ly/6txOT and https://skfb.ly/6txOL).
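The 3D surfaces in these virtual datasets are distributed as PLY files. As a rough illustration of what such a file declares, the following Python sketch parses an ASCII PLY header and reports its element counts; the header shown is a hypothetical stand-in, not an actual specimen file from Dryad, and this is not the authors' tooling.

```python
# Minimal sketch of reading an ASCII PLY header, the format in which the
# 3D surface models of the virtual datasets are distributed. Illustrative
# only; the header below is hypothetical, not a real specimen file.
import io

def ply_counts(fileobj) -> dict:
    """Return {element_name: count} declared in a PLY header."""
    assert fileobj.readline().strip() == "ply", "not a PLY file"
    counts = {}
    for line in fileobj:
        parts = line.split()
        if parts[:1] == ["element"]:       # e.g. "element vertex 8"
            counts[parts[1]] = int(parts[2])
        elif parts[:1] == ["end_header"]:  # payload starts after this line
            break
    return counts

# A tiny hypothetical header standing in for a real surface model.
header = io.StringIO(
    "ply\n"
    "format ascii 1.0\n"
    "element vertex 8\n"
    "property float x\nproperty float y\nproperty float z\n"
    "element face 12\n"
    "property list uchar int vertex_indices\n"
    "end_header\n"
)
print(ply_counts(header))  # prints: {'vertex': 8, 'face': 12}
```

In practice one would hand such files to a dedicated mesh library rather than parse them by hand; the point here is only that the published surfaces are plain, openly documented geometry files.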
Diagnosis. Proceratium zhaoi differs from the other members of the P. itoi clade by the following character combination: small species (TL 2.0-2.8, WL 0.66-0.80; measurements and indices use data from the original descriptions); sides of head weakly convex, broadest at level of eyes and gently narrowing anteriorly and posteriorly, posterior head margin weakly concave to almost straight; frontal carinae developed, their lateral lamellae relatively narrow, not extending over antennal insertions; posterodorsal corners of propodeum bluntly angled; posterior face of petiolar node, in profile, shorter and steeper than anterior face, dorsum of node broadly rounded, petiole as long as broad or broader than long (DPeI 98-110), subpetiolar process developed, relatively variable, varying in size and shape (from rectangular to triangular to acutely toothed); only dense pubescence, no erect hairs on dorsum of body, head, and scapes.
Distribution and ecology. This species is only known from two locations at mid elevation in forests of southern and western Yunnan Province. The original description reported 45 workers in the type colony (Xu 2000) and no other data on natural history have been published. However, the relatively short legs suggest a purely hypogeic life style, which conforms to the fact that specimens were extracted from soil samples.
Taxonomic notes. Even though at the beginning of this study we treated P. nujiangense and P. zhaoi as distinct species, thorough examinations combining traditional microscopy with micro-CT scans proved that there are no morphological characters separating them. The virtual comparisons of type specimens of both taxa showed that there are no morphological differences, a fact that is not easy to observe by comparing physical specimens. The types are hairy, dirty, and mounted in ways that hide the most important characters, as is typical for most Proceratium specimens. Furthermore, the main character used by Xu (2006) to separate the species was the subpetiolar process, which has been used for species diagnostics in previous studies (Baroni Urbani and de Andrade 2003, Hita Garcia et al. 2014). However, these works either had very little material for the assessment of intraspecific variation and/or treated different clades of Proceratium. Our study shows that the subpetiolar process is extremely variable within the P. itoi clade, and we refrain from using it for species delimitations. As a matter of fact, the variation of the subpetiolar process was already noted in the description of P. zhaoi (Xu 2000). Reexamination of all type specimens of both species also revealed a comparatively high degree of variation and overlap in the form of the posterodorsal corner of the propodeum and the width of the petiolar node. In addition, the morphometric ranges of P. nujiangense and P. zhaoi overlap and form a continuum, and there are no significant differences in proportions since all indices are identical. Considering these similarities in light of the newly available images and micro-CT data, we propose treating P. nujiangense as a junior synonym of P. zhaoi.
This species was not mentioned in the revision of Baroni Urbani and de Andrade (2003), potentially because the authors were not aware of its description shortly before the completion of their monograph. Despite some size variation (TL 2.0-2.8), the relative body proportions of P. zhaoi are constant. Proceratium zhaoi is the smallest (WL 0.66-0.80) member of the P. itoi clade. It can be distinguished from all other P. itoi clade species (except for P. williamsi) by the absence of erect hairs that protrude through the dense pubescence on the dorsal body surface. Proceratium williamsi also lacks hairs on the dorsal body surface, but is larger (WL 0.80-0.92) and has more strongly developed frontal carinae and relatively more slender and longer legs. The relatively weakly developed frontal carinae and the short legs (MFeI <80, MTiI <65, MBaI <40) also make P. zhaoi unique among the Chinese P. itoi clade species.
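The size- and index-based separations used throughout these notes reduce to simple ratio and range comparisons. The sketch below illustrates them in Python, assuming the conventional myrmecological index definitions (e.g., cephalic index CI = 100 × HW/HL, scape index SI = 100 × SL/HW); the exact measurement protocol is the one defined in this paper's methods, and the sample measurements below are hypothetical, not taken from any specimen.

```python
# Sketch of the ratio and range comparisons behind the morphometric
# indices cited in the text. Index definitions follow common myrmecological
# convention (an assumption here, not quoted from this paper's methods).

def index(numerator_mm: float, denominator_mm: float) -> int:
    """Ratio of two linear measurements, scaled to 100 and rounded."""
    return round(100 * numerator_mm / denominator_mm)

def ranges_overlap(a: tuple, b: tuple) -> bool:
    """True if two (min, max) ranges overlap, i.e., form a continuum."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical worker: head length, head width, scape length (mm).
HL, HW, SL = 0.60, 0.55, 0.34
CI = index(HW, HL)  # cephalic index
SI = index(SL, HW)  # scape index

# Weber's length ranges from the text: P. zhaoi (0.66-0.80) and
# P. williamsi (0.80-0.92) touch at 0.80, so the ranges formally overlap.
print(CI, SI, ranges_overlap((0.66, 0.80), (0.80, 0.92)))  # prints: 92 62 True
```

This is why overlapping, continuous morphometric ranges (as between P. nujiangense and P. zhaoi above) argue against treating two series as separate species, while disjoint ranges can support a separation.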
Proceratium silaceum clade
Definition. Workers of this clade can be distinguished by a moderately squamiform petiolar node that narrows only slightly from base to apex (extremely squamiform in the species of the Fiji archipelago; Hita Garcia et al. 2015) and by an almost straight to weakly concave anterior clypeal margin (definition follows Baroni Urbani and de Andrade 2003).
Comments. The P. silaceum clade sensu Baroni Urbani and de Andrade (2003) is, with more than 30 species, the most speciose and widespread clade within the genus. Numerous species occur in Borneo and in Australia. Species of this clade have been reported from all continents and several have reached oceanic islands. From China and east Asia only two species, P. japonicum and P. longigaster, are known.

Proceratium japonicum Santschi, 1937
Figs 1A, 2B, 18, 19, 24
Proceratium japonicum Santschi, 1937: 362 (w.), Japan (see also Baroni Urbani and de Andrade 2003).

Virtual dataset. Volumetric raw data (in DICOM format), 3D rotation video (in .mp4 format, see Suppl. material 8: Video 6), still images of surface volume rendering, and 3D surface (in PLY format) of a non-type specimen (CASENT0790834) in addition to montage photos illustrating head in full-face view, profile and dorsal views of the body. The data is deposited at Dryad (Staab et al. 2018, http://dx.doi.org/10.5061/dryad.h6j0g4p) and can be freely accessed as virtual representation of the species. In addition to the data at Dryad, we also provide a freely accessible 3D surface model at Sketchfab (https://skfb.ly/6txNO).
Diagnosis. Proceratium japonicum differs from the other east Asian members of the P. silaceum clade by the following character combination: medium-sized species (WL 0.72-1.00); sides of head convex, broadest above the level of eyes; anterior clypeal margin not protruding and slightly notched; frontal carinae well developed and widely separated, with large lamellae that extend laterally above the antennal insertions and reach posteriorly almost to the level of eyes; frontal furrow strongly developed; petiole squamiform, in profile not or only weakly narrowing dorsally, the base as or almost as broad as the apex, in dorsal view relatively narrow (DPeI <150); subpetiolar process developed, subtriangular, directed backwards; sculpture not deeply impressed, on abdominal segment III granulate and relatively regular; in addition to dense pubescence, some suberect to erect hairs present on scapes and dorsal surface of body.
Distribution and ecology. This species is common from Japan (except Hokkaido) to Taiwan and usually collected in forests of relatively low elevation. It has also been reported from Yunnan Province in China. Thus, it is not unlikely that more records from the southern and eastern Chinese mainland will appear in the future if sampling effort is increased. No direct biological observations from China are available. In Japan, nests are typically found in deadwood in evergreen broadleaved forest (Onoyama and Yoshimura 2002). Colony size can reach over 150 workers and larval haemolymph feeding has been observed (Masuko 1986).
Taxonomic notes. According to Baroni Urbani and de Andrade (2003) P. japonicum is most similar to P. numidicum Santschi, 1912, which is, however, a geographically widely separated species occurring in the eastern Mediterranean and northern Africa. We were not able to examine P. japonicum material from China. In Japan, specimens from the Ryukyu and Yaeyama islands are smaller than those from the main islands (Onoyama and Yoshimura 2002, Baroni Urbani and de Andrade 2003), explaining the relatively large variation in body size.
From P. longigaster, the only other P. silaceum clade species in China and east Asia, P. japonicum can be separated by the shape of the petiole in profile, which does not narrow or only weakly narrows dorsally (clearly narrowing dorsally, broader at the base than at the apex in P. longigaster). Also, the petiole in dorsal view is narrower in P. japonicum (DPeI <150) than in P. longigaster (DPeI ≥155). Furthermore, the frontal carinae in P. japonicum reach posteriorly almost to the level of the eyes (shorter and ending well below the level of the eyes in P. longigaster). Proceratium japonicum has only relatively few suberect to erect hairs that protrude from the dense pubescence on the dorsal body; those hairs are straight (never shaggy) and do not conspicuously project from LT3 over the constriction between LT3 and LT4 (many shaggy hairs projecting in P. longigaster); if single longer hairs are present, then they are not shaggy.

Proceratium longigaster Karavaiev, 1935
Figs 2A, 20, 21, 25
Proceratium longigaster Karavaiev, 1935: 59 (w.), Vietnam (see also CASENT0790673 and CASENT0790843 in SWFU; CASENT0790845 in BMNH; CASENT0790846 in ZMBH).
Virtual dataset. Volumetric raw data (in DICOM format), 3D rotation video (in .mp4 format, see Suppl. material 9: Video 7), still images of surface volume rendering, and 3D surface (in PLY format) of a non-type specimen (CASENT0790673) in addition to montage photos illustrating head in full-face view, profile and dorsal views of the body. The data is deposited at Dryad (Staab et al. 2018, http://dx.doi.org/10.5061/dryad.h6j0g4p) and can be freely accessed as virtual representation of the species. In addition to the data at Dryad, we also provide a freely accessible 3D surface model at Sketchfab (https://skfb.ly/6txOA).
Diagnosis. Proceratium longigaster differs from the other east Asian members of the P. silaceum clade by the following character combination: medium-sized species (WL 0.75-0.89); sides of head slightly convex, broadest directly above the level of eyes; anterior clypeal margin not protruding and slightly notched; frontal carinae well developed and widely separated, with large lamellae that extend laterally above the antennal insertions and reach posteriorly about half the distance to the level of eyes; frontal furrow strongly developed; petiole squamiform; in profile, narrowing dorsally, the base clearly broader than the apex; in dorsal view, relatively wide (DPeI ≥155); subpetiolar process developed, subtriangular, directed backwards and relatively acute; sculpture deeply impressed, on abdominal segment III irregularly granular to reticulate (more so on dorsum); very hairy species; in addition to dense pubescence, many appressed to erect hairs present on entire body; abundant, long, appressed, shaggy hairs project from LT3 distinctly over the constriction between LT3 and LT4; OI 5.

Distribution and ecology. The type locality is at ca. 1400 m asl in the Bà Nà hills close to Đà Nẵng city (referred to as Tourane in the original description), central Vietnam. The species is also known from Nangongshan Mountain, Mengla County, Yunnan Province (1525 m asl) (Xu 2000) and from Hunan Province (Guénard and Dunn 2012). In the places where it is known, specimens were collected from the ground in evergreen broadleaved forest. The new record from the Gutianshan National Nature Reserve, Zhejiang Province, is no exception in being from the same forest type, albeit at lower elevation (890 m asl), and marks the easternmost distribution of the species. Thus, P. longigaster seems to be widespread in suitable forest habitats in south and east China and adjacent countries. No direct observations of biology and natural history are available.
Taxonomic notes. This is a poorly known species. Since the single type specimen was not available for examination, Baroni Urbani and de Andrade (2003) were unable to formally treat it in their monograph. Karavaiev's (1935) type specimen is lodged in the Schmalhausen Institute of Zoology (Kiev, Ukraine) and cannot be obtained as a loan. Fortunately, though, it has recently been imaged and the montage photos are available on AntWeb (CASENT0916806). Our new specimens agree with the type and the accounts of Xu (2000). Thus, with a note of caution, we feel confident enough to treat the specimens from Zhejiang Province as P. longigaster.
The only other P. silaceum clade species known from China and east Asia is P. japonicum, from which P. longigaster can be separated by the shape of the petiolar node, the frontal carinae, and the pilosity, among other characters (see the accounts for P. japonicum above).
Proceratium stictum clade
Definition. Workers of this clade can be separated from all other Proceratium by the combination of a calcar of strigil with a basal spine and a clypeus distinctly and broadly notched (definition follows Baroni Urbani and de Andrade 2003).
Comments. This is an exclusively tropical clade with species occurring in Africa, Australia, Madagascar, the Mascarene Islands, Mesoamerica, and tropical southeast Asia. Eleven extant species are known, of which P. deelemani Perrault, 1981, P. foveolatum Baroni Urbani and de Andrade, 2003, P. stictum Brown, 1958, and the newly described P. shohei are known from the oriental zoogeographic region. Proceratium shohei is the only species known from China.

Proceratium shohei

Cybertype. Volumetric raw data (in DICOM format), 3D rotation video (in mp4 format, see Suppl. material 10: Video 8), still images of surface volume rendering, and 3D surface (in PLY format) of the physical holotype (CASENT0717686) in addition to montage photos illustrating head in full-face view, profile and dorsal views of the body. The data is deposited at Dryad (Staab et al. 2018, http://dx.doi.org/10.5061/dryad.h6j0g4p) and can be freely accessed as virtual representation of the type. In addition to the cybertype data at Dryad, we also provide a freely accessible 3D surface model of the holotype at Sketchfab (https://sketchfab.com/models/0dd8217041274f268fae8897958d9b6a).
Diagnosis. Proceratium shohei differs from the other oriental members of the P. stictum clade by the following character combination: head broadest at the level of eyes, sides and vertex of head weakly convex, almost straight; scapes relatively long (SI 72); frontal carinae relatively broad and slightly convex; posterodorsal corners of propodeum with broad teeth that project over less than half of the propodeal lobes in profile; petiole in dorsal view longer than broad; petiolar node relatively compressed dorsoventrally; subpetiolar process inconspicuous, a lamella only, without a projection.

Worker description. In full-face view, head slightly longer than broad (CI 90), sides and vertex weakly convex, almost straight. Clypeus relatively broad, surrounding antennal insertions and protruding anteriorly, anterior clypeal margin with a distinct notch. Frontal carinae relatively short, broadly separated from each other, constantly diverging posteriorly and not covering antennal insertions, lateral expansions of frontal carinae slightly concave in full-face view; frontal area convex; frontal furrow absent. Genal carinae strongly developed; ventral face of head (gular area) concave. Eyes relatively large (OI 10), consisting of one convex ommatidium, located slightly anterior to the midline of head. Antennae 12-segmented, scapes comparatively long (SI 72), not reaching posterior head margin and thickening apically. Mandibles elongate and triangular, masticatory margin with three teeth in total, apical tooth large and acute, the other teeth smaller and decreasing in size from second to third tooth, which is followed by a series of minute blunt denticles.
Mesosoma in profile convex and longer than maximum head length including mandibles. Lower mesopleurae (katepisterna) with demarcated sutures, upper mesopleurae (anepisterna), and promesonotum with inconspicuous and very shallow sutures; lower mesopleurae inflated posteriorly; posterodorsal corners of propodeum with broad teeth that project over less than half of the propodeal lobes in profile, propodeal lobes strongly developed as broadly triangular teeth protruding dorsolaterally; propodeal declivity almost vertical, slightly inclined anteriorly; in posterodorsal view, sides of propodeum separated from declivity by lamellate margins; propodeal spiracle relatively small, located above mid height; in profile, opening ellipsoid and facing posteriorly. Legs comparatively long; all tibiae with a pectinate spur; calcar of strigil with a basal spine; pretarsal claws simple; arolia present.
Petiole in dorsal view longer than broad, sides consistently diverging posteriorly, anterior border with a thick margin that is distinctly angulate on each side; in profile, petiolar node relatively compressed dorsoventrally, its anterior face slightly sloping; dorsum of node relatively flat, weakly convex; ventral face inconspicuous with a thin lamella and no projection.
In dorsal view, abdominal segment III anteriorly much broader than petiole, its sides weakly convex; abdominal sternite III extended ventrally, its outline straight, anteromedially with a conspicuous depression marked by a broad rim. Constriction between abdominal segments III and IV deep. Abdominal segment IV very large, very strongly recurved (abdominal sternum IV reduced and IGR not measurable) and posteriorly rounded, with a thin lamella on its anterior border; abdominal tergum IV slightly longer than abdominal tergum III (ASI 103), remaining abdominal tergites and sternites inconspicuous and projecting anteriorly. Sting large and extended.
Whole body covered with dense, relatively short, decumbent to erect hairs; additionally, significantly longer suberect to erect hairs abundant on the whole body, including legs and scapes; such hairs also present on funicular joints, but shorter and relatively thicker; dense appressed to decumbent pubescence on the funiculus only. Mandibles striate; head, mesosoma, petiole, and abdominal segment III foveolate with superimposed punctures and granules, the foveae relatively deep, large, and irregular; abdominal segment IV smooth and shiny, dorsally without sculpture, laterally superficially punctured; scapes and legs densely punctured. Body color uniformly dark ferruginous-brown, antennae, legs, and abdominal segments V-VII orange brown.
Etymology. This species is named in honor of Dr. Shohei Suzuki (1979-2016), a Japanese marine biologist whose life was tragically lost in a diving accident while conducting coral reef research in Okinawa.
Distribution and ecology. No direct observations of biology and natural history are available. The type specimen was collected from rain forest leaf litter. Like many other ant species occurring in the tropical rain forest of Xishuangbanna, the species probably also occurs in adjacent countries such as Laos or Thailand.
Taxonomic notes. In Liu et al. (2015b) this species was erroneously listed as P. deelemani, a species known from Borneo, peninsular Malaysia, and Thailand (see Baroni Urbani and de Andrade 2003). However, a careful reexamination of the specimen from Yunnan and comparisons with images of the holotype of P. deelemani (CASENT0915370) and further P. deelemani specimens from Borneo (CASENT0790842, CASENT0790847, CASENT0790848; see Suppl. material 2: Figure S1 for micro-CT images of CASENT0790842 and see Suppl. material 11: Video 9 for a 3D rotation video of the same specimen) revealed considerable morphological differences that convinced us to separate both species and to describe P. shohei as new. Among the other species of the P. stictum clade occurring in the oriental zoogeographic region (P. deelemani, P. foveolatum, P. stictum), P. shohei is unsurprisingly most similar to P. deelemani, but both species can be safely and easily separated. Proceratium shohei has an indistinct subpetiolar process without a median anterior projection (subpetiolar process with a distinct tooth in P. deelemani; as opposed to the P. itoi clade, the subpetiolar process is an informative character in the P. stictum clade). Also, P. shohei has relatively longer scapes (SI 72) (SI 58-68 in P. deelemani), the posterodorsal corner of the propodeum with relatively shorter teeth that project over less than half of the length of the propodeal lobes in profile (at least projecting over half of the propodeal lobes in P. deelemani), a very reduced LS4 so that IGR cannot be measured (LS4 also reduced but IGR 0.23-0.29 in P. deelemani), a straight ventral outline of LS3 (with a depression in P. deelemani), and slightly convex frontal carinae (slightly concave in P. deelemani). Superficially, P. shohei also resembles P. stictum and P. foveolatum. From P. stictum it can be distinguished by the subpetiolar process without a median anterior projection (subpetiolar process with a distinct tooth in P. stictum), the longer teeth on the posterodorsal corners of the propodeum that project straight backwards (short and blunt, projecting slightly dorsally in P. stictum), and the foveolate sculpture of the head, mesosoma, petiole, and LT3 (coarsely granulate with superimposed foveae in P. stictum). The sculpture of the integument likewise easily distinguishes P. shohei from P. foveolatum, which has the entire integument including LT4 covered with large, deep, regular, and clearly demarcated foveae (foveae smaller and shallower, at most superficial punctures but no foveae on LT4 in P. shohei). Also, in P. foveolatum LT4 is extended posteriorly and forms a broad, strong angle, while LT4 is not as extended and is broadly rounded in P. shohei.
Variation. Since this species is known only from the holotype there is no available information about intraspecific variation.
The genus Proceratium in China
As for most other regions in which Proceratium occurs, collection records and distributional information for the Chinese fauna are very limited, which is likely a consequence of the species' cryptobiotic and partly subterranean lifestyle. This is especially true for the P. itoi clade, which, based on currently available information, seems to be restricted to east and southeast Asia (Baroni Urbani and de Andrade 2003, Guénard et al. 2017). All species of this clade except P. malesianum (Peninsular Malaysia) and P. williamsi (Bhutan; India) have been recorded from China but are generally only known from few locations. Further collections targeting leaf litter and soil (Wong and Guénard 2017) will be necessary to clarify species-specific distribution ranges. It is expected that several species of the genus, of which some might also be new to science, occur in the large areas in south and southeast China that lack records so far. Increased specimen availability will also allow associating queens and males with workers, as both reproductive castes are only known for P. itoi and P. japonicum (Onoyama and Yoshimura 2002, Baroni Urbani and de Andrade 2003), while for P. zhaoi queens have been described (Xu 2000).
Recently, Liu et al. (2015b) recorded P. deelemani Perrault, 1981, a conspicuous large-bodied species originally described from Borneo, from the tropical rain forests of Xishuangbanna, Yunnan Province. After careful reexamination of the single available specimen, we find that this species differs in several important characters from P. deelemani and describe it as P. shohei. The species belongs to the P. stictum clade and represents the northernmost record of this tropical clade in Asia.
With the exception of P. bruelheidei, whose type habitat is an early successional tree plantation with relatively open soil and comparatively little litter cover, all other Chinese species have only been collected from old-growth forests. Unfortunately, forests in tropical and subtropical China have been heavily transformed and fragmented (e.g. Song 2006, Li et al. 2009), which has largely unknown but likely negative consequences for native ant assemblages (e.g. Liu et al. 2016).
Direct observations of ecology and natural history are very rare for Chinese Proceratium. To the best of our knowledge, the nest size of 45 individuals for the type colony of P. zhaoi given by Xu (2000) is the only published information on that matter. We assume that the general natural history of the Chinese species conforms to the observations from other parts of the world outlined above (Baroni Urbani and de Andrade 2003). This life history is also documented for the Japanese populations of P. japonicum and P. itoi (Masuko 1986, Onoyama and Yoshimura 2002), two species that occur in China. As for distribution ranges and habitat preferences, further observations and collections will be necessary to extend our knowledge on natural history.

Figure 24. Maps of China (country is shown in dark grey with highlighted country and province borders) and South East Asia displaying known species distribution ranges (in green) of P. bruelheidei, P. itoi, P. japonicum, and P. kepingmai.
Microtomography
One problem encountered by Hita Garcia et al. (2017b) was the poor recovery of pilosity in the 3D reconstructions due to insufficient voxel resolution, which was resolved in Hita Garcia et al. (2017a) by scanning single body parts at higher resolutions. Nevertheless, in this study, we aimed to turn this handicap into an advantage. Like most proceratiines, the Chinese species of Proceratium are very hairy and covered in a thick pelt, which makes morphological examinations challenging. The furry coats cover and hide important character states, such as surface sculpture, and, to make things worse, many specimens are extremely dirty due to numerous soil particles caught within the hairs. Furthermore, potentially harmful cleaning or dissection of specimens is out of the question since, as is typical for the genus in general, the available Chinese material is far too scarce and valuable.

Figure 25. Maps of China (country is shown in dark grey with highlighted country and province borders) and South East Asia displaying known species distribution ranges (in green) of P. longigaster, P. longmenense, P. shohei, and P. zhaoi.
By applying micro-CT scanning and virtually "shaving" the specimens, we were able to examine proceratiine morphology in more detail, resulting in clearer diagnostic character definitions and species delimitations without causing any physical harm to the few available specimens. This approach might also be useful for morphological examinations of other very hairy groups of ants, such as Discothyrea Roger or many species of Tetramorium Mayr (previously grouped in the genus Triglyphothrix Forel). For those ants it could complement high-resolution montage images illustrating specimens including pilosity and hairs, which can be diagnostic characters and useful for species identification, as illustrated in the present study. | 2018-07-14T00:26:24.750Z | 2018-04-07T00:00:00.000 | {
"year": 2018,
"sha1": "a440cec1928ef732be6f81bb8eb5713fc0098ea9",
"oa_license": "CCBY",
"oa_url": "https://zookeys.pensoft.net/article/24908/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a440cec1928ef732be6f81bb8eb5713fc0098ea9",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
15994875 | pes2o/s2orc | v3-fos-license | Characterizing the adult and larval transcriptome of the multicolored Asian lady beetle, Harmonia axyridis
The reasons for the evolution and maintenance of striking visual phenotypes are as widespread as the species that display these phenotypes. While study systems such as Heliconius and Dendrobatidae have been well characterized and provide critical information about the evolution of these traits, a breadth of new study systems, in which the phenotype of interest can be easily manipulated and quantified, is essential for gaining a more general understanding of these specific evolutionary processes. One such model is the multicolored Asian lady beetle, Harmonia axyridis, which displays significant elytral spot and color polymorphism. Using transcriptome data from two life stages, adult and larva, we characterize the transcriptome, thereby laying a foundation for further analysis and identification of the genes responsible for the continual maintenance of spot variation in H. axyridis.
INTRODUCTION
The evolution and maintenance of phenotypic polymorphism and striking visual phenotypes have fascinated scientists for many years (Darwin, 1859; Endler, 1986; Fisher, 1930; Gray & McKinnon, 2007; Joron et al., 2006). In general, insects have become increasingly popular as study organisms to examine phenotypic variation (Jennings, 2011; Joron et al., 2006). One such insect displaying extensive elytral and spot variation that has yet to be extensively studied is the multicolored Asian lady beetle, Harmonia axyridis.
The mechanisms responsible for the evolution of these phenotypes are as widespread as the species that display them. Aposematism, crypsis, and mimicry may all play a role in the evolution of phenotypic variation in the animal kingdom. Members of the family Dendrobatidae, the poison dart frogs, are aposematically colored (Cadwell, 1996), while Tetrix subulata grasshoppers maintain their phenotypic polymorphism to aid in crypsis (Karpestam, Merilaita & Forsman, 2014). A mimicry strategy is utilized by one particularly well-characterized group exhibiting phenotypic polymorphism, the Neotropical butterfly genus Heliconius. The color, pattern, and eyespot polymorphism seen in Heliconius is thought to have arisen as a result of Müllerian mimicry (Flanagan et al., 2004), and the supergenes underlying these traits have been well characterized (Kronforst et al., 2006; Joron et al., 2006; Jones et al., 2012).
These studies, aiming to elucidate the mechanistic links between phenotype and genotype, present a unique opportunity to gain insight into the inner workings of many important evolutionary processes. While systems like poison frogs and butterflies have been pioneering, the use of novel models, especially those that can be easily manipulated, is needed. One such study system, which possesses many of the strengths of classical models while offering several key benefits described below, is the multicolored Asian lady beetle, Harmonia axyridis. Harmonia, which is common throughout North America and easily bred in laboratory environments, possesses significant variation in elytral spot number and color.
Elytral color can be red, orange, yellow, or black, and spot numbers of H. axyridis range from zero to twenty-two (L Havens, pers. obs., 2013). The patterning is symmetrical on both wings. In some animals there is a center spot beneath the pronotum, which leads to an odd number of spots. The elytral spots are formed by the production of melanin pigments (Bezzerides et al., 2007). The frequency of different morphs varies with location and temperature (Michie et al., 2010). The melanic morph is more prevalent in Asia than in North America (LaMana & Miller, 1996; Dobzhansky, 1993). A decrease in melanic H. axyridis has been shown to be correlated with an increase in average yearly temperatures in the Netherlands (Brakefield & De Jong, 2011).
Sexual selection may play a role in color variation in H. axyridis. Osawa & Nishida (1992) remarked that female H. axyridis might choose their mates based on melanin concentration. This choice, however, has been shown to vary with season and temperature. Non-melanic (red, orange, or yellow, with any spot number) males have a higher frequency of mating in springtime, while melanic (black) males have an increased frequency of mating in summer. While this has been shown with respect to elytral color, no such findings exist for spot number. Although these spot patterns are believed to be related to predator avoidance, thermotolerance, or mate choice (Osawa & Nishida, 1992), the genetics underlying these patterns is currently unknown.
To begin to understand the genomics of elytral coloration and spot patterning, we sequenced the transcriptome of a late-stage larva and an adult ladybug. These results lay the groundwork for future study of the genomic architecture of pigment placement and development in H. axyridis.
Specimen capture, RNA extraction, library prep and sequencing
One larval (Fig. 1A) and one adult (Fig. 1B) H. axyridis were captured on the University of New Hampshire campus in Durham, New Hampshire (43.1339°N, 70.9264°W). The adult was orange with 18 spots. The insects were placed in RNAlater and immediately stored in a −80 °C freezer until RNA extraction was performed. The RNA from both individuals was extracted following the TRIzol extraction protocol (Invitrogen, Carlsbad, CA, USA). The entire insect was used for the RNA extraction protocol. The quantity and quality of extracted RNA were analyzed using a Qubit (Life Technologies, Carlsbad, CA, USA) as well as a Tapestation 2200 (Agilent Technologies, Palo Alto, CA, USA) prior to library construction. Following verification, RNA libraries were constructed for both samples following the TruSeq stranded RNA prep kit (Illumina, San Diego, CA, USA), which includes a PolyA selection step.
Sequence data preprocessing and assembly
The raw sequence reads corresponding to the two tissue types were error corrected using the software BLESS (Heo et al., 2014) version 0.17 (https://goo.gl/YHxlzI, https://goo.gl/vBh7Pg). The error-corrected sequence reads were adapter and quality trimmed following recommendations from MacManes (2014) and Mbandi et al. (2014). Specifically, adapter sequence contamination and low quality nucleotides (defined as Phred < 2) were removed using the program Trimmomatic version 0.32 (Bolger, Lohse & Usadel, 2014) called from within the Trinity assembler version 2.1.1 (Haas et al., 2013). Reads from each tissue were assembled together to create a joint assembly of adult and larva transcripts using a Linux workstation with 64 cores and 1 TB of RAM. We used flags to indicate the stranded nature of the sequencing reads and set the maximum allowable physical distance between read pairs to 999 nt (https://goo.gl/ZYP08M).
The quality of the assembly was evaluated using transrate version 1.01 (Smith-Unna et al., 2016; https://goo.gl/RpdQSU). Transrate generates quality statistics based on a process involving mapping sequence reads back to the assembled transcripts. Transcripts supported by properly mapped reads of a sufficient depth (amongst other things) are judged to be of high quality. In addition to generating quality metrics, transrate produces an alternative assembly with poorly-supported transcripts removed. This improved assembly was used for all downstream analyses and QC procedures. We then evaluated transcriptome completeness via use of the software package BUSCO version 1.1b (Simão et al., 2015). BUSCO searches against a database of highly-conserved single-copy genes in Arthropoda (https://goo.gl/bhTNdr). High quality, complete transcriptomes are hypothesized to contain the vast majority of these conserved genes as they are present in most other species.
To remove assembly artifacts remaining after transrate optimization, we estimated transcript abundance using two software packages: Salmon version 0.51 (Patro, Duggal & Kingsford, 2015; https://goo.gl/01UIF6) and Kallisto version 0.42.4 (Bray et al., 2015; https://goo.gl/BsQMpr). Transcripts whose abundance exceeded 0.5 TPM in either adult or larval datasets using either estimation method were retained. TPM differs from FPKM with regard to the order of operations performed, as TPM normalizes for gene length first. We evaluated transcriptome completeness and quality again after TPM filtration, using BUSCO and transrate, to ensure that our filtration processes did not significantly affect the biological content of the assembly.
We identified and removed potential plant, fungal, bacterial, and vertebrate contamination using a blastx search. We created a custom protein database based on the collection of protein sequences from each taxonomic group available for download from RefSeq (ftp://ftp.ncbi.nlm.nih.gov/refseq/release/). We queried this database, inferring a given sequence to be a contaminant if the best blast hit (i.e., the one with the lowest e-value) was plant, fungal, bacterial, or vertebrate in origin, rather than invertebrate.
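The best-hit contamination call can be sketched as follows. This is a simplified illustration, assuming hits have already been parsed from tabular blastx output into (query, taxon group, e-value) tuples; the parsing step and the taxon labels are our assumptions, not the authors' code:

```python
def flag_contaminants(hits, native="invertebrate"):
    """`hits`: iterable of (query_id, taxon_group, evalue) tuples parsed
    from tabular blastx output against a taxon-labelled protein database.
    A transcript is flagged as a contaminant when its single best hit
    (lowest e-value) belongs to a non-native taxon group."""
    best = {}  # query_id -> (evalue, taxon_group) of the best hit so far
    for query, taxon, evalue in hits:
        if query not in best or evalue < best[query][0]:
            best[query] = (evalue, taxon)
    return {q for q, (_, taxon) in best.items() if taxon != native}

# Toy hit table: TR1's best hit is bacterial, TR2's is invertebrate.
hits = [
    ("TR1", "bacterial", 1e-50), ("TR1", "invertebrate", 1e-10),
    ("TR2", "invertebrate", 1e-80), ("TR2", "plant", 1e-5),
]
contaminants = flag_contaminants(hits)
```

Note that only the single best hit decides the call, so a transcript with a weaker secondary hit to, say, a plant protein (as for TR2 above) is still retained.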
In addition to this, we attempted to identify loci involved in color, color patterning, and more generally phenotypic polymorphism. To accomplish this, we downloaded a set of 1,008 candidate genes previously identified as underlying phenotypic evolution in other species, from Dryad (Martin & Orgogozo, 2013a;Martin & Orgogozo, 2013b). We used the gene name listed in this dataset to download the protein sequence from Uniref90. We used a blastX search strategy to identify potential homologues in the dataset.
To identify patterns of gene expression unique to each life stage, we used the expression data described above. We identified transcripts expressed in one stage but not the other, as well as cases where expression occurred in both life stages. The Uniprot ID was identified for each of these transcripts using a blastX search (https://goo.gl/J9saMj), and these terms were used in the web interface Amigo (Carbon et al., 2009) to identify Gene Ontology terms that were enriched in either adult or larva relative to the background patterns of expression. The number of unique genes contained in the joint assembly was estimated via a BLAST search against the complete gene sets of the human, Homo sapiens, the fruit fly, Drosophila melanogaster, and the flour beetle, Tribolium castaneum.
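The stage-partitioning step might look like the following sketch (the 0.5 TPM cutoff comes from the filtering described earlier; the function and variable names are illustrative assumptions):

```python
def partition_by_stage(adult_tpm, larva_tpm, threshold=0.5):
    """Split transcript IDs into adult-only, larva-only, and shared sets,
    calling a transcript 'expressed' in a stage when its estimated
    abundance exceeds `threshold` TPM in that stage."""
    adult = {tx for tx, t in adult_tpm.items() if t > threshold}
    larva = {tx for tx, t in larva_tpm.items() if t > threshold}
    return adult - larva, larva - adult, adult & larva

# Toy abundances (transcript ID -> TPM) for each life stage.
adult = {"TR1": 2.0, "TR2": 0.0, "TR3": 5.1}
larva = {"TR1": 1.1, "TR2": 3.3, "TR3": 0.2}
adult_only, larva_only, shared = partition_by_stage(adult, larva)
```

The three resulting sets correspond to the adult-unique, larva-unique, and shared transcript categories whose counts are reported in the Results.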
Data availability
All read data are available under ENA accession number PRJEB13023. Assemblies and data matrices are available at https://goo.gl/D3xh65, and will be moved to Dryad following manuscript acceptance.
RNA extraction, assembly and evaluation
RNA was extracted from the whole bodies of one adult and one larval Harmonia axyridis. The quality was verified using a Tapestation 2200 (all RIN scores >8) as well as a Qubit. The initial concentration of the larval sample was 83.2 ng/µL, while that of the adult sample was 74.7 ng/µL. The adult and larval libraries contained 58 million and 67 million strand-specific paired-end reads, respectively. The reads were 125 base pairs in length.
The raw Trinity assembly of the larval and adult reads resulted in a total of 171,117 contigs (82 Mb) exceeding 200 nt in length. 526 contaminant sequences were removed. This assembly was evaluated using Transrate, producing an initial score of 0.10543 and an optimized score of 0.29729. The optimized score indicated that the optimized assembly was better than 50% of NCBI-published de novo transcriptomes (Smith-Unna et al., 2016). This transrate-optimized assembly (89,305 transcripts, 62 Mb) was further filtered by removing transcripts whose expression was less than 0.5 TPM. After filtration, 33,648 transcripts (44 Mb) remained. To assess for the inadvertent loss of valid transcripts, we ran BUSCO before and after this filtration procedure. The percent of Arthropoda BUSCOs missing from the assembly rose slightly, from 18% to 21%. Transrate was run once again and resulted in a final assembly score of 0.29112. This score is indicative of a high-quality transcriptome appropriate for further study (Smith-Unna et al., 2016). To understand how many distinct genes our transcriptome contained, we conducted a blast search against Homo sapiens, Drosophila melanogaster, and Tribolium castaneum. This search resulted in 7,246, 7,739, and 7,741 unique matches, respectively, which serve as estimates of the number of unique genes expressed in these two life stages. The final assembly is available at https://goo.gl/nWdBuv.
Annotation
The assembled transcripts were annotated using the software package dammit!, which provided annotations for 23,304, or 69%, of the transcripts (available here: https://goo.gl/gpGXLG). These annotations included putative protein and nucleotide matches, 5′ and 3′ UTRs, as well as start and stop codons. In addition, analysis with Transdecoder yielded 14,518 putative protein sequences (available here: https://goo.gl/qVLWwD), which were annotated by 4,139 distinct Pfam protein families, while 176 transcripts were determined to be non-coding (ncRNA) based on significant matches to the Rfam database (available here: https://goo.gl/x1n7jC). Lastly, 2,925 proteins (7.8% of the total) were determined to be secretory in nature by the software package signalP (available here: https://goo.gl/z0ra1g).
Annotation of the sequence dataset resulted in the identification of a host of transcripts that may be of interest to other researchers, including: 43 heat-shock and 8 cold-shock transcripts, 87 homeobox-domain-containing transcripts, 122 7-transmembrane-domain-containing transcripts (18 GPCRs), 13 solute carriers, 143 ABC-transport-domain-containing transcripts, and 21 OD-S (pheromone-binding) transcripts.
A complement of immune-related genes was discovered as well. These include a single member each of the Attacins and Coleoptericins, two TLR-like genes, and seven Group 1 and 34 Group 2 C-type lectin receptors (CLRs). Two CARD-containing cytoplasmic pattern recognition receptor (CRR) genes were discovered, as were 3 MAP kinase-containing transcripts. Finally, 119 RIG-I-like receptors (RLRs) were found.
The focused search for genes previously implicated in phenotypic polymorphism (Martin & Orgogozo, 2013a; Martin & Orgogozo, 2013b) identified 483 Harmonia transcripts, corresponding to 65 distinct loci (Table S1). These loci include the transcription factors bab1, Distal-less, and Optix; the enzymes BCMO1, ebony, and yellow; and the transporters ABCC2, SLC24A5, and TPCN2, all related to color and color patterning. These genes, and the others identified in Table S1, are likely to provide fodder for future research.
Analyses of the differences between adult and larval life stages were carried out as well. Because each life stage was sequenced from only a single individual, these results should be interpreted with some caution. The vast majority of transcripts were observed in both life stages (n = 30,630, 91%), with a small number being expressed uniquely in larva (n = 1,094) and adult (n = 1,922). Of the transcripts expressed uniquely in either larva or adult, 45% and 42%, respectively, were annotated using at least one method via the software package dammit, and 6.1% and 4.6%, respectively, were found to be secretory in nature via signalP analysis.
CONCLUSIONS
Phenotypic polymorphisms and striking visual phenotypes have fascinated scientists for many years. The evolutionary causes for the maintenance of these phenotypes are as numerous as the species that display them. One organism, Harmonia axyridis, provides a unique opportunity to explore the genetic basis behind the maintenance of an easy-to-quantify variation: elytral spot number. While understanding these genomic mechanisms is beyond the scope of this paper, we do provide a reference transcriptome for H. axyridis, a foundational resource for this work.
This study indicates that most genes expressed at levels greater than 0.5 TPM were shared between the adult and larval individuals. While the majority of proteins identified in the assembled transcriptome were structural in function, analyses of protein families using the Pfam database indicated the presence of pigment proteins. In particular, RPE65, which functions in the cleavage of carotenoids, was found. In H. axyridis, increased carotenoid pigmentation has been linked to increased alkaloid amounts (Britton, Liaaen-Jensen & Pfander, 2008). In addition, the elytral coloration of the seven-spot ladybug, Coccinella septempunctata, is a result of several carotenoids (Britton, Liaaen-Jensen & Pfander, 2008). While larvae are mostly black (Fig. 1A), we posit that the orange sections on the lower back could be due to carotenoid production. Moreover, this study provides a necessary foundation for the continued study of the link between genes and the maintenance of variation in H. axyridis.
ADDITIONAL INFORMATION AND DECLARATIONS
Funding
This project was supported by MacManes lab startup funds provided by the University of New Hampshire. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Grant Disclosures
The following grant information was disclosed by the authors: University of New Hampshire. | 2017-09-24T23:11:42.509Z | 2015-12-15T00:00:00.000 | {
"year": 2016,
"sha1": "12fa8ca26f269251af44da6085325ad60c370adc",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.2098",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "12fa8ca26f269251af44da6085325ad60c370adc",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
52111686 | pes2o/s2orc | v3-fos-license | A review of the pathology and treatment of canine respiratory infections
Correspondence: Tanya LeRoith, Virginia-Maryland Regional College of Veterinary Medicine, Virginia Tech University, Department of Biomedical Sciences and Pathobiology, 225 Duckpond Drive, Blacksburg, VA 24061, USA. Tel +1 540 231 7627; Fax +1 540 231 6033; Email tleroith@vt.edu

Abstract: Numerous infectious agents are responsible for causing primary or secondary respiratory disease in dogs. These agents can cause upper or lower respiratory infections commonly observed in veterinary practices. Clinical signs might vary from mild dyspnea, sneezing, and coughing to severe pneumonia with systemic manifestations. Depending on the etiologic agent, the gross and microscopic changes observed during these infections can be rather unspecific or have highly characteristic patterns. While histopathology and cytology are not always required for diagnosis of respiratory infections, they are often useful for establishing a definitive diagnosis and identifying specific etiologic agents. Research regarding the epidemiology, pathogenesis, diagnostics, and clinical manifestations of these infectious pathogens provides valuable information that has improved treatments and management of the diseases they cause. This review discusses the epidemiology, general clinical characteristics, and pathologic lesions for some of the important viral, bacterial, fungal, and parasitic etiologies of canine respiratory disease.
Introduction
The respiratory tract is constantly exposed to infectious agents that can reach the upper and lower respiratory tract aerogenously or hematogenously. The invasion of the respiratory tract by deleterious pathogens is normally prevented by physical, chemical, and immunologic mechanisms including mucus and mucociliary clearance, various innate antimicrobial factors, alveolar macrophages, and the pulmonary immune response. Some respiratory pathogens cause infection as secondary, opportunistic invaders after host defense mechanisms have been disrupted by other factors (eg, immunosuppression, environment, stress, toxins, and concurrent infection). There are also primary pathogens that have developed mechanisms or virulence factors enabling them to disrupt and evade the host defenses without predisposing factors. Regardless of whether agents affect the upper or lower respiratory tract, the host response will vary based on the severity of infection, the pathogenic mechanism of the pathogen, and the immune status of the host. Thus, morphological changes will similarly vary from mild circulatory changes, such as mucosal or pulmonary edema and congestion, to severe mucosal or pulmonary inflammation. These pathological changes will impair normal homeostasis and manifest clinically as respiratory dysfunction. Clinical identification of infectious agents can be challenging since respiratory signs can be unspecific, varying from a mild unproductive cough to severe pneumonia accompanied by systemic changes. Currently, numerous diagnostic tools are available for the identification of respiratory agents. The development of molecular diagnostic tools allows rapid identification of a wide variety of pathogens and establishment of more accurate treatments. 1 However, diagnosis of respiratory diseases is still often based on history, clinical signs, radiography, cytology, and bacterial culture. 2 Histopathology is rarely used as a diagnostic tool in practice for lower respiratory infections; however, nasal biopsies are commonly used for identification of agents affecting the upper respiratory tract. 1 Due to potential complications associated with collection procedures, biopsies of the lungs for histopathology are less frequently utilized for etiological diagnosis. Instead, morphological changes are most commonly evaluated by cytology. 3 Still, histopathology specimens obtained during necropsy are considered the best resource for pathogenesis studies and in situ detection of agents associated with local changes. This review discusses general and specific morphological changes, the pathogenesis, clinical signs, and treatment for some of the important viral, bacterial, fungal, and parasitic causes of infectious respiratory disease in dogs.
Infectious nasal diseases
Mycotic rhinitis
Fungal rhinitis is most commonly caused by ubiquitous soil fungi, notably Aspergillus fumigatus, or rarely by Penicillium spp., in young to middle-aged dolichocephalic and mesaticephalic dogs. Presenting signs include mucopurulent discharge (initially unilateral), sneezing, epistaxis, and nasal depigmentation or ulceration. Stertor and stridor are variably present and, in advanced cases, facial deformity may be appreciated with or without lacrimal duct obstruction and epiphora. 4 It is not known why A. fumigatus causes disease in only a small proportion of dogs 5 and whether disease results from higher exposure to fungi or from suppression of normal nasal defenses. 6 Fungal organisms are cleared by the innate immune system, and invasion requires adherence, penetration of the respiratory epithelium, destruction of surrounding cells, and resistance to phagocytosis. 4 The lack of fungal invasion into mucosal tissues and adjacent bone argues against systemic immune dysfunction. 5 A predominant T helper 1-regulated cell-mediated immune response seems to be effective at preventing systemic dissemination of the fungus; however, it is not effective at clearing infection from the nasal cavity. 7 Persistent infection may be due to defects in local defense mechanisms 7 or in the local immune response. 5,7 An unexplained increase in interleukin-10 may be playing a role in local immune dysfunction. 5 Interleukin-10 impairs the antifungal functions of phagocytes, secretion of proinflammatory cytokines, and protective cell-mediated immunity. 8 While these functions may be beneficial during resolution of the inflammatory response after infection is cleared, 8 they may be detrimental during fulminant mycotic infection. 5 Invasion into bone is usually limited to the nasal turbinates, 4 with subsequent destruction being caused by a combination of host immune responses and dermonecrolytic fungal toxins. 5 Various virulence factors have been studied in vitro and may contribute to the disease by interfering with innate host defenses; however, their significance in the pathogenesis of canine nasal disease is not known. 5

Definitive diagnosis is best achieved with direct visualization with rhinoscopy or endoscopy, followed by microscopic examination of scrapings or biopsies of fungal plaques. 5 Lesions are often surrounded by hyperemic and edematous mucosa, and there is purulent exudate, caseous debris, and/or hemorrhage in the nasal cavity. 6 Destruction and distortion of the nasal turbinates vary from mild to severe. Fungal plaques appear as white, dull, usually flat, and irregular masses sitting on the mucosal surface covered with mucopurulent exudate (Figure 1A). Fungal colonies sometimes form upright spherical structures, or become grayish to black and form solid sheets of material covering large portions of the nasal cavity. 9 Histopathology reveals an ulcerated mucosa covered with a plaque of necrotic tissue admixed with fibrin. The underlying lamina propria is heavily infiltrated with lymphocytes and plasma cells with fewer macrophages or, in mild cases, may show mild infiltration with neutrophils. 7 Fungal organisms are rarely, if ever, observed invading the mucosa. Instead, they are found in superficial necrotic plaques and free material within the nasal cavity 7 as dense accumulations of nonpigmented, septate, 3-8-µm hyphae that branch dichotomously at 45-degree angles. 4

Systemic treatment with oral antifungals is not typically efficacious as a sole therapy, and medications are expensive 5 and associated with side effects including hepatotoxicity, anorexia, or vomiting. 10 Instead, treatment is focused on topical antifungal treatments, which may be supplemented with systemic antifungals. Localized therapy by instilling topical antifungals such as clotrimazole and enilconazole into the nasal sinuses is preferred. 4,11-14 To prolong drug-contact time and reduce anesthetic duration, researchers have investigated instilling antifungal creams, instead of suspensions, into the nasal sinuses to act as a local depot of drug therapy. 15 Extensive rhinoscopic debridement of the nasal cavities before drug infusion is also important in improving treatment outcome. 16
Rhinosporidiosis
Nasal disease may rarely be caused by Rhinosporidium seeberi, an aquatic protistan parasite of the Mesomycetozoa family, which some refer to as the DRIP (Dermocystidium, the "rosette agent," Ichthyophonus, Psorospermium) clade. 17,18 Development of mature endospores is stimulated by water exposure; therefore, this agent is associated with wet environments. 19 The parasite is endemic in India, Sri Lanka, and Argentina, 19 and sporadic cases have been reported in Canada, 20 the United Kingdom, 21 Italy, 22 and the United States. The majority of cases reported in the United States are from southeastern and south central states, extending as far north as Missouri. 23 Clinical signs include wheezing, sneezing, unilateral seropurulent nasal discharge, epistaxis, and possibly a visible mass within the nares. 19 Exposure to contaminated water 6 and history of trauma 19,22 have been suggested as predisposing factors. Some authors suggest hunting and roaming dogs are at increased risk due to increased exposure to these factors. 22 Animal-to-animal and animal-to-human transmission have not been documented, 22 which could be explained by recent research suggesting that there are multiple host-specific strains. 24 The typical lesion is a single, unilateral nasal polyp with a characteristic "strawberry" appearance that is soft, pink, and bleeds easily. 22,25 Pinpoint white foci, representing mature sporangia, may be visible grossly. 6,19 Nasal scrape smears are preferred over fine-needle aspiration since lesions tend to bleed, and demonstration of intact spherules with endospores free and within sporangia is diagnostic. 25 However, sporangia are not often seen in cytology as they do not exfoliate readily. 22

Alternatively, histopathology on excisional biopsies is both therapeutic and diagnostic, showing polypoid proliferation of the submucosa, hyperplasia and metaplasia of the overlying epithelium, granulomatous to pyogranulomatous inflammation, and sporangia in various stages of maturation (Figure 1B). 22 Endospores are round, eosinophilic to magenta to basophilic bodies that are 5-15 µm in diameter with internal eosinophilic globules and thick walls. 23 Stains that may be helpful in identifying endospores include toluidine blue, periodic acid-Schiff reaction, methenamine, Wright's, and Gridley's. 19,23 The preferred method of treatment for nasal rhinosporidiosis is surgical excision of lesions; however, there have been reports of slowly progressive recurrence after surgery. Other treatments include systemic ketoconazole or dapsone; however, these drugs are often associated with side effects and there is limited information regarding efficacy in canine patients. 19,22
Nonspecific infectious rhinitis
Nonspecific infectious rhinitis is rare in dogs and commonly occurs secondary to nasal trauma, allergy, or inhalation of foreign material. Clinical signs include sneezing, coughing, and watery-mucous to suppurative nasal discharge. Depending on the severity of infection, these signs may be accompanied by fever, lethargy, and anorexia. There are no specific agents associated with this clinical presentation, although canine parainfluenza virus, canine adenovirus type 2 (CAV-2), and Bordetella bronchiseptica are the most commonly isolated agents. 26 Morphological changes are nonspecific and include mucosal hyperemia and swelling from edema fluid and glandular secretions. Histologically, these changes vary with the severity and chronicity of the disease. Initially, there is minimal ballooning degeneration of the nasal epithelium accompanied by ciliary loss, submucosal edema, and occasional infiltration of lymphocytes and plasma cells. In more severe cases with secondary bacterial invasion, the mucosa is infiltrated by neutrophils and there is necrosis and desquamation of the epithelium. In chronic stages, the lamina propria is thickened by abundant fibrous connective tissue and there is mucosal gland atrophy and epithelial dysplasia. Because predisposing factors are commonly unknown, treatments are targeted at managing nonspecific respiratory symptoms and secondary bacterial infections.
Upper respiratory infections
In dogs, respiratory infection is typically an upper airway disease referred to as laryngotracheitis, infectious tracheobronchitis (ITB), infectious respiratory disease complex, or kennel cough. The disease is highly contagious and characterized by inflammation of the upper respiratory tract. Many cases of ITB involve both viral and bacterial pathogens, and numerous agents have been isolated. The most commonly identified etiologic agents include canine parainfluenza virus, CAV-2, and B. bronchiseptica. As secondary agents, canine herpesvirus, canine reovirus (types 1, 2, and 3), and bacteria such as Streptococcus spp., Pasteurella spp., Pseudomonas spp., coliforms, and mycoplasmas are often reported. Recently, reports have implicated the involvement of unusual agents such as Streptococcus equi subspecies zooepidemicus 27,28 and/or Mycoplasma cynos, 29,30 either alone or in association with predisposing viral agents, as part of this multietiological complex. New viruses have also emerged as novel pathogens of the canine respiratory tract, including canine influenza virus (CIV), group 1 canine coronavirus (canine pantropic coronavirus), group 2 canine coronavirus (canine respiratory coronavirus), 26 and pneumovirus; 31 however, the role of these pathogens as causes of ITB is still under debate.
Pathogens are normally transmitted by coughing, sneezing, or nose-to-nose contact in dogs that are densely housed (ie, in shelters, day care centers, boarding kennels, or veterinary hospitals). 32 The occurrence of clinical signs is due to both high numbers of pathogens in the same densely housed environment and inadequate sanitary conditions. Important factors such as population density, ventilation, sanitation, and staff training also play a role. 33 Moreover, these animal facilities have rapid animal turnover, and a high percentage of young animals may be immunocompromised due to poor or completely absent passive immunity. 34 In animal shelters, it has been demonstrated that this disease has a different epidemiological pattern compared to dogs in households. 33 Once the infection is established in a shelter, it quickly reaches high morbidity rates. Normally, clinical signs are self-limiting; however, under predisposing conditions such as overpopulation and immune deficiency, the clinical presentation might vary from severe to fatal.
32 Lesions within the upper respiratory tract vary from apparently normal airways to hyperemia and mucopurulent exudate, depending on the severity of the infection and secondary bacterial involvement. Inconsistently, associated lymphoid tissue, tonsils, and retropharyngeal and tracheal lymph nodes can be moderately enlarged. Histologically, changes are confined to the trachea and bronchi, and are characterized by initial lymphoplasmacytic infiltration of the lamina propria accompanied by interstitial edema and capillary hyperemia. Necrosis of the epithelium is uncommon and secretory glands can be hyperplastic. In the most severe cases, tracheal and bronchial lumens can be occluded by abundant mucoid material intermixed with neutrophils and bacterial colonies. Histological changes are rather unspecific and histopathology is rarely used as a diagnostic technique. Moreover, an etiological diagnosis is uncommon since a large number of pathogens frequently interact and/or overlap. Thus, the most common diagnostic routine is based on the presence of clinical signs.
Treatment of ITB is symptomatic; however, due to the common occurrence of secondary infections with a broad spectrum of bacteria, antibiotic treatment is the first therapeutic approach. Antibiotics should be selected based on culture and sensitivity tests of airway specimens collected by transtracheal aspiration or bronchoscopy. 35 The antibiotics most commonly used are amoxicillin/clavulanic acid, cephalexin, clindamycin, and azithromycin. These antibiotics can reach effective concentrations in the tracheobronchial mucosa as well as the pulmonary parenchyma. When the bacterial infection is severe and animals do not respond to parenteral antibiotics, aerosolized kanamycin sulfate or gentamicin sulfate has been shown to reduce B. bronchiseptica numbers in the distal trachea and bronchi. 36 In order to ameliorate the clinical signs of ITB, cough suppressants and bronchodilators have been recommended. 37 Antitussives with codeine derivatives such as hydrocodone or butorphanol are used to control persistent nonproductive coughing. Even though ITB does not normally cause bronchial hyperreactivity/spasm, bronchodilators such as theophylline and aminophylline may be used to prevent bronchospasm and therefore act as effective cough suppressants. Usually, clinical signs are not severe and affected dogs do not present with anorexia. However, during the acute phase or in prolonged severe cases, supportive care by maintaining adequate caloric and fluid intake may be necessary. 36 There are currently vaccines for most of the agents associated with ITB. Some of the agents responsible for ITB, such as canine distemper virus and CAV-2, are included in the core vaccine.
33 The main goal of core vaccines is to protect animals against life-threatening diseases that have a global distribution. However, the World Small Animal Veterinary Association has defined a core vaccine for shelter-housed dogs or those in high-density populations, which are exposed to different environments compared to those in households. Thus, the vaccination guidelines group of the World Small Animal Veterinary Association has defined that this core vaccine for shelter-housed dogs must include canine distemper virus, canine parvovirus type 2, CAV-2, intranasal B. bronchiseptica Bb1, and canine parainfluenza virus. 34 The CIV vaccine is included in the recommended noncore vaccines for shelter dogs; however, the benefit of this vaccine is limited if exposure cannot be prevented for at least a week after the second immunization. 38
Lower respiratory disease
While a number of viral agents have been implicated in case reports of lower respiratory disease, 29,39-43 the bulk of clinically recognized viral pneumonia can be attributed to three viruses: canine distemper virus, canine parainfluenza virus, and CAV-2. Pure viral pneumonia is characterized by a nonspecific interstitial pattern. When the thorax is opened, the lungs fail to collapse and are diffusely red to gray, meaty, and wet. Rib impressions may be noted on the visceral pleural surface, and there is minimal visible exudate within airspaces unless a secondary bacterial pneumonia has developed. Histological changes observed during the early stages are nonspecific and mostly associated with circulatory changes: congestion and hyperemia of the small and medium-sized capillaries. As the infection progresses, these changes are characterized by thickening of the alveolar septae by interstitial edema, lymphocytes, plasma cells, and occasionally a few macrophages, and alveoli are lined with hyperplastic type II pneumocytes. Pulmonary fibrosis can occur as a result of chronic interstitial pneumonia. 6 The clinical presentation of a pure viral pneumonia is rare, since viral infection normally impairs pulmonary defenses and inevitably results in secondary bacterial infections and bronchopneumonia. 36 Based on the exudate observed in the later stages of disease, bacterial bronchopneumonias can be classified as suppurative or fibrinous; however, since both exudates can coexist, this morphological differentiation is rather difficult. Suppurative bronchopneumonias are histologically characterized by bronchial, bronchiolar, and alveolar lumina that are filled with numerous neutrophils, macrophages, and fibrin, with occasional areas of hemorrhage. 6 Treatment is targeted to managing clinical signs and secondary bacterial infections, similar to those described for ITB; however, in more severe cases dogs may need oxygen supplementation or mechanical ventilation.
44 The epidemiology and pathogenesis of the specific etiologic agents are well studied and reviewed; therefore, only those that are newly emerging and of concern will be mentioned, because of their independent ability to produce disease with higher mortality rates than expected for typical ITB or pneumonia.
Canine influenza
CIV is considered an emerging disease and has a fairly recent history. Since the 1970s, studies have confirmed that dogs can be experimentally infected with human H3N2. 45 Prior to 2004, influenza was not considered a specific canine disease. Dogs were also not considered reservoirs for influenza, because they did not maintain their influenza subtype and could not transmit influenza virus between dogs. 46 In 2004, the first interspecies transmission from an equine strain was described in a greyhound population in Florida. 47,48 The disease has become widespread, currently affecting 30 states in the United States, and today is considered a major threat to densely housed dogs. After the first report in the United States, several studies demonstrated interspecies transmission in numerous countries. In the United Kingdom, serological, immunohistochemical, and molecular retrospective studies have linked outbreaks of respiratory disease in English foxhounds with equine influenza virus. 49 During an equine influenza outbreak in Australia, the presence of influenza virus was confirmed in dogs in close contact with H3N8 equine influenza virus-infected horses. 50 Conversely, attempts to prove the presence of influenza in Canada showed that only one dog was serologically positive of 225 evaluated dogs presented for primary care to veterinary clinics. This dog had previously been in contact with race tracks in Texas. 51 Influenza A virus is composed of eight separate strands of ribonucleic acid that code for eleven proteins. The virus can be classified into subtypes based on two surface antigens, the hemagglutinin and neuraminidase proteins. There are 16 different types of hemagglutinin (H1 to H16) and nine neuraminidase antigens (N1 to N9). 52 Sequence analysis of the CIV indicates that the strain has 96% nucleotide sequence identity with H3N8 equine influenza virus.
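The subtype nomenclature described above can be sketched with a short, purely illustrative helper; the antigen ranges H1-H16 and N1-N9 come from the text, while the function name and implementation are hypothetical:

```python
import re

# Valid antigen ranges from the text: 16 hemagglutinin types (H1-H16)
# and 9 neuraminidase types (N1-N9).
H_RANGE = range(1, 17)
N_RANGE = range(1, 10)

def parse_subtype(label: str) -> tuple[int, int]:
    """Parse an influenza A subtype label such as 'H3N8' into (H, N).

    Raises ValueError for labels outside the known antigen ranges.
    """
    m = re.fullmatch(r"H(\d+)N(\d+)", label.strip().upper())
    if not m:
        raise ValueError(f"not a subtype label: {label!r}")
    h, n = int(m.group(1)), int(m.group(2))
    if h not in H_RANGE or n not in N_RANGE:
        raise ValueError(f"antigen number out of range: {label!r}")
    return h, n

# The canine-associated strains mentioned in the text:
assert parse_subtype("H3N8") == (3, 8)  # equine-origin CIV
assert parse_subtype("h3n2") == (3, 2)  # avian-origin strain from Korea/China
```

For example, the equine-origin canine strain H3N8 parses to hemagglutinin 3 and neuraminidase 8, while a label such as "H17N1" falls outside the ranges given in the text and is rejected.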
48 The first report of influenza in dogs associated with H3N8 infection described a highly contagious respiratory disease with high morbidity (reaching 60%-80%) and low mortality (1%-5%). 53 However, in the first report within the Florida greyhound population, mortality rates reached 36%. 48 H3N8 is not the only strain involved in canine infection. Serological reports from South Korea demonstrated high antibody prevalence against H3N2. 54,55 Later, surveys performed in China demonstrated that the H3N2 strain observed in Korea was also responsible for sporadic cases in China. 56 Additional studies revealed that respiratory disease in several kennels in South Korea was also caused by an H3N2 strain with 95%-99% homology to influenza strains of avian origin. 55,57 Infection with H3N2 caused only mild transient respiratory disease. 58 Interspecies transmission of the high pathogenicity avian influenza virus H5N1 strain has also been reported in dogs. 59 In 2004, a dog with severe respiratory signs was reported in Thailand. The dog died several days after ingestion of a duck carcass from a region where an outbreak of high pathogenicity avian influenza virus H5N1 had been described. 60 Phylogenetic analysis revealed that the virus isolated from the dog was closely related to high pathogenicity avian influenza virus H5N1 recovered from avian influenza outbreaks in the same period. 60 Despite numerous studies demonstrating that dogs have the epithelial receptor necessary for viral attachment in the upper and lower respiratory tract, 61 H5N1 dog-to-dog transmission could not be demonstrated in contact animals. 62 Unlike H5N1 influenza virus, canine-adapted H3N2 influenza virus is capable of dog-to-dog and dog-to-cat transmission and causes respiratory disease in both species.
63,64 Two clinical presentations have been described in dogs with CIV infection. A milder, transient respiratory disease is the most common presentation and resembles ITB. The disease is characterized initially by fever, followed by sneezing, ocular discharge, cough, and purulent nasal discharge that resolves 10-14 days after infection. 65 The peracute, and fatal, presentation is characterized mainly by severe pulmonary, pleural, and mediastinal hemorrhage, and may be the result of a secondary bacterial infection. 66 The histological changes that normally accompany CIV infection are rather nonspecific and vary from mild tracheitis, bronchitis, and bronchiolitis to suppurative bronchopneumonia. 67 Nevertheless, these lesions are insufficient to make a specific diagnosis and require complementary diagnostics such as viral isolation, polymerase chain reaction, immunohistochemistry, and serology. 68 The pathogenesis of CIV does not vary significantly from other influenza subtypes affecting mammals. The virus replicates in the epithelial cells of the upper and lower respiratory tract causing epithelial necrosis, followed by neutrophilic infiltration and later infiltration of airways with mononuclear inflammatory cells. In later stages, there is type II alveolar hyperplasia. 68 Alveolar macrophages are responsible for production of tumor necrosis factor-α and other important proinflammatory cytokines (interleukin-1 and interleukin-8), which establish the acute inflammatory response and appearance of clinical signs. 58 Although it is not clear whether the pulmonary source of interleukin-10 is T lymphocytes or macrophages, there is experimental evidence that this cytokine plays an important role in increasing susceptibility to bacterial pneumonia following influenza infection.
69 There is no specific treatment for CIV infection. Antibiotics are important for treating the secondary bacterial infections that are commonly associated with H3N8 CIV infections. A recombinant equine herpesvirus-1 vaccine expressing the hemagglutinin of equine H3N8 has been shown to reduce clinical signs and virus shedding in dogs challenged with a recent isolate of CIV. 70 A commercial vaccine against CIV subtype H3N8 was conditionally approved by the United States Department of Agriculture in 2009 and fully licensed in June 2010. 38,71 This inactivated vaccine has proven to control CIV clinical signs and reduce the severity of associated pulmonary lesions as well as viral shedding. In a coinfection trial with S. equi subspecies zooepidemicus, the vaccine significantly reduced clinical signs. 72 Given that the risk profile for CIV infection is similar to that of dogs exposed to kennel cough, this vaccine may be useful in those dogs vaccinated against B. bronchiseptica/parainfluenza. 65
Canine coronavirus
Canine respiratory coronavirus was first detected in the United Kingdom in 2003 from the trachea and lung tissues of dogs. 73 This virus is closely related to human and bovine coronaviruses and distinct from canine enteric coronavirus. This respiratory coronavirus is widespread in North America, Japan, and several European countries; the seroprevalence varies from 54.7% in the United States to 17.8% in Japan. 73,74 Canine respiratory coronavirus has been associated with respiratory disease, particularly in kenneled dog populations. 75 This virus has been detected by reverse transcriptase-polymerase chain reaction in asymptomatic dogs and in symptomatic dogs that suffered from mild or moderate respiratory disease. 76 The role of canine respiratory coronavirus in ITB is not clear, and it is likely that infection with this virus only induces subclinical or mild respiratory disease. Coronavirus has been associated with upper respiratory disease but is also capable of damaging the respiratory epithelium and predisposing to bacterial infection. 26 Canine pantropic coronavirus was first described in Italy, and several fatal outbreaks have since been reported around Europe. 74 These outbreaks were characterized by high mortality in young animals. Several organs including the gastrointestinal tract, nervous system, and respiratory system were affected. Postmortem examination showed extensive lobar bronchopneumonia compromising the cranial and caudal lobes, along with effusions in the thoracic cavity. 76 Viral isolation and further characterization revealed the presence of a coronavirus type 2 that contains a point mutation, which changes the cellular tropism of this virus. There is no current vaccine, and there is no cross protection with gastrointestinal coronavirus. 77
Streptococcus spp.
Streptococcus spp. are opportunistic pathogens of the upper respiratory tract. S. equi subspecies zooepidemicus is an important pathogen of horses and pigs but has recently been linked to cases of acute fatal pneumonia in dogs in several countries. 28 Numerous reports describe outbreaks of acute, hemorrhagic pneumonia characterized by morbidity of up to 100% and very high mortality reaching 50%-60%. 78 Currently there are insufficient data to determine the pathogenesis of S. equi subspecies zooepidemicus in dogs. However, several factors such as host susceptibility, coinfections, and the presence of three superantigen genes (szeF, szeN, and szeP) have been proposed as contributors to the rapid onset of disease and fast deterioration of many dogs infected with S. equi subspecies zooepidemicus. 27 High levels of proinflammatory cytokines (tumor necrosis factor-α, interleukin-6, and interleukin-8) observed during acute infection suggest that the pulmonary lesions are the result of a "cytokine storm" and acute respiratory distress syndrome. 79 Pneumonic lesions appear within 24 hours postinfection and are characterized by rubbery, mottled, dark to bright red lungs accompanied by severe hemothorax and areas of collapse. 80 In a single study from the United Kingdom, different pneumonic patterns were observed: fibrinosuppurative, necrotizing, and hemorrhagic (65.4%); fibrinous (15.4%); hemorrhagic (15.4%); and fibrinosuppurative (3.8%). 79 Common histopathologic findings include severe suppurative and hemorrhagic bronchopneumonia characterized by diffuse alveolar neutrophilic infiltration, abundant hemorrhage and edema, and large chains or colonies of gram-positive bacteria. Occasionally, pulmonary lesions are accompanied by necrosuppurative bronchitis and tracheitis.
78 Due to the acuteness and severity of clinical signs, intravenous antibiotics are the treatment of choice. Clinical isolates seem to be sensitive to the following antimicrobial agents: cefalotin, amoxicillin/clavulanic acid, tetracycline, trimethoprim/sulfadiazine, enrofloxacin, marbofloxacin, and penicillin G. Antibiotic sensitivity profiles have become a very important issue, since humans can acquire the infection from canine cases. 81
Fungal pneumonia
Fungal infection may manifest as primary pneumonia or systemic disease with dissemination to the lung and other organs. Clinical signs of lower respiratory tract infection include coughing, tachypnea, dyspnea, and exercise intolerance. Additionally, if there is dissemination, there may be other clinical manifestations referable to the affected organ systems. 82 Blastomyces dermatitidis, Histoplasma capsulatum, and Coccidioides immitis are the primary fungal pathogens involved in canine pneumonia. 82 B. dermatitidis exists as a mold (saprophytic phase) in sandy, acidic soils near water 83 characterized by hyphae that produce conidiophores with spherical or oval conidia. 84 The fungus is endemic in the basins of the Mississippi, Ohio, and St Lawrence rivers in North America 6 and is also found in Canada, Africa, and India. 85 Dogs at higher risk of infection are highly involved in outdoor activities and are often 2-4 years old, male intact, and medium to heavy weight hound or sporting dogs. 86,87 Spores produced by mycelial growth are inhaled and enter terminal airways, 83 where they transform into a yeast (parasitic phase) that reproduces by broad-based single buds. 84 B. dermatitidis quickly disseminates to other organs through vascular and lymphatic vessels, and in some cases, lung lesions have resolved by the time clinical signs referable to other affected organs arise. 83 H. capsulatum is a dimorphic fungus that exists as a mold (saprophytic phase) at temperatures of 25°C-30°C, characterized by septate hyphae with microconidia and macroconidia. 84 The fungus is endemic in temperate and subtropical regions, with most cases in the United States occurring in the river valleys of the Ohio, Missouri, and Mississippi rivers. 88 The infective fungal particles, microconidia and macroconidia, are inhaled and deposited in the lower respiratory tract where they transform into yeast (parasitic phase). 84,88,89 Virulence factors of H.
capsulatum allow the organism to enter and survive within phagocytes and subsequently disseminate to monocyte-rich organs early in the course of disease. 84,88,90 C. immitis is a mold in soil (saprophytic phase) made up of hyphae with secondary branches and chains of infectious arthroconidia. The regional distribution of this fungus is primarily the Lower Sonoran life zone, which includes areas of the southwestern United States (Arizona, Utah, New Mexico, Nevada, and Texas), Mexico, and Central and South America. 82,84,91 In tissue (parasitic phase), the arthroconidia grow into spherical sporangia ("spherules"), which produce hundreds of endospores. 84 Veterinarians within an endemic region reported that clinical illness seems to occur most commonly in young adult dogs. Additionally, dogs that spent more time outdoors, had more land to roam, and were exposed to more dusty environments in endemic areas were at increased risk of infection. 92 Dissemination can occur to nearly any tissue, creating a wide range of clinical signs outside the respiratory tract. In dogs with chronic complaints of respiratory disease, lameness, and/or neurologic signs that also have a travel history to endemic areas within the past 3 years, C. immitis infection should be suspected. 93 Clinical signs, history, physical examination, and imaging lead to a strong clinical suspicion of mycotic disease; however, because treatment is expensive and sometimes causes side effects, a definitive diagnosis through identification of the organism is often warranted. When performed together, cytology, histopathology, and culture approach a specificity of nearly 100%, depending on the expertise of the pathologist and laboratory. 94,96-99 Because the organisms are located within the interstitium, bronchoalveolar lavage is more likely to recover the organisms for identification, 86,96,99 although transtracheal wash is a less invasive and safer procedure.
99 Regardless of the etiology, the lung typically contains multiple, tan-gray, variably sized nodules scattered throughout. In blastomycosis, the nodules contain pyogranulomas or granulomas that occasionally contain extensive central caseous necrosis with a thin rim of macrophages. The number of intralesional yeast bodies within the extensive inflammation is variable, and they are often missed in routine hematoxylin and eosin stains, particularly if antifungal therapy was initiated or partial immunity exists. 6 The yeast is round, 5-15 µm in diameter, has a distinct double-contoured wall that is about 1 µm thick and granular protoplasm that completely or partly fills the center, and occasionally displays single broad-based budding (Figure 2A). 6 Rarely, filamentous and pseudohyphal forms with conidia have been identified. 83 In histoplasmosis, the nodules are characterized microscopically by granulomatous inflammation containing epithelioid macrophages, many harboring numerous H. capsulatum organisms, multinucleated giant cells, and fewer neutrophils, plasma cells, and lymphocytes. 88 H. capsulatum organisms are numerous, intracellular, 2-4 µm in diameter, and have round bodies with a basophilic center and light halo with occasional narrow-based budding (Figure 2B). Organisms are often identified within macrophages and sometimes neutrophils in rectal scrapings, imprints of colonic biopsies, 88 peritoneal and pleural effusions, 100,101 and rarely in cerebrospinal fluid. 102 Rarely, H. capsulatum has been identified in circulating neutrophils and eosinophils. 103,104 In coccidioidomycosis, the initial reaction is primarily suppurative; the lesion then develops into a pyogranuloma or granuloma characterized by epithelioid macrophages, a few giant cells, lymphocytes, and neutrophils.
6 Spherules are large (10-80 µm), round, double-walled structures (Figure 2C) containing numerous endospores, and may be numerous within microabscesses 91 or lesions with prominent suppuration. However, in chronic cases where there is little neutrophilic inflammation, few organisms are present and they may be found engulfed in giant cells. 6 In all cases, periodic acid-Schiff, Gridley's fungal, or Gomori's methenamine stains may be needed to help identify the organisms. 6,83 Treatment of fungal pneumonia mainly involves the use of azole drugs (itraconazole, ketoconazole, or fluconazole) and/or amphotericin B. 83,88,91,105 Itraconazole is preferred for the treatment of B. dermatitidis 83,106 and H. capsulatum infections. 88 In general, itraconazole treatment in B. dermatitidis patients should be administered for at least 60 days and for at least 1 month after resolution of clinical signs. 83 For H. capsulatum, 4-6 months of treatment with itraconazole is usually sufficient, and higher initial loading doses may be required in those with severe clinical signs. 88 While itraconazole treatment is expensive, it is easier to administer and causes fewer side effects compared to amphotericin B. 106 In hopes of lowering treatment costs, one study compared fluconazole and itraconazole and found similar efficacies; however, fluconazole often required a longer treatment period to obtain clinical remission. 107 Fluconazole is sometimes used in cases with ocular or central nervous system involvement because its excellent water solubility allows penetration through the blood-brain and blood-ocular barriers. 86 Although ketoconazole is a cheaper drug, side effects are more commonly reported. 88 Ketoconazole is the drug of choice in C. immitis infections, and the required length of administration may extend to at least 1 year in those with disseminated disease or bone involvement.
91,108 Amphotericin B is used in conjunction with azole therapy in cases of severe or fulminating pulmonary or gastrointestinal disease associated with H. capsulatum infection, or in refractory cases, because of its rapid onset of action. 86,88 Administration is intravenous only, and there is a risk of nephrotoxicity. 83 Using a lower cumulative dose of amphotericin B followed by a 60-day course of ketoconazole 109 or the use of an amphotericin B lipid complex, which is one-tenth as toxic as traditional amphotericin B, 110 have been reported to be safer and effective alternatives. Newer drugs that are being investigated for treatment of fungal infections include chitin synthase inhibitors (nikkomycin Z and caspofungin), terbinafine (a naftifine analog), and newer azoles (voriconazole and posaconazole). 91 Some advocate the additional use of anti-inflammatory doses of glucocorticoids in order to decrease the secondary inflammation associated with the death of fungal organisms 105 or to reduce airway obstruction from hilar lymphadenopathy in chronic cases. 111 However, a clear benefit is not always appreciated, 95 and some do not recommend glucocorticoids in order to avoid dissemination. 103,104
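As a quick study aid, the organism sizes and distinguishing features quoted in the histopathology discussion above can be collected into a small lookup. This is purely illustrative (the dictionary layout and helper function are my own), not a diagnostic tool:

```python
# Approximate morphologic criteria quoted in the text; illustrative only,
# not a substitute for cytology/histopathology, special stains, and culture.
FUNGAL_MORPHOLOGY = {
    "Blastomyces dermatitidis": {
        "diameter_um": (5, 15),
        "feature": "thick double-contoured wall, broad-based single budding",
    },
    "Histoplasma capsulatum": {
        "diameter_um": (2, 4),
        "feature": "intracellular, basophilic center with light halo, narrow-based budding",
    },
    "Coccidioides immitis": {
        "diameter_um": (10, 80),
        "feature": "double-walled spherule containing endospores",
    },
}

def candidates_by_size(diameter_um: float) -> list[str]:
    """Return organisms whose quoted size range includes the measurement."""
    return [
        name
        for name, m in FUNGAL_MORPHOLOGY.items()
        if m["diameter_um"][0] <= diameter_um <= m["diameter_um"][1]
    ]

# A 3-um intracellular yeast is consistent only with H. capsulatum:
assert candidates_by_size(3) == ["Histoplasma capsulatum"]
```

Overlapping size ranges (eg, a 12 µm structure fits both B. dermatitidis and C. immitis) show why budding pattern versus endospore content, together with special stains, remain the deciding criteria.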
Metastrongyloid nematodes
There are four metastrongyloid nematodes that cause respiratory disease in dogs: Oslerus osleri, Crenosoma vulpis, Filaroides hirthi, and Filaroides (Andersonstrongylus) milksi. 105,112,113 The prevalence of lungworm infections is presumed to be low, based on a few fecal examination studies; however, this diagnostic technique bears limitations that may underestimate the number of true cases. 113 O. osleri inhabits the distal trachea, tracheal bifurcation, and first-division bronchi within granulomatous nodules on the mucosal surface. 112 The life cycle is direct, with transmission occurring through saliva from the dam to pups during maternal grooming or regurgitation of food, 114 or from ingestion of contaminated feces. 113 Infections may be subclinical or cause a chronic cough that is sometimes exacerbated by exercise 113 or tracheal palpation. 114 Uncommonly, infections may present as acute dyspnea or as subacute disease characterized by intermittent difficulties in breathing with no coughing. 114-116 Nodules contain coiled adults within spaces, often dilated lymphatics, surrounded by loose connective tissue and marked inflammation composed of plasma cells, neutrophils, and eosinophils (Figure 3B). Some spaces containing worms are open to the airway lumen, and intraluminal protrusion of the tail of female worms is sometimes evident. 117 Successful therapy for O. osleri is based on cessation of larval output, resolution of clinical signs, and clearance of tracheal and bronchial nodules. 118,119 Surgery to debulk large, obstructive masses may be considered; however, it is usually not necessary, as nodules will regress with medical therapy. 115 C. vulpis also inhabits the trachea, bronchi, and bronchioles, causing a chronic bronchitis-bronchiolitis characterized by a chronic cough.
113 Wild canids, such as foxes and coyotes, and domestic canids serve as definitive hosts. First-stage larvae are passed in the feces after being coughed up and swallowed, and subsequently infect terrestrial snails or slugs, which serve as intermediate hosts. Infection is then acquired after ingestion of the intermediate hosts. 113,120 Definitive diagnosis is based on detection of first-stage larvae in feces or transtracheal wash samples. 113,120,121 In comparison to O. osleri, C. vulpis is found endoscopically more often within the bronchi, accompanied by erythema, mucoid discharge, or hyperplastic nodules. 115 Successful treatment has been commonly achieved with oral fenbendazole; 120-123 however, febantel, milbemycin oxime, levamisole, and diethylcarbamazine have also been used. 113,120,124 F. hirthi and F. milksi are very similar and parasitize the alveoli and terminal airways, causing multifocal interstitial pneumonitis. 112 The life cycle of F. milksi is unknown, and transmission of F. hirthi is through ingestion of first-stage larvae in feces (coprophagy). 113 Infections are usually subclinical; however, some dogs, especially those that are immunosuppressed, may present with acute or chronic coughing and dyspnea. 125-127 Findings at necropsy are generally considered incidental and include widely scattered, subpleural, gray-tan or black-green, 1-5-mm diameter nodules, which may have clear cystic centers or be white and firm. Microscopically, larvae typically incite an acute suppurative response. There is very little response to living worms within the alveoli; however, when worms are dead or degenerating, there is marked granulomatous and eosinophilic inflammation. After worm fragments have been cleared, there may be residual foci of granulomatous interstitial pneumonia. 6 Differentiation between F. hirthi and F. milksi is not possible in histopathologic sections. 113 Treatments for F.
hirthi include oral administration of fenbendazole, 125-127 a 5-day course of oral albendazole repeated 3 weeks later, 105 or subcutaneous ivermectin. 113
Capillaria aerophila (Eucoleus aerophilus)
A relatively uncommon cause of respiratory disease in dogs is caused by a trichurid nematode, C. aerophila. Sporadic cases have been reported in the United States, Canada, 128 and various countries in Europe. 129-131 The life cycle is direct, and ova produced by the adults in the bronchi are coughed up, swallowed, and passed in the feces. 105,112 The ova require approximately 30-45 days to mature after excretion and can remain viable in the soil for up to a year. 112,129,130 Ova may also mature within earthworms, which act as facultative intermediate hosts, although their importance in transmission is not fully elucidated. 129,130 Most infections are subclinical, but some dogs may present with chronic cough. More severe signs of dyspnea, weight loss, or secondary bacterial pneumonia rarely occur. 105,112,113,131 Diagnosis is based on identification of double-operculated yellow-brown ova in fecal examinations or cytologic evaluation of airway specimens. 112,113 Differentiation of C. aerophila ova from those of Trichuris vulpis is important and based on the smaller size, asymmetric bipolar plugs, and anastomosing network of ridges within the wall. 130 The worms are slender, 2-3 cm long, reside as white, coiled masses embedded in the tracheobronchial mucosa, and incite a mild catarrhal bronchitis. 6,105,112 Occasionally, there are embryonated eggs associated with the worms. 6 Research and reports on treatment are very limited; however, fenbendazole has been reported to be successful. 128
Paragonimus kellicotti
Paragonimiasis is an uncommon disease that is caused by a trematode (fluke) infestation in the lungs of dogs and other mammals. There are over 40 species of Paragonimus that have been described worldwide132 and are endemic in Asia, the Americas, and Africa.133,134 Two important species are P. westermani, which is the most widely distributed species,134,135 and P. kellicotti, which is endemic in North America.105,112,132,134 The life cycle is indirect and requires two intermediate hosts. Eggs passed in the feces of definitive mammalian hosts hatch into miracidia and infect aquatic snails, the first intermediate host. After development in the snail, the parasite then infects the second intermediate host, crayfish, by direct penetration or ingestion of the snail. Infection in dogs is acquired after ingesting the crayfish, followed by migration of the parasite from the intestine to the lungs, where it resides, in pairs, within a fibrous subpleural cyst or bulla that communicates with bronchioles.134 Clinical signs are often limited to chronic cough and exercise intolerance; however, severe dyspnea may occur with cyst rupture and pneumothorax.105,112,113,133 Definitive diagnosis is based on identification of characteristic eggs in bronchial mucus, feces,113 or fine-needle aspiration of lung nodules.136 These nodules may be seen radiographically as early as 2-3 weeks postinfection137 or by computed tomography examination 30-180 days postinfection.135 The eggs are typically oval, yellow-brown, and operculated with a thickened ridge in the shell wall along the line of the operculum.113,136 Postmortem examination reveals spherical, 1-3-cm diameter, soft, dark red-brown nodules within the caudal lung lobes that contain one or two flukes and thick brown fluid.133 Microscopically, the cavitations contain marked eosinophilic and granulomatous inflammation, hemorrhage with numerous hemosiderin-laden macrophages, adult flukes, and a fibrous capsule (Figure 3C). In patent infections, there are also numerous ova within lesions. As the cavitations mature, they become true cysts that are incorporated into the bronchiolar tree and lined with cuboidal epithelium. The bronchioles may contain ova and eosinophilic exudate. Additionally, hyperplasia of the peribronchiolar glands and smooth muscle, and variable chronic catarrhal eosinophilic bronchiolitis, granulomatous pleuritis, and pleural lymphangitis occur.6 Effective treatments for P. kellicotti in dogs have included oral administration of praziquantel,138,139 fenbendazole,136,140 or albendazole.118
Pneumocystosis
The exact taxonomic classification of Pneumocystis carinii is debated; the organism has characteristics consistent with both fungi and protozoa.141 The recognized structural forms of the organism are the cyst, with intracystic sporozoites, and the trophozoite. The cyst is thick-walled, spherical to ovoid to crescent-shaped, 4-8 μm in diameter, and contains up to eight pleomorphic sporozoites. The trophozoite is thin-walled, 1-4 μm in diameter, and appears to be an extracystic form of the sporozoite.84,141,142 Clinical cases of pneumonia caused by P. carinii in dogs are attributed to suspected or documented concurrent cell-mediated immunodeficiency or preexisting pulmonary disease.105,141 Sporadic cases and clinical disease have been reported in association with stress, crowding, immunosuppressive therapy (glucocorticoids, chemotherapy, and irradiation), and concurrent canine distemper infection.141 Definitive diagnosis is based on direct visualization of organisms from respiratory fluids or biopsy specimens,143,144 or on amplification of DNA in samples from the lower respiratory tract.143 Grossly, the lungs are firm, consolidated, and pale brown or gray, and do not collapse upon opening of the thoracic cavity. Microscopically, alveoli are filled with aggregates of amorphous, foamy, eosinophilic material and a few macrophages and detached alveolar lining cells.141 Grocott's methenamine silver stain is often used to help identify the cyst stages of the organism.144 Neutrophils are absent, and there is little to no phagocytosis of intact organisms. Alveolar septa may be thickened with dense accumulations of plasma cells, lymphocytes, and macrophages, or with fibrosis in chronic infections.141 Pentamidine isethionate or the combination of trimethoprim and sulfonamide has been used for successful treatment of pneumocystosis.105,141 Other drugs that have been used include carbutamide, trimetrexate, and combinations of clindamycin, primaquine, dapsone, and trimethoprim. Symptomatic therapy is often initiated concurrently and includes oxygen administration, mucolytics, bronchodilators, nebulization, and discontinuation of immunosuppressive medications.145
Conclusion
While modern vaccination and management strategies have decreased respiratory diseases in domestic dogs within households, respiratory infections are still a major concern in high-density housing situations and in areas where vaccination and proper animal management and husbandry practices are not implemented. These situations allow these infections to persist, propagate, and potentially spread to other populations. Research of the classic respiratory conditions in dogs, discussed herein, has enabled the development of treatments, diagnostics, and preventative strategies for these infections. However, there are still many unanswered questions regarding the pathogenesis, immunopathology, and characteristics of the infectious agents, which may provide information pertinent to the development of more specific treatments, definitive diagnostics, and preventative strategies.
Figure 1
Figure 1 Infectious nasal diseases. (A) Nasal cavity: rhinoscopic visualization of characteristic fungal plaques in a dog with mycotic rhinitis (Aspergillus fumigatus). (B) Nasal mucosa: hematoxylin and eosin-stained photomicrograph of Rhinosporidium seeberi endospore and sporangia (inset). Notes: The images are courtesy of (A) Dr K Sennello, Virginia-Maryland Regional College of Veterinary Medicine, Virginia Tech University (Blacksburg, VA); and (B) Dr B Porter, College of Veterinary Medicine and Biomedical Sciences, Texas A&M University (College Station, TX).
Figure 2
Figure 2 Etiologic agents of fungal pneumonia in the lung stained with hematoxylin and eosin. (A) Broad-based budding yeast form of Blastomyces dermatitidis. (B) Numerous Histoplasma capsulatum organisms packed within the cytoplasm of macrophages (inset). (C) Intralesional yeast form of Coccidioides immitis. Note: The images are courtesy of (A) Dr P Piñeyro, Virginia-Maryland Regional College of Veterinary Medicine, Virginia Tech University (Blacksburg, VA); and (B and C) Dr B Porter, College of Veterinary Medicine and Biomedical Sciences, Texas A&M University (College Station, TX).
Figure 3
Figure 3 Etiologic agents of parasitic respiratory diseases. (A) Trachea: multiple raised nodules at the tracheal bifurcation. (B) Trachea: hematoxylin and eosin-stained photomicrograph of numerous intralesional Oslerus osleri nematodes within the submucosa. (C) Lung: hematoxylin and eosin-stained photomicrograph of Paragonimus kellicotti within a focus of inflammation. Notes: The images are courtesy of (A) Dr G Saunders, Virginia-Maryland Regional College of Veterinary Medicine, Virginia Tech University (Blacksburg, VA); (B) Dr B Porter, College of Veterinary Medicine and Biomedical Sciences, Texas A&M University (College Station, TX); and (C) Dr T Leroith, Virginia-Maryland Regional College of Veterinary Medicine, Virginia Tech University (Blacksburg, VA).
Tooth Contact Analysis of Herringbone Rack Gears of an Impulse Continuously Variable Transmission
Introduction
Rack gears are widely used in many areas, such as automobile, robotics, and renewable energy industry [1][2][3]. Impulse continuously variable transmissions (ICVTs) can provide reliable power conversion from a prime mover, such as an engine and an electric motor, to a driven part, such as a wheel and a chain, with a continuous output-to-input speed ratio [4,5]. Gears have various types according to their tooth profiles and tooth widths, and have a wide range of dimensions as small as the ones in small appliances to the very large gears used in heavy-duty applications [6,7]. Generally, gears are manufactured via hobbing [8] or forming cutting [9][10][11] based on the theory of gearing. For some gears with special tooth profiles, e.g., concave-convex and spiral tooth profiles, their manufacturing methods and machine-tools are complex. Since meshing performances of these gears with special tooth profiles are highly sensitive to manufacturing errors [12,13], high manufacturing accuracy of gear machine-tools is required for these gears [14][15][16].
Contact patterns and transmission errors are two typical methods for meshing performances evaluation of gear systems [17][18][19].
A tooth profile modeling method was developed to improve the accuracy of tooth contact analysis for gear tooth profiles [20]. Some other meshing performances, e.g., power losses, can also be evaluated based on tooth contact analysis [21,22]. Since these gears have convex-concave tooth profiles, they cannot be manufactured via standard gear manufacturing methods. When manufactured in this way, a different blade size and gear holder is needed for each gear module and radius of curvature. However, it is clear that these gears have many advantages if they can be produced efficiently in industry [23,24]. These gears have better load-bearing capabilities, a balancing feature for the axial forces, quiet operation, and better lubrication characteristics than herringbone gears and spur gears. It is noteworthy that a number of studies have been carried out recently in relation to these gears.
In one study, rack gears were modeled with a computer-aided design (CAD) program in order to eliminate these problems; in another study, it was emphasized that these gears can be manufactured on computer numerical control (CNC) milling machines using two different methods. Using the presented methods, the manufacturing codes of the gears were created with CAD programs for the first time in the literature, and the gears were manufactured in an error-free manner using different materials. In this study, in order to determine the performance characteristics of these correctly manufactured gears for applicability in industry, three-dimensional (3D) tooth contact analysis of the gears has been carried out using ANSYS.
The remaining part of this paper is organized as follows: a load distribution model of rack gears of the ICVT is introduced in Sec. 2. Finite element analyses of the rack and pinion system are performed in Sec. 3. In order to verify the operating performance of the rack and pinion system, some simulation results are discussed in Sec. 4. Finally, some conclusions from this study are presented in Sec. 5.
Load Distribution Model of Rack Gears
A 3D model of the rack gear for tooth contact analysis, created using CATIA, is shown in Fig. 1. The rack gear is designed with a herringbone and curvilinear involute gear profile. A tooth model of an involute rack gear is used to describe its geometric parameters [25], as shown in Fig. 3. The base central angle γb of a tooth can be expressed in terms of the number of teeth z, the transverse pressure angle αt, the normal pressure angle αn, and the addendum coefficient χ. The pressure angle αc at the contact point can be expressed in terms of the profile parameter ξc of the contact point, which in turn depends on the base radius rb and the profile radius rc of the contact point. The tooth central angle γ(y) can be expressed in terms of the tooth profile angle v(y).
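The explicit expressions for these quantities did not survive into this text. As an illustrative sketch only, the standard involute relations rb = rc·cos αc (hence αc = arccos(rb/rc)) and ξc = tan αc are assumed below; these are common definitions consistent with the symbols above, not necessarily the exact equations of the paper:

```python
import math

def contact_pressure_angle(r_b: float, r_c: float) -> float:
    """Pressure angle alpha_c at a contact point of an involute profile.

    For an involute, the base radius satisfies r_b = r_c * cos(alpha_c),
    so alpha_c = arccos(r_b / r_c).  (Standard relation, assumed here.)
    """
    return math.acos(r_b / r_c)

def profile_parameter(r_b: float, r_c: float) -> float:
    """Profile (roll) parameter xi_c = tan(alpha_c) = sqrt(r_c^2 - r_b^2) / r_b."""
    return math.sqrt(r_c**2 - r_b**2) / r_b

# Example: base radius 30 mm, contact point at radius 32 mm (hypothetical values)
a_c = contact_pressure_angle(30.0, 32.0)
xi = profile_parameter(30.0, 32.0)
```

For rc = rb the contact point lies on the base circle and αc = 0, as expected for an involute.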
A load distribution model of involute rack gears is used in this study based on the minimum elastic potential energy theory. The elastic potential energy U of an involute rack gear tooth is composed of a bending component Ux, a compressive component Un, and a shear component Us, U = Ux + Un + Us, each of which depends on the normal load F. Suppose that there are n pairs of meshing teeth at a time instant. With Vî = 1/Uî denoting the inverse of the elastic potential energy Uî of the î-th meshing tooth surface, the normal load acting on the î-th tooth surface is the share of F proportional to Vî among all n meshing pairs. The elastic potential energy Uî of the î-th meshing tooth surface depends on the lengths lcî and lcĵ of the contact lines of the î-th and ĵ-th meshing tooth surfaces. The involute rack gear tooth is divided into slices with unit lengths. The load acting on the k-th slice can be expressed in terms of the face contact ratio εβ, the base spiral angle βb, and the inverse Vk of the elastic potential energy of the k-th slice.
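The proportional load-sharing rule described above (each meshing pair carries load in proportion to the inverse V = 1/U of its elastic potential energy) can be sketched as follows; the energy values in the example are hypothetical placeholders, not data from this study:

```python
def share_load(total_load: float, energies: list[float]) -> list[float]:
    """Distribute a total normal load F among n meshing tooth pairs.

    Following the minimum-elastic-potential-energy argument, pair i's
    share is proportional to V_i = 1/U_i, where U_i = Ux + Un + Us is
    that pair's elastic potential energy (bending + compression + shear).
    """
    inverses = [1.0 / u for u in energies]        # V_i = 1 / U_i
    v_sum = sum(inverses)
    return [total_load * v / v_sum for v in inverses]

# Two meshing pairs with hypothetical energies: the pair storing less
# energy (effectively stiffer) carries the larger share of the load.
loads = share_load(100.0, [2.0, 6.0])   # approximately [75.0, 25.0]
```

Because the shares are normalized by the sum of the Vî, they always add back up to the total load regardless of how many pairs are in mesh.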
Finite Element Analysis of Rack and Pinion System
In this study, the gear was first modeled with CATIA using the analytical expressions that determine the gear profile. After the rack and spur gear pairs had been created, the gears were assembled, again using CATIA, to form the gear pairs. Stress analyses of the gears were then carried out with ANSYS. In previous studies using ANSYS for stress analysis of gears, the gears were assumed to be two-dimensional and the force was applied externally to the nodes [26,27]. In the present analysis, a tooth profile of a three-dimensional solid model was taken as the reference, and the maximum stress distribution and the meshing state at that maximum stress were determined for all gears during meshing, from the moment of first contact of a tooth to the last contact. The pinion gear rotates 13.173 deg from the moment of first contact of a tooth to the last contact. This angle of rotation was divided into 6 equal steps in order to better determine the maximum stress over the meshing period, and an 8 Nm load, below the yield limit, was applied to the pinion gear at each rotation increment to observe the stress distribution produced on the three different gears; the maximum von Mises stresses and the maximum deformations were determined. Before starting the finite element analysis, the strength values of the PEEK material used in manufacturing the gears were input to ANSYS; the model file created in CATIA was then opened, and the rack-pinion system was divided into finite elements with the help of the mesh menu of CATIA, as shown in Fig. 4. Since stresses in regions other than the contact region are of little importance, those areas were divided into finite elements automatically from the menu of the program.
However, the finite element size in the contact regions was manually set to 0.55 mm to obtain a fine mesh of the gear, since the stress in this area is important. The total number of elements of the finite element model of the rack-pinion system is 7,320,150. Some local mesh refinement of the gear teeth was performed to improve the finite element analysis accuracy and to converge to a stable condition. In order to apply load to the pinion, the boundary conditions of the rack-pinion system were entered. In this step, prior to the final step, boundary condition values close to realistic operating conditions were applied to the finite element model of the rack-pinion system. The boundary conditions for the tooth model of the pinion are to locate the tooth force and the supports [7]. For this purpose, the rack gear was supported at one of its tips (Fig. 5, Fig. 6). The load on the gear tooth surfaces was applied in the form of torque on the pinion. The contacting tooth surfaces were defined in the rack-pinion system, and after defining that the rack and the pinion would undergo linear elastic deformation, a static load was applied to the gear tooth surface of the rack-pinion system, as shown in Fig. 7. The maximum von Mises stresses were identified from this analysis.
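The quasi-static stepping described above (13.173 deg of pinion rotation divided into 6 equal increments, with an 8 Nm torque applied at every increment) amounts to a simple load-step schedule. The sketch below only illustrates that schedule; it is not ANSYS input:

```python
TOTAL_ROTATION_DEG = 13.173   # pinion rotation from first to last tooth contact
N_STEPS = 6                   # equal increments used in the analysis
TORQUE_NM = 8.0               # torque applied to the pinion at every step

def load_steps() -> list[tuple[float, float]]:
    """Return (cumulative rotation angle in deg, applied torque in Nm) per step."""
    step = TOTAL_ROTATION_DEG / N_STEPS   # 2.1955 deg per increment
    return [(i * step, TORQUE_NM) for i in range(1, N_STEPS + 1)]

for angle, torque in load_steps():
    print(f"rotate pinion to {angle:.4f} deg, apply {torque:.1f} Nm")
```

Each step is thus a static solve at a fixed pinion orientation, which matches the quasi-static treatment used in the paper.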
Simulation Results and Discussion
After establishing the necessary boundary conditions for the analysis of the gears, the stress values for each angle of rotation were calculated. In the results of these stress analyses, carried out with ANSYS, it was observed that the weakest gear is the spur gear, while the highest strength was observed in the herringbone gear. It was also observed that the gear with the concave-convex profile has stress values close to those of the herringbone gear.
Considering the stress distribution, the stresses occurring on the concave-convex profile gears are more uniform, and the maximum stresses are at the center of the gear bow, as shown in Fig. 8. In the herringbone gear, however, the stresses generally occur at the outer parts of the tooth profile, and as meshing progresses the stresses shift to the center of the gear, as shown in Fig. 9. For spur gears, the maximum stresses were observed on a straight line along the bottom of the gear teeth, as shown in Fig. 10.
Conclusions
In this study, a spur gear, a herringbone gear, and a concave-convex profile rack gear, having the same module and number of teeth, were analyzed. In the results of these stress analyses, carried out using ANSYS, the weakest gear was the spur gear, while the highest strength was observed in the herringbone gear. It was also observed that the gear with the concave-convex profile has stress values, and hence strength, close to those of the herringbone gear. These results will shed light on experimental studies to be performed on this type of gear.
Considering the stress distribution, the stresses occurring on the concave-convex profile gears are more uniform, and the maximum stresses are at the center of the gear bow. In the herringbone gear, however, the stresses generally occur at the outer parts of the tooth profile, and as meshing progresses the stresses shift to the center of the gear. In spur gears, the maximum stresses were observed on a straight line along the bottom of the gear teeth. Concave-convex gears will be more advantageous than spur gears and herringbone gears once the optimum radius is found through research on the curvature radii of concave-convex gears. It is suggested that these gears will be most advantageous where load resistance is necessary, and in gear pumps in particular. In order to ensure wider use of these gears in industry, other operating characteristics, such as wear characteristics and operating temperatures, should be determined.
Cross-reactivity of anti-H pylori antibodies with membrane antigens of human erythrocytes
Abstract

AIM: To investigate whether anti-H pylori antibodies cross-react with antigens of the erythrocyte membrane.

METHODS: Blood samples were collected from 14 volunteers (8 positive and 6 negative for H pylori as detected by 13C-urea breath test) from the general population. Erythrocyte membrane proteins of the subjects were examined by Western blot using anti-H pylori serum. The proteins related to the positive bands were identified by mass spectrum analysis.

RESULTS: Anti-H pylori antibodies cross-reacted with proteins of about 50 kDa of the erythrocyte membranes in all samples, independent of H pylori infection. One protein in the positive band was identified as Chain S, the crystal structure of the cytoplasmic domain of human erythrocyte Band-3 protein.

CONCLUSION: Anti-H pylori antibodies cross-react with some antigens of the human erythrocyte membrane, which may provide a clue for the relationship between H pylori infection and vascular disorders. Although the relationship between H pylori infection and vascular disorders is not clear, this study provides some interesting observations and a new clue for autoimmunity as a potential pathopoiesis of H pylori infection.
INTRODUCTION
H pylori, first isolated by Marshall and Warren [1], is a gram-negative spiral bacterium colonizing the gastric mucosa. It is notorious for causing chronic infections and has been linked to various gastric diseases such as chronic gastritis, peptic ulcer, gastric mucosa-associated lymphoid tissue lymphoma and gastric cancer [2-4]. In recent years, infection by H pylori has been linked to extradigestive pathologies including ischemic cardiac and cerebral diseases. Many seroepidemiological studies have revealed a relationship between H pylori and vascular disorders [5,6], even though the prevalence of positive findings varied widely between studies and not all studies reported positive results [7-9]. However, the exact nature of the association has not been completely elucidated.
Several investigations have revealed that heat shock proteins (HSPs) of H pylori are highly homologous with human HSPs [10], that the O-side chain of the lipopolysaccharide (LPS) of a number of H pylori strains is structurally similar to the Lewis histo-blood group antigens [11], and that anti-CagA antibodies cross-react with antigens of blood vessels [12]. All these findings imply that autoimmunity might take part in the pathomechanisms of H pylori infection.
Changes in erythrocytes affect whole blood viscosity, which contributes importantly to thrombosis and atherosclerosis (AS). Our previous studies found that anti-H pylori serum reacted with parts of erythrocytes and with endothelial cells of heart valves using an immunohistochemical method [13,14], but it remained unknown which antigens resulted in these positive reactions. The present study aimed to investigate whether the proteins of the erythrocyte membrane cross-react with anti-H pylori antibodies by Western blot assay and to identify the specific proteins by mass spectrum assay, in an effort to provide a clue for a pathogenic link between H pylori infection and vascular disorders.
Blood samples
Fresh blood samples were collected from 14 subjects from the general population whose results of 13 C-urea breath test ( 13 C-UBT) were supplied by Chinese People's Liberation Army General Hospital. The kit for 13 C-UBT was provided by Altachem Pharma Ltd. Current infection of H pylori was confirmed by a value of 13 C-UBT greater than 4. General data about the subjects are shown in Table 1. Informed consents were obtained from all the volunteers before 13 C-UBT and blood sampling.
Extraction of erythrocyte membrane proteins
Fresh blood collected from the subjects was mixed with heparin as an anti-coagulant. The erythrocytes were separated by centrifugation at 1230 × g, lysed with deionized water, and then centrifuged at 12 000 × g for 20 min at 4℃. The pellets were washed six times in three volumes of cold 5 mmol/L phosphate buffer, pH 8.0, containing 1 mmol/L EDTA and 1 mmol/L PMSF (Sigma), until the membranes were white, and were then resuspended in the same buffer and centrifuged at 30 000 × g for 1 h at 4℃. The pellets were frozen at -80℃ and dried at -56℃ in a cold vacuum. The membranes were resuspended in a 2-DE lysis buffer cocktail consisting of 7 mol/L urea, 2 mol/L thiourea, 10 g/L DTT, and 40 g/L CHAPS at 4℃ for 2 h, then ultrasonicated on ice. The concentration of proteins in each sample was 6-12 g/L, as determined by the Bradford protein assay [15]. The whole proteins of H pylori NCTC11637 were extracted as a positive control. All reagents in the 2-DE lysis buffer were bought from Amersham.
Reactivity of anti-H pylori serum with erythrocyte membrane proteins by Western blot
SDS-PAGE was performed using a Bio-Rad Mini-Protean 3 electrophoresis cell. Approximately 120 μg of membrane proteins were loaded in parallel into two wells of a 10% SDS-polyacrylamide minigel, 60 μg per well. Thirty μg of whole proteins of H pylori NCTC11637 as a positive control and 5 μL of prestained molecular weight standards marker (Fermentas) were also loaded in two wells per gel.

Proteins were transferred to a PVDF membrane (Amersham) using a Bio-Rad Semi-Dry transfer unit. Blocking was performed overnight at 4℃ in blocking buffer (TBS containing 50 g/L BSA). The membrane was bisected, and one part was incubated with the primary antibody, rabbit anti-H pylori NCTC11637 serum (from rabbits immunized with H pylori NCTC11637; the animals were provided by Vital River Laboratories Co. Ltd. and raised by the Department of Laboratory Animal Science, Peking University Health Science Center) for 2 h at room temperature (RT). To exclude color reaction resulting from direct conjugation of the second antibody and the normal serum with the proteins on PVDF membranes, the normal serum (pre-immunization serum) of the same rabbits was used as a control for the other part of the membranes with the same samples. The other steps were performed according to the Western blot assay. The second antibody, goat anti-rabbit IgG AP conjugate, and AP substrates were from Vector.
Excision of protein bands and in-gel reduction, alkylation and trypsin digestion of proteins
The blots incubated with anti-H pylori serum were compared with those of the same sample incubated with normal serum to identify the differentially reacting bands. Samples were chosen according to these bands, SDS-PAGE was performed, and the gel was stained with Coomassie blue R-250 dye. The bands in the SDS-PAGE gel corresponding to the differentially reacting bands in the Western blot were excised, and in-gel reduction, alkylation and trypsin digestion were performed according to the EMBL protocol (http://www.proteomics.com.cn/paper/InGel.html). Briefly, after a washing step, gel particles were reduced with DTT and alkylated with iodoacetamide. A second washing was performed before overnight digestion with 3 μL (40 mg/L) trypsin (Sigma). The resulting peptides were extracted with 500 mL/L ACN and 50 mL/L TFA and dried in a cold vacuum.
Mass spectrometric (MS) analyses of tryptic peptides and identification of proteins
The digested samples were mixed 1:1 with a saturated matrix solution (α-cyano-4-hydroxycinnamic acid prepared in 500 mL/L acetonitrile and 1 mL/L formic acid). All mass spectra were obtained on a 4700 Proteomics Analyzer with TOF/TOF optics (Applied Biosystems, Foster City, CA, USA) in the positive ion reflector mode with a mass accuracy of about 50 ppm. The MALDI tandem mass spectrometer used a 200 Hz frequency-tripled Nd:YAG laser operating at a wavelength of 355 nm. MS spectra were obtained between Mr 800 and 4000 with ca. 1000 laser shots. MS/MS spectra were acquired with 2000 laser shots using air as the collision gas. The singly charged peaks were analyzed using an interpretation method present in the instrument software, whereby the five most intense peaks were selected and MS/MS spectra were generated automatically, excluding matrix and trypsin autolysis peaks. Spectra were processed and analyzed by the Global Protein Server Workstation (Applied Biosystems, Foster City, CA, USA), which uses internal Mascot v2.0 software (Matrix Science, UK) for searching the peptide mass fingerprints and MS/MS data. Searches
Reactivities of anti-H pylori serum with erythrocyte membrane proteins
Both normal rabbit serum and anti-H pylori serum showed immunoreactivity with membrane proteins of about 110 kDa, 55 kDa, 51 kDa, 50 kDa, 40 kDa and 27 kDa of all erythrocytes. However, anti-H pylori serum specifically recognized antigens of about 50 kDa (marked as band Y in Figure 1) from erythrocytes compared with the normal serum. Remarkably, this feature existed not only in H pylori-positive subjects (No. 01, 03, 06, 07, 09, 10, 13, 14) but also in H pylori-negative subjects (No. 02, 04, 05, 08, 11, 12). The immunoreactivity of another band (marked as band X in Figure 1) with anti-H pylori serum was weaker than that with normal serum.
Identification of specific proteins
There were 17-18 bands in the SDS-PAGE 10% gel of erythrocyte membrane protein sample ( Figure 2). The special band of about 50 kDa and another one closely above it (respectively marked as band Y and band X in Figure 2) corresponding to the specially reacted bands in Western blot were faintly stained. Five proteins were identified in the two bands, 4 in band X and 1 in band Y ( Table 2).
DISCUSSION
The pathogenesis of ischemic vascular diseases is multifactorial. AS and thrombosis, the principal basis of ischemic vascular disease, determine the occurrence of ischemic events. However, many AS patients lack traditional risk factors, suggesting that other mechanisms may be involved in AS development [16,17] . In recent years, more attention has been paid to the relationship between infection and ischemic diseases [16,18,19] . Several studies indicated an association between H pylori infection and ischemic vascular disease, especially when the CagA + strain was involved [5,6] , although the results are currently being debated [7][8][9] . To date, most studies have been based on seroepidemiology and nonspecific systemic inflammation. The exact mechanisms by which H pylori infection contributes to the progression of vascular disorders have not been elucidated.
The molecular mimicry between elements of H pylori and those of host cells [10,11] provides clues for autoimmunity as one of the candidate pathogenic mechanisms. Franceschi and his colleagues [12] reported that anti-CagA antibodies cross-reacted with antigens of both normal and atherosclerotic blood vessels by immunohistochemistry, and that anti-CagA antibodies also specifically immunoprecipitated two antigens of 160 and 180 kDa from both normal and atherosclerotic artery lysates. The authors speculated that the immunoprecipitated proteins were not CagA of H pylori but vascular elements, because the two antigens differed from CagA (about 116-140 kDa) in molecular weight; the reactivity detected in vessels with anti-CagA antibodies was caused by the mimicking vascular antigens. We think this speculation reasonable. However, the two antigens were not identified. Moreover, erythrocyte is one of the most important factors affecting hemodynamics. Its membranes can be easily isolated in large quantities, and many blood group antigens are expressed not only on the surface of blood cells but also on vascular endothelial cells. Thus, we chose erythrocytes to investigate the cross-reaction between human plasma membranes and anti-H pylori antibodies. Our previous study showed that anti-H pylori serum reacted with erythrocytes by an immunohistochemical method [13] . But we did not know which elements resulted in the immunoreaction and whether the elements belong to erythrocytes or to H pylori. In the present investigation, antigens of about 50 kDa from erythrocyte membranes strongly immunoreacted with anti-H pylori serum rather than normal serum in all 14 samples (Figure 1). This feature did not depend on current infection with H pylori. Therefore, we speculate that the reacting antigens are not elements of H pylori but the mimicking erythrocyte antigens. The results of the mass spectrometry assay confirmed our speculation.
One protein was identified as Chain S, the crystal structure of the cytoplasmic domain of human erythrocyte Band 3 protein (Mr 42.5 kDa), in the special band (band Y in Figure 2). Band 3 protein, the most abundant transmembrane protein of the human erythrocyte, maintains its normal metabolism and function. This protein of about 95-100 kDa has two domains. The N-terminal domain of about 40 kDa is located within the cytoplasm and participates in signal transmission across membranes and other functions such as growth, differentiation and interaction of cells, while the C-terminal domain of 55 kDa is membrane-associated and mediates the exchange transportation of anions (Cl-/HCO3-) across the erythrocyte membrane [20,21] . In this study, the two antigens of 160 and 180 kDa mimicking CagA were not found, possibly because of the diversity between erythrocytes and vascular cells.
We consider that antibodies against H pylori may not contact the cytoplasmic domain of Band 3 of normal erythrocytes. However, oxygen free radicals and systemic inflammation caused by acute or chronic infection could damage the erythrocyte membrane, leading to decreased erythrocyte deformability, increased erythrocyte fragility and an elevated erythrocyte aggregation index. Some authors reported these changes in several ischemic cardiac disease patients with H pylori infection [22] . The impaired erythrocytes might be more easily disrupted, exposing internal antigens (including the cytoplasmic domain of Band 3 protein) to circulating antibodies.
Then anti-H pylori antibodies could bind the exposed antigens and cause inflammatory cell activation, which might be associated with the changes of hemorheology and hemodynamics, plaque aggregation, thrombus formation and atherogenesis leading to ischemic events.
In band X (Figure 2), 4 proteins were identified, which were considered to be flotillin 1 variants according to their source and molecular weight. The reason why band X reacted more strongly with normal serum than with anti-H pylori serum is being investigated.
The protein that cross-reacted with anti-H pylori antibodies is probably another protein that we could not identify owing to its trace quantity and the limited separation ability of SDS-PAGE. Nevertheless, our study provides experimental evidence of molecular mimicry between H pylori antigens and erythrocyte membrane proteins. The results support the hypothesis that autoimmunity induced by H pylori infection plays an important role not only in vascular disorders but also in various extragastric diseases.
Background
The pathogenesis of ischemic vascular diseases is multifactorial, and the conventional risk factors do not fully account for the risk of these diseases. In recent years, more attention has been paid to the relationship between infection and ischemic diseases. Several studies indicated an association between H pylori infection and vascular disorders. However, the exact nature of the association has not been completely elucidated.
Research frontiers
The molecular mimicry between elements of H pylori and those of host cells provides clues for autoimmunity as one of the candidate pathogenic mechanisms. Autoimmunity has become one of the hot spots of research in recent years. Some studies have found that anti-H pylori antibodies react with endothelial cells and erythrocytes.
Innovations and breakthroughs
This study chose erythrocytes, which are easily isolated in large quantities, to investigate the cross-reaction between human plasma membranes and anti-H pylori antibodies, and found that anti-H pylori antibodies cross-reacted with proteins of about 50 kDa of erythrocyte membranes in Western blot. The proteins were identified by mass spectrometry.
Applications
Erythrocyte is one of the most important factors affecting hemodynamics. Many blood group antigens are expressed not only on the surface of blood cells but also on vascular endothelial cells. The choice of materials and the results of this study provide a new clue and experimental evidence for autoimmunity as one of the potential pathogenic mechanisms of H pylori infection in vascular disorders.
Peer review
This study looks at the cross-reaction between human plasma membranes and anti-H pylori antibodies. Although the contribution of the cross-reaction to the relationship between H pylori infection and vascular disorders is not clear, this study provides some interesting observations and a new clue for autoimmunity as one of the potential pathogenic mechanisms of H pylori infection.

Table 2. List of proteins identified from the special bands in Figure 2.
A Trajectory Ensemble-Compression Algorithm Based on Finite Element Method
Abstract: Trajectory compression is an efficient way of removing noise and preserving key features in location-based applications. This paper focuses on the dynamic compression of trajectories in memory, where the compression accuracy changes dynamically with different application scenarios. Existing methods can achieve this by adjusting the compression parameters. However, the relationship between the parameters and the compression accuracy of most of these algorithms is considerably complex and varies with different trajectories, which makes it difficult to provide reasonable accuracy. We propose a novel trajectory compression algorithm based on the finite element method, in which the trajectory is treated as an elastomer and compressed as a whole using elasticity theory; trajectory compression can then be thought of as deformation under stress. The compression accuracy can be determined by the size of the stress applied to the elastomer. Compared with existing methods, the experimental results show that our method can provide a more stable, data-independent compression accuracy under the given stress parameters, with reasonable performance.
Introduction
In location-based applications [1], there is a considerable amount of positioning data from the sensors of vehicles, ships, and mobile phones. Taking Zhejiang Province of China as an example, 68 TB of vehicle trajectory data and 19 TB of ship trajectory data are generated every year, which brings many difficulties to storage, processing, and analysis. Raw trajectories contain a lot of noise, some of which is caused by the signal drift of the sensors and some by the random movement of the moving object. Compression of the raw trajectory can eliminate noise, reduce storage occupation, and improve the efficiency of data queries and processing. In spatiotemporal data mining tasks, noise reduction is helpful for spatiotemporal pattern search in trajectories [2]. Different application scenarios have different requirements on compression ratio and compression accuracy. A very high compression ratio can be achieved if the vehicle trajectory is compressed based on the road network and only the turning points are retained [3][4][5]. However, when we want to analyze the lane changes of vehicles, we need to reduce the compression ratio and improve the accuracy. For vessel trajectories, since there is no fixed road constraint, the compression accuracy changes dynamically according to the requirement [6][7][8][9]: lower compression accuracy is needed for ocean channel analysis, medium accuracy for periodic analysis of vessels, and higher accuracy for short-term route prediction of a single vessel. In trajectory monitoring applications, there are similar dynamic requirements for the display of trajectories. When the display scale is small, a trajectory with low accuracy and a high compression ratio should be displayed. If the map window is zoomed in, higher-accuracy trajectories should be displayed as the window scale increases. Therefore, trajectories can no longer be compressed in a fixed way and stored in advance.
We call this kind of compression method dynamic compression, whose accuracy changes dynamically with the requirements.
Trajectory compression tasks are traditionally divided into offline compression and online compression [10]. The former compresses all of the trajectories obtained, while the latter incrementally compresses the trajectories. In these methods, dynamic compression can be achieved by adjusting compression parameters. However, it is less certain what compression accuracy will result from a given compression parameter, and even for different trajectories the results will be different. Our study focuses on trajectory compression from another perspective, ensemble compression, which is inspired by human cognitive activity: one can simplify a trajectory without knowing the precise position of each point. People can draw simplified trajectories (the blue line is the raw trajectory, and the red one is the simplified trajectory) by intuitive feeling, without accurate calculation of exact positions and distances, as shown in Figure 1. This is actually a compression method based on the whole features of the trajectory, and we call this kind of method ensemble compression. Inspired by this ensemble ability, we regard the trajectory as a physical entity, and trajectory compression can be thought of as the deformation of an elastic object under uniform forces applied around it. The larger the deformation, the more points overlap due to mutual extrusion, which achieves the compression effect. In this research, we implement a trajectory ensemble-compression algorithm based on finite element analysis, in which we integrate the main direction of the trajectory to achieve compression while preserving key features. The compression accuracy can be determined by the elastic parameters applied to the elastomer. Compared with the existing methods, the experimental results show that our method can provide a more stable, data-independent compression accuracy under the given parameters, with reasonable performance.
This paper is organized, as follows: the related research is summarized in Section 2. Section 3 provides a detailed statement of our algorithm. Section 4 shows the experiment analysis based on real data sets, Section 5 provides a discussion, and a conclusion is given in the final section.
Related Work
Various trajectory compression algorithms are summarized by [10][11][12]. According to our research, the existing methods are divided into four categories: distance-based compression, gesture-based compression, map-constrained compression, and ensemble feature-based compression.
Distance-Based Compression
The distance-based compression algorithms analyze each point in the trajectory in turn and decide whether to keep it according to its location and to the distance, direction, and other features of its adjacent points.
The Douglas-Peucker algorithm [13] is the first widely used trajectory compression algorithm. The algorithm connects the start point and the end point of the original trajectory and then calculates the distance from each point to this line. If the maximum distance exceeds the threshold, the trajectory is divided into two subsequences at that point, and the process is performed recursively until no subsequence needs to be divided. Another similar distance-based method is the Piecewise Linear Segmentation algorithm [14], in which the point of greatest deviation is selected and threshold parameters determine whether to retain it, again recursively. Various improvements have been made to increase the efficiency of the distance-based algorithms [15][16][17][18].
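The recursive split described above can be sketched in a few lines. The following is a minimal illustrative implementation of the Douglas-Peucker idea (not the authors' code), with the tolerance epsilon as the only parameter:

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    if dx == 0 and dy == 0:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    return abs(dy * p[0] - dx * p[1] + b[0] * a[1] - b[1] * a[0]) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Recursively keep the farthest point if it deviates more than epsilon
    from the chord between the first and last points."""
    if len(points) < 3:
        return list(points)
    d_max, idx = max(
        (perp_dist(points[i], points[0], points[-1]), i)
        for i in range(1, len(points) - 1))
    if d_max <= epsilon:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], epsilon)
    right = douglas_peucker(points[idx:], epsilon)
    return left[:-1] + right  # drop duplicated split point
```

Note how the compression outcome depends only indirectly on epsilon: the same tolerance can yield very different compression accuracies on different trajectories, which is exactly the parameter-sensitivity problem this paper targets.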
Another class of distance-based algorithms supports online compression. Two online algorithms are proposed by [15,19]: the sliding window algorithm and the open window algorithm, which build a window over the point sequence, compress the trajectory in it, and then repeat the process for the subsequent trajectory data. Ref. [20] proposes the Dead Reckoning algorithm, which predicts the next point from the points in the window and reserves the points that deviate greatly from the forecast. The Dead Reckoning algorithm is improved by [21], who also propose an algorithm called Squish trajectory compression, which completes the compression by deleting the points with the least information loss in the buffered window. On the basis of these methods, many algorithms have been developed to further improve performance and reduce complexity [22][23][24][25][26].
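An opening-window pass of the kind described above can be sketched as follows (a simplified illustration of the window idea, not the exact algorithm of [15,19]): the window grows from an anchor until some buffered point deviates too far from the anchor-to-front chord, at which point the previous point is emitted and the window restarts there.

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from p to the chord a-b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    if dx == 0 and dy == 0:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    return abs(dy * p[0] - dx * p[1] + b[0] * a[1] - b[1] * a[0]) / math.hypot(dx, dy)

def open_window(points, epsilon):
    """One-pass (online) opening-window simplification sketch."""
    if len(points) < 3:
        return list(points)
    kept, anchor = [points[0]], 0
    for i in range(2, len(points)):
        # test all points strictly inside the window [anchor, i]
        if any(perp_dist(points[k], points[anchor], points[i]) > epsilon
               for k in range(anchor + 1, i)):
            kept.append(points[i - 1])   # close the window at the last safe point
            anchor = i - 1
    kept.append(points[-1])
    return kept
```

Unlike the batch Douglas-Peucker recursion, each point is inspected as it arrives, which is what makes the window family suitable for streaming trajectories.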
Gesture-Based Compression
The distance-based compression methods are greatly affected by the threshold parameters, and improved methods introduce gesture information to make up for this deficiency.
Ref. [27] proposes a trajectory compression method that predicts the next point based on the speed and direction in historical data and removes the points that are predicted accurately. The method is suitable for trajectories with a high sampling density.
Stop points, similarity points, and turn points are also important semantic information that can be used in trajectory compression. For RFID location data, Ref. [28] realizes data compression by merging and closing the identical location points of different trajectories, and Ref. [29] designs a lossy compression strategy to collapse RFID tuples, which contain the information of items delivered at different locations.
Ref. [30] introduces sampling information on the time series. Ref. [31] introduces the main direction of the trajectory to remove noise. In [32], speed information and stop points are introduced to improve the compression efficiency. Ref. [33] preserves key points according to the velocity and direction data in the trajectory, and data compression is realized with these key points as constraints.
Map-Constrained Compression
Road information can improve the compression ratio, and a large number of trajectory compression methods based on road constraints have been proposed. A map-matching trajectory compression problem was first proposed by [4]; it is the combined problem of compressing a trajectory while matching it onto the underlying road network. The study gives a formal definition of the map-matching compression problem, proposes two naive methods, and then designs improved online and offline algorithms. A path pruning simplification method is proposed by [34], which divides the trajectory simplification process into an edge-candidate-set stage, a path-finding stage, and a path-refining stage. In the first stage, multiple candidate matching edges are obtained; in the second stage, road matching is performed for each trajectory position with the assistance of the driving direction; in the third stage, the algorithm performs path-tree pruning and preserves the positions in the trajectory where the direction changes. The algorithm runs on mobile devices, which makes network transmission and central processing more efficient. The compressed points are selected by [5] according to the road network. Ref. [3] proposes a similar map-matching system and implements a trajectory compression algorithm called Heading Change Compression.
Ensemble Compression
Ensemble compression means that, when a trajectory is compressed, other geometric information that is related to the trajectory is combined, such as other similar trajectories, the boundary region of the trajectory, or the space-transformed trajectory.
Compared with the distance-based compression algorithms, there are few methods based on ensemble features. Ref. [35] compresses trajectories using convex hulls. The authors establish a virtual coordinate system with the starting point as the origin and a rectangular boundary around the trajectory, and draw two boundary lines in each quadrant according to the direction of the trajectory. The rectangle and boundary lines form a convex hull, and the coordinate points within this constraining convex hull are compressed.
Ref. [36] designs a trajectory similarity measurement method based on interpolation, in which the adopted method is similar to clustering. For each trajectory, a similar reference trajectory is found, and only the points that differ from the reference trajectory are retained, while the similar points are removed.
In [37], a contour-preserving algorithm for trajectory compression is proposed, which compresses the trajectory while keeping its contour as far as possible. The algorithm divides the trajectory into multiple open windows, determines the main direction of each open window, and then compresses the trajectory points that deviate from the main direction.
Ref. [38] clusters all of the locations, matches the clustering centers onto the road networks, and searches for semantic events on the trajectory, such as parking, road switching, and destination arrival, removing the random noise by preserving only the points with semantic information.
Ref. [39] regards the trajectories as time series, establishes linear equations of time and position, and maps the positions into the parameter space of the equations by Hough transformation. Compression can be achieved by reducing the three-dimensional data to Hough space, in which the number of dual points is less than the number of points in the original trajectory.
Preliminary
We give the basic concepts related to the algorithm: Definitions 1 and 2 are the inputs of our algorithm, Definition 3 is the output, and Definitions 4-9 are the evaluation indices.
Definition 1 (raw trajectory). The raw trajectory can be regarded as a sequence of locations (x_i, y_i) and attributes, as shown in (1). The attributes are only the speed (s_i) and direction (d_i) of the trajectory.
Definition 2 (main direction). A trajectory can be divided into several segments according to its driving direction. The main direction of a vehicle is the direction of its road, while the main direction of a vessel trajectory is fuzzy; different references give different definitions [6][7][8]. Our compression method makes no distinction between vessels and vehicles, and the road network is not used, so the main direction is obtained from the raw trajectory only. Its general definition is shown in (2).

main_direction(p_1, ..., p_n) = ( Σ_{i=1}^{n-1} direction(p_i, p_{i+1}) · length(p_i, p_{i+1}) ) / Σ_{i=1}^{n-1} length(p_i, p_{i+1})   (2)

p_i is the ith point in the trajectory, length is the distance function, and direction is the azimuth function. According to Equation (2), the main direction of a segment composed of n points is the average of the directions of each part weighted by the length of each part.
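A direct reading of Equation (2) in code (illustrative only; azimuths are averaged naively, so headings wrapping across 0°/360° would need extra care in practice):

```python
import math

def main_direction(points):
    """Length-weighted average azimuth of a point sequence, per Eq. (2).
    Azimuth is measured clockwise from north, in degrees."""
    num = den = 0.0
    for p, q in zip(points, points[1:]):
        seg_len = math.hypot(q[0] - p[0], q[1] - p[1])
        azimuth = math.degrees(math.atan2(q[0] - p[0], q[1] - p[1])) % 360.0
        num += azimuth * seg_len
        den += seg_len
    return num / den if den > 0 else 0.0
```

For a trajectory heading due east, every segment has azimuth 90°, so the weighted average is 90° regardless of segment lengths.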
Definition 3 (simplification and approximation).
There are two types of trajectory compression task: simplification and approximation. Simplification means that, given a trajectory, a subsequence of the trajectory is generated, as shown in Figure 2A. Approximation means generating a new sequence, as shown in Figure 2B, where the two endpoints are the same.
Figure 2. (A) simplification; (B) approximation.

Compression efficiency refers to the number of bytes that can be compressed per unit time.
Definition 6 (compression accuracy). The DTW algorithm [40] is used to calculate the distance between the trajectory before and after compression, which reflects their degree of dissimilarity. The compression accuracy is defined as 1 minus this distance divided by the maximum DTW distance, which is the distance when maximum compression occurs (preserving only the start and end points). The compression accuracy is between 0 and 1.
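Definition 6 can be made concrete with a textbook DTW implementation (a sketch; the paper's exact normalization may differ in detail):

```python
import math

def dtw_distance(a, b):
    """O(|a||b|) dynamic-time-warping distance between two 2-D point
    sequences, with Euclidean distance as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.hypot(a[i - 1][0] - b[j - 1][0],
                              a[i - 1][1] - b[j - 1][1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def compression_accuracy(raw, simplified):
    """Definition 6: 1 minus the DTW distance to the simplified trajectory,
    normalized by the DTW distance at maximum compression (endpoints only)."""
    max_d = dtw_distance(raw, [raw[0], raw[-1]])
    if max_d == 0:
        return 1.0
    return 1.0 - dtw_distance(raw, simplified) / max_d
```

An unchanged trajectory therefore has accuracy 1, and keeping only the two endpoints gives accuracy 0, matching the stated range of the definition.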
Definition 8 (length ratio). The ratio of the sum of the lengths between adjacent points of the trajectory after simplification to the sum of the lengths between adjacent points of the original trajectory.
Definition 9 (curvature ratio). The curvature is the sum of the angles between segments; the curvature ratio is the ratio of the angle sum after simplification to the angle sum of the original trajectory.
Algorithm Description
The input of the Ensemble-Compression algorithm includes the original trajectory and a set of elastic parameters, and the output is either of the two compression results, although in real applications the focus is on simplification. Algorithm 1 is shown in the following.
Discretization
The first step after initialization is to discretize the trajectory together with its bounding rectangle. We mesh the minimum bounding rectangle (line 3), merge the mesh nodes and trajectory points (line 4), and then divide the elastomer into small units (line 5), as shown in Figure 3. We use Delaunay triangulation [41] and the two-dimensional advancing front technique (AFT) [42] to complete the discretization.
Element Analysis
The main task of element analysis is to generate the element stiffness matrix and solve the element matrix equation.
The element stiffness equation represents the relationship between the stress and the triangular element, so that we can calculate the displacement of any point in the triangular element for a given magnitude and direction of the stress. For a single triangular element, Equation (3) shows the stiffness equation [41]:

F^(e) = ( t · θ · B^T · D · B ) · δ^(e)   (3)
where t is the thickness of the element, θ is the element's area, and δ^(e) is the element displacement array, as shown in Equation (4). The three nodes of each triangular element (Figure 2) after triangulation are coded as i, j, and m. We take counterclockwise as the forward direction and establish the element displacement array.
F^(e), the stress column matrix of the nodes, is shown in Equation (5).
D is the elastic matrix, as shown in Equation (6), in which E is the modulus of elasticity and u is Poisson's ratio.
B_i, B_j, and B_m are the strain matrices of the element nodes, as defined in Equation (7) [41], in which c_x and b_x are the stress coefficient constants.
Equation (4) gives the displacements of the three nodes i, j, and m of the element under the stress, while the displacement of any point (x, y) in the triangular element can be obtained by solving Equation (8).
The six coefficients in the formula can be obtained by the positions and displacements of nodes i, j, and m.
The parenthesized term of Equation (3) is referred to as the stiffness matrix. An element of the stiffness matrix is the stress that must be applied to a node of the element so that this node has unit displacement while the others are zero.
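Under the plane-stress assumption, Equations (3), (6), and (7) combine into a 6×6 element stiffness matrix K^(e) = t·θ·BᵀDB. The sketch below builds it for one counterclockwise triangle (an illustration of the standard linear-triangle formulation, not the authors' code; E and nu default to the paper's experimental values, while the thickness t = 1 is an assumption):

```python
def triangle_stiffness(xy, E=2.0, nu=0.2, t=1.0):
    """6x6 stiffness matrix K = t * A * B^T D B for a 3-node linear
    triangle under plane stress. xy = [(x_i,y_i),(x_j,y_j),(x_m,y_m)],
    ordered counterclockwise."""
    (xi, yi), (xj, yj), (xm, ym) = xy
    area = 0.5 * ((xj - xi) * (ym - yi) - (xm - xi) * (yj - yi))
    # shape-function coefficients b_k, c_k (constant over the element)
    b = [yj - ym, ym - yi, yi - yj]
    c = [xm - xj, xi - xm, xj - xi]
    # strain matrix B (3x6), entries scaled by 1/(2A)
    B = [[0.0] * 6 for _ in range(3)]
    for k in range(3):
        B[0][2 * k] = b[k] / (2 * area)
        B[1][2 * k + 1] = c[k] / (2 * area)
        B[2][2 * k] = c[k] / (2 * area)
        B[2][2 * k + 1] = b[k] / (2 * area)
    # plane-stress elastic matrix D, cf. Eq. (6)
    f = E / (1 - nu ** 2)
    D = [[f, f * nu, 0.0], [f * nu, f, 0.0], [0.0, 0.0, f * (1 - nu) / 2]]
    # K = t * A * B^T D B
    DB = [[sum(D[r][s] * B[s][col] for s in range(3)) for col in range(6)]
          for r in range(3)]
    return [[t * area * sum(B[s][r] * DB[s][col] for s in range(3))
             for col in range(6)] for r in range(6)]
```

Two sanity checks follow directly from the theory: K is symmetric, and a rigid-body translation (equal displacement at all three nodes) produces no nodal forces. Assembling these per-element matrices into the global matrix of Equation (9) then amounts to summing each entry into the rows and columns of its global node indices.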
Suppose that the whole region is divided into m elements and n nodes; then the overall node displacement δ and the overall stress matrix F are both 2n × 1 matrices. Equation (9) shows the overall equilibrium equation of the triangular element analysis:

K δ = F   (9)

where K is the 2n × 2n global stiffness matrix assembled from the element matrices. The preconditioned conjugate gradient method [43] is used to solve the stiffness matrix equation, and SSOR [43] is chosen as the preconditioning matrix.
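Equation (9) is a symmetric positive-definite system, which is why conjugate gradients apply. A minimal unpreconditioned CG iteration looks like this (the paper additionally uses an SSOR preconditioner, omitted here for brevity; dense lists stand in for the sparse global matrix):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Plain conjugate gradient for a symmetric positive-definite
    system A x = b, starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual r = b - A x (x = 0)
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most 2n steps; preconditioning (as with SSOR here) reduces the iteration count on ill-conditioned stiffness matrices.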
Semantic Polymerization
The displacement of each point can be obtained using the above method. The stress causes spatial competition among the trajectory points. The simplified trajectory can be obtained by screening a subset of the trajectory points with a threshold, and the approximate trajectory can be obtained if the displacements of the subset are taken directly.
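The screening step can be sketched as follows (a hypothetical helper: it assumes the post-stress positions in `displaced` have already been solved for, and keeps a point only when its displaced distance to the next point exceeds the chosen percentile):

```python
import math

def percentile_filter(points, displaced, percentile):
    """Keep the points whose post-displacement distance to the next
    point exceeds the given percentile threshold; short distances mean
    the stress squeezed neighbouring points together, so they are dropped."""
    dists = [math.hypot(displaced[i + 1][0] - displaced[i][0],
                        displaced[i + 1][1] - displaced[i][1])
             for i in range(len(points) - 1)]
    hi = max(dists) or 1.0
    norm = [d / hi for d in dists]                        # normalize to [0, 1]
    threshold = sorted(norm)[int(percentile * (len(norm) - 1))]
    kept = [points[0]]
    kept += [points[i + 1] for i in range(len(dists)) if norm[i] > threshold]
    if kept[-1] != points[-1]:
        kept.append(points[-1])                           # always keep the endpoint
    return kept
```

Larger stress collapses more neighbouring points, and a larger percentile raises the threshold, which is how the two parameters jointly control the compression ratio.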
Based on the method shown in Section 3.4, the trajectory point displacements can be obtained (line 8). We calculate the distance between adjacent points after displacement (line 9), normalize all of the distances (line 9), and then sort them in ascending order. The points whose distance is less than the percentile threshold after sorting become candidate filter points (lines 10-11). Some points with key semantic information, such as direction or speed changes, would be deleted if only the distance threshold were used, so we implemented a trajectory segmentation method based on the main direction. When two points compete due to close distance, we keep those that differ from the main direction, as well as the stop points. The algorithm is shown in the following.
The input of Algorithm 2 is the trajectory sequence defined by Equation (1), in which each point is a five-dimensional array consisting of coordinates, time, speed, and direction. Algorithm 2 segments the trajectory according to the main direction of Equation (2), adding four dimensions to the initial five. The sixth dimension records the length from the end of the last segment to the current point, which is the denominator of Equation (2). The seventh dimension records the product of the direction from the previous point to the current point and the sixth dimension, which is the numerator of Equation (2). The eighth dimension records the ratio of the seventh dimension to the sixth dimension, that is, the main direction obtained by taking the current point as the splitting point. The ninth dimension records the difference of the eighth dimension between the current point and the previous point, that is, the deflection of the adjacent main directions. We take the position with the largest difference of main direction as the candidate splitting point. If the averages of the main directions on the two sides of the candidate point differ greatly, the point is regarded as the end point of a new segment.
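A single-split sketch of this bookkeeping (illustrative only; the running arrays below replace the paper's extra dimensions, and the acceptance threshold `min_diff` is an assumption, not the paper's exact criterion):

```python
import math

def split_by_main_direction(points, min_diff=30.0):
    """The point where the running length-weighted main direction (Eq. 2)
    deflects the most becomes a candidate split; it is accepted when the
    main directions of the two sides differ by more than min_diff degrees."""
    def main_dir(pts):
        num = den = 0.0
        for p, q in zip(pts, pts[1:]):
            seg_len = math.hypot(q[0] - p[0], q[1] - p[1])
            azimuth = math.degrees(math.atan2(q[0] - p[0], q[1] - p[1])) % 360.0
            num, den = num + azimuth * seg_len, den + seg_len
        return num / den if den > 0 else 0.0

    if len(points) < 4:
        return [points]
    # running main direction up to each point, and its deflection
    running = [main_dir(points[:i + 1]) for i in range(1, len(points))]
    deflect = [abs(b - a) for a, b in zip(running, running[1:])]
    k = max(range(len(deflect)), key=deflect.__getitem__) + 2  # split index
    left, right = points[:k], points[k - 1:]                   # share the corner
    if abs(main_dir(left) - main_dir(right)) > min_diff:
        return [left, right]
    return [points]
```

On an L-shaped trajectory the largest deflection lands at the corner, so the two legs come back as separate segments, each with a homogeneous main direction.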
Experimental Setup
We select GPS data of taxis in Shanghai [44] and AIS data of vessels crossing the East China Sea [7] as the experimental data. The taxi data set includes the 24-h trajectories of 4310 taxis, with an average sampling interval of 15 s. The vessel data set consists of the 120-h trajectories of 10,927 vessels, with an average sampling interval of 10 s.
We compare our algorithm with two baselines [36,45]; the former is an offline compression algorithm (named OVTC), and the latter is an online compression algorithm (named SPM). In [36], much gesture information is considered in trajectory compression, such as static points, turn points, speed-change points, and break points, so the method has many compression parameters, and the optimal intervals of these parameters are given. In [45], the sliding window, which is popular in online compression, is improved, and a dynamically changing reference point is introduced to improve the compression efficiency. The sliding window size and the threshold distance are the key parameters of the algorithm.
In the experiments, we fixed some parameters: an elastic modulus of 2, a Poisson's ratio of 0.2, a mass density of 1.15, a maximum iteration number of 100, an error threshold of 10^-6, and a relaxation factor of 1.
The percentiles used in the experiment are 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, and 0.95. The external force factors (between 0 and 1) are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1, and the direction of the force points to the center of gravity of the trajectory. The effects of different parameters on the compression indicators were observed. The experimental environment is an Intel(R) Core(TM) i5 processor, 4 GB memory, Mac Darwin Kernel Version 17.7.0, and the development language is MATLAB R2019.
Comparative Study
The first concern is whether stable and data-independent compression accuracy can be obtained for a specific combination of parameters.
In Figure 4, the horizontal axis is the index of the 250 groups of parameters, and the vertical axis is the standard deviation of the accuracy obtained by compressing all of the trajectories with the corresponding parameters. The algorithm OVTC has the greatest uncertainty between compression parameters and accuracy, which may be due to the simultaneous use of direction, velocity, and distance thresholds. The algorithm SPM has both low standard deviations (left half) and high standard deviations (right half). The finite element method has a small standard deviation of accuracy under all parameters, which means that, compared with the other two methods, it can obtain a definite accuracy under given compression parameters. Although the standard deviations differ, the average accuracies of the three algorithms are very close: OVTC is 0.95159, SPM is 0.96056, and FEM is 0.943888.
Compression efficiency is another concern. We calculate the minimum, average, and maximum compression rates of the three algorithms under all parameters, as shown in Table 1. Our method has a lower compression rate than the other two. The reason is that the finite element-based method needs to solve the equations, while the other two methods only filter point by point. The average compression rate of FEM shown in Table 1 is 243.056 kbps, which can meet the needs of the real application scenarios described later (Section 4.2.5). The technologies of concurrent services and caching in modern applications also make up for the deficiency in compression rate.
Influence of Percentile and Stress Factor on Compression Ratio and Compression Rate
The two data sets are compressed with different parameters, and Figure 5 shows the results.
Figure 5. (A) Taxi data set; (B) Vessel data set.

Figure 5 shows that the compression ratio increases with the increase of the external force factor and with the decrease of the percentile. The influence of the percentile on the compression ratio is more obvious: when the stress factor remains unchanged, increasing the percentile can increase the compression ratio by 0.68. Figure 6 shows the effect of different parameters on the compression rate. Different from the compression ratio, the compression rate is affected by both the stress factor and the percentile, and it is close to its maximum value when the stress factor is 0.6.
Figure 6. (A) Taix data set; (B) Vessel data set.
Compression Ratio and Compression Error
We use the DTW algorithm [40] to calculate the distance between the trajectory before and after compression, which reflects the degree of compression error. The normalized distance can be considered the relative error ratio caused by compression; the relative error ratio is the ratio of each DTW distance to the maximum DTW distance. The experiments show the influence of the various parameters on the compression error, as shown in Figure 7: the error ratio increases with the percentile. To observe the relationship between the compression ratio and the error ratio, the stress factor was fixed at 0.6, as shown in Figure 8. For every 1% increase in the compression ratio, the error rate increases by 0.47%.
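The DTW-based error measure can be sketched in Python. This is a textbook O(nm) dynamic-programming DTW over 2-D points; the function names and the point format are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping distance between two 2-D point sequences."""
    n, m = len(traj_a), len(traj_b)
    # cost[i, j] = DTW distance between the first i points of traj_a
    # and the first j points of traj_b
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.hypot(traj_a[i - 1][0] - traj_b[j - 1][0],
                         traj_a[i - 1][1] - traj_b[j - 1][1])
            cost[i, j] = d + min(cost[i - 1, j],      # step in traj_a only
                                 cost[i, j - 1],      # step in traj_b only
                                 cost[i - 1, j - 1])  # step in both
    return cost[n, m]

def relative_error_ratio(dtw_dists):
    """Each DTW distance divided by the maximum, as in the evaluation."""
    mx = max(dtw_dists)
    return [d / mx for d in dtw_dists]
```

An identical pair of trajectories yields a DTW distance of zero, so a compressed trajectory's error ratio directly reflects how far it has drifted from the raw one.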
The Influence of Different Parameters on Other Indicators
We study the effect of the algorithm on the length ratio and the curvature ratio. In the experiment, we fixed the stress factor as 0.6, and the results are shown in Figure 9. length & curvature ratio length ratio tortuosity ratio Figure 9. The correlation between length ratio, curvature ratio, and compression ratio.
As the compression ratio decreases, the length ratio and curvature ratio increase. Within a compression ratio of [20%, 40%], the length ratio and the curvature ratio are maintained at a high level, which conforms to the geometric significance of the simplification [46]. It can also be seen that, even when the compression ratio is small, the length ratio and the curvature ratio remain high, indicating that the distortion of the resulting curve is within a reasonable range.
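These two indicators can be computed as follows. The exact definitions used here (each ratio as compressed over raw, and tortuosity as path length over straight-line chord length) are our assumptions for illustration:

```python
import math

def path_length(traj):
    """Sum of segment lengths along a polyline of (x, y) points."""
    return sum(math.dist(p, q) for p, q in zip(traj, traj[1:]))

def tortuosity(traj):
    """Path length divided by the straight-line distance between endpoints."""
    chord = math.dist(traj[0], traj[-1])
    return path_length(traj) / chord if chord > 0 else 1.0

def length_ratio(raw, compressed):
    return path_length(compressed) / path_length(raw)

def tortuosity_ratio(raw, compressed):
    return tortuosity(compressed) / tortuosity(raw)
```

A ratio close to 1 means the compressed trajectory preserves the geometric character of the raw one.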
Application Scenarios
We develop a standalone daemon service, which is responsible for the management of multi-source spatial data, including vessel trajectories, vehicle trajectories, and RFID, and which provides data query interfaces to multiple third-party applications. These third-party applications include online management systems, safety early warning systems, waterway management systems, etc. The data query service is required to be generic and application independent. Figure 10 shows the system overview. The trajectory compression service is one of the core components of the system; it implements concurrent processing and puts compressed trajectories into an LRU-based cache for performance.
These applications submit the moving object id, time period, and trajectory precision (between 0 and 1), and the service returns the required trajectory. The precision here is interpreted as the accuracy of our method; that is, a compressed trajectory composed of only the begin and end points of the raw trajectory has the minimum accuracy, and the raw trajectory itself has the highest accuracy. Through the previous experiments, we can obtain the corresponding compression parameters for each accuracy interval, and thus select the appropriate parameters to carry out the compression. Figure 11 shows example results of the same trajectory with different precision in different applications: (A) is the trajectory with a compression ratio of 0.53 at a map scale of 1:200, and (B) is the trajectory with a compression ratio of 0.26 at a scale of 1:500 (zoomed to 1:200 to make it the same size as (A)). The larger the display scale, the smaller the compression ratio, and the more detail can be shown.
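The precision-to-parameter lookup inside the service can be sketched as below. The table values are purely illustrative placeholders, not the calibrated values from the experiments:

```python
# Hypothetical lookup calibrated offline from the accuracy experiments:
# requested precision threshold -> (stress_factor, percentile).
# The numbers here are illustrative only.
PARAM_TABLE = [
    (0.90, (0.6, 95)),   # precision >= 0.90: compress very lightly
    (0.70, (0.6, 80)),
    (0.50, (0.4, 60)),
    (0.00, (0.2, 40)),   # lowest precision: compress aggressively
]

def select_params(precision):
    """Map a requested precision in [0, 1] to compression parameters."""
    for threshold, params in PARAM_TABLE:
        if precision >= threshold:
            return params
    return PARAM_TABLE[-1][1]
```

The service would then run the finite element compression with the selected parameters and cache the result keyed on (object id, time period, precision interval).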
Discussion
Although the algorithm can meet the needs of the above applications, it also has some open problems. The most important one is that it is actually a fuzzy compression strategy: it does not accurately determine whether each point is noise, so it cannot be applied to safety-critical areas without strict theoretical proof. Another problem is that it only considers the main direction, without using speed or road network data. In the implementation, the stress on each triangular element is the same, and its direction always points to the geometric center of the trajectory; it is not clear whether the result would differ if the stress magnitude and direction changed with the point distribution density. All of these need further study in the future.
Conclusions
In this work, a novel trajectory compression algorithm that is based on the finite element method is proposed, in which the trajectory is regarded as an elastomer that is deformed under external forces, and the trajectory is compressed with elasticity theory. The main direction segmentation algorithm is combined to achieve compression while preserving the key position information. The experiments show that our method can provide a more stable, data-independent compression ratio under the given stress parameters.
Accuracy is currently the only basis for parameter selection in compression algorithms, which is far from enough for rich practical applications. The ensemble compression algorithm based on the finite element method is only a preliminary attempt to realize a dynamic compression service; providing customized trajectories, rather than a fixed compression method, should become an important research direction in related fields.
Author Contributions: Writing-draft, review and editing, Haibo Chen; software, Xin Chen. All authors have read and agreed to the published version of the manuscript.
Incidence of Oral Lichen Planus in Perimenopausal Women: A Cross-sectional Study in Western Uttar Pradesh Population
© 2017 Journal of Mid-life Health | Published by Wolters Kluwer - Medknow. Background: Hormonal fluctuations during menopause lead to endocrine changes in women, especially in their sex steroid hormone production. Studies have documented the role of estrogen and progesterone (Pg) in autoimmune disorders such as multiple sclerosis, systemic lupus erythematosus, and rheumatoid arthritis. Lichen planus (LP), an autoimmune disorder seen frequently in perimenopausal women, may also be affected by sex steroid hormones, but no direct relationship has been established yet. Aim: The aim of this study is to find the incidence of oral LP (OLP) in perimenopausal women and evaluate the factors associated with it. Materials and Methods: This cross-sectional study was conducted over a period of 1 year. All the perimenopausal women (44.69 ± 3.79 years) who came to the dental outpatient department were evaluated for the presence of LP and the various factors associated with it. The Depression Anxiety Stress Scale-21 questionnaire was used for psychometric evaluation of the perimenopausal women. Results: According to our study, the incidence of LP in perimenopausal women was 10.91%, which is higher than the incidence of LP in the general population, i.e., 0.5% to 2.0%. The incidence of LP increased with the severity of depression in perimenopausal women (P = 0.000). Conclusion: The incidence of OLP is higher in perimenopausal women than in the general population and increases significantly with increase in the severity of depression. LP in perimenopausal women may be mediated by declined levels of estrogen and Pg directly, or indirectly by causing depression that can trigger LP.
Introduction
The World Health Organization (WHO) has defined three age stages during the midlife age for women: (1) Menopause is the year of the final physiological menstrual period, retrospectively designated as 1 year without flow (unrelated to pregnancy or therapy) in women aged ≥40 years. (2) Premenopause begins at ages 35-39 years; during this stage, decreased fertility and fecundity appear as the first manifestations of ovarian follicle depletion and dysfunction, despite the absence of menstrual changes. (3) Perimenopause includes the period of years immediately before menopause and the 1st year after menopause. [1] Perimenopause is the time of irregular menstrual cycles marking the transition to irreversible change, with hormone levels showing a quick decline beginning 2 years before the final menstrual period and reaching stability 2 years afterward. [3] Progesterone (Pg) (P4) levels become insufficient or absent. [4] Studies have documented the role of estrogen and Pg in autoimmune disorders such as multiple sclerosis (MS), systemic lupus erythematosus, and rheumatoid arthritis (RA). Lichen planus (LP), a chronic, autoimmune, mucocutaneous, psychosocial disease that usually presents in middle-aged females, may also be affected by sex steroid hormones, but no direct relationship has been established yet. [1] LP is estimated to affect 0.5%-2.0% of the general population, with a prevalence of 2.6% in the Indian population. [5] Our study was an endeavor to find the incidence of oral LP (OLP) in perimenopausal women and evaluate the various factors associated with it, such as psychosocial factors that may play a role in the etiology of the disease. We also made an effort to explain the influence of the fluctuating sex steroid hormones of perimenopausal women on LP.
Materials and Methods
This cross-sectional study was conducted in the Department of Oral Medicine and Radiology. The study protocol was approved by the Institutional Ethical Committee. The duration of the present study was 12 months and included 1576 perimenopausal women (44.69 ± 3.79 years) [2] who reported to our department from January 2016 to December 2016, out of which 172 were clinically diagnosed with OLP. The clinical diagnostic criteria for oral lesions used in this study were as follows: (i) the presence of keratotic, pinhead-sized, white, slightly elevated papules (papular LP), which may be discrete or arranged in reticular (reticular LP) or plaque-like (plaque-like LP) configurations; (ii) atrophic LP, characterized by thinning of the epithelium leading to the appearance of atrophic red areas within the white lesions; (iii) erosive (ulcerative) LP, characterized by areas of well-defined ulceration within the abovementioned lesions; and (iv) bullous LP, characterized by the presence or development of bullous areas within the abovementioned lesions. [5] Exclusion criteria included the use of hormone replacement therapy, any systemic steroids, immunosuppressive drugs, or nonsteroidal anti-inflammatory drugs within the last 4 weeks and the use of topical medications within the last 2 weeks. Other exclusion criteria included tobacco use, the presence of any known systemic diseases, other dermatologic diseases affecting immune system, and any malignancy. Patients receiving any medication that can cause lichenoid reaction such as antihypertensive drugs and oral hypoglycemics were also excluded from the study.
All the perimenopausal women underwent a psychological evaluation using the Depression Anxiety Stress Scale-21 (DASS-21). Other factors such as the patient's residential area, menstrual irregularities, extraoral and intraoral location of lesions, and type of intraoral lesion were evaluated in LP patients.
Patients were also evaluated for the presence of hypertension, diabetes, thyroid disorders, and hepatitis B virus (HBV) and hepatitis C virus (HCV). They were also asked to get liver function test (LFT) done.
Statistical analysis of the data was performed using the Statistical Package for Social Sciences (version 17.0; SPSS, Inc., Chicago, IL, USA). Chi-square test was used for nonparametric values. A probability (P) <0.05 was considered to be significant and P < 0.001 was considered to be highly significant.
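The nonparametric test used here (run in SPSS by the authors) is the Pearson chi-square test on a contingency table. A minimal pure-Python sketch of the statistic is shown below; the resulting value would then be compared against the usual critical value for the table's degrees of freedom (e.g., 3.841 for df = 1 at α = 0.05):

```python
def chi_square_stat(table):
    """Pearson chi-square statistic and df for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / total
            stat += (observed - expected) ** 2 / expected
    df = (len(rows) - 1) * (len(cols) - 1)
    return stat, df
```

A perfectly balanced table yields a statistic of zero (no association); larger values indicate stronger evidence of association between the row and column variables.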
Results
A total of 9100 females reported to our department from January 2016 to December 2016, out of which 1576 were perimenopausal women. About 10.91% of the perimenopausal women suffered from OLP.
Psychological state
Incidence of LP significantly increased with the severity of depression in perimenopausal women (P = 0.000) [ Table 1].
According to the DASS, depression was severe in 55.2% patients, moderate in 29.7%, very severe in 12.8%, and mild in 2.3% OLP patients. These differences were statistically significant (P = 0.000).
Anxiety was severe in 41.3% patients, mild in 26.7%, moderate in 20.9%, and very severe in 11% patients.
There was moderate stress in 69.8% patients, severe in 21.5%, mild in 7.6%, and very severe in 1.2% patients.
Menstrual cycle
About 73.8% of patients reported menstrual irregularities while 26.2% of patients did not notice any significant change in menstrual cycle.
Rural, suburban, urban areas and oral lichen planus cases
Maximum, i.e., 46.5% of patients belonged to urban areas, 34.3% to suburban areas, and minimum, i.e., 19.2% to rural areas.
Extraoral location of lesion in oral lichen planus cases
Of all the perimenopausal women with OLP who reported to our department, skin lesions were present in only 6.4% cases. Nails were affected in 4.1% and lips in 7% patients. About 2.3% of patients had genital lesions. Nearly 0.6% of the patients had lesions on the scalp.
Visual analog scale score in oral lichen planus cases
Pain in the oral cavity associated with the lesions was assessed using visual analog scale and a rating of 8 was given by 34.3% patients, 7 by 27.9%, 9 by 11%, 0 by 5.8%, 3 by 5.2%, 4 by 5.2%, 5 by 5.2%, and 6 by 5.2%.
Intraoral subtypes of lichen planus in oral lichen planus cases
Out of the six subtypes of LP, the most common was reticular LP, found in 70.3% cases. Erosive LP was seen in 57.6% cases, atrophic in 38.4% cases, plaque-like in 16.3% cases, bullous in 6.4%, and papular in 5.8% cases.
Intraoral location of lesion in oral lichen planus cases
Most common location in the oral cavity was the buccal mucosa which was involved in 89% cases followed by buccal vestibule (61.6%), tongue (16.3%), floor of the mouth (5.2%), and palate (5.2%). Gingival desquamation was present in 68.6% cases.
Hepatitis B virus and hepatitis C virus
None of the patients were positive for HBV, but 1.2% of patients were positive for HCV.
Discussion
According to our study, the incidence of LP in perimenopausal women was 10.91%, which is higher than the incidence of LP in the general population, i.e., 0.5%-2.0%. These results can be explained by the etiopathogenesis of LP and the effect on the immune system of estrogen and Pg, which fluctuate and eventually decline during menopause.
Cell-mediated immunity plays an important part in the pathogenesis of OLP. OLP probably results from an immunologically induced degeneration of basal layer and is characterized by cytotoxic CD8+ cell response on modified keratinocyte surface antigen. [6] Langerhans cells are increased in OLP lesions, and MHC class II expression is upregulated. Langerhans cells probably mediate the MHC class II antigen presentation in OLP. There is MHC class II antigen presentation to CD4+ helper T-cells, followed by keratinocyte apoptosis triggered by CD8+ cytotoxic T-cells. [6] Sex hormones are known to play a role in the immune response of the body. Estrogens boost the humoral immunity but have a different impact on cell-mediated immunity which actually plays the main part in pathogenesis of OLP. Estrogen has been shown to modulate all subsets of T-cells that include CD4+ (Th1, Th2, Th17, and Tregs) and CD8+ cells. Estrogen promotes the expansion and frequency of Treg cells, which play a crucial role in downregulating immune responses. Protective effects of estrogen in autoimmune conditions such as MS and RA are believed to be due to a combined result of estrogen-mediated Treg expansion and activation. [7] Androgens and Pg are natural immune suppressors. In vivo and in vitro evidence suggest that Pg can suppress CD4+ T-cell proliferation and Th1/Th17 differentiation and effector functions. In contrast, Pg can enhance Th2 and Treg differentiation. [8] White et al. found that the cytolytic activity of CD3+ CD8+ T-cells in the uterine lining of women in the secretory phase of the menstrual cycle (Pg + E2 effects) was significantly reduced compared to that in the proliferative phase (E2 dominant), signifying Pg and E2 together suppress cytotoxicity. [9] As the level of estrogen and Pg fluctuates and finally goes down, so does the protective effect of these hormones and increases the chances of LP.
In our study, incidence of LP significantly increased with the severity of depression in perimenopausal women, so depression appears to play a role in the etiopathogenesis of LP.
It has been documented that perimenopausal women having symptoms of depression have lower plasma estrone levels than nondepressed perimenopausal women. Studies have elucidated that depressive symptoms were more common in perimenopausal than postmenopausal women. The majority of depressive episodes occurred during the late menopause transition, which is characterized by estradiol "withdrawal" relative to either the postmenopause or the early perimenopause, suggesting an endocrine trigger related to the perimenopause in the onset of perimenopausal depression. [10] Existing data suggest that the antidepressant effects of estradiol are mediated by estrogen receptor β and can be reversed by the coadministration of a 5HT1A receptor antagonist. Selective agonists of estrogen receptor β also have anxiolytic effects on behavioral tests of anxiety in rodents and decrease the HPA response to stress. Thus, behavioral studies in lower animals confirm that central nervous system function and behaviors relevant to affective adaptation and stress responsivity are modulated by ovarian steroids. [10] It is well known that psychological factors trigger and exacerbate LP. [11] There is a bidirectional interaction between the skin and the mind. Psychologically, the skin is an erogenous zone and a channel for emotional discharge. Hence, a skin disorder could be considered a manifestation of unexpressed anger or an inner conflict due to external stress. [12] Depression is characterized by low positivity, loss of self-esteem and incentive, dysphoric mood (e.g., feelings of sadness or worthlessness), and a sense of hopelessness. Such psychological distress can precipitate a dermatological or mucosal disorder, such as LP. [13] This could also explain the correlation of depression score with LP in the present study.
Transdermal estradiol replacement has shown positive results in the effective treatment of depression in perimenopausal women. [14] It is possible that such a therapy may be beneficial in depression-induced OLP as well.
In our study, majority of the patients reported menstrual irregularities which are characteristic of perimenopausal phase.
In our study, maximum patients belonged to urban areas and minimum belonged to rural areas. In OLP, psychogenic factors seem to play an important role, and increased level of depression, anxiety, and stress has been reported in urban areas due to increased stressors and factors such as overcrowded and polluted environment, high levels of violence, and reduced social support which can result in development of more cases of OLP. [15,16] Of all the perimenopausal women with OLP who reported to our department, skin lesions were present in only 6.4% cases in contrast to 15% that is reported in general population. Similarly, in our study, only 2.3% of patients reported genital lesions in contrast to previously reported 20% in general population. [17] This discrepancy can be due to the fact that patients who reported to our department were concerned mainly with the oral cavity. Either they did not have extraoral lesions at all or they were not prominent enough to be noticed by the patient. Cutaneous and genital involvement of LP can precede, arise concurrently with, or appear after the development of OLP. We, as specialists in oral medicine, should carefully examine the skin of patients with OLP, inquire regarding signs/symptoms of genital lesions, and when relevant, referral to an appropriate specialist should be carried out.
Multiple oral sites involvement and more than one type of OLP were common in our patients. Buccal mucosa was the most common site, and reticular type of OLP was the most common form followed by erosive and atrophic OLP. Maximum patients suffered from oral pain and burning sensation because erosive and atrophic forms were common which are known to be associated with pain.
A rare association between LP, diabetes mellitus, and hypertension was first reported by Grinspan in 1966. Because drug therapy for diabetes mellitus and hypertension is capable of producing lichenoid reactions of the oral mucosa, the question arises as to whether Grinspan's syndrome is an iatrogenically induced syndrome. [18] In our study, patients who were already on antihypertensives or therapy for diabetes were excluded from the study, and 10.46% of perimenopausal women with LP were diagnosed with both hypertension and diabetes which were previously undetected. Hence, in our study, Grinspan's syndrome was present and was not iatrogenically induced.
The concept of possible correlation between thyroid diseases and OLP has emerged from numerous reports of patients who were affected by both OLP and thyroid diseases. [19] In a study conducted by Unnikrishnan et al. in 2013, the overall prevalence of hypothyroidism was 10.95%, of which 7.48% of patients self-reported the condition, whereas 3.47% were previously undetected. [20] In our study, only 1.2% of patients were diagnosed with hypothyroidism and thus did not have any correlation with LP.
Previous studies have revealed a high prevalence of HCV-RNA in patients with LP. [21] In our patients, we found that 1.2% of patients had positive serum anti-HCV or serum HCV-RNA.
Conclusion
The incidence of OLP is higher in perimenopausal women than in general population and increases significantly with increase in the severity of depression. LP in perimenopausal women can be mediated by declined level of estrogen and Pg directly or indirectly through causing depression that can trigger LP. Transdermal estradiol replacement has shown positive results in the effective treatment of depression in perimenopausal women. Further studies are required to evaluate the effect of such a therapy in depression-induced OLP as well.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Generative Models for Statistical Parsing with Combinatory Categorial Grammar
This paper compares a number of generative probability models for a wide-coverage Combinatory Categorial Grammar (CCG) parser. These models are trained and tested on a corpus obtained by translating the Penn Treebank trees into CCG normal-form derivations. According to an evaluation of unlabeled word-word dependencies, our best model achieves a performance of 89.9%, comparable to the figures given by Collins (1999) for a linguistically less expressive grammar. In contrast to Gildea (2001), we find a significant improvement from modeling word-word dependencies.
Introduction
The currently best single-model statistical parser (Charniak, 1999) achieves Parseval scores of over 89% on the Penn Treebank. However, the grammar underlying the Penn Treebank is very permissive, and a parser can do well on the standard Parseval measures without committing itself on certain semantically significant decisions, such as predicting null elements arising from deletion or movement. The potential benefit of wide-coverage parsing with CCG lies in its more constrained grammar and its simple and semantically transparent capture of extraction and coordination.
We present a number of models over syntactic derivations of Combinatory Categorial Grammar (CCG, see Steedman (2000) and , this conference, for introduction), estimated from and tested on a translation of the Penn Treebank to a corpus of CCG normal-form derivations. CCG grammars are characterized by much larger category sets than standard Penn Treebank grammars, distinguishing for example between many classes of verbs with different subcategorization frames. As a result, the categorial lexicon extracted for this purpose from the training corpus has 1207 categories, compared with the 48 POS-tags of the Penn Treebank. On the other hand, grammar rules in CCG are limited to a small number of simple unary and binary combinatory schemata such as function application and composition. This results in a smaller and less overgenerating grammar than standard PCFGs (ca. 3,000 rules when instantiated with the above categories in sections 02-21, instead of 12,400 in the original Treebank representation (Collins, 1999)).
Evaluating a CCG parser
Since CCG produces unary and binary branching trees with a very fine-grained category set, CCG Parseval scores cannot be compared with scores of standard Treebank parsers. Therefore, we also evaluate performance using a dependency evaluation reported by Collins (1999), which counts word-word dependencies as determined by local trees and their labels. According to this metric, a local tree with parent node P, head daughter H and non-head daughter S (and position of S relative to P, i.e. left or right, which is implicit in CCG categories) defines a ⟨P, H, S⟩ dependency between the head word of S, w_S, and the head word of H, w_H. This measure is neutral with respect to the branching factor. Furthermore, as noted by Hockenmaier (2001), it does not penalize equivalent analyses of multiple modifiers.

Figure 1: A CCG derivation in our corpus

In the unlabeled case (where it only matters whether word a is a dependent of word b, not what the label of the local tree is which defines this dependency), scores can be compared across grammars with different sets of labels and different kinds of trees. In order to compare our performance with the parser of , we also evaluate our best model according to the dependency evaluation introduced for that parser. For further discussion we refer the reader to .
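The unlabeled variant of this evaluation can be sketched as follows, assuming dependencies have already been extracted as (head index, dependent index) pairs; this is our illustration, not the evaluation script used for the paper:

```python
def unlabeled_dep_score(gold_deps, pred_deps):
    """Precision and recall over word-word dependency pairs.

    Each dependency is a (head_index, dependent_index) tuple, so the
    score ignores the label of the local tree that induced it and is
    comparable across grammars with different label sets.
    """
    gold, pred = set(gold_deps), set(pred_deps)
    correct = len(gold & pred)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall
```

In the labeled variant, each tuple would additionally carry the ⟨P, H, S⟩ labels of the local tree, making the match criterion stricter.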
CCGbank-a CCG treebank
CCGbank is a corpus of CCG normal-form derivations obtained by translating the Penn Treebank trees using an algorithm described by .
Almost all types of construction (with the exception of gapping and UCP, "Unlike Coordinate Phrases") are covered by the translation procedure, which processes 98.3% of the sentences in the training corpus (WSJ sections 02-21) and 98.5% of the sentences in the test corpus (WSJ section 23). The grammar contains a set of type-changing rules similar to the lexical rules described in Carpenter (1992). Figure 1 shows a derivation taken from CCGbank. Categories, such as ((S\NP)/PP)/NP, encode unsaturated subcat frames. The complement-adjunct distinction is made explicit; for instance, as a nonexecutive director is marked up as PP-CLR in the Treebank, and hence treated as a PP complement of join, whereas Nov. 29 is marked up as an NP-TMP and therefore analyzed as a VP modifier. The -CLR tag is not in fact a very reliable indicator of whether a constituent should be treated as a complement, but the translation to CCG is automatic and must do the best it can with the information in the Treebank.
The verbal categories in CCGbank carry features distinguishing declarative verbs (and auxiliaries) from past participles in past tense, past participles for passive, bare infinitives and ing-forms.
There is a separate level for nouns and noun phrases, but, like the nonterminal NP in the Penn Treebank, noun phrases do not carry any number agreement. The derivations in CCGbank are "normal-form" in the sense that analyses involving the combinatory rules of type-raising and composition are only used when syntactically necessary. The models described here are all extensions of a very simple model which models derivations by a top-down tree-generating process. This model was originally described in Hockenmaier (2001), where it was applied to a preliminary version of CCGbank, and its definition is repeated here in the top row of Table 1. Given a (parent) node with category P, choose the expansion exp of P, where exp can be leaf (for lexical categories), unary (for unary expansions such as type-raising), left (for binary trees where the head daughter is left) or right (binary trees, head right). If P is a leaf node, generate its head word w. Otherwise, generate the category of its head daughter H. If P is binary branching, generate the category of its non-head daughter S (a complement or modifier of H).
Generative models of CCG derivations
The model itself includes no prior knowledge specific to CCG other than that it only allows unary and binary branching trees, and that the sets of nonterminals and terminals are not disjoint (hence the need to include leaf as a possible expansion, which acts as a stop probability).
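As a sketch (not the authors' implementation), the baseline generative process can be written as a recursive product of the conditional probabilities just described; the tree encoding and the dictionary-backed distributions are our own illustrative assumptions:

```python
def derivation_prob(node, p_exp, p_head, p_sister, p_word):
    """Probability of a derivation under the baseline top-down model.

    node: dict with 'cat', 'exp' in {'leaf', 'unary', 'left', 'right'},
    plus 'word' for leaves, or 'head' (and 'sister' if binary) subtrees.
    The p_* dicts stand in for the model's conditional distributions.
    """
    cat, exp = node['cat'], node['exp']
    prob = p_exp[(exp, cat)]                       # P(exp | P)
    if exp == 'leaf':
        return prob * p_word[(node['word'], cat)]  # P(w | P, exp=leaf)
    head = node['head']
    prob *= p_head[(head['cat'], cat, exp)]        # P(H | P, exp)
    prob *= derivation_prob(head, p_exp, p_head, p_sister, p_word)
    if exp in ('left', 'right'):                   # binary: generate sister
        prob *= p_sister[(node['sister']['cat'], cat, exp, head['cat'])]
        prob *= derivation_prob(node['sister'], p_exp, p_head, p_sister, p_word)
    return prob
```

The leaf case acts as the stop probability of the tree-generating process, exactly as described above.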
All the experiments reported in this section were conducted using sections 02-21 of CCGbank as training corpus, and section 23 as test corpus. We replace all rare words in the training data with their POS-tag. For all experiments reported here and in section 5, the frequency threshold was set to 5. Like Collins (1999), we assume that the test data is POS-tagged, and can therefore replace unknown words in the test data with their POS-tag, which is more appropriate for a formalism like CCG with a large set of lexical categories than one generic token for all unknown words.
The performance of the baseline model is shown in the top row of Table 3. For six out of the 2379 sentences in our test corpus we do not get a parse. The reason is that a lexicon consisting of the word-category pairs observed in the training corpus does not contain all the entries required to parse the test corpus. We discuss a simple, but imperfect, solution to this problem in section 7.
Extending the baseline model
State-of-the-art statistical parsers use many other features, or conditioning variables, such as head words, subcategorization frames, distance measures and grandparent nodes. We too can extend the baseline model described in the previous section by including more features. Like the models of Goodman (1997), the additional features in our model are generated probabilistically, whereas in the parser of Collins (1997) distance measures are assumed to be a function of the already generated structure and are not generated explicitly.
In order to estimate the conditional probabilities of our model, we recursively smooth empirical estimates ê_i of specific conditional distributions with (possibly smoothed) estimates of less specific distributions ẽ_{i-1}, using linear interpolation:

ẽ_i = λ ê_i + (1 - λ) ẽ_{i-1}

λ is a smoothing weight which depends on the particular distribution. When defining models, we will indicate a backoff level with a # sign between conditioning variables, e.g. A, B # C # D means that we interpolate P̂(...|A, B, C, D) with P̃(...|A, B, C), which is itself an interpolation of P̂(...|A, B, C) and P̃(...|A, B).
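The recursive interpolation can be sketched as below, assuming the empirical estimates are supplied from the most general to the most specific backoff level (variable names are ours):

```python
def backoff_estimate(estimates, lambdas):
    """Recursively interpolated estimate: e_i = lam * e_hat_i + (1 - lam) * e_{i-1}.

    estimates: empirical estimates from the most general conditioning
    context to the most specific; lambdas: one smoothing weight per
    level beyond the first (the most general estimate is used as-is).
    """
    est = estimates[0]
    for e_hat, lam in zip(estimates[1:], lambdas):
        est = lam * e_hat + (1 - lam) * est
    return est
```

With two levels and λ = 0.8, a specific estimate of 0.5 backed off to a general estimate of 0.1 gives 0.8 · 0.5 + 0.2 · 0.1 = 0.42, so sparse specific contexts are pulled toward the more reliable general distribution.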
Adding non-lexical information
The coordination feature We define a boolean feature, conj, which is true for constituents which expand to coordinations on the head path.
This feature is generated at the root of the sentence with P(conj | TOP). For binary expansions, conj_H is generated with P(conj_H | H, S, conj_P) and conj_S is generated with P(conj_S | S # P, exp_P, H, conj_P). Table 1 shows how conj is used as a conditioning variable. This is intended to allow the model to capture the fact that, for a sentence without extraction, a CCG derivation where the subject is type-raised and composed with the verb is much more likely in right node raising constructions like the above.
The impact of the grandparent feature Johnson (1998) showed that a PCFG estimated from a version of the Penn Treebank in which the label of a node's parent is attached to the node's own label yields a substantial improvement (LP/LR: from 73.5%/69.7% to 80.0%/79.2%). The inclusion of an additional grandparent feature gives Charniak (1999) a slight improvement in the Maximum Entropy inspired model, but a slight decrease in performance for an MLE model. Table 3 (Grandparent) shows that a grammar transformation like Johnson's does yield an improvement, but not as dramatic as in the Treebank-CFG case. At the same time coverage is reduced (which might not be the case if this was an additional feature in the model rather than a change in the representation of the categories). Both of these results are to be expected-CCG categories encode more contextual information than Treebank labels, in particular about parents and grandparents; therefore the history feature might be expected to have less impact. Moreover, since our category set is much larger, appending the parent node will lead to an even more fine-grained partitioning of the data, which then results in sparse data problems.
Distance measures for CCG Our distance measures are related to those proposed by Goodman (1997), which are appropriate for binary trees (unlike those of Collins (1997)). Every node has a left distance measure, ∆_L, measuring the distance from the head word to the left frontier of the constituent, and a similar right distance measure, ∆_R. We implemented three different ways of measuring distance: ∆Adjacency measures string adjacency (0, 1, or 2 and more intervening words); ∆Verb counts intervening verbs (0, or 1 and more); and ∆Pct counts intervening punctuation marks (0, 1, 2, or 3 and more). These ∆s are generated by the model in the following manner: at the root of the sentence, generate ∆_L with P(∆_L | TOP), and ∆_R with P(∆_R | TOP, ∆_L). Then, for each unary expansion, ∆_L,H = ∆_L,P and ∆_R,H = ∆_R,P with a probability of 1. For a binary expansion, only the ∆ in the direction of the sister changes, with a probability of P(∆_L,H | ∆_L,P, H # P, S) if exp = right, and analogously for exp = left. ∆_L,S and ∆_R,S are conditioned on S and on the ∆ of H and P in the direction of S. They are then used as further conditioning variables for the other distributions, as shown in table 1. Table 3 also gives the Parseval and dependency scores obtained with each of these measures. ∆Pct has the smallest effect; however, our model does not yet contain anything like the hard constraint on punctuation marks in Collins (1999). Gildea (2001) shows that removing the lexical dependencies in Model 1 of Collins (1997) (that is, not conditioning on w_h when generating w_s) decreases labeled precision and recall by only 0.5%. It can therefore be assumed that the main influence of the lexical head features (words and preterminals) in Collins' Model 1 is on the structural probabilities.
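The three bucketed measures can be written down directly (a sketch with hypothetical helper names; counting the intervening words, verbs, and punctuation marks from the surface string is assumed to happen elsewhere):

```python
def delta_adjacency(intervening_words):
    """String adjacency: 0, 1, or '2 and more' intervening words."""
    return min(intervening_words, 2)

def delta_verb(intervening_verbs):
    """Intervening verbs: 0 or '1 and more'."""
    return min(intervening_verbs, 1)

def delta_pct(intervening_puncts):
    """Intervening punctuation marks: 0, 1, 2, or '3 and more'."""
    return min(intervening_puncts, 3)
```

Each helper collapses a raw count into the small set of buckets the model conditions on, which keeps the distributions over ∆ values dense enough to estimate.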
Adding lexical information
In CCG, by contrast, preterminals are lexical categories, encoding complete subcategorization information. They therefore encode more information about the expansion of a nonterminal than Treebank POS-tags and thus are more constraining.
Generating a constituent's lexical category c at its maximal projection (i.e. either at the root of the tree, TOP, or when generating a non-head daughter S), and using the lexical category as a conditioning variable (LexCat), increases performance of the baseline model as measured by ⟨P, H, S⟩ by almost 3%. In this model, c_S, the lexical category of S, depends on the category S and on the local tree in which S is generated. However, slightly worse performance is obtained for LexCatDep, a model which is identical to the original LexCat model, except that c_S is also conditioned on c_H, the lexical category of the head node, which introduces a dependency between the lexical categories.
Since there is so much information in the lexical categories, one might expect that this would reduce the effect of conditioning the expansion of a constituent on its head word w. However, we did find a substantial effect. Generating the head word at the maximal projection (HeadWord) increases performance by a further 2%. Finally, conditioning w S on w H , hence including word-word dependencies, (HWDep) increases performance even more, by another 3.5%, or 8.3% overall. This is in stark contrast to Gildea's findings for Collins' Model 1.
We conjecture that the reason why CCG benefits more from word-word dependencies than Collins' Model 1 is that CCG allows a cleaner parametrization of these surface dependencies. In Collins' Model 1, w S is conditioned not only on the local tree P H S , c H and w H , but also on the distance ∆ between the head and the modifier to be generated. However, Model 1 does not incorporate the notion of subcategorization frames. Instead, the distance measure was found to yield a good, if imperfect, approximation to subcategorization information.
Using our notation, Collins' Model 1 generates w_S with the probability P_Collins(w_S | c_S, ⟨P, H, S⟩, ∆, c_H, w_H), whereas the CCG dependency model generates w_S as follows:

P_CCGdep(w_S | c_S, ⟨P, H, S⟩, c_H, w_H) = λ P̂(w_S | c_S, ⟨P, H, S⟩, c_H, w_H) + (1 − λ) P̂(w_S | c_S)
Since our P, H, S and c_H are CCG categories, and hence encode subcategorization information, the local tree always identifies a specific argument slot. Therefore it is not necessary for us to include a distance measure in the dependency probabilities.

[Table 1: the generation probabilities (for exp, H, S, TOP and c_S, conditioned on w_P and c_P) of the HWDep, HWDep∆ and HWDepConj models; the typeset table could not be recovered.]

The ⟨P, H, S⟩ labeled dependencies we report are not directly comparable with Collins (1999), since CCG categories encode subcategorization frames. For instance, if the direct object of a verb has been recognized as such, but a PP has been mistaken as a complement (whereas the gold standard says it is an adjunct), the fully labeled dependency evaluation ⟨P, H, S⟩ will not award a point. Therefore, we also include in Table 3 a more comparable evaluation ⟨S⟩, which only takes the correctness of the non-head category into account. The reported figures are also deflated by retaining verb features like tensed/untensed. If all verb features are stripped off, an improvement of 0.6% on the ⟨P, H, S⟩ score for our best model is obtained.
Combining lexical and non-lexical information
When incorporating the adjacency distance measure or the coordination feature into the dependency model (HWDep∆ and HWDepConj), overall performance is lower than with the dependency model alone. We conjecture that this arises from data sparseness. It cannot be concluded from these results alone that the lexical dependencies make structural information redundant or superfluous. Instead, it is quite likely that we are facing an estimation problem similar to Charniak (1999), who reports that the inclusion of the grandparent feature worsens performance of an MLE model, but improves performance if the individual distributions are modelled using Maximum Entropy. This intuition is strengthened by the fact that, on casual inspection of the scores for individual sentences, it is sometimes the case that the lexicalized models perform worse than the unlexicalized models.
The impact of tagging errors
All of the experiments described above use the POStags as given by CCGbank (which are the Treebank tags, with some corrections necessary to acquire correct features on categories). It is reasonable to assume that this input is of higher quality than can be produced by a POS-tagger. We therefore ran the dependency model on a test corpus tagged with the POS-tagger of Ratnaparkhi (1996), which is trained on the original Penn Treebank (see HWDep (+ tagger) in Table 3). Performance degrades slightly, which is to be expected, since our approach makes so much use of the POS-tag information for unknown words. However, a POS-tagger trained on CCGbank might yield slightly better results.
Limitations of the current model
Unlike Clark et al. (2002), our parser does not always model the dependencies in the logical form. For example, in the interpretation of a coordinate structure like "buy and sell shares", shares will head an object of both buy and sell. Similarly, in examples like "buy the company that wins", the relative construction makes company depend upon both buy as object and wins as subject. As is well known (Abney, 1997), DAG-like dependencies cannot in general be modeled with a generative approach of the kind taken here.³
Comparison with Clark et al. (2002)
Clark et al. (2002) presents another statistical CCG parser, which is based on a conditional (rather than generative) model of the derived dependency structure, including non-surface dependencies. The following table compares the two parsers according to the evaluation of surface and deep dependencies given in Clark et al. (2002). We use Clark et al.'s parser to generate these dependencies from the output of our parser (see footnote 4).
Performance on specific constructions
One of the advantages of CCG is that it provides a simple, surface grammatical analysis of extraction and coordination. We investigate whether our best model, HWDep, predicts the correct analyses, using the development section 00.

³ It remains to be seen whether the more restricted reentrancies of CCG will ultimately support a generative model.
⁴ Due to the smaller grammar and lexicon of Clark et al., our parser can only be evaluated on slightly over 94% of the sentences in section 23, whereas the figures for Clark et al. (2002) are on 97%.
Coordination There are two instances of argument cluster coordination (constructions like cost $5,000 in July and $6,000 in August) in the development corpus. Of these, HWDep recovers none correctly. This is a shortcoming in the model, rather than in CCG: the relatively high probability both of the NP modifier analysis of PPs like in July and of NP coordination is enough to misdirect the parser.
There are 203 instances of verb phrase coordination (S\NP, with any verbal feature) in the development corpus. On these, we obtain a labeled recall and precision of 67.0%/67.3%. Interestingly, on the 24 instances of right node raising (coordination of (S\NP)/NP), our parser achieves higher performance, with labeled recall and precision of 79.2% and 73.1%. Figure 2 gives an example of the output of our parser on such a sentence.
Extraction Long-range dependencies are not captured by the evaluation used here. However, the accuracy for recovering lexical categories for words with "extraction" categories, such as relative pronouns, gives some indication of how well the model detects the presence of such dependencies. [...] times by the parser, out of which 48 times it corresponded to a rule in the gold standard (or 34 times, if the exact bracketing of the S[dcl]\NP is taken into account; this lower figure is due to attachment decisions made elsewhere in the tree).
These figures are difficult to compare with standard Treebank parsers. Despite the fact that the original Treebank does contain traces for movement, none of the existing parsers try to generate these traces (with the exception of Collins' Model 3, for which he only gives an overall score of 96.3%/98.8% P/R for subject extraction and 81.4%/59.4% P/R for other cases). The only "long range" dependency for which Collins gives numbers is subject extraction ⟨SBAR, WHNP, SG, R⟩, which has labeled precision and recall of 90.56% and 90.56%, whereas the CCG model achieves a labeled precision and recall of 94.3% and 96.5% on the most frequent subject extraction dependency ⟨NP\NP, (NP\NP)/(S[dcl]\NP), S[dcl]\NP⟩, which occurs 262 times in the gold standard and was produced 256 times by our parser. However, out of the 15 cases of this relation in the gold standard that our parser did not return, 8 were in fact analyzed as subject extraction of bare infinitivals ⟨NP\NP, (NP\NP)/(S[b]\NP), S[b]\NP⟩, yielding a combined recall of 97.3%.
Lexical coverage
The most serious problem facing parsers like the present one with large category sets is not so much the standard problem of unseen words, but rather the problem of words that have been seen, but not with the necessary category.
For standard Treebank parsers, the latter problem does not have much impact, if any, since the Penn Treebank tagset is fairly small, and the grammar underlying the Treebank is very permissive. However, for CCG this is a serious problem: the first three rows in table 4 show a significant difference in performance for sentences with complete lexical coverage ("No missing") and sentences with missing lexical entries ("Missing").
Using the POS-tags in the corpus, we can estimate the lexical probabilities P(w | c) using a linear interpolation between the relative frequency estimates P̂(w | c) and the following approximation:⁵

P_tags(w | c) = Σ_{t ∈ tags} P̂(w | t) P̂(t | c)

We smooth the lexical probabilities as follows:

P̃(w | c) = λ P̂(w | c) + (1 − λ) P_tags(w | c)

Table 4 shows the performance of the baseline model with a frequency cutoff of 5 and 10 for rare words and with a smoothed and non-smoothed lexicon.⁶ This frequency cutoff plays an important role here: smoothing with a small cutoff yields worse performance than not smoothing, whereas smoothing with a cutoff of 10 does not have a significant impact on performance. Smoothing the lexicon in this way does make the parser more robust, resulting in complete coverage of the test set. However, it does not affect overall performance, nor does it alleviate the problem for sentences with missing lexical entries for seen words.
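The two formulas above combine into a few lines of code (a sketch using hypothetical dictionary-based estimates, not the parser's actual data structures):

```python
def smoothed_lexical_prob(w, c, p_word_cat, p_word_tag, p_tag_cat, lam):
    """P~(w|c) = lam * P^(w|c) + (1 - lam) * sum_t P^(w|t) * P^(t|c).

    p_word_cat: dict cat -> {word: relative frequency P^(w|c)}
    p_word_tag: dict tag -> {word: P^(w|t)}
    p_tag_cat:  dict cat -> {tag: P^(t|c)}
    """
    # P_tags(w|c): marginalize over the POS-tags seen with category c
    p_tags = sum(p_word_tag.get(t, {}).get(w, 0.0) * p_tc
                 for t, p_tc in p_tag_cat.get(c, {}).items())
    return lam * p_word_cat.get(c, {}).get(w, 0.0) + (1 - lam) * p_tags
```

Unseen (word, tag) pairs simply contribute zero to the sum, so the estimate degrades gracefully for words never seen with a given category.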
Conclusion and future work
We have compared a number of generative probability models of CCG derivations, and shown that our best model recovers 89.9% of word-word dependencies on section 23 of CCGbank. On section 00, it recovers 89.7% of word-word dependencies. These figures are surprisingly close to the figure of 90.9% reported by Collins (1999) on section 00, given that, in order to allow a direct comparison, we have used the same interpolation technique and beam strategy as Collins (1999), which are very unlikely to be as well-tuned to our kind of grammar. As is to be expected, a statistical model of a CCG extracted from the Treebank is less robust than a model with an overly permissive grammar such as Collins (1999). This problem seems to stem mainly from the incomplete coverage of the lexicon. We have shown that smoothing can compensate for entirely unknown words. However, this approach does not help on sentences which require previously unseen entries for known words. We would expect a less naive approach such as applying morphological rules to the observed entries, together with better smoothing techniques, to yield better results.
We have also shown that a statistical model of CCG benefits from word-word dependencies to a much greater extent than a less linguistically motivated model such as Collins' Model 1. This indicates to us that, although the task faced by a CCG parser might seem harder prima facie, there are advantages to using a more linguistically adequate grammar. | 2014-07-01T00:00:00.000Z | 2002-07-06T00:00:00.000 | {
"year": 2002,
"sha1": "976c95f69e8ee160868b1d54d477f56212ee794b",
"oa_license": null,
"oa_url": "http://dl.acm.org/ft_gateway.cfm?id=1073139&type=pdf",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "f10472e2b2d35256dfbd20153f8fe301b857d4be",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
4350481 | pes2o/s2orc | v3-fos-license | Antimicrobial Activity of Methanol Extract from Ficus carica Leaves Against Oral Bacteria
Ficus carica L. (fig) belongs to the mulberry family (Moraceae) and is one of the oldest fruits in the world. It has been used as a digestion promoter and a cure for ulcerative inflammation and eruption in Korea. The present study investigated the antimicrobial activity of a methanol (MeOH) extract of figs against oral bacteria. The MeOH extract (MICs, 0.156 to 5 mg/ml; MBCs, 0.313 to 5 mg/ml) showed strong antibacterial activity against oral bacteria. The combination effects of the MeOH extract with ampicillin or gentamicin were synergistic against oral bacteria. We suggest that figs could be employed as a natural antibacterial agent in oral care products.
Researchers have reported the hypoglycemic action of a fig leaf decoction in type-I diabetic patients and used a chloroform extract, also obtained from a decoction of F. carica leaves, to decrease the cholesterol levels of rats with diabetes (8). F. carica has been reported to have antioxidant, antiviral, antibacterial, hypoglycemic, hypocholesterolaemic, cancer-suppressive, hypotriglyceridaemic, and anthelmintic effects (2~4, 9~11). It has also been investigated for its proteolytic enzymes, amino acids, minerals, sugars, triterpenes, organic acids, and allergens (1,6).
This study aimed to determine the antimicrobial activity of the F. carica (fig) MeOH extract against oral bacteria.
Checkerboard dilution test
The synergistic effects of the MeOH extract, which exhibited the highest antimicrobial activity, in combination with antibiotics were assessed by the checkerboard test as previously described (12). The antimicrobial combinations assayed were the MeOH extract plus ampicillin or gentamicin.
The fractional inhibitory concentration index (FICI) is the sum of the FICs of each of the drugs, where the FIC of a drug is defined as its MIC when used in combination divided by its MIC when used alone. The interaction was defined as synergistic if the FICI was less than or equal to 0.5, additive if the FICI was greater than 0.5 and less than or equal to 1.0, indifferent if the FICI was greater than 1.0 and less than or equal to 2.0, and antagonistic if the FICI was greater than 2.0 (12).
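The FICI computation and classification thresholds described above are simple to state in code (a sketch with hypothetical function names and illustrative MIC values; the thresholds are those of reference (12)):

```python
def fici(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """FICI = FIC_A + FIC_B, where FIC = MIC(in combination) / MIC(alone)."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def classify_interaction(index):
    """Interaction categories as defined in reference (12)."""
    if index <= 0.5:
        return "synergistic"
    if index <= 1.0:
        return "additive"
    if index <= 2.0:
        return "indifferent"
    return "antagonistic"
```

For example, if an extract's MIC drops from 5 to 0.625 mg/ml and the antibiotic's MIC from 0.5 to 0.125 μg/ml in combination, the FICI is 0.125 + 0.25 = 0.375, a synergistic interaction.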
RESULTS
The results of the antibacterial activity assays showed that the MeOH extract of F. carica leaves exhibited strong activity against S. gordonii, S. anginosus, P. intermedia, A. actinomycetemcomitans, and P. gingivalis (MIC, 0.156 to 0.625 mg/ml; MBC, 0.313 to 0.625 mg/ml) and moderate antibacterial activity against the other bacteria (MIC, 1.25 mg/ml; MBC, 1.25 to 2.5 mg/ml), while E. coli, S. aureus, S. sanguinis, and S. criceti appeared to be less sensitive (MIC, 2.5 to 10 mg/ml; MBC, 2.5 to 10 mg/ml). The MIC and MBC for ampicillin were found to be either 0.5/0.5 or 256/256 μg/ml, and for gentamicin either 2/2 or 256/512 μg/ml. The combination effects of the MeOH extract with ampicillin or gentamicin against oral bacteria and a few reference strains are presented in Tables 1 and 2. In combination with the MeOH extract, the MIC for ampicillin was reduced ≥4-8-fold, and that of the MeOH extract ≥2-8-fold, in most of the tested bacteria, producing a synergistic effect as defined by FICI ≤ 0.375~0.5. The combination of the MeOH extract with ampicillin produced an additive effect, with a reduction of a single or double dilution, in E. coli, S. ratti, and F. nucleatum, as defined by FICI ≤ 0.75 (Table 1).
The combination of the MeOH extract with gentamicin resulted in a decrease in MIC for all tested bacteria (≥2-8-fold), with the MIC of 2~256 μg/ml for gentamicin becoming 0.5~32 μg/ml and the MIC of 0.156~10 mg/ml for the MeOH extract becoming 0.039~2.5 mg/ml. The FICI classified the combination of the MeOH extract with gentamicin as synergistic (FICI ≤ 0.375~0.5) for all tested bacteria except S. pyogenes, S. sanguinis, S. criceti, P. intermedia, and F. nucleatum, against which the effect was additive (Table 2). (In Tables 1 and 2, concentrations of the MeOH extract are given in mg/ml and those of the antibiotics in μg/ml.)
Ficus: The fig, native to the arid region of Asia Minor, forms a shrub or low-spreading deciduous tree. The large, wavy-margined leaves are usually 5-lobed but may have only 4 or 3 lobes.
F. carica leaves were collected in September 2005 from the Samho farm of Yeongam-gun in Korea. The identity was confirmed by Dr. Bong-Seop Kil, College of Natural Science, Wonkwang University. The voucher specimens (DJ-05-F1) were deposited at the Herbarium of the College of Natural Science, Wonkwang University. The dried and powdered leaves (1.2 kg) of F. carica were extracted by repeated refluxing with methanol (MeOH) (2 × 6 L) for 4 h at 80℃. The combined MeOH extract (12 L) was clarified by filtration and evaporated to obtain a dark green syrup (210 g).

Minimum inhibitory concentration/minimum bactericidal concentration assay

The antimicrobial activity of the MeOH extract of F. carica leaves against oral bacteria: Streptococcus mutans (ATCC 25175), Streptococcus sanguinis (ATCC 10556), Streptococcus sobrinus (ATCC 27607), Streptococcus ratti (KCTC 3294), Streptococcus criceti (KCTC 3292), Streptococcus anginosus (ATCC 31412), Streptococcus gordonii (ATCC 10558), Aggregatibacter actinomycetemcomitans (ATCC 43717), Fusobacterium nucleatum (ATCC 51190), Prevotella intermedia (ATCC 49046), and Porphyromonas gingivalis (ATCC 33277) was determined through the broth dilution method carried out in triplicate. The reference strains used in this study were Escherichia coli ATCC 25922, Staphylococcus aureus ATCC 29213, Staphylococcus epidermidis ATCC 12228, and Streptococcus pyogenes ATCC 21059. The minimum inhibitory concentration (MIC) was determined as the lowest concentration of the test samples that resulted in a complete inhibition of visible growth in the broth. Following anaerobic incubation of the MIC plates, the minimum bactericidal concentration (MBC) was determined as the lowest concentration of the MeOH extract that killed 99.9% of the test bacteria, assessed by plating out onto an appropriate agar plate.
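Reading off the MIC from a dilution series can be sketched as follows (a hypothetical helper, not the authors' procedure; it assumes a monotone dose-response across the dilution series):

```python
def minimum_inhibitory_concentration(growth_by_conc):
    """Lowest tested concentration with no visible growth in the broth.

    growth_by_conc: dict mapping concentration (e.g. mg/ml) -> bool,
                    True if visible growth was observed.
    Returns None if growth occurred at every tested concentration.
    """
    inhibited = sorted(c for c, grew in growth_by_conc.items() if not grew)
    return inhibited[0] if inhibited else None
```

The MBC is read analogously, from the lowest concentration whose plated-out sample shows a 99.9% kill.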
Table 1. Checkerboard assay of the MeOH extract of F. carica leaves and ampicillin for some oral bacteria with a few reference strains
Table 2. Checkerboard assay of the MeOH extract of F. carica leaves and gentamicin for some oral bacteria with a few reference strains
"year": 2009,
"sha1": "2ee0f6b65752bb8dd762937ef878b2f8eebf8be5",
"oa_license": "CCBYNC",
"oa_url": "https://synapse.koreamed.org/upload/SynapseData/PDFData/0079jbv/jbv-39-97.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "2ee0f6b65752bb8dd762937ef878b2f8eebf8be5",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
235652679 | pes2o/s2orc | v3-fos-license | Surface Properties and Morphology of Boron Carbide Nanopowders Obtained by Lyophilization of Saccharide Precursors
The powders of boron carbide are usually synthesized by the carbothermal reduction of boron oxide. As an alternative to high-temperature reactions, the development of the carbothermal reduction of organic precursors to produce B4C is receiving considerable interest. The aim of this work was to compare two methods of preparing different saccharide precursors mixed with boric acid with a molar ratio of boron to carbon of 1:9 for the synthesis of B4C. In the first method, aqueous solutions of saccharides and boric acid were dried overnight at 90 °C and pyrolyzed at 850 °C for 1 h under argon flow. In the second method, aqueous solutions of different saccharides and boric acid were freeze-dried and prepared in the same way as in the first method. Precursors from both methods were heat-treated at temperatures of 1300 to 1700 °C. The amount of boron carbide in the powders depends on the saccharides, the temperature of synthesis, and the method of precursor preparation.
Introduction
Boron carbide (B4C), due to its specific properties (high hardness, low density, high melting point, high elastic modulus, etc.) [1,2], has been widely used in many applications, such as an abrasive in polishing, in ball mills, as a neutron absorber and neutron shield, and in boron neutron capture therapy (BNCT). In BNCT research, nuclear reactors or accelerators generate thermal neutrons, which can be captured by a variety of nuclei, but the probability of capture by the boron isotope 10B is much higher than for most other nuclides. The study of synthesized boron carbide began when Henri Moissan obtained boron carbide from the reduction of diboron trioxide (B2O3) by magnesium (Mg) in the presence of carbon (C) [1]. Since 1899, boron carbide has been synthesized using various methods. The synthesis method determines the properties, morphology, and purity of the obtained B4C [3,4]. The methods of boron carbide synthesis can be classified as carbothermic reduction [3][4][5][6][7], magnesiothermic reduction [8], synthesis from the elements [9], vapor phase reaction [10], synthesis from polymer precursors [11], liquid phase reaction [12], ion beam synthesis [13], and vapor-liquid-solid (VLS) growth [14]. Carbothermal reduction is commonly used in industry to obtain B4C, with boric acid (H3BO3) or, less commonly, boron oxide (B2O3) as the source of boron and fine crystalline graphite or petroleum coke as the source of carbon. The reaction is carried out at a temperature between 1700 and 2000 °C under a protective atmosphere (Ar) [15]. Powders obtained from carbothermal reduction are strongly agglomerated and aggregated, so they require intensive crushing and grinding before they are suitable for further use.
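For reference, the overall reaction usually written for the carbothermal reduction of boron oxide (standard textbook stoichiometry, not stated explicitly in this text) is:

```latex
2\,\mathrm{B_2O_3} + 7\,\mathrm{C} \longrightarrow \mathrm{B_4C} + 6\,\mathrm{CO}
```

Six moles of CO are evolved per mole of B4C formed, so a large part of the precursor carbon is consumed as gas during the reduction.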
The main problem is to reduce the cost of synthesis; many researchers have attempted to lower the synthesis temperature by using various organic carbon precursors, such as phenolic resin, citric acid, polyvinyl alcohol (PVA), and carbohydrates (cellulose, glucose, and sucrose), with boric acid (H3BO3) [16]. When boron carbide is synthesized from an organic precursor, the main difficulty is the removal of water from the precursor, which prevents the aggregation of particles in the precursor and thereby significantly affects the size and morphology of the boron carbide after synthesis.
Freeze-drying, also known as lyophilization or cryodesiccation, is a method of removing water by the sublimation of ice crystals from frozen material. Lyophilization is an effective method of drying materials without damaging them and is commonly used in the food and pharmaceutical industries [16][17][18]. Freeze-drying is also an effective way of removing water from the precursor obtained from the mixed solution of boric acid and saccharides.
Previous work by our research group [16] focused only on boron carbide synthesized by carbothermal reduction from different types of saccharide precursors as the carbon source, and studied the influence of the saccharide precursor, with a molar ratio of carbon to boron of 9:1, on the morphology of the obtained boron carbide (B4C) powders. In the present work, we aimed to verify the idea that freeze-drying the mixed precursors (boric acid and different saccharides) has an important influence on the morphology and size of the boron carbide obtained after synthesis. The second idea presented in this article concerns how the type of precursor used and the molar ratio of boron to carbon in the precursor influence the morphology and aggregation of the powders.
Materials and Methods
Powders of boron carbide were obtained from boric acid (H3BO3) and the saccharides glucose, fructose, dextrin, and hydroxyethyl starch (HES), all obtained from Sigma Aldrich (St. Louis, MO, USA) (99% pure). In the first method, boric acid (H3BO3) and the mono- or polysaccharides were dissolved in distilled water in a molar ratio providing a boron to carbon ratio in the final powder of 1:9. After mixing the precursors in distilled water, the solutions were prepared as described in our earlier article [16] and then dried overnight at 90 °C in a vacuum oven to obtain them in the solid state. In the second method, the powders of boron carbide were prepared using a similar procedure; the only difference was that the solutions of boric acid mixed with the different saccharides were freeze-dried to remove water from the precursor powders. All precursors obtained from both methods were pyrolyzed at 850 °C for 1 h under argon flow. The pyrolyzed powders were placed in graphite crucibles and heat-treated at temperatures ranging from 1300 to 1700 °C for 1 h under argon flow. Changes in the saccharide precursors were identified by infrared spectroscopy analysis.
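The nominal mixing ratio implied by the stated 1:9 boron-to-carbon target can be checked with simple atom bookkeeping (a sketch with a hypothetical helper; it ignores the carbon lost as CO/CO2 during pyrolysis and reduction, which the actual recipe would have to account for):

```python
def moles_boric_acid_per_mole_saccharide(carbons_per_molecule, b=1, c=9):
    """Moles of H3BO3 (one B atom each) per mole of saccharide that give a
    nominal atomic B:C ratio of b:c in the dissolved precursor mixture."""
    return carbons_per_molecule * b / c
```

For glucose (C6H12O6), the target ratio works out to 6/9 = 2/3 mol of H3BO3 per mole of glucose.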
The FTIR measurements of both the precursors and the boron carbide materials were carried out using a Bruker Vertex 70v spectrometer (Billerica, MA, USA). The standard KBr pellet method was employed, and 128 scans in the range of 4000 to 400 cm−1 were accumulated with a resolution of 4 cm−1. Raman spectroscopy measurements were recorded to determine the presence of carbon and the carbon hybridization in the precursor. A WITec Alpha 300 M+ spectrometer (Wissenschaftliche Instrumente und Technologie GmbH, Ulm, Germany) equipped with a 488 nm diode laser, a 600 grating, and a Zeiss 100× objective lens (Carl Zeiss AG, Oberkochen, Germany) was used. This combination provides a laser spot of 661 nm in diameter. The power of the excitation source was set so as to prevent sample degradation. Each sample was measured in five spots with an accumulation of 2 scans of 120 s each in each spot, and the obtained sets of spectra were then averaged. To analyze the local structure in the near-surface region of the obtained precursors, both pyrolyzed and synthesized at 1700 °C, X-ray absorption spectroscopy (XAS) in total electron yield (TEY) detection mode was used. The measurements were performed using the PEEM/XAS beamline of the Solaris National Synchrotron Radiation Centre, Cracow, Poland [19]. The PEEM/XAS beamline is a bending-magnet beamline covering the soft X-ray energy range (150 to 2000 eV) and equipped with a plane grating monochromator with resolving power E/∆E > 4000. Samples in the form of powder on carbon tape were measured at the boron K edge at room temperature. The carbon tape contribution was subtracted from the collected spectra, which were normalized to a unit step by subtracting a constant value fitted in the pre-edge region and then dividing by a constant value fitted in the post-edge region. The energy scale was calibrated so that the positions of the characteristic spectral features of reference B4C were at the same energy values as in [20].
The onset of absorption is at 189 eV. In the spectra obtained at the boron edge, we can distinguish four bands arising from π* excitation states: (A) 190.9 eV, (B) 191.7 eV, (C) 192.3 eV, and (D) 193.7 eV, and three bands arising from σ* excitation states: (E) 196 eV, (F) 200 eV, and (G) 204 eV. The obtained spectra were compared with spectra reported for boron carbide, amorphous boron, hexagonal boron nitride (h-BN), and cubic boron nitride (c-BN), and for boron carbide heated under vacuum to 1000, 1400, 1700, and 1900 K [20][21][22].
The phase composition and crystallite size of the final powders were determined by an X-ray diffraction (XRD) analysis with an Empyrean diffractometer (Malvern Panalytical, Malvern, UK) using Cu-Kα1 radiation. HighScore Plus software (Version 4, Malvern Panalytical, Malvern, UK) was used to analyze the data. The Scherrer formula was used to calculate the crystallite size, and Rietveld analysis was used to quantitatively analyze the phase composition of the obtained powders. The differences between boron carbide obtained from recrystallized and freeze-dried saccharide precursors were observed with FEI NanoSEM 200 (FEG) (FEI Company, Hillsboro, OR, USA) and SU-70 (Hitachi) scanning electron microscopes. The measurements were conducted under high vacuum at an accelerating voltage of 10-15 kV with a backscattered electron (BSE) detector. The samples were coated with a carbon layer. High-resolution, solid-state 11 B MAS-NMR spectra were measured using an APOLLO console (Tecmag Inc., Houston, TX, USA) and a 7 T, 89 mm superconducting magnet (Magnex). A Bruker HP-WB high-speed MAS probe (Billerica, MA, USA) equipped with a 4 mm zirconia rotor and a KEL-F cap was used to spin the sample at 10 kHz. The resonance frequency was 96.11 MHz, and a single 2 µs RF pulse, corresponding to a π/4 flipping angle in the liquid, was applied. The acquisition delay in accumulation was 1 s, and 256 scans were acquired. The parts per million frequency scale was referenced to the 11 B resonance of 1 mol H 3 BO 3 .
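The Scherrer estimate mentioned above can be sketched as follows. The peak position used in the test is an illustrative assumption, not a value reported in this work; the shape factor K = 0.9 is a common default.

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size from the Scherrer formula, D = K*lambda / (beta*cos(theta)).

    fwhm_deg      : peak width (FWHM) in degrees 2-theta
    two_theta_deg : peak position in degrees 2-theta
    wavelength_nm : Cu K-alpha1 by default
    k             : shape factor (0.9 is a common assumption)
    """
    beta = math.radians(fwhm_deg)               # FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))
```

Narrower reflections therefore translate directly into larger crystallite sizes, which is why the narrow reflections of the recrystallized powders discussed later correspond to larger crystallites.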
Results and Discussion
FT-IR measurements in the middle-infrared range (MIR) were recorded to analyze the substrates (saccharides and boric acid) and to determine the influence of the precursor preparation process. Figure 1 presents the MIR spectra of raw boric acid (H 3 BO 3 ) as well as the recrystallized and freeze-dried materials. The spectra of raw and freeze-dried H 3 BO 3 are similar, with changes occurring mainly in the O-H stretching and bending ranges (approximately 3200 and 1640 cm −1 , respectively). Changes in these spectral regions suggest that molecular water was absorbed during the freeze-drying process. The spectrum of boric acid after recrystallization indicates that structural changes occurred, observable both as a narrowing of existing bands and as the appearance of new ones. One such change is the appearance of an additional band at approximately 3284 cm −1 , attributable to vibrations of O-H groups in the B-O-H system, as well as a more distinct band characteristic of molecular water vibrations at approximately 3388 cm −1 . In addition, two bands at approximately 1158 and 1198 cm −1 , characteristic of B-O-H bending vibrations, arose from the splitting of the band at approximately 1190 cm −1 . The appearance of bands at around 1248 and 1104 cm −1 , the splitting of the 1500-1300 cm −1 range into smaller component bands, and the bands appearing below 500 cm −1 clearly indicate that the material was oxidized after recrystallization and B 2 O 3 was formed. A similar situation occurred with the other materials: glucose, fructose, sucrose, and HES. Both recrystallization and freeze-drying lead to structural changes, such as amorphization or the appearance of adsorbed water [17]. When the temperature increases, boric acid dehydrates, which is associated with a weight change as water is released (Equations (1)-(3)). The same situation was observed when the saccharide precursors were mixed with boric acid and lyophilized. The effect of lyophilization and recrystallization on the precursors used was significant, showing differences between spectra obtained from the same precursor prepared by the different methods.
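Equations (1)-(3) are not reproduced in this excerpt. For reference, the stepwise dehydration of boric acid commonly cited in the literature can be written as follows; the approximate temperatures are given as an assumption from general knowledge, not from the source:

```latex
\begin{align}
\mathrm{H_3BO_3} &\xrightarrow{\;\approx 170\,^{\circ}\mathrm{C}\;} \mathrm{HBO_2} + \mathrm{H_2O} \tag{1}\\
4\,\mathrm{HBO_2} &\xrightarrow{\;\approx 300\,^{\circ}\mathrm{C}\;} \mathrm{H_2B_4O_7} + \mathrm{H_2O} \tag{2}\\
\mathrm{H_2B_4O_7} &\xrightarrow{\;> 450\,^{\circ}\mathrm{C}\;} 2\,\mathrm{B_2O_3} + \mathrm{H_2O} \tag{3}
\end{align}
```

Each step is mass-balanced (e.g., in step (2), 4 H, 4 B, and 8 O appear on both sides), and the cumulative water loss accounts for the weight change mentioned in the text.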
Figure 2a,b shows the MIR spectra of the HES precursor mixed with H 3 BO 3 at different molar ratios and dried at 90 °C or freeze-dried. The findings agree with previous results [16] and indicate that the components react with each other, which is manifested by the appearance of a new band at 1020 cm −1 . This band is described in the literature as corresponding to B-C bond vibrations, a hypothesis confirmed by previous spectral analyses of the position of the band corresponding to B-C vibrations in covalent organic frameworks [22] and amorphous boron carbide [16,19]. The increase in boron content in the precursor sample caused a significant increase in the intensity of bands characteristic of B-O and B-OH bond vibrations. Figure 2a,b shows that typical bands for B-O bonds occurred at 1458 and 1090 cm −1 . This was the expected and desired effect: the B-C bond forms already at the precursor preparation stage and thus positively affects the final product.
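For context on the carbon-to-boron ratios used in the mixtures, the overall carbothermic reduction is commonly written as 2 B 2 O 3 + 7 C → B 4 C + 6 CO. The sketch below is an illustrative back-of-the-envelope calculation (not the authors' procedure): it converts a nominal C:B molar ratio into a carbon excess factor relative to this stoichiometry, assuming all boron originates from boric acid and neglecting carbon lost as volatiles during pyrolysis.

```python
def carbon_excess(c_to_b_molar):
    """Carbon excess factor relative to 2 B2O3 + 7 C -> B4C + 6 CO.

    The stoichiometric requirement is 7 C per 4 B, i.e. C:B = 1.75.
    Returns how many times the stoichiometric carbon is present.
    """
    stoichiometric_c_per_b = 7.0 / 4.0
    return c_to_b_molar / stoichiometric_c_per_b
```

Under these assumptions, a 9:1 C:B mixture carries roughly five times the stoichiometric carbon, consistent with the excess-carbon removal problem discussed later in the text.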
The structure of the precursor pyrolyzed at 850 °C was determined using three methods: Raman spectroscopy, nuclear magnetic resonance spectroscopy (NMR), and X-ray absorption spectroscopy (XAS). In all cases, only two characteristic bands were visible in the Raman spectra, both attributed to the carbon structure: the G-band at approximately 1600 cm −1 and the D-band at around 1350 cm −1 . The G-band corresponds to in-plane stretching vibrations of the sp 2 carbon structure, and the D-band is characteristic of the symmetric breathing vibration of the hexagonal carbon ring [23]. To compare the differences between the samples according to the Ferrari and Robertson diagram [21-23], the ratio of the absolute intensity of the D-band to that of the G-band (I D /I G ) was calculated. To determine the carbon phase structure, the obtained Raman spectra were subjected to a deconvolution process using Bruker OPUS software and the Levenberg-Marquardt algorithm. The results of this process are presented in Figure 3 and Table 1. These measurements revealed that in the case of pure saccharides, regardless of their type, the I D /I G ratio was lower than one, and the G-band position fell at approximately 1600 cm −1 . According to the Ferrari scheme, every analyzed result indicates that carbon occurs in sp 2 hybridization and forms local regions with a graphite-like structure [16,24-27]. In each case, the presence of boric acid in the pyrolyzed mixture caused a significant increase in the I D /I G value.
This effect indicates that the graphite-like structure formed during the heat treatment of the precursors is more defective than that formed from pure saccharide, in agreement with our previous paper [16]. This is due to boron carbide being formed and, more specifically, to the B-C/B-O-C bonds [16]. The I D /I G ratios of the recrystallized and freeze-dried samples are similar, which was expected and indicates that both methods produce almost identical carbon-containing materials from a spectroscopic point of view. In order to determine the boron coordination, 11 B MAS NMR measurements were performed. All the obtained spectra were deconvoluted; an example for recrystallized glucose is presented in Figure 4, and the obtained data are presented in Table 2. The results show that in all of the samples, boron had a tetragonal coordination, indicating that the B 4 C structure was obtained. A typical FWHH (full width at half height) of the NMR line was about 16 ppm, and the line position varied by 4 ppm. These small variations were within the experimental uncertainties and the accuracy of the numerical fitting procedure. It can therefore be concluded that the chemical environment and symmetry of the site occupied by the boron atom were essentially not affected by the precursor type and the preparation method. Figure 5 shows that the boron edge spectra obtained at the PEEM/XAS beamline for the pyrolyzed precursors prepared by both methods (recrystallized or freeze-dried) have the same shape, except for the two spectra obtained for the powders where the precursor saccharide was HES. Despite these differences, all the precursors exhibited a D-band located around 193.7 eV, which indicates the presence of boron oxide in the precursors. The use of saccharides for the carbothermic reduction of boron carbide probably increased the melting point of boric acid and influenced the formation of new bonds between the saccharide and boric acid, which affected the synthesis of boron carbide.
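The two-band deconvolution and I D /I G calculation described above can be sketched as follows. The Lorentzian line shape is an assumption, as the source does not state the profile used in OPUS; `curve_fit` defaults to the Levenberg-Marquardt algorithm for unbounded problems, matching the algorithm named in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, center, width):
    """Lorentzian profile; amp is the peak (absolute) intensity."""
    return amp * width**2 / ((x - center)**2 + width**2)

def two_band_model(x, a_d, c_d, w_d, a_g, c_g, w_g):
    """Sum of a D-band and a G-band Lorentzian."""
    return lorentzian(x, a_d, c_d, w_d) + lorentzian(x, a_g, c_g, w_g)

def id_ig_ratio(shift, intensity):
    """Deconvolute the spectrum into D and G bands and return I_D/I_G.

    The ratio is taken between the fitted absolute (peak) intensities,
    matching the definition used in the text.
    """
    p0 = [intensity.max(), 1350.0, 50.0, intensity.max(), 1600.0, 50.0]
    popt, _ = curve_fit(two_band_model, shift, intensity, p0=p0)
    return popt[0] / popt[3]
```

A ratio below one (more intense G-band) then points toward the graphite-like sp 2 regions described above, while an increased ratio indicates a more defective structure.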
Figure 6 presents the boron K edge XAS spectra for powders of boron carbide obtained from the recrystallized and freeze-dried precursors after pyrolysis at 850 °C and heat treatment at 1700 °C, as well as two reference spectra: commercial B 4 C and amorphous boron. By comparison with Figure 4 in reference [22], we can see the spectrum of pure B 2 O 3 , with its characteristic peak at 193.7 eV, in all samples. Its position is slightly different from that stated in the cited article (194 eV) due to the difference in energy calibration between our experiments (based on reference [20]) and the data shown in reference [22]. The spectra of all samples have a similar shape (glu1700 and fru1700 differ above 197 eV), displaying the characteristic spectral features of the B 4 C reference sample, especially peak A at 191.2 eV (π* states). Features B (191.98 eV) and C (192.6 eV) were not present in the samples. Feature D (194 eV) indicates the presence of B 2 O 3 ; as the spectra are normalized, the height of this feature indicates the amount of boron oxide present. The shape of the spectra above 195 eV is slightly different for the two samples where the precursor saccharide was HES.
The presence of a depression instead of a peak at 194 eV in sample lioglu1700 was caused by the subtraction of the spectrum measured on carbon tape from the very weak signal provided by this sample. Features E (196.3 eV), F (199.8 eV), and G (204.3 eV) (σ*-like states) were present in all the samples, except for glu1700 and fru1700, which lacked features F and G. In summary, all obtained materials, regardless of their preparation method, contained the B 4 C phase, which was probably surrounded by a shell of boron oxide.
An XRD analysis confirmed the presence of three phases in the powders heat-treated at a temperature of 1300 or 1700 °C: boric acid (ICSD 98-002-4711), a graphite-like structure (ICSD 98-002-4711), and boron carbide with a stoichiometry close to B 13 C 2 (ICSD 98-006-8152). The crystallite size of boron carbide was calculated using the Scherrer formula along the (021) direction. Figure 7 shows the XRD patterns of dextrin mixed with boric acid at a molar ratio of carbon to boron of 9:1, prepared using both methods and synthesized at 1400 to 1700 °C. Upon comparing the recrystallized and lyophilized diffractograms of the same weight ratios synthesized at the same temperature, we observed a significant difference between the XRD patterns. Powders that were recrystallized had much narrower and less diffuse reflections than the lyophilized samples. In both freeze-dried and recrystallized samples, we observed a raised background, which indicated the presence of amorphous material; slightly more amorphous material was present in the freeze-dried samples than in the recrystallized ones. Figure 8 presents the percentage of the boron carbide (B 13 C 2 ) phase in the heat-treated powders for each of the mixtures of saccharides and boric acid. In each case, both with varying molar ratios of carbon to boron and with different saccharides freeze-dried or recrystallized together with boric acid, the B 13 C 2 phase content in the obtained powder increased with increasing synthesis temperature. The highest content of boron carbide was observed at 1700 °C, related to the growth of boron carbide grains. The saccharide precursor used significantly influenced the content of boron carbide in the obtained powder.
When we analyzed the results for the two monosaccharides, glucose and fructose, at the same proportions, a higher content of the B 13 C 2 phase at 1700 °C was obtained with glucose in both lyophilized and non-lyophilized samples. The difference in B 13 C 2 phase content with increasing temperature and boron ratio was visible in all samples with different percent ratios.
Analyzing the results in Figure 9, we concluded that the crystallite size is significantly influenced by temperature for both freeze-dried and non-freeze-dried saccharides. The crystallite size for each weight ratio and for both freeze-dried and recrystallized precursors increased significantly at 1700 °C and varied with each saccharide. Comparing the saccharides with each other, we found that lyophilized polysaccharides yielded the smallest crystallites compared with the same recrystallized polysaccharides. The further particle size increase in boron carbide (B 4 C) may be related to the presence of boron oxide and the mechanism of transfer of individual components through the liquid phase. The Ellingham diagram shows the Gibbs free energy of oxide formation, indicating that the direct reaction between carbon monoxide and boron oxide should occur at a much higher temperature, around 1600 °C. Boron carbide, however, formed at lower temperatures (Figures 8 and 9), especially at 1300 °C, suggesting that at least some of the boron was linked to carbon by chemical bonds in the precursor after pyrolysis, which could become the nuclei of boron carbide (B 4 C) crystallization below 1600 °C. The formation of crystallization nuclei at a lower temperature was confirmed by the spectroscopic (MIR) studies.
Figure 6. Boron K edge XAS spectra for powders of boron carbide obtained from recrystallized and freeze-dried precursors after pyrolyzing at 850 °C and heat treatment at 1700 °C (spectra in color) and reference spectra of commercial B 4 C (black) and amorphous boron (gray). Vertical lines indicate the characteristic spectral features for the B 4 C reference sample.
Figure 7. The X-ray patterns of the powders prepared from the mixtures of dextrin: (a) recrystallized and (b) lyophilized.
By analyzing the SEM images in Figure 10 of the powders obtained from the saccharides, we concluded that the selection of the saccharide determines the size and morphology of the synthesized powders. Comparing the two monosaccharides with each other, we concluded that the powders obtained from recrystallized glucose are much smaller than those obtained from recrystallized fructose (Figure 10a-d). We found that the particle size when glucose was used as the saccharide precursor ranged from 200 nm to about 1 µm, whereas for fructose, it ranged from 2 to 4 µm. The differences between the two monosaccharides are probably due to the presence of an aldehyde group (-CHO) in glucose, which is an aldose, whereas fructose is a ketose with a ketone group (C=O). In the case of polysaccharides, the best results were obtained for powders produced from dextrin, where the particle size for the recrystallized powder ranged from 150 to 600 nm, whereas for the recrystallized HES precursor, it ranged from 10 to 20 µm (Figure 10e-h). Comparing the particle sizes obtained from the same weight proportions of carbon to boron, we noted that, despite the same reaction and the same temperature being used, the selection of the precursor significantly affected the size of the particles obtained.
The findings from this study imply that freeze-drying a saccharide precursor mixed with boric acid influences the morphology of the obtained boron carbide, the B 13 C 2 phase content in the obtained powder, and the size of the crystallites and grains. The presented results indicate that the selection of precursors and the conditions of their heat treatment can be used to control the morphology of boron carbide powders. The current problem is the removal of excess carbon from the system and the fragmentation of boron carbide aggregates, which is the subject of ongoing research. We note that our research has two limitations. The first is the type of saccharide precursors used, which determines the content of boron carbide in the obtained powders. The second is the difficulty of purifying the powders of the graphite-like structure. In attempts to remove excess carbon by thermal oxidation from powders synthesized at the 9 C:1 B molar ratio, boron carbide oxidizes first, forming boron oxide on the surface of the B 4 C particles as a result of their oxidation, while the graphite-like structure remains unaffected. The only solution is intensive grinding and crushing of the obtained powders.
We confirmed that the most important factor influencing the particle size is the synthesis temperature and that the maximum synthesis temperature for boron carbide nanoparticles is 1600 °C because, above this temperature, we observed a significant increase in the size of the crystallites.
Conclusions
Our work led us to conclude that the morphology and size of boron carbide strongly depend on the saccharide precursor and the synthesis temperature. Summarizing the obtained results, it can be stated that the reaction temperature has the greatest influence on the particle size of the obtained boron carbide (B 13 C 2 ). As the temperature increases, crystallite growth takes place and the particle size increases, thus increasing the agglomeration and aggregation of boron carbide (B 4 C) particles. Lyophilization of the saccharide precursors reduces the particle size and allows fine boron carbide to be obtained, but for most compositions the lyophilization process itself causes a decrease in the percentage of the B 13 C 2 phase in the obtained powders compared to recrystallized powders. The use of saccharide precursors for the carbothermic reduction of boron carbide increases the melting point of boric acid and influences the formation of new bonds between the saccharide and boric acid, which affect the synthesis of boron carbide. The type of saccharide precursor used determines the crystallite size, morphology, and the agglomeration and aggregation of boron carbide (B 4 C). The best results are obtained from dextrin (100-400 nm, for both lyophilized and recrystallized samples). Compared to the second polysaccharide used, we can see a significant influence of the saccharide precursor on the B 4 C particle size: using hydroxyethyl starch (HES) as the saccharide precursor, we obtain particles of 10 to 20 µm.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Study of the Antifungal Activity of Shikonin and Alcoholic-Oily Extracts of Iranian Arnebia euchroma L
Introduction: Today, given the drug resistance of fungi and bacteria, much research has focused on herbal-based medications. As these herbal-based medications can show better compatibility, their minimum advantage over synthetic drugs is that they are relatively harmless. This article aimed to study the antifungal effect of the alcoholic extract and essence of Arnebia euchroma L (Abukhalsa) roots on saprophytic and dermatophytic fungi. Methods: In this research, the roots were collected from the Zagros heights in spring. They were then dried, and 300 mL of ethanol was added to each 100 g of dried powder. The alcoholic extraction was performed by maceration, and the extract was concentrated by distillation in vacuum. A Clevenger apparatus was used to extract the essence, which was obtained with boiling water under vacuum for 4-6 hours. Shikonin was provided in commercial form. The antifungal activities of the alcoholic extract, the essence, and shikonin were studied and recorded using the cylinder test, based on the diameter of the inhibition zone on Sabouraud dextrose agar. The minimum inhibitory concentration (MIC) and minimum fungicidal concentration (MFC) were measured by broth macrodilution tests. Results: The results from the cylinder, MIC, and MFC tests showed that 30% shikonin was more effective against the fungi than the root extracts. Our data demonstrated that the alcoholic extract was better than the oily extract. Conclusion: The alcoholic extract had better characteristics than the essence. Further research is required to confirm these findings.
Introduction
One of the most common herbal drugs used in traditional medicine is Abukhalsa (Arnebia euchroma), from the family Boraginaceae. This plant is herbaceous, with sharp silvery hairs, and the flower is cluster-shaped with stretched and alternate leaves. One of the most common habitats of this plant is Iran, especially Kerman.1,2 The root of this plant was used as an ointment for wounds and burns. It was used for reducing swellings and had anticancer activity. It caused mild constipation, was used for nourishing the liver, kidneys, and spleen, and had a vulnerary effect.3-7,8 Shikonin and alkannin are red, lipophilic pigments seen in most species. Other species of Arnebia, such as nobilis, hispidissima, densiflora, and decumbens, are found all over the world.9 Another substance found in Arnebia's root is naphthazarin (5,8-dihydroxy-1,4-naphthoquinone). Other substances such as cycloshikonin, acylshikonin, acetylshikonin, beta and beta dimethyl acrylate, isovalerate, beta acetoxy isovalerate, and arnebin 5,6 were also extracted.10,11 In this research, we tried to use Iranian native fungal species to provide logical and significant results. The article aimed to study the antifungal effect of the alcoholic extract and essence of Arnebia euchroma L (Abukhalsa) roots on saprophytic and dermatophytic fungi.
Extracts and Essence
The roots were collected from the Zagros heights in spring and delivered to the Kosha Faravar Giti Institute. After confirmation, the samples were dried in a dark and dry place and then the roots were separated.3 After drying, 300 mL of ethanol was added to 100 g of dried powder. In order to complete the extraction process, the mixture was placed on a shaker for 72 hours.11 The alcoholic extraction was performed by maceration and the extract was concentrated by distillation in vacuum.12 The Clevenger apparatus was used to extract the essence; extraction was carried out with boiling water under vacuum for 4-6 hours. The essence was separated from the water by n-hexane, and anhydrous sodium sulfate was used to complete the separation process. The essence was collected in a dark dish and then stored at 4°C.13 Shikonin was supplied from the market in commercial form.
Dry Weight of Extracts
In order to evaluate the antifungal activity of the extracts, 5 mL of the concentrated alcoholic extract was added to 3 pre-weighed test tubes and dried during 24 hours of incubation. All tubes were re-weighed and the dry weights were calculated.12

Fungal Strains

For this research, the dermatophytic fungal strains Trichophyton mentagrophytes (PTCC5054), Trichophyton rubrum (PTCC5143), and Microsporum canis (PTCC5069), the yeast Candida albicans (PTCC5027), and the saprophytic strains Aspergillus fumigatus (PTCC5009) and Penicillium chrysogenum (PTCC5076) were provided by the Microbial Collection of the Industrial Research Organization of Iran.
Preparation of Fungal Suspension
All the fungi were cultured on dextrose agar. Fresh colonies were diluted with physiological saline and agitated steadily by vortex. The concentration of the suspension was 1.5 × 10⁸ CFU/mL, equal to the 0.5 McFarland standard.14

Evaluating the Antifungal Sensitivity by the Cylinder Test

For this test, the prepared fungal suspension was cultured on Sabouraud dextrose agar using a swab. The sterile cylinders were applied at determined intervals inside the plate (15 mm from the plate's wall and 24 mm between the centers of two cylinders). Then, 0.2% alcoholic and oily extracts as well as 30% shikonin were added to each cylinder. In one cylinder ethanol and in another DMSO were used as negative blanks, and fluconazole was used as a positive blank. All the plates were incubated at 25°C for 72 hours and the results were recorded and compared based on the diameter of the inhibition zones. All the experiments were repeated 3 times.14,15

Determination of MIC and MFC

Broth microdilution was applied to determine the MIC. In a 96-well plate, 22 wells contained Sabouraud dextrose broth. Concentration series of 0.01 to 0.9 mL were prepared for each well and 1.5 × 10⁶ CFU/mL fungi were added to each well. Three wells were selected as blanks for fungal growth, lack of contamination of the culture, and contamination of the extracts. Finally, all plates were incubated at 25°C for 72 hours. The last dilution that did not show any growth was taken as the MIC value. All the wells that did not show any growth were cultured again on Sabouraud dextrose agar and incubated at 25°C for 48 hours. The extract dilutions that did not show any fungal growth were selected as the MFC values.14
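The MIC/MFC decision rule described above reduces to a simple selection: the MIC is the lowest concentration with no visible growth in broth, and the MFC is the lowest concentration whose agar subculture also shows no growth. A minimal sketch of this rule follows; the concentration values and growth observations are hypothetical, for illustration only, not measurements from this study.

```python
# Sketch of the MIC/MFC decision rule used in broth dilution tests.
# growth_in_broth / growth_on_subculture record whether visible fungal
# growth was observed after incubation (True = growth observed).

def mic_mfc(concentrations, growth_in_broth, growth_on_subculture):
    """Return (MIC, MFC): lowest concentration with no growth in broth,
    and lowest concentration whose subculture also shows no growth."""
    mic = min((c for c, g in zip(concentrations, growth_in_broth) if not g),
              default=None)
    mfc = min((c for c, gb, gs in zip(concentrations, growth_in_broth,
                                      growth_on_subculture)
               if not gb and not gs),
              default=None)
    return mic, mfc

concs = [0.01, 0.05, 0.1, 0.2, 0.4, 0.9]          # hypothetical dilution series
broth_growth = [True, True, True, False, False, False]      # inhibited from 0.2 up
subculture_growth = [True, True, True, True, False, False]  # fungicidal from 0.4 up

print(mic_mfc(concs, broth_growth, subculture_growth))  # -> (0.2, 0.4)
```

If no dilution inhibits growth, both values come back as None, signalling that the tested range did not reach the MIC.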
Statistical Analysis
The data were analyzed with SPSS software (version 20), using the independent t test and one-way analysis of variance (ANOVA), with P values < 0.05 considered significant.
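For reference, the one-way ANOVA used here reduces to an F statistic, the ratio of the between-group to the within-group mean square; SPSS then derives the p-value from the F distribution. A minimal from-first-principles sketch on hypothetical inhibition-zone diameters (the numbers are illustrative, not data from this study):

```python
# One-way ANOVA F statistic: F = MS_between / MS_within.

def f_statistic(groups):
    """groups: list of lists of measurements, one list per treatment."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares (each group mean vs. grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (each value vs. its own group mean)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical zone diameters (mm) for 30% shikonin, alcoholic, and oily extracts
groups = [[22, 24, 23], [15, 16, 14], [10, 9, 11]]
print(round(f_statistic(groups), 1))  # -> 129.0
```

A large F relative to the F(k−1, n−k) distribution corresponds to a small p-value, i.e. a significant difference among the treatments.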
Results
The results from the cylinder, MIC, and MFC tests showed that the effect of 30% shikonin on the fungi was greater than that of the root extracts. The alcoholic extract was better than the oily extract (Figure 1 and Table 1). It should be mentioned that there was a significant difference among T. mentagrophytes, T. rubrum, M. canis, C. albicans, A. fumigatus, and P. chrysogenum.
Results of MIC-MFC tests
The results from MIC and MFC tests using broth microdilution are shown in Tables 2 and 3.
Discussion
Herbal drugs have been used since ancient times. Recently, regarding the drug resistance of fungi and bacteria, herbal drugs as natural reservoirs have received much attention from researchers. Many of these plants are found in human and animal food and it has been shown that they have no side effects. The type and amount of metabolites in different parts of plants vary based on ecological conditions.1,15 European and developed countries are in a period of transition to the production of herbal drugs. It has been shown that high doses of these drugs have no adverse effect on humans. This view leads to efforts to provide herbal products and derivatives. Today, the production of herbal compounds for treating some microorganism-based illnesses is pursued by large pharmaceutical companies. As chemical drugs are synthetic or semi-synthetic, they might be harmful and cause side effects in humans. Iran and other ancient countries such as China, Greece, and Italy have used herbal products to treat diseases.16,17 aranek et al showed that shikonin had positive effects on the inhibition of cells.18 Haghbeen et al prepared 2 different cultures of A. euchroma which had high amounts of pigments. However, antimicrobial experiments showed that they had no effects on gram-negative bacteria and fungi, but optimal effects were observed on gram-positive bacteria.2 Doulah et al showed that the extracts of other species of Arnebia had good antimicrobial effects on gram-negative and gram-positive bacteria.7 Ashkani Esfahani et al illustrated that extracts of A. euchroma had better effects on second-degree burns compared to silver sulfadiazine.19 Pirbalouti et al determined the antimicrobial activities of the extracts of eight plant species endemic to Iran. The antimicrobial activities of these extracts of 8 Iranian traditional plants were investigated against Escherichia coli O157:H7 and Bacillus cereus. Most of the extracts showed a relatively high antimicrobial activity against all the tested bacteria.
Nasiri et al showed that A. euchroma ointment was an effective treatment for healing burn wounds in comparison with SSD and could be regarded as an alternative topical treatment for burn wounds.21 In the cylinder test, we studied the antifungal effects of shikonin and the alcoholic and oily extracts on saprophytic fungi, dermatophytes, and the yeast Candida. We found that shikonin had better effects compared to the alcoholic and oily extracts. In our study, the MIC-MFC tests provided different results from previous works. These tests were applied to the mentioned fungi and good results were obtained. According to these tests, compared to the alcoholic and oily extracts, shikonin had better effects on the fungi, as shown in the figures and tables.
Conclusion
Our results indicated that 30% shikonin had better antifungal effects than the alcoholic extract and essence. The alcoholic extract had better characteristics than the essence. To confirm the final findings, further research is recommended.
Competing Interests
Authors declare that they have no competing interests.
Fig. 1 :
Fig. 1: Mean values of cylinder test results between the tested fungi and extracts, in millimeters
Table 1 .
Comparison of Cylinder Test Results Between Alcoholic Extract and 30% Shikonin (Compared Means for 2 Independent Populations)
Table 2 .
MIC-MFC of Fungi Corresponding to Shikonin and Alcoholic and Oily Extracts
Table 3 .
Comparison of MIC-MFC Tests Between Alcoholic Extract and 30% Shikonin (Compared Means for 3 Independent Populations) | 2018-12-21T11:14:10.793Z | 2017-06-30T00:00:00.000 | {
"year": 2017,
"sha1": "50762c48966803dcd513d8b7c5eb9163d555ca5c",
"oa_license": "CCBY",
"oa_url": "http://ijbsm.zbmu.ac.ir/PDF/IJBSM-2081-20161231160026",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "50762c48966803dcd513d8b7c5eb9163d555ca5c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
258080454 | pes2o/s2orc | v3-fos-license | Mental health and psychosocial status in mothers of children with Attention Deficit Hyperactivity Disorder (ADHD): differences by maternal ADHD tendencies
This study aimed to clarify differences in mental health, psychosocial status, and mental health-related factors among mothers of children with ADHD between those with and without ADHD tendencies of their own. The data from 149 mothers of children with ADHD, collected through an online survey, were analyzed. Mothers with ADHD tendencies had poorer mental health, more children with ADHD, and felt a greater lack of understanding from those around them than mothers without ADHD tendencies. There were differences in mental health-related factors depending on the mother's ADHD tendencies. Therefore, individualized interventions based on the presence or absence of the mother's ADHD tendencies may be important to maintain mental health in mothers of children with ADHD.
Introduction
Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity. Therefore, mothers of children with ADHD experience higher parenting stress [1] and more mental health problems [2][3][4][5][6] than those of children with neurotypical development. Poor mental health leads to dysfunctional parenting skills, such as corporal punishment, lack of discipline, and overly strict discipline [7][8][9], worsening children's ADHD symptoms, behavioral problems, and oppositional defiant disorder [10][11][12][13]. Furthermore, previous studies also reported that parents of children with ADHD had an increased risk of family separation, such as divorce [14][15], and difficulties in their employment [14] compared to parents of children without ADHD.
Therefore, it is important to maintain mental health in mothers of children with ADHD. Previous studies have reported factors related to their mental health. For example, depressive symptoms in mothers and caregivers of children with ADHD were related to the children's features (e.g., severity of behavioral disturbance, hyperactivity-impulsivity dominant and mixed features, the child's introversion problems [16][17][18]), the mothers' socioeconomic attributes (e.g., low income, being the sole caregiver in the family, baseline marital status [17,18]), and parenting stress [1].
On the other hand, genetic factors are strongly involved in ADHD psychopathology, and 25-50% of children with ADHD have parents with ADHD [19,20]. Furthermore, adult ADHD is often associated with secondary disorders such as depression and anxiety disorders [21,22], impaired work functioning [23], and financial difficulties [23]. Therefore, mothers of children with ADHD, especially those with ADHD tendencies of their own, may have more children with ADHD, high parenting stressors, poor socioeconomic status, and mental health problems. However, it has not been studied how the mental health, psychosocial status, and mental health-related factors of mothers of children with ADHD differ according to the mother's own ADHD tendencies. Therefore, this study aimed to clarify the differences in mental health, psychosocial status, and mental health-related factors among mothers of children with ADHD between those with and without ADHD tendencies of their own. We hypothesized that mothers with ADHD tendencies would have poorer mental health and worse psychosocial status, such as being unmarried, lower income, and higher parenting stressors, compared to mothers without ADHD tendencies, and that mental health-related factors would differ by the presence or absence of maternal ADHD tendencies.
Study design and participants
An online survey was conducted between September 24, 2021, and October 19, 2021. Eligible participants were mothers of children (under 18) with ADHD that had been diagnosed by medical experts. The survey link was distributed to the participants by two peer groups for parents of children with developmental disabilities, a provider of after-school care services, and a support group for children with developmental disabilities. A checkbox was set up on the top page of the survey form to confirm that mothers of children with ADHD diagnosed by medical experts were the targets of the survey. Only those who clicked on this checkbox were allowed to participate in the survey. Of the 164 respondents who responded during the survey period, 149 were included in the analysis after excluding six respondents with incomplete data and nine who did not live with their children.
Survey instrument
Attributes, ADHD tendencies, ADHD child's attributes and severity of ADHD symptoms, parenting stressors, and mental health were assessed.
Attributes
Attributes included age, education, marital status, occupation, living with a spouse, household income, medication status, number of children, and number of children with ADHD. The mothers' use of medications for mental health problems (antidepressants, anxiolytics, hypnotics) and ADHD (ADHD medications) was questioned. A list of the generic names of such medications approved in Japan was provided in the questionnaire.
ADHD tendencies
Part A of the Adult ADHD Self-Report Scale (ASRS-v1.1) [24] was used to define ADHD tendencies. The ASRS is a validated scale to screen adults for ADHD. The tool comprises 18 items, with six items for screening (Part A) and 12 additional questions (Part B). Respondents are considered positive on the ASRS when they answer four or more Part A questions at the threshold level [24]. The ASRS Part A screening has a sensitivity of 68.7% and a specificity of 99.5% [24]. Cronbach's alpha coefficient of the ASRS Part A in the present study was 0.831. In this study, mothers who tested positive on the ASRS Part A screening were defined as having ADHD tendencies.
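The Part A screening rule described above — positive when four or more of the six items are answered at or above their threshold level — can be sketched as follows. The per-item thresholds shown are placeholders for illustration; the actual ASRS scoring template defines a specific threshold for each item.

```python
# Sketch of the ASRS-v1.1 Part A screening rule: a respondent screens
# positive when >= 4 of the 6 items are answered at or above that item's
# threshold. Responses use a 0-4 frequency scale (never .. very often).

def asrs_part_a_positive(responses, thresholds, cutoff=4):
    """responses, thresholds: 6 item scores each; returns True if positive."""
    at_threshold = sum(r >= t for r, t in zip(responses, thresholds))
    return at_threshold >= cutoff

thresholds = [2, 2, 2, 3, 3, 3]  # placeholder per-item cutoffs, not the real template

print(asrs_part_a_positive([3, 2, 1, 3, 1, 1], thresholds))  # False (3 items at threshold)
print(asrs_part_a_positive([3, 2, 2, 3, 3, 1], thresholds))  # True  (5 items at threshold)
```

In this study's terms, a mother returning True under the real scoring template would be classified as having ADHD tendencies.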
ADHD child's attributes and severity of ADHD symptoms
The attributes of the child with ADHD included age, sex, diagnosis of comorbid developmental disorders (ASD, LD, DCD, other), and severity of ADHD symptoms. If the respondents had more than one child with ADHD, they were asked to respond regarding the child with the highest burden. The Japanese version of the home form of the ADHD-RS [25] was used for the severity of ADHD symptoms. The total ADHD-RS score was defined as the severity of ADHD symptoms. A higher score indicated more severe ADHD symptoms. Cronbach's alpha coefficient of the ADHD-RS in the present study was 0.878.
Parenting stressors
The Developmental Disorder Parenting Stressor Index (DDPSI) [26] was used for parenting stressors. The DDPSI is designed to measure stressors in parents of children with developmental disabilities and is an 18-item questionnaire composed of four factors: (1) difficulties in understanding the child and coping with the child's needs, (2) anxiety about the child's future and independence, (3) inadequate understanding of the child's disorder from others, and (4) conflicting emotions with regard to the child's disorder. A higher score indicated higher stress experienced by the respondents. Cronbach's alpha coefficients of the four factors in the respondents were 0.721-0.868.
Mental health
The Japanese version of the Kessler 6-Item Psychological Distress Scale (K6) [27] was used. This scale is used in the Comprehensive Survey of Living Conditions conducted triennially by the Ministry of Health, Labour, and Welfare in approximately 300,000 Japanese households to investigate basic matters of national life, such as health, medical care, welfare, pension, and income. Cronbach's alpha coefficient of the K6 for the respondents was 0.866.
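The Cronbach's alpha coefficients reported for each scale follow the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch on a made-up response matrix (rows are respondents, columns are items; not data from this study):

```python
# Cronbach's alpha for internal consistency of a multi-item scale.

def cronbach_alpha(items):
    """items: list of respondents' item-score lists, each of equal length k."""
    k = len(items[0])

    def var(xs):  # sample variance (n-1 denominator), as commonly used
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Made-up 4 respondents x 3 items on a 1-4 scale
responses = [[1, 1, 2], [2, 2, 2], [3, 3, 4], [4, 4, 4]]
print(round(cronbach_alpha(responses), 3))  # -> 0.975
```

Values around 0.7 or above, like those reported for the scales in this study, are conventionally taken as acceptable internal consistency.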
Statistical methods
The 149 respondents were divided into two groups according to the presence or absence of ADHD tendencies assessed by the ASRS Part A screening. Group comparisons were conducted for attributes, the ADHD child's attributes and severity of ADHD symptoms, parenting stressors, and mental health. Student's t-test was used to compare quantitative variables, and Pearson's chi-square test or the Fisher-Freeman-Halton test was used to compare qualitative variables.
We then analyzed the association between mental health (K6 scores) and the variables of attributes, the ADHD child's attributes and severity of ADHD symptoms, and parenting stressors in each group. Pearson's correlation analysis was used for associations with quantitative variables. Student's t-test, Welch's t-test, or one-way analysis of variance was used for associations with qualitative variables. SPSS Statistics version 27 for Mac was used for all statistical analyses, and the statistical significance level was set at 5%.
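Pearson's correlation coefficient, used above for the quantitative associations (e.g. age vs. K6 score), is the covariance of the two variables divided by the product of their standard deviations. A minimal sketch on made-up illustrative pairs (not study data):

```python
# Pearson's r from first principles: r = cov(x, y) / (sd(x) * sd(y)).
# The 1/n normalization cancels, so sums of squared deviations suffice.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Made-up example: perfectly decreasing K6 scores with increasing age
ages = [30, 35, 40, 45, 50]
k6 = [14, 12, 10, 8, 6]
print(round(pearson_r(ages, k6), 3))  # -> -1.0
```

A negative r, as in the study's reported age association (r = -0.401), indicates that higher values of one variable accompany lower values of the other.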
Characteristics differences by maternal ADHD tendencies
The characteristics by maternal ADHD tendencies are shown in Table 1. Mothers with ADHD tendencies had significantly more children with ADHD (p = 0.007), higher scores on the "inadequate understanding of the child's disorder from others" stressor (p = 0.029), and higher K6 scores (10.4 ± 4.7 vs. 7.8 ± 5.3, p = 0.009) than those without ADHD tendencies.
[Table 1 will be here]

Factors associated with mental health in mothers with ADHD tendencies

Table 2 presents the results for factors associated with mental health in mothers with ADHD tendencies. Significant associations were found between K6 scores and the following items: age (r = -0.401, p = 0.014), the "difficulties in understanding the child and coping with the child's needs" stressor (r = 0.469, p = 0.003), and the "inadequate understanding of the child's disorder from others" stressor (r = 0.332, p = 0.045).
[Table 2 will be here]

Factors associated with mental health in mothers without ADHD tendencies

Table 3 presents the results for factors associated with mental health in mothers without ADHD tendencies. Significant associations were found between K6 scores and the following items: medications for mental health problems (yes vs. no: 12.4 ± 6.6 vs. 7.0 ± 4.6, p = 0.006), the severity of ADHD symptoms in the child with ADHD (r = 0.271, p = 0.004), the "difficulties in understanding the child and coping with the child's needs" stressor (r = 0.504, p < 0.001), the "anxiety about the child's future and independence" stressor (r = 0.421, p < 0.001), the "inadequate understanding of the child's disorder from others" stressor (r = 0.294, p = 0.002), and the "conflicting emotions with regard to the child's disorder" stressor (r = 0.360, p < 0.001).
[Table 3 will be here]

Discussion

Currently, mothers of children with ADHD are known to experience higher parenting stress [1] and more mental health problems [2][3][4][5][6] than those of children with typical development. However, it is unknown how the mental health and psychosocial status among mothers of children with ADHD differ according to the presence or absence of their own ADHD tendencies. This study aimed to clarify the differences in mental health, psychosocial status, and mental health-related factors between mothers with and without ADHD tendencies.
First, the results of this study showed that mothers with ADHD tendencies had significantly poorer mental health, a higher number of children with ADHD, and a higher stressor regarding "inadequate understanding of the child's disorder from others" compared to mothers without ADHD tendencies.
Regarding mental health, our results support previous studies reporting that adults with ADHD have a higher incidence of depression and other psychiatric disorders than adults without ADHD [21,22]. In addition, our finding that mothers with ADHD tendencies had more children with ADHD is also consistent with the genetic psychopathology of ADHD [19,20]. Furthermore, the result that the "inadequate understanding of the child's disorder from others" stressor was higher in mothers with ADHD tendencies may reflect the social life difficulties that adults with ADHD have, such as "I get into arguments or fights easily," "I lose friends easily because of my impulsive behavior," and "I am quick to lose my temper with my partner or loved one," as reported in a previous study [23].
Contrary to our hypothesis, there were no differences in socioeconomic status, such as marital status or household income, between the two groups, which might be due to the small sample size of this study.
Second, the results of this study showed that mental health-related factors differed according to the presence or absence of ADHD tendencies in mothers. Among mothers with ADHD tendencies, younger individuals had poorer mental health. This suggests that younger mothers with ADHD tendencies are at especially high risk for poor mental health among mothers of children with ADHD. Additionally, in mothers with ADHD tendencies, the parenting stressors of "difficulties in understanding the child and coping with the child's needs" and "inadequate understanding of the child's disorder from others" were related to poor mental health. Combined with the finding that mothers with ADHD tendencies had a higher stressor of "inadequate understanding of the child's disorder from others" than mothers without ADHD tendencies, improving this stressor seems particularly important for mothers with ADHD tendencies to prevent mental health problems. Moreover, medication use was not related to mental health in mothers with ADHD tendencies, contrary to previous findings that poor mental health is associated with antidepressant medication use [28][29][30][31]. This may indicate that mothers with ADHD tendencies do not have adequate access to medical resources even when they have mental health problems; therefore, support by experts in medical institutions and welfare centers is necessary.
On the other hand, in mothers without ADHD tendencies, poor mental health was related to medications, more severe ADHD symptoms in their child, and various kinds of parenting stressors such as "difficulties in understanding the child and coping with the child's needs," "anxiety about the child's future and independence," "inadequate understanding of the child's disorder from others," and "conflicting emotions with regard to the child's disorder." Regarding medications in mothers without ADHD tendencies, unlike mothers with ADHD tendencies, our finding supports previous findings that poor mental health is associated with antidepressant medication use [28][29][30][31]. In addition, our findings suggest that varied parenting stressors and the severity of the child's ADHD symptoms are the main factors related to poor mental health among mothers without ADHD tendencies. Therefore, improving these parenting stressors seems particularly important to maintain the mental health of mothers without ADHD tendencies. This study had some limitations. First, as this was a cross-sectional study, causal relationships could not be proven. Longitudinal research is necessary to verify causal relationships. Second, since the voluntary nature of the online survey might have led to a selection bias, the respondents of this study may not be representative of the population. Thus, future studies should be conducted using a larger, representative study sample. Third, data were collected using self-report questionnaires. Therefore, a reporting bias cannot be ruled out. Further studies using expert assessments of clinical symptoms and diagnostic criteria are needed in the future. Despite these limitations, this is the first study to clarify differences in mental health, psychosocial status, and mental health-related factors among mothers of children with ADHD, depending on the presence or absence of the mothers' own ADHD tendencies. The results of this study showed the following. Mothers with ADHD tendencies had
poorer mental health and psychosocial status than mothers without ADHD tendencies. Some factors related to maternal mental health, namely parenting stressors such as "difficulties in understanding the child and coping with the child's needs" and "inadequate understanding of the child's disorder from others," were common regardless of the mother's ADHD tendencies, while other factors differed depending on the mother's ADHD tendencies. Therefore, to maintain the mental health of mothers of children with ADHD, individualized interventions based on the presence or absence of the mother's ADHD tendencies, while improving the common factors, seem to be important. These findings should help explore measures of mental health promotion in mothers of children with ADHD in the future.
Conclusions
Mothers with ADHD tendencies were found to have poorer mental health, more children with ADHD, and a higher stressor regarding "inadequate understanding of the child's disorder from others" compared to mothers without ADHD tendencies. Poor mental health in mothers with ADHD tendencies was related to younger age and higher parenting stressors such as "difficulties in understanding the child and coping with the child's needs" and "inadequate understanding of the child's disorder from others." Poor mental health in mothers without ADHD tendencies was related to medication use, more severe ADHD symptoms in the child, and higher parenting stressors such as "difficulties in understanding the child and coping with the child's needs," "anxiety about the child's future and independence," "inadequate understanding of the child's disorder from others," and "conflicting emotions with regard to the child's disorder."

Summary

Mothers of children with ADHD are known to have poorer mental health and more psychosocial difficulties than mothers of children without ADHD, and it is important to improve mental health problems in mothers of children with ADHD. Genetic factors are known to be involved in the psychopathology of ADHD, and it is possible that mothers with ADHD tendencies of their own may have especially more mental health problems and psychosocial difficulties. Therefore, this study aimed to clarify differences in mental health, psychosocial status, and mental health-related factors among mothers of children with ADHD depending on the presence or absence of their own ADHD tendencies. Data from 149 mothers of children with ADHD obtained through a cross-sectional online survey were analyzed. The presence or absence of ADHD tendencies in the mothers was defined by ASRS screening. Mothers with ADHD tendencies had poorer mental health, more children with ADHD, and felt a greater lack of understanding from those around them than mothers without ADHD tendencies. Poor mental health in mothers with ADHD tendencies was
related to younger age and higher parenting stressors regarding "difficulties in understanding the child and coping with the child's needs" and "inadequate understanding of the child's disorder from others." Poor mental health in mothers without ADHD tendencies was related to medication use, more severe ADHD symptoms in their child, and higher parenting stressors such as "difficulties in understanding the child and coping with the child's needs," "anxiety about the child's future and independence," "inadequate understanding of the child's disorder from others," and "conflicting emotions with regard to the child's disorder." The findings of this study imply the need for individualized interventions based on the presence or absence of ADHD tendencies in mothers to maintain mental health among mothers of children with ADHD.
Table 2 .
Relationship between mental health and psychosocial status in mothers with ADHD tendencies. a: Pearson's correlation analysis, b: one-way analysis of variance, c: Student's t-test, d: Welch's t-test. SD: standard deviation
Table 3 .
Relationship between mental health and psychosocial status in mothers without ADHD tendencies. a: Pearson's correlation analysis, b: one-way analysis of variance, c: Student's t-test, d: Welch's t-test. SD: standard deviation | 2023-04-13T06:17:32.085Z | 2023-04-12T00:00:00.000 | {
"year": 2023,
"sha1": "88a2cc81989093524fa8b06f65f93c612269e568",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1809370/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b207dfea13ce519ba4d1f4c609a49337c5e9c4b4",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235293887 | pes2o/s2orc | v3-fos-license | Understanding the Design Space of Mouth Microgestures
As wearable devices move toward the face (i.e. smart earbuds, glasses), there is an increasing need to facilitate intuitive interactions with these devices. Current sensing techniques can already detect many mouth-based gestures; however, users' preferences of these gestures are not fully understood. In this paper, we investigate the design space and usability of mouth-based microgestures. We first conducted brainstorming sessions (N=16) and compiled an extensive set of 86 user-defined gestures. Then, with an online survey (N=50), we assessed the physical and mental demand of our gesture set and identified a subset of 14 gestures that can be performed easily and naturally. Finally, we conducted a remote Wizard-of-Oz usability study (N=11) mapping gestures to various daily smartphone operations under a sitting and walking context. From these studies, we develop a taxonomy for mouth gestures, finalize a practical gesture set for common applications, and provide design guidelines for future mouth-based gesture interactions.
INTRODUCTION
Since the early 1990s, researchers have been investigating using the mouth as an eyes-free and hands-free input channel to facilitate human-computer interaction [35]. Nowadays, with the advances of electronic technology, wearable devices such as earbuds and head-mounted displays are becoming increasingly ubiquitous and providing new sensing techniques around the face and the mouth. Therefore, there has been emerging research on mouth-related gestures recently, such as teeth clicking [3,40,45], humming [17], or chewing [6]. These gestures are usually subtle, requiring little effort from users, enabling eyes- and hands-free interactions, and tending to be more socially acceptable.
However, in spite of the rich prior work, there is still a lack of an overall understanding of the design space of mouth microgestures, which we define as any deliberate action or movement involving any part of the mouth with the purpose of controlling some device. It is also unclear whether users would prefer a certain set of gestures over others. Previous studies in this space were conducted independently, making it difficult to compare their findings. Moreover, different sensing modalities might affect or bias the user experience, making the comparison even more difficult. To the best of our knowledge, there is no prior work fully exploring the design space of mouth microgestures, including users' preference in the space. We seek to provide a foundation of this novel design space for interacting with next generation mobile and wearable devices. With an unexplored interaction space such as this, it is important that we first understand what users envision as an ideal gesture, free of the constraints of sensing technology. This knowledge can then help guide future researchers in designing usable systems that people will actively adopt.
In this paper, we conducted a series of user studies to explore and evaluate the design space of mouth microgestures. Note that we use the terms "microgesture" and "gesture" interchangeably. First, we organized four remote brainstorming sessions, obtaining a set of 86 mouth microgestures. From this, we derived a taxonomy of these gestures based on the mouth organs used in the gesture as well as the primary form of how the gesture is represented. Gestures that use the tongue as the main active organ were proposed the most, followed by those of the outer mouth parts (e.g. lips). Next, we conducted an online survey comprising pairwise comparisons of the proposed user-defined microgestures in terms of physical and mental workload. After forming a ranking of the microgestures from the results, we selected a preferred subset of 14 gestures (20 when paired gestures are split) that contains both the least physically and least mentally demanding microgestures. Finally, we conducted a remote Wizard-of-Oz user study to map the subset of microgestures to real-life tasks users would perform under two contexts: sitting and walking. We used the participants' agreement on microgestures across tasks to form a practical mouth microgesture set and describe quantitative and qualitative insights from the results of our user studies.
This paper's contributions are threefold: (1) We qualitatively examine various characteristics of user-defined mouth gestures and develop a taxonomy of mouth gestures for understanding the design space with regard to the mouth organs used, modality, and expressive power. (2) Through two user studies, we obtain a practical mouth gesture set for daily tasks in common applications under real-life contexts. (3) With insights about users' preferences, we provide a set of design implications and recommendations for how HCI researchers and designers can design new and usable mouth gesture interfaces.
RELATED WORK
We first review the existing systems of mouth-related gestures. We also summarize user-defined gesture design, which is adopted by our first user study.
Mouth-based gesture interactions
A large body of literature has explored the idea of mouth-based gestures. Many of these systems have been used as assistive technologies, providing an effective alternative mode of interaction for those with diseases or injuries that limit motor skills [18]. Indeed, mouth-based gesture controls could be used to mitigate permanent, temporary, and situational impairments, suggesting an inclusive design framework that could be applied to all kinds of users. Included in these impairments is the visual attention needed to operate many current consumer devices [31], with mouth-based gestures potentially alleviating the related cognitive load. In addition to convenience, mouth-based gesture controls can be very subtle due to the fine control humans have over their mouths, similar in granularity to hand dexterity [13], and to tongue movements being containable within a closed mouth. As a result, mouth-based gesture controls can have potential advantages as an alternative to speech recognition in situations where users are in either a very quiet environment or a very noisy environment, where speaking aloud is not appropriate or is difficult to sense [10]. Prior work has focused on developing technical sensing systems to test the feasibility of detecting certain gestures. While some works do evaluate qualitative aspects of their gesture interface from a user experience angle [15,29,45], most mainly assess the performance of the system itself or discuss user preference as a secondary point. Due to the constraints of sensors, they often target a specific sub-part of the mouth that is propitious for their technical approach. In addition to possible bias in users' opinions from the physical system, these works, which focus on different parts of the mouth, are not easily comparable in their results. Because of their proximity, parts of the mouth often interact with each other, which lends itself to treating the mouth as a whole as an interface.
By considering it this way, we also hope to gain common insights that are applicable to an interaction regardless of which part of the mouth is used.
Although there is a gap in this design knowledge, we review the papers proposing technical systems for sub-parts of the mouth and their insights to serve as motivation that a mouth-based gesture interface is a practical and valuable design space.
Tongue interactions.
The tongue is capable of a high degree of expression and dexterity [30]; much prior work has exploited these fine motor capabilities to create tongue-based interfaces. TYTH from Nguyen et al. [29] shows that the tongue can be used to accurately tap different areas of the teeth to enable typing like a keyboard; users approved of the interaction but were less supportive of the physical form factor. Other studies involve equipping the tongue with a magnet for precise tracking to control an interface [34]. Tongueboard [22] detects input from the tongue through an oral retainer with capacitive touch sensors. Less invasive techniques have also been explored for tongue input, such as using RGB cameras for tracking [30] or attaching a pressure-based interface to the outer cheek [9].
Teeth interactions.
Prior literature has explored the feasibility of using tooth clicking as input, particularly in the accessibility field. Zhong et al. and Kuzume et al. both used in-ear bone conduction microphones to detect the occurrence of a tooth click [19,48]. Bitey [3] expands upon this work to allow for distinguishing different pairs of teeth clicking, and Byte.it [40] demonstrates that the interaction technique can be implemented with other commodity sensors like an accelerometer or gyroscope. Additionally, Xu et al. proposed a system of clench interactions that differentiate different degrees of force when biting down and found that users appreciated the clench interaction as a hands-free technique [45].
Face-related interactions.
Facial movements and expressions are a natural, common occurrence in everyday human behavior. Recently, researchers have explored systems that can not only recognize these facial muscle movements but also leverage them to serve as a way to directly manipulate interfaces [23,44,47]. Among the many areas of the face, the various parts of the mouth, both internal and external, and the ways they move and interact with each other are particular points of interest for interaction design. Beyond simple facial recognition, camera-based techniques have been used to track lips to perform lip reading [37,38]. Movements of the eyes, eyebrows, and mouth have been shown to be recognizable with electrooculography (EOG) and electromyography (EMG) sensors applied to the face [28,33]. Tongue-in-Cheek allows gestures using the tongue, cheek, or jaw to be detected in a wireless, non-invasive manner for directional input [15]. Their user study showed that their system was preferred, but the target user group was those with neuromuscular conditions rather than the general population. A similarly non-contact solution uses proximity sensors to enable continuous tracking of the cheeks and jaw from a virtual reality headset [21]. Research involving outer ear interfaces (OEI) has also demonstrated that deformations of the ear canal can be sensed to detect facial movements and expressions as a form of input [1,2,24].
User-defined gesture design
When new systems using gesture interfaces are developed, the design of the gestures is often constrained by technical feasibility or the knowledge of those implementing the system. Participatory design is a well-studied approach that integrates users into the decision-making and design process, and this method is valuable for designing gesture interfaces as well [36]. Gesture elicitation studies were proposed by Wobbrock et al. for interactive surface computing [42]. These studies follow a procedure where the participant is shown the effect of an action, called the referent, and asked to provide the sign, the gesture that would produce the referent. Compared to gestures designed by experts, user-defined gestures have been found to be more intuitive to learn and easily memorable for end-users [26,27]. A user elicitation technique also produces gestures covering a much wider scope as well as being more preferred than those human-computer interaction experts could generate [43]. Over the years, gesture elicitation studies have been applied to a wide variety of gesture interactions [41], such as those using the hand [7], foot [12], head movements [46], face [20], and fingers [11,14].
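Consensus in elicitation studies of this kind is often quantified with the Vatavu-Wobbrock agreement rate, which measures the fraction of participant pairs that proposed the same sign for a referent. Below is a minimal sketch of that standard formula; the gesture labels are hypothetical, and this is general background on elicitation methodology rather than a procedure taken from this paper.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR for one referent: the fraction of
    participant pairs that proposed identical gestures."""
    n = len(proposals)
    if n < 2:
        return 0.0
    counts = Counter(proposals)
    # Sum, over each group of identical proposals of size c,
    # the number of ordered pairs within the group: c * (c - 1).
    agreeing_pairs = sum(c * (c - 1) for c in counts.values())
    return agreeing_pairs / (n * (n - 1))

# 4 of 6 participants propose the same (hypothetical) gesture:
print(agreement_rate(["tap tongue"] * 4 + ["bite lip", "smile"]))  # -> 0.4
```

An AR of 1.0 means every participant proposed the same gesture; values near 0 indicate little consensus for that referent.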
In our work, we take a similar approach to include users' input for the design of gestures specifically focused around the mouth. Inspired by prior work using a framed guessability methodology [5], we employ both open elicitation, where users are unconstrained when proposing gestures, and closed elicitation, where users can only select from a smaller, focused set of gestures [41].
STUDY 1: BRAINSTORMING MOUTH MICROGESTURES
To better understand the design space of mouth microgestures, we first needed to gather a detailed list of possible microgestures that can be performed by users. In this study, we invited 16 participants across four sessions to brainstorm and design mouth microgestures.
Participants
We recruited 16 participants through mailing lists and online communities. Eleven were male and five were female, with ages ranging from 20-34 (mean=24, stdev=3.96). Eleven out of sixteen participants (69%) came from a technical or engineering background. Others worked in areas including design, medicine, and education. All but two had experience with wearable devices, and four self-reported never having used gesture control for a device before.
Procedure
Participants were placed in groups of four along with one of the researchers as a moderator. Groups collaborated remotely over a video conference call. Each brainstorming session, which lasted about one hour, began with a short icebreaker question and introductions so that participants could familiarize themselves with each other. The moderator then described the purpose and procedure for the brainstorming session, as well as the definition of a mouth microgesture. For this study, we kept the goal of the session open-ended and asked the participants to simply brainstorm as many microgestures as they could, without considering sensing feasibility; we did not describe specific applications or tasks on which to base their ideas. This choice in procedure was to keep the participants' focus on the physical nature of using the different parts of the mouth to perform microgestures. With more general guidelines, participants would not need to concern themselves with other aspects of their ideas, such as whether an idea is plausible to detect with current technology or easy to perform. The only limitation we imposed was that microgestures involving spoken words should be avoided, since speech commands as an interface follow different interaction principles than microgestures do. To record their ideas, participants used an online collaborative whiteboard tool called Stormboard to write down their proposed microgestures on virtual sticky notes. An individual brainstorming period was conducted first for 10 minutes, during which participants worked separately to think of ideas. Next, the participants were brought back together and took turns discussing their ideas, bringing all of their sticky notes into a shared workspace for the whole group to view. After sharing, participants were asked to spend the rest of the time working together to create new ideas, adding to those they thought of individually.
The moderator presented questions to spark new lines of thought whenever the group had trouble brainstorming new ideas.
Results
Across the four brainstorming sessions, a total of 104 unique ideas were proposed. We found that many of them were simply variants of other microgestures, such as repeated actions (tapping two times or three times) or duration-dependent actions (holding a pose for one second vs two seconds). In this first study, we were interested in compiling a set of fundamental, unique units of gesture that could be compared fairly in the following study described in Section 4. For example, it would be unfair to compare tapping the tongue to the roof of the mouth twice versus once based on physical demand. Therefore, we merged these gestures if they only differed on repetition and time. We additionally filtered out gestures that did not fit our definition of mouth microgesture or could not be performed by the general population, leaving us with 86 unique mouth microgestures that are distinct in the physical motion of the mouth organ(s).
We create a taxonomy of our full gesture set along two axes. The first is based on the parts of the mouth used to perform the microgesture and the way they interact with each other. We refer to this as the actor-receiver pattern. To perform a mouth microgesture, there is often one mouth organ acting upon another. The actor refers to the primary organ that is moving or controlling the gesture; the receiver is the organ that is receiving the motion or action from the actor. For example, in the gesture bite tongue with left side of teeth, the actor organ would be the teeth, since the main motion of the gesture is the biting down action, and the receiver organ would be the tongue, since the tongue is being acted upon by the teeth.
If the gesture only involves one part of the mouth, then that part serves as both the actor and receiver. Using this pattern, we define four categories of one axis of our taxonomy by the actor in the relationship: teeth, tongue, outer mouth, and throat. Note that the outer mouth refers to the different external areas/muscles of the mouth like the lips, cheeks, and jaw. The second axis is determined by the form or modality with which the microgesture is executed. We characterize our gesture set into three of these categories: state, motion, and acoustic. Although all microgestures contain some degree of motion, state describes microgestures where the goal of the movement is to reach some condition or position for however brief a period. The gesture bite down on tongue with front teeth falls in this category, because the intent of the gesture is to arrive at a state where the tongue is being bitten down on. For the motion category, the movement of the mouth organ itself, between start and finish of execution, defines the microgesture. An example would be slide tongue forward on roof of mouth; the sliding motion of the tongue is what characterizes this gesture. The last category, acoustic, describes gestures that produce a unique sound from the movement of mouth organs, such as clicking the tongue. Figure 1 shows the distribution of the proposed gestures across two axes. Tongue and outer mouth gestures, each with a similar total number of gestures (29 and 26, respectively), made up the majority of the brainstormed gestures. Most of the outer mouth gestures are state gestures, while for tongue gestures, state and motion gestures almost evenly form the majority. When examining the state, motion, and acoustic categories, we also observe that the state group is the largest (49).
STUDY 2: USER-DEFINED GESTURE EVALUATION
With a large set of user-defined microgestures, we conducted a second user study to analyze key features of the gestures that can influence user preference. Motor fatigue and cognitive fatigue are two major concerns when developing a new interaction technique, so we focused on examining and comparing the physical and mental workload of the proposed microgestures.
Categorization of proposed microgestures
To understand the physical and mental demand of different microgestures, a direct comparison between arbitrary microgestures would not be valid. Some microgestures may have a form of motion that is more expressive or allows a more intuitive experience for complex interface operations, but at the cost of being more physically demanding than other, simpler microgestures. Moreover, certain microgestures have an intrinsic, analogous microgesture because of the way direction or location plays a role in the action. For example, the gesture slide tongue to the right along the bottom lip has a corresponding microgesture sliding toward the left instead. We grouped such microgestures into pairs for the purpose of this comparison study. In order to fairly compare microgestures against each other, we created three divisions of microgestures based on their information transfer bandwidth: zero-, one-, and multi-bit. Zero-bit microgestures are those that are standalone and represent a single state of interaction. One-bit microgestures are those that function like a pair, as described earlier; their expressive power can represent two different states. Multi-bit microgestures are fluid and variable in how they are performed; they are the most expressive gestures and can produce more than two distinct effects in interaction. This taxonomy enables a fair comparison of microgestures within each category. Grouping this way yields 19 zero-bit, 28 one-bit, and 13 multi-bit microgestures.
Procedure
Even after consolidating the initial gesture set into the aforementioned categories, there were still too many microgestures in each category to compare physical and mental demand for every pair (C(19,2) = 171, C(28,2) = 378, C(13,2) = 78). We overcame this problem by using a pairwise ranking scheme based on the Crowd-BT algorithm [8], which establishes a high-quality ranking of microgestures from multiple users while requiring each user to complete only a limited number of comparisons.
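Crowd-BT extends the classic Bradley-Terry model of pairwise comparison with per-annotator reliability weights. As a rough illustration of the underlying idea, the sketch below fits a plain Bradley-Terry model by gradient ascent; it is not the full Crowd-BT procedure (which also estimates annotator quality), and the comparison data are made up.

```python
import math

def bradley_terry(n_items, comparisons, iters=500, lr=0.1):
    """Fit latent scores s_i from pairwise outcomes, where
    P(i beats j) = exp(s_i) / (exp(s_i) + exp(s_j)).

    comparisons: list of (winner, loser) index pairs; here the
    "winner" is the gesture judged *less* demanding.
    """
    s = [0.0] * n_items
    for _ in range(iters):
        grad = [0.0] * n_items
        for w, l in comparisons:
            p_win = math.exp(s[w]) / (math.exp(s[w]) + math.exp(s[l]))
            grad[w] += 1.0 - p_win  # push the winner's score up
            grad[l] -= 1.0 - p_win  # and the loser's score down
        s = [si + lr * g / len(comparisons) for si, g in zip(s, grad)]
        mean = sum(s) / n_items
        s = [si - mean for si in s]  # scores are shift-invariant; center them
    return s

# Synthetic data: gesture 0 usually beats 1, and 1 usually beats 2.
data = [(0, 1)] * 8 + [(1, 0)] * 2 + [(1, 2)] * 9 + [(2, 1)] * 1
scores = bradley_terry(3, data)
print(sorted(range(3), key=lambda i: -scores[i]))  # -> [0, 1, 2]
```

Because only pairwise outcomes are observed, each participant can judge a small subset of pairs while the model still produces a global ranking over all items.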
We implemented a survey as a web application participants could visit to evaluate pairs of microgestures [4]. Participants considered a subset (a third) of the microgestures from each of the three categories (zero-, one-, and multi-bit) for both physical and mental demand, resulting in six phases of the survey. Before each phase, the participants were shown the NASA TLX description for either physical or mental demand [16]. Once a phase began, participants would at any one time be presented with text descriptions of two microgestures from the same category, along with a question asking which of the two was less demanding in terms of either physical or mental demand. Four instructional manipulation checks were administered throughout the survey to check participants' attention [32]. We recruited 50 participants using Amazon Mechanical Turk with the requirement that they must have completed at least 5000 tasks previously and have a task approval rating of at least 95%.
Results
In total, we collected 2000 comparisons across all microgesture categories, split evenly between physical or mental demand. In detail, for each type of demand criteria, the 19 zero-bit microgestures had 300 comparisons; the 28 one-bit microgestures had 450 comparisons; the 13 multi-bit microgestures had 250 comparisons. We obtained a ranking and raw scores for microgestures in each category based on the inferred quality, calculated by the Crowd-BT algorithm, on both physical and mental demand.
To help visualize our results, we plotted the physical and mental demand scores against each other. As shown in Figure 2, for the one- and multi-bit categories, there is a clear subset of microgestures in the upper right quadrant that are favored for being both the least physically and the least mentally demanding in their respective groups. Interestingly, the zero-bit microgestures did not show such a clear consensus between both criteria; some that are less physically demanding end up being more mentally demanding and vice versa. The extreme of this is noticeable with the microgestures cough, clear throat, and pucker lips. These were ranked to be the least mentally demanding but also the most physically demanding of the zero-bit gestures. A similar observation can be found in the other categories with the microgestures slide tongue to the left/right along roof of mouth (one-bit) and hum different tones (multi-bit), which are among the least physically demanding but also among the most mentally demanding gestures in their categories. Although microgestures like these may not be ideal in one criterion, we believe that since they excel in the other criterion, they may still be usable in certain applications or scenarios.
For further analysis and later evaluation, we took a subset of microgestures from each category that are both less physically and less mentally demanding. To make this selection, we averaged the physical and mental demand scores of each gesture and then selected the top 20% from each category. This selection process resulted in 14 remaining gestures, as seen in Table 2 (one-bit gestures are separated into two zero-bit gestures, resulting in 20 gestures). Using the actor-receiver mouth organ taxonomy described in Section 3.3, we noticed that 8 of the 14 top gestures use the tongue as the actor organ; 3 use the outer mouth; 2 use the teeth; and 1 uses the throat. This is not to say, though, that microgestures using the tongue are preferred over others. From Figure 1, we see that tongue microgestures constitute a large portion of the total gesture set; of the bottom 20% of microgestures, 7 out of 14 are also tongue microgestures. When we take a closer look at the spatial relationships of the motion, as well as those between the actor and receiver organs, we note a few observations that may point to why some microgestures were favored over others.
We believe that microgestures that lack sufficient feedback (through proprioception) and require the user to maintain an unnatural state for long periods fare worse than others. Two of the multi-bit tongue gestures in the bottom 20% were move tongue around like a pointer/cursor and hold tongue in air at different angles/heights. With the tongue as the actor organ, these microgestures do not allow the user to easily keep track of the spatial position of the tongue and also require the user to maintain a stretched-out tongue, which is unnatural. The microgesture Open mouth at different widths, which was one of the preferred gestures, at first glance also seems to lack sufficient feedback, but we believe that the motion of opening and closing the mouth is universal and natural enough that the spatial mental model of opening the mouth more or less leads to less physical and mental demand. Indeed, humans have been shown to control the opening and closing of their mouths with granularity similar to hand dexterity [13]. In the rankings of the zero-bit microgestures as well, we see that trill the tongue and blow raspberry are not preferred, since they are complex actions that need to be held for a period.
The results of the ranking also suggest that microgestures involving fewer moving organs are less demanding than those that need the user to move multiple parts of the mouth. Among the one-bit microgestures, bite tongue with left or right side of teeth, bite left or right inner cheeks, and click tongue with mouth open to the left or right, which are in the bottom 20%, all need the user to actively move two different parts of the mouth. With the top 20% of gestures, by contrast, even if the actor and receiver organs are different, the receiver is stationary (e.g., slide tongue to the left or right along inner surface of top teeth).
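The top-20% selection described above reduces to averaging the two demand scores per gesture and keeping the best fraction within each category. A small sketch follows; the gesture names and score values are hypothetical, with higher scores meaning less demanding, as in our ranking.

```python
def select_top(gestures, phys, ment, frac=0.2):
    """Keep the fraction of gestures with the best averaged score.

    phys/ment map gesture name -> demand score, where higher
    means *less* physically/mentally demanding.
    """
    combined = {g: (phys[g] + ment[g]) / 2 for g in gestures}
    k = max(1, round(len(gestures) * frac))
    return sorted(gestures, key=lambda g: -combined[g])[:k]

# Hypothetical scores for five zero-bit gestures:
gestures = ["tap tongue", "stick tongue out", "cough", "trill tongue", "smile"]
phys = {"tap tongue": 0.9, "stick tongue out": 0.7, "cough": 0.1,
        "trill tongue": 0.2, "smile": 0.8}
ment = {"tap tongue": 0.8, "stick tongue out": 0.6, "cough": 0.9,
        "trill tongue": 0.1, "smile": 0.7}
print(select_top(gestures, phys, ment))  # -> ['tap tongue']
```

Averaging the two criteria is one simple aggregation choice; a weighted average or a Pareto-front selection would preserve gestures that excel on only one criterion.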
STUDY 3: USABILITY EVALUATION IN DAILY TASKS
Table 2: Higher scores indicate that the gesture was less physically and mentally demanding.
One goal of this paper is to develop a practical user-defined mouth microgesture set and to understand which gestures would be used for real-life daily tasks, and how. Beyond evaluating the physical and mental demand of these gestures alone, the context and environment of a user also have a significant impact on whether or not a user uses a gesture to perform an action. In this section, we describe the design and insights from a third study mapping the 14 preferred gestures from Study 2 to tasks in commonly used smartphone operations.
Task List and Gestures
We first establish a list of applications and their relevant tasks that the microgestures can accomplish. We choose three types of applications that we believe represent a substantial range of applications people commonly use: audio, video, and textual (see Table 3). These types indicate what users are mainly interacting with and what form of feedback they are receiving. For each type, we choose two applications with 1-3 common tasks. These tasks fall into three categories: (1) simple toggle inputs, (2) inputs over a small discrete set, and (3) continuous inputs. Certain operations can be controlled in both a discrete and a continuous manner, as marked in Table 3.
The set of mouth gestures participants could choose to map to tasks was taken from the 14 preferred gestures from Study 2 that were the least physically and mentally demanding of their category. Because the one-bit gestures are actually two symmetrical zero-bit gestures, we separated them into distinct gestures for the purpose of this study. Therefore, participants had a set of 20 mouth gestures to choose from.
Procedure
We designed and conducted a remote within-group Wizard-of-Oz study to simulate an environment where participants would be using mouth gestures to control their smartphone. Due to their ubiquitous nature, smartphones are often used in mobile and multitasking scenarios. To consider the effects of these scenarios on the user's ability and perception of our gestures, our evaluation also included two contexts: the user is sitting down (inactive) or walking around (active). We recruited participants from those who took part in Study 1, and of the original 16, 11 agreed to return for this study. Participants individually met with one of the researchers over a remote video call using their mobile phone. They were also required to have a second device (e.g. laptop, tablet) to use.
At the start of the study, participants were given a document with a list of descriptions of the 20 gestures. Participants then took a few minutes to practice performing each gesture and to ask the researcher any clarifying questions. They could then reference this list for the rest of the study on their secondary device. On the phone that they were using to join the video call, participants would see a shared screen of a mobile phone that the researcher could control. This setup helped the participant feel like they were actually using a phone while also allowing the researcher to manipulate the visual feedback seen by the participants. The study continued as follows: the researcher verbally prompted the participant with a scenario in which they were using a specific phone application to complete a task. The researcher would display the application and the before and after effect of an operation on the participant's phone screen. The participant was given some time to look at the list of microgestures and choose one they would want to use. Once they had decided, they would declare to the researcher that they had made their choice and then immediately perform their chosen gesture.
Figure 3: The filtered microgesture subset used for Study 3. The mouth is drawn slightly exaggerated and more open to make viewing the inner mouth more visible. These drawings are the authors' interpretation of the microgestures. Participants in the studies were not involved in their creation; they proposed or were exposed to descriptions of the gestures.
After hearing their declaration, the researcher completed the operation on the phone so the participant could see the resulting visual feedback. The participant then informed the researcher which gesture from the list they would prefer to use. After each scenario, the researcher asked the participant three questions: (1) whether they preferred the gesture they used over an alternative non-gesture interaction for the task (see Table 4), (2) whether the gesture was easy to perform, and (3) whether they thought the microgesture was socially acceptable. Participants answered with ratings on a 7-point Likert scale.
Table 2 (excerpt): ID, gesture description, category.
1. Grind teeth together gently (zero-bit)
2. Tap tongue to roof of mouth (zero-bit)
3. Stick tongue out (zero-bit)
4. Curl tongue upwards (zero-bit)
5. Bite down on tongue with front teeth (zero-bit)
6. Open mouth (one of one-bit)
7. Close mouth (one of one-bit)
8. Smile (one of one-bit)
9. Frown (one of one-bit)
This exchange would repeat for all of the tasks in both the sitting and walking contexts. All tasks when the participant was sitting were completed together, and likewise when they were walking. The sitting and walking contexts were counterbalanced across users, and the applications participants saw were ordered with a balanced Latin square (n=6). In total, each participant completed 28 tasks. When choosing the gesture to map to a task, participants were not allowed to use a gesture for more than one task in a specific application. However, across different applications, participants could pick the same gesture. The exceptions to this are the multi-bit gestures 18, 19, and 20. Since they are more open-ended and their execution depends on the user, we let participants reuse the same multi-bit gesture within an application. When notifying the researcher which gesture they used for a task, participants described the differences in performance if they used a multi-bit gesture multiple times.
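The balanced Latin square ordering of applications (n=6) can be generated with the standard construction for even n, in which every condition appears once per row and column and every condition immediately precedes every other condition equally often. A generic sketch, not tied to our specific application list:

```python
def balanced_latin_square(n):
    """Balanced Latin square for even n: every condition appears once
    per row and column, and every condition immediately precedes
    every other condition exactly once across rows."""
    assert n % 2 == 0, "this construction requires an even n"
    # First row follows the zig-zag pattern 0, 1, n-1, 2, n-2, ...
    first = [0]
    lo, hi = 1, n - 1
    while len(first) < n:
        first.append(lo)
        lo += 1
        if len(first) < n:
            first.append(hi)
            hi -= 1
    # Remaining rows shift every entry by the row index (mod n).
    return [[(c + r) % n for c in first] for r in range(n)]

for row in balanced_latin_square(6):
    print(row)
# First printed row: [0, 1, 5, 2, 4, 3]
```

Each of the 6 rows is assigned to one participant (or cycled through participants), so application order effects are balanced across the group.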
Results
Because the one-bit gestures were divided and mixed in with zero-bit gestures, the meanings of zero- and one-bit become more fluid, conforming to however the participants used them. For the rest of this analysis, we define singular and paired gestures, which are functionally equivalent to our previous definitions of zero-bit and one-bit, respectively. When we specify zero- or one-bit microgestures, we refer to their original sense in Study 2. A singular gesture can be a zero-bit gesture or one side of a one-bit gesture, and a paired gesture can be either the two sides of a one-bit gesture or a combination of two zero-bit gestures. Table 3 lists all operations and the property of each operation; Disc/Cont means the operation can be controlled either discretely or continuously (e.g., volume adjustment). The last two columns give the most commonly selected gestures for each task under the sitting (S)/walking (W) context after resolving conflicts.
Most Selected Gestures.
Participants only picked singular gestures for toggle operations, including zero-bit gestures, G1-G5, and single sides of one-bit gestures, G6-G17 (see Table 2). For disc/cont operations, most participants selected either paired gestures or multi-bit gestures. Of the paired gestures, all followed the original one-bit definitions with only two exceptions: P4 picked gestures 2 and 3 for the music volume up/down task in the sitting context, and P10 picked gestures 2 and 4 for the music next/previous song task in the walking context. We summarize the number of times gestures in each category (singular, paired, multi-bit) were selected in Figure 4. There were a few observations. First, among singular gestures (the left of Figure 4), the top 5 gestures (G5, G2, G16, G4, G14) were consistent between the sitting and the walking context and account for 70% of the total singular gesture count. It was interesting to see that two of the five were sides of one-bit gestures (G16 and G14); however, these were the only two with top rankings. Overall, zero-bit gestures were preferred over single sides of one-bit gestures for toggle operations: the top 7 gestures included all five zero-bit gestures, and the average number of times a zero-bit gesture was selected was four times that of a side of a one-bit gesture (21.4 vs. 5.3).
Second, the middle of Figure 4 indicates that the paired gestures G12 & G13 (slide tongue to the left & right along inner surface of top teeth) were significantly more preferred than the other paired gestures for discrete or continuous control in both the sitting and walking contexts. Third, among the three multi-bit gestures (only selected for disc/cont operations), the most selected was gesture 18 (56% of the cases), followed by gesture 20 (32%) and gesture 19 (12%). Although gestures 18 and 19 were similar, symmetric gestures, distinguished only by touching the top versus the bottom part of the teeth, it was interesting to find that their preference differed significantly, which departed from the rankings in Study 2 (Table 1). We discuss this observation further in Section 5.3.2.

Table 4: Participants rated their preference of the mouth microgesture compared to an alternative, existing interaction technique. When possible, non-touchscreen interactions were used as baselines to make a fairer comparison, because mouth microgestures are hands- and eyes-free.
Symmetry and Asymmetry of One-bit Gestures.
In Study 2, participants rated one-bit gestures, disregarding their preference between the two sides in the case of sided gestures. In this study, the selections made for the toggle operation provided the opportunity to compare preference for the two sides.
As we noted in Section 5.3.1, gestures G14 and G16 were among the top five commonly selected singular gestures. However, the gestures on the other side (G17 and G15) were significantly less preferred (p < 0.05), being selected only 36%/31% and 14%/25% as often as their respective pairs in the sitting/walking contexts. Moreover, even for the multi-bit gestures G18 & G19, touching areas on the bottom teeth was selected only 14% and 28% as often as touching areas on the top teeth in the sitting/walking contexts. These results show an asymmetry in one-bit gestures: using the top teeth was preferred over using the bottom teeth for interaction, and blowing air out was preferred over sucking air in. While this contrasts with the rankings from Study 2, which placed gesture 19 above 18, we believe that this asymmetry relates to our observations in Section 4.3 about sufficient haptic feedback and proprioception. The tongue naturally resides near the bottom teeth, which does indeed make them less physically demanding to reach with the tongue. However, touching the bottom teeth lacks the purposeful feedback that touching the top teeth provides when users actively move their tongue to the slightly higher position.
In contrast, the preference between left and right gestures was more symmetric. They were rarely selected alone for toggle operations (2/2 times for G13, 0/1 for both G10 and G11 in the sitting/walking contexts). In addition, the GLMM results did not show a significant difference between the two sides of G12 & G13 or of G10 & G11 (p > 0.05).
Users' Preference under Different Contexts.
We compared the most common gestures picked by participants for each task in the two contexts. Nine of the fourteen tasks had either the same or overlapping most-selected gestures (in some tasks, more than one gesture tied for the maximum number of selections). We performed Fisher's exact tests [39] on each task. The results did not indicate any significant difference between the sitting and the walking contexts (p > 0.05).
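For reference, Fisher's exact test on a 2x2 table can be computed directly from the hypergeometric distribution. The sketch below uses hypothetical counts for one task (participants who picked the top gesture versus another gesture, in each context), not the study's actual data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def prob(x):  # P(top-left cell = x) under fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs + 1e-12)

# Hypothetical: 8/12 sitting vs. 7/12 walking participants picked the
# same top gesture for a task -- no significant context effect.
p_value = fisher_exact_2x2(8, 4, 7, 5)
```

A p-value above 0.05, as here, is consistent with the paper's finding that context did not significantly change gesture choice for any task.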
We further investigated the gesture selection agreement for each task. Following [42,43], the agreement score for an operation was calculated as

A = \sum_{P_i \subseteq P} \left( \frac{|P_i|}{|P|} \right)^2,

where P is the set of selected gestures for the operation and P_i is a subset of identical gestures from P [42,43]. Figure 5 visualizes the scores of each task under the two contexts. Interestingly, we found that the top five operations were all disc/cont operations (six in total), and that the scores for most toggle operations were close to 0.2. This indicates that users' preferences were less diversified for disc/cont operations.
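The agreement score above can be computed in a few lines; the ballot below is hypothetical, not taken from the study's data.

```python
from collections import Counter

def agreement_score(selections):
    # A = sum over groups of identical gestures of (|P_i| / |P|)^2,
    # the agreement measure cited from [42,43].
    n = len(selections)
    counts = Counter(selections)
    return sum((c / n) ** 2 for c in counts.values())

# Hypothetical ballots for one operation from 12 participants:
votes = ["G12"] * 8 + ["G13"] * 2 + ["G5"] * 2
score = agreement_score(votes)  # (8/12)^2 + (2/12)^2 + (2/12)^2 = 0.5
```

A score of 1.0 means every participant picked the same gesture; scores near 0.2 (as for most toggle operations) indicate votes spread across roughly five equally popular gestures.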
The volume adjustment tasks in both the phone call and music player applications had the highest agreement scores, regardless of whether users were in the sitting or walking context. The most commonly selected gestures for volume adjustment were G12 & G13 (slide tongue to the left/right along inner surface of top teeth). Therefore, we mapped these two gestures to volume up/down in both the phone call and music player applications. Following the same procedure, we continued to find the most commonly selected gestures for all tasks one after another, in the two contexts separately. If two operations in the same application had the same gesture, the conflict was resolved by having the larger group win the gesture. Our gesture mapping results are summarized in the last two columns of Table 3. The finalized gesture set covers 63.6% of the agreement for the sitting context and 62.6% for the walking context.
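One way to implement this mapping procedure (most-selected gesture per task, with the larger group winning any conflict) is a greedy assignment over vote counts. This is a sketch of one plausible reading of the procedure, with hypothetical ballots; ties between equal-sized groups are broken arbitrarily here.

```python
from collections import Counter

# Hypothetical ballots: task -> list of gestures picked by participants.
ballots = {
    "volume up": ["G12"] * 9 + ["G18"] * 3,
    "next song": ["G12"] * 5 + ["G10"] * 7,
    "mute":      ["G5"] * 8 + ["G2"] * 4,
}

def assign_gestures(ballots):
    """Greedy mapping: repeatedly award the gesture with the largest
    vote count to its task, so on a conflict the larger group wins."""
    mapping, taken = {}, set()
    # Flatten into (count, task, gesture) candidates, largest first.
    candidates = sorted(
        ((count, task, gesture)
         for task, votes in ballots.items()
         for gesture, count in Counter(votes).items()),
        reverse=True)
    for count, task, gesture in candidates:
        if task not in mapping and gesture not in taken:
            mapping[task] = gesture
            taken.add(gesture)
    return mapping

mapping = assign_gestures(ballots)
```

In this toy example, "next song" loses G12 to "volume up" (5 vs. 9 votes) and falls back to its other popular gesture, G10.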
Comparison to baseline interactions.
One aspect of mouth microgestures we were interested in is not just users' preference between the gestures but, more generally, their preference over standard interaction techniques for specific tasks. For each task, we took the mean of all participants' ratings comparing the gesture to a baseline. The mean aggregated score for a task was 4.97 (stdev = 0.53) when participants were sitting down and 5.04 (stdev = 0.44) when walking around. This suggests that, for most users, the tasks are conducive to using mouth microgestures as an interaction. Some of the tasks that had lower mean scores in both contexts, and thus did not favor gestures as clearly, were camera on/off (mean = 4.0 for sitting, mean = 4.36 for walking) for the video call application and increase/decrease font (mean = 4.0 for sitting, mean = 4.82 for walking) for the reading application.
Performance Workload.
To validate our rankings from Study 2, we analyzed the results of participants' Likert scale answers to whether they thought the gesture they chose was easy to perform. For each gesture, we calculated the overall mean rating over all participants and then calculated the mean and standard deviation of the aggregated ratings. Overall, we found that participants did indeed regard the subset of gestures we presented as easy to perform in both the sitting and walking contexts (mean = 5.82, stdev = 0.54 for sitting; mean = 6.04, stdev = 0.37 for walking), supporting the results on physical and mental demand from the previous study. In the sitting context, we noticed that the lowest mean ratings belonged to G1 & G18.
Social Acceptability.
We take a similar approach as in Section 5.3.5 and again aggregate the ratings for each microgesture by taking the mean. The results show that participants generally thought the microgestures were socially acceptable (mean = 5.42, stdev = 1.36 for sitting; mean = 5.42, stdev = 1.42 for walking), but a few microgestures stand out as less socially acceptable, as indicated by the increased variance in aggregated scores. Gestures G3 (mean = 2.3), G6 (mean = 3.5), and G20 (mean = 3.4) had the lowest ratings when participants were sitting down, and gestures G3 (mean = 1.75) and G20 (mean = 3.33) were also rated low in the walking context. A common feature of these gestures is that they are visually noticeable by a third party. It is interesting, though, that participants still selected these gestures rather often, as shown in Figure 4.
DISCUSSION
In this section, we describe how the results of the user studies have implications on the design of mouth microgestures with end-users in mind.
Design Implications
6.1.1 Design Guidelines for Mouth Microgestures. We deduce several general design guidelines for novel mouth microgestures and summarize them here.
First, we found that short, direct actions as gestures were preferable. Microgestures that were intricate and required compound or sequential motions were not regarded as highly as simpler ones.
Closely tied to the previous guideline is that involving fewer moving organs was better. Gestures requiring manipulation of multiple parts of the mouth in conjunction with each other were often rated poorly by users. We speculate that this could be because the mouth is still an unfamiliar mode of interaction, making complex actions unfavorable, or that the speed of gesture execution could be important when using the mouth.
The next guideline we report is that natural mouth movements are good for intuition but not necessarily preferred as gestures. Natural motions, like smiling, may produce gestures that are better for learnability and memorability, but such actions were chosen infrequently. It is possible that for end users, the similarity to everyday actions makes them poor choices for gestures, which are meant to be deliberate.
Next, we note that location and direction carry strong meaning in a microgesture; details like whether an action is toward the top or bottom of the mouth, or to the left or right, shaped how participants interpreted and chose gestures.

Lastly, we suggest that proprioception is important for performing eyes-free gestures. This refers to movements that provide their own haptic feedback, such as tapping the tongue against the roof of the mouth and feeling the roof of the mouth with the tongue. Because mouth microgestures are eyes-free, users need to rely on another form of feedback during execution to feel confident that the microgesture was correct.
6.1.2 Implications for Mouth Microgesture Systems. When viewing the distribution of proposed gestures from Study 1, a considerable number of them make up the group using the tongue as the primary organ. Taking this into account, it is advisable to avoid obstructing the range of motion of the tongue when developing the sensing system for a mouth microgesture interface. On a similar note, for the other axis of our taxonomy, most microgestures fall in the state category. This means that these gestures rely on position or location within/around the mouth. Considering how many there are in this group, fine-grained localization of mouth parts may be a useful feature to pursue when implementing a technical system.
The taxonomy we propose in this paper could also guide how to organize one's mouth microgesture set. For instance, certain functionality in the user interface may be fairly different or unrelated (e.g. volume control vs. navigation) and the mapped gestures similarly should be distinct. Following the defined groupings of mouth microgestures, a gesture designer could ensure that gestures for each functionality use different categories, like separate primary organs or different modalities.
Factors Influencing Users' Preference of Gestures.
Many interfaces for different applications have similar functional widgets, like having some directional action or a continuous input slider. Gesture reusability across applications has much value as it can help with discovering and remembering mouth microgestures, especially since they are currently a novel interaction. As seen in Section 5.3.1, the microgestures slide tongue to the left/right along inner surface of top teeth were often selected for different tasks in various applications.
Even though mouth microgestures do not rely on visual feedback to be useful, user preference may still be influenced by past experiences with touchscreen interfaces. Mouth microgestures that can be drawn from analogous existing gesture interfaces (i.e., tap, swipe) were found to be most popular in our studies, likely due to existing familiarity. Tongue microgestures share a similar mental model with that of finger gestures on a touchscreen, and since interfaces are often still designed around the touchscreen experience, the spatial reasoning of using a finger may carry over to using mouth microgestures. In Study 3, we also noticed some participants chose microgestures associated with rightward motion, like slide tongue to the right along inner surface of front teeth, for the task of answering a phone call. Upon closer examination, we realized that this may have been due to the user interface used to display the referent, which indicated swiping to the right to pick up the call and left to hang up. We found that gestures were closely tied to operations, independently corroborating Wobbrock et al.'s findings on user-defined gestures in surface computing [43].
Metaphors associated with everyday actions with the mouth, like those related to communication, also may play a role in the design of a mouth microgesture set. P6 from Study 3 chose the close mouth gesture for three of four "mute" tasks for the phone or video call applications. They described how it seemed the most intuitive if they wanted to stop the sound input. This comment suggests to us that natural movements of the mouth carry meaning that can be applied to interaction; some of the user-defined microgestures like clear throat or make a short 'snoring' sound may have intuitive purposes as an interaction.
Technological Context
From the results of our third study, we derived a practical set of microgestures that can be mapped to common smartphone interactions. While we planned our studies to explore the design space and discover ideal user-designed gestures, free of constraints, it is meaningful to relate our findings to the current technological landscape with regard to sensing capabilities. In our related work section, we reviewed prior work showing that many facial movements can be detected with head-mounted systems and commodity sensors. We expect a variety of these existing sensing techniques to be capable of differentiating between our proposed gestures. Many of the final selected gestures involve tongue movements to different areas of the mouth, which have been shown to be detectable in [9], [29], and [15]. We envision that recent work using sensors around the ear, including acoustic sensing [1], electric field sensing [24], and motion sensors [40], offers promising avenues to making mouth microgestures more adoptable. Some of the microgestures may be subtle enough to pose a challenge to detect with a single sensing modality. We believe, however, that a sensor fusion approach can overcome these cases, and as recent wearables around the face are being equipped with more sensors, the issue of insufficient or unreliable data will become less problematic. Our studies reveal important gesture characteristics and insights into what users expect from this new type of interface, and this knowledge can provide direction when developing future interfaces and technical systems.
Limitations
The taxonomy we define was developed solely from the gathered gestures of Study 1. Since we did not supplement this set with the gestures used by past technical papers, there may be a loose connection between the insights in gesture design that we derive and any technological implications.
Because of the way we defined our taxonomies of mouth microgestures, participants in the user studies were only exposed to microgestures unique in their spatial design. Variations in their execution, many of which participants in the initial brainstorming session proposed, were not considered. Many microgestures could be performed two or three times as a new microgesture or one could use temporal variations of microgestures like performing them more quickly/slowly. These kinds of modifications may influence a user's choice of using a new microgesture for a task or using a microgesture variation.
The design of the Wizard-of-Oz study has a few drawbacks. In order to smoothly facilitate the study remotely, the primary mode in which referents were administered to participants was visually through a smartphone, so application interfaces were constrained to those normally designed for a mobile phone. People's interactions with wearables with minimal screens or hearables may involve other forms of feedback that our study design does not effectively capture. Mouth microgestures have the advantage of being both hands- and eyes-free, so participants may not have fully experienced this feature. Also, the contexts we tested capture only a limited range of the wide spectrum of possible user activities, and there may be other common daily scenarios that could influence a user's choice of microgesture. Multiple participants commented for the reading task that if their hands or fingers were dirty or occupied, like when cooking, then they would have rated their preference of the microgesture over the baseline touch interaction much higher.
CONCLUSION
We explore the design space of mouth-based microgestures and analyze users' perception of them as an interaction technique to accomplish routine tasks with their personal devices. From an original set of 86 collected user-defined gestures, we present taxonomies to characterize how mouth microgestures can be formed and applied. We present a functional set of 20 mouth microgestures, determined by user preference, that can be applied to tasks of common software applications. The insights we've learned on user behavior of mouth microgestures should help future interaction designers of wearables and hearables develop intuitive, usable mouth microgestures.
Memory and identity: the influence of early preservation practices on English culture
Until the nineteenth century, written records were often considered an adequate form of preservation for historic monuments, buildings, and landscapes. The shift from written to physical preservation was a gradual one that was pioneered by seventeenth-century chorographers, eighteenth-century antiquarians, and nineteenth-century archaeological and architectural societies. Drawing on the work of historians who have examined these eras of amateur historical study, this paper will examine how chorographers and antiquarians who have not always been given serious consideration by historians of the modern preservation movement were, in fact, instrumental in popularising heritage and advocating for early protectionist measures.
connections that can be made across the centuries. The development of the preservation movement in England occurred slowly and only flourished in the closing decades of the twentieth century, but the preservation spearheaded by organisations like the National Trust and Historic England has its roots in the writing pioneered by sixteenth-century chorographers. This paper will draw connections between the works of existing historians and expand beyond them, examining how chorographers and antiquarians who have not always been given serious consideration by historians of the modern preservation movement were in fact instrumental in popularising heritage and advocating for early protectionist measures.
Writing as a tool of preservation
Before the nineteenth century, written records were often considered to be an adequate form of historical preservation. 1 Chorographical and topographical writers from the Renaissance onwards recorded the landscapes and monuments of the past. 2 William Camden's Britannia, first published in 1586, pioneered this trend, and the Society of Antiquaries' official series, the Vetusta Monumenta, which began in 1718, as well as countless antiquarians working independently, compiled similar chorographies of historic monuments and sites throughout the country in the seventeenth and eighteenth centuries. However, antiquarians' interest in the documentation of historic monuments did not translate into a movement to protect such monuments from destruction. In fact, by the eighteenth century, some critics held that the 'art of engraving' was responsible for the loss of many monuments in England. This belief represents a fundamental misunderstanding of the work of chorographers, for many, especially those contributing to the Vetusta Monumenta, sought to record monuments that were already endangered, but it also raises a bigger question: why didn't antiquarians work harder to preserve monuments in situ?

1 Glendinning (2013). The conservation movement: a history of architectural preservation, antiquity to modernity, London: Routledge, Taylor & Francis Group, takes a high-level look at British and European preservation movements, focusing mainly on the nineteenth-century campaigns and legislation for physical conservation projects. More work remains to be done to trace the English preservation movement, particularly at the provincial level.
Furthermore, how did the preservation movement rise from these beginnings?
A fundamentally important aspect of the rise of the preservation movement was the cultural shift that led people to see historic buildings and monuments not just as pieces of architecture or even craftsmanship, but as important parts of a legacy created by ancestors and tangible markers of British identity. 3 The conscious change in the treatment of the built environment was one aspect of a general movement towards the acknowledgement and even creation of British identity and culture over the course of the nineteenth century, and it revolutionised how people cared for historic monuments. 4 But the relationship between the built environment and cultural practitioners was a symbiotic one. By the nineteenth century, generations of chorographers, antiquarians, historians, and archaeologists had laid the groundwork that allowed a new set of connoisseurs to use historic structures as sites of memory and culture. The treatment of historic sites prior to the nineteenth century, while generally not recognised as preservation by modern critics, nevertheless played a fundamental role in shaping the views of nineteenth-century preservation advocates.
When William Camden's Britannia was published in Latin in 1586, it was a ground-breaking piece of work. Prior to the publication of Britannia, there existed in Britain a tradition of regional studies, but these studies existed without acknowledging one another, and often failed to situate regional history in the context of broader national history. 5 Camden's Britannia revolutionised the field of regional studies by rejecting the existing tradition and creating a work that examined and connected the touchstones of local history throughout Great Britain and Ireland. His chorography was immensely popular, running to five editions even before an English translation appeared in 1610, and it was hugely influential on the antiquarians that followed in the seventeenth and eighteenth centuries. 6 Chorography was a new practice in Britain at the time of the Britannia's publication, and it created for historians and antiquarians of the time a new opportunity to assess the context and significance of local history. Although the practice has not been widely studied by historians, it has been acknowledged as a fundamental aspect of the establishment of an antiquarian tradition in Britain. 7 Chorographies such as Camden's contextualised landscapes and monuments in local and regional perspectives, utilising techniques that would later be used in antiquarian, archaeological, etymological, geographical, and historical analyses. Chorographies like the Britannia also served an important additional function: they memorialised the landscape and monuments of a region, providing a lasting record not only of their existence, but often of their history and origins, as best as they could be traced, long before preservation in place took hold in England. 8 Written records of historic monuments served as important memory tools, and they likewise served to interpret and contextualise English identity for readers.
According to Rosemary Sweet, in the seventeenth and eighteenth centuries, 'antiquaries were encouraged to record and preserve the memory of the monuments of history before their disappearance from the face of the nation'. 9 There was no concerted effort to prevent the destruction of monuments or historical sites on the part of chorographers and antiquarians either in Camden's time or in the years that followed, but conversely, such writers would have been acutely aware of the potential for destruction that had been borne out first by the dissolution of the monasteries in the sixteenth century, and then by the iconoclasm experienced during the English Civil War. 10 The destruction experienced during both of these upheavals served as a reminder of the temporality of architecture and the discontinuity of history.
Both the Reformation and the Civil War were difficult topics for chorographers to broach because of their partisan nature, but nevertheless, writers such as John Leland, who was active during the time of the Reformation, as well as Camden and his successors documented ruins from these events and were spurred by the large-scale destruction to catalogue historic structures they encountered. 11 Nonetheless, for a variety of reasons often closely related to the way contemporary chorographers and antiquarians understood land rights and property ownership, memorialisation was seen as an adequate form of preservation well into the nineteenth century. 12 Antiquarians in the eighteenth century devoted considerable effort to chronicling historic monuments, and the Society of Antiquaries' series Vetusta Monumenta, which was begun in 1718, the year the society was founded, focused its attention on structures that were endangered by demolition or restoration, an act which often destroyed original features of a building or monument. 13 The publication of chorographies and illustrated plates was influential in debates over the concept of national identity and heritage, as antiquarians understood historic monuments to be an important aspect of local and even national culture and identity. 14 Antiquarians in the seventeenth and eighteenth centuries viewed historic monuments as memorials to ancestors who had erected them. Although this added a layer of significance to such historic structures, kindred or patriotic attitudes towards the built environment still did not eventuate any sustained movement for preservation in place.
In fact, some early preservationists believed the practice of engraving historic monuments led to their destruction. 15 Although the Society of Antiquaries' Vetusta Monumenta was widely read and hailed as an innovation for preservation in its time, others pushed back against the series of engravings as a factor in the destruction of historic sites. 16 In many cases, however, the factors of destruction were not so straightforward. Historic monuments were generally either destroyed by time and neglect, a process which many antiquarians viewed as natural and even inevitable in the eighteenth century, or because they stood in the path of intended new developments, a process that few contemporaries seemed willing to interfere with. 17 The sites that were chosen for engraving in the Vetusta Monumenta were often already endangered by one of these two processes, and the artworks that the Society of Antiquaries' draftsmen produced served to memorialise historic structures that could well have been lost to time without intervention.

13 The Vetusta Monumenta was originally published as a series of independent plates before the first collected volume was compiled and issued in 1747.
14 Michael Cyril William Hunter (1996). 'Introduction: the fitful rise of British preservation', in Michael Cyril William Hunter (Eds.). Preserving the past: the rise of heritage in modern Britain. Stroud: Alan Sutton, p.5.
15 Lolla, p.19.
16 A letter to the editor in Gentleman's Magazine praised the Vetusta Monumenta, saying, 'To collect and preserve every thing tending to illustrate the history and antiquities of this country, is a most laudable object'. 'Prints, portraits, engravings, biographical anecdotes, &c.', in Gentleman's Magazine 52 (1782), p.223; and Lolla, p.19.
Although the discipline of historic preservation has its origins in antiquarianism, which in turn was influenced by early modern chorography, neither chorographers nor antiquarians were necessarily preservationists in accordance with a modern understanding of the term. While chorographers and antiquarians were certainly interested in the physical vestiges of history that surrounded them, and many antiquarians were avid collectors of historical artefacts, their interest did not often translate into a desire to protect historic buildings and landscapes in place. It was not until overseas factors precipitated a significant scholarly turn to British antiquity that attitudes towards historic structures began to change. Before the beginning of the nineteenth century, antiquarians were often most interested in relating British history to classical antiquity.

The nineteenth century saw a sustained and growing movement towards preservation. It was experienced at the national level, in part as a reaction to the plans of groups like the Cambridge Camden Society, which was founded in 1839, to restore ancient churches, a process which performed some necessary repairs and beautified many deteriorating structures, but at the expense of historic fabric and styles. 29 But it was also experienced at the local level, as antiquarian and archaeological groups dedicated to studying local history propagated from the 1830s. 30 Antiquarians' attitudes towards preservation were undoubtedly shaped by their political views, although to define a liberal and a conservative preservation principle would be generalising too greatly. Politics brought nuance to preservationism, but it also had the potential to bring tension to preservation groups. Politics and religion did play an important role in access to antiquarian groups, especially in the first half of the nineteenth century. 38 Generally, and particularly in the southern areas of the country, antiquarian societies were founded by prominent local citizens who were often fervent members of the Church of England and politically conservative. Antiquarians who were nonconformists or who were politically liberal could have trouble gaining membership to such groups.
In Cambridge, the university architectural society had specific religious requirements for members, while in Essex, one of the founders of the Colchester Archaeological Society was quickly pushed out of the group, a move he believed was related to his political views. 39 For many local preservationists, politics had particular importance when it influenced interpretations of local historical events and figures, two practices that were closely related to the act of creating memory places in historical monuments. In the nineteenth century, especially outside of London, monuments were often considered to be significant because of their associations with historic people and events, rather than merely because of their architectural significance, especially in the case of secular rather than religious architecture. 40 In Taunton, for example, the Somersetshire Archaeological and Natural History Society purchased and preserved Taunton Castle not because it was a particularly notable example of Saxon architecture, but because it was believed to be the site of one of the earliest castles in the country, and because the surviving buildings had seen significant battles during the Civil War. 41 The people who were able to historicise structures in accordance with their own worldview could have a significant influence over the sense of culture and community in a locality.
Many, if not most, of the people who became preservationists in the nineteenth century did so because of changes they observed in their surroundings, whether owing to neglectful decay or purposeful demolition. 42 Local preservationists were aware of the debates taking place at the national level regarding preservation theory and practice, but for all their awareness of high-level argumentation, they found motivation in provincial issues that had personal significance. 43 In the nineteenth century, many people from the English provinces retained lifelong ties to relatively small areas, and antiquarians and preservationists often approached their craft with particular pride of place.
Preservation as a grassroots movement
Even as the preservation movement gained momentum at the beginning of the nineteenth century through the raised awareness created by antiquarian and archaeological societies, the British government did not show a particular concern for preservation until the closing decades of the century. Instead, the rise of the preservation movement was experienced as a grassroots movement in local communities throughout the country. Antiquarians and archaeologists with a particular interest in local history took it upon themselves to use historical remains, including monuments, antiquities, and buildings to interpret and understand the past, and in doing so, they created a more meaningful connection to their history. Historic places became sites of historical understanding and memorialisation, tangible connections to the past. 44 This seemed particularly important in the nineteenth century, as the rapid changes of the industrial revolution made the past seem particularly distant to many students of history. 45 In the beginning of the nineteenth century, the shifting historical consciousness had the effect of creating a new understanding of how history related to the present. 46 Combined with the fact that in the nineteenth century, more people were participating in antiquarian and archaeological societies than ever before, changing historical consciousness led people to historicise architecture, and it is in these changes to participation in and understanding of history that the roots of historic preservation can be found.
However, a cohesive national preservation movement was slow to take hold in the nineteenth century, and most antiquarian and archaeological societies who undertook any preservation advocacy operated only at a local level, at least until the 1890s. Antiquarians in England were aware of preservation organisations in other countries, especially in northern Europe. According to John Waller Green, a contemporary of Wright's, he wanted to create an organisation that could lobby the government to protect 'objects of antiquarian and historical interest'. 48 The government seemed wary of private efforts to promote preservation, however. Acknowledging the role of the French government in preserving monuments in that country, Henry Pelham-Clinton, Earl of Lincoln and Commissioner of Woods and Works, declared 'in this Country the Societies which exist have done, and I believe can do, very little good'. 49 The fact that the British Archaeological Association was not created solely as an advocacy group likely dimmed its chances of success. Charles Roach Smith, one of Wright's associates, hoped the association would provide an educational opportunity for amateur antiquarians and archaeologists, which created a conflict with archaeologists who hoped the body would take a more academic tone. 50 The first congress of the British Archaeological Association, held in Canterbury in 1844, devolved into infighting which led to a split in which the Royal Archaeological Institute was created to be a more serious archaeological body. 51 Ultimately, neither society ever truly became a lobbying group; the British Archaeological Association became known for organising excursions to historic sites, and Thomas Wright, while he remained a member, concentrated the bulk of his efforts elsewhere.
52 While the Archaeological Institute retained a professionalising approach to their practices and focused their study largely on ecclesiological remains, the British Archaeological Association picked up on the ascendant spirit of nationalism that was beginning to infuse politics and national life by the mid-nineteenth century and used archaeological and antiquarian study as a tool to celebrate the illustriousness of the national past. 53 Although the members of the British Archaeological Association were true amateurs, and the appeal of joining the group was often as much social as historical, politics nonetheless had an important effect on the association's activities throughout the country. 54 The societal changes, coupled with the physical changes to the landscape that the technological advances of the nineteenth century brought, created a sense of loss from the destruction of historic monuments amongst antiquarians. In the middle of the nineteenth century, as sentiments towards historic monuments evolved, John Ruskin's books The seven lamps of architecture and The stones of Venice were ground-breaking treatises on the importance of historic architecture, even if they would not be considered preservationist tomes by more modern standards.
In The seven lamps of architecture, Ruskin states that 'if indeed there be any profit in our knowledge of the past […] there are two duties respecting national architecture whose importance it is impossible to overrate: the first, to render the architecture of the day, historical: and, the second, to preserve, as the most precious of inheritances, that of past ages'. 55 Furthermore, Ruskin asserts, 'we may live without architecture, and worship without her, but we cannot remember without her'. 56 Ruskin recognised the importance of a preserved environment to heritage and identity, but Ruskin himself was not a particularly strong proponent of concrete preservation projects in England. He disparaged
Conclusion
The second half of the nineteenth century marked a true turning point for the preservation movement in Britain and was a culmination of the gradual evolution that grew interest in historic monuments in Britain from the seventeenth century onwards. Antiquarian and archaeological societies continued to flourish, and over sixty new groups were formed throughout the country between 1850 and the passage of the 1913 Ancient Monuments Consolidation and Amendment Act. 62 Local groups continued to play a vital role in preservation, which took on a personal tone as groups worked to save monuments because of the specific local cultural currency with which they had been imbued. Even at the close of the nineteenth century, the preservation movement in England was still in its infancy, but the long history that saw chorographers evolve into antiquarians, and antiquarians to archaeologists and preservationists by the middle of the nineteenth century was a fundamental precursor to the actions that followed. Historians of the preservation movement and of local history groups often discount the actions of early antiquarians as insignificant and unlearned, but doing so discredits important early moments in the history of architectural preservation in Britain.
The long legacy of the preservation movement was also an important aspect of the way in which early preservationists played a role in the development of English culture in the nineteenth century. As regional antiquarian and archaeological societies preserved monuments, landscapes, and buildings either through documentation or in situ, such groups made important statements about their understanding of local and national history. Written documentation was easily accessible to other antiquarians and archaeologists across the country, as nineteenth-century societies regularly exchanged transactions and annual reports, allowing aspects of local history to be considered in national contexts. As physical conservation became more important to advocates, the actions of preservationists had even greater importance. The physical remnants of the past that were protected in place, especially when otherwise threatened with destruction, contributed to the collective memory of a locality and allowed all residents, whether members of archaeological societies or not, to engage 62 The 1913 act significantly expanded preservation protections that had first been enacted with the passage of the 1882 Ancient Monuments Act. Simon Thurley (2013). Men from the ministry: how Britain saved its heritage. New Haven: Yale University Press, pp. 82-3.
Prevention of Lethal Osmotic Injury to Cells During Addition and Removal of Cryoprotective Agents: Theory and Technology
Significant survival of cryopreserved cells became a reality only after the discovery and use of cell-membrane-permeating cryoprotective agents (CPAs) (e.g. glycerol; Polge et al, 1949). Before freezing, one or more CPAs must be added to the cell suspension to protect the cells from cryoinjury during the freezing and thawing processes. Unfortunately, the CPAs themselves may be chemically toxic to cells after thawing at room temperature (Katkov et al, 1998). Therefore, post-thaw washing is required to remove CPAs from the cells prior to scientific or medical applications. However, the addition of CPAs to cells before freezing and the removal of CPAs from cells after thawing may cause serious cell loss and damage if the processes are not properly handled.
Introduction
"One-step" methods were formerly used for CPA addition and removal. During the "one-step" CPA addition process, cells are directly (in one step) placed in a solution that is hyperosmotic with respect to the permeating CPA but isosmotic with respect to the impermeable salts/electrolytes. Cells first shrink because of the osmotic efflux of intracellular water and then increase in volume as the CPA permeates and as water concomitantly re-enters the cells (as shown in Figure 1a). During the "one-step" CPA removal process, cells with a high intracellular concentration of CPA are directly exposed to an isotonic salt solution without CPA. Cells will swell because of an osmotic influx of extracellular water and then decrease in volume as the CPA diffuses out of the cells and as water concomitantly moves out (as shown in Figure 1b). As a result of these two aspects (i.e. addition and removal of CPAs) of the cryopreservation procedures, the cells may experience severe osmotic volume excursions causing significant cell "osmotic" injury (Sherman, 1973; Mazur and Schneider, 1984, 1986; Penninckx et al, 1984; Leibo, 1986; Crister et al, 1988a; Meryman, 2007).
Several possible reasons for the osmotic injury have been proposed, including (i) rupture of the cell membrane in hypo-osmotic conditions (i.e. expansion lysis); (ii) the water flux hypothesis: frictional force between water and potential membrane 'pores' causes cell membrane damage (Muldrew and McGann, 1994); (iii) the minimum volume hypothesis: cell shrinkage in hyper-osmotic conditions is resisted by cytoskeleton components, and the resultant interaction between the shrunken cell membrane and the cytoskeleton damages the cells (Meryman, 1970); (iv) the maximum cell surface hypothesis: cell shrinkage induces irreversible membrane fusion/change, and hence the effective area of the cell membrane is reduced; when returned to isotonic conditions, the cells lyse before their normal volume is recovered (Steponkus and Wiest, 1979); and (v) the solute loading hypothesis: hyperosmotic stress causes a net leak/influx of non-permeating solutes; when cells are returned to iso-osmotic conditions, they swell beyond their normal isotonic volume and lyse (Mazur et al., 1972). In order to minimize osmotic injury, many efforts have been made and several techniques have been proposed. Basically, so-called "multi-step methods" are used instead of the "one-step method" for the addition and removal of CPAs, and the resulting cell recovery rate can be significantly improved. During the multi-step CPA addition process, a solution with a high CPA concentration is added to the cell suspension step by step, so that the CPA concentration in the cell suspension increases slowly and gradually. During the multi-step CPA removal process, an isotonic salt solution is added to the cell suspension step by step, and the CPAs in the cell suspension are then removed by centrifugation (Figure 2). Although the multi-step method reduces osmotic damage to cells to some extent, it is complex to operate, requires more laboratory staff, and costs more time, which makes the addition and removal procedures more expensive and difficult in practice. In the past, attempts to develop procedures for the addition and removal of CPAs were based primarily on empirical approaches, i.e. for a given cell type, various temperatures, CPA types and concentrations, and numbers of procedures or steps for CPA addition and removal were empirically tested to find an acceptable procedure. Typical techniques include (i) multi-step addition and multi-step removal of permeating CPAs (Watson, 1979) and (ii) multi-step addition and two-step removal (using a non-permeating solute as an osmotic buffer) of CPAs (Rowe et al., 1968; Mazur and Leibo, 1977; Leibo, 1981). New CPA addition-removal methods and automated devices have recently been developed based on fundamental cell membrane transport theory and engineering approaches (Gao et al, 1995; Gilmore et al, 1997; Katkov, 1998; Myrthe et al, 2004; Zhou et al, 2011), which are introduced and discussed in this chapter.
Cell membrane transport models and mathematical formulations
To date, a number of formalisms exist for describing the cell membrane transport process. These include a one-parameter model, a two-parameter model, and a three-parameter model that considers solute-solvent interactions.
i. One-parameter model (Mazur et al, 1974, 1976)

The one-parameter model utilizes the hydraulic permeability (L_p) of the cell membrane as the only parameter to describe water transport across the cell membrane. The model can be formulated as follows:

dV_w^i/dt = -L_p A_c (Π_e - Π_i)    (1)

where V_w^i is the volume of intracellular water, A_c is the area of the cell membrane surface, and Π_e and Π_i are the extracellular and intracellular osmotic pressures.
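As a quick illustration, the one-parameter model can be integrated numerically. The sketch below is a minimal forward-Euler integration with dimensionless, illustrative parameter values (not measured cell parameters), treating the cell as an ideal osmometer so that the internal osmotic pressure scales as n_i/V_w.

```python
# Forward-Euler integration of the one-parameter water-transport model:
#   dV_w/dt = -Lp * Ac * (Pi_e - Pi_i)
# All values are illustrative/dimensionless, not measured cell parameters.

def simulate_water_volume(vw0, lp, ac, pi_e, n_i, dt=0.01, t_end=10.0):
    """Intracellular water volume under a constant external osmotic
    pressure pi_e; the cell is an ideal osmometer, so the internal
    pressure is n_i / vw (n_i = impermeable intracellular osmoles)."""
    vw = vw0
    for _ in range(int(t_end / dt)):
        pi_i = n_i / vw                       # intracellular osmotic pressure
        vw += -lp * ac * (pi_e - pi_i) * dt   # water efflux when pi_e > pi_i
    return vw
```

With pi_e set to twice the initial internal pressure, the water volume relaxes toward the new osmotic equilibrium, V_w = n_i/pi_e.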
ii. Two-parameter model
The two-parameter model was first presented by Jacob (1932-1933) and further developed more recently by Kleinhans (1998) and Katkov (2000). The model utilizes the parameters L_p and P_s (the CPA solute permeability) to characterize membrane permeability when water, a permeable solute, and a non-permeable solute are present:

dV_w^i/dt = -L_p A_c R T (M_e - M_i)    (2)

dN_s/dt = P_s A_c (M_s^e - M_s^i)    (3)

where N_s is the number of osmoles of solute inside the cell, R is the universal gas constant, T is the absolute temperature, and M_i and M_e are the intracellular and extracellular osmolality, respectively. The subscript 's' refers to the permeable solute, and the remaining symbols are as previously defined.
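The characteristic shrink-then-reswell response during one-step CPA addition (Figure 1a) falls out of these two coupled equations directly. The following sketch integrates them with forward Euler; the parameter values are illustrative and dimensionless (lp_rt lumps L_p, R, and T into one constant), not measured cell values.

```python
# Hedged sketch of the two-parameter formalism for one-step CPA addition:
#   dV_w/dt = -Lp * Ac * R * T * (M_e - M_i)     (water)
#   dN_s/dt =  Ps * Ac * (M_s_e - M_s_i)         (permeable CPA)
# Illustrative, dimensionless values; real cells need measured Lp and Ps.

def simulate_one_step_addition(vw0, n_n, m_n_e, m_s_e, lp_rt, ps, ac=1.0,
                               dt=1e-3, t_end=50.0):
    """Return relative water volumes V_w/V_w(0) over time. n_n = osmoles of
    impermeable intracellular solute. The cell first shrinks (water efflux),
    then re-swells as the CPA (N_s) permeates and water follows."""
    vw, ns = vw0, 0.0
    volumes = []
    for _ in range(int(t_end / dt)):
        m_i = (n_n + ns) / vw                        # total internal osmolality
        m_s_i = ns / vw                              # internal CPA osmolality
        dvw = -lp_rt * ac * (m_n_e + m_s_e - m_i)    # water flux
        dns = ps * ac * (m_s_e - m_s_i)              # CPA flux
        vw += dvw * dt
        ns += dns * dt
        volumes.append(vw / vw0)
    return volumes
```

With these illustrative values the relative volume first drops well below 1 as water leaves, then recovers toward its isotonic value as the CPA equilibrates, reproducing the curve sketched in Figure 1a.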
iii. Three-parameter model

The classical formulation of coupled, passive membrane transport was developed by Kedem and Katchalsky (1958) using the theory of linear irreversible thermodynamics. The formulation consists of two coupled first-order non-linear ordinary differential equations which describe the total transmembrane volume flux and the transmembrane permeable solute flux, respectively. In this model (the so-called Kedem-Katchalsky transport formalism, or KK formalism), a reflection coefficient (σ) is introduced together with L_p and P_s to describe water and solute (CPA) transport across the plasma membrane:

dV_c/dt = -L_p A_c R T [(M_n^e - M_n^i) + σ (M_s^e - M_s^i)]    (4)

dN_s/dt = P_s A_c (M_s^e - M_s^i) + (1 - σ) M̄_s (dV_c/dt)    (5)

where V_c is the cell volume, M̄_s is the average osmolality of the permeable solute in the intracellular and extracellular solutions, and the subscript 'n' refers to the non-permeable solute.
The KK formalism used to be the most general of the three mentioned. However, more recent literature suggests that aquaporins in the cell membrane are highly selective, with non-ionic solute transport occurring mainly through the lipid bilayer or through other channels that are distinct from the aquaporins (Gilmore et al, 1995; Preston et al, 1992). In this case, the estimation of σ as an independent parameter may be inappropriate and may not be relevant from a biological standpoint (Kleinhans, 1998). By assuming that there is no interaction between water and solute during their transport through the membrane, the value of σ can be determined as

σ = 1 - P_s V̄_s / (L_p R T)

where V̄_s = the partial molar volume of the permeating solute. In this manner, the KK formalism still gives the same result as the two-parameter model. In the following context, two examples are presented to show how cell membrane transport models and mathematical formulations can be used to develop optimal conditions and technologies/instruments for the addition and/or removal of permeating CPAs. An important hypothesis is that the degree of cell volume excursion can be used as an independent indicator to evaluate and predict the possible osmotic injury of the cells during addition and removal of CPAs.
Example 1: Development of optimal "multi-step methods" for addition and dilution of glycerol in human sperm

Glycerol is the most commonly used CPA in the cryopreservation of spermatozoa (Polge et al, 1949; Watson, 1979; Critser et al., 1988a). The glycerol permeability characteristics of human spermatozoa have been well studied and reported (Du et al, 1994; Gao et al., 1992). The hypothesis above was tested using the following procedures: (i) determine sperm osmotic injury as a function of its volume excursion limits (swelling/shrinking) in anisosmotic solutions containing only non-permeating solutes without glycerol; (ii) simulate, by computer, the kinetics of water-glycerol transport through the sperm plasma membrane and calculate the sperm volume excursion during different glycerol addition and removal processes, using the membrane transport equations and previously determined sperm membrane permeability coefficients for glycerol and water; (iii) combining the information obtained from procedures (i) and (ii), predict sperm osmotic injury caused by different procedures of glycerol addition and removal; and (iv) perform experiments to test the predictions. If the hypothesis is confirmed, these procedures also provide a methodology for predicting optimal protocols to reduce the osmotic injury associated with the addition and removal of high concentrations of glycerol in human spermatozoa.
Preparation of sperm suspension
Human semen samples were obtained by masturbation from healthy donors after at least 2 days of sexual abstinence. Samples were allowed to liquefy in an incubator (5% CO2, 95% air, 37°C, and high humidity) for ~1 h. A total of 5 μl of the liquefied semen was used for computer-assisted semen analysis (CASA) using CellSoft (Version 3.2/C, CRYO Resources, Ltd, Montgomery, NY, USA) (Jequier and Crich, 1986; Crister et al., 1988b). A swim-up procedure was performed to separate motile from immotile cells [layering 500 μl of modified Tyrode's medium (TALP; Bavister et al., 1983) over 250 μl of semen, incubating for ~1 h in the incubator, and carefully aspirating 400 μl of the supernatant, in which >95% of spermatozoa were motile]. The motile cell suspensions were centrifuged at 400 g for 7 min and resuspended in TALP medium (286-290 mOsmol) supplemented with pyruvate (0.01 mg/ml) and bovine serum albumin (4 mg/ml), at a cell concentration of 1×10^9 cells/ml.
Assessment of human sperm membrane integrity
A methodology for the assessment of sperm membrane integrity, using dual fluorescent staining and flow cytometric analysis, was developed by Garner et al. (1986) and previously validated in our laboratory (Gao et al., 1992, 1993; Noiles et al., 1993). Propidium iodide (catalogue no. P4170; Sigma Chemical Co., St Louis, MO, USA) is a bright red, nucleic acid-specific fluorophore which permeates poorly into spermatozoa with an intact plasma membrane, but is able to diffuse readily into spermatozoa with a damaged membrane. 6-Carboxyfluorescein diacetate (CFDA; Sigma, catalogue no. C5041) is a membrane-permeable compound. After penetrating into cells, it is hydrolysed by intracellular esterases to 6-carboxyfluorescein, which is a bright green, membrane-impermeable fluorophore (Garner et al., 1986). When CFDA is added to a cell suspension containing membrane-intact spermatozoa, the cells fluoresce bright green (Garner et al., 1986). Thus 5 μl of CFDA (0.25 mg/ml in DMSO) and 5 μl of propidium iodide (1 mg/ml in water) stock solutions were added to each 0.5 ml of the treated sperm suspensions. A total of 1×10^5 spermatozoa per treatment were analyzed using a FACStar Plus flow cytometer (Becton Dickinson, Rutherford, NJ, USA). The cells with CFDA staining and without propidium iodide staining were considered intact cells. The percentage of intact cells was determined for each treatment.
The flow cytometer settings used for the experiments were as follows: (i) the gates were set using forward and 90° light scatter signals at acquisition to exclude debris and aggregates; (ii) instrument alignment was performed daily with fluorescent microbead standards to standardize sensitivity and setup; (iii) photomultiplier settings were adjusted with unstained cells, and spectral overlap was compensated with individually stained cells; (iv) excitation was at 488 nm from a 4 W argon laser operating at 200 mW. Fluorescein emission intensity was measured using a 530/30 nm bandpass filter, and propidium iodide intensity using a 630/22 nm bandpass filter.
Determination of osmotic injury as a function of sperm volume excursion in anisosmotic solutions of nonpermeating solutes
The anisosmotic solutions, ranging from 40 to 1500 mOsmol, were prepared as follows: hypo-osmotic solutions were made by diluting TALP medium, and hyperosmotic solutions were made by adding sucrose to TALP medium (sucrose and the solutes in TALP medium are essentially membrane-impermeable compounds). The final osmolality of each solution was measured and checked using a freezing-point depression osmometer (Advanced DigiMate Osmometer, Model 3D2; Advanced Instruments, Inc., Needham Heights, MA, USA). The osmotic tolerance of human spermatozoa was evaluated by exposing the cells to the anisosmotic solutions. A 10 μl volume of isotonic cell suspension (286 mOsmol, 1×10^9 cells/ml) was mixed with 150 μl of each anisosmotic solution. After 1 s to 30 min, spermatozoa in each anisosmotic solution were returned to near-isotonic conditions (272-343 mOsmol) by adding 1500 μl of isotonic TALP medium to 100 μl of each anisosmotic cell suspension. Sperm motility and plasma membrane integrity were measured by CASA and the CFDA-propidium iodide dual fluorescent staining technique, respectively, before and after the anisosmotic exposure. The centrifugal force used in sample preparation was 400 g for 7 min. All experiments were conducted at 22°C.
Thermodynamic modeling and mathematical formulation of glycerol and water permeation across the human sperm membrane
The next step was to compute the osmotic cell volume excursions associated with the addition and removal of hyperosmotic solutions of the permeating cryoprotectant glycerol to suspensions of human spermatozoa in isotonic saline. The classical KK formalism (equations (4) and (5)) is used here, and for the case of a solution consisting of a single permeable solute (e.g. glycerol), the average of the extracellular and intracellular cryoprotective agent concentrations (osmolality) can be given as

M̄_s = (M_s^e + M_s^i)/2    (6)

Since human spermatozoa behave as ideal osmometers (Du et al., 1993), the intracellular concentrations of the impermeable solute (salt) and the permeable solute (cryoprotective agent) can be calculated as previously described (Mazur and Schneider, 1984):

M_n^i = M_n^i(0) V_w^i(0)/V_w^i,  M_s^i = N_s/V_w^i,  where V_w^i = V_c - V_b - N_s V̄_s    (7)

where V_b = the osmotically inactive cell volume (μm³) and (0) denotes the initial condition (t = 0). The initial conditions V(0) are known for each experimental condition or protocol. In the computer simulations, it was assumed that the extracellular concentrations of permeating and non-permeating solutes were constant, and that the mixing of solutions during glycerol addition and removal was instantaneous, i.e. the mixing time = 0.
Human sperm volume, surface area, V_b, and the water and glycerol permeability coefficients have been determined and previously published (Gao et al., 1992; Kleinhans et al., 1992; Noiles et al., 1993; Du et al., 1994). The values of these parameters are shown in Table 1. Assuming that there is no interaction between water and glycerol during their transport through the sperm membrane (in other words, that water and glycerol penetrate the cell membrane independently), the value of σ = 1 - P_s V̄_s/(L_p R T) (Kedem and Katchalsky, 1958) can be calculated. From this equation and the data in Table 1, σ was calculated to be 0.99, and this value was used in the present example.

Table 1. Characteristics of human spermatozoa at 22°C (cell volume, surface area, V_b, and the water and glycerol permeability coefficients, including a glycerol permeability of 1.68×10^-3 cm/min; Gao et al., 1993)
Using equations [4-7], the kinetics of glycerol/water transport across the sperm plasma membrane, as well as the cell volume excursions during different glycerol addition and removal procedures, were calculated using a commercial differential equation solver, MLAB (Civilized Software, Inc., Bethesda, MD, USA). The sperm volume excursion and water transport through the membrane of cells in anisosmotic solutions without glycerol were calculated using equations [4] and [5] with M_s = 0 and N_s = 0.
Addition of glycerol
A final concentration of 1.0 M glycerol in the sperm suspension was achieved by 1:1 (v/v) mixing of the original isotonic sperm suspension with a 2.0 M glycerol solution containing an isotonic (non-permeating solute) salt concentration. Two approaches for mixing the 2.0 M glycerol solution with the sperm suspension were used: a fixed-volume-step (FVS) approach and a fixed-molarity-step (FMS) approach.

Approach 1: fixed-volume-step addition

A 2.0 M glycerol solution was added stepwise to the sperm suspension, and the volume of the 2.0 M glycerol solution added in each step was constant. For example, to make a four-step addition of 1 ml of 2.0 M glycerol solution to a 1 ml isotonic sperm sample, 0.25 ml of 2.0 M glycerol solution would be added four times to the isotonic sperm suspension. The time interval between any two steps was 0.5-1 min.
In the general case, the volume of cryoprotective agent stock medium added to the cell suspension in each step can be calculated by the following equation:

V_i = M_f V_o / [n (M_o - M_f)]

where M_f = the final CPA concentration (molarity) in the cell suspension, M_o = the cryoprotective agent concentration (molarity) in the original stock cryoprotective agent medium, n = the total number of steps, i = the i-th step of addition, V_o = the original volume of isotonic cell suspension, and V_i = the volume of CPA stock medium added to the cell suspension at the i-th step.
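A sketch of the fixed-volume-step calculation, using the symbols defined in the text (the per-step stock volume follows from requiring that the final molarity equal M_f after all n additions):

```python
# Fixed-volume-step (FVS) CPA addition: each of the n steps adds the same
# stock volume, V_i = M_f * V_o / (n * (M_o - M_f)).

def fvs_addition_step_volume(m_f, m_o, v_o, n):
    """Volume of CPA stock medium to add at each of the n equal steps."""
    return m_f * v_o / (n * (m_o - m_f))
```

For the worked example (2.0 M stock, 1.0 M final, 1 ml of suspension, four steps) this gives 0.25 ml per step, and the n additions together bring the suspension to exactly M_f.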
Approach 2: fixed-molarity-step addition

Glycerol-containing medium was added stepwise to the cell suspension in such a way that the glycerol molar concentration in the cell suspension increased by a fixed amount after each step of addition. For example, to increase the molarity by 0.25 M in each of four steps, 0.14, 0.19, 0.27, and 0.40 ml of 2.0 M glycerol stock solution should be added (step by step, four steps in total) to 1 ml of the sperm suspension. The time interval between any two steps was 0.5-1 min.
In the general case, the volume of cryoprotective agent stock medium added to the cell suspension at the i-th step can be calculated by the following equation:

V_i = V*_(i-1) ΔM / (M_o - i ΔM),  with ΔM = M_f/n

where M_f = the final cryoprotective agent concentration in the cell suspension (molarity), M_o = the cryoprotective agent concentration in the original stock cryoprotective agent medium (molarity), n = the total number of steps, i = the i-th step of addition, V_o = the original volume of isotonic cell suspension (ml), ΔM = the increment of glycerol molarity in the cell suspension after each step of glycerol addition, V*_(i-1) = the total volume of the cell suspension before the i-th step of addition (with V*_0 = V_o), and V_i = the volume of cryoprotective agent stock medium added to the cell suspension at the i-th step.
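The fixed-molarity-step volumes can likewise be computed iteratively; a sketch using the text's symbols, where the suspension volume is updated after each addition:

```python
# Fixed-molarity-step (FMS) CPA addition: step i raises the suspension
# molarity by a fixed dM = M_f / n, which requires adding
#   V_i = V*_{i-1} * dM / (M_o - i * dM)
# of stock, where V*_{i-1} is the suspension volume before step i.

def fms_addition_volumes(m_f, m_o, v_o, n):
    """Return the list of stock volumes V_1..V_n for an n-step FMS addition."""
    dm = m_f / n
    volumes, v_total = [], v_o
    for i in range(1, n + 1):
        v_i = v_total * dm / (m_o - i * dm)
        volumes.append(v_i)
        v_total += v_i
    return volumes
```

For the four-step example (2.0 M stock into 1 ml of suspension, 0.25 M increments) this reproduces the quoted 0.14, 0.19, 0.27, and 0.40 ml additions, and the final suspension molarity is exactly M_f.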
Removal of glycerol
To dilute the concentrated glycerol in the sperm suspension and remove glycerol from the cells, an isotonic solution without glycerol was added stepwise to the suspension. The FVS approach, the FMS approach, and a two-step osmotic buffer approach were used for the dilution.
Approach 1: FVS dilution
Given the volume of the sperm suspension (V_o) with an initial cryoprotective agent concentration (M_o), the total volume of isotonic solution required to dilute the cryoprotective agent concentration from M_o to M_s can be calculated by the following equation:

V_total = V_o (M_o/M_s - 1)

Using the FVS approach, the volume of isotonic solution that needs to be added to the cell suspension at the i-th step during the first n-1 steps (n steps in total) can be calculated as follows:

V_i = V_o (M_o/M_s - 1) / (n - 1)

where M_s = the cryoprotective agent concentration in the cell suspension (molarity) after the n-1 dilution steps, M_o = the cryoprotective agent concentration in the initial sperm suspension (molarity), n = the total number of steps, i = the i-th step of addition, V_o = the original volume of the cell suspension (ml), and V_i = the volume of isotonic solution added to the cell suspension at the i-th step. After the n-1 steps of addition of isotonic solution to the cell suspension, the diluted sperm suspension was centrifuged (400 g for 5-7 min), and the sperm pellet was resuspended in an isotonic solution, which constitutes the last (n-th) step of glycerol removal from the cells.
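A sketch of the FVS dilution arithmetic (symbols as in the text; diluent is added only in the first n-1 steps, since the n-th step is the centrifugation/resuspension):

```python
# Fixed-volume-step (FVS) dilution: the total diluent needed to go from
# molarity M_o down to M_s is V_o * (M_o / M_s - 1); it is split equally
# over the first n-1 steps (the n-th step is centrifugation/resuspension).

def fvs_dilution_step_volume(m_o, m_s, v_o, n):
    """Isotonic diluent volume added at each of the first n-1 steps."""
    v_total = v_o * (m_o / m_s - 1.0)
    return v_total / (n - 1)
```

For the eight-step example in Table 2 (100 μl of suspension at 1.0 M glycerol diluted to 0.125 M), this gives 100 μl of diluent per step for seven steps.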
Approach 2: FMS dilution

Concentrated glycerol in the sperm suspension was diluted stepwise by the addition of an isotonic solution. The decrement in the molarity of glycerol after each dilution step was fixed. In the general case, the following equation can be used to calculate the volume of isotonic solution added to the cell suspension at the i-th step during the first n-1 steps (n steps in total):

V_i = V*_(i-1) ΔM / (M_o - i ΔM),  with ΔM = M_o/n

where ΔM = the decrement in the glycerol molarity in the sperm suspension after each stepwise addition of the isotonic solution, M_o = the cryoprotective agent concentration (molarity) in the initial sperm suspension, n = the total number of steps, i = the i-th step of addition, V_o = the original volume of the cell suspension, V*_(i-1) = the total volume of the cell suspension before the i-th step of addition (with V*_0 = V_o), and V_i = the volume of isotonic solution added to the cell suspension at the i-th step. After the n-1 steps of addition, the cryoprotective agent concentration in the cells was diluted to ΔM. The spermatozoa were then transferred to isotonic conditions, which is the last (n-th) step of glycerol removal; see Table 2 for examples.
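The FMS dilution volumes mirror the FMS addition logic; a sketch, again tracking the suspension volume between steps:

```python
# Fixed-molarity-step (FMS) dilution: step i lowers the glycerol molarity by
# a fixed dM = M_o / n, requiring V_i = V*_{i-1} * dM / (M_o - i * dM) of
# isotonic diluent; after n-1 steps the concentration is dM, and the n-th
# step (centrifugation/resuspension) removes the remainder.

def fms_dilution_volumes(m_o, v_o, n):
    """Return the diluent volumes V_1..V_{n-1} for an n-step FMS dilution."""
    dm = m_o / n
    volumes, v_total = [], v_o
    for i in range(1, n):
        v_i = v_total * dm / (m_o - i * dm)
        volumes.append(v_i)
        v_total += v_i
    return volumes
```

The first four volumes for the Table 2 example (100 μl of suspension at 1.0 M glycerol, eight steps) come out to ~14.3, 19.0, 26.7, and 40 μl, matching the quoted protocol (which then centrifuges before continuing with smaller volumes).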
Table 2. Procedures used in the one-step and eight-step removal of 1.0 M glycerol from human spermatozoa

Eight-step dilution, fixed-volume-step method: Add 100 μl of isotonic TALP seven times to 100 μl of sperm suspension to achieve a final glycerol concentration of 0.125 M. After centrifugation, 710 μl of supernatant is taken off. The remaining cell suspension is 90 μl.

Eight-step dilution, fixed-molarity-step method: (1) Stepwise add 14.3, 19, 26.6, and 40 μl of isotonic TALP medium to 100 μl of sperm suspension with 1.0 M glycerol; (2) centrifuge and remove the supernatant; stepwise add 10, 20, and 60 μl of isotonic solution to the remaining 30 μl of sperm suspension. After the seven dilution steps, the glycerol concentration in the sperm suspension is 0.125 M. The final suspension volume is 120 μl.

In both cases, the final sperm suspensions (90 or 120 μl) were further diluted by adding 180 μl of TALP solution. The time interval between any two steps was ~0.5-1 min. The volume of diluent added in each step was calculated using equation [8] or [9].

One-step dilution: Add 2000 μl of isotonic solution directly to 100 μl of cell suspension with 1.0 M glycerol.

Approach 3: Two-step dilution with an osmotic buffer

1.
Add 2000 μl of sucrose buffer medium (TALP + sucrose, 600 mOsm to 100 μl of sperm suspension with 1.0 M glycerol.(The total length of time spermatozoa were in contract with sucrose was 0.5 min before centrifugation.Centrifuge the suspension (400 g for 7 min) and aspirate the supernatant.
Resuspend the cell pellet with 500 μl of isotonic TALP medium Table 3. Procedures used in the two-step removal of 1.0 M glycerol from spermatozoa using sucrose as an osmotic buffer In the first step, glycerol was directly removed by transferring cells to a hyperosmotic medium (osmotic buffer, TALP with sucrose) containing no glycerol but only nonpermeating solutes (salts and sucrose), and in the second step spermatozoa in the osmotic buffer were directly transferred to an isotonic solution (TALP), (Table 3) (Rowe et al, 1968;Mazur and Leibo, 1977;Leibo, 1981).
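The difference between the two eight-step schemes of Table 2 can be seen by tracking the glycerol molarity after each of the seven diluent additions; a minimal sketch, assuming only that the amount of glycerol is conserved during dilution:

```python
def fvs_molarities(M0, V0, dV, steps):
    """Fixed-volume-step: the same diluent volume dV is added at every step."""
    out, V = [], V0
    for _ in range(steps):
        V += dV
        out.append(M0 * V0 / V)   # glycerol amount M0*V0 is conserved
    return out

fvs = fvs_molarities(1.0, 100.0, 100.0, 7)
fms = [1.0 - 0.125 * i for i in range(1, 8)]   # fixed 0.125 M decrement

print([round(m, 3) for m in fvs])   # [0.5, 0.333, 0.25, 0.2, 0.167, 0.143, 0.125]
print(fms)                          # [0.875, 0.75, ..., 0.125]
```

Both sequences end at 0.125 M, but the FVS route takes its largest molarity drop (a full 0.5 M) at the very first addition, whereas FMS spreads the change evenly, which is why the FVS route produces the larger per-step osmotic excursions.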
Experimental examination of the predicted osmotic injury during addition/removal of glycerol
Medium (TALP) with 2.0 M glycerol was added either in one step or stepwise (using the FVS or FMS approach) to an equal volume of the isotonic sperm suspension to achieve a final 1.0 M glycerol concentration at 22°C. The glycerol in the spermatozoa was removed/diluted by a one-step or stepwise addition (using the FVS or FMS approach) of TALP medium, with or without an osmotic buffer (sucrose), to the cell suspension. Detailed procedures for the removal of glycerol are described in Tables 2 and 3. Sperm motility and plasma membrane integrity were measured before and after the different glycerol addition and removal procedures by CASA and by the dual-staining technique with flow cytometry, respectively.
Statistical analysis
Data were analyzed using standard analysis of variance approaches with the General Linear Models procedure of the Statistical Analysis System (Spector et al., 1985). Comparisons were conducted using a protected LSD (least significant difference) approach (Zar, 1984).
Results
The percentage of spermatozoa which maintained motility or plasma membrane integrity after each treatment was normalized to that of untreated, isotonic control samples, and the data are presented in that normalized form.
Determination of osmotic injury as a function of sperm volume excursion
Human spermatozoa were exposed for 5 min to hyper- or hypo-osmotic solutions of sucrose and TALP salts ranging from 60 to 1200 mOsmol, and their motilities were then determined by CASA while still in those solutions. Figure 3 shows that sperm motilities dropped significantly when the osmolality was >50 mOsmol above or below isotonic (286 mOsmol). Motilities approached zero when the osmolalities were <200 or >600 mOsmol.
The next step was to compare these motilities with those observed after spermatozoa were transferred from these anisosmotic solutions back to near isotonic solutions. Figures 4 and 5 show the motilities as a function of time after transfer from hyperosmotic or hypo-osmotic exposures respectively. In both cases, the more the initial exposure departed from isotonicity, the greater the damage upon return to isotonicity. Most, or all, of the damage was evident in the first 30 s after the return, although in the case of transfer from hypertonic solutions to near isotonic there was a further slight and gradual decline over the ensuing 30 min.
Figure 6 compares sperm motilities after a 5 min exposure to the various anisosmotic solutions before and after the return to near isotonic conditions. The reduction in the motilities of spermatozoa exposed to hypo-osmotic media was not affected by the return to isotonic media, but most of the apparent loss of motility of spermatozoa in hyperosmotic media of between 286 and 600 mOsmol was reversed when spermatozoa were returned to near isotonic conditions. For example, although only 10% of spermatozoa were motile in 600 mOsmol solutions, 95% of spermatozoa were motile after the return to isotonic media. The return to near isotonic conditions became especially damaging, however, when the initial hyperosmotic concentration was >600 mOsmol.
Figure 7 shows that the integrity of the plasma membrane of spermatozoa (as assessed by CFDA/propidium iodide) was substantially more resistant to wide excursions from isotonicity than was motility. Thus, >90% of the spermatozoa exposed to a 90 mOsmol salt solution retained an intact plasma membrane after return to near isotonic conditions, whereas <10% remained motile both before and after the return to isotonic. Loss of plasma membrane integrity in 50% of the spermatozoa occurred only when spermatozoa were exposed to a 60 mOsmol solution, a figure that agrees with a previous report (Noiles et al., 1993); that loss occurs whether or not spermatozoa are returned to isotonic conditions. This has been interpreted as lysis from the attainment of a cell volume in excess of that tolerated by the surface area of the plasma membrane.
Using light microscopy, morphological changes in sperm cells were observed during exposure to anisosmotic solutions. In a portion of the spermatozoa, the tail region became configured in a 'zigzag' pattern on exposure to a hyperosmotic solution. The pattern of sperm tail curling in hypo-osmotic solutions was osmolality dependent, which is consistent with a previous report (Jeyendran et al., 1984). Moreover, the curling of sperm tails occurred not only when isotonic spermatozoa were exposed to hypo-osmotic conditions, but whenever the osmolality fell relative to the cells' current environment; for example, when shrunken spermatozoa in hyperosmotic solutions were returned to iso-osmotic conditions, since iso-osmolality is 'hypo' relative to a given hyperosmolality. The tail curling was irreversible. The mechanism(s) behind this morphological change is not clearly understood.
Calculated volume excursions associated with exposures to anisosmotic solutions
Since it has been shown that human spermatozoa behave as ideal osmometers over most of the range of osmolalities studied here (Du et al., 1993), a direct physical consequence of exposure to anisosmotic conditions is a major excursion in cell volume. The kinetics of the volume excursion of spermatozoa in these hypo- and hyperosmotic solutions (containing only non-permeating solutes) were calculated and are plotted in Figure 8A and B respectively, indicating that only a short time is required for human spermatozoa to achieve osmotic equilibration (<1 s for shrinking, and ≤30 s for swelling). Figure 8A and B also show the maximum or minimum volume of spermatozoa when osmotically equilibrated with each anisosmotic solution. Sperm equilibrium volume as a function of extracellular osmolality is shown in Figure 9; it can be calculated using equation (6) (no cryoprotective agent) or read directly from Figure 8A and B. To obtain a high (>95%) motility recovery, the lowest and highest osmolalities which human spermatozoa can tolerate (Figures 4 and 5) were found to be close to 240 and 600 mOsmol respectively. At these two osmolalities, the corresponding cell volumes at osmotic equilibrium were estimated directly (Figure 9) to be ~1.1 (for 240 mOsmol) and 0.75 (for 600 mOsmol) times the isotonic sperm volume, indicating that spermatozoa can only swell or shrink within a relatively narrow range if they are to maintain a high post-anisosmotic motility recovery. Based on Figures 4, 5 and 9, Figure 10 was plotted, which clearly shows the post-anisosmotic injury (motility loss) as a function of the osmotic equilibrium volume of spermatozoa in anisosmotic solutions. Defining the lower volume limit (LVL) and upper volume limit (UVL) as the cell volumes at which 5% of motile spermatozoa irreversibly lose their motility, or, reciprocally, 95% of spermatozoa maintain their motility, one can obtain the LVL and UVL values for human spermatozoa from Figure 10 as follows:
LVL = 0.75×isotonic sperm volume,
UVL = 1.10×isotonic sperm volume.
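These limits are consistent with the Boyle-van 't Hoff relation for an ideal osmometer, V/V_iso = v_b + (1 - v_b)·(Osm_iso/Osm), provided the osmotically inactive volume fraction v_b is taken as roughly 0.5; note that this v_b value is an assumption of this sketch, not a figure given in the text:

```python
def bvh_rel_volume(osm, osm_iso=286.0, vb=0.5):
    """Boyle-van 't Hoff equilibrium volume, relative to isotonic.

    vb is the osmotically inactive volume fraction (the 0.5 used here
    for human spermatozoa is an assumed, illustrative value).
    """
    return vb + (1.0 - vb) * osm_iso / osm

print(round(bvh_rel_volume(600.0), 2))   # ~0.74, close to the LVL of 0.75
print(round(bvh_rel_volume(240.0), 2))   # ~1.10, close to the UVL of 1.10
```

With this assumed v_b, the hyperosmotic (600 mOsmol) and hypo-osmotic (240 mOsmol) tolerance limits map onto equilibrium volumes near the LVL and UVL quoted above.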
Prediction of optimal protocols for glycerol addition/removal
The kinetics of human sperm volume excursion during one-step addition and removal of 0.5-2.0 M glycerol were calculated using equations (6-9) and are shown in Figure 11A and B respectively. The higher the glycerol concentration, the longer the time taken for sperm volume recovery and the greater the volume excursion.
Two different approaches, i.e. fixed-volume-step (FVS) and fixed-molarity-step (FMS), for the addition/removal of glycerol in spermatozoa were considered and used in the present example. Based on equations (6-9), the kinetics of water and glycerol transport through the sperm membrane were simulated by computer. Figure 12 shows the calculated sperm volume excursion during a one-step or four-step addition of glycerol to achieve a final 1.0 M glycerol concentration at 22°C using the FMS and FVS approaches respectively. From Figure 12, a one-step addition of glycerol to spermatozoa was predicted to cause ~20% sperm motility loss, because the minimum volume which the cells would achieve during this glycerol addition was ~72% of the original cell volume, i.e. below the LVL (75%, or 0.75×isotonic sperm volume).
In contrast, a four-step FMS glycerol addition was predicted to prevent sperm motility loss (<5% loss). Figure 12 also shows a comparison between the four-step FVS and FMS approaches.
A four-step FVS method was predicted to cause a lower minimum volume than a four-step FMS method. From Figure 13, a one-step removal of 1.0 M glycerol was predicted to cause >70% motility loss, because the maximum cell volume during the glycerol removal was calculated to be in excess of 1.6 times the isotonic cell volume, which is much higher than the UVL (1.1×isotonic sperm volume). Figure 14 shows that a four- or six-step FMS removal procedure was predicted to reduce sperm motility loss significantly, but these may still cause >5% motility loss, while an eight-step FMS removal was predicted to be able to prevent sperm motility loss (<5% loss). Figure 13 also shows a comparison between an eight-step FMS and an eight-step FVS removal procedure. An eight-step FVS removal was predicted to cause a maximum cell swelling >1.2×isotonic cell volume (>UVL), while the maximum cell volume during an eight-step FMS removal was predicted to be much lower than the UVL, indicating that the eight-step FVS removal is not as good as the eight-step FMS. Based on the data presented in Figures 11-14, it was also found from the calculations that human spermatozoa rapidly achieve osmotic equilibrium (within 15 s) during any stepwise addition or removal of glycerol. For example, from the calculations, human spermatozoa achieve osmotic equilibrium within 15 s after each step of glycerol addition in either the one-step or the four-step addition (Figure 12). This indicates that only a short time interval between steps of glycerol addition/removal is required for the cells to achieve the corresponding osmotic equilibration volume after each step.
In the analysis above, sperm osmotic injury (motility loss) caused by different glycerol addition/removal procedures was predicted, and a four-step FMS addition and an eight-step FMS removal of 1.0 M glycerol were found to be acceptable protocols to prevent sperm motility loss (<5%).
Theoretical evaluation of two-step glycerol removal using an osmotic buffer
A two-step removal of cryoprotective agent from human spermatozoa using a nonpermeating solute as an osmotic buffer has previously been used to avoid osmotic injury in other cell types (Rowe et al., 1968; Leibo and Mazur, 1978; Watson, 1979). The steps involved in this approach are: (i) the cryoprotective agent is directly removed and cell swelling is reduced by transferring the cells, with the cryoprotective agent, to a hyperosmotic medium (osmotic buffer) of non-permeating solutes; and (ii) the cells in the osmotic buffer are rehydrated by directly transferring them to an isotonic solution. Since the current results showed that 600 mOsmol was the hyperosmotic upper tolerance limit for human spermatozoa to maintain 95% motility, the osmolality of the osmotic buffer medium should not exceed 600 mOsmol. Using this limiting criterion, a hyperosmolality of 600 mOsmol would be expected to provide the maximum 'buffer effect' to reduce sperm swelling during the first step of glycerol removal. Sperm volume excursion during this two-step glycerol removal process was calculated and is shown in Figure 15. It was predicted that the maximum volume spermatozoa would achieve is 1.25 times the isotonic cell volume, which is higher than the UVL of human spermatozoa and could be expected to cause >40% sperm motility loss, as predicted from Figure 10.
Results from experimental examination
Glycerol was added to or removed from human spermatozoa using the stepwise procedures to test the theoretical predictions. A one-step addition resulted in ~19.2% sperm motility loss, or 81.8±8.7% (mean±SEM, n=15) motility recovery, while the four-step FMS or FVS addition significantly (P<0.001) increased the motility recovery to 93.5±5.6% (n=15) or 91±4.8% (n=15) respectively. For the different glycerol removal procedures (cf. Table 2), <30% (28.5±3.8%, n=15) of motile spermatozoa kept their motility after a one-step removal of 1.0 M glycerol, while the majority of spermatozoa (92±8.2%, n=15) maintained motility after the eight-step FMS removal. In comparison, only 62±5.8% of spermatozoa maintained motility after the eight-step FVS removal. The motility recovery after the two-step removal (Table 3) using sucrose as an osmotic buffer was 43±5.3% (n=15). The experimental results agreed well with the predictions generated from the computer simulations. Data analyses indicated that the different glycerol removal procedures caused different motility losses (P<0.001 between any two procedures). Over 90% of spermatozoa maintained membrane integrity under all experimental conditions.

Fig. 15. Calculated relative sperm volume (normalized to the isotonic sperm volume of 1) as a function of time after 1.0 M glycerol was removed from spermatozoa in two steps using a 'hyperosmotic buffer' solution.
Step 1: 1.0 M glycerol was removed from the spermatozoa by one-step exposure to a 600 mOsmol hyperosmotic (salt + sucrose) solution without glycerol.
Step 2: Spermatozoa in the 600 mOsmol solution were returned to isotonic condition (286 mOsmol) in one step.
Example 2: Development of a novel dilution-filtration method and instrument to remove glycerol from human red blood cells (RBCs)
Cryopreservation is widely used around the world for the long-term preservation of RBCs. In the USA, the FDA has approved the storage of frozen RBCs at -80°C for as long as 10 years (Meryman, 2007). However, the glycerol in RBCs must be reduced to a final concentration below 1% before infusion to prevent hemolysis (Valeri et al., 2001). The step of removing CPAs may cause serious cell loss due to the cell volume excursion induced by osmotic disequilibria (Meryman, 2007). In the past decades, many efforts have been made to improve this process (Rowe et al., 1968; Meryman et al., 1972, 1977; Valeri et al., 1975, 2001; Castino et al., 1996; Arnaud et al., 2003).
Currently, multi-step centrifuging methods are most commonly used, and some of them can achieve favorable results (Rowe et al., 1968; Meryman et al., 1972, 1977; Valeri et al., 1975, 2001). However, the procedures are difficult and time consuming for manual operation owing to the large cell suspension volume or high CPA concentration. In addition, most of the systems are not closed and are thus open to contamination (Castino et al., 1996; Valeri et al., 2006). Automatic centrifuging systems may significantly reduce human labor and contamination (Valeri et al., 2001), but their high cost limits their application in many areas. Recently, dialysis has been considered as an alternative method by some researchers (Castino et al., 1996; Arnaud et al., 2003; Ding et al., 2007, 2010). It can remove CPAs efficiently; however, due to the non-uniform distribution of the hollow fibers, the mass transport in a dialyzer is too complicated to be controlled, especially in the unsteady state. In addition, the dialysis method is not efficient at removing large-molecule substances (Daugirdas et al., 2006), such as cell fragments and proteins released from broken cells. These factors limit the use of the dialysis method in some applications.
In the clinic, hemofiltration, which involves dilution and filtration to remove toxins from blood, has been proved to offer better controllability, as well as a better ability to remove large-molecule substances, than hemodialysis (Daugirdas et al., 2006). Drawing on hemofiltration, a dilution-filtration system was recently developed for removing CPAs (Zhou et al., 2011). The closed system helps to avoid contamination of the cells, and the continuous and automatic process offers a particular advantage in efficiency, especially for large-scale samples. The related research work is introduced below.
Technical Design
A dilution-filtration system was developed as shown in Fig. 16 (hemofilter: Plasmflo TM AP-05H/L, ASAHI; pumps: 400F/M1, Watson-Marlow; silicone tubing: 985-75, Pall). To remove CPAs, the thawed cell suspension is first transferred into the special blood bag (made from an infusion bag). The suspension is then driven by the blood pump to flow circularly among the bag, the mixer and the hemofilter. While passing through the mixer, the suspension is quickly diluted by diluent, and the dilution ratio can be controlled to prevent lysis. In the hemofilter, extracellular solution containing CPA is partly ultrafiltrated while the cells remain inside. As the circulation proceeds, the CPAs in the cell suspension are removed continuously. The whole process is conducted automatically in a closed system, so this method can be expected to reduce human labor as well as the risk of contamination significantly.
Theory of optimal operation protocol
Here, the optimal operation protocol is defined as the process that minimizes the operation time (to a final CPA concentration below 10 g/L) as well as the osmotic cell volume excursion. A theoretical model was developed to predict the optimal operation protocols under the given experimental conditions (initial CPA concentration, cell density and total volume of cell suspension) and practical constraints. The detailed considerations for this procedure are described below.
Basic Assumptions and Formulation
The theoretical model of the dilution-filtration system was developed (as shown in Fig. 17) under the following assumptions: (1) both intra- and extra-cellular solutions in the cell suspension consist of water, a permeable CPA (e.g. glycerol) and an impermeable salt (e.g. NaCl); (2) the blood bag, hollow fibers and their connecting tubing are filled with cell suspension, and cells are uniformly distributed in the suspension; (3) the extracellular solution is diluted/filtrated immediately and evenly at the diluting/filtrating point as the cell suspension circulates in the system; (4) the suspension flow is one dimensional, and convection factors can be neglected.

Fig. 17. Theoretical modeling of the system. A: the overall system; B: a control volume.
Based on the assumptions, a governing equation for the mass transfer process can be derived by focusing on the extracellular solution:

∂φ_e/∂t = (1/A) ∂/∂x (A·D·∂φ_e/∂x) + S,   [21]

where A is the effective mass transfer area, D is the diffusion coefficient, φ_e is the extracellular solute concentration (in osmolality), and S is the source/sink term.
Source/Sink terms
The source/sink term can be derived by temporarily ignoring the diffusion term:

S = S_d + S_f + S_c,   [22]

where the subscripts 'd', 'f' and 'c' refer to the effects of dilution, filtration and cell membrane transport, respectively. According to assumption (2), the cell suspension inside the system can be equally divided into a finite number of control volumes (CVs), as shown in Fig. 17B. For each CV, the values of the terms on the right-hand sides of equations [21] and [22] can be determined as follows.
i. Dilution/filtration
According to assumption (3), when a CV passes the diluting point, its extracellular solution is diluted immediately; by the mixing mass balance, the diluted concentration is (Q_b·φ_e + Q_d·φ_d)/(Q_b + Q_d). Considering the pure filtration method used in the system, it is also assumed that ultrafiltration happens only at a certain location (the filtrating point, shown in Fig. 17A) and that the ultrafiltrate has the same composition as the extracellular solution; filtration therefore reduces the volume of the CV passing the filtrating point (in proportion to Q_f/Q_b) without changing its solute concentration, and the dilution/filtration terms are zero for CVs at all other locations. Here Q_f, Q_b and Q_d are the flow rates of ultrafiltrate, cell suspension and diluent, φ_d is the solute concentration in the diluent, s_e is the extracellular CPA concentration, V_s is the partial molar volume of the CPA, and V_CV is the volume of a CV. The volume change dV can be further determined from mass conservation, where n_c is the number of cells in a CV.
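The treatment of dilution and filtration above can be sketched as a circulating loop of CVs. Diffusion between CVs is neglected, the CV volume is held fixed (Q_f = Q_d), and the flow rates, loop volume and CV count below are illustrative values, not the chapter's:

```python
from collections import deque

def simulate(phi0, n_cv=60, Qb=200.0, Qd=20.0, minutes=30.0):
    """Mean extracellular CPA concentration after circulating the loop.

    Each CV passing the diluting point is mixed with CPA-free diluent
    (mixing balance: (Qb*phi + Qd*0) / (Qb + Qd)); the ultrafiltrate has
    the same composition as the extracellular solution, so filtration
    changes CV volume, not concentration, and is not tracked here.
    """
    V_cv = 300.0 / n_cv            # ml of suspension per CV (300 ml loop)
    phis = deque([phi0] * n_cv)
    dt = V_cv / Qb                 # minutes for the loop to advance one CV
    t = 0.0
    while t < minutes:
        cv = phis.popleft()                       # CV arriving at the diluting point
        phis.append((Qb * cv + Qd * 0.0) / (Qb + Qd))
        t += dt
    return sum(phis) / n_cv

print(round(simulate(6.28), 3))    # 6.28 Osmol/kg of glycerol falls below 1 in ~30 min
```

Because each full circulation multiplies the concentration by Q_b/(Q_b + Q_d), the decay under constant flow rates is nearly exponential, consistent with the constant-flow-rate behavior discussed later in the chapter.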
Numerical Simulation
With the finite volume method, a fully implicit control-volume integration of the governing equation results in a finite difference scheme in which a is the coefficient and K is the total number of CVs in the system. The subscript 'k' refers to the kth CV in the system and the superscript 'old' refers to the previous time level; Sc and Sp are the constant portion and the gradient of the linearized source term (S = Sc + Sp·φ_e). The subscripts 'k-1' and 'k+1' in equation (30) refer to the previous and next CVs along the x direction, respectively. Noting that the cell suspension flows circularly in the closed system, the 1st CV is followed by the Kth one.

Here, the removal of glycerol from cryopreserved human red blood cells (RBCs) is discussed as an instance. For ease of discussion, it is further assumed that the blood volume is kept constant, i.e. the ultrafiltrate flow rate is kept equal to the diluent flow rate, although the presented system and model can adapt to more complicated situations. Besides, the concentrations of NaCl in the diluent and in the thawed blood are considered to be isotonic (0.29 Osmol/kg·water), and V_iso denotes the isotonic volume of an RBC. Terming the CV at the diluting point (x=0) at t=0 as the 1st CV (CV_1), the initial location of each CV can be allocated.

To quantitatively evaluate the effect of an operation protocol, the maximum cell volume and the total time cost (to a final glycerol concentration below 10 g/L (Brecher, 2002)) of the removal process can be taken as criteria for the cell recovery rate and the removal efficiency, respectively. The optimal protocol can then be found by applying different operation parameters to the given experimental conditions and comparing the simulated results. Hereinafter, the diffusion coefficients of glycerol and NaCl in water were set to 5.43×10⁻¹⁰ m²/s and 14.41×10⁻¹⁰ m²/s, respectively (Ternstorm et al., 1996). The parameters of the dilution-filtration system and the RBC membrane are specified in Tables 4 and 5; these parameters may differ in various applications and systems.
Table 4. Structural parameters of the dilution-filtration system used in the calculation

Section / Inner volume / Effective area:
- From the outlet of the blood bag to the diluting point: 5 ml / 1.25×10⁻⁵ m²
- From the diluting point to the filtrating point: 5 ml / 1.25×10⁻⁵ m²
- From the filtrating point to the outlet of the hemofilter: 85 ml / 5×10⁻⁴ m²
- From the outlet of the hemofilter to the inlet of the blood bag: 5 ml / 1.25×10⁻⁵ m²
- Blood bag: variable / 5×10⁻³ m²

Table 5. Membrane parameters of human RBC used in the calculation (a: from the literature (Papanek, 1978))
- Surface area of RBC (A_c): 135×10⁻¹² m² (a)
- Hydraulic permeability of cell membrane (L_p): 1.74×10⁻¹² m/Pa/s (a)
- Isotonic volume of RBC (V_iso): 98.3×10⁻¹⁸ m³ (a)
- Solid volume of RBC (V_cb): 0.283×V_iso (a)
- Glycerol permeability of cell membrane (P_s): 6.61×10⁻⁸ m/s (a)
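The excursion these parameters imply can be illustrated with a minimal two-parameter (L_p, P_s) transport sketch for a single RBC transferred in one step from isotonic medium plus 1.0 M glycerol into plain isotonic medium. This is a didactic simplification (ideal dilute solutions, osmolality treated as osmolarity, explicit Euler integration, 1.0 M loading rather than the 40% w/v used experimentally), not the chapter's full CV model:

```python
# Table 5 parameters (SI units)
Lp   = 1.74e-12        # hydraulic permeability, m/Pa/s
Ps   = 6.61e-8         # glycerol permeability, m/s
A    = 135e-12         # membrane area, m^2
Viso = 98.3e-18        # isotonic cell volume, m^3
Vb   = 0.283 * Viso    # osmotically inactive (solid) volume, m^3
RT   = 8.314 * 295.0   # gas constant * temperature, Pa*m^3/osmol
salt = 290.0           # isotonic salt osmolality, osmol/m^3

# Initial state: cell equilibrated in isotonic salt + 1.0 M (1000 osmol/m^3) glycerol
Vw = Viso - Vb             # intracellular water volume, m^3
Ns = 1000.0 * Vw           # intracellular glycerol, osmol
Nsalt = salt * Vw          # intracellular salt, osmol (impermeant)

dt, vmax = 1e-3, 1.0
for _ in range(300000):                        # 300 s of simulated time
    Mi = (Nsalt + Ns) / Vw                     # intracellular osmolality
    si = Ns / Vw                               # intracellular glycerol concentration
    Vw += Lp * A * RT * (Mi - salt) * dt       # water enters while the inside is hypertonic
    Ns += Ps * A * (0.0 - si) * dt             # glycerol leaves down its own gradient
    vmax = max(vmax, (Vw + Vb) / Viso)

print(round(vmax, 2), round((Vw + Vb) / Viso, 2))
```

The cell swells far past its isotonic volume before the glycerol leaves and the volume relaxes back toward V_iso; capping exactly this transient (e.g. at 1.35×V_iso) is what the optimized dilution schedule is for.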
Experiments
Venous human blood was collected from healthy adult blood donors at the Red Cross Transfusion Center of Hefei. For each donor, up to 200 ml of whole blood was collected into CPDA-1 anticoagulant solution in a PVC plastic bag and stored for up to 24 hours at 4°C. It was then centrifuged at 1615×g for 4 minutes, and the platelets, leukocytes and plasma were removed to produce a hematocrit of 75±5 percent.
Each RBC suspension was transferred into a 400-ml plastic bag and glycerolized with 57.1% w/v glycerol solution at a volume ratio of 2:1 (glycerol to blood) to achieve a final glycerol concentration of about 40% (w/v) and a hematocrit of 25%-30%. Subsequently the blood bag was covered with a PE foam sheet (thickness: 5 mm) and placed into a metal box (size: 200 mm×150 mm×20 mm). After 30 minutes of equilibration, the metal box was transferred to a -80°C freezer (MDF-U52V, SANYO, Japan) and the RBC suspension was frozen gradually. After cryopreservation in the freezer for 2-7 days, the RBC suspension was taken out and thawed in a 37°C water bath for about 10 minutes with gentle agitation.
Each unit of the thawed blood was deglycerolized with the dilution-filtration system shown in Fig. 16, and the operation protocol was theoretically optimized. A typical experimental condition (V_b0 = 200 ml, h_0 = 30%, M_s0 = 6.28 Osmol/kg·water) was studied first to reveal the general behavior, and different protocols were then applied to evaluate the effect of each operation parameter. Fig. 19 shows that the time cost is significantly reduced but the maximum cell volume grows as the diluent flow rate increases, i.e. the washing efficiency can be improved by applying a higher diluent flow rate, but more hemolysis may be induced. Thus the diluent flow rate has to be carefully selected to achieve the optimal result. Comparatively, the effect of the blood flow rate is not so complicated: increasing the blood flow rate has little effect on glycerol clearance but helps to reduce the maximum cell volume excursion. On the other hand, the effect of the operation parameters is also highly related to the blood conditions, especially the glycerol concentration. As shown in Fig. 20, the same operation protocol (Q_b = 200 ml/min and Q_d = 20 ml/min) was applied to several different conditions, in which V_b0 = 200 ml, h_0 = 30%, and M_s0 varies from 0.56 Osmol/kg·water (5% w/v) to 6.28 Osmol/kg·water (40% w/v). When the glycerol concentration decreases, both the glycerol clearance and the maximum cell volume are reduced (glycerol clearance is defined here as the difference between the initial and final numbers of osmoles of glycerol in the blood divided by the time cost). This indicates that, as the glycerol concentration drops during washing, the diluent flow rate can be continuously increased to speed up the process without inducing extra cell volume excursion.
Based on the analysis above, it can be concluded that to achieve optimal deglycerolization it is important to: a) use a low diluent flow rate at first and stepwise increase it as the CPA concentration drops; and b) always use a high blood flow rate. The detailed operation parameters of the optimal protocol can be determined from the theoretical model under practical constraints. During the in-vitro experiments, the operation protocol for each unit was optimized theoretically according to the specific experimental conditions as well as the following constraints: maximum cell volume, 1.35 times the isotonic volume (V_iso) of RBCs; maximum flow rate of the pumps, 200 ml/min; and maximum ultrafiltrate flow rate of the hemofilter, 40 ml/min. The upper cell volume limit was conservatively selected to achieve the best cell recovery rate, although the washing efficiency may be limited as a result.
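The benefit of rule (a) can be illustrated with a crude well-mixed approximation: at constant suspension volume (ultrafiltrate rate equal to diluent rate) and glycerol-free diluent, the concentration decays as φ(t) = φ_0·exp(-Q_d·t/V), so the time to reach a target can be compared between a constant low Q_d and a schedule that raises Q_d once the concentration has fallen. All the numbers below are illustrative rather than the optimized protocol, and the cell-volume constraint (which is what forces the low initial rate) is not modeled:

```python
import math

V = 300.0                     # suspension volume, ml (illustrative)
phi0, target = 400.0, 10.0    # glycerol, g/L: starting value and final target

# Constant low diluent flow rate (20 ml/min) throughout:
t_const = (V / 20.0) * math.log(phi0 / target)

# Stepwise schedule: 20 ml/min until 100 g/L, then 40 ml/min to the target:
t_step = (V / 20.0) * math.log(phi0 / 100.0) + (V / 40.0) * math.log(100.0 / target)

print(round(t_const, 1), round(t_step, 1))   # 55.3 vs 38.1 minutes
```

Raising the diluent flow rate once the concentration (and hence the per-dilution osmotic step) has fallen shortens the run substantially, which is the quantitative content of rule (a).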
Samples were taken before and after deglycerolization. Cell count and hematocrit were measured with a hematology analyzer (Ac•T diff II TM, Beckman COULTER®). The Freeze-Thaw-Wash (FTW) cell count recovery rates were calculated by comparing the total cell counts after thawing to those after washing (Valeri et al., 2001). The residual glycerol concentration in the washed blood was measured with a glycerol assay kit (K-GCROL, Megazyme®) and a spectrophotometer (756MC UV-VIS, Scientific Instrument®, Shanghai, China).
Results
A total of ten units of blood were cryopreserved and deglycerolized by the dilution-filtration method, and the results are shown in Table 6. The residual glycerol concentration (5.57±2.81 g/L, n=10) is clearly lower than the standard value (10 g/L) indicated by the American Association of Blood Banks (AABB). During the optimization of the operation procedures, the maximum cell volume constraint was strictly applied (1.35×V_iso) for the best cell recovery, and thus the deglycerolizing efficiency was limited. Nevertheless, each unit was processed within an hour, which is similar to the automatic centrifuging method (Valeri et al., 2001). The cell count recovery rate was 91.19±3.57% (n=10). Compared to the reported methods (diafiltration method: 70% (Castino et al., 1996); dialysis method: no in-vitro data presented (Ding et al., 2007, 2010); manual centrifuging method: >80% (Brecher, 2002); and automatic centrifuging method: 89.4±3.0% (Valeri et al., 2001)), this recovery rate indicates an obvious advantage of our method in cell safety.
Table 6. Per-unit deglycerolization results for the ten cryopreserved blood units (including thawed blood volume, ml).
Discussion
An optimized method for the addition and removal of glycerol from cryopreserved human spermatozoa has been illustrated as an example. Although the mechanism(s) of osmotic injury during cryopreservation is not clearly understood, the hypothesis has been tested and confirmed that human sperm volume excursion can be used as an indicator to predict possible osmotic injury to spermatozoa during glycerol addition and removal. Hence, the procedures used for testing the hypothesis provide a methodology to predict optimal protocols for cryoprotective agent addition/removal.
The FVS, multi-step procedure for the addition of glycerol to human spermatozoa before cryopreservation is a conventional, commonly used technique, i.e. 'drop by drop' (stepwise) addition of a solution with a relatively high glycerol concentration (the volume of each 'drop' being roughly constant) to the spermatozoa or sperm suspension in order to achieve a 0.6-1.0 M glycerol concentration in the final sperm suspension. In practice, frozen-thawed sperm samples containing glycerol are either washed for intrauterine insemination or for in-vitro fertilization, or directly transferred into the lower female reproductive tract for artificial insemination (e.g. intracervical insemination). In both cases, the glycerol is abruptly removed from the spermatozoa by direct exposure to near isotonic conditions. In the example, it was predicted by computer simulation, and confirmed experimentally, that a one-step removal of glycerol would cause a high frequency of sperm motility loss even without freezing. Based on the results, the FMS removal (≥8 steps) of 1.0 M glycerol is recommended. Within the scope of the present investigation, a four-step FMS addition of glycerol to spermatozoa to achieve a final 1.0 M glycerol concentration and an eight-step FMS removal of 1.0 M glycerol from spermatozoa were predicted and shown to be acceptable procedures which minimize osmotic injury. From the calculations, the minimum or maximum cell volumes after each step of FVS addition or removal were shown to be unequal, some of which may exceed the lower or upper volume limits of the cells. In contrast, the minimum or maximum cell volumes after each step of FMS addition or removal of glycerol were shown to be relatively even (Figures 12 and 13). For a fixed number of steps, the minimum or maximum cell volume excursion during glycerol addition or removal using the FMS approach is much smaller than that using the FVS approach (see Figures 12 and 13).
In the example, it was postulated that sperm osmotic injury as a function of cell volume excursion must be determined in order to predict the optimal glycerol addition and removal procedures. However, the definition and determination of 'sperm injury' depends upon the assays used. In the example, sperm motility was used as the standard of sperm viability because of its relatively high sensitivity to osmotic changes and because motility is required for sperm function. If sperm membrane integrity were chosen as the endpoint for evaluating sperm viability, as shown in Figure 7, different osmotic tolerance limits would be obtained. One can readily repeat the same procedures to predict the extent of spermolysis caused by the different glycerol addition/removal procedures used in the example, based on the information provided in Figure 5. For example, it was found (Figure 7) that >85% of spermatozoa maintained membrane integrity when they were returned to isotonic conditions after having been exposed to anisosmotic conditions ranging from 90 to 700 mOsmol. The corresponding sperm volume excursion range was 0.7-2.1 times the isotonic sperm volume (Figure 9). From Figures 12 and 13, it can be seen that a one-step addition and a one-step removal of 1.0 M glycerol would result in a minimum relative sperm volume of 0.72 and a maximum of 1.68 respectively, which do not exceed the sperm volume excursion range (0.7-2.1 times the relative volume) for maintaining >85% sperm membrane integrity.
Based on this information, one can predict that the majority (>85%) of spermatozoa would maintain membrane integrity even using one-step addition and one-step removal of glycerol.
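The equilibrium volumes quoted above follow the Boyle-van't Hoff relation for non-permeating solutes. The sketch below is illustrative only: the osmotically inactive volume fraction (about 0.49) is an assumed value chosen so that the 90-700 mOsmol range maps onto the 0.7-2.1 relative-volume range, not a parameter reported in the chapter.

```python
def equilibrium_relative_volume(osmolality, iso=286.0, vb=0.49):
    """Boyle-van't Hoff relation: equilibrium cell volume (relative to the
    isotonic volume) after equilibration with a non-permeating solute.
    vb is the osmotically inactive volume fraction (assumed, illustrative)."""
    return vb + (1.0 - vb) * (iso / osmolality)

for mosm in (90.0, 286.0, 700.0):
    print(mosm, round(equilibrium_relative_volume(mosm), 2))  # 2.11, 1.0, 0.7
```

With these assumed parameters, the one-step glycerol excursions of 0.72 (minimum) and 1.68 (maximum) fall inside the tolerated 0.7-2.1 band, consistent with the >85% membrane-integrity prediction.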
A dilution-filtration system for removing CPAs from cryopreserved cell suspensions was also introduced here. The system realizes continuous processing of the cell suspension, with dilution and filtration conducted simultaneously, so it can achieve much better efficiency than traditional multi-step centrifugation methods. Moreover, dilution in the system is applied to the cell suspension flowing in the tubing rather than to the whole suspension in a container, so mixing is much faster and the osmotic disequilibrium during dilution can be significantly reduced.
A theoretical model was established to simulate the specific process. Based on the model, the cell volume excursion and the variation of CPA concentration during the dilution-filtration process can be simulated. Theoretical analysis indicates that the operation parameters, especially the flow rate of the diluent, are critical for the dilution-filtration method. In previous studies concerning removing CPAs with hollow fibers (Castino et al., 1996; Arnaud et al., 2003; Ding et al., 2007, 2010), only protocols with constant flow rates were discussed. However, it was found to be difficult to balance the requirements of removal efficiency and cell safety. This problem also exists in the presented dilution-filtration method. Removal efficiency can be improved by using a higher diluent flow rate, but the cell recovery rate may be seriously reduced in this way. Besides, when using a constant diluent flow rate, the profile of glycerol concentration is nearly exponential, i.e., the removal efficiency starts at its highest value but gradually decreases as the process goes on. However, when using a stepwise increased diluent flow rate, the removal efficiency can be maintained at a high level for quite a long period. Moreover, theoretical analysis also indicates that stepwise increasing of the diluent flow rate may not cause any extra cell damage. Therefore, a stepwise increased diluent flow rate is necessary to achieve both high cell recovery rates and efficient glycerol clearance when using the dilution-filtration system. In addition, the theoretical analysis also shows that the removal effect of an operation protocol is highly related to the initial volumes and cell densities of the cell suspensions. Therefore, the optimal operation protocols should be specialized and vary from case to case. The theoretical model provides an effective tool to find the optimal protocols for given applications.
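The contrast between a constant and a stepwise-increased diluent flow rate can be shown with a minimal well-mixed washout model. All volumes and flow rates below are illustrative assumptions, not the operating parameters used in the experiments; the suspension volume is held constant (diluent inflow balanced by ultrafiltrate outflow).

```python
def washout(flow_profile, c0=1.0, volume=500.0, dt=0.1, t_end=60.0):
    """Explicit-Euler washout of a CPA from a well-mixed suspension of
    fixed volume. flow_profile(t) returns the diluent flow rate (ml/min)."""
    c, trace = c0, []
    for i in range(int(t_end / dt)):
        t = i * dt
        q = flow_profile(t)
        c *= 1.0 - (q / volume) * dt  # dilution by the incoming diluent
        trace.append((t + dt, c))
    return trace

constant = washout(lambda t: 25.0)                        # fixed 25 ml/min
stepwise = washout(lambda t: 10.0 + 10.0 * int(t // 15))  # 10/20/30/40 ml/min
print(round(constant[-1][1], 3), round(stepwise[-1][1], 3))
```

Both profiles use the same total diluent volume and reach a similar final concentration, but the stepwise profile starts gently (less early osmotic stress, when the concentration gradient is largest) while keeping the removal rate high late in the run.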
The system was also investigated experimentally with deglycerolization of cryopreserved blood, and the operation procedures were optimized based on the theoretical model. The results clearly indicate that the dilution-filtration method is safe and efficient for deglycerolization of cryopreserved RBCs. Compared to the automatic centrifugation method, the cell recovery rate and removal efficiency are similar, but the equipment cost of the dilution-filtration system is much lower, so it can be applied in more areas. We also believe that, with properly selected operation parameters, this system can be applied to various other CPA removal applications. In addition, all the media are processed in a closed system, so the system should have further advantages in avoiding contamination. The cells are expected to have a long shelf life after washing. These suppositions will be verified by further experiments.
Fig. 1. Cell volume excursion during addition and removal of CPAs
Fig. 2. Multi-step method for addition and removal of a CPA
Fig. 6. A comparison of human sperm motility (% mean±SEM, n=8) after a 5 min exposure to the various hypo- and hyperosmotic solutions of non-permeating solutes before (○) and after ( ) the return to near-isotonic conditions (273-343 mOsmol).
Fig. 8. (A) Calculated relative sperm volume (normalized to an isotonic sperm volume of 1) as a function of time after spermatozoa were exposed in one step to different hypo-osmotic solutions containing non-permeating solutes. (B) Calculated relative sperm volume (normalized to the isotonic sperm volume of 1) as a function of time after isotonic spermatozoa were exposed in one step to different hyperosmotic solutions containing non-permeating solutes.
Fig. 9. Calculated relative sperm volume (normalized to the isotonic sperm volume of 1) after spermatozoa were osmotically equilibrated to different anisosmotic conditions.
Fig. 11. (A) Calculated relative sperm volume (normalized to the isotonic sperm volume of 1) as a function of time after isotonic spermatozoa were exposed to different hyperosmotic glycerol solutions isotonic with respect to non-permeating solutes (salt). (B) Calculated relative sperm volume (normalized to the isotonic sperm volume of 1) as a function of time after spermatozoa, which had been pre-equilibrated with different hyperosmotic glycerol solutions isotonic with respect to non-permeating solutes (salt), were exposed in one step to isotonic (286 mOsmol) saline solution without glycerol.
Fig. 12. (left) Calculated relative sperm volume (normalized to the isotonic sperm volume of 1) as a function of time after 1 M glycerol was added to spermatozoa in either one step or four fixed-molarity steps. (right) Calculated relative sperm volume (normalized to the isotonic sperm volume of 1) as a function of time after 1 M glycerol was added to spermatozoa in either one step or four fixed-volume steps. The estimates of percent motility recovery as a function of sperm relative volume were obtained from Figure 8 and are indicated in the diagrams.
Fig. 14. Calculated relative sperm volume (normalized to the isotonic sperm volume of 1) as a function of time after 1 M glycerol was removed from spermatozoa in four, six and eight fixed-molarity steps. The dotted lines in this figure indicate the upper volume limit, 1.1, below which >95% of spermatozoa can maintain motility. The four- or six-step dilution results in a cell volume excursion causing >5% motility loss.
Prevention of Lethal Osmotic Injury to Cells During Addition and Removal of Cryoprotective Agents: Theory and Technology
removal of glycerol (Table
Fig. 16. Principle of the dilution-filtration system. Cell suspension is diluted and ultrafiltered while circulating in the system, so that the CPAs inside can be continuously removed.
ii. Transportation across the cell membrane. For the ternary system considered in the present example, the mass transport across the cell membrane can be described by the two-parameter formalism [2,3]. The total cell volume is the sum of the water, CPA and cell-solid volumes. In this manner, the basic variables for a simulation consist of the experimental conditions (including the initial blood volume and the initial composition of the extra-/intracellular solution) as well as the operation parameters (including the flow rates of blood (Q_b) and diluent (Q_d)). The initial values of the other parameters in the model, for each CV, can be calculated according to equations [21]-[31]. By alternately calculating the source terms and solving the linearized governing equation, the concentration variation of the extra-/intracellular solution as well as the corresponding cell volume excursion can be simulated. A typical process is shown in Fig. 18, in which the initial glycerol concentration is approximately 40% w/v, Q_b = 200 ml/min, and Q_d = 25 ml/min.
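The two-parameter formalism referred to above couples a water flux driven by the total osmolality difference with a CPA flux driven by the CPA concentration difference. The sketch below uses arbitrary units and assumed permeability values (not those fitted in the chapter) simply to reproduce the characteristic shrink-swell response to one-step CPA exposure.

```python
def two_parameter_step(vw, ns, m_ext_salt, m_ext_cpa, dt,
                       lp=0.1, ps=0.02, n_salt=0.3):
    """One explicit-Euler step of the two-parameter transport formalism.
    vw: intracellular water volume; ns: intracellular CPA amount.
    Water flux ~ Lp * (external - internal osmolality);
    CPA flux ~ Ps * (external - internal CPA concentration).
    All parameters are in arbitrary, illustrative units."""
    m_int = (n_salt + ns) / vw                 # internal osmolality
    dvw = -lp * ((m_ext_salt + m_ext_cpa) - m_int) * dt
    dns = ps * (m_ext_cpa - ns / vw) * dt
    return vw + dvw, ns + dns

vw, ns, volumes = 1.0, 0.0, []
for _ in range(50_000):                        # one-step exposure to the CPA
    vw, ns = two_parameter_step(vw, ns, m_ext_salt=0.3, m_ext_cpa=1.0, dt=0.01)
    volumes.append(vw)
print(round(min(volumes), 2), round(volumes[-1], 2))  # shrink, then reswell
```

The cell first shrinks (water leaves faster than the CPA enters) and then reswells toward its isotonic water volume as the CPA equilibrates, which is the behaviour shown in the shrink-swell curves of the preceding figures.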
Fig. 18. Simulated glycerol concentration variation and cell volume excursion in CV 1 (initially at the diluting point) during a dilution-filtration process.
Fig. 19. Variations of time cost (solid line, left Y-axis) and maximum cell volume (dashed line, right Y-axis) with blood or diluent flow rates as parameters.
Fig. 20. Variations of glycerol clearance (solid line, left Y-axis) and maximum cell volume (dashed line, right Y-axis) with glycerol concentration as a parameter.
Glycerol permeability coefficient (P s )
Table 6. In-vitro experiments of deglycerolization with the dilution-filtration method
Patent Foramen Ovale Closure for Treating Migraine: A Meta-Analysis
Background Observational studies have shown percutaneous patent foramen ovale (PFO) closure to be a safe means of reducing the frequency and duration of migraine. Objective This study evaluated the efficacy and safety of PFO closure in patients with migraine using evidence-based medicine. Methods The Pubmed (MEDLINE), Embase, and Cochrane Library databases were searched for randomized controlled trials (RCTs), cohort studies, and retrospective case series from January 1, 2001, to February 30, 2021. The Jadad scale and R 4.1.0 software were used to assess the quality of the literature and meta-analysis, respectively. Results In total, three randomized controlled trials, one pooled study, and eight retrospective case series including 1,165 participants were included in the meta-analysis. Compared with control intervention in migraine, PFO closure could significantly reduce headache frequency (OR = 1.5698, 95% CI: 1.0465–2.3548, p=0.0293) and monthly migraine attacks and monthly migraine days (OR = 0.2594, 95% CI: 0.0790–0.4398, p=0.0048). Subgroup analysis of patients who all completed PFO surgery showed resolution of migraine headache for migraines with aura (OR = 1.5856, 95% CI: 1.0665–2.3575, p=0.0227). Conclusions Treatment with PFO closure could reduce the frequency of headaches and monthly migraine days and is an efficient treatment for migraine attacks with aura.
Introduction
Migraine is a common chronic neurovascular disorder characterized by self-limited, recurrent moderate-to-severe headaches associated with autonomic symptoms that affects 12% of the population [1,2]. The patent foramen ovale (PFO) is present in 20-25% of the adult population but in 30-50% of those who have migraine with aura [3,4]. Multiple studies conducted in the past have shown that migraine, especially migraine with aura, is significantly related to the PFO [5][6][7]. Several studies have found that the incidence of PFO in migraine patients is 30-40% and is as high as 48-70% in migraine patients with aura (more than twice that of the normal population) [4][5][6][7]. The pathogenesis of migraine in patients with PFO remains unclear. It may involve paradoxical embolism of venous microthrombi [6]. Chemical substances, such as serotonin, are not cleared via the pulmonary circulation, which triggers migraine. Multiple studies have reported improvement in migraine symptoms after transcatheter PFO closure [6]. The correlation between the PFO and migraine was originally reported in a case-control study conducted by Del Sette et al. in 1998 [8]. A meta-analysis conducted by Schwedt et al. in 2008 showed that the prevalence of PFO in patients with migraine ranged from 39.8 to 72%, and the prevalence of migraine in subjects with PFO also fluctuated between 22.3% and 64.3% [1]. To date, most single-center observations have shown that PFO closure can effectively prevent migraine attacks, while three large randomized controlled trials (RCTs), MIST, PRIMA, and PREMIUM, have all reported negative results [9][10][11]. However, the latest study by Mohammad demonstrates that PFO closure was safe and significantly reduced the mean number of monthly migraine days and attacks, resulting in a greater number of subjects who experienced complete migraine cessation [12]. Therefore, the therapeutic effects of this surgical procedure remain controversial.
Considering these inconsistent effects, we performed a systematic review and meta-analysis to revisit the utility and safety of PFO closure in migraine with and without aura.
Inclusion and Exclusion Criteria.
The inclusion criteria were as follows: (a) type of study: RCTs, cohort studies, and case-control studies; (b) language restriction: English; (c) participating patients: patients with migraine; (d) intervention: PFO closure, placebo, or usual care; and (e) outcome: resolution of migraine headache. The exclusion criteria were as follows: (a) types of study: case reviews, case reports, meta-analyses, and reviews; (b) high rate of missed visits or follow-up time not in accordance with the study design; and (c) studies from which OR values could not be extracted.
Information Sources and Search Strategy.
Predesigned literature retrieval strategies were used according to the PRISMA guidelines [11], using "PFO closure," "migraine," and "patent foramen ovale closure" as search terms. Computer retrieval of the relevant literature on treatment of migraine with patent foramen ovale closure was performed using databases such as the National Library of Medicine Biomedical Information Retrieval System (PubMed), the Dutch Medical Abstracts (EMBASE/SCOPUS), and the Cochrane Library, and the references of the included studies were checked to supplement possible omissions of related clinical studies. The retrieval time was from January 1, 2001, to February 30, 2021.
Study Selection and Data Collection.
Two reviewers independently evaluated the study records from the reference list and electronic database based on the aforementioned eligibility criteria. Differences were resolved through discussion or by a third evaluator. If data were missing from the literature, the authors were contacted as often as possible to obtain relevant information. After determining the studies to be included in the meta-analysis, two reviewers independently and in duplicate extracted information from each included trial according to our protocol. e extracted variables included the study type, sample size, age, and migraine headache resolution rate. Baseline data obtained after rigorous selection and assessment of the literature by the two reviewers are summarized in Table 1.
Quality Evaluation of the Literature.
The Jadad scale was used to evaluate the quality of the literature [20]. (1) Method of generating a random grouping sequence: 2 points for using a computer-generated random grouping sequence or a random number table; 1 point when random assignment is mentioned in the trial but the method is not given in the paper; and 0 points for a semirandomized or quasirandomized trial, which refers to allocating cases by alternation or by features such as admission order or date of birth. (2) Concealment of randomization: 2 points for allocation schemes controlled by a medical center or pharmacy, use of numbered containers, on-site computer control, sealed opaque envelopes, or other methods that make it impossible for clinicians or subjects to predict the allocation sequence; 1 point for only indicating the use of random number tables or other random allocation schemes; and 0 points for alternating allocation, serial numbers, serially coded envelopes, or any measure that does not prevent predictability of grouping or does not use allocation concealment. (3) Double-blind method: 2 points for describing the specific method used to implement double blinding where it is considered appropriate, for example, a completely identical placebo; 1 point when a double-blind method is mentioned but the method described is inappropriate; and 0 points for no reference to blinding. (4) Withdrawals and losses to follow-up: 1 point for mentioning and describing in detail the number of and reasons for patients who withdrew and the number of cases lost to follow-up; 0 points for no mention of withdrawals or losses to follow-up. With a highest possible score of 7, a score ≥4 indicated high quality and a score <4 indicated low quality.
Because few high-quality clinical studies were available, literature with a Jadad scale score ≥3 was included.
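The Jadad tally described above can be summarized in a small helper function; this is a sketch of the scoring rules, not the reviewers' actual tooling.

```python
def jadad_score(randomization, concealment, blinding, withdrawals_described):
    """Sum the four Jadad items: randomization, allocation concealment and
    blinding are each scored 0-2; describing withdrawals/losses adds 1."""
    for item in (randomization, concealment, blinding):
        if item not in (0, 1, 2):
            raise ValueError("the first three items must be scored 0, 1 or 2")
    score = randomization + concealment + blinding
    score += 1 if withdrawals_described else 0
    return score, "high quality" if score >= 4 else "low quality"

print(jadad_score(2, 1, 0, True))  # → (4, 'high quality')
```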
Methodology for Statistical Analysis.
All statistical analyses were performed using R 4.1.0. For continuous variables, we calculated the standardized mean difference (SMD); for count variables, we calculated pooled odds ratios (ORs) using the Mantel-Haenszel method. The test level α was set to 0.05. Statistical heterogeneity was evaluated by the I² statistic. The fixed-effects model was used for comparisons with I² < 50%, and the random-effects model was applied for comparisons with I² ≥ 50%. Sensitivity analysis was used to evaluate the stability of the meta-analysis results by interconverting between the fixed-effects model and the random-effects model and by exchanging statistics: recalculating the 95% CI, converting the OR to a risk ratio (RR), and transforming the SMD to a mean difference (MD). Egger's test was used to assess potential publication bias in the included literature, with p > 0.05 indicating the absence of publication bias [21,22].
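The pooled-OR and heterogeneity computations described above can be sketched as follows. The 2x2 tables are hypothetical, and the zero-cell continuity corrections used in practice are omitted for brevity; this illustrates Mantel-Haenszel fixed-effect pooling and the I² statistic, not the R code used in the study.

```python
import math

def mantel_haenszel_or(tables):
    """Fixed-effect pooled odds ratio (Mantel-Haenszel). Each table is
    (a, b, c, d): events/non-events in the treated group, then the control."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

def i_squared(tables):
    """Cochran's Q on the log-ORs with inverse-variance weights, then
    I^2 = max(0, (Q - df) / Q) * 100."""
    logs = [math.log((a * d) / (b * c)) for a, b, c, d in tables]
    weights = [1.0 / (1 / a + 1 / b + 1 / c + 1 / d) for a, b, c, d in tables]
    pooled = sum(w * y for w, y in zip(weights, logs)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, logs))
    df = len(tables) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

# Hypothetical tables: (responders, non-responders) for closure vs. control
tables = [(30, 70, 20, 80), (25, 75, 15, 85)]
print(round(mantel_haenszel_or(tables), 2), round(i_squared(tables), 1))  # → 1.79 0.0
```

When I² ≥ 50%, a random-effects model (e.g. DerSimonian-Laird) would replace the fixed-effect pooling, mirroring the rule stated above.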
Safety and Adverse Events.
Four studies included 484 patients, with 28 serious adverse events in the PFO closure group [9][10][11], including 3 device-related events (transient atrial fibrillation, general fatigue, and syncope), 13 implant procedure-related events (access-site bleeding, retroperitoneal hematoma, arm phlebitis from an intravenous line, groin hematoma and pain, transient hypotension, tachycardia, and a vasovagal episode), and 12 unrelated events (muscle wasting, site bleeding, anemia, and nosebleed). All adverse events resolved without sequelae. During follow-up of patients with a device for at least 1 year, no device-related side effects were observed.
Sensitivity Analysis and Risk of Bias in Included Studies.
In the sensitivity analysis, we conducted a two-part sensitivity test on our results. The results were consistent, indicating that the results of the meta-analysis were stable (Table 2). The results of the sensitivity analyses are shown in Table 2. There was no publication bias (p = 0.7108 > 0.05) in the included studies according to Egger's test, which means that the influence of publication bias on the results could be ignored (Figures 6 and 7).
Discussion
Migraine is a common disease worldwide, affecting around 12% of the general population [6]. Despite various migraine-prevention interventions, migraines cause a significant burden to the affected patients. Estimates indicate that migraine is the sixth highest cause of years lost due to disability worldwide [19][20][21][22][23][24][25]. PFO has a close relationship with migraine, and previous studies have shown that treating PFO can reduce migraine pain [3][4][5][6]. At present, PFO occlusion is a very mature procedure. However, research on the treatment of migraine with PFO occlusion is still limited, and the use of PFO occlusion to relieve migraine remains controversial [9,10,26]. Studies have shown that the migraine relief rate after PFO closure is as high as 50-80% [27]. In the meta-analysis of the four randomized controlled clinical trials including 484 patients, we evaluated the effect of PFO closure on patients with migraine refractory to multiple medications. Our primary outcome of reduction in monthly migraine attacks and complete resolution of migraine headache was higher in the PFO closure group compared with that in the control group. Similarly, reduction in monthly migraine days was significantly better in the PFO closure group. This study did not find a significant effect on complete resolution of migraine headache, probably because of the small number of studies and the small number of patients with complete headache relief.
Subgroup analysis of migraine patients who had undergone PFO surgery found that patients with migraines with aura, in particular those with frequent aura, had a significantly greater reduction in migraine days and a higher incidence of complete migraine cessation following PFO closure. In patients with migraines without aura, PFO closure did not significantly reduce migraine days or improve complete headache cessation. The presence of aura can thus be a predictor of improved migraine symptoms after PFO closure. However, some studies have shown that some patients without aura do respond to PFO closure, with a statistically significant reduction of migraine attacks. However, in some patients, the frequency of migraine attacks increases within 4 weeks after the PFO closure, and the symptoms do not decrease until a few weeks later. It is speculated that the reason for this could be that the occluder activates the endothelial cells of the left heart, thereby activating platelets, which could lead to an increase in the concentration of serotonin in the vein. If serotonin is indeed the triggering substance for certain patients with migraines, then preventive-dose antiplatelet therapy, such as aspirin and clopidogrel, could theoretically reduce migraine attacks. The mechanisms by which PFO is involved in the occurrence of migraine include the following [21,22,[28][29][30][31]: (1) The theory of abnormal thromboembolism: under normal circumstances, tiny venous blood clots or platelet aggregates are filtered through the pulmonary circulation. However, when a PFO is present, these tiny emboli bypass the pulmonary circulation and directly enter the arteries, causing a short-term occlusion of the arteries, leading to hypoperfusion in the arterial blood supply area and triggering a migraine.
This hypothesis can explain why antiplatelet drugs and anticoagulant drugs can reduce migraine attacks to a certain extent. However, some studies have found that the proportion of patients with visual aura and homocysteinemia among patients with both PFO and migraine is significantly higher, which may mean that not all patients with PFO have microembolisms. (2) The theory of vasoactive substances: vasoactive substances (5-hydroxytryptamine, calcitonin gene-related peptide, etc.) can mediate the transmission of central pain signals and participate in the mechanism of migraines. Under normal circumstances, these vasoactive substances are inactivated by monoamine oxidase in the pulmonary capillaries and do not enter the arterial blood. However, the PFO allows these vasoactive substances to bypass the lungs and escape directly to the systemic circulation, thereby entering the cerebral circulation in high concentrations and acting on the trigeminal ganglion cells, participating in the dural neurogenic inflammatory response, and thereby inducing a migraine.
(3) Other mechanisms: some studies have found that an atrial shunt conforms to autosomal dominant inheritance, and the inheritance of migraine with aura in some families is similar to that of an atrial shunt. Studies have also found that the greater the degree of PFO shunt in patients with migraines, the more obvious the impairment of cerebral blood flow autoregulation. Therefore, impaired dynamic cerebral blood flow regulation may play a role in the connection between PFO and migraine.
In the studies of the treatment of migraine with PFO closure [9,10,26], six adverse events occurred in both the PREMIUM and PRIMA trials, including transient atrial fibrillation, syncope, hematoma, and phlebitis; 16 adverse events occurred in the MIST trial; and nine procedure-related adverse events and four device-related adverse events occurred in the Mohammad trial. These may be related to the use of occluders in surgery. Although PFO occlusion may cause complications such as arrhythmia, phlebitis, retroperitoneal hemorrhage, aortic erosion, and occluder thrombosis, the incidence is low, and most of them are transient and recoverable complications. Routinely administering antiplatelet drugs after occlusion surgery to prevent device-induced thrombosis can further reduce the risk of long-term stroke; hence, the occlusion is relatively safe. The present study had several limitations. First, most of the included studies were retrospective, and there were only four randomized controlled trials, which might have limited the power of our analysis to measure significant differences in outcomes. Similarly, recall bias cannot be excluded. Second, the postsurgical therapy and the protocol for assessing the outcomes differed among the studies. Third, the surgical procedures used several different devices. Fourth, the abovementioned studies could be affected by the patient's recall bias regarding the degree of headache, the placebo effect of the operation, and the antiplatelet drugs used in the perioperative period, which could also lead to a certain degree of bias. Finally, the baseline data on sex and age were not recorded in the four randomized controlled trials. Despite our attempts to contact the studies' authors, we could not obtain some data which would have enriched our analysis.
Conclusions
PFO closure was safe and significantly reduced the mean number of monthly migraine days and monthly migraine attacks, and the treatment was efficient for migraine attacks with aura. The results of this meta-analysis warrant a reevaluation of PFO closure in treating episodic migraine, especially for migraine with frequent aura.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare no conflicts of interest.
Authors' Contributions
YZ and HJW collected data and wrote the article, and LL collected data and reviewed articles. All authors have read and approved the manuscript. YZ and HJW contributed equally to this article.
Construction of the renewable energy eco-system: strategic distributions along the chain of “photovoltaics-energy storage-electric vehicles”
The “photovoltaics (PV)-energy storage system-electric vehicles (EV)” industry is taken as an instance in this paper to depict the blueprint of the renewable energy eco-system: (1) As the headstream of the whole industry chain, clean energy sources are discussed first. Taking full advantage of the low lifetime cost of photovoltaics, the highway PV plant is expected to be a promising, cheap power supply for new energy vehicles, and to contribute to reduced traditional energy consumption and carbon emission. (2) Two-level energy storage systems, including power-grid energy storage and customer-side energy storage, would be established by developing novel energy-storage devices, high-capacity materials for EV batteries and standardization of battery manufacturing. Also, with the aid of the “internet of things (IOT)”, a network of monitoring, maintenance, recycling and echelon utilization will be established to cover the EV batteries’ whole life. (3) As for new energy vehicles, novel manufacturing standards will be formulated, including modular design and production. Instead of integrated battery modules, a battery-exchanging mode will be applied to vehicle batteries, accompanied by the construction of a gas-station-like battery supply network, which provides power access and battery charging/exchanging/maintenance. In addition, it is also briefly described that hydrogen evolves from water electrolysis utilizing surplus photovoltaic and wind power and serves as a significant supplement to the present electricity-dominant renewable energy industry. An ecology is hopefully formed through synergetically developing renewable energy supply, storage and applications in the following years. Thereafter, energy consumption will be thoroughly changed via the construction of a fossil-fuel-independent new energy eco-system, supported mainly by renewable energy sources.
Introduction
Along with the advance of human civilization, science and technology, the world energy supply has undergone two revolutions, with coal substituting for fuelwood and then oil for coal, and is now on the way from fossil fuels to renewable energy sources, driven by the energy crisis and environmental pollution. However, new challenges emerge at the same time as the expansion of the new energy industry, especially in the EV industry.
With the aid of policies [1], capital and market demand, the basic framework of the new energy industry has already been constructed in the field of EVs, consisting of three sections: renewable energy supply, energy storage and renewable energy application. Nevertheless, a well-established new energy eco-system as well as a more complete layout is urgently required in EV industries to break the constraints of synergetic development issues, such as the high cost of electricity supply from renewable sources, the low energy density of EV batteries and the absence of unified EV standards.
Following the energy flow, problems can be identified throughout the renewable energy industry [2]: (1) as for the energy source, a large amount of solar/wind/water power is discarded due to its unstable supply, geographically uneven distribution and difficult transmission, although renewable energy only accounts for a small proportion of the current energy supply structure; (2) in the aspect of energy storage, the lack of technologies for large-scale energy storage accounts for solar/wind/water power curtailment, and the capacity and efficiency of power batteries are far from market demands and limit the widespread use of EVs; (3) thus, in the application market, thermal power from coal contributes more than 70 % of electricity consumption, and the goal of CO2 emission reduction has not been achieved yet; fuel vehicles still dominate the automobile industry because of EVs' short range and slow charging, and the petroleum import rate exceeds 70 % in China, which threatens the natural environment and national energy security.
In the view of aforementioned problems, herein is proposed the industry chain of "highway photovoltaics-energy storage-electric vehicles" as an instance to describe construction of a wellestablished eco-system of renewable energy: the new energy supply/transmission issues for EVs can be solved via building highway photovoltaic power stations, and predicaments in the energy-storage segment will be overcome through development of high-capacity batteries or other energy storage devices, while a breakthrough in the wide range of EV application is anticipated by implementing standardization and modularization of EV manufacture and promoting battery-exchanging modes.
Electricity from renewable energy source
In terms of renewable energy supply, besides traditional hydropower, solar and wind power exhibit the most rapid development. Owing to the continuous technological advancement and scale expansion of China's polysilicon industry, the cost of photovoltaic power has been remarkably reduced by 92 %, 87.5 % and 82 %, respectively, in terms of the prices of PV modules, PV systems and PV electricity. In addition, the average CO2 emission over the PV lifecycle is only 49 g kWh-1, much less than that of thermal power, 1000 g kWh-1. Therefore, these low-cost and environment-friendly advantages position PV to become the largest source of electricity in the future, as shown in figure 1. What's more, PV generates direct current (DC), which can be used to propel EVs directly and reduce EVs' usage charges significantly. However, present PV electricity has to be converted to alternating current (AC) via inverters before transmission to customers because of the geographic disadvantages of PV power plants, which are usually far away from habitations, and the AC electric power is rectified back to DC at EV charging stations. The PV power supply becomes so complicated after two transformations between DC and AC and long-distance transmission that not only energy loss but also investment is greatly increased. The consequence is a rise of over 55 % in PV electricity cost, which is a serious drawback for the renewable energy industry.
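The energy penalty of the DC-AC-DC chain can be illustrated by multiplying stage efficiencies. The values below are illustrative assumptions, not measured figures from the paper:

```python
def delivered_fraction(*stage_efficiencies):
    """Fraction of generated PV energy that survives a chain of
    conversion/transmission stages (each efficiency in [0, 1])."""
    fraction = 1.0
    for eta in stage_efficiencies:
        fraction *= eta
    return fraction

# Assumed stages: inverter (DC->AC), long-distance line, rectifier (AC->DC)
print(round(delivered_fraction(0.96, 0.93, 0.95), 3))  # → 0.848
```

Under these assumed efficiencies, roughly 15 % of the energy is lost before it ever reaches an EV battery, which is why direct DC supply from roadside PV is attractive.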
Exploration for solution of PV land predicament
The majority of commercial PV modules are made of silicon cell wafers, whose highest solar-energy conversion efficiency is only around 24 % [3], so the total module installation area must be large enough to ensure sufficient electricity output, and land use for PV plants becomes a tricky problem. Consequently, new schemes for distributed PV stations have been put forward to make full use of highway utilities, and two of them have been carried out. The first aims to utilize the non-functional areas along highways (greenbelts, median strips, service-area roofs, car sheds and so on) for PV cell installation [4]. However, the installation area is still not large enough, and the power generated can never satisfy the EVs' demand. The second trial paved a PV road surface on the expressway. Despite being implemented experimentally, it has already proved a complete failure because of the extremely high cost of the PV modules alone, which require abrasion-resistant high-strength materials and specially designed roadbeds. At the same time, the wear-resistant protective layers greatly reduce the light intensity reaching the PV cells: the calculated conversion efficiency is merely 5.06 % [5], and it would be much lower still under heavy traffic. Beyond the low ratio of conversion efficiency to construction cost, maintenance fees would be unimaginably high owing to the fast deterioration of the pavement.
A practical highway PV scheme
Learning from the foregoing failures, a more practical scheme is proposed: overhead highway PV stations, as illustrated in figure 2. We conceive of taking full advantage of the overhead space by mounting PV modules on stilts along the expressway. Apart from the installation height, overhead highway PV stations differ in nothing from traditional PV plants, so almost every part of current PV power plants can be transferred to this proposal, including modules, installation procedures, light-tracking technology and maintenance & recycling systems, which guarantees effective control of construction, operation and maintenance investment. Furthermore, massive DC electrical power output can be achieved, since complete PV coverage is easily realized along the motorways except in some tunnel regions. We can even anticipate further improvements built on the highway PV infrastructure, such as inductive wireless charging devices for EVs in the future.
Supply capacity of highway PV
In order to clarify the feasibility of overhead highway PV, its power supply capacity in Henan province is estimated as follows. Take a 4-lane highway with a 24 m wide roadbed as an example, and adopt 1/3 coverage of PV modules to ensure that enough light still reaches the road. Considering the variation of daylight time and sunshine intensity with weather and seasons, the PV cells work the equivalent of at least 1100 h (excluding maintenance time) per year at full capacity (0.1 kW m⁻²). Henan's total motorway length exceeds 7300 km, so more than 6.43×10⁹ kWh of electrical power can be harvested from highway PV facilities annually.
To put this figure in perspective: one Tesla EV consumes 18 kWh per hundred kilometres and travels 20 thousand kilometres per year, so the electrical power generated by highway PV stations in Henan could satisfy the energy demands of 1.78 million Tesla EVs, meaning that around 13 % of fuel vehicles could be replaced by EVs, given Henan's present car ownership of 13.27 million. In addition, the PV electricity charge could be as low as 0.6 ¥ kWh⁻¹, thanks to the absence of DC-AC conversion and long-distance transmission. The annual energy payment for one Tesla EV would then be about 2160 ¥, roughly 15 % of that of a fuel vehicle (15000 ¥, assuming average gasoline consumption of 10 L per hundred kilometres). Besides highways, more PV facilities can also be installed along ordinary roads to enhance PV's contribution to EV applications.
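These estimates can be reproduced with a back-of-envelope calculation using only the figures stated above:

```python
# Back-of-envelope check of the Henan highway-PV estimate.
length_m    = 7_300 * 1000     # motorway length: 7300 km
roadbed_m   = 24               # 4-lane roadbed width, m
coverage    = 1 / 3            # PV covers 1/3 of the roadbed width
power_kw_m2 = 0.1              # full-capacity output per m^2
full_hours  = 1100             # equivalent full-capacity hours per year

annual_kwh = length_m * roadbed_m * coverage * power_kw_m2 * full_hours
# ~6.43e9 kWh, matching the figure quoted in the text

ev_kwh_per_year = 18 / 100 * 20_000    # 18 kWh per 100 km, 20,000 km/yr
n_evs = annual_kwh / ev_kwh_per_year   # ~1.78 million Tesla EVs
share = n_evs / 13_270_000             # ~13 % of Henan's 13.27 M vehicles

print(f"{annual_kwh:.2e} kWh/yr -> {n_evs / 1e6:.2f} M EVs ({share:.0%})")
```

The arithmetic confirms both headline numbers in the text: roughly 6.43×10⁹ kWh per year, enough for about 1.78 million EVs, or about 13 % of the provincial fleet.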
Energy storage
Despite the rapid PV expansion enabled by the fast development of the polysilicon industry, the intrinsic randomness and intermittency of the energy supply hampers further applications [6]. Therefore, energy-storage devices need to be added between PV and the energy-consumption terminals. As the key link between energy source and consumers, energy-storage systems not only play a vital role in cutting electricity peaks but also act as a buffer to absorb shocks from the unstable power supply [7]. According to differences in scale and transportability, energy-storage systems are divided into power-grid energy storage and customer-side energy storage.
Power-grid energy storage
The basic duty of power-grid energy storage systems is to balance the temporally and spatially uneven distribution of energy by storing it in certain media during peaks of power generation and releasing it during surges in electricity demand. Such an energy storage system must therefore satisfy the following qualifications [8]: (1) high gravimetric/volumetric specific capacity, to store as much energy as possible; (2) flexible tunability, to release energy in accordance with users' demands; (3) high efficiency, to minimize the driving force and energy dissipation during charging and discharging at maximum rates; (4) high reliability and low cost, to preserve the economy of large-scale applications.
Flow batteries
Among the variety of grid-level energy storage options, the flow battery is one of the most promising schemes on account of its independently scalable energy and power, high safety and long lifetime. Its fast charging/discharging rate satisfies the high-power input/output demands of practical application scenarios, since charge transfer happens only at the electrode-electrolyte interfaces and there is no sluggish charge migration within a solid medium. Furthermore, the all-vanadium flow battery, which stores and releases energy through changes in the valence of vanadium ions, possesses a unique valence-balancing ability that prevents electrolyte degradation, making it a preferred option for grid energy storage with long durability, high security and low cost.
Liquid-metal batteries
Very similar to the flow battery, another potential grid energy storage solution is the liquid-metal battery, in which the anodic and cathodic liquid metals are separated by a molten salt according to the density differences between the working media, so there is no complex separator inside the cell, as shown in figure 5. Such a simple structure guarantees high operating stability and long cycle life, endowing the liquid-metal battery with excellent reliability and easy industrial scale-up. Its degradation is so slow that the capacity retains 99 % after 10 years of operation with one charging-discharging cycle per day. A practical liquid-metal battery has already been developed with a capacity of 1000 kWh, a charging/discharging power of 350 kW, and a total weight and volume of 15 t and 18 m³, respectively [10,11]. Currently, the only disadvantage lies in the additional energy needed to keep the working media molten at high temperature; a room-temperature technology would allow liquid-metal batteries to compete directly with flow batteries in the future.
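The cited specifications imply some useful derived figures, which a few lines of arithmetic recover:

```python
# Derived specs for the practical liquid-metal battery cited above
# (1000 kWh, 350 kW, 15 t, 18 m^3; 99 % capacity retained after
# 10 years at one cycle per day).
capacity_kwh, power_kw = 1000, 350
mass_kg, volume_m3 = 15_000, 18

grav_density = capacity_kwh * 1000 / mass_kg   # Wh/kg  (~66.7)
vol_density  = capacity_kwh / volume_m3        # kWh/m^3 (~55.6)
c_rate       = power_kw / capacity_kwh         # charge/discharge rate, 0.35 C

# Per-cycle retention r implied by r**3650 == 0.99:
cycles = 10 * 365
per_cycle_retention = 0.99 ** (1 / cycles)     # extremely close to 1

print(f"{grav_density:.1f} Wh/kg, {vol_density:.1f} kWh/m^3, {c_rate:.2f} C, "
      f"fade {(1 - per_cycle_retention) * 100:.5f} % per cycle")
```

The gravimetric density (~67 Wh/kg) is far below that of on-board lithium-ion cells, which is consistent with the technology being positioned for stationary grid storage rather than vehicles.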
Secondary utilization of EV batteries
Besides the development of new energy-storage systems, secondary utilization of EV power batteries offers another energy-storage route. The expansion of EV industries yields more and more retired power batteries with 60-80 % residual capacity [13], which, after performance diagnosis and safety assessment [14], can be classified and applied to domestic energy storage, standby power for base stations, microgrid systems, etc. Echelon use thus mitigates the environmental pollution from EV batteries, and there is essentially no acquisition cost because the retired batteries are usually free.
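The diagnose-then-classify step can be sketched as a simple routing function. The 60-80 % residual band comes from the text [13]; the specific role thresholds below are illustrative assumptions, not values from the source.

```python
# Sketch of echelon-use classification: route a retired EV battery to
# a second-life role by its measured residual capacity. Thresholds are
# hypothetical; only the 60-80 % retirement band comes from the text.
def second_life_role(residual_capacity: float) -> str:
    """residual_capacity: fraction of rated capacity remaining (0..1)."""
    if residual_capacity >= 0.80:
        return "reuse in low-demand vehicles"   # assumed cut-off
    if residual_capacity >= 0.70:
        return "base-station standby power"     # assumed cut-off
    if residual_capacity >= 0.60:
        return "domestic / microgrid storage"   # assumed cut-off
    return "send to materials recycling"

print(second_life_role(0.75))  # base-station standby power
```

In practice the classification would also weigh safety diagnostics and internal-resistance measurements, not capacity alone.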
Customer-side energy storage
The EV, as the most important renewable-energy application terminal discussed herein, is a focus of both scientific and industrial research. Great efforts are being made to break the bottlenecks in performance, price and production capacity of EV batteries [15]. Owing to their high energy density, lithium-ion batteries (LIBs) dominate the on-board battery industry, and the battery materials play the decisive role in specific capacity, working voltage, cycle life and safety. Poor physicochemical properties of these materials account for the familiar battery issues of limited capacity, large volume, heavy weight, slow charging and short durability. Additionally, lagging standardization of power batteries leads to uncontrolled stability and uniformity. Thus, the future development of power batteries will aim at large capacity, long lifetime and high standardization.
In order to break through these limitations, a roadmap for power batteries over the next 10 years has been formulated in China. By synchronously investigating novel battery materials, management & control technology, echelon utilization and materials recycling, large-scale standardized production of high-performance EV batteries is planned to be implemented gradually, targeting energy densities of 350, 400 and 500 Wh kg⁻¹ by 2020, 2025 and 2030, respectively, as listed in Table 1. However, the Chinese government removed the 350 Wh kg⁻¹ target last year because research progress has lagged severely behind expectations.

On the anode side, the specific capacity of graphite is already close to its theoretical limit [16], hence novel anode materials with higher capacity are urgently needed. At room temperature, silicon reacts with lithium to form the Li15Si4 alloy, whose theoretical gravimetric capacity is 3572 mAh g⁻¹ [16], nearly 10 times that of graphite, and silicon is widely recognized as the candidate for the next generation of anodes. In the future, more effort will go into new technologies such as nanosizing and compositing to improve charging-discharging rate and cycling lifetime, thereby promoting the large-scale commercialization of silicon anodes.

Notes: (1) RT - room temperature, HT - high temperature; (2) the specific capacity of commercial graphite materials already reaches 370 mAh g⁻¹, close to the theoretical capacity; (3) silicon anodes suffer severe volume change (more than 300 %), leading to fast degradation of mechanical and electrochemical performance.

Similarly, cathode materials face their own performance limits. For a long time, the cathodes of EV batteries have been based mainly on LiFePO4 owing to its high stability and safety, but sustained increases in the demand for energy density and power density have impelled the advent of ternary materials (LiNi1-x-yCoxMnyO2), which combine the high capacity of Co- and Ni-based cathodes with the safety of Mn-based cathodes.
Besides their superior specific energy and power relative to LiFePO4, ternary cathode materials offer moderate cycle life as well as lower toxicity and price than their LiMn2O4 and LiCoO2 counterparts, benefiting from the synergy of the Ni, Co and Mn elements [17,18]. Ternary-cathode batteries will be the general trend in the EV market, and the key issue is how to raise the Ni content to achieve the highest capacity under the prerequisite of safe operation.
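The anode capacity figures quoted in this section can be sanity-checked from Faraday's law, capacity = x·F/(3.6·M) in mAh g⁻¹, where x is the number of Li transferred per host atom (or formula unit) and M the host molar mass. Small differences from the quoted 3572 mAh g⁻¹ come from the constants used.

```python
# Theoretical gravimetric capacities of silicon (Li15Si4) and graphite
# (LiC6) anodes from Faraday's law: capacity = x * F / (3.6 * M) mAh/g.
F = 96485.332        # Faraday constant, C/mol
M_SI = 28.0855       # molar mass of Si, g/mol
M_C6 = 6 * 12.011    # molar mass of the C6 host in LiC6, g/mol

cap_si = (15 / 4) * F / (3.6 * M_SI)   # 3.75 Li per Si atom -> ~3.58e3 mAh/g
cap_graphite = 1 * F / (3.6 * M_C6)    # 1 Li per C6 unit -> ~372 mAh/g

print(f"Si: {cap_si:.0f} mAh/g, graphite: {cap_graphite:.0f} mAh/g, "
      f"ratio {cap_si / cap_graphite:.1f}x")
```

The computed ratio of roughly 9.6 matches the "nearly 10 times" claim in the text.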
Standardization of EV batteries
Much attention is paid to the development of novel materials, while the battery standards, equally important, are usually overlooked [19]. Non-uniform specifications of cell units have become one of the main obstacles between EV manufacturers and battery producers, significantly slowing the expansion of the EV battery industry. The difficulty lies in module assembly, which is traditionally accomplished by welding. The shortcomings of welded assembly are well identified: the welding equipment and operations require high investment, the induced surface protuberances and internal pores threaten battery safety, and the integrated modules are difficult to recycle. Modular assembly techniques have therefore been put forward, such as the pressure-connection method for fast assembly and disassembly of modules. Test results from an original equipment manufacturer (OEM) indicate that the new assembly method not only requires much shorter operation time but also offers assembly density, reliability and stability comparable to welding. Moreover, pressure connection can turn fixed-capacity batteries into variable-capacity products composed of a series of standardized modules, giving battery suppliers and EV manufacturers a simple and cheap way to produce EV batteries with any capacity a client demands, and also reducing the risks during market validation of the new assembly method.
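The variable-capacity idea can be illustrated with a toy calculation: a pack is simply a count of standardized modules sized to the client's demand. The per-module capacity below is an assumed figure for illustration only.

```python
import math

# Sketch of variable-capacity packs built from pressure-connected
# standard modules. MODULE_KWH is a hypothetical module rating.
MODULE_KWH = 2.5

def modules_needed(target_kwh: float) -> int:
    """Smallest number of standard modules meeting a client's demand."""
    return math.ceil(target_kwh / MODULE_KWH)

for demand in (30, 60, 100):   # example client demands, kWh
    n = modules_needed(demand)
    print(f"{demand} kWh -> {n} modules ({n * MODULE_KWH:.1f} kWh installed)")
```

Because modules snap in and out rather than being welded, a failed module can also be swapped individually, which is part of the recycling advantage claimed above.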
EV battery management
Advanced materials and reasonable standards give batteries excellent properties, while the battery management system (BMS) is the key to bringing those properties into full play for the best performance in practical operation [21]. With support from the IoT, we propose to set up a fully interconnected intelligent management system covering the whole lifetime of EV batteries, "registration - charging/usage record - maintenance record - recycling/scrapping treatment", and monitoring electricity output, energy storage and security state in real time [22]. With the assistance of artificial intelligence and big-data analysis, the BMS can predict potential risks to prevent serious accidents such as battery self-ignition, and estimate the remaining lifetime so that manufacturers can make appropriate recycling plans in advance.
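A minimal sketch of the whole-lifetime record such a BMS might keep, following the four stages named above; the field names and structure are assumptions for illustration, not a description of any existing system.

```python
from dataclasses import dataclass, field

# Lifecycle stages taken directly from the text above.
STAGES = ("registration", "charging/usage", "maintenance",
          "recycling/scrapping")

@dataclass
class BatteryRecord:
    """Hypothetical per-battery record in an IoT-backed BMS."""
    battery_id: str
    stage: str = STAGES[0]
    events: list = field(default_factory=list)

    def advance(self, note: str) -> None:
        """Log an event and move the battery to its next lifecycle stage."""
        self.events.append((self.stage, note))
        idx = STAGES.index(self.stage)
        if idx < len(STAGES) - 1:
            self.stage = STAGES[idx + 1]

rec = BatteryRecord("PACK-001")
rec.advance("registered by manufacturer")
print(rec.stage)  # charging/usage
```

Real-time telemetry (output power, state of charge, temperature) would hang off the charging/usage stage, feeding the risk-prediction models mentioned above.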
Electric vehicles
Driving range is the most pressing concern for EVs, and it depends on both battery capacity and electricity replenishment. Owing to the lack of charging stations and fast-charging technologies [24], long-distance travel would not be realized even if the energy density of EV batteries were raised above 500 Wh kg⁻¹ as expected. For the time being, the charging rate of EVs is still not comparable with the refuelling of gasoline vehicles, and it is too slow to match the rapid development of the EV industry [19]. There is little possibility of replacing fuel vehicles with EVs under the present recharging mode [25]. However, the successful operating mode of the fuel-vehicle industry offers alternative solutions. One inspiration is its widespread fuel supply network of gasoline stations. Similarly, charging stations should be widely constructed to support the EV industry; but instead of direct charging, a battery exchanging/swapping mode (BEM/BSM) is adopted, in which charged batteries are simply substituted for the discharged ones unloaded from EVs.
Standardization of EVs
The premise of battery exchanging is standardized power batteries and intelligent management [26]. Only with unified battery sizes, shapes and other specifications can exchanging servicers provide EV clients with authenticated batteries from any manufacturer. Assuredly, the BEM also requires standardization of vehicle manufacturing, including vehicle design modularization and battery module standardization (uniform installation position and charging port, and a standard loading/unloading procedure), in order to form a unified procedure for automatic battery-exchanging operations. In addition, a set of communication protocols and interfaces needs to be added to the vehicle management system for data transmission with the battery management system, providing BEM servicers with the basic information needed for battery charging and maintenance.
Business mode of battery exchanging
The EV battery exchanging mode is not limited to running the exchanging stations; it involves at least four parties: EV manufacturers, battery suppliers, BEM servicers and EV users, who respectively handle EV production & sale, battery supply, battery charging/rent/exchanging/maintenance together with construction of the exchanging-station network, and EV use within the industry chain. Obviously, the battery exchanging service providers lie at the core. In reality, however, EV manufacturers have been more active in testing the BEM: besides Pand (a ride-hailing servicer), Tesla, Baic and Chery have already begun to provide battery exchanging services in some big cities. Once the BEM is proved comparable to the fuel-vehicle mode in efficiency and economy, a standardized battery exchanging network will soon be established to carry out overall planning and scheduling from electricity generation to consumption.
Hydrogen energy
In the field of renewable energy, hydrogen must be mentioned in addition to electricity. As another clean secondary energy carrier, hydrogen has so many advantages, such as abundant sources, high energy density, high combustion value, renewability, storability and zero pollution, that it is regarded as the ultimate energy source of the 21st century. With regard to its applications, the hydrogen fuel cell (HFC) is much more efficient than direct burning. Thanks to the ultra-high energy density of hydrogen, HFC-driven vehicles should have comparable or even longer range than gasoline vehicles, with the same rate of fuel recharging. However, hydrogen energy technology is not yet mature, and the industry faces cost and technical challenges in hydrogen production, hydrogen storage and HFC catalysts. Even so, the hydrogen supply issue can at least be well settled by water electrolysis as the cost of PV/wind electricity declines further in the near future, and hydrogen can act both as a medium to store surplus solar/wind/water power and as a supplement to the current electricity-dominant renewable energy industry. The development and improvement of the whole new-energy industry chain will then be promoted by accelerating fundamental and applied research in hydrogen-evolution, hydrogen-storage and HFC-catalysis materials, as well as in hydrogen-driven vehicle technologies.
Summary
Herein, the issues relating to the eco-system of renewable energy industries have been discussed. The up- and down-stream sections of new energy vehicles (especially EVs) were taken as an instance to describe how to construct the systems of energy supply, storage and recharging, and how to implement the production and operation of new energy vehicles. Hydrogen was also briefly introduced as a supplement to the electricity-dominant renewable energy industry. Through the synergic development of renewable energy sources, energy storage and novel applications, a renewable energy system independent of fossil energy will be established, gradually reducing petroleum imports and guaranteeing national energy security.
COVID-19 disaster response: A pharmacist volunteer's experience at the epicenter
In late 2019, a cluster of patients with respiratory illness due to a novel coronavirus (SARS-CoV-2) was first reported in Wuhan, China. The subsequent spread of the virus occurred at a historic rate across the globe, with more than 6.2 million documented infections and more than 375,000 confirmed deaths as of June 1, 2020. The unique disease caused by this pathogen, known as coronavirus disease 2019 (COVID-19), has challenged medical professionals and brought healthcare systems to their knees.
By mid-April 2020, the United States had surpassed all other countries in the number of deaths due to COVID-19 despite widespread mitigation efforts. New York City (NYC) quickly became the new epicenter of the pandemic. In NYC, confirmed cases, hospitalizations, and deaths rose exponentially relative to increases in other areas of the country. Hospitals throughout NYC reported apocalyptic scenes in emergency departments (EDs), and a tsunami of patients with severe hypoxic respiratory failure exhausted local healthcare systems. A need for additional qualified healthcare professionals became obvious as critically ill patients continued to balloon the censuses of local hospitals, especially in intensive care units (ICUs). Critical shortages of medical supplies, personal protective equipment (PPE), and pharmacologic therapies caused additional strain on frontline workers. The initial patient surge in NYC was bad, and projections were showing that it would worsen as the peak approached. In response, the governor of New York, Andrew Cuomo, released a public plea for healthcare volunteers to assist in managing the unprecedented healthcare crisis.
As an ED clinical pharmacist in practice at Ronald Reagan UCLA Medical Center in Los Angeles, CA, I was closely monitoring the spread of COVID-19 cases and how it might impact our community. My institution had established comprehensive plans for managing the anticipated surge in COVID-19 cases-we were ready. As weeks passed, our ED remained unusually quiet as most Californians were sheltering in place. Thanks to swift, aggressive action by local and state officials to implement early physical distancing guidelines, our COVID-19 surge never came and we witnessed the success of "flattening the curve." We were seeing sporadic cases in our ED, but nowhere near the volume we had expected. As a critical care-trained pharmacist, I have always been stimulated by complex cases and seemingly chaotic environments. I am passionate about using my knowledge base and skill set to improve outcomes in patients, particularly the critically ill. This passion, coupled with my personal and professional ties to NYC, made it very difficult to sit on the sidelines watching the crisis continue to worsen and the death toll continue to rise. Governor Cuomo's plea resonated with me on many levels, and I decided to add myself to the volunteer pool along with nearly 100,000 other healthcare professionals who stoically answered the call.
The response by New York State to the intensifying COVID-19 crisis was coordinated and strategic. One of the pillars of its plan was to create an online portal to link medical volunteers with healthcare facilities, prioritizing hospitals in greatest need. This was accomplished through NYC's Medical Reserve Corps (MRC). The Web-based MRC portal platform not only allowed hospitals to browse credentials of registered volunteers based on their internal needs, but also gave volunteers the ability to be proactive and search for temporary positions that matched their skills and expertise. Shortly after registering as an out-of-state volunteer, I used the portal to find institutions in need of a pharmacy specialist with training and experience in critical care. I reached out to the director of pharmacy at University Hospital of Brooklyn-SUNY Downstate Health Sciences University, one of 3 designated COVIDonly hospitals in New York. Within a few hours we connected over the phone to discuss the details of the hospital's needs and get to know each other. A few days later, I was on the ground at the epicenter of the pandemic to begin my 2-week deployment in one of the hospital's many COVID-ICUs.
Acclimating as a pharmacist to a new hospital system is complex and takes time. For this reason, the onboarding process for incoming pharmacy residents and new hires for clinical pharmacist positions typically lasts weeks to months. As a disaster volunteer, the goal is to maximize one's contributions in a compressed timespan. To accomplish this, my orientation was expedited to accelerate competency in both clinical and operational pharmacy services. The first day was critical. The success of this 1-day onboarding required dedicated coordination among various champions within the department to bring me up to speed quickly in 3 key areas: administration, information technology (IT), and institutional clinical practice.
Administrative tasks were tackled first: badge access, compliance training, human resources paperwork, electronic medical record (EMR) access, meeting my new colleagues, and learning the location of pertinent areas (eg, main pharmacy, clinical offices, ICU). Arguably the most important and useful block of the day was spent learning the institution's IT software, namely the EMR and clinical decision support system. The last 10 years of my clinical practice had been spent using a different EMR, so this was akin to learning a foreign language in a single day. Two of the clinical pharmacists dedicated their time to providing me with a thorough 1-hour crash course on use of each of the programs, including navigating patient profiles, obtaining objective information for clinical assessments (eg, laboratory values, culture data, medication administration record, imaging), reading ICU flow sheets (eg, fluid and medication infusion rates, hemodynamics, tube feeds, intake and output balance), and order entry and verification quirks unique to the system. I would begin COVID-ICU rounds the next morning, so there was substantial pressure to be independent in all aspects of these fundamental practice tools. I dedicated the remainder of day 1 to test-driving the system by simulating rounding workflows until I felt I was competent. The final puzzle piece in my onboarding was gaining a working knowledge of various aspects pertinent to clinical practice at the institution. I invested ample time learning a new hospital formulary, antimicrobial stewardship restrictions, therapeutic drug monitoring procedures, and a litany of hospital and pharmacy-specific protocols (both general and COVID-19 specific) that would impact my recommendations and practice in the ICU.
Being thrust into a foreign hospital system and only loosely grasping the intricacies of the EMR can make even seasoned clinical pharmacists feel as though they have one hand tied behind their back. Despite the new environment and systems-based challenges, I was eager and determined to hit the ground running. Day 1 in the COVID-ICU was unlike any first day I have had in my more than 10-year career. A previously closed hospital unit had been reopened and repurposed as a COVID-ICU when the volume of critically ill patients surged. It was originally a ward-style unit that lacked dedicated rooms or proper barriers for airborne isolation. Individual makeshift isolation rooms were fabricated using metal poles and an opaque, heavy-duty plastic tarp. Industrial duct tape was used to seal the temporary plastic walls to the ceiling. A zippered "door" was installed to allow entry into each room, and a small clear window in the cloudy plastic provided a vantage for viewing the patient and monitors from outside. The rooms were labeled with large sequential letters drawn onto the plastic using a bright red marker. Infusion pumps were positioned in the hallway outside of every room to manage various infusions, usually a combination of sedatives, vasopressors, paralytics, and insulin. The use of extension tubing allowed the pumps to be positioned in the hallway outside of each patient's room, an adaptive strategy designed to reduce frequency of entry into the rooms of infected patients, thus limiting unnecessary exposure and helping to conserve precious PPE. I was able to inspect each patient's drips prior to rounds without concern of exposing myself.
My rounding team included a pulmonary/critical care attending physician, a medical fellow, and 2 medical residents. Like the ICU itself, we were a makeshift team. The attending physician on service was a volunteer from Dayton, OH, who had arrived 2 days prior (that made 2 of us who were unfamiliar with the system). Due to infection control concerns, our primary rounds were conducted in the ICU conference room. Rounds themselves were typical for the ICU setting-patient presentations, review of imaging, and a systems-based, head-to-toe approach to develop an assessment and plan. The patients were uniquely critically ill. Every patient in the unit had tested positive for SARS-CoV-2, and their presentations and laboratory abnormalities-and the malevolence of the disease-were oddly similar. The youngest patient was 35 years old, while the oldest was 85. The patient population served by the state-funded hospital primarily consists of underserved patients with high prevalence rates of obesity, diabetes, and hypertension, conditions that have been shown to predispose patients to higher-severity illness due to COVID-19.
My 2 weeks rounding in the COVID-ICU were marked by the highest of highs and the lowest of lows. The theme was 1 step forward, 2 steps back. Nearly every day there was either a code or a death in the unit; some days there were several. Cardiac arrests resulting from mucous plugs were all too common. Our team felt almost helpless in a daily grind to improve the plight of our patients. During my 2 weeks of rounding, only a single patient was successfully transferred to a step-down unit; the others either remained ventilated in the ICU or succumbed to the disease. COVID-19 is unique in that there is no magic bullet, no rigorously studied medical strategy, intervention, or pharmacologic treatment proven to be effective at reversing the course of patients with severe disease. In addition to supportive care, the backbone of our management revolved around therapies supported only by low-quality, sometimes investigational evidence, which were employed alone or in combination on a case-by-case basis: hydroxychloroquine, corticosteroids, antibiotics for superimposed bacterial infections, therapeutic anticoagulation, interleukin-6 receptor antagonists, and convalescent plasma. A constant stream of new information was released into the literature seemingly on a daily basis. In response, we constantly analyzed and adapted our practice based on emerging data that might offer some hope of improved outcomes. Despite our team members being strangers just weeks before, having come together in the midst of a pandemic, we tackled this gray area of medicine, leaning on one another's expertise and unique perspectives, to deliver world-class quality care to our critically ill patients.
From both a personal and professional perspective, my time volunteering as a clinical pharmacist in a COVID-ICU at the epicenter of the COVID-19 pandemic was both rewarding and emotionally taxing. In my usual work in the ED of a large academic hospital and level I trauma center, I am desensitized to chaos, frequently cope with mortality, and am used to oscillating between extreme highs and extreme lows. My experience at a COVID-only hospital was similar yet eerily different. COVID-19 is a relentless disease that occurred at unprecedented rates in Brooklyn, disproportionally decimating the local community and overwhelming hospitals. The emotional toll on frontline healthcare workers is difficult to measure. Ingrained into your memory is the image of patients struggling to breathe, the endless sound of ventilator alarms, and the repeated overhead code pages signaling a cardiac arrest or need for urgent intubation: another patient actively fighting for life. The unprecedented rate of mortality that I witnessed is especially difficult to process. Knowing that these patients often died alone, likely not having seen their loved ones in days or weeks due to restrictive visitor policies, made it even more difficult to cope with. All of this occurred against the backdrop of what felt like a science fiction movie.
Through this difficult and extraordinary time, the healthcare community has rallied together, with medical professionals offering their support to colleagues in hot spots around the world. The response to the peak of the crisis in NYC was unparalleled. Physicians, nurses, respiratory therapists, pharmacists, and many other healthcare workers from across the country volunteered to bring their talents to the frontlines. As pharmacists, we carry a unique skill set and expertise that is unmeasurably valuable in disaster response. This is especially true when pharmacotherapy is integral to the backbone of management of affected patients, such as those with severe COVID-19. Having the ability to contribute to assisting a reeling medical center and work with a temporary ICU team to treat patients with this novel disease was the opportunity of a lifetime.
Disclosures
The author has declared no potential conflicts of interest.
AM J HEALTH-SYST PHARM | VOLUME XX | NUMBER XX | XXXX XX, 2020 | 2020-07-24T13:05:37.476Z | 2020-07-23T00:00:00.000 | {
"year": 2020,
"sha1": "000f1007644df45a59387392af4c929d4bcd4a0d",
"oa_license": null,
"oa_url": "https://academic.oup.com/ajhp/article-pdf/77/21/1786/34499078/zxaa233.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "9939c7d91f2cb1b5920a04e68e35b4ca3dd8728c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235628198 | pes2o/s2orc | v3-fos-license | Acute myocarditis during the COVID-19 pandemic: A single center experience
Objective We sought to compare the occurrence and characteristics of patients with acute myocarditis admitted during the coronavirus disease 2019 pandemic to those admitted prior. Design We performed a retrospective chart review of patients with the primary discharge diagnosis of acute myocarditis from September 1st 2017 through August 31st 2020. Results We identified 67 patients, 45 (67%) admitted pre-pandemic, and 22 (33%) during the pandemic. Rate of admissions for acute myocarditis was 1.5/month [95% CI 1.04–1.95] pre-pandemic, and 3.7/month [95% CI 2.36–4.97] (p < 0.001) during the first 6 months of the pandemic. Of the 22 patients admitted during the pandemic, 10 (45%) tested positive for SARS-CoV-2. Patients who tested positive for SARS-CoV-2 were older and had lower peak troponin levels. Conclusions During the pandemic, less than half of the patients admitted with acute myocarditis tested positive for SARS-CoV-2. Patients who tested positive were older and had lower peak troponin levels.
Introduction
The development of myocarditis as a complication of coronavirus disease 2019 (COVID-19) has been recognized since early in the pandemic [1,2]. We sought to compare the occurrence of acute myocarditis and the characteristics of patients with acute myocarditis admitted to our hospital during the COVID-19 pandemic to those admitted prior to the pandemic.
Methods
After Institutional Review Board approval, we collected data regarding admissions for the diagnosis of acute myocarditis from September 1st 2017 through August 31st 2020 from the University of Florida Health Integrated Data Repository. We reviewed all charts to verify the diagnosis of acute myocarditis using the European Society of Cardiology Working Group definition of clinically suspected myocarditis [3], and excluded all patients with troponin elevation secondary to other conditions including acute myocardial infarction (type 1 and type 2), sepsis, Takotsubo syndrome, and pulmonary embolism. We collected information regarding demographics, comorbidities, cardiac testing, laboratory values, dates of admission, treatment, and in-hospital mortality rates. We compared variables before and during the pandemic, and between SARS-CoV-2 positive and negative patients. Categorical variables were summarized as counts and percentages and were compared using chi-square testing. A 2-tailed p value of <0.05 was considered significant. All analyses were performed using GraphPad Prism (Version 7.0).
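The rate comparison reported in the Results (45 admissions over 30 pre-pandemic months vs. 22 over the first 6 pandemic months) can be illustrated with a simple conditional binomial test: under a constant-rate null hypothesis, each of the 67 admissions falls in the 6-month pandemic window with probability 6/36. The sketch below is purely illustrative and is not the authors' analysis code; the Wald confidence intervals it computes use a plain normal approximation and will not exactly reproduce the published intervals.

```python
import math
from scipy.stats import binomtest

# Counts taken from the paper: 45 admissions over 30 pre-pandemic months,
# 22 admissions over the first 6 pandemic months.
pre_n, pre_months = 45, 30
pan_n, pan_months = 22, 6

def rate_ci(n, months):
    """Monthly rate with a simple Wald 95% CI: rate +/- 1.96*sqrt(n)/months.
    Note: this approximation will not exactly match the published intervals."""
    rate = n / months
    half = 1.96 * math.sqrt(n) / months
    return rate, rate - half, rate + half

pre_rate, pre_lo, pre_hi = rate_ci(pre_n, pre_months)
pan_rate, pan_lo, pan_hi = rate_ci(pan_n, pan_months)

# Conditional exact test: given 67 total events, under equal monthly rates each
# event lands in the pandemic window with probability 6/36.
result = binomtest(pan_n, pre_n + pan_n, pan_months / (pre_months + pan_months))
print(f"pre: {pre_rate:.2f}/month, pandemic: {pan_rate:.2f}/month, p = {result.pvalue:.4g}")
```

Consistent with the paper's reported significance, the exact conditional test yields a very small p value for these counts.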
Results
We identified a total of 166 patients in the Integrated Data Repository admitted with a diagnosis code of acute myocarditis between September 1st 2017 and August 31st 2020. Following thorough chart review, 67 patients were confirmed to have a primary discharge diagnosis of acute myocarditis with no other causes for troponin elevation, including type I and type II non-ST elevation myocardial infarction and Takotsubo syndrome. These 67 patients were included in the analysis.
The first case of confirmed COVID-19 at the University of Florida was on March 13th 2020. From March 13th 2020 through August 31st 2020, a total of 1112 patients were admitted with COVID-19 to our institution. Those who tested positive for SARS-CoV-2 were significantly older and had lower levels of high sensitivity troponin than those who tested negative for SARS-CoV-2 (Table 1). Other than these factors, we found no other significant differences between patients with acute myocarditis who presented pre-pandemic and during the pandemic (Table 1). Among the 10 patients with acute myocarditis who tested positive for SARS-CoV-2, mean symptom onset to diagnosis was 5 days [range 1-7], and their treatment included tocilizumab (n = 1), remdesivir (n = 1), steroids (n = 4), beta-blocker (n = 5), angiotensin-converting enzyme inhibitor (n = 3), anticoagulation (n = 7), and placement of a mechanical support device in one patient with cardiogenic shock. Pre-pandemic, a total of 5 patients (11%) underwent endomyocardial biopsy for definitive diagnosis [eosinophilic (n = 1), coxsackie (n = 1), giant cell (n = 1), fulminant lymphocytic (n = 1), presumed viral with myocyte enlargement, vacuolization and inflammation (n = 1)]. During the pandemic, 1 patient (5%) underwent endomyocardial biopsy and was presumed viral (edema and inflammation present).
Discussion
We found a two-fold increase in the occurrence of acute myocarditis admissions during the first 6 months affected by the COVID-19 pandemic compared to years past. In addition, patients admitted with acute myocarditis during the pandemic who tested positive for SARS-CoV-2 were significantly older and had lower levels of high sensitivity troponin than those who tested negative. Less than half of the patients admitted with myocarditis during the early months of the pandemic, however, tested positive for SARS-CoV-2. While all SARS-CoV-2 negative patients were diagnosed with traditional cardiotropic viruses known to cause myocarditis, including adenovirus, cytomegalovirus, enterovirus, and parvovirus, it is possible that these patients had false negative testing for SARS-CoV-2, or a delayed COVID-19 presentation such that the infection had already cleared. High rates of co-infection between SARS-CoV-2 and other respiratory pathogens [4] and delayed presentation of myocarditis due to COVID-19 in patients who test negative for SARS-CoV-2 on admission have been recently reported [5,6].

Table 1
Myocarditis before and during the COVID-19 pandemic and according to SARS-CoV-2 status.
Despite multiple reports of COVID-19 related myocarditis cases [2], recent pathological studies suggest that direct SARS-CoV-2 infiltration into the myocardium is exceedingly rare. In these studies, investigators reviewed endomyocardial biopsy and autopsy series and found that COVID-19 related myocarditis occurred in ~4.5% of patients [7,8]. Moreover, due to the referral bias for endomyocardial biopsy and autopsy, they suggest the true incidence is likely even lower. These studies underscore the need for a more standardized approach in reporting cardiac pathologic findings in COVID-19, such as myocarditis, and question the utility of routine endomyocardial biopsy testing considering the low yield and risks associated with this procedure.
Limitations of our study include the fact that it is a small, single-center, retrospective study. Diagnosis of acute myocarditis was made on the basis of a combination of non-invasive testing (including CMR), endomyocardial biopsy, virology testing, and high clinical suspicion. While not all patients underwent endomyocardial biopsy or CMR imaging, they were deemed to have clinically suspected myocarditis according to previously published guidelines [3], and our thorough chart review confirmed that acute myocarditis was the primary discharge diagnosis in these 67 patients. It is possible that the observed increase in acute myocarditis cases is related to a more investigative approach by the medical team during the pandemic searching for an association with SARS-CoV-2. Lastly, antibody testing for prior SARS-CoV-2 infection in patients who were SARS-CoV-2 negative on admission was not routinely performed as part of standard clinical care at our institution.
In conclusion, we found a two-fold increase in the occurrence of acute myocarditis admissions during the first 6 months affected by the COVID-19 pandemic compared to years past. Less than half of the patients tested positive for SARS-CoV-2. Patients who tested SARS-CoV-2 positive were older and had lower peak levels of high sensitivity troponin compared to those who tested negative for the virus.
Fig. 1. Acute myocarditis admissions from September 1, 2017 through August 31, 2020 at the University of Florida. The first case of confirmed COVID-19 in the U.S. was January 15, 2020. The University of Florida started routine testing for SARS-CoV-2 in all admitted patients in March 2020, and the first positive case at the University of Florida was on March 13, 2020. Acute myocarditis patients diagnosed with COVID-19 are in the shaded area. | 2021-06-25T13:17:42.840Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "91d0995c674f4bc10596fb18a5ef21e7e15a2166",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ahjo.2021.100030",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2f94224b322ec5b9e95849d3dbad49a297b20d77",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15974428 | pes2o/s2orc | v3-fos-license | Circulating Very Small Embryonic-Like Stem Cells in Cardiovascular Disease
Very small embryonic-like cells (VSELs) are a population of stem cells residing in the bone marrow (BM) and several organs, which undergo mobilization into peripheral blood (PB) following acute myocardial infarction and stroke. These cells express markers of pluripotent stem cells (PSCs), such as Oct-4, Nanog, and SSEA-1, as well as early cardiac, endothelial, and neural tissue developmental markers. VSELs can be effectively isolated from the BM, umbilical cord blood, and PB. Peripheral blood and BM-derived VSELs can be expanded in co-culture with a C2C12 myoblast feeder layer and undergo differentiation into cells from all three germ layers, including cardiomyocytes and vascular endothelial cells. Isolation of VSELs using a fluorescence-activated cell sorting (FACS) multiparameter live cell sorting system depends on a gating strategy based on their small size, expression of PSC markers, and absence of hematopoietic lineage markers. VSELs express early cardiac and endothelial lineage markers (GATA-4, Nkx2.5/Csx, VE-cadherin, and von Willebrand factor) and the SDF-1 chemokine receptor CXCR4, and undergo rapid mobilization in acute MI and ischemic stroke. Experiments in mice showed differentiation of BM-derived VSELs into cardiac myocytes and the effectiveness of expanded and pre-differentiated VSELs in improving left ventricular ejection fraction after myocardial infarction.
Introduction
Rapid progress in the field of experimental studies on cardiovascular regeneration is being translated to the clinical application of stem cells (SC) isolated from the bone marrow (BM) or the myocardium (cardiac stem cells, CSC). The aim of this approach is to promote myocardial recovery in patients with acute myocardial infarction (MI) or to improve cardiac function in the setting of ischemic cardiomyopathy. So far, there is no proof that use of SC can lead to bona fide cardiac regeneration, and the mechanism of beneficial effects observed in some studies is probably mediated by paracrine effects leading to neoangiogenesis, reduction of apoptosis, as well as recruitment of CSC to the site of the ischemic injury [1]. At the current state of clinical application of SC, there is no convincing data showing the superiority of any particular cell type or source, so a heterogeneous population of BM-derived mononuclear cells (MNC) is used most often; however, some recent studies assess the efficiency of selected subpopulations of BMC, such as CD133+, CD34+CXCR4+ cells, mesenchymal stromal cells (MSC), or CSC [2]. Despite the encouraging experience from trials using BM-derived MNC, novel types of SC carrying higher reparatory potential are clearly needed. Such populations include CSC [3], engineered BM-derived progenitor cells (e.g., C-Cure) [4], allogeneic MSC [5], and pluripotent stem cells (PSC). PSC can be produced using gene transfer (induced pluripotent stem cells) [6,7] or isolated from adult tissues (very small embryonic-like stem cells [VSELs] [8]). Isolation of PSC from adult tissues seems to be a very promising approach because cells obtained in such a way are ethically acceptable; however, efficient methods of isolation and expansion in culture of human cells are still not available [9,10]. This review discusses the recent data on characteristics and potential clinical application of VSELs.
Potential Role of PSC, Including VSELs in Adult Organisms
Adult tissue PSC represent a population of epiblast-derived progeny which survive into adulthood in different locations in BM and solid organs. We hypothesized that these cells, including VSELs, migrate during embryogenesis along with hematopoietic stem cells (HSCs) to the BM, and their migration follows the gradient of chemoattractants, including chemokine stromal cell-derived factor-1 (SDF-1) [11]. Their potential role is to be a reserve population of SC and tissue-committed progenitor cells which can be mobilized after tissue injury. VSELs are primitive cells expressing the markers typical for primordial germ cells including Stella, Fragilis, Nobox, Hdac6, and CXCR4. We hypothesize that quiescent VSELs serve as a reserve pool of PSC and are part of the physiological mechanism of tissue repair and renewal of resident SC [11]. Their quiescence is a safety mechanism preventing the formation of teratomas.
Isolation and Sources of VSELs
Initially, a rare population of VSELs was isolated from adult murine BM by Kucia et al. Figure 1 shows the gating strategy used for FACS sorting. The detailed description of the protocol was published elsewhere. Briefly, the initial step is the lysis of red blood cells to obtain the fraction of nucleated cells. Erythrocyte lysis buffer is used instead of Ficoll centrifugation because the latter approach might deplete the population of very small cells [12].

Fig. 1 Strategy for isolation of VSELs from human peripheral blood after mobilization with G-CSF (MPB-VSELs) using a FACS-based live cell sorting system. The gating strategy was developed using synthetic beads of known diameters (1, 2, 4, 6, 10, 15 μm) (a) to define the extended lymph-gate for subsequent sorting (b). After lysis of erythrocytes, the mobilized peripheral blood total nucleated cell (TNC) fraction is stained with antibodies against hematopoietic lineages markers (Lin), the CD133 stem cell antigen (c), and the CD45 pan-leukocytic antigen (d). Gate R1 was set up to include objects with diameter >2 μm. Events included in region R1 were analyzed for the presence of hematopoietic lineages markers. Subsequently, only lin− events were included in region R2, and cells expressing CD133 antigens were further isolated depending on the presence of CD45 antigen (gates R3-4). VSELs are lin− CD45− CD133+ cells (gate R3), whereas hematopoietic progenitor cells (HPCs) are included in gate R4 and constitute the population of lin− CD45+ CD133+ cells.
Subsequently, cells are stained with antibodies against Sca-1 (murine VSELs) or CD133 (human VSELs), the pan-hematopoietic antigen (CD45), hematopoietic lineages markers (lin), and CXCR4, and sorted using multiparameter, live sterile cell sorting systems (MoFlo, Beckman Coulter; FACSAria, Becton Dickinson) [8]. We used an "extended lymphocyte gate" to include events with a diameter of 2-10 μm, covering approximately 95% of VSELs. The width of the gate was validated by using synthetic beads of predefined size (1-15 μm) [12]. Proper definition of the gate is crucial because this area of the cytogram includes not only cells but also cellular debris [13,14]. Several other approaches to define the population of small cells were used, including the ImageStream system, which combines FACS with an immunofluorescent (IF) microscope and allows any particular event on the cytogram to be "decoded" and visualized to confirm morphology and the presence of markers consistent with the VSEL phenotype [13,14]. VSELs were so far isolated in mice (BM, peripheral blood, fetal liver, and several solid organs in adult animals, including brain, heart, retina, kidneys, pancreas, skeletal muscles, spleen, and thymus) and humans [15]. In humans, VSELs were successfully isolated from umbilical cord blood (UCB) and peripheral blood (PB). Currently, our group investigates the presence of VSELs in adult BM and myocardium [16,17].
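The gating logic described above (an extended size gate of roughly 2-10 μm, exclusion of lineage-positive events, then splitting CD133+ events by CD45 status) can be sketched as a chain of boolean filters over tabulated cytometry events. This is purely an illustration of the decision logic; the event table, column names, and threshold values are hypothetical and do not come from the protocol itself.

```python
import pandas as pd

# Hypothetical event table: one row per sorted event, columns are illustrative.
events = pd.DataFrame({
    "diameter_um": [1.5, 4.0, 6.5, 8.0, 12.0, 5.0],
    "lin_pos":     [False, False, False, True, False, False],
    "cd45_pos":    [False, False, True, True, False, True],
    "cd133_pos":   [False, True, True, True, True, False],
})

# R1: extended lymphocyte gate, keeping events between ~2 and ~10 um.
r1 = events[(events["diameter_um"] > 2) & (events["diameter_um"] < 10)]
# R2: exclude events expressing hematopoietic lineage markers (lin-).
r2 = r1[~r1["lin_pos"]]
# Split CD133+ events by CD45 status:
#   VSELs = lin- CD45- CD133+  (gate R3)
#   HSPCs = lin- CD45+ CD133+  (gate R4)
vsels = r2[~r2["cd45_pos"] & r2["cd133_pos"]]
hspcs = r2[r2["cd45_pos"] & r2["cd133_pos"]]
print(len(vsels), len(hspcs))
```

In this toy table, one event survives every VSEL gate and one falls into the HSPC gate; the remaining events are rejected by the size or lineage filters.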
Structural, Molecular, and Functional Characteristics of Freshly Isolated VSELs
VSELs constitute a very small population of cells, and their number is as low as 0.030±0.008% of total BM cells in mice. Both murine BM-derived and human PB-derived VSELs have a significantly smaller diameter than monocytes and granulocytes, and are larger than platelets [4,5]. In addition, some differences between murine and human VSELs were observed. Human UCB- and PB-derived VSELs are lin− CD45− CXCR4+ CD133+ and, consistent with differences in the mean size of leukocytes and RBC between humans and mice, are larger than murine VSELs (~6-7 μm) [13]. The population of Sca-1+ lin− CD45− cells was enriched for PSC markers, such as Oct-4, Nanog, SSEA-1, Rex1, Dppa3, and Rif-1. In addition, VSELs express CXCR4 and migrate along the SDF-1 gradient. Several imaging tools were used to define the morphology of VSELs at the single cell level. Studies using transmission electron microscopy confirmed that VSELs differ in several aspects from HSCs. As shown in Fig. 2, VSELs are significantly smaller than HSCs (3-6 vs. 6-8 μm) and have a higher nucleus/cytoplasm ratio. The nucleus is large, contains open-type chromatin, and is surrounded by a narrow rim of cytoplasm with numerous mitochondria. Therefore, their morphology is consistent with primitive PSCs [8,18,19]. VSELs display a non-hematopoietic immunophenotype (lin− CD45−) and do not offer radioprotection in lethally irradiated recipient mice. Freshly isolated and expanded VSELs do not form hematopoietic colonies in vitro [13,20]. Distinct immunophenotype and size are the major criteria for isolation of VSELs, and the presence of PSC markers was confirmed using real-time RT-PCR and, at the protein level, by IF staining and the ImageStream system. Importantly, because of the possibility of the detection of pseudogenes raised by some investigators, our group recently demonstrated that the promoters of Oct-4 and Nanog in VSELs contain transcriptionally active chromatin [21]. Importantly, VSELs and HSCs differ not only in terms of their morphology.
Freshly isolated VSELs were expanded in co-culture with a C2C12 myoblast feeder layer and, after 7-10 days, formed sphere-like clusters consisting of a few hundred cells resembling embryoid bodies (VSEL-derived spheres, VSEL-DSs). VSEL-DSs expressed placenta-like alkaline phosphatase. Co-culture allowed for expansion of VSELs, and after isolation of expanded VSELs, we demonstrated their capacity to differentiate into cell lines from all three germ layers, such as mesodermal cardiomyocytes, ectodermal neural cells, and endodermal pancreatic cells [22]. Importantly, the pluripotent features of VSELs are retained in the population of murine VSELs mobilized into PB. Such circulating cells showed expression of PSC markers at a level similar to ES-D3 murine embryonic cells. These observations support the hypothesis that mobilized cells might contribute to tissue repair because of their broad differentiation capacity [18].
VSELs display morphological and molecular features of PSCs. Several molecular markers are consistent with a PSC phenotype, such as Oct4 and Nanog, the presence of bivalent domains in promoters of developmentally important transcription factors (Sox21, Nkx2.2, Dlx1, Lbx14, Hlx9), and partial reactivation of the inactivated X chromosome in female PSCs. VSELs differentiate into cells from all three germ layers. Importantly, the pluripotent features observed in murine cells were not demonstrated in human VSELs. In fact, VSELs neither fulfill the criteria for blastocyst complementation nor show the ability to form teratomas in immunodeficient mice. The quiescence of VSELs isolated from adult tissues can be explained by epigenetic modification of crucial imprinted genes, such as Igf2 and RasGRF1. Quiescence is probably a mechanism of prevention against the formation of teratomas [21].
Mobilization, Circulation, and Homing of VSELs
Bone marrow-derived SC must undergo rapid mobilization in order to participate in tissue repair. In addition, there must be a variety of chemical signals to which the VSELs must respond to orchestrate their homing and engraftment into the ischemic or otherwise damaged tissue (irradiation, burns) [23]. Data from animal studies showed rapid mobilization of SC from the BM to the peripheral blood, as evidenced by an increase of circulating cells enriched with early cardiac markers (GATA-4 and Nkx2.5/Csx), which migrated along the SDF-1 gradient [24,25].
Acute Myocardial Infarction
Acute MI leads to a generalized inflammatory response with increased production and release of inflammatory markers, and also of chemoattractant factors such as kinins, chemokines, cytokines, and growth factors, components of the complement cascade. This coexists with subsequent mobilization of SC and endothelial progenitor cells as well as leukocytes [24,[26][27][28].
In healthy humans, a very small number of VSELs [0.8-1.3 cells/μL] can be detected, reflecting continuous efflux of SC from the BM [10,25,28]. During acute MI, the number of VSELs is significantly higher and the expression of mRNA of PSC and cardiac markers is up-regulated [25]. Numerous hematopoietic and inflammatory cytokines known to regulate mobilization of SC are up-regulated in acute coronary syndromes. Several factors involved in trafficking of VSELs are the SDF-1, leukemia inhibitory factor, hepatocyte growth factor, and stem cell factor-CD117 axes [24,29,30]. Figure 3 shows the mechanisms of mobilization and homing of VSELs in acute MI.
The number of mobilized VSELs is dependent on several factors, such as age, presence of diabetes, and the extent of myocardial damage [18]. In our hands, mobilization of VSELs was reduced in elderly and diabetic patients with acute MI [31]. Also, patients with more severely impaired cardiac function (reduced left ventricular ejection fraction, high levels of cardiac troponins) had reduced number of circulating cells [25,32].
Ischemic Stroke
Similar to acute MI, ischemic stroke was associated with a significant increase of the number of circulating VSELs in PB. These cells expressed increased levels of mRNA for neural markers (GFAP, nestin, beta-III-tubulin, Olig1, Olig2, Sox2, and Musashi) as well as PSC markers (Oct-4, Nanog). This suggests that mobilization of VSELs is an important mechanism in different forms of tissue ischemia [33].

Fig. 2 Representative images of human peripheral blood-derived VSEL and HSPC by ImageStreamX system. Human blood cells were stained for markers distinguishing VSELs, such as: (1) CD45 pan-leukocytic antigen (APC-Cy7, cyan), (2) hematopoietic lineages markers (FITC, green), and (3) stem cell antigens CD133 (PE, yellow) and CD34 (APC, violet). Nuclei were stained with Hoechst 33342 dye. Images were collected by the imaging flow cytometer ImageStreamX system. VSELs and HSPCs were distinguished based on CD45 antigen expression.
Physical Exercise
Physical exercise, especially a regular one, was shown to increase the number and improve the function of circulating SC and progenitor cells. Our preliminary data showed that intensive treadmill exercise induced a transient mobilization of VSELs into peripheral blood. An increased number of VSELs was detected as early as after the exercise, and circulating cells showed increased expression of mRNA for PSC (Oct-4, Nanog), cardiac (GATA-4, Nkx2.5/Csx, MEF2C), and endothelial markers (VE-cadherin). Mobilization was negatively correlated with the extent of coronary artery disease in angiography (one- vs. two- and three-vessel disease) [submitted].

Fig. 3 The putative mechanism of mobilization and homing of VSELs in acute myocardial infarction. Myocardial ischemia induces increased expression of chemoattractants [chemokines (SDF-1), growth factors (VEGF, HGF), cytokines (LIF)] and release of phospholipids, predominantly in the infarct border zone. The increased expression of chemoattractants in the ischemic organ creates a reversal of the chemoattractant gradient, leading to the release of VSELs from the bone marrow niches and their homing to the site of the ischemic injury. Within the bone marrow niche, the mobilization of VSELs is orchestrated by expression of matrix metalloproteinases and activation of the complement cascade. Also, phospholipids, such as sphingosine-1-phosphate, influence the mobilization of VSELs. In peripheral blood in healthy subjects, VSELs can be detected at a very low number. After ischemic injury, these cells are rapidly mobilized into peripheral blood, and the expression of pluripotent stem cell markers, as well as early cardiac, endothelial, muscle, and neural markers, is significantly increased. Hence, we hypothesized that mobilization of VSELs is part of a reparatory mechanism activated in the setting of acute myocardial infarction. Also, circulation of VSELs might contribute to the pool of resident cardiac stem cells.
Potential Application of VSELs and VSEL-Enriched Populations in Clinical Trials
The measurement of the number of VSELs might, after validation in a larger population, serve as a prognostic marker in acute MI, because the preliminary data showed an inverse correlation with the extent of myocardial injury and the presence of co-morbidities, such as diabetes. Another approach is to use the cells for prevention of left ventricular dysfunction after MI. We developed a protocol for expansion and differentiation of BM-derived VSELs into cardiac myocytes. The differentiation of murine VSEL-derived cardiac myocytes (CM) closely resembles the same process in embryonic stem cell-derived CM [8,34]. Dawn et al. showed that direct intramyocardial injection of freshly isolated VSELs in mice with reperfused MI improved global and regional left ventricular contractility. Moreover, the beneficial effects were still observed after 35 days of follow-up. The histopathology study showed rare VSEL-derived cardiac myocytes in the recipients' heart muscle [35]. Additionally, expansion and pre-differentiation of VSELs in cardiopoiesis-guided media for 5 days before the injection increased their effectiveness, leading to an increase of left ventricular ejection fraction and myocardial systolic thickening, and attenuated remodeling after 35 days post-MI [36]. Clinical studies using autologous VSELs are needed to validate these promising experimental data, provided that clinically approved protocols for cell expansion and differentiation are available. Several other emerging stem cell technologies, such as CSC, genetically engineered bone marrow progenitor cells (e.g., EPC overexpressing endothelial nitric oxide synthase), allogeneic bone marrow-derived MSC, and cardiopoiesis-guided bone marrow-derived mesenchymal cardiopoietic cells, are under translation into clinical use.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. | 2014-10-01T00:00:00.000Z | 2010-12-17T00:00:00.000 | {
"year": 2010,
"sha1": "e146c8dfc538d4cea80f0b1248bf8afeb29899ce",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12265-010-9254-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e146c8dfc538d4cea80f0b1248bf8afeb29899ce",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
80312783 | pes2o/s2orc | v3-fos-license | Emphysematous Cholecystitis Complicating a Transarterial Chemoembolization Procedure
Transarterial chemoembolization (TACE) is a wellestablished treatment for patients with hepatocellular carcinoma (HCC). Knowledge of the common and rare complications arising from this procedure is necessary for everyone who comes across this patient group. Cholecystitis is a rare complication of TACE procedures due to reflux of embolic material into the cystic artery. Our patient reported abdominal pain, a very common complaint following a TACE procedure. Before disregarding it as an expected side effect of therapy, one must remain vigilant as serious pathology may exist underneath.
Case Presentation
A 61-year-old Asian male presented to the hospital for an elective TACE procedure. He had a history of hepatitis C cirrhosis and HCC, status post left hepatic lobectomy a year prior to this admission. The patient underwent a successful right hepatic artery chemoembolization with a mixture of doxorubicin, ethiodol, and embosphere particles. A few hours following the procedure, he developed moderate right upper quadrant pain, which was managed supportively with intravenous opiates and close monitoring overnight.
Over the next 2 days his pain persisted and was accompanied by intermittent "shakes" and a low-grade fever of 38 degrees Celsius. He was tachycardic to 110 beats per minute but remained normotensive. He had a tender right upper quadrant with a negative Murphy's sign, no rebound, and no guarding. Laboratory markers were remarkable for a white cell count of 22.2 x 10^3 cells/µL, alanine and aspartate transaminases peaking at 1190 U/L and 877 U/L respectively, alkaline phosphatase of 180 U/L, and bilirubin of 1.1 mg/dL. The INR and hemoglobin levels remained stable. A liver ultrasound revealed changes related to the TACE procedure, but also a moderately distended gallbladder with non-dependent air and wall edema. A CT scan of the abdomen and pelvis revealed an infarcted gallbladder with emphysematous cholecystitis and small areas of gallbladder wall perforation (Figure 1). The patient received appropriate resuscitative measures, including antibiotic coverage for intra-abdominal pathogens. The general surgery team was consulted, and given the patient's deteriorating clinical status and impending gallbladder perforation, a decision was made to proceed with an emergent open cholecystectomy. The patient had an uneventful postsurgical course and was discharged home 3 days later. The final pathology report revealed a perforated transmural acute necrotizing cholecystitis with identifiable chemoembolization material.
Discussion
TACE has become a commonly utilized treatment for HCC. It is offered for palliative purposes in inoperable tumors, to shrink tumors prior to surgery or as a bridge to liver transplant [1,2]. Major complications occur in 5% of patients and the risk of death is 1% [3]. Complications related to the procedure include access site injuries, hepatic failure, biloma or abscess formation, pulmonary embolization or cholecystitis [3].
Following a TACE procedure it is not uncommon for patients to experience abdominal pain. This can be accompanied by systemic symptoms such as fatigue and fever, a constellation referred to as postembolization syndrome. Additionally, laboratory abnormalities including leukocytosis and elevation in liver enzyme levels are expected after the procedure. The mechanism of these changes can be explained by hepatocyte damage however some authors postulate cystic artery embolization to be the cause of this pain [4,5].
Therefore on many occasions it may be difficult to discern the etiology of the pain and imaging may be necessary to rule out serious complications. Unfortunately, there are currently no guidelines or recommendations to image patients with abdominal pain following TACE. Clues to pursue further investigations may include persistence of symptoms and deterioration in the hemodynamic or clinical status of the patient. Cholecystitis is a rare but well-documented complication with a variable incidence ranging from 0.3 to 10% [6].
The gallbladder unlike the liver has a single vascular supply through the cystic artery. This makes the gallbladder susceptible to injury or infarction due to inadvertent embolization during the TACE procedure. The cystic artery arises most commonly from the right hepatic artery before dividing into anterior and posterior divisions. This implies that right hepatic artery chemoembolization, as the case in our patient, carries the highest risk of post TACE cholecystitis [6]. The pathogenesis of cholecystitis in TACE patients is related to gallbladder wall ischemia and infarction which is believed to be related to lipiodol embolization to the gallbladder wall [7].
For most cases, cholecystitis following TACE is a benign and self-limiting complication [6]. Patients usually do not require intervention and can be managed expectantly. However in cases such as ours, when there is evidence of gallbladder perforation or emphysematous cholecystitis, surgical intervention is mandatory [8].
Conclusion
We describe a case of emphysematous cholecystitis complicating a TACE procedure. Early imaging with ultrasound and computed tomography of the abdomen may be essential to assess patients with persistent abdominal pain following the procedure, as serious pathology may exist underneath. | 2019-03-17T13:12:01.371Z | 2017-03-03T00:00:00.000 | {
"year": 2017,
"sha1": "a469cd7e045ac122fcbec7a3155a0a7f151e60a6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.19080/jojcs.2017.02.555584",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d565e10d690e3c3d8b5b64ea26cf608d05195cff",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
139183227 | pes2o/s2orc | v3-fos-license | Abrasive wear of Cemented Granular Composites: Experiments and Numerical Simulations
The results of earlier experimental work and numerical calculations to determine the effects of technological parameters on the abrasion resistance of concrete are summarized, and new data are presented. The dependence of the near-surface stress-strain state on the geometric and stiffness parameters is analyzed. Particular attention is paid to the stages preceding the destruction of the adhesive interaction between the cement matrix and grains. Numerical calculations were carried out in the ANSYS software. This numerical model may be useful for understanding heterogeneous material behavior in terms of wear, depending on the geometrical characteristics of the structure, the mechanical properties of the material, and the interactions occurring in the near-surface layer. Adhesive laws give very interesting advantages, the most important of which is the ability to model the evolution of damage throughout the life of building constructions.
Introduction
Abrasion-erosion damage of concrete surfaces results from the abrasive effects of sand, gravel, rocks, ice, and other debris impinging on a hydraulic structure, and from the continued movement of wheels on transport pavements during operation. The rate of erosion depends on a number of factors, including the size, shape, quantity, and hardness of the impinging particles, and the quality of the concrete. While high-quality concrete is capable of resisting high loading for many years with little or no damage, concrete cannot withstand the abrasive action of debris grinding over or repeatedly impacting its surface. In such cases, abrasion erosion ranging in depth from a few centimeters to a meter or more can result, depending on the operating conditions. These features of the abrasion, deterioration, and fracture processes should be considered when forecasting the life cycle of concrete constructions.
Compared with structures of metal, ceramics, and other materials, applying the theoretical and experimental methods of contact mechanics to assess the strength and durability of concrete structures is difficult due to the significant heterogeneity of concrete. Therefore, for a long time the abrasion resistance of concrete was investigated mainly on an experimental basis [1][2][3][4][5][6][7]. Abrasion wear was predicted using different empirical and statistical models, depending on the amount and quality of the data [8][9]. Models of concrete abrasion that take its heterogeneous structure into consideration are presented in [10] and developed in [13]. In these papers, surface abrasion is presented as the result of the turning and falling out of aggregate grains from the cement-sand matrix. The loss of grains occurs when the exposure of the grains reaches a certain size comparable to the size of the coarse aggregate. At the same time, over a sufficiently long operational period of a structure, surface abrasion takes place only through the wear of thin surface layers. In this regard, it is quite difficult to select criteria for the application of mathematical models of concrete surface abrasion at different structural levels.
In this work, based on the obtained experimental data and numerical calculations, we study the strength properties of concrete as a composite material in the process of its abrasion. Particular attention is paid to the stages preceding the deterioration of the material. Numerical calculations were carried out using an ANSYS software simulation model.
Statement of the problem
The abrasion process, according to [11][14][15][16], can be divided into the following stages. Stage I – the initial stage, in which the cement stone and solvation shells deteriorate and the grains of coarse and fine aggregate become exposed. Stage II – the normal operating stage, in which the grains of coarse aggregate and the cement-sand matrix are abraded, at different speeds depending on their physical properties; after the exposure of the surface of the coarse aggregate, the abraded surface is a set of areas with different tribological characteristics. Stage III – the destruction of the concrete surface. There are two possible cases: case 1 – fatigue failure of the matrix (extensive micro-cracking) between the grains of coarse aggregate, which leads to grain loss and is typical of concrete with a low-strength cement-sand matrix; case 2 – propagation of a fatigue macro-crack along the border between the grains of coarse aggregate and the cement-sand matrix, which is typical of high-strength concrete (Figure 1).
Figure 1. Failure mechanism associated with exposed aggregates (Stage III).
In the paper [12], the choice of structural levels was based on experimental investigations. In order to apply the mathematical models, we carried out a series of experiments using a standard method (GOST 13087). In addition, signals from 0.5-mm-long resistance strain gauges near the abraded surface were registered by an ADC-DAC module ZET210. As a result, it was found that over a certain, sufficiently long period of time, the horizontal strains are negligible and stable (stage I and stage II). After that, the strain amplitude increases abruptly (transition to stage III). The greatest horizontal strains were observed at a height of 0.7-1.5 times the maximum size of the coarse aggregate, rather than in the immediate vicinity of the abraded edge. This is explained by the work of the friction forces. With the help of the sensors, a strain maximum was recorded at the border between aggregates and matrix, which is explained by the concentration of stresses around the solid inclusions. In [11][12] the development of fatigue cracks was also tested. The experimental results showed that the cause of crack propagation at the contact is that the relative deformations reach the limit values for concrete.
These experimental studies have shown that the abrasion of the concrete surface in the initial stages (stages I and II) occurs in a sufficiently thin (<1 mm) layer and does not change the structure of the material in the near-surface zone. The rate of abrasion depends on the tribological characteristics of the concrete surface. This subsequently allowed a more justified application of the mathematical apparatus of contact mechanics.
Mathematical modelling
On the basis of the obtained experimental data, mathematical modeling of abrasion in the initial stage I reduces to modeling the abrasion of concrete as a homogeneous material [11][12]. The degree of abrasion is defined by the wear rate w*/t and depends on the sliding speed v, the pressure p at the contact surface, and the material hardness H, as well as on parameters that take a specific value for each abrasion process and are used for its modeling.
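The stage-I dependence described above (wear rate governed by speed, pressure, and hardness) can be sketched with an Archard-type wear law. This is an illustrative assumption, not the paper's exact model: the wear coefficient k and all numeric values below are hypothetical.

```python
# Illustrative Archard-type estimate of the linear wear rate w*/t.
# All parameter values are hypothetical; k is a dimensionless wear
# coefficient that must be fitted for each abrasion process.

def wear_depth_rate(k, p, v, hardness):
    """Linear wear rate [m/s] for contact pressure p [Pa],
    sliding speed v [m/s], and material hardness H [Pa]."""
    return k * p * v / hardness

p = 1.0e6          # contact pressure, Pa (assumed)
v = 0.5            # sliding speed, m/s (assumed)
k = 1.0e-4         # wear coefficient (assumed)
H_matrix = 0.5e9   # hardness of the cement-sand matrix, Pa (assumed)
H_grain = 5.0e9    # hardness of an aggregate grain, Pa (assumed)

rate_matrix = wear_depth_rate(k, p, v, H_matrix)
rate_grain = wear_depth_rate(k, p, v, H_grain)
# The softer matrix wears faster than the grains, so the grains
# gradually become exposed on the surface.
```

Because the matrix here recedes an order of magnitude faster than the grains, a two-phase surface develops the exposed-grain relief that characterizes the transition to stage II.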
Mathematical modeling in stage II comes down to modeling the abrasion of a material with an inhomogeneous structure. The softer cement matrix fails first; as a result, coarse aggregate grains are exposed on the surface. At this stage, the mathematical models used for stage I are no longer applicable. The abrasion process should be studied simultaneously on the microscale (matrix surface) and on the mesoscale (exposed aggregates), which leads to a change in the surface shape [13]. Here it is acceptable to use the mathematical tools of the mechanics of frictional interaction [14][15][16]. We assume herewith that the pressure p and the speed v are constant. The concrete surface may then be represented as an elastic half-space strengthened in circular domains ω_ij of radius a. The distance between the centers of the strengthened zones along one axis equals l. The application of this mathematical model to strengthened areas of various geometric shapes (a square, an octagon, a circle) is described in detail in [17][18][19]. The presented integral solutions take into account the geometrical parameters of the hardened areas, their number, and the tribological characteristics of the material (hardening parameters and the size of the hardened areas). The obtained numerical results agreed well with experimental data [17,19].
Numerical simulation and results
Numerical image-processing techniques and parameterization modeling techniques are the most popular approaches to three-dimensional (3D) modeling of the different material phases in a concrete mixture. However, due to the difficulty of 3D mesostructure modeling and the high computational cost, most current stress-strain studies use two-dimensional (2D) models. In studying the abrasion process in stage III, the near-surface layer is of the most interest [20], so we modelled this layer as an elastic half-plane with grain inclusions. In the 2-D mesoscopic simulation, the near-surface level is represented as an elastic half-plane with circular inclusions characterized by the grain spacing l, grain size a, and protrusion height h (Figures 2, 3). To study concrete abrasion in stage III as an interaction between structural components, we accepted that the real dynamic loading of the wear process can be reduced to applying a static load to the grains. Thus, we fix the lower part of the half-space and apply the load p to half of the exposed part of the grains (Figure 2). The finite element method (FEM) is the most suitable for the solution of such problems. All calculations were carried out using the ANSYS software [21]. PLANE182 elements are used for 2-D modeling of solid concrete structures. In the present work, experimental and numerical investigations have been performed to analyze the stress-strain state around the grains and to determine the expected failure zone locations. Six numerical experiments were carried out with different geometric and physical parameters of the cement-sand matrix and grains; the parameters [18] are listed in Table 1, where E1 is the Young's modulus of the matrix, E2 the Young's modulus of the grains, ν1 the Poisson's ratio of the matrix, and ν2 the Poisson's ratio of the grains.
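As a minimal illustration of the geometry parameterization used in this 2-D model (circular grains of radius a, centre spacing l, protruding a height h above the half-plane surface), one might generate the inclusion centres as follows; the function and all numeric values are hypothetical, not part of the paper's ANSYS model.

```python
# Hypothetical parameterization of the 2-D mesoscale geometry:
# circular grains of radius a whose tops protrude a height h above
# the half-plane surface y = 0, with centre-to-centre spacing l.

def grain_centres(n_grains, a, l, h):
    """Return (x, y) grain centres; y < 0 lies inside the half-plane."""
    y = h - a  # centre depth chosen so each grain protrudes exactly h
    return [(i * l, y) for i in range(n_grains)]

centres = grain_centres(n_grains=4, a=5.0, l=15.0, h=2.0)
# With a = 5 and h = 2, each centre sits 3 units below the surface.
```

Varying a, l, and h in such a generator is one way to set up the six parameter combinations of the kind the numerical experiments explore.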
The results of the numerical calculations of the equivalent (von Mises) stresses are shown in Figures 4-9. Analysis of the graphs showed that the geometrical relationships have a greater effect on the formation of fracture zones of the adhesion boundary near the point of load application. Thus, as the grain size decreases and, correspondingly, the distance between grains decreases, the stress value increases (Figures 6, 8). Conversely, the stresses decrease on the opposite side of the grain (Figures 7, 9). The ratio of the deformation moduli affects the magnitude of the stresses to a lesser extent, but the stress diagrams become more uniform as the difference between the moduli decreases (Figures 6-9).
The results of determining the strain distribution in the matrix around the inclusions (not presented in this paper) made it possible to preliminarily determine the location of the fracture zones, both in the matrix and at the boundaries between the matrix and the grains. Analysis of the results showed that in a material with a weak matrix and a large distance between grains, fracture begins in the near-surface level. Conversely, with increasing grain size and a strong matrix, the destruction occurs at a certain distance from the surface, comparable to the grain size. These results are in good agreement with the results of the experimental studies [11][12][13].
Summary
This paper developed a mesoscale finite element model for the investigation of complex fracture in the near-surface layer of abraded concrete. The nucleation and propagation of microcracks and macrocracks in a 2D half-plane is realistically modeled in detail, with a few important conclusions drawn. The effects of coarse aggregate distributions on the abrasion resistance of concrete are also evaluated. | 2019-04-30T13:08:53.794Z | 2018-12-31T00:00:00.000 | {
"year": 2018,
"sha1": "3a50d28f638fdde904c2ce6bbf8af6545b8e404f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/463/3/032002",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ff07b6cc1a3aac2e695b02ba9468ddb45a671197",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
158638965 | pes2o/s2orc | v3-fos-license | Multimodal Discourse Analysis in Dettol Tv Advertisement
TV advertisement is one of the various kinds of mass-media advertising that inevitably surrounds people's lives today. It is a multimodal discourse in which the text comprises complex resources of meaning. The complexity of meaning arises because the message delivered in an advertisement uses not only verbal language but also visual images, which work together as a unit of meaning. This study observes a TV advertisement featuring Dettol (protecting-children version) that manifests verbal and visual elements. Under the guidance of functional grammar and visual grammar, this study attempts to identify the multimodal elements that comprise the advertisement and how these elements express meanings that strengthen the message intended by the producer. The analysis follows the Systemic Functional Linguistics proposed by Halliday (2004). Furthermore, the multimodal discourse analysis is conducted by combining multimodal theory from Anstey and Bull (2010) and Kress and Van Leeuwen (2006), while the generic structure of the advertisement is determined following Cheong's formulation (2004). This study follows the procedures for analyzing multimodal discourse, including verbal and visual elements, proposed by Hermawan (2013). The findings suggest that a theoretical framework based on functional grammar and visual grammar is well suited to the multimodal discourse of TV advertisement. Linguistic and non-linguistic analyses together provide a clearer picture of the message delivered in TV advertisement.
Introduction
In using language, non-verbal elements, including the visual images that accompany the verbal ones, are often ignored. It must be realized that language users can only grasp the entire meaning or message delivered through verbal language by conjoining it with the non-verbal elements, which are functional in social contexts [13]. Understanding language (text) from a single viewpoint is called mono-modal, while understanding text from more than one viewpoint is called multimodal. Multimodal discourse analysis (MDA for short), as the confluence of discourse and technology, is becoming a paradigm in discourse studies which extends the study of language per se to the study of language in combination with other resources, such as images, scientific symbolism, gesture, action, music and sound [10]. One of the texts which use several modes to create a single artifact is advertisement.
Advertising is a means of communication with the users of a product or service.
In more detail, advertisement is message paid for by those who send them and is intended to inform or influence people who receive them. Advertising is always present, though people may not be aware of it. In today's world, advertising uses every possible media to get its message through. It does this via television, print (newspapers, magazines, journals and many more), radio, press, internet, direct selling, hoardings, mailers, contests, sponsorships, posters, clothes, events, colors, sounds, visuals and even people (endorsements). This study observes TV advertisement featuring Dettol (protecting children version) which manifests verbal and visual elements. Under the guidance of Linguistic Functional Grammar and visual grammar, this study attempts to look at any multimodal elements which comprise the advertisement and how these elements express meanings that strengthen the message intended by the producer.
Most studies on MDA mainly focus on the visual images guided by visual grammar, without sufficient attention to verbal text and sound in multimodal discourse.
To comprehensively understand multimodal analysis, especially of TV advertisement, it is worthwhile to conduct an integrated analysis of various modalities by combining the existing analysis methods. Therefore, the focus is put equally on verbal, visual, and audial analysis of the TV advertisement, as much as possible, hoping to present a comprehensive understanding of the ad.
In detail, the study in this paper is arranged to analyze linguistic features under the umbrella theory of Halliday's functional grammar (1994). The analysis is based on the metafunction system, which comprises three components: ideational, interpersonal, and textual. However, this study is limited to the ideational function in transitive clauses. The role model applied in this paper follows Gerot and Wignell (2001) and Sinar (2012). The analysis is focused on verbal and visual elements, which include three categories: circumstance, process, and participant. Furthermore, the MDA is developed by conjoining the multimodal theories proposed by Anstey and Bull (2010) and Kress and Van Leeuwen (1996, 2006). The goal of this paper is to examine the multimodal images generated from the screenshots to illustrate the development process of the ad. Secondly, the analysis is complemented with analysis of the audial, gestural, and spatial elements which highlight the ad.
Research Method
This study basically followed qualitative method with the general principles to draw the
Discussion
From the start, this ad is dominated by two major characters, a mother and her son, who appear together in most parts of the ad. Every scene represents the role of the mother toward her son: she holds, hugs, and carries her son in order to protect him from (in this case) germs. The ad is intended to deliver the message that the product can take over the mother's role of protecting her son from germs. These scenes are then analyzed in terms of verbal, visual, and audial modalities.
Linguistic analysis
The linguistic analysis of the Dettol TV advertisement examines clauses based on the metafunction system under Halliday's (1994) Functional Grammar as the umbrella theory.
This study is focused on the ideational function of transitive clauses. The ideational function enables us to express patterns of experience and to conceptualize situations, processes, or states of affairs. The analysis is limited to the verbal elements of the ad, which include the three semantic categories: circumstance, process, and participant.
One of the clauses analyzed in this paper is anda harus selangkah lebih maju untuk melindungi anak dari kuman ('you must be one step ahead to protect your child from germs'). Syntactically, this clause is in the active voice. There is one participant in this clause, "anda" ('you'), as the carrier. The clause uses a relational process, represented by "harus selangkah lebih maju" ('must be one step ahead'). This process is intensified by an attributive value medium; with this process, the relevant participant is the carrier. "Untuk melindungi anak dari kuman" ('to protect the child from germs') stands for the causative meaning of the circumstantial elements with goal orientation. Another linguistic analysis is represented by a non-verbal clause, which is visualized in the following scene.
Visual analysis
The linguistic features which can be interpreted from the scene above are: 1) "A mother holds her son on the way to school in the morning" as the background, and 2) "A man sweeps the path/sidewalk" as the foreground.
The clause "a mother holds her son on the way to school in the morning" has an experiential material process with two participants: "a mother" as the actor and "her son" as the goal or recipient. The circumstances in this clause are "on the way to school" (place) and "in the morning" (time). The second clause, as the foreground, is not analyzed on this occasion, for it is not performed by the main characters.
Visual analysis
As mentioned above, the visual analysis of the Dettol TV advertisement (protecting-child version) shows a relation between the first and the second action. In this case, the moving image becomes very dynamic.
The generic structure of the advertisement
Cheong's (2004) generic structure of the ad can be clearly seen in the following image (2), in which the product image with a clear emblem is presented in the foreground.
The other elements, visualized in contrast, are the announcement "original anti bakteri" ('original antibacterial') and the enhancer "perlindungan terpercaya" ('trusted protection'). An incongruent display is also shown, in which the type of the product is realized by using a symbol.
(Image 2 labels: Emblem, Display, Enhancer, Announcement)
Furthermore, the following image visualizes that demand makes a direct interaction between participants and the viewers or audiences via eye contact, as seen below.
This ad also presents salience, that is, a message delivered by the participant to the viewers. The message in this case clarifies the significant value the viewers will get from this product (compared to others) if they use it: the body will be protected from germs, as visualized by the circle frame with the distinctive color.
The coloring of the following image looks different from the other features. This distinctive color has the potential to convey meaning to the users (lead). The lead is visualized with a different quality of color compared to the other visuals.
Audial analysis
This advertisement uses instrumental music as a sound effect while the verbal announcement from the participant is uttered. The sound effect is not dominant, as it functions merely to accompany the verbal announcement. The instrument has a slow rhythm, suited to the pitch and tone of the informer.
Gestural analysis
The speed of body movement and the facial expression constitute the gesture of the participant. Gesture in this ad is realized from the activity of the participants in the way they carry out their daily life. As seen in images 1 and 5, one participant (the mother) protects the other participant (the son), as in the ad's motto. However, in image 5 the role of the mother as protector is replaced by the product. Another gestural analysis can be seen in image 4, which visualizes the activity of taking a shower using the product.
Spatial analysis
As a whole, the position of the product in the frame manifests the interrelated meaning of the ad. Each image has its own meaning; however, the images support each other to convey the entire meaning and message to the viewers as intended by the producer. Furthermore, the activity of the participants, the special quality of the product, and the ease of obtaining the product are visualized through the image space of the active participants.
Conclusion
The Dettol TV advertisement (protecting-child version) observed here contains various semiotic elements, as covered by multimodal discourse analysis. The study includes the analysis of transitive clauses, in which the material process occurs more dominantly than the other processes. The visual elements, including the generic structures of the ad, serve to deliver the entire and complete meaning to the viewers. Audial, spatial, and gestural analyses add to a complete understanding of the message and meaning as the producer intends. | 2019-01-02T00:10:28.373Z | 2018-04-19T00:00:00.000 | {
"year": 2018,
"sha1": "1f5bafd352e4d247c9d01d98a38dde2cd7ae3764",
"oa_license": null,
"oa_url": "https://knepublishing.com/index.php/KnE-Social/article/download/1932/4319",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1f5bafd352e4d247c9d01d98a38dde2cd7ae3764",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
251743273 | pes2o/s2orc | v3-fos-license | Sleep patterns and intraindividual sleep variability in mothers and fathers at 6 months postpartum: a population-based, cross-sectional study
Objectives Given that postpartum sleep is an important family process, further investigations including both mothers and fathers are necessary. The present study aimed to describe and compare sleep patterns and intraindividual night-to-night variability in mothers and fathers at 6 months postpartum using subjective and objective sleep measures. Design Cross-sectional study. Setting General community-based study in Montreal, QC, Canada. Participants Thirty-three couples (mothers and fathers) with no self-reported history of medical and mental health conditions participated in this study. Results Parental sleep was measured across 10 consecutive nights using both a daily sleep diary and actigraphy. Results demonstrated that mothers’ subjective and objective sleep was more fragmented compared with fathers (shorter longest consecutive sleep duration and more nocturnal awakenings; p<0.001). While mothers and fathers did not differ in their self-reported nocturnal sleep duration (p>0.05), actigraphy indicated that mothers obtained significantly longer nocturnal sleep duration (448.07 min±36.49 min) than fathers (400.96 min±45.42 min; p<0.001). Intraindividual sleep variability was revealed by relatively high coefficients of variation for parents across both subjective and objective indices related to sleep fragmentation (between 0.25 and 1.32). Actigraphy also demonstrated variability by mothers sleeping 6 hours consecutively on less than 3 nights, 27.27% (±22.81), and fathers on less than 6 nights, 57.27% (±24.53), out of 10. Associations were also found between parental sleep and family factors, such as age and infant sleep location (p<0.05). Conclusions These findings advance our knowledge of how sleep unfolds within the family system beyond the early postpartum weeks and/or months. Given the link between disturbed sleep and family functioning, the current research accentuates the importance of examining postpartum sleep patterns and variability in parents.
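The intraindividual variability reported above is the coefficient of variation (CV = standard deviation / mean) of a sleep index computed across each parent's recorded nights. A minimal sketch with hypothetical data (the nightly counts below are invented, not the study's data):

```python
# Coefficient of variation (CV = SD / mean) of one parent's nightly
# sleep index across the recording period; a higher CV means greater
# night-to-night variability. The data below are hypothetical.
from statistics import mean, stdev

def coefficient_of_variation(nightly_values):
    """CV of a list of nightly values (sample standard deviation)."""
    return stdev(nightly_values) / mean(nightly_values)

# Hypothetical nocturnal awakening counts over 10 recorded nights
mother_awakenings = [3, 5, 2, 6, 4, 3, 7, 2, 5, 4]
cv = coefficient_of_variation(mother_awakenings)
```

A CV near 0 would mean nearly identical nights; values toward 1 and above, like those reported for the fragmentation indices here, indicate substantial night-to-night change relative to the individual's average.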
-In regards to this statement in the discussion: "Additionally, mothers on maternity leave (employment status) reported more night awakenings. This supports the notion that mothers on maternity were likely to be the designated parent to attend to their infant throughout the night." It's not clear that this finding supports this claim. It could be possible that mothers on leave are spending more time in bed or sleeping during the daytime in an effort to try to catch up on missed sleep, resulting in sleep disruptions that are similar to those with behavioral insomnia that have increased their time in bed to a point that sleep efficiency is decreased.
GENERAL COMMENTS
Comments on "Sleep patterns and intra-individual sleep variability in mothers and fathers at 6 months postpartum: A population-based, cross-sectional study": it is meaningful research with a great workload. They measured thirty-three couples' (mothers and fathers) sleep across 10 consecutive nights using a daily sleep diary and actigraphy. It is commendable that they included the fathers' sleep patterns in the research and measured them subjectively and objectively. These findings advance our knowledge of how sleep unfolds within the family system.
However, as the authors note in the limitations, the small sample size may bias the results and their generalizability. Additionally, multivariate analysis was not conducted to control for confounding factors.
Please find the detailed comments as follows: 1. Inclusion criteria: did the mothers or fathers enrolled have sleep problems before the birth of the infant? 2. Subjective sleep was assessed by the question "longest consecutive sleep duration." How did they know that? 3. Since 81.80% of mothers reported being on maternity leave, they may have taken naps during the daytime. Therefore, daytime sleep may influence the nocturnal sleep pattern. The daytime sleep data from the actigraphy device could be analyzed. 4. Infant health status should be considered. 5. Three objective sleep variables were derived by computing means of the actigraphy data across 10 consecutive nights: (1) nocturnal sleep duration, (2) longest consecutive sleep duration, and (3) number of nocturnal awakenings. Did the three variables reflect the sleep quality? Did the 10 consecutive nights mean that weekdays and weekends were included for all parents? What is the difference between them? Was the data on the duration of nocturnal awakenings analyzed? 6. There was something wrong with the typesetting, and Table 3 and Table 4 could not be seen clearly. We agree that it would be more useful to report the mean and standard deviation, and we thank the reviewer for this suggestion. The presentation of results has been modified (pages 11 and 15-16).
7. Same issue for table 4. Table 4 formatting has been modified and should now be legible (page 14).
8. Please report total 24 hour sleep duration for mothers and fathers. The authors hypothesize that " mothers likely had more flexible sleep schedules than fathers', whose opportunities for sleep were restricted to the nighttime." The 24 hour sleep duration that should be available from actigraphy would address this.
Thank you for outlining this important question. The 24-hour sleep duration was indeed available from actigraphy and has been added in the Results section (page 11). We have integrated this point with our initial statement in the Discussion section to reflect the suggestion made by the reviewer (page 18): Results section (page 11): "Additionally, mothers demonstrated more objective 24-hour sleep duration (512.15 minutes ± 34.86 minutes) than fathers (446.71 minutes ± 44.19 minutes; p < 0.001)." Discussion section (page 18): "Moreover, actigraphy data revealed that mothers slept more than fathers throughout a 24-hour period. Therefore, mothers likely had more flexible sleep schedules than fathers, whose opportunities for sleep were restricted to the nighttime." 9. In regards to this statement in the discussion: "Additionally, mothers on maternity leave (employment status) reported more night awakenings. This supports the notion that mothers on maternity were likely to be the designated parent to attend to their infant throughout the night." It's not clear that this finding supports this claim. It could be possible that mothers on leave are spending more time in bed or sleeping during the daytime in an effort to try to catch up on missed sleep, resulting in sleep disruptions that are similar to those with behavioral insomnia that have increased their time in bed to a point that sleep efficiency is decreased.
Thank you for highlighting this point. We have made changes to reflect this comment in the Discussion (pages 20-21). "Additionally, mothers on maternity leave reported more night awakenings than actively working mothers. While mothers on maternity leave likely tended to their infant during the nocturnal period, it is also plausible that they engaged in daytime sleep to compensate for nocturnal sleep disruptions. Increased time in bed throughout the day may have thus interfered with mothers' nocturnal sleep."
REVIEWER COMMENTS AUTHOR'S RESPONSE
Reviewer 2 1. It is meaningful research with a great workload. They measured thirty-three couples' (mothers and fathers) sleep across 10 consecutive nights using a daily sleep diary and actigraphy. It is commendable that they included the fathers' sleep patterns in the research and measured them subjectively and objectively. These findings advance our knowledge of how sleep unfolds within the family system. However, as the authors note in the limitations, the small sample size may bias the results and their generalizability. Additionally, multivariate analysis was not conducted to control for confounding factors.
We thank the reviewer for their comment. We do acknowledge that the smaller sample size may bias the results and generalizability (this is elaborated upon in the limitation section of the manuscript and also in the comments: Editor's comment #3 and Reviewer 2 comment #10). The confounding factors (mostly family factors) were used to address our second objective: to assess associations between parental sleep and family factors. Since the majority of these family factors did not correlate with parental sleep, especially the actigraphy data, they were indeed not used as confounding variables in the first objective. Moreover, considering the smaller sample size, it would be difficult to add covariables in the first section, which is more descriptive.
2. Inclusion criteria: Did the mother or father enrolled have sleep problems before the birth of the infant? Participants with severe medical health conditions and sleep disorders (using sleep medication) were excluded. However, the goal was to recruit participants from a general population; we did not set specific criteria related to sleep quality. The subjective and objective sleep durations reported in the current sample fall within the range of sleep duration healthy adults should achieve (7 to … hours). The inclusion criteria now read: "… (3) no self-reported history of chronic medical illness, (4) no self-reported past or current diagnosed mental health conditions, (5) no self-reported sleep apnea or use of sleep medication, and (6) no parental report of diagnosed medical illness amongst infants." 3. Subjective sleep was assessed by the question "longest consecutive sleep duration." How did they know that?
We apologize if that aspect was not clear in the manuscript, but the question about longest consecutive sleep duration was not directly asked to the participants. The longest consecutive sleep duration was rather retrieved from the sleep diary. For each night of participation, parents completed a sleep diary that consisted of a visual representation of the night, depicted by a continuous line divided into boxes, with one box corresponding to 1h (which was further divided by lines denoting 15-minute blocks). Parents were instructed to shade in the boxes corresponding to their estimated sleep period every morning to report their nocturnal sleep patterns (unshaded boxes represented wake period during the night).
We subsequently scored their completed sleep diary and obtained an estimate of their longest consecutive sleep duration. This comment has been addressed in the Method section (pages 8-9). "The diary consisted of a visual representation of each night with one box corresponding to 1 h (which was further divided by lines denoting 15-minute blocks). Parents were instructed to shade in the boxes corresponding to their estimated sleep period every morning to report their nocturnal sleep patterns (unshaded boxes represented wake periods during the night). Three subjective sleep variables were then derived by computing means of the sleep diary data across 10 consecutive nights: (1) nocturnal sleep duration, (2) longest consecutive sleep duration, and (3) number of nocturnal awakenings." 5. Infant health status should be considered. Infant health status was considered with regard to our inclusion criteria. That is, only infants with no diagnosed medical conditions were included; this was based on parental report and not an independent physical examination of infants. We have now updated our statement on the inclusion criteria in the manuscript (page 6): "… (6) no parental report of diagnosed medical illness amongst infants." 6. Three objective sleep variables were derived by computing means of the actigraphy data across 10 consecutive nights: (1) nocturnal sleep duration, (2) longest consecutive sleep duration, and (3) number of nocturnal awakenings.
Did the three variables reflect the sleep quality?
Did the 10-consecutive nights mean that the weekday and weekends were included for all parents? What is the difference between them?
These three objective sleep variables do not necessarily reflect sleep quality. They were measured by actigraphy, which refers to continuous activity monitoring using a wristwatch-like device worn on the non-dominant arm. It is a naturalistic and non-invasive indicator of sleep patterns in adults that does not interfere with the families' routine (Paquet et al., 2007). Yes, weekdays and weekends were included for all parents across the 10 consecutive nights. Although we did not perform this analysis previously, we went back to the actigraphy data in light of the reviewer's question. We retrieved the individual night-to-night actigraphy data and calculated new distinct means (for the weekdays and for the weekends) for both mothers and fathers for nocturnal sleep duration, night awakenings, and 24-hour sleep duration. Then, we used paired-sample t-tests to compare actigraphy sleep variables between weekdays and weekends for both mothers and fathers. Mothers: there were no significant differences between mothers' weekday and weekend nocturnal sleep duration (…). Was the data on the duration of nocturnal awakenings analyzed? Night awakenings and consecutive sleep duration were used as measures of sleep fragmentation. Considering the small sample size and the number of different analyses conducted, we tried to limit the number of sleep variables, especially when reflecting the same construct. As an additional note, actigraphy data were analysed this way: an epoch was set at 1 minute, at medium sensitivity (40-activity-count threshold). An epoch was scored as an awakening if the number of activity counts was >40 for 1 minute, based on the assumption that there is less movement during sleep and more during wake. This is highlighted in the Method section (page 9).
7. There was something wrong with the typesetting, and Table 3 and Table 4 could not be seen clearly. When the manuscript was uploaded, the formatting for Tables 3 and 4 was altered. We apologize for this error; all tables should be legible upon resubmission.
8. Multivariate analysis was not conducted to control the confounding factors.
As explained in the first comment (Reviewer 2 comment #1), the confounding factors (mostly family factors) were used to address our second objective: to assess associations between parental sleep and family factors. Since the majority of these family factors did not correlate with parental sleep, especially the actigraphy data, they were indeed not used as confounding variables in the first objective. Moreover, considering the smaller sample size, it would be difficult to add covariables in the first section, which is more descriptive. 10. Lack of sample size calculation. Following the reviewer's comment, we computed a post-hoc power calculation. Considering the effect sizes obtained in the analyses (r between 0.37 and 0.51) and the sample of 33 families, along with an alpha of 0.05, the post-hoc power is between 58% and 94%. Some authors suggest that post-hoc power should be calculated mainly to ensure that non-significant findings are not simply due to low statistical power (Hoenig & Heisey, 2001).
GENERAL COMMENTS
The revisions have significantly strengthened the manuscript.
GENERAL COMMENTS
It is a meaningful and interesting research. The authors provided subjective and objective instruments to measure sleep patterns between mothers and fathers. The former reviewers have pointed out the shortcomings. The authors have revised them. I think it is OK to accept this article. | 2022-08-24T06:17:55.254Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "85172756198e570c509e4be53fcfa8993c4fafa6",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/12/8/e060558.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e3902b75f82bdcf7207add706be4e76db39f4743",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258744588 | pes2o/s2orc | v3-fos-license | Epilepsy, gut microbiota, and circadian rhythm
In recent years, relevant studies have found changes in gut microbiota (GM) in patients with epilepsy. In addition, impaired sleep and circadian patterns are common symptoms of epilepsy. Moreover, the types of seizures have a circadian rhythm. Numerous reports have indicated that the GM and its metabolites have circadian rhythms. This review will describe changes in the GM in clinical and animal studies under epilepsy and circadian rhythm disorder, respectively. The aim is to determine the commonalities and specificities of alterations in GM and their impact on disease occurrence in the context of epilepsy and circadian disruption. Although clinical studies are influenced by many factors, the results suggest that there are some commonalities in the changes of GM. Finally, we discuss the links among epilepsy, gut microbiome, and circadian rhythms, as well as future research that needs to be conducted.
Introduction
Epilepsy is a chronic neurological disorder: a recurrent and transient brain dysfunction syndrome caused by abnormal synchronous firing of neurons (1). There are more than 65 million patients with epilepsy worldwide, with the majority coming from low-income countries (2). Due to seizures, mental disorders, cognitive deficits, and adverse drug reactions, epilepsy severely impacts patients' daily life. Although great progress has been made in the diagnosis and treatment of epilepsy in recent decades, the specific mechanisms of its onset and development still require further study. Current studies mainly focus on neuronal network reorganization, neuroinflammation, abnormal neurotransmitter release, axonal sprouting, and cell death (3,4). Moreover, approximately 30% of epilepsy patients are resistant to conventional antiseizure medications (ASMs), which makes deeper research on epilepsy all the more essential (5,6).
Recently, the microbiota-gut-brain (MGB) axis has attracted growing interest in neuroscience for its bidirectional communication role among neuronal, endocrine, metabolic, and immunological pathways. Studies have demonstrated that the intestinal flora plays a key role in central nervous system homeostasis, cognitive development, and behavior (7). In addition, gut microbiota (GM) dysbiosis is associated with the onset and development of a number of neurological disorders, such as autism, multiple sclerosis, Parkinson's disease, and Alzheimer's disease. Alterations in the composition or function of the GM have been reported in epilepsy, particularly intractable epilepsy (8). Therefore, the GM could become a target for drug treatment of epilepsy or serve as a biomarker.
With the increased attention given to the circadian rhythm of epilepsy in recent years, chronotherapy has made great strides in the field of epilepsy treatment. However, to date only limited information on chronotherapy in patients with epilepsy is available. The circadian rhythm of seizures varies between different types of epilepsies (9). Choosing an appropriate medication timing according to the circadian rhythm can not only reduce the recurrence of seizures but also improve patient compliance, reduce adverse drug reactions, and improve patients' quality of life (10,11). Recently, the bidirectional relationship between the GM and the circadian rhythm has also been extensively explored (12).
Only limited literature has focused on the relationship among the GM, circadian rhythm, and epilepsy. In this review, we summarize the role of the GM and the circadian rhythm in the development of epilepsy.
Gut microbiota and epilepsy

Gut microbiota
Gut microbiota is a complex microbial community composed of 10^8-10^11 cells of more than 1,000 different species that exert immune, metabolic, and protective functions through short-chain fatty acids, cytokines, and neurotransmitters (13). Bacterial communities, yeasts, fungi, protozoa, archaea, and viruses together maintain the balance of the ecosystem. Among them, Firmicutes and Bacteroides are the two most important phyla, followed by Actinobacteria, Proteobacteria, Verrucomicrobia, and Fusobacterium (14). The main function of the Bacteroides phylum is carbohydrate degradation, energy production and conversion, and amino acid transport.
Their numbers decrease in intestinal microenvironmental disorders, obesity, and malnutrition. Unlike Bacteroides, Firmicutes increase in dysbiosis and obesity in mice. The abundance of Proteobacteria is very low in healthy people and upregulated in people with metabolic disorders or obesity. Actinomycetes are anaerobic flora involved in maintaining homeostasis of the intestinal tract; they constitute a small proportion of the GM and decline with aging (15). The GM undergoes dynamic changes under the dual influence of internal (genetic) and external (nutrition, environment, infections, etc.) factors. Newborns mainly acquire actinomycetes and proteobacteria from their mothers, and their microbiota is less diverse (16). Influenced by many factors such as prematurity, delivery methods, feeding methods, and antibiotic use, infants form a complex GM after 1 year of age, just like adults (17).
The MGB axis is a two-way information communication system that integrates brain and gut functions. The diversity of the GM is essential not only for gut health but also for the physiological function of other organs, especially the brain. There is an increased incidence of epilepsy in various gastrointestinal diseases. In turn, epilepsy affects the gastrointestinal tract in different ways (18). A cross-sectional study showed that irritable bowel syndrome (IBS) was more frequent in people with epilepsy compared with healthy controls, and IBS is associated with a greater burden of affective symptoms and insomnia (19). Another clinical study also indicated a significantly higher prevalence of functional gastrointestinal disorders in epileptic patients than in healthy individuals (20). These results suggest a bidirectional link between the gastrointestinal tract and epileptogenesis.
Ketogenic diet (KD) and gut microbiota
Among the factors influencing the components of the GM, diet is the most important one. Diet causes changes in the microbiota, promotes the interaction of certain microorganisms, and causes differences in the concentration of neurotransmitters in the brain, which in turn affects seizures. The ketogenic diet (KD), which is based on high fat and low carbohydrate, causes a variety of changes in intermediary metabolism, with ketone bodies becoming the major energy substrate, and is effective in focal and generalized seizures, pyruvate dehydrogenase deficiency, and glucose transporter-1 deficiency syndrome (21).
A recent study has shown that the GM is necessary and sufficient for mediating the anticonvulsant effect of the KD (22). After KD administration, not only was the seizure threshold increased, but the composition of the GM was also changed 4 days after diet treatment. KD-fed mice showed a decrease in alpha diversity, and the abundance of Parabacteroides, Sutterella, and Erysipelotrichaceae increased significantly. In antibiotic-treated and germ-free mice, the KD did not increase the seizure threshold, showing that the absence of microbiota abrogated the effects of the KD. This suggests that the effect of the KD on seizures is mediated by the GM. Zhang et al. (23) found that fecal microbial characteristics after KD treatment showed low alpha diversity, the frequency of Firmicutes was reduced, and the percentage of Bacteroides was increased. The researchers further analyzed the effect of the KD on GM composition and function in patients with epilepsy and found that the relative abundance of Bifidobacteria, Eubacterium rectale, and Dialister was significantly reduced, while the relative abundance of Escherichia coli increased. Functional analysis revealed reductions in seven pathways including those involved in carbohydrate metabolism (24). These results may explain the corresponding proportional reduction in bifidobacteria and genes involved in carbohydrate metabolism during the KD. Since the KD reduces the number of health-promoting, fiber-consuming bacteria, it also raises concerns about the overall health of the body (24).
Animal model
Early animal model experiments have confirmed that the GM mediates the development of behavioral symptoms and neuroinflammation in Parkinson's disease (25), and is involved in the occurrence of autoimmune encephalomyelitis (26) and autism (27). In 2018, researchers demonstrated that transplanting GM from chronically stressed rats to young rats promoted the onset of epilepsy. This suggested that GM imbalance, especially under the influence of chronic stress, increased susceptibility to epilepsy (28). Francesca et al. (29) predicted that transplantation of the microbiota of epileptic mice may induce epileptic seizures by increasing brain excitability in healthy mice. Experimental results showed that mice that received microbiota derived from epileptic animals were more likely to develop status epilepticus than controls, suggesting that microbiota mediated seizure susceptibility. Further investigation of the relationship between gut inflammation and epilepsy revealed that gut inflammation increased seizure activity in epileptic mice. The authors believe that intestinal inflammation may be an effective target for epilepsy control and may also be a factor in seizures in susceptible patients (30). Mahmoud Salami et al. (31) first investigated the effects of probiotic mixtures on pentylenetetrazol-induced seizure activity, cognitive performance, and concentrations of γ-aminobutyric acid (GABA), nitric oxide, malondialdehyde, and total brain tissue antioxidant capacity in rats. The results showed that probiotics greatly reduced seizure severity. At the same time, oral administration of probiotics also partially improved spatial learning and memory in the kindled rats. Although inhibition/excitation of neurotransmission and an imbalance between antioxidants and oxidants are the main causes of seizures, treatment with probiotics increased GABA activity and improved the balance between antioxidants and oxidants in the kindled rats.
Human clinical research
The possible role of gut microbes in epilepsy was first described in a case report. A 22-year-old patient with Crohn's disease and a 17-year history of epilepsy was treated for Crohn's disease with fecal microbiota transplantation (FMT). However, during the 20-month follow-up period, the patient no longer had seizures despite discontinuing treatment with sodium valproate (32). Researchers began to pay attention to the influence of GM on epilepsy. However, current human clinical research is mainly concerned with two aspects, one is the difference between the gut flora of epilepsy patients and healthy people, and the other is the improvement of symptoms of epilepsy patients after taking probiotics/FMT.
There have been several clinical studies on the differences in GM between epilepsy patients and healthy controls (Table 1) (33-39). Using 16S rDNA sequencing, Xie et al. (33) analyzed fecal microbiome sequences from 14 epileptic infants and 30 healthy infants. The results showed higher diversity and increased Bacteroides in healthy infants compared with infants with drug-resistant epilepsy (DRE). Similar results were obtained by Korean experts in 2021, showing a lower number of Bacteroides in the epilepsy group compared to the control group (34). In 2018, Peng et al. (35) used the same technology to compare the stool microbiota of 49 drug-sensitive epilepsy patients (DSE), 42 DRE patients, and 65 healthy controls. Compared with the DSE group and the control group, the DRE patients showed increased alpha diversity, increased Firmicutes, and decreased Bacteroidetes. In contrast, many other rare phyla showed an increased trend in the DRE group, such as Verrucomicrobia. Lee et al. (36) also analyzed the difference in GM between DRE and DSE. In the DSE group, the relative abundance of Bacteroides finegoldii and Ruminococcus_g2 increased, while in the DRE group, the relative abundance of Negativicutes, which belong to Firmicutes, increased. In another study, a total of 55 epilepsy patients and 46 healthy controls (spouses) were recruited and their stool samples were collected for 16S rRNA sequencing and microbiological analysis. The results showed changes in the gut microbiome of the epilepsy group, including a decrease in alpha diversity, an increase in Actinobacteria and Verrucomicrobia and a decrease in Proteobacteria at the phylum level, and an increase of Prevotella_9, Blautia, Bifidobacterium, and others at the genus level (37). However, Birol Şafak et al.
(38) found that Proteobacteria and Fusobacteria in epilepsy patients were higher than those of the healthy group, and related reports indicated that Proteobacteria and Fusobacteria were increased in autoimmune diseases and inflammatory bowel diseases (40). This suggests that autoimmune mechanisms and inflammation may play a role in the etiology of epilepsy. A recent study also found an enrichment of Proteobacteria in drug-naive epileptic patients (39). Clinical studies have pointed out the differences in GM between DRE patients, DSE patients, and healthy controls, suggesting that differential GM can be used as biomarkers for predicting prognosis and evaluating treatment response in epilepsy patients. However, most participants in previous studies were enrolled after taking ASMs, it is possible that drug confounding may affect the differences in GM. Recently Ilhan et al.
(41) reported that ASMs affect the growth of gut bacterial species and the subsequent host response. The first-generation carbamazepine and the second-generation lamotrigine have greater antimicrobial activity than other ASMs. This may be related to the differing efficacy of ASMs in seizure control. Therefore, subsequent clinical studies should take care to exclude potential confounding factors such as age, diet, and ASMs to ensure the accuracy of future experiments.
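Several of the studies above compare groups on alpha diversity. As a concrete illustration, the sketch below computes the Shannon index, a standard alpha diversity measure, from genus-level count data; the counts and sample labels are invented for illustration and are not taken from the cited studies.

```python
import math

def shannon_index(counts):
    """Shannon alpha diversity (natural log) from raw taxon counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Hypothetical genus-level counts for two stool samples
healthy = [120, 95, 80, 60, 40, 30, 20, 10]   # relatively even community
patient = [300, 40, 10, 5, 3]                 # dominated by a single genus
h_healthy = shannon_index(healthy)
h_patient = shannon_index(patient)
```

The dominated community yields a lower index, mirroring the reduced alpha diversity reported for some of the epilepsy cohorts above.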
Considering the MGB axis, dietary supplements (probiotics and prebiotics) may be a useful measure for the treatment of epilepsy. A prospective study investigated the association between neonatal seizures and rotavirus infection and identified significant factors that may be related to the pattern of white matter injury (WMI) in seizures and rotavirus infection (42). The results suggest that rotavirus infection is an independent risk factor for neonatal seizures and is associated with WMI. Probiotics administered immediately after birth can reduce rotavirus-related neonatal seizures by 10-fold. Researchers believe that probiotics may reduce the risk of rotavirus-associated neonatal seizures through the inhibition of non-structural protein 4 or through antiinflammatory effects (42)(43)(44). A pilot, open-label, single-center, and prospective clinical study investigated the effects of probiotics as adjunctive antiepileptic therapy on epilepsy control and quality of life (QoL) in patients with DRE (45). After 4-month use of probiotics in 45 patients with DRE, seizures decreased by 50% in 28.9% of patients, and the QOLIE-10 test showed that patients' QoL was significantly improved with effective probiotics. The abovementioned research findings suggest that probiotic supplementation in DRE is safe, reduces seizure frequency, and improves the quality of life.
These studies suggest an inextricable link between gut microbiota and epilepsy, especially intractable epilepsy. Probiotic intervention and fecal microbiota transplantation can effectively improve seizures and quality of life.
Epilepsy and circadian rhythm

Time window of epileptic seizures
According to electrophysiological findings, seizures in patients with epilepsy are more likely to occur at a specific time in the circadian cycle, referred to here as the peak seizure time. The different peak times of seizures in each subtype of epilepsy also suggest that seizure types have a circadian rhythm. Evidence suggests that myoclonus, atonic seizures, and epileptic spasms occur more frequently during wakefulness, and tonic, clonic, and hypermotor seizures occur more frequently during sleep (46). A retrospective study of 407 children with epilepsy relied on video-electroencephalography results to understand the development of generalized tonic-clonic seizures (GTC). The results show that GTC develop most frequently in the early morning, especially in patients with extratemporal epilepsy and in patients without MRI lesions. Focal epilepsy was found to be more likely to occur outside of sleep, opening a new direction for timed treatment in epilepsy management (47).
In addition to the differences in the localization of the epileptic lesions, there are also differences in the peak times of the seizures. Intracranial seizure-like activity rhythms were monitored in patients with focal epilepsy for 84 consecutive days using the RNS system and showed a strong 24-h periodicity with a peak at night. Limbic and temporal lobe epilepsies exhibited different circadian rhythms, suggesting that the circadian rhythm pattern of epileptiform activity varies depending on the onset area (48). Previous studies have reported that frontal lobe seizures occur mostly at night and during sleep (46,49), whereas temporal and occipitoparietal lobe seizures occur more frequently during wakefulness (49,50). A retrospective study also indicated that temporal lobe seizures occurred during waking hours (06:00-09:00 and 12:00-15:00), occipital seizures occurred during daytime and waking hours (09:00-12:00 and 15:00-18:00), and parietal seizures also occurred mainly during daytime (46). Fukuda et al. (51) compared the circadian rhythm characteristics of patients with juvenile myoclonic epilepsy (JME) and patients with temporal lobe epilepsy (TLE). The results showed that JME patients had no obvious circadian rhythm pattern compared with TLE patients, but most patients with JME were at their worst in the morning.
The relationship between the clinical features of children with postencephalitis epilepsy and the diurnal rhythm of seizures was summarized in an article by Li et al. (11). Regarding seizure types, tonic seizures mostly occur at night and during sleep; epileptic spasms mostly occur during the daytime and while awake (11). The author compared gender differences and found that seizures occurred more frequently during wakeful periods in boys than during sleep periods in girls (11). This suggests that hormones may play a role in the circadian rhythm of epilepsy. As for prognosis, epileptic spasms are more likely to occur in the waking state, and the prognosis is relatively worse (11). This finding brings us to the origin of temporal lobe epilepsy, which occurs most frequently during wakefulness and is also known as the most common form of refractory epilepsy. Sudden unexpected death in epilepsy (SUDEP) is the leading cause of premature death in patients with refractory epilepsy. SUDEP usually occurs at night, but the specific mechanism is unclear. Purnell's team used two mouse models of seizure-related death, DBA/1 mice and C57BL/6J mice. In DBA/1 mice with normal locomotion, the time of day can alter the likelihood of seizure-related death. In free-running C57BL/6J mice in which maximal electroshock seizures were elicited at the same clock time but at different circadian phases, the circadian phase may alter the probability of seizure-related death. In both mouse models, the probability of seizure-related death is greatest at night. The abovementioned results suggest that circadian rhythm may be the reason for the increased nocturnal prevalence of SUDEP (52).
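The peak seizure times discussed above are typically estimated by binning seizure onsets into clock hours and finding the mode. A minimal sketch with invented onset times (not data from the cited studies):

```python
from collections import Counter

def peak_seizure_hour(onset_hours):
    """Return the clock hour (0-23) with the most recorded seizure onsets."""
    return Counter(onset_hours).most_common(1)[0][0]

# Hypothetical onset hours from a video-EEG log, with an early-morning cluster
onsets = [2, 3, 3, 4, 4, 4, 5, 14, 15, 22]
peak = peak_seizure_hour(onsets)  # 4
```

With longer recordings, the same binning yields the full 24-h histogram from which circadian seizure patterns are usually described.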
Connection between epilepsy and circadian rhythm
Circadian rhythm refers to the roughly 24-h cycles of sleep-wake behavior, physiology, and psychology under the control of the biological clock, including changes in sleep and wakefulness, core body temperature, blood pressure, and hormone levels. Disruption of this rhythm has negative effects on human health.
Animal model
In animal studies, epilepsy models induced by electrical stimulation or excitatory drugs have shown seizures regulated by the circadian rhythm (55,56). However, whether the frequency of pilocarpine-induced seizures in rats is related to the circadian rhythm remains controversial (57). In 2017, a study using pilocarpine-induced epileptic mice showed that a circadian rhythm of seizures emerged 4 days after status epilepticus (SE) (58). In 2019, Baud et al. (59) also examined the epileptic activity cycle of mice at different periods after SE (chronic phase and incubation period) and at different stages of brain injury, and likewise found that rhythms emerged early after SE. Gregg et al. (60) established an epilepsy model using dogs as research subjects and assessed the aggregation of seizures using the dispersion index. The results showed that the timing of seizures in dogs was not random and that circadian and multiday seizure periodicities and seizure clusters were common. Moreover, circadian and multiday rhythms were not associated with ASM dose, and these patterns may reflect an endogenous rhythm of seizure risk.
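The dispersion index Gregg et al. used to assess seizure aggregation is, in essence, a Fano factor: the variance-to-mean ratio of seizure counts per time bin. A minimal sketch with hypothetical daily counts (the numbers are illustrative, not data from the study):

```python
def dispersion_index(counts):
    """Fano factor: sample variance / mean of event counts per time bin.
    Values near 1 are consistent with a Poisson (random) process;
    values well above 1 indicate clustered, non-random seizure timing."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

# Hypothetical daily seizure counts: clustered vs. evenly spread
clustered = [0, 0, 8, 9, 0, 0, 0, 7, 0, 0]
regular = [2, 3, 2, 2, 3, 2, 3, 2, 2, 3]
```

With the clustered series the index is well above 1, while the evenly spread series falls below 1, which is how non-random timing is flagged.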
Human clinical research
A retrospective cohort study using data from the two most comprehensive human seizure databases included 12 patients with refractory epilepsy from the NeuroVista database and 1,118 patients from Seizure Tracker. Results showed that at least 891 (80%) of the 1,118 Seizure Tracker patients and at least 11 (92%) of the 12 NeuroVista patients showed circadian modulation of seizure rates (61). Campen and colleagues used published data to visually compare the circadian rhythms of epileptic seizures with those of cortisol, which are similar, especially in that seizures increase in the early morning hours and subside at night. This similarity can be observed in both children and adults, but there are differences between seizure types and the locations of the epileptic foci (62). Karoly's team developed a seizure prediction model based on the circadian rhythm of epilepsy. Encouragingly, implantable devices that can continuously record and store neuronal data have been developed to apply probabilistic seizure prediction in clinical practice (63). To investigate the relationship between seizure timing and fluctuations in interictal epileptiform activity (IEA), one team enrolled 37 epilepsy patients implanted with brain stimulation devices. IEA fluctuated with circadian and multiday cycles over several years, further improving the ability to predict seizure risk and potentially enabling dynamic, personalized treatment strategies (9).
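A simple way to quantify the circadian modulation of seizure rates reported in these studies is the mean resultant length ("vector strength") of seizure times mapped onto a 24-h circle; R near 1 means seizures cluster at one time of day, R near 0 means they are spread uniformly. A minimal sketch with hypothetical timestamps:

```python
import math

def vector_strength(hours):
    """Mean resultant length of event times on a 24-h circle.
    R ranges from 0 (uniform over the day) to 1 (all at one clock time)."""
    angles = [2 * math.pi * h / 24 for h in hours]
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    return math.hypot(c, s)

nocturnal = [1.5, 2.0, 2.5, 3.0, 2.2, 1.8]  # clustered in the night hours
spread = [0, 4, 8, 12, 16, 20]              # evenly spread over the day
```

The clustered nocturnal times give R close to 1, while the evenly spaced times give R essentially 0.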
Role of core clock genes in epilepsy
Circadian rhythms in mammals are regulated by the master clock located in the suprachiasmatic nucleus (SCN) of the hypothalamus and by peripheral organ clocks (64). CLOCK and BMAL1, the core molecules of the circadian system, form a heterodimer complex in the cytoplasm that is phosphorylated by a protein kinase and migrates to the nucleus, where it binds the E-box sequence in DNA and regulates the transcription of related genes. These genes form a negative feedback pathway that inhibits the transcription of the clock genes, generating oscillations at the gene level and thus a circadian rhythm (65). There are two negative feedback pathways. First, PER-CRY complexes produced by the Per (Per1 and Per2) and Cry (Cry1 and Cry2) genes bind to CLOCK/BMAL1 complexes to inhibit their transcriptional activity. Second, CLOCK/BMAL1 binds to the E-box sequence to activate the PAR bZIP transcription factors, encoded by the Dbp, Tef, and Hlf genes. The E-box sequence is controlled by the CLOCK/BMAL1 complex, whereas the D-box sequence is regulated by both PAR bZIP and NFIL3, with NFIL3 acting as the transcriptional repressor at the D-box. Activation of the transcription factors REV-ERBα/β and RORα/β requires binding of CLOCK/BMAL1 to the E-box and D-box sequences, and both share a common binding site, the ROR response element (RORE). The ROR proteins activate transcription of the circadian genes Bmal1 and Nfil3 via RORE, while the REV-ERB proteins repress it. Therefore, the PAR bZIP proteins and NFIL3, which regulate the Rev-erb and Ror genes, indirectly regulate transcription of the Bmal1 gene (66, 67).
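The transcription-translation loop described above is, at its core, delayed negative feedback: a gene product represses its own production after a lag, which is enough to generate sustained oscillation. A minimal illustrative simulation follows (a generic delayed-repression equation with arbitrary parameters, not a quantitative model of the CLOCK/BMAL1 system):

```python
def simulate(tau=5.0, n=4, gamma=0.5, dt=0.01, t_end=120.0):
    """Euler integration of dx/dt = 1/(1 + x(t - tau)^n) - gamma * x.
    The Hill term stands in for repression of transcription by the
    gene's own (delayed) protein product; x starts at 0 with zero history."""
    steps = int(t_end / dt)
    delay = int(tau / dt)
    x = [0.0] * (steps + 1)
    for i in range(steps):
        x_delayed = x[i - delay] if i >= delay else 0.0
        dx = 1.0 / (1.0 + x_delayed ** n) - gamma * x[i]
        x[i + 1] = x[i] + dt * dx
    return x

x = simulate()  # oscillates around the unstable fixed point x* = 1
```

With this delay and Hill steepness the fixed point is unstable, so the trajectory settles into a bounded oscillation rather than a steady level, mirroring how delayed repression in the clock loop produces rhythmic gene expression.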
The master clock regulates the release of neurotransmitters, such as serotonin and norepinephrine. Serotonin and norepinephrine are at high levels during the day, while melatonin peaks at night (68).
Frontiers in Neurology 06 frontiersin.org
There is evidence that serotonin has a protective effect against neuronal death caused by epilepsy (69). A recent study showed that melatonin significantly increased the expression of circadian rhythm genes and ameliorated NMDA-induced seizures, suggesting that the anticonvulsant effect of melatonin may be related to the regulation of circadian rhythm gene expression (70). Felix Chan et al. (66) summarized the potential downstream pathways of the circadian molecular system linking circadian rhythm and epilepsy, namely regulation of pyridoxal metabolism, mammalian target of rapamycin (mTOR) signaling, and redox state; among the upstream factors affecting circadian rhythm genes, GM should also be considered. In an epileptic mouse model constructed by Kcna1 knockout, the Clock, Per1, and Per2 genes fluctuated significantly in both wild-type (WT) and epileptic mice under an artificial 12-h light/12-h dark cycle, indicating an effect of time of day on clock gene expression. Compared with WT mice, total mRNA expression of Clock, Per1, and Per2 was decreased in epileptic mice (71). The researchers also analyzed clock gene expression in WT and epileptic mice exposed to continuous darkness and found that only Per2 expression was affected, suggesting that the circadian rhythm of epileptic seizures may be regulated by endogenous circadian rhythms. Another study examined the temporal expression of seven key circadian transcripts (Bmal1, Clock, Cry1, Cry2, Per1, Per2, and Per3) and spontaneous locomotor activity (SLA) in a post-status epilepticus (SE) model of mTLE. The 24-h oscillation of SLA remained intact in the post-SE group. However, in the early post-SE and epileptic phases, circadian rhythms and the amount and intensity of activity changed. After SE, all clock transcripts except Per2 and Per3 were significantly dysregulated (72).
Decreased CLOCK protein levels were observed in epileptic tissue samples from patients with focal epilepsy, and deletion of the Clock gene reduced the seizure threshold in mice (73). REV-ERBα expression is downregulated in the epileptic region of patients with TLE. REV-ERBα agonists inhibit NLRP3 inflammasome activation, inflammatory cytokine production (IL-1β, IL-18, IL-6, and TNF-α), astrocyte proliferation, microglial hyperplasia, and hippocampal neuronal damage after SE. These results suggest that the reduction of REV-ERBα in the epileptic region may be involved in the TLE process, and that activation of REV-ERBα may have anti-inflammatory and neuroprotective effects (74).
Role of the mammalian target of rapamycin (mTOR) in the circadian rhythm of seizures
Mammalian target of rapamycin (mTOR) is a serine/threonine protein kinase related to cell growth and proliferation; it regulates cell growth and metabolism, affects transcription and protein synthesis, and regulates apoptosis and autophagy (75). Abnormal activation of the mTOR pathway is known to cause a number of neurological diseases, such as tuberous sclerosis and epilepsy (76). In the hypothalamus, mTOR acts as a metabolic sensor to control food intake and regulate energy balance (77). Activity of the mTOR pathway in the SCN has been shown to be strongly light-regulated, suggesting that mTOR signaling plays a role in regulating the circadian clock (78). MAPK, the upstream signaling pathway of mTOR, mediates light activation of mTOR (78). In addition, the PTEN-Akt-Rheb-TOR-S6K pathway has been found to influence the circadian cycle in Drosophila (79). A number of key factors in the mTOR pathway are regulated by biological clock genes (80). Conversely, key factors in the mTOR pathway also regulate biological clock genes: mTOR regulates the production and phosphorylation of the core clock protein BMAL1, thereby controlling BMAL1 levels and affecting its translation, degradation, and subcellular localization (81). Thus, activation of mTOR signaling can alter the BMAL1-CLOCK complex and downstream transcription factor function, leading to seizures and changes in circadian rhythm.
To sum up, epileptic seizures exhibit circadian rhythms that vary by seizure type. The core clock genes CLOCK and BMAL1 play important roles in epilepsy. In addition, the mTOR signaling pathway may act as a bridge between seizures and circadian rhythm changes.
Gut microbiota and circadian rhythm
Circadian rhythm of gut microbiota activity
In this age of fast-paced life, circadian rhythm disruption affects most people. Research in recent years has shown that circadian rhythm disruption increases the likelihood of a variety of diseases, including obesity, diabetes, cardiovascular disease, cancer, and neurodegenerative diseases (82)(83)(84)(85). Gut microecology is a research hotspot of the new era and mediates a variety of chronic, inflammation-related diseases. In addition, whether the gut microbiome, as the "second largest gene pool" in humans, itself shows circadian activity has attracted the attention of scientists. The results are impressive: approximately 60% of the composition of the gut microbiome exhibits a circadian rhythm, more than 20% of the symbiotic bacterial species of the gut show significant diurnal variation in mice, and 10% of the symbiotic species in humans also exhibit diurnal variation (86). The mechanism of interaction between GM and circadian rhythms is not entirely clear. However, recent studies reported that GM regulates the circadian rhythm of host metabolism via histone deacetylase 3 (HDAC3): the microbiota induces the expression of HDAC3 in intestinal epithelial cells, and HDAC3 is rhythmically recruited to chromatin, generating synchronized circadian oscillations in histone acetylation, metabolic gene expression, and nutrient uptake (87). The circadian rhythm of the intestinal flora is closely related to microbiota composition and function and influences host immunity and metabolism. Therefore, some scientists have suggested that microbiology-based therapy could improve the imbalance caused by circadian rhythm disturbances (12).
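Diurnal variation in microbial abundance, as in the studies cited above, is commonly quantified by cosinor analysis, fitting y(t) = M + A·cos(2πt/24 − φ). A minimal sketch on a synthetic abundance series (illustrative values, not real data), assuming evenly spaced sampling over whole 24-h periods so the fit reduces to simple projections:

```python
import math

def cosinor_24h(times, values):
    """Fit y = M + A*cos(w*t - phi), w = 2*pi/24, by projection.
    Assumes evenly spaced samples covering whole 24-h periods, so the
    cosine and sine regressors are exactly orthogonal to each other
    and to the constant term."""
    w = 2 * math.pi / 24
    n = len(values)
    mesor = sum(values) / n  # rhythm-adjusted mean level
    bc = 2 / n * sum(y * math.cos(w * t) for t, y in zip(times, values))
    bs = 2 / n * sum(y * math.sin(w * t) for t, y in zip(times, values))
    amplitude = math.hypot(bc, bs)
    acrophase = math.atan2(bs, bc)  # peak phase, in radians
    return mesor, amplitude, acrophase

# Synthetic relative-abundance series: every 2 h for 48 h,
# mesor 5, amplitude 2, acrophase 1 rad
t = [2 * k for k in range(24)]
y = [5 + 2 * math.cos(2 * math.pi / 24 * ti - 1.0) for ti in t]
mesor, amplitude, acrophase = cosinor_24h(t, y)
```

On noise-free data the fit recovers the generating parameters exactly; on real abundance data the estimated amplitude (relative to its uncertainty) is what is tested for significant rhythmicity.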
Changes in circadian rhythms cause dysregulation of the intestinal microbiota
As the control center of circadian rhythm, the SCN is mainly influenced by light (88). When external light conditions change, circadian rhythms are disrupted, and researchers have studied this extensively. When mice were housed in constant darkness, their circadian rhythms were disrupted and the gut flora changed accordingly: bacterial diversity decreased, Firmicutes increased, and Bacteroidetes decreased. Similarly, disruption of the host circadian rhythm under continuous illumination also alters the gut microbial community and functional gene composition (12). Voigt et al. (89) reported that the composition of GM was altered in male C57BL/6J mice with disrupted circadian rhythms compared with controls. In addition, peripheral biological clocks also play an important role in regulating circadian rhythms. When the normal sleep rhythm changes, such as during shift work and chronic jet lag, the circadian rhythm of the host GM also changes owing to altered transcriptional oscillation of peripheral clock genes and changed feeding-time patterns (90). To create a jet lag model, mice were exposed to an 8-h time shift every 3 days. After 4 weeks of jet lag induction, the hosts lost rhythmic activity. Subsequent microbiome analysis of the experimental mice every 6 h showed that the bacterial rhythm disappeared and the number of oscillating operational taxonomic units decreased. Moreover, as the duration of jet lag increased, the extent of intestinal dysbiosis also increased (86). Disruption of GM caused by jet lag may promote impaired glucose tolerance and obesity. Interestingly, germ-free mice without jet lag induction exhibited metabolic abnormalities when feces from the experimental group were transplanted into them.
This suggests that the gut flora is involved in the metabolic abnormalities caused by circadian rhythm disruption (86).
Earlier we discussed the central clock genes CLOCK and BMAL1. Does mutation or deletion of the core clock genes affect the balance of the gut flora? We discuss this in detail below. In one experiment, a Clock-mutant mouse model was constructed and three diets were administered: a standard diet, an alcohol-containing diet, and an alcohol-control diet with glucose substituted for the alcohol calories. The results show that, compared with wild-type mice, the diversity of the gut microbial community in the Clock gene mutants was reduced and gut dysbiosis was exacerbated under the alcohol-containing diet (91). Similarly, symbiotic bacterial diversity in the Per1/2−/− mouse model almost completely lost its rhythmic fluctuation. To determine whether the disappearance of microbial rhythms affected the activity of metagenomic metabolic pathways, the researchers performed shotgun sequencing of metagenomic DNA in Per1/2−/− and WT mice. In Per1/2−/− mice, pathways involved in vitamin metabolism, nucleotide metabolism, secretion systems, DNA repair, cell wall synthesis, and motility lost their daily rhythm. These results suggest that the daily fluctuation of gut flora composition and function requires the host biological clock (86).
Dysregulation of the intestinal microbiota causes fluctuations in circadian rhythm expression
The gut microbiome is a key factor influencing the development and function of the central nervous system. As mentioned earlier, the microbiome-gut-brain axis is a hot and pressing research topic. Studies indicate that GM is a key factor in regulating circadian rhythms. Germ-free mice kept under a 12:12-h light/dark cycle showed significantly impaired expression of central circadian clock genes when fed a low-fat or high-fat diet (92). Weger et al. (93) constructed germ-free and antibiotic-treated mouse models and found that intestinal dysbiosis can also alter the expression of peripheral and intestinal clock genes. The findings of Mukherji's team are consistent with these results (94). In addition, elimination of the gut microflora can alter the expression of Rev-erbα and RORα in the gut (95). However, it has also been reported that elimination of the gut microflora by antibiotics is not associated with circadian rhythm oscillation in mice (96). The differing results could be due to different assay methods or sample preservation methods.
The GM regulates the circadian clock and is influenced by exogenous factors such as diet and mealtime (92). As discussed, clock gene expression is affected by changes in diet, and researchers have proposed a new dietary pattern: intermittent fasting has been shown to improve many chronic diseases, such as obesity, hypertension, and hyperlipidemia (97,98). Ye Yuqian et al.
(99) categorized the mice in their experimental study into three groups: a free-feeding group, a high-fat diet group, and a high-fat diet group restricted to an 8-h feeding window. The results showed that mice on the time-restricted high-fat diet gained less weight than mice on the unrestricted high-fat diet, and the numbers of Bacteroidetes and Firmicutes differed between the two groups. Compared with mice on the normal diet, the circadian rhythms of SIRT1, SREBP, and PPAR expression were more distinct in the livers of mice on the time-restricted high-fat diet, as was the abundance of Bacteroidetes and Firmicutes. The researchers suggest that the eating-fasting rhythm may stimulate fluctuations in our GM and subsequent molecular changes that restore a healthier body clock (100).
Metabolites of the intestinal flora, such as SCFAs and unconjugated bile acids, may also affect the expression of circadian genes. SCFAs show rhythmic changes throughout the day; measurements in feces show that the concentration of SCFAs is highest in the morning and gradually decreases (101). SCFAs play a positive role in protecting the integrity of the intestinal barrier (102). In addition, a reduction in SCFA-producing bacteria may result in the release of proinflammatory bacterial products into the systemic circulation, which triggers and promotes inflammatory diseases (103). Therefore, SCFAs have a beneficial protective effect in a variety of diseases (104). Leone et al. (92) established a liver cell model in which administration of butyrate, or a small amount of acetate, significantly increased expression of the Per2 and Bmal1 genes. The next step was to determine whether short-chain fatty acids directly affect the host clock in vivo. In germ-free mice, 5 days of butyrate treatment significantly increased the Per2:Bmal1 mRNA ratio in liver cells 2 h after illumination, whereas the same treatment did not significantly increase the Per2:Bmal1 mRNA ratio in the mediobasal hypothalamus (92). These experimental results further supported the authors' conjecture. Govindarajan et al.
(105) used a synchronized Caco-2 epithelial cell model to demonstrate that unconjugated bile acids can increase circadian rhythm gene expression. Furthermore, oral administration of unconjugated bile acids to mice produced significant changes in circadian gene expression levels in the ileum, colon, and liver.
These data suggest that GM has a potential impact on circadian rhythm expression. An in-depth study of the interaction between gut microbiota and circadian rhythm can help in better understanding the regulation of host physiological functions by gut microbiota.
Relationship between epilepsy, gut microbiota, and circadian rhythm
To date, most studies of the influence of GM or circadian rhythm on disease have been conducted independently, without combining the two to examine their joint effects. When the circadian rhythm is disrupted, it can affect host immunity and metabolism through the gut flora and promote the onset of disease. Conversely, an imbalance of the GM can affect the circadian rhythm; when circadian homeostasis is disrupted, host homeostasis is further affected. Many publications have investigated the relationship between epilepsy and GM, and the circadian rhythm of GM has been gradually recognized, but no hypothesis has yet been proposed for the relationship between GM, circadian rhythm, and epilepsy.
GM influences functional brain signaling through the MGB axis, and brain signaling in turn influences the activities and physiological functions of the resident gut microorganisms (106). GM can promote epilepsy through immune activation, release of proinflammatory factors, and neurotransmitter release, and seizures may in turn cause dysbiosis in the gut; there is thus a bidirectional relationship between GM and the occurrence of epilepsy (107). Different seizure types and lesion origins have different peak seizure times, suggesting that seizures have a circadian rhythm (108), and circadian clock genes regulate the occurrence and development of epilepsy (9); here, too, the signaling is bidirectional. Unsurprisingly, GM and circadian rhythms also signal to each other in both directions (12). Thus, the three are linked pairwise by bidirectional signaling pathways (Figure 1). In our literature search, we found that Firmicutes increased and Bacteroidetes decreased in epilepsy patients compared with healthy control subjects. Surprisingly, when the circadian rhythm is altered, the intestinal flora also shows an increase in Firmicutes and a decrease in Bacteroidetes. This led us to ask whether the interaction between GM and circadian rhythms might play a role in seizures. Therefore, we next need to explore whether there is a circadian rhythm in the GM of patients with refractory epilepsy and the mechanism by which the interaction between circadian rhythm and GM acts in epilepsy. The mechanism by which the gut flora mediates the anticonvulsant effect of KD has already been mentioned. KD is a high-fat, low-carbohydrate diet, which reminds us that the expression of circadian clock genes is also influenced by diet, such as low-fat and high-fat diets.
Whether clock genes are involved in GM-mediated anticonvulsant effects of KD is the next question we need to ask.
Limitations of the review and future research
In this review, we have tried to describe the mutual relationships among GM, circadian rhythm, and epilepsy. Unfortunately, the published literature has studied these interactions independently, so many questions about the mechanisms linking epilepsy, GM, and circadian rhythm remain open. GM is susceptible to the influence of age, diet, drugs, DNA sequencing methods, and other factors, which leads to divergent research results. We suggest that these variables be more strictly controlled in future clinical studies and that changes in the intestinal microbiota at the species and strain levels be further clarified. Functional and association analyses combined with metabolomics should be carried out to better reveal the possible mechanisms linking GM dysbiosis and disease. In addition, more studies are needed to explore the specific mechanisms by which circadian rhythm disorder leads to GM dysbiosis, and how the synergistic effect of circadian rhythm disorder and GM can trigger the process of epilepsy.
Conclusion
With the advancement of sequencing technology, the inextricable links between GM and epilepsy have been gradually revealed, leading to a deeper understanding of their functions. By exploring the relationship between GM and epilepsy, we may discover sensitive biomarkers that improve the understanding of the complex mechanisms of epilepsy. Some clinical studies have already confirmed differences in gut microbiota among patients with refractory epilepsy, drug-sensitive patients, and healthy controls. At the same time, GM also mediates the therapeutic mechanism of KD in refractory epilepsy. Overall, microbiome-specific treatment may become an effective option for refractory epilepsy. The circadian rhythm of GM seems to play an important role in the onset and development of epilepsy and points to the direction of future research. Discovering the relationship between GM, circadian rhythm, and epilepsy will help us to better understand the pathogenesis of epilepsy and thus improve the quality of life of patients with epilepsy.
Author contributions
YW was responsible for the execution of the research project and writing of the manuscript. ZZ assisted in writing the manuscript. HW conceptualized and designed the study and reviewed and revised the manuscript. All authors read and approved the final manuscript.
Figure 1. Map of the bidirectional association between epilepsy, gut microbiota, and circadian rhythms.
"year": 2023,
"sha1": "c091d94ffa183800c0deffc4abd3e3fadebe7874",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "c091d94ffa183800c0deffc4abd3e3fadebe7874",
"s2fieldsofstudy": [
"Psychology",
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
Unexpected findings: loss of corneal endothelial cells in Uygur patients with exfoliation syndrome
Purpose This study aimed to investigate anterior segment parameters in patients with exfoliation syndrome (XFS) and exfoliation glaucoma (XFG). Methods The study adopted a retrospective case series design, involving a total of 56 patients (112 eyes) with unrelated XFS/XFG (XFS: 26 patients/60 eyes; XFG: 30 patients/44 eyes) and 100 age-related cataract cases as the control group (200 eyes). The participants were evaluated at the ophthalmology department of the First Affiliated Hospital of Xinjiang Medical University. Clinical data, including eye axial length, anterior chamber depth, white-to-white distance, central corneal thickness, and corneal endothelial cell density (ECD), were collected for statistical analysis. Results ECD exhibited a significant difference between the XFS/XFG and age-related cataract groups (P < 0.001), while the remaining indexes did not show statistical differences (P > 0.05). Ocular parameters in patients with XFS and XFG were distinct from those in age-related cataract cases, with consistent results. Notably, there were no statistically significant differences between XFS and XFG patients. Conclusions ECD is reduced in XFS/XFG patients compared with age-related cataract subjects. It is crucial to remain vigilant to enhance surgical safety in XFS/XFG patients and prevent complications proactively.
Background
Exfoliation syndrome (XFS) is a condition wherein the extracellular matrix (ECM) undergoes changes, affecting individuals of various ages. It is marked by the continuous buildup of abnormal fibrils in intra- and extraocular structures [1,2]. This condition impacts 10-20% of the elderly population, leading to a higher incidence among the elderly and glaucoma patients [3]. In China, 5.1% of Kashi Uygur residents [4], and 2.2% and 9.5% of Kuche Uygur individuals aged 60 and 80 years or more, respectively, exhibit XFS [5].
XFS gives rise to eye complications, such as exfoliation glaucoma (XFG), and heightens the risk of unsuccessful intraocular surgery due to zonular weakness. XFS endotheliopathy is a gradually advancing condition affecting the corneal endothelial layer, resulting in early corneal endothelial cell decompensation and potentially leading to serious bullous keratopathy [6][7][8]. Numerous studies have
examined the clinical data of XFS cases, investigating the characteristics of their anterior segment parameters. However, there is no consensus in the existing literature. To enhance the quality of available data and in view of the prevalence of this condition, we conducted an assessment of anterior segment characteristics in Uyghur XFS/XFG cases.
Patients
The inclusion criteria for this study comprised a diagnosis of XFS, confirmed by the identification of exfoliation materials on the anterior lens capsule or pupil margin in one or both eyes following pupil dilation. A diagnosis of XFS was further substantiated by intraocular pressure (IOP) of less than 21 mmHg and the absence of glaucomatous optic neuropathy.
XFG was diagnosed based on the aforementioned exfoliation features, along with the following criteria: (1) IOP equal to or greater than 22 mmHg in one or both eyes; (2) diffuse glaucomatous enlargement of the cup in the optic disc; (3) visual field loss attributed to glaucoma [9]. Individuals exhibiting causative factors for secondary glaucoma, such as uveitis, pigment dispersion syndrome, and iridocorneal endothelial syndrome, were excluded. Uygur residents aged 45 years and above were included in the study.
Control cases were enrolled based on the absence of exfoliation materials on the anterior lens capsule or pupil margin in both eyes after pupil dilation. Additional criteria included no diffuse glaucomatous enlargement of the cup in the optic disc, normal IOP, vision consistent with cataract, no family history of glaucoma, and the absence of eye pathology except for low refractive errors (i.e., no axial refractive problems, although cataract itself may cause low refractive error). Participants were unrelated and underwent comprehensive ophthalmic examinations.
Exclusion criteria included: (1) a history of other eye diseases and previous ophthalmic surgeries; (2) refusal to continue cooperation by the selected subjects.
The present trial received approval from the Ethics Committee for Human Research of the First Affiliated Hospital of Xinjiang Medical University, China, in accordance with the Declaration of Helsinki. All participants provided signed informed consent.
Clinical data were collected from the hospital case system, encompassing eye axial length (AL), anterior chamber depth (ACD), white-to-white distance (W-W), central corneal thickness (CCT), and corneal endothelial cell density (ECD) for Uyghur patients with XFS/XFG and age-related cataract patients who visited the ophthalmology department of the First Affiliated Hospital of Xinjiang Medical University between May 2014 and November 2021.
Zeiss Optical Biometry IOLMaster 500, Germany: the subject sat without the need for surface anesthesia. The head was positioned closely in the headrest. The subject focused on the reticle. Measurements of AL of the eye, ACD, and W-W distance were conducted by the same experienced staff member. Five consecutive measurements were taken for each patient. The average value of these measurements was recorded to generate a comprehensive report.
Topcon SP-2000P Corneal Endothelial Cell Counter, Japan: the same operating technician examined both groups of patients. Each examination was repeated three times. The mean value of the three examinations was utilized for the analysis of changes in corneal ECD and CCT.
Data analysis was performed using SPSS 19.0 (SPSS, USA). Baseline data were compared using the χ2 test. Measures were presented as mean ± standard deviation and compared using the t-test, with P < 0.05 considered indicative of statistical significance.
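For illustration, the group comparison described here can be sketched as a pooled-variance Student's t statistic (the SPSS default for an independent-samples t-test); the arrays below are made-up values, not study data:

```python
import math

def student_t(x, y):
    """Pooled-variance two-sample t statistic and its degrees of freedom."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)  # sum of squared deviations, group x
    ssy = sum((v - my) ** 2 for v in y)
    sp2 = (ssx + ssy) / (nx + ny - 2)        # pooled variance
    se = math.sqrt(sp2 * (1 / nx + 1 / ny))  # standard error of the difference
    return (mx - my) / se, nx + ny - 2

# Hypothetical measurements for two groups
t, df = student_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

The t value would then be referred to a t distribution with `df` degrees of freedom to obtain the P value, which is the step SPSS performs internally.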
Results
A total of 56 Uygur patients with XFS/XFG (112 eyes) were included, comprising 44 XFG and 60 XFS eyes (data unavailable for 8 eyes). Additionally, 100 Uygur control patients with age-related cataracts (200 eyes) were enrolled. The XFS, XFG, and control groups had mean ages of 71.92 ± 5.81, 71.20 ± 4.76, and 71.01 ± 0.80 years, respectively (P > 0.05). The case group exhibited a predominance of male patients (χ2 = 17.45, P < 0.001), with 44 (78.57%) male and 12 (21.43%) female XFS/XFG patients, whereas the control group consisted of 44 (44%) male and 56 (56%) female participants (Table 1). We conducted a statistical analysis of eye parameters, including AL, ACD, CCT, and ECD, in both groups. A statistically significant difference was observed in ECD between the XFS/XFG and control groups, whereas the remaining indices did not exhibit significant differences. Further analysis of ocular parameters in XFS and XFG patients, compared with age-related cataract patients, revealed consistent findings. Notably, there were no significant differences between XFS and XFG patients (Table 2).
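The reported sex-distribution statistic can be reproduced from the counts given in the text (44/12 males/females in the case group vs. 44/56 in controls) with a Pearson χ2 test without continuity correction; a minimal sketch:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction)
    for a 2x2 table of observed counts [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Males/females: 44/12 in XFS/XFG cases, 44/56 in cataract controls
stat = chi2_2x2(44, 12, 44, 56)  # ~17.45, matching the reported value
```

The result (~17.45, df = 1) exceeds the 0.001 critical value of 10.83, consistent with the reported P < 0.001.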
Discussion
XFS exerts a comprehensive impact on the entire eye, with a notable emphasis on the anterior segment due to the accumulation of exfoliation material (XFM). Independently, it is linked with various disorders, including a small pupil, cataract, and zonular laxity. XFG, as the most common and severe associated disease, demonstrates heightened aggressiveness. This study also examined the ECD change in the XFG group, revealing that although there was no statistically significant difference between the XFS and XFG groups, the ECD in the XFG group (2241 ± 363 cells/mm²) was significantly lower than that in the control group [15]. Kristianslund et al. indicated that, before surgery, there was no significant difference in ECDs between the XFS and control groups [16]. Ucar et al. [17] similarly found no difference in ECD between XFS and cataracts before their investigation.
Various studies have assessed anterior ocular segment parameters in both XFS and XFG patients, yielding diverse findings. Kaygisiz et al. [18] compared these parameters among XFS, XFG, and normal subjects, finding no differences in anterior segment indexes between the XFS and XFG groups. However, in another report, corneal biomechanical indexes, including corneal hysteresis (CH), corneal resistance factor (CRF), and CCT, differed in XFS cases compared to healthy controls, with more pronounced changes in XFG cases [19]. In Turkish patients, those with XFG and XFS exhibited greater lens thickness, increased ACD, and reduced CCT compared to normal subjects [18]. Contrarily, our study found no significant difference in CCT between XFS/XFG and control patients, while a clear and significant difference was observed in corneal ECD, without distinction between the XFS and XFG groups. ECD, being a crucial indicator reflecting corneal condition, warrants special consideration in individuals with chronic eye disorders like recurrent uveitis, glaucoma, and PEX. It is also noteworthy in cases of IOL dislocation, especially prior to ophthalmological interventions [14]. We hypothesize that factors such as the accumulation of XFM may influence anterior chamber indexes during the progression from XFS to XFG, altering anterior chamber structures.
The outcomes of these studies have shown less consistency, and our endeavor is to contribute additional data to the clinical profile of individuals affected by XFS/XFG. Another study suggested elevated incidence rates for glaucoma (77.4%), cornea guttata (45.2%), and XFG (16.1%) in cases with a short AL [20]. However, our data showed no difference in AL between XFS/XFG and cataract patients. There were also no differences in ACD and W-W distance. This may be attributed to the limited sample size, necessitating a broader case pool, balanced gender representation, and preferably multicenter clinical studies for more robust and convincing data. This study underscores the importance of careful patient selection and adequate sample size, with a preference for using the normal population as a reference for comparison.
Conclusions
In this study, corneal ECD was found to be reduced in patients with XFS and XFG when compared to age-related cataract subjects. Despite the lower ECD values observed, all participants maintained values greater than 2000 cells/mm². Advancements in equipment and surgical techniques have significantly reduced the risk of corneal decompensation following cataract surgery in XFS patients. However, it is crucial to remain vigilant in order to enhance surgical safety in individuals with XFS/XFG and proactively address potential complications before they arise.
Table 1
Baseline patient features. Baseline data were compared by the χ² test. Measures were represented by mean ± standard deviation and compared by the t-test.
Renal denervation – can we press the "ON" button again?
Nearly ten years ago percutaneous renal denervation (RDN) was introduced in clinical trials as a possible method of interventional treatment of resistant hypertension. The promising results of the first clinical trials initiated the intensive development of this method. However, the role of percutaneous renal denervation in the treatment of patients with resistant hypertension has been questioned since the results of the Symplicity HTN-3 trial have been published. It also resulted in downgrading the indications for RDN in the European Society of Cardiology/European Society of Hypertension Guidelines 2018. The authors discuss potential shortcomings of that trial, describe new generation devices and present the results of recently published trials: SPYRAL HTN-OFF MED, SPYRAL HTN-ON MED, RADIANCE-HTN SOLO and RADIOSOUND-HTN. The results of studies in patients with obstructive sleep apnea are also summarized and discussed. The upcoming large trials (SPYRAL PIVOTAL, RADIANCE II) are outlined – the results of those trials are expected to be published in the next 2–3 years. Until then, according to the European guidelines, the use of device-based therapies is not recommended for the treatment of hypertension, unless in the context of clinical studies and randomized controlled trials.
Introduction
Nearly ten years ago percutaneous renal denervation (RDN) was introduced in clinical trials as a possible method of interventional treatment of resistant hypertension. The promising results of the first clinical trials initiated the intensive development of this method. The Symplicity HTN-1 trial was the first in-human study confirming the safety of the procedure in 45 patients, being then extended to a single-arm trial involving 138 patients. Symplicity HTN-2 was the first randomized controlled trial (RCT). In both trials, significant and sustained blood pressure (BP) reductions achieved after renal denervation (approximately 25 mm Hg) and favorable procedural safety brought hope for a long-term benefit from the treatment in terms of cardiovascular risk reduction [1][2][3].
Symplicity-HTN 3 trial -why did it fail?
Symplicity HTN-3 was the first study with sham treatment implementation. In brief, 535 patients with resistant hypertension were randomly assigned in a 2 : 1 ratio to undergo renal artery denervation or a sham procedure [4]. After 6 months, the differences in office BP and ambulatory blood pressure monitoring (ABPM) reductions between RDN and sham were not significant (14.1 vs. 11.7 mm Hg; 7.75 vs. 4.79 mm Hg respectively). The disappointing results of the trial raised some concerns for the efficacy of the procedure and initiated a discussion about potential reasons for this failure [5][6][7].
First of all, the inclusion criterion of resistant hypertension was based only on systolic office and ambulatory BP measurements. As a result, almost 1/3 of the patients were included in the study on the basis of isolated systolic hypertension, independently of their diastolic blood pressure. Additional analysis of these patients, characterized by increased arterial stiffness and diminished sympathetic nervous system activity, revealed that the effect of RDN was less pronounced as compared to the subjects with systolic-diastolic resistant hypertension.
Secondly, despite the protocol requirements, the antihypertensive drug regimen was changed during the follow-up period in 40% of patients. It might have had an impact on the results obtained after the treatment.
Advances in Interventional Cardiology 2018; 14, 4 (54)

Moreover, the experience of 112 operators performing the study procedures in 88 American sites was rather modest. It is of note that more than half of them carried out only 1 or 2 procedures in this trial, being just at the beginning of their learning process. One can speculate that if the reductions of the blood pressure had been similar to those obtained in previous studies (with more experienced operators), the difference would have been statistically significant and the HTN-3 study would have been successfully completed.
In summary, several factors had a substantial impact on the results of the HTN-3 trial. Therefore, the protocols of the next studies had to be modified taking into account the conclusions from the HTN-3 analyses and new modern devices enabling complete damage of the sympathetic nerve fibers were required.
New devices
During the last years, two companies introduced into clinical studies new RDN devices.
The Symplicity Spyral multi-electrode renal denervation catheter (Medtronic, US) is a 4 Fr over-the-wire, helical-shaped catheter, whose distal tip is deployed by retracting the guide wire into the catheter lumen (Figure 1).
Its multi-electrode and helical design enables delivery of radiofrequency energy from the generator to each quadrant of the vessel (simultaneously with all four electrodes), thus maximizing damage to the sympathetic nerves around the renal vessel in a consistent four-quadrant ablation pattern. This device conforms to a wide range of artery shapes and sizes (3 mm to 8 mm in diameter), eliminating the need for multiple catheters per procedure. The Symplicity G3 generator independently controls the temperature and impedance during 60-second treatments.
The Paradise system (ReCor Medical, US) consists of a 6 Fr over-the-wire, multi-lumen catheter shaft with a cylindrical piezoelectric ceramic transducer placed inside an inflatable balloon at the distal end of the catheter, combined with a portable generator (Figure 2: the Paradise System, showing the generator, the Paradise catheter, and the mechanism of action). The cylindrical transducer converts the electrical energy delivered from the generator to ultrasound energy, which is then radiated into the renal artery tissue. Due to the physics of sound propagation, direct tissue contact with the ultrasound source is not required for energy transmission. Each energy application lasts only 7 s. The generator is designed to control energy delivery and fluid management inside the balloon. The balloon-based fluid transfer mechanism is implemented for cooling the endothelial and medial layers of the arterial wall to preserve the integrity of the vessel wall during the energy delivery. This endovascular catheter achieves a circumferential ring of ablation at a depth of 1-6 mm from the vessel lumen, which is the expected location of the efferent and afferent renal nerves in the adventitia [8][9][10]. The different balloon sizes enable arteries from 3.5 mm up to 8 mm in diameter to be treated.
Second-generation sham-controlled trials
Taking into account the conclusions of the Symplicity HTN-3 study analysis, need for significant modification of the next generation sham-controlled randomized controlled trials' protocols was widely postulated. After the second European Clinical Consensus Conference for device-based therapies for hypertension, new recommendations for the next generation of sham-controlled RCT were published. The main principles assume at first the mandatory use of new devices and dedicated treatment recommendations. If monopolar radiofrequency renal denervation is used, four-quadrant ablation at each renal side is recommended. Furthermore, only experienced interventionalists from experienced centers should carry out the procedure, preferably in the absence of any medication, to assess the 'true' BP reduction of RDN. Witnessed intake of medication and/or medication adherence in each patient should be introduced in the study. The BP lowering efficacy of RDN should be assessed with 24-hour ambulatory blood pressure monitoring (ABPM) [11].
In the last 18 months the results of new RCTs using new radiofrequency or ultrasound based RDN catheters and including different populations of patients have been reported.
SPYRAL HTN trials
SPYRAL-HTN is a multicenter project launched by Medtronic using the abovementioned new generation multi-electrode SPYRAL catheter. Two preliminary randomized trials -SPYRAL HTN-OFF MED and SPYRAL HTN-ON MED -were designed, with modified inclusion and exclusion criteria [12]. The SPYRAL study included patients with office systolic BP in the range of 150-180 mm Hg, diastolic BP above 90 mm Hg (patients with isolated systolic hypertension were excluded) and 24-hour systolic BP in the range of 140-170 mm Hg during the use of one to three antihypertensive drugs used for a period of at least 6 weeks (ON-MED study) or after the gradual withdrawal of antihypertensive drugs (OFF-MED study). In both studies, the concentration of antihypertensive drug metabolites in urine was assessed, either to confirm patients' adherence to antihypertensive therapy (ON-MED study) or to confirm not taking antihypertensive drugs (OFF-MED study). In the actively treated study group 'total' RDN (the largest possible number of energy applications in the main renal arteries within their trunk and their distal branches, as well as in additional renal arteries with a diameter of at least 3 mm) and in the control group sham treatment were performed. The results of the SPYRAL HTN-OFF MED study were presented at the ESC Congress in Barcelona, and then published in Lancet in August 2017 [13]. Townsend et al. presented an analysis of 80 patients remaining off antihypertensive medications throughout a 3-month follow-up. Thirty-eight patients had been previously randomly assigned to the RDN group and in 42 patients a sham procedure had been performed. At the 3-month follow-up, in the RDN group a significant reduction in office systolic and diastolic BP values was observed (-10 mm Hg and -5.3 mm Hg respectively). Also in ABPM, both systolic and diastolic BP decreased significantly (-5.5 mm Hg and -4.8 mm Hg, respectively). 
The sham treatment was not associated with a significant change in BP levels during the follow-up. The observed decrease in systolic BP was not as high as in the first-generation RCT. It should be noted however that in the SPYRAL HTN-OFF MED study patients with baseline systolic BP > 180 mm Hg were not included, which should be taken into consideration as high baseline systolic BP is one of the strongest predictors of BP response to RDN. The results of the SPYRAL HTN-OFF MED study confirmed the validity of further research on RDN, including the continuation of the SPYRAL HTN-ON MED trial. Four hundred sixty-seven patients were screened and 80 fulfilled the inclusion/exclusion criteria of this study. The results were presented in May 2018 at the European Congress of Interventional Cardiologists Euro-PCR and subsequently published in Lancet [14]. Thirty-eight patients with poorly controlled hypertension on one to three antihypertensive drugs in stable doses for at least 6 weeks were randomly assigned to the RDN group (with the same technique as in the OFF MED study) and in 44 patients a sham procedure was performed. Office and 24-hour ambulatory BP decreased significantly from baseline to 6 months in the RDN group (-9.4/-5.3 mm Hg and -9.0/-6.0 mm Hg, respectively). Similarly to the SPYRAL HTN-OFF MED study, in the HTN-ON MED study, the sham procedure was not associated with a significant change in BP at 6 months. Interestingly, despite the fact that the patients were informed about the measurements of drug concentrations, about half of the patients did not comply with the medical recommendations regarding the use of antihypertensive drugs.
In both SPYRAL HTN studies there were no significant procedure-associated adverse events, which confirms the safety of RDN using a new generation multi-electrode catheter.
RADIANCE-HTN SOLO study
The results of the RADIANCE-HTN SOLO study in which the new ultrasound catheter Paradise was implemented were presented in May 2018 in Lancet [15]. RADIANCE-HTN SOLO was a multicenter, international, single-blind, randomized, sham-controlled trial including patients with combined systolic-diastolic hypertension after a 4-week discontinuation of up to two antihypertensive medications and suitable renal artery anatomy. One hundred and forty-six patients meeting the inclusion/exclusion criteria were randomized to undergo RDN (n = 74) or a sham procedure (n = 72).
After 2 months the reduction in daytime ambulatory systolic BP was greater with RDN than with the sham procedure (-8.5 vs. -2.2 mm Hg, respectively). The primary end-point -baseline-adjusted difference between groups (-6.3 mm Hg, 95% CI: -9.4 to -3.1, p = 0.0001) -was met. No major adverse events were reported in either group. In summary, in the RADIANCE-HTN SOLO study the efficacy and short-time safety of endovascular ultrasound RDN was confirmed at 2 months in patients with combined systolic-diastolic hypertension in the absence of medications.
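As a quick cross-check of the figures above, the crude between-group effect is simply the difference of the two mean changes; the trial's actual primary end-point was a baseline-adjusted difference with confidence intervals, so the snippet below is only an illustrative approximation using the summary values quoted in the text.

```python
# Crude between-group effect in RADIANCE-HTN SOLO from the quoted means
# (daytime ambulatory systolic BP at 2 months); illustrative only.
rdn_change = -8.5    # mean change with renal denervation, mm Hg
sham_change = -2.2   # mean change with the sham procedure, mm Hg

effect = round(rdn_change - sham_change, 1)
print(effect)  # -6.3 mm Hg, consistent with the reported adjusted difference
```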
Comparison of available technologies
Recently, Fengler et al. presented the results of the first trial comparing three different techniques and technologies for catheter-based RDN. One hundred and twenty patients with resistant hypertension were randomized in a 1 : 1 : 1 manner to receive either treatment with 1) radiofrequency RDN of the main renal arteries (39 patients), 2) radiofrequency RDN of the main renal arteries, side-branches and accessories (39 patients), or 3) an endovascular ultrasound-based RDN of the main renal artery (42 patients). At 3 months, daytime systolic and diastolic BP decreased significantly in the overall cohort and also within each treatment group (p < 0.001). However, the systolic daytime blood pressure was significantly more reduced in the ultrasound ablation group than in the radiofrequency ablation group of the main renal artery (-13.2 ±13.7 vs. -6.5 ±10.3 mm Hg). No significant difference was found between the ultrasound RDN and the side branch ablation groups, nor between two strategies of radiofrequency RDN. The authors conclude that endovascular ultrasound based RDN seems to be superior to radiofrequency ablation of the main renal arteries only, whereas a combined approach of radiofrequency ablation of the main arteries, accessories and side branches was not [16].
European Society of Hypertension Position Paper on renal denervation 2018
The promising results of the second-generation RCTs confirming safety and short-time efficacy of RDN in new groups of patients and using new technologies prompted European Society of Hypertension (ESH) experts to develop an up-to-date position paper on RDN [17]. In all three studies, in patients who underwent RDN a similar, significant decrease in BP during the follow-up period was observed (Table I). ESH experts emphasize, however, that some questions about RDN remain unanswered. The heterogeneity of the blood pressure-lowering response points to the clinical need to identify predictors of efficacy, and questions on long-term safety could not be answered due to the short duration of the sham-controlled RCTs. It should also be noted that as afferent and efferent renal nerves also play a crucial role in cardiovascular, metabolic and renal diseases other than hypertension, RDN may offer a new interventional treatment option for various conditions (obstructive sleep apnea (OSA), congestive heart failure, atrial fibrillation, chronic renal failure, diabetes).
Renal denervation and obstructive sleep apnea
Considering RDN as a potential treatment option of various conditions other than hypertension, interesting data on the use of RDN in patients with OSA coexisting with resistant hypertension have been reported recently. In a proof-of-concept, observational study Witkowski et al. evaluated the effects of this procedure on BP and sleep apnea severity in patients with resistant hypertension and sleep apnea. Ten patients with refractory hypertension and sleep apnea (7 men and 3 women; median age: 49.5 years) underwent RDN and completed 3-month and 6-month follow-up evaluations, including polysomnography, selected blood chemistries, and BP measurements. Antihypertensive regimens were not changed during the 6 months of follow-up. Three and 6 months after RDN, decreases in office systolic and diastolic BPs (median: -34/-13 mm Hg for systolic and diastolic BPs at 6 months; both p < 0.05) as well as a decrease in apnea-hypopnea index (AHI) at 6 months after RDN (median: 16.3 vs. 4.5 events per hour; p = 0.059) were observed [18]. In their conclusions Witkowski et al. postulated that RDN may be a potentially useful option for selected patients with true resistant hypertension and moderate-to-severe OSA. The same group of authors designed a randomized controlled clinical trial based on a larger group of patients to confirm initial proof-of-concept data [19]. Sixty patients with true resistant hypertension coexisting with moderate-to-severe OSA (AHI ≥ 15) were randomly allocated to the RDN group (30 patients) and the control group (30 patients).

Table II. Summary of renal denervation trials in patients with concomitant obstructive sleep apnea: Witkowski et al. [18], SYMPLICITY HTN-3 [5], Global Symplicity Registry [21], Daniels et al. [22], and Warchol-Celinska et al. [19].

These observations are in line with analyses of the SYMPLICITY HTN-3 [20] and Global Symplicity Registry [21] studies, suggesting that patients with OSA may be particularly responsive to RDN therapy. In another prospective study including twenty resistant hypertensive patients with OSA, moderate blood pressure reduction was achieved after renal denervation with no significant changes in sleep apnea severity [22]. A summary of these trials is presented in Table II. Further studies are undoubtedly warranted to assess the impact of RDN on sleep apnea and its relation to BP decline and cardiovascular risk.
Conclusions
Over the last months, the results of important RCTs using sham treatment have been published, confirming the efficacy and safety of RDN in previously uninvestigated groups of patients - patients with hypertension after drug withdrawal, patients with poorly controlled hypertension despite 1-3 antihypertensive drugs, as well as in patients with resistant hypertension co-existing with obstructive sleep apnea. Despite these promising new results that again widely open up the field of RDN, ESH experts in the current position underline that in accordance with the current recommendations of the European Guidelines 2018 "device based therapies are not recommended in general for the treatment of HTN at least at the current moment" [23]. However, they also recommend conducting RDN in the framework of "clinical studies and sham-controlled RCT (to) further provide safety and efficacy in a larger set of patients". So far the number of patients included in the trials is small, the follow-up duration short and several important questions remain unanswered. The upcoming trials, including pivotal studies, presented in Table III [24][25][26], should provide answers to many questions regarding RDN. It is also of note that RDN may offer a new interventional treatment option for various conditions other than hypertension, especially obstructive sleep apnea.
Invasive Plants: Turning Enemies into Value
In this review, a brief description of the invasive phenomena associated with plants and its consequences to the ecosystem is presented. Five worldwide invasive plants that are a threat to Portugal were selected as an example, and a brief description of each is presented. A full description of their secondary metabolites and biological activity is given, and a resume of the biological activity of extracts is also included. The chemical and pharmaceutical potential of invasive species sensu lato is thus acknowledged. With this paper, we hope to demonstrate that invasive species have potential positive attributes even though at the same time they might need to be controlled or eradicated. Positive attributes include chemical and pharmaceutical properties and developing these could help mitigate the costs of management and eradication.
Introduction
Invasive species are a menace to the ecosystem of their surroundings. These invasions are one of the great threats to biodiversity, since invasive species establish and supersede native species, frequently leading to the extinction of the latter. Invasive plant interactions in the ecosystem comprise the alteration of abiotic or biotic conditions such as nutrient and water availability, and the disturbance of bacterial and fungal communities, as well as of plant-plant and plant-herbivore interactions. Allelopathic compounds may be released by invaders and fire regimes are also affected, whilst the derived increase in decomposition of organic matter influences the nitrogen and carbon cycles. All of these influence the remaining organisms, micro-, animal or vegetable, thus compromising and altering the established biodiversity [1,2]. Apart from the ecological impact, they also have a socio-economic impact by influencing human-health, infrastructures and local economies [3][4][5]. Invasive species are presently one of the concerns of the European Union [6,7], and particularly of Portugal [8,9], where several species have been recognized.
The plant invasive phenomenon begins with their introduction (accidental or deliberate), proceeds by establishment of the species (through biotic and abiotic factors), and ends with its spread and impact [10]. Several theories of the mechanisms involved in the invasion process have been advanced (that may very well act together), including the enemy release hypothesis, the evolution of increased competitive ability, the novel weapon hypothesis and the allelopathic advantage against resident species hypothesis: in short, invasive plants face less or no enemies or predators in the new ecosystem, can thus redirect resources to favor establishment, being more competitive than local species, and may very well develop new biochemical weapons [3,10].
The management of the invasion problem includes many features like risk assessment, vector management, early detection, eradication, mitigation and restoration [5]. Of these, and for the perception of the general audience, early detection, sometimes with the help of the community, and mitigation are the most obvious, the latter being achieved by mechanical means, biological control, and/or chemical remedies. Interesting reviews on the control of invasive species and maintenance of biodiversity can be found in the literature [11,12]. Nevertheless, all management and control actions have costs that, in Europe, are estimated as millions of euros [13]. The search for alternative measures to the current status should thus be addressed. An alternative use for these species should be obtained, since eradication is far from being attained in most cases. We here suggest their use as a source of potential pharmaceuticals that, once available, could generate income, thus reducing the global costs of eradication. We must, however, be cautious in this approach-we do not want to sustain the targeted species, but rather eradicate them. Therefore, we propose eradication procedures that include not incinerating these species or burying them in landfills but rather processing them for chemical constituents. The delicate subject of harvest incentives is already the focus of an interesting review by Pasko and Goldberg [14]: attention must be paid to several points including: biological (population dynamics, overcompensation and dispersal), ecological, and socioeconomical (management goals, market economics) factors, among others. This type of approach would fall in the category of 'commercial use' whose issues and risks have also been the subject of the study of the French National Work Group on Biological Invasions in Aquatic Environments (IBMA) [15]. 
The temporary commercial use of established invasive species is already foreseen in European regulations (provided it is included in management measures aiming eradication) [16].
Furthermore, we do not intend to use invasive species in traditional medicines or phytotherapies. We defend the search for active principles and scaffolds, together with detailed pharmacological and toxicity studies, namely the usual route to the discovery of lead molecules. And for that, we must start at the beginning. First, we need to acknowledge the underlying potential of invasive species. Their ease of adaptation and control of the new habitat implies adaptation to different soil compositions, different water and weather stress conditions, ability to find reproductive strategies, competitive advantage, and, of course, resistance to new predators. For that, plants rely on their chemical machinery: their ability to synthesize allelopathic or deterrent compounds may very well mean the difference between life and death, especially when it comes to resistance to predators. The chemistry of invasive species must, as such, be very varied and with an enormous biological activity potential that remains yet to be explored. This, of course, implies new studies in this area, focusing on biological activity of isolated products-most of the existing phytochemical studies of invasive species focus on biological activity of extracts and no active principles were isolated; the search for the active metabolites should thus be a priority. It is also desirable that these studies focus on the species as invaders, and not on their native constitution. Most probably the prevalence of invasive species over endemic ones relies on a yet unknown biological activity of the metabolites they produce; these are surely responsible for their ease of expansion and dominance of the new habitat, and so their chemistry can surely be correlated to their ability to survive in non-native ecosystems. Moreover, the use of these species as a source of therapeutics would allow a rational use of resources that would eventually mitigate the cost of their removal. 
Are phytochemistry and bioactive natural products the miraculous solution to the invasive problem? Of course not, but they may be something worth trying, as we are trying to illustrate in this review.
In this paper, we chose worldwide invasive species that are a threat to Portugal where they are recognized by government [8] and the scientific community [9], in order to illustrate their potential as a source of bioactive metabolites: Carpobrotus edulis, Hakea salicifolia, Hakea sericea, Oxalis pes-caprae and Phytolacca americana. They are all invasive to Mainland Portugal, and Carpobrotus edulis, Oxalis pes-caprae and Phytolacca americana are also invasive to Madeira and Azores. Some of them are recognized by the European and Mediterranean Plant Protection Organization either as Pests (Hakea sericea, invasive also to Spain and France) or Invasive Plants (Carpobrotus edulis, invasive also to Spain, France, UK, Italy, Malta and Israel, and Oxalis pes-caprae, invasive also to Malta, Georgia, and Israel). Although their chemistry is poorly studied, either as native or invasive, several reports exist on the biological activity of their extracts that could, and should, be further explored.
Carpobrotus edulis L.
Carpobrotus edulis (common name ice plant, Aizoaceae) is a succulent perennial subshrub that can grow to several meters tall. It was introduced in Portugal for ornamental purposes where it is grown for maintenance of dunes and slopes. It shows vigorous growth leading to the formation of continuous vegetative areas that prevent the existence of native vegetation. It promotes soil acidification and can exist in damp or dry areas. It is native to South Africa [9], where it finds use in traditional medicine for symptoms of tuberculosis, throat infections, diarrhea, dysentery, burns, stomach ailments, chilblains, mouth ulcers, sinusitis and diabetes [17]. It is also invasive in Southern Europe, Western USA, New Zealand and North Africa [18]. From a study of the MeOH extract of a population collected in Sintra, Portugal, the compounds in Figure 1 were isolated [19].
Carpobrotus edulis L.
Carpobrotus edulis (common name ice plant, Aizoaceae) is a succulent perennial subshrub that can grow to several meters tall. It was introduced in Portugal for ornamental purposes, where it is grown for the maintenance of dunes and slopes. It shows vigorous growth, leading to the formation of continuous vegetative areas that prevent the existence of native vegetation. It promotes soil acidification and can exist in damp or dry areas. It is native to South Africa [9], where it finds use in traditional medicine for symptoms of tuberculosis, throat infections, diarrhea, dysentery, burns, stomach ailments, chilblains, mouth ulcers, sinusitis and diabetes [17]. It is also invasive in Southern Europe, Western USA, New Zealand and North Africa [18]. From a study of the MeOH extract of a population collected in Sintra, Portugal, the compounds in Figure 1 were isolated [19]. Their ability to inhibit P-glycoprotein (the efflux pump responsible for the multidrug resistance of the used cell line) in mouse lymphoma cells containing the human efflux pump gene MDR1, and their antibacterial activity, were studied [19,20]: uvaol 3 was the most effective and promising compound in the reversal of multidrug resistance in the MDR mouse lymphoma cell line, whilst oleanolic acid 2 presented high antibacterial activity against a large number of bacterial strains [20].
There have been several studies on the biological activity of extracts of this species (Table 1). These include: anti-Proteus [21] and anti-Klebsiella [22] activities of the MeOH and water extracts of South African specimens, indicating their potential for blocking the onset of rheumatoid arthritis and of ankylosing spondylitis [21,22]; inhibition of the growth of phagocytosed multidrug-resistant Mycobacterium tuberculosis and methicillin-resistant Staphylococcus aureus by the MeOH/water extract of a specimen from Sintra, Portugal, suggesting that this plant may serve as a source of new antimicrobial agents effective against problematic drug-resistant intracellular infections [23]; neuroprotective properties of the n-hexane, CH2Cl2, AcOEt and MeOH extracts of a specimen from Faro, Portugal, suggesting that the consumption of C. edulis leaves can contribute to a balanced diet and may aid the improvement of cognitive functions [24]; the effect of the MeOH/water extract of an undisclosed specimen in inhibiting MDR efflux pumps, enhancing the killing of phagocytosed S. aureus and promoting immune modulation, indicating that the resistance-modifier and immunomodulatory effects of this plant extract could be exploited in the experimental chemotherapy of cancer and of bacterial or viral infections [25]; antioxidant, metal-chelating and anticholinesterase activities of MeOH extracts of specimens collected in the Algarve, Portugal, together with their fatty acid profile, indicating that C. edulis is a candidate for novel and alternative therapies for the treatment of neurological disorders associated with low levels of acetylcholine in the brain [26]; antioxidant and antimicrobial activity of the acetone/water extract of a specimen collected in Monastir, Tunisia, emphasizing the potential cosmetic and therapeutic use of this plant [27]; antioxidant activity of the n-hexane, acetone, EtOH and water extracts of a specimen from the Eastern Cape, South Africa, which may justify the traditional use of this plant in the management of common diseases in HIV/AIDS patients in the Eastern Cape Province [28]; and inhibition of protein glycation, antioxidant and antiproliferative activities of the EtOH and EtOH/water extracts of specimens collected in Sousse, Tunisia [17]. From this last study, and by HPLC analysis with standards, sinapic acid, ferulic acid, luteolin 7-O-glucoside, hyperoside, isoquercitrin, ellagic acid and isorhamnetin 3-O-rutinoside were identified [17]. The results suggest that C. edulis extracts could be used as an easily accessible source of natural antioxidants and as potential phytochemicals against protein glycation and colon cancer. More recently, a study of the biological activity of the peel and flesh extracts (water, EtOH and acetone) of the fruits of a C. edulis specimen collected in the Algarve, Portugal, was published [29]. Antioxidant, antimicrobial and enzymatic inhibitory properties and toxicity were evaluated, and more than 80 compounds (mostly phenolic acids, flavonoids, and coumarins) were identified by HPLC-ESI-MS/MS, with or without standards. The potential use of the fruits of C. edulis as sources of molecules and/or products for the food, pharmaceutical, agriculture and cosmetic areas is suggested.
Table 1. Biological activity of extracts of Carpobrotus edulis.
Part of Plant   Extract                             Activity                                                             Ref.
Undisclosed     MeOH                                antioxidant, metal chelating and anticholinesterase                  [26]
Leaves          acetone/water                       antioxidant and antimicrobial                                        [27]
Leaves          n-hexane, acetone, EtOH and water   antioxidant                                                          [28]
Undisclosed 1   EtOH and EtOH/water                 inhibition of protein glycation, antioxidant and antiproliferative   [17]
Fruits 2        water, EtOH and acetone             antioxidant, antimicrobial and enzymatic inhibitory activity         [29]
1 Sinapic acid, ferulic acid, luteolin 7-O-glucoside, hyperoside, isoquercitrin, ellagic acid and isorhamnetin 3-O-rutinoside were identified; 2 more than 80 compounds (mostly phenolic acids, flavonoids, and coumarins) were identified.
Hakea salicifolia (Vent.) B. L. Burtt and Hakea sericea Schrader
Hakea salicifolia (common name willow-leaved Hakea, Proteaceae) is a perennial shrub or small tree (up to 5 m) with reddish twigs. It was introduced in Portugal for ornamental purposes and for the formation of hedges in windy zones, near the shore. It is well adjusted to nutrient-depleted soils, preferring sunny areas. It is native to Southeast Australia and Tasmania [9]. It is also invasive in Europe, Australia, New Zealand and South Africa [18].

Hakea sericea (common name silky Hakea, Proteaceae) is a perennial shrub or small tree (up to 4 m) with an irregular top and robust, very sharp needle-like leaves. It was introduced in Portugal for ornamental purposes and for the formation of hedges. It prefers disturbed areas, such as along the sides of roads. It is resistant to wind and drought. It is native to Southern Australia [9]. It is also invasive in Southern Europe, New Zealand and South Africa [18].
Chemical studies on these species refer only to the isolation of 9-(3,5-dihydroxy-4-methylphenyl)nona-3(Z)-enoic acid 8 (Figure 2) from the MeOH extract of the fruits of H. sericea collected in Serra da Estrela, Portugal [30,31]. The antibacterial properties of this new alkenylresorcinol were studied against several strains of Gram-positive and Gram-negative bacteria using the resazurin microtiter assay. Good MIC values were obtained against Staphylococcus aureus strains (0.005-0.16 mg/mL), including the clinical isolates (SA 01/10, SA 02/10 and SA 03/10) and MRSA strains [30]. The possible economic valorization of this species has been suggested, based on the putative use of this compound in the preservation of foods or as an alternative to conventional antibiotic therapy [31].
Three reports can be found on the biological activity of extracts of these species (Table 2). These comprise: the antimicrobial activity of n-hexane, CH2Cl2, EtOAc, MeOH and water extracts of both species, collected in Lisbon, Portugal, against Gram-positive and Gram-negative bacteria, including methicillin-resistant S. aureus, where the twigs' aqueous extract showed the strongest antimicrobial activity (MIC 7.5-62 µg/mL) against the tested methicillin- and vancomycin-resistant strains of S. aureus [32]; the antioxidant potential of MeOH extracts of H. sericea collected at Serra da Estrela, Portugal [33]; and the antimicrobial, antibiofilm and cytotoxic activities of the MeOH extracts of H. sericea collected at Serra da Estrela, Portugal, demonstrating that H. sericea is a potential source of bioactive compounds with antimicrobial activity, namely against several S. aureus strains, including clinical MRSA [34].

Table 2. Biological activity of extracts of Hakea species.
Part of Plant                            Extract                                   Activity                                                 Ref.
(not specified)                          n-hexane, CH2Cl2, EtOAc, MeOH and water   antimicrobial (Gram-positive and Gram-negative bacteria)  [32]
Stems, leaves and fruits of H. sericea   MeOH                                      antioxidant                                              [33]
Fruits of H. sericea                     MeOH                                      antimicrobial, antibiofilm and cytotoxic                 [34]
All these reports refer to Hakea species as invasive.
Oxalis pes-caprae L.
Oxalis pes-caprae (common name bermuda buttercup, Oxalidaceae) is a perennial herb (up to 40 cm) with bulbils. It was probably introduced for ornamental purposes. It grows in cultivated lands and bare places, especially on loamy soils. It does not tolerate frost, and low temperatures lead to dryness of the aerial parts. It is native to South Africa (Cape region) [9]. It is also invasive in Mediterranean Europe, Western USA, Asia, Australia, New Zealand and South Africa [18].
Oxalis species owe their sour taste to the presence of oxalic acid, a toxic compound that may cause nervous system paralysis in large herbivores when consumed in great quantities [35]. Several Oxalis species have been used in folk medicine due to their antihypertensive effects [35].
Few reports exist on the chemistry of this species. These include the identification of phenolics and flavonoids from the EtOAc, MeOH and BuOH/water extracts of a specimen collected in Crete (Figure 3) [36]. While compound 12 was isolated, compounds 9-11 were tentatively identified by LC-DAD-MS. The extracts exhibited high levels of antioxidant activity, and the authors suggest that these invasive plants may serve as an inexpensive and renewable source of bioactive compounds [36].
Studies of DellaGreca et al. on the AcOEt, MeOH and water extracts of specimens collected in Bacoli, Naples, where this species is invasive on cultivated lands, led to the isolation of the compounds in Figure 4, together with common phenolics [37][38][39][40]. These include p-coumaric acid, dihydrocinnamic acid, cis-p-coumaric acid, cinnamic acid, 1,2,3,4-tetrahydro-1-methyl-β-carboline-3-carboxylic acid, 3-methoxyphenol, 2-methoxyphenol, 4-hydroxybenzoic acid, 4-(1-hydroxyethyl)phenol, and 3-(1-hydroxyethyl)phenol. The isolated compounds were tested for their activity towards the germination and growth of Lactuca sativa (lettuce).
The phytotoxicity observed for some of these compounds on the germination and growth of lettuce seeds seems to contribute to the invasiveness of the plant, and their use as agrochemicals, if suitably prepared and/or modified, is suggested [37][38][39]. Further reports include studies of an extract of the leaves of an undisclosed specimen for vascular, antioxidant and neuroprotective activities, suggesting the potential use of this extract as a source of bioactive compounds [35].
Phytolacca americana L.
Phytolacca americana (common name pokeweed, Phytolaccaceae) is a large, branched herb (up to 3 m), sometimes lignified at the base. It was introduced for medicinal purposes and for use in dyeing. It exists in disturbed and ruderal habitats, agricultural fields and along the sides of roads. It is native to North America [9]. It is also invasive in Europe and Western USA [18].
On the chemistry of this species, we can find the isolation of saponins in the works of Ding et al. on the acaricidal activity of the petroleum ether, acetone and MeOH extracts of a Chinese specimen. By LC/MS, the two compounds in Figure 5 were identified [41].
Among the P. americana extracts evaluated, the root acetone extract showed the highest acaricidal activity against T. cinnabarinus female adults [41].
The work of Jeong et al. reports the isolation of α-spinosterol from the MeOH extract of the roots of a Korean specimen and its action on diabetic nephropathy, suggesting that this compound has significant therapeutic potential [42], while Jerz et al. report the isolation of betalains from the berries of an undisclosed specimen [43].
Works of Takahasi et al. report the isolation of 1,4-benzodioxane derivatives from the MeOH extract of the seeds of Japanese specimens and their neuritogenic activity in primary cultured rat cortical neurons, suggesting their role as potential candidates for non-peptide neurotrophic agents (Figure 6) [44][45][46]. The saponins esculentoside B and esculentoside S were also isolated [46].
Reports on the biological activity of extracts of this species include (Table 3): molluscicidal activity of the water extract of the berries against invasive snails (Viviparus georgianus and Pimephales promelas), suggesting that P. americana could be used as a mollusk control agent in aquaculture applications [47]; antifungal activity of the MeOH/water extracts of the aerial parts of a Korean specimen towards phytopathogenic fungi, confirming that extracts originating from invasive plants can be used directly to develop new and effective classes of natural fungicides to control severe fungal diseases [48]; allelopathic activity of the aqueous leaf extract of a South Korean specimen on Cassia mimosoides [49]; antibacterial effect of the MeOH/water extract of the aerial parts of a Korean specimen on pathogens responsible for periodontal inflammatory diseases and dental caries, suggesting that these extracts have potential for use in the preparation of toothpaste and other products related to various oral diseases [50]; antiproliferative activity of the saponin-rich EtOH/water extract of the roots of a Chinese specimen [51]; and inhibition of infection by Cucumber mosaic virus and Influenza virus by a phosphate buffer extract of the leaves of a California specimen [52]. Finally, a patent registers a method for treating all types of polycystic kidney disease using the herb Phytolacca americana, among others [53].

Table 3. Biological activity of extracts of Phytolacca americana.
Part of Plant   Extract            Activity                        Ref.
Berries         water              molluscicidal                   [47]
Aerial parts    MeOH/water         antifungal                      [48]
Leaves          water              allelopathic                    [49]
Aerial parts    MeOH/water         antibacterial                   [50]
Roots           EtOH/water         antiproliferative               [51]
Leaves          phosphate buffer   inhibition of viral infection   [52]
Conclusions
In this review, we chose examples of shrubs (Hakea), herbs (Oxalis and Phytolacca) and a succulent plant (Carpobrotus) to illustrate the varied chemical and pharmaceutical potential of invasive plants. Although poorly studied from the perspective of beneficial attributes, as most invasive species are, the extracts of these species show interesting biological activities, ranging from antioxidant, antimicrobial and antifungal, to neuroprotective and neuritogenic, including antiproliferative and cytotoxic, anticholinesterase, allelopathic and inhibition of viral growth. We thus clearly demonstrate the chemical potential of several kinds of invasive species, a potential that should be further explored: invasive plants pose a current problem that should be turned into a profitable resource. The use of invasive species as a source of active metabolites could help reduce the present and future costs of control and management, turning them into that added-value resource. As such, additional efforts should be directed towards the phytochemical study of these species in their invasive habitat. These studies should be complemented with a large-scope analysis of the bioactivities of isolated products, such as antimicrobial, antioxidant, anticancer/antiproliferative and anti-inflammatory activities, among others. This, of course, is only the beginning; time will tell if there is in fact any use for the isolated bioactive metabolites: the discovery of pharmaceutical lead compounds is a long process, and substantial toxicity studies will also have to be made. We hope, however, to encourage the development of chemical studies of invasive species in the EU and worldwide, since they are most probably a source of active metabolites, and possibly of new active-principle scaffolds. As such, we want to stimulate the scientific community to proceed with the thorough and detailed chemical analysis of invasive species at the same time as eradication measures are being maintained.
We want to alert the scientific community to the possibility of taking advantage of the metabolites produced by invasive species while eradicating them. We have no intention of valuing these species in order to delay or discourage their eradication, but rather to conduct studies on chemical composition and pharmacological application at the same time as control actions are being maintained.
Author Contributions: All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Neutralization of the new coronavirus by extracting their spikes using engineered liposomes
The devastating COVID-19 pandemic motivates the development of safe and effective antivirals to reduce morbidity and mortality associated with infection. We developed nanoscale liposomes that are coated with the cell receptor of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes COVID-19. Lentiviral particles pseudotyped with the spike protein of SARS-CoV-2 were constructed and used to test the virus neutralization potential of the engineered liposomes. Under TEM, we observed for the first time a dissociation of spike proteins from the pseudovirus surface when the pseudovirus was purified. The liposomes potently inhibit viral entry into host cells by extracting the spike proteins from the pseudovirus surface. As the receptor on the liposome surface can be readily changed to target other viruses, the receptor-coated liposome represents a promising strategy for broad spectrum antiviral development.
and antibodies). In the absence of an effective vaccine at the early stages of an emerging highly pathogenic outbreak, the availability of safe and effective treatments is critically important to save lives. At present, clinically approved antiviral drugs are only available for 10 human viral pathogens, despite the vast diversity of more than 200 human viruses. 2 As viral disease emergence and re-emergence is expected to accelerate due to the massive increase in globalization and increasing human connectivity, there is a critical need to develop broad-spectrum antiviral treatments to prepare humanity for future outbreaks. 3 Viruses are known to infect specific human target cells, following sequential stages of attachment, penetration, uncoating, replication, assembly, and release. 4 Virions are the infectious form of viruses: highly organized nanoscale structures that protect the viral genome before delivering it to suitable host cells. 5 For certain viruses, the genome-containing capsid is enclosed in an envelope, which is a lipid bilayer membrane. 6 Virion envelopes contain one or more species of virus-specified membrane glycoproteins that mediate viral entry into host cells. 7 After entry, the viral capsid is removed and degraded by viral or host enzymes, releasing the viral genomic nucleic acid. The host transcription and translation machinery is then hijacked to replicate the viral genome and proteins. Assembly of the essential viral proteins with a newly replicated viral genome yields new virions that are released from host cells by either budding or lysis. 4 There is an undeniable need to develop broadly acting antiviral drugs, since which deadly virus will next emerge in humans is not normally predictable.
Nanomaterials have been intensively pursued in the biomedical field over the last three decades due to their uniquely appealing features for drug delivery, diagnosis, imaging and miniaturized medical devices. 10 Considerable technological success has been achieved in cancer treatment, largely driven by the unmet need to reduce cancer mortality and morbidity. 11 The ongoing pandemic awakened growing interest in developing nanomaterials for antiviral purposes. 12 Given that the physical size of most viruses falls in the range of 20-200 nm, the relevance of engineering nanomaterials with various well-established technologies to target viruses is clear. 13 Compared with cancer cells in the complex tumor environment, viruses are much simpler in structure and function, and thus may turn out to be a relatively easy-to-conquer target.
SARS-CoV-2 attaches to host cells via the same angiotensin-converting enzyme 2 (ACE2) receptor as SARS-CoV using its spike glycoprotein. 14 Upon binding, the spike protein is primed by the transmembrane protease serine 2 (TMPRSS2) on the cell surface to initiate cellular entry.
Several studies have shown that recombinant human ACE2 (rhACE2) and antibodies against the spike protein both blocked cellular entry of the virus in vitro. [15][16][17] Inspired by these findings, we herein demonstrate the specific neutralization of pseudotyped SARS-CoV-2 by rhACE2-coated liposomes as an example of the antiviral potential of receptor-coated nanomaterials.

The hydrodynamic size of the liposomes was measured by dynamic light scattering (DLS) using a Zetasizer (Malvern Panalytical). The liposomes were also imaged using TEM with negative staining to analyze the morphology.
Construction of Pseudovirus.
Pseudoviruses bearing the spike protein of SARS-CoV-2 and carrying either a green fluorescent protein (GFP) or a firefly luciferase reporter gene were produced in human embryonic kidney 293T cells using packaging plasmids obtained from Takara Bio, following the manufacturer's instructions. Plasmids for SARS-CoV-2 pseudovirus construction, including a mixture of two plasmids (one for lentiviral packaging and the other encoding the SARS-CoV-2 spike), two reporter vectors, pLVXS-ZsGreen1-Puro and pLVXS-Luciferase-Puro, and Lenti-X Concentrator were purchased from Takara Bio. These vectors were amplified in E. coli and stored at 4°C before use. The pseudovirus was imaged using TEM with negative staining to analyze the morphology. 293T cells expressing both ACE2 and TMPRSS2 were used to analyze the infectivity of the pseudovirus.
Pseudovirus Neutralization. Neutralization was measured by the reduction in luciferase gene expression, as reported for the HIV pseudovirus neutralization assay. 19 The 50% effective concentration (EC50) was defined as the rhACE2 concentration at which the relative light units (RLUs) were reduced by 50% compared with the no-treatment control wells.
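The EC50 readout described above reduces to a simple computation: convert each well's RLU into a percent inhibition relative to untreated virus wells, then interpolate where the dose-response curve crosses 50%. The sketch below is illustrative only; the dilution series and RLU values are hypothetical, not data from this study, and a log-linear interpolation is assumed in place of a full curve fit.

```python
import math

def percent_inhibition(rlu, rlu_virus_only, rlu_background=0.0):
    """Percent reduction in luciferase signal relative to untreated virus-only wells."""
    return 100.0 * (1.0 - (rlu - rlu_background) / (rlu_virus_only - rlu_background))

def ec50_log_interp(concs, inhibitions):
    """Interpolate, linearly on log10(concentration), the 50% inhibition crossing.
    `concs` must be ascending, with inhibition rising across the crossing point."""
    pairs = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 < 50.0 <= i2:
            frac = (50.0 - i1) / (i2 - i1)
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% inhibition is not bracketed by the dilution series")

# Hypothetical rhACE2 dilution series (µg/mL) and raw RLU readings
concs = [0.1, 1.0, 10.0, 100.0]
inhib = [percent_inhibition(r, rlu_virus_only=10000.0)
         for r in (9000.0, 7000.0, 4000.0, 1000.0)]
# inhib == [10.0, 30.0, 60.0, 90.0]; the 50% crossing lies between 1 and 10 µg/mL
print(round(ec50_log_interp(concs, inhib), 2))  # prints 4.64
```

In practice a four-parameter logistic fit would replace the piecewise interpolation, but the definition of the endpoint is the same.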
TEM Imaging
The pseudoviral particles suspended in cell culture medium or PBS were absorbed onto the carbon film of a TEM grid (Product # 01810, TED PELLA, Redding, CA) by placing the grid carbon side down on top of a drop of sample for 20 min. The excess fluid was then blotted off using filter paper, and the grid was placed within a large drop of PBS containing 2% paraformaldehyde for 5 min to fix the proteins of the pseudovirus. The excess fixing fluid was blotted off and the grid rinsed with two drops of DI water. The sample was then stained with 2% phosphotungstic acid (PTA) for 30 s. The excess staining fluid was blotted off and the grid was left to dry at room temperature overnight before being imaged by TEM.
Synthesis and Characterization of rhACE2-Conjugated Liposomes
To conjugate rhACE2 to the liposome surface, we included a lipid with a nitrilotriacetic acid-Nickel(II) (NTA-Ni2+) head group in the liposome formulation and incubated preformed liposomes with his-tagged rhACE2, exploiting the chelation chemistry between NTA-Ni2+ and the his-tag. 20,21 The average hydrodynamic sizes of naked and rhACE2 liposomes were measured by dynamic light scattering (DLS) to be 112±2.3 and 125±2.7 nm, respectively (Figure 1A). The size increase after conjugation indicates that the protein associated with the liposomes during incubation. TEM with negative staining also revealed tiny spikes of less than 5 nm on the surface of rhACE2 liposomes, in contrast to the smoother surface of naked liposomes (Figure 1B,C). However, the difference in surface morphology alone may not be conclusive evidence of rhACE2 association with the liposomes, as the spikes could be artifacts caused by negative staining. To confirm the rhACE2/liposome association after the conjugation incubation, we included liposomes without the NTA-Ni2+ lipid as a control in the conjugation experiment. No significant size change of the NTA-Ni2+-free liposomes from before to after the incubation was observed, which strongly supports that the size increase of the liposomes with NTA-Ni2+ is a convincing confirmation of rhACE2 association with the liposomes (Figure 1D). Both naked and rhACE2 liposomes showed high colloidal stability for more than two weeks when stored at 4°C, as found by monitoring their size distribution with DLS (Figure 1E).
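The size-increase argument above (112±2.3 nm vs. 125±2.7 nm) can be checked with a simple two-sample comparison. The sketch below is illustrative only: the replicate Z-average values are invented to be consistent with the reported means and SDs, not the study's raw data, and Welch's t statistic stands in for whatever test the authors used.

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (mb - ma) / (va / len(a) + vb / len(b)) ** 0.5

# Hypothetical DLS replicates (nm), consistent with ~112±2.3 and ~125±2.7 nm
naked  = [112.0, 109.8, 114.2, 111.5, 112.6]
rhACE2 = [125.3, 122.7, 127.1, 124.4, 125.9]

t = welch_t(naked, rhACE2)
# A large t (here >> 4) supports a genuine size shift rather than measurement noise
print(t > 4.0)  # prints True
```

With a ~13 nm shift against ~2.5 nm spreads, the statistic is large for any reasonable replicate count, which is why the parallel no-NTA control (no size change) is the decisive comparison.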
Construction and Characterization of SARS-CoV-2 Spike Pseudotyped Virus
Antiviral testing of highly infectious viruses must be conducted in biosafety level (BSL) 3 or 4 facilities. 22 A popular alternative is to instead use pseudoviruses, which are synthetic chimeras that consist of the cell entry-mediating surface proteins of the targeted virus and a surrogate viral core from a different virus. 23 Pseudoviruses are replication-incompetent, as they lack the essential viral genes while possessing the same tropism and host entry pathway characteristics as live viruses, and thus can be safely handled in a BSL-2 lab. 24 We constructed SARS-CoV-2 spike pseudotyped lentivirus and used it to test the SARS-CoV-2 neutralization potential of rhACE2 liposomes. To prepare the SARS-CoV-2 pseudovirus, 293T cells were transfected with a lentiviral packaging plasmid, a plasmid encoding the spike protein, and a transfer plasmid encoding either GFP or luciferase. The success of pseudotyping inside cells was assessed by the expression of GFP (Figure 2A). TEM with negative staining revealed an average size of ~114.5±13.8 nm (n=200) for the pseudoviral particles collected from cell culture medium ( Figure 2B). Spike-like projections up to ~20 nm in size on the surface can be clearly observed at higher magnifications, which is consistent with the size of the trimeric spike protein ( Figure 2C). We attempted to concentrate the pseudoviruses using a polyethylene glycol (PEG) precipitation method, in which PEG preferentially traps solvent and sterically excludes virions from the solvent phase. However, we observed unexpected dissociation of spike proteins from the pseudovirus surface under TEM after the concentrated pseudoviral particles were re-dispersed in PBS ( Figure 2D). The spike proteins on the pseudovirus can be either partly or completely removed from the surface by the concentration step ( Figure 2E,F).
It is likely that the spike proteins of the pseudovirus are not as firmly anchored to the lentiviral membrane as those in SARS-CoV-2 live virus. A recent publication reported that lentiviral particles pseudotyped with only the spike protein of SARS-CoV-2 are less infectious to ACE2-expressing cells than those pseudotyped with the membrane and envelope proteins in addition to the spike protein. 24 The stabilization of spike proteins on the membrane of the pseudovirus by the membrane and envelope proteins of SARS-CoV-2 may explain the increased infectivity.
Infectivity of SARS-CoV-2 Pseudovirus
We used 293T cells that stably express ACE2 and TMPRSS2 to test the infectivity titer of the pseudovirus. 25 The original medium containing the pseudovirus was serially diluted before addition to the cells. After 48 h, the number of infected cells, as indicated by the green fluorescence of the GFP-labeled pseudovirus, was found to be inversely dependent on the dilution factor ( Figure 3A). When luciferase-labeled pseudovirus was used, the infectivity was quantified with a luciferin assay ( Figure 3B). An excellent linear fit was established between the log scale of the dilution factor and the infectivity. Although the infectivity titer of this lentivirus-based pseudovirus might be lower than that of pseudoviruses with a vesicular stomatitis virus (VSV) backbone, the pseudotyping procedure is more straightforward and less time-consuming and, more importantly, it provides a large dynamic range for generating neutralization dose-response curves. 26
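The log-linear relationship between dilution factor and readout can be illustrated numerically. The following is a minimal sketch with synthetic luciferase values (the dilution series and signal model are illustrative, not data from this study):

```python
import numpy as np

# Hypothetical 4-fold serial dilutions and an idealized luciferase readout
# proportional to the number of infectious particles added to each well.
dilution = np.array([4.0, 16.0, 64.0, 256.0, 1024.0])
signal = 1.0e6 / dilution  # doubling the dilution halves the signal

# Fit log10(signal) against log10(dilution); a slope of -1 corresponds to
# the ideal linear regime used when titrating the pseudovirus stock.
slope, intercept = np.polyfit(np.log10(dilution), np.log10(signal), 1)
```

A slope near -1 over the dilution series indicates the assay is operating in the linear dynamic range described above.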
Neutralization of SARS-CoV-2 Pseudovirus by rhACE2 Liposomes
To test if rhACE2 liposomes can prevent the SARS-CoV-2 pseudovirus from infecting ACE2-TMPRSS2-expressing 293T cells, luciferase-labeled pseudovirus was incubated with the liposomes at specified concentrations of rhACE2 at 37°C for 1 h before addition to the cell culture medium. This assay has been optimized and is suitable for a variety of SARS-CoV-2 entry and neutralization screening assays. 25 The luciferase assay revealed that the rhACE2 liposomes potently inhibited the infectivity while the naked liposomes did not ( Figure 4A). The half-maximal inhibitory concentration (IC50) was measured to be ~0.41 (Figure 4B,C,D). The spikes appeared to be removed by the rhACE2 liposomes during the incubation while the virus core remained relatively structurally intact ( Figure 4C,D). This is unexpected, as we had actually designed the liposomes conjugated with multiple rhACE2 to crosslink pseudoviruses coated with multiple spike proteins, assuming the spike proteins were firmly anchored on the pseudovirus membrane.
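As a numerical aside, an IC50 of this kind is read off a dose-response curve. The sketch below generates idealized inhibition data from a simple one-site inhibition curve with an assumed IC50 of 0.41 (arbitrary concentration units, chosen only to mirror the value quoted above) and recovers it by interpolation:

```python
import numpy as np

# Idealized dose-response: fraction of pseudoviral infectivity remaining
# versus rhACE2 concentration, generated from a one-site inhibition curve.
# The IC50 value and concentration grid are illustrative assumptions.
ic50_true = 0.41
conc = np.logspace(-2, 1, 13)              # 0.01 to 10, log-spaced
infectivity = 1.0 / (1.0 + conc / ic50_true)

# Estimate IC50 as the concentration at 50% remaining infectivity,
# interpolating on a log-concentration axis (np.interp needs increasing x).
ic50_est = 10 ** np.interp(0.5, infectivity[::-1], np.log10(conc)[::-1])
```

In practice a four-parameter logistic fit would be used on noisy replicate wells; the interpolation above is the simplest version of the same readout.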
Discussion
More than two years into the pandemic, SARS-CoV-2 mutations continue to emerge, necessitating continued development of booster vaccines. 31 One of the earliest treatments is transfusion of convalescent plasma from recovered patients. 32 While its clinical benefits in some COVID-19 patients are encouraging, the treatment is limited by the availability of donor plasma.
The repurposing of clinical drugs has so far yielded four antivirals, including remdesivir, nirmatrelvir/ritonavir (sold under the brand name Paxlovid), molnupiravir (sold under the brand name Lagevrio), and baricitinib. 33 Their effectiveness in reducing the morbidity and mortality associated with infection varies. Only Paxlovid has decent effectiveness, but it needs to be administered within five days of symptom onset and only for the treatment of mild-to-moderate disease. 34 Five monoclonal antibodies received emergency use authorization (EUA) from the FDA, but four of them have been paused due to lack of activity against newly emerging variants. 35 Engineered nanomaterials that have been proposed for the treatment of COVID-19 include nanotraps, which are rhACE2-coated polymer-lipid hybrid nanoparticles; ACE2-presenting nanodecoys or nanocatchers made by rupturing ACE2-overexpressing cells into nanoscale vesicles; and exosomes secreted from ACE2-expressing cells. While all of these nanomaterials appear to be promising for COVID-19 treatment, as they were demonstrated to potently inhibit viral entry, the complexity of their fabrication may impede further development towards the clinic. 27,28,30,36,37 There is still a pressing need to develop antivirals not only for the ongoing pandemic but also for future viral outbreaks that are likely to occur.
In order to demonstrate the potential of nanotechnology for broad-spectrum antiviral development, we coated rhACE2 on nanoscale liposomes. Surprisingly, the rhACE2 liposomes extracted the spikes from the pseudovirus membrane after 1 h of co-incubation at 37°C. One reasonable explanation is that the random Brownian motion of the two types of nanoscale particles during the specific interaction extracted the spikes from the pseudoviruses, as the rhACE2 may be more firmly attached to the liposome membrane than the spikes are to the pseudovirus membrane. rhACE2 proteins have been proposed for the treatment of COVID-19, but the small proteins may be rapidly cleared from the blood following administration. 39,40 The plasma circulation time of rhACE2 could be significantly extended when associated with liposomes, as observed for many other nanomedicines. 41 Liposomes are arguably the most successful nanomaterial for drug delivery, with over 15 different liposome formulations in clinical use with high stability, excellent pharmacokinetics, high cargo loading, and minimal toxicities. 42 rhACE2 liposomes are biocompatible and straightforward to fabricate at large scale with high repeatability, representing a more promising therapy for COVID-19. 38 Liposomes have been used as a model of plasma membranes to study cellular entry and membrane fusion between viruses and cells or cellular organelles. 43 While we did not observe membrane fusion between the rhACE2 liposomes and pseudotyped SARS-CoV-2, it may occur under conditions more relevant to intracellular fluid, such as the low pH in lysosomes, or when SARS-CoV-2 live virus is used instead. Moreover, antiviral therapeutics may be encapsulated into the liposomes for direct delivery to individual viruses. 44 Membrane fusion between liposomes and viruses could bring the encapsulated antiviral therapeutics into direct contact with viral genes for degradation in a highly specific manner.
The rhACE2 liposomes did not crosslink SARS-CoV-2 pseudovirus as expected. However, it is likely that the spikes are more firmly anchored in SARS-CoV-2 live viruses. In that case, the liposomes may crosslink the viruses into aggregates to prevent viral entry by sequestration and promote immune clearance due to increased size.
The liposomes potently inhibited viral entry of pseudotyped SARS-CoV-2 into host cells by extracting the spike proteins from the pseudovirus surface. Receptor-coated nanoscale liposomes represent a new strategy for rapid antiviral development in the early stages of a viral outbreak.
This strategy could be applied to target other viruses that require specific cell receptors for viral entry, since the receptor proteins on the liposome surface can be readily replaced. This broad-spectrum antiviral approach can provide engineered liposomes in a timely manner for testing in preclinical and clinical studies in an expanding pandemic. We would thus be able to delay spread, attenuate virus evolution, and narrow the window between emergence and prevention and intervention.
We consider the non-line-of-sight (NLOS) imaging of an object using the light reflected off a diffusive wall. The wall scatters incident light such that a lens is no longer useful to form an image. Instead, we exploit the 4D spatial coherence function to reconstruct a 2D projection of the obscured object. The approach is completely passive in the sense that no control over the light illuminating the object is assumed and is compatible with the partially coherent fields ubiquitous in both the indoor and outdoor environments. We formulate a multi-criteria convex optimization problem for reconstruction, which fuses the reflected field's intensity and spatial coherence information at different scales. Our formulation leverages established optics models of light propagation and scattering and exploits the sparsity common to many images in different bases. We also develop an algorithm based on the alternating direction method of multipliers to efficiently solve the convex program proposed. A means for analyzing the null space of the measurement matrices is provided as well as a means for weighting the contribution of individual measurements to the reconstruction. This paper holds promise to advance passive imaging in the challenging NLOS regimes in which the intensity does not necessarily retain distinguishable features and provides a framework for multi-modal information fusion for efficient scene reconstruction.
There may also be a shadow (i.e., a spatial variation in the intensity pattern) on the wall, whose edge resolution decreases as the field's spatial coherence decreases; contrast the highly coherent case in Fig. 1(b) with the less coherent case in Fig. 1(c). In addition, a second source floods the wall with light; see Fig. 1(d). While a lensed camera may still be able to image the shadow, the image quality will be degraded due to noise and quantization error.
Existing approaches to the passive imaging problem have relied mostly on intensity-only measurements; for example by assuming some known occlusions are also present, as in the "accidental" pinhole camera [7], or the "corner" camera [8]. Other related problems which use intensity or light-field measurements concern imaging through volumetric scattering in turbid media such as fog [9], [10] or water [11], [12], and their solutions require weak scattering. However, some recent use of the autocorrelation in intensity at different locations seems promising to work under more relaxed scattering assumptions [13]. Phase-space measurements have also been used for imaging [14] or for determining the three-dimensional location of point sources embedded in biological samples [15].
The imaging problem of concern here assumes surface scattering is stronger than volumetric scattering. One such instance was recently demonstrated for scattering from walls at grazing angles [16]. In ideal cases, this may allow images to be formed from the reflection with a normal lensed camera (recent work even suggests this phenomenon accounts for mirages previously attributed to air temperature differentials [17]).
Here, we consider the less ideal case, where a useful image cannot be formed using a regular camera, but information is still retained in the spatial coherence of the reflected light.
The spatial coherence W of an electric field E at two points r_1, r_2 is defined as an ensemble average over random field realizations, W(r_1, r_2) = ⟨E(r_1)E*(r_2)⟩, where * denotes complex conjugation and ⟨·⟩ is an ensemble average over field realizations (see [18]). As customarily used in optics, we work with W in rotated coordinates: the midpoint r = (r_1 + r_2)/2 and the displacement ρ = r_1 − r_2, yielding W(r, ρ) = ⟨E(r + ρ/2)E*(r − ρ/2)⟩. Note that the intensity of the field is I(r) = W(r, 0).
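As a minimal numerical illustration of this definition (with an artificial delta-correlated field standing in for a physical source), the coherence can be estimated by averaging over an ensemble of realizations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate W(r1, r2) = <E(r1) E*(r2)> for a toy 1D field whose samples are
# independent complex Gaussians (a delta-correlated field), by averaging
# over an ensemble of realizations.
n_points, n_real = 32, 20000
E = (rng.standard_normal((n_real, n_points))
     + 1j * rng.standard_normal((n_real, n_points)))

W = E.T @ E.conj() / n_real          # W[i, j] ~ <E(r_i) E*(r_j)>

# The intensity is the rho = 0 slice: I(r) = W(r, r).
intensity = np.real(np.diag(W))      # ~2 for unit-variance components
```

For this delta-correlated toy field, W is (up to estimation noise) diagonal, and its diagonal reproduces the intensity, matching I(r) = W(r, 0).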
In line-of-sight (LOS) imaging, coherence preserves information such that a 3D scene can be reconstructed [19]. Recently, the retention of information was noted experimentally in NLOS sensing [16]. Here, we propose an imaging method, which demonstrates the ability to reconstruct discernible 2D projections of obscured objects in NLOS settings, by leveraging the experimentally-verified physical models of [16].
Coherence has classically been measured using double slits [20] with modern experiments realizing the slits using digital micromirror devices [21]. Many other modern techniques have also emerged including use of shearing interferometers [22], microlens arrays [23] and phase-space tomography [24].
Our approach is physics-driven in the sense that we use established physics-based models from the theory of light propagation and scattering to describe the transformation between the source image and the measurements [25]. The proposed imaging method is based on a multi-modal data fusion. We formulate and study a convex optimization problem, and propose an algorithm for solving it based on the Alternating Direction Method of Multipliers (ADMM) [26]. The optimization problem incorporates regularization for sparsity, and reconstructs the image in a suitable transformed basis in which the source image is assumed to have a sparse representation.
In contrast with some existing fusion approaches, which merge multiple images in a spatial or wavelet domain [27]- [29], our method reconstructs a single image by fusing multiple measurement types at different spatial scales while exploiting their respective propagation models. In spirit, our approach to fusion relates to that of [30], where a convex optimization problem is devised to pansharpen medical images.
We provide a means of assessing the null space of the model, and a weighting scheme and decision framework by which individual samples of a measurement may be excluded.
The simulated results demonstrate the concept of NLOS imaging using spatial coherence. We further give examples of fusion, and show how the null space of the measurement transformations can be analyzed.
The paper is organized as follows. In Section II, we review the physical models for propagation and scattering. In Section III, we formulate the NLOS image reconstruction problem and describe the algorithm. The results of running the algorithm are presented in Section IV. In Section V, we discuss possible extensions to this work and how our work fits into a practical framework. The details of the optimization algorithm are described in Appendix A, and details of the physical model are given in Appendix B.
A. Notation
Vectors and matrices are denoted using bold-face lower-case and upper-case letters, respectively. Given a vector a, its ℓ_p-norm is denoted ‖a‖_p and a(i) is the i-th element. The diagonalization operator Diag(a) returns a matrix with the elements of a along the diagonal. The vectorization of an M × N matrix A is denoted vec{A}, with the result taking the form of an MN-element vector. The unit vector with a one in the i-th entry is denoted e_i. Matrices or vectors containing all ones or all zeros are denoted 1 and 0, respectively, where the dimensions will be clear from the context. The Hadamard product ∘ returns the element-wise product of its arguments. A weighted norm with weight vector v is defined as ‖a‖_v = ‖Diag(v) a‖_2. Fourier transforms are taken with respect to the angular frequencies ω_x and ω_y. The 2D Discrete Fourier Transform (DFT) of matrix A is expressed as F_1 A F_2, where F_1 and F_2 are the 1D DFTs along the columns and rows of A, respectively. The notation ∗ is used to indicate both the continuous and discrete forms of the two-dimensional convolution operator.
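For concreteness, the F_1 A F_2 notation can be checked numerically against a standard FFT routine; in this sketch the DFT matrices are built by transforming the columns of the identity:

```python
import numpy as np

# The 2D DFT written as F1 @ A @ F2, where F1 and F2 are the 1D DFT
# matrices acting on the columns and rows of A, matches np.fft.fft2.
M, N = 6, 8
A = np.random.default_rng(1).standard_normal((M, N))

F1 = np.fft.fft(np.eye(M), axis=0)   # M x M 1D DFT matrix
F2 = np.fft.fft(np.eye(N), axis=0)   # N x N 1D DFT matrix (symmetric)
dft2 = F1 @ A @ F2
```

Because the 1D DFT matrix is symmetric, right-multiplication by F_2 transforms the rows, so the product equals the usual row-column 2D FFT.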
II. PHYSICAL MODEL
In this section, we describe the physical model. More details regarding the derivations of the equations can be found in Appendix B. Additional details regarding the models, including experimental verification, can be found in [16].
We consider the wave model in which a light source emits a random field. It is assumed that light propagates along the longitudinal z direction according to the Fresnel model, where the normals to the wave front make small angles with the direction of propagation.
The approach works with targets in which the projection on the z-axis is much smaller compared to the (optical) distance d to the detector (a requirement which is met in many practical situations), thus reducing the problem to that of reconstructing a 2D image; see the illustration in Fig. 2(a).
In the Fresnel model, given a 2D intensity function I(r'), the coherence of the light after propagating a distance d can be calculated using the linear transformation

W_d(r, ρ) = (1/(λd)²) C(r, ρ) F{I}(kρ/d), (1)

where λ is the wavelength, k = 2π/λ is the wave number, the Fourier transform is 2D with regard to the x and y components of r', and

C(r, ρ) = e^{ik(r·ρ)/d} (2)

(see Appendix B for the derivation). The variable r' indicates spatial position in the object plane, whereas r indicates spatial position along the wall. Because ρ appears in the argument to the Fourier transform of (1), a natural way to measure the coherence function W_d(r, ρ) is along the ρ_x and ρ_y axes with r fixed, i.e., to measure a 2D slice of the 4D coherence function. We will refer to a set of measurements along this slice as a coherence sample; an example plot of such a coherence sample is shown in Fig. 2. (Caption, Fig. 2(e): set of incident coherences plotted on a 7 × 7 grid, each plot centered at the corresponding spatial point r, with a radius of 5.5 µm; the coherence measurements are shown in the style of light-field plots as found, for example, in [31] and [32]. Fig. 2(f): scattered coherences as in (d).)

For the interaction with the wall, the angular spread of photons can be assumed to be governed by a Gaussian function [15], [33]; the associated Gaussian kernel is denoted H(r') (3). The standard deviation of the spread along the x and y axes is parameterized by w = (w_x, w_y). The geometry of the scene is such that the angles of incidence and reflection are fairly close, which results in a specular reflection due to surface scattering. Due to the paraxial nature of the incident waves, coupled with the narrow spread of the specular reflection [34], we can use the approximation

W_S(r, ρ) ≈ S(ρ) W_d(r, ρ), (4)

where

S(ρ) = exp(−ρ_x²/(2w_x²) − ρ_y²/(2w_y²)), (5)

and the intensity of the scattered field is

I_S(r) = (H ∗ I)(r), (6)

where ∗ is the 2D convolution operator. Fig. 2(d) shows the result of wall scattering with parameters w = (1 µm, 6 µm). To achieve spatial diversity, a full reconstruction will typically require a collection of 2D coherence samples, each centered at a different r. An example collection of 49 samples is given in Fig. 2(e), showing the coherence incident to the wall, with the r falling onto a 7 × 7 grid. The corresponding scattered coherence functions are shown in Fig. 2(f). We remark that while the 2D intensity function constitutes a slice of the 4D coherence function, cameras used to measure intensity differ from devices used to measure coherence, and therefore the two are commonly considered different modalities.
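The propagation-plus-scattering pipeline described above can be sketched numerically. The snippet below is a schematic rather than the full model: it drops the phase factor C(r, ρ) and the 1/(λd)² scaling from (1), works on an arbitrary grid, and uses illustrative Gaussian widths for the scattering mask:

```python
import numpy as np

n = 64
I = np.zeros((n, n))
I[24:40, 24:40] = 1.0                     # toy source intensity

# Coherence sample versus rho: 2D Fourier transform of the intensity,
# shifted so rho = 0 sits at the center of the array.
W_rho = np.fft.fftshift(np.fft.fft2(I))
total_power = W_rho[n // 2, n // 2]       # at rho = 0, W reduces to sum(I)

# Wall scattering multiplies the sample by a Gaussian mask in rho; a
# wider mask (larger w) corresponds to weaker scattering.
u = np.arange(n) - n // 2
RX, RY = np.meshgrid(u, u, indexing="ij")
wx, wy = 3.0, 18.0                        # illustrative widths, grid units
S = np.exp(-RX**2 / (2 * wx**2) - RY**2 / (2 * wy**2))
W_scattered = S * W_rho
```

The mask attenuates large-ρ coherence values, which is the low-pass behavior observed in the reconstructions of Section IV.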
III. NON-LINE-OF-SIGHT IMAGE RECONSTRUCTION
In this section, we turn to the problem of reconstructing the opacity profile of the object. This 2D profile is represented in discretized form by matrix G, which has vectorized form g = vec {G}. Matrix G is formed by sampling the opacity profile on a uniform grid over the finite support of the profile. First, we consider reconstruction using intensity-only measurements in the presence of ambient light from secondary sources. Then, leveraging the physical model for spatial coherence introduced in Section II, we develop the reconstruction framework using coherence measurements. Finally, we define the complete problem in which we fuse information from both modalities and exploit the natural sparsity of the object's profile.
A. Intensity Measurements
The intensity pattern on the wall may be measured using a variety of readily available devices. For example, if intensity variations are strong enough, a simple Charge-Coupled Device (CCD) camera with a suitable lens may be used. At the other extreme, a device such as an Electron Multiplying CCD (EMCCD) can distinguish minute intensity variations, due to the camera's high single photon sensitivity.
We define the intensity measurement matrix Φ_I, which samples the scattered intensity function I_S(r) in (6) at the wall. Hence, in discretized form,

Φ_I vec{1 − G} = vec{H ∗ (1 − G)}, (7)

where H is the discretized Gaussian kernel H(r') defined in (3). Because G is an opacity profile, the intensity in the object plane takes the form 1 − G, where the 1 term represents the light incident on the object immediately prior to obstruction.
In the experiments, we implement (7) using a linear convolution, i.e., elements outside the boundaries of the domain of r' are set to zero. This operation is performed through the use of convolution matrices, such that the grids of r and r' may differ. If the grids are the same, we could also use the Fast Fourier Transform (FFT) to perform a fast circular convolution. Therefore, to recover an estimate of the object profile g from intensity measurements (see Section III-E for a discussion of the null space), we formulate the convex program

min_{g,α} ‖y_I − Φ_I(1 − g) − αa‖₂², (8)

where y_I is the measurement vector. This formulation includes a free coefficient α along with an associated vector a modeling the ambient light. Specifically, the vector a captures the spatial intensity distribution of the ambient light on the wall and the coefficient α represents its magnitude. Here, we set a = 1, i.e., the ambient light blankets the wall with constant intensity. While this problem may be successful if a clear shadow is discernible, two major factors limit its effectiveness. First, the shadow will be faint if there is significant ambient light present. Although the shadow can be measured with sensitive cameras, the Signal-to-Noise Ratio (SNR) falls as the amount of ambient light increases. Second, if the coherence of the light sources is low, the edges of the shadow will be indistinct due to diffraction, making the reconstruction ill-posed; this effect can be seen as a manifestation of the convolution in (6).
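The joint estimation of the object profile g and the ambient coefficient α amounts to stacking the ambient column next to the measurement operator. A minimal sketch with a random stand-in for the intensity operator (noiseless, unconstrained least squares rather than the full convex program):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 20
Phi = rng.standard_normal((m, n))         # stand-in for the intensity operator
g_true = (rng.random(n) > 0.5).astype(float)
alpha_true = 100.0
a = np.ones(m)                            # constant ambient profile

y = Phi @ (1.0 - g_true) + alpha_true * a # noiseless measurements

# Rearranged model: y - Phi@1 = -Phi g + alpha a; solve for [g; alpha].
A = np.column_stack([-Phi, a])
b = y - Phi @ np.ones(n)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
g_hat, alpha_hat = sol[:n], sol[n]
```

With noiseless data and a full-column-rank stacked operator, the least-squares solution recovers both the profile and the ambient level exactly; the physical operator of the paper is instead a blur, which is why the regularized program is needed.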
B. Coherence Measurements
To address the aforementioned limitations of the intensitybased approach, we develop a framework for reconstruction from coherence measurements next.
As described in the introduction, an increasing number of techniques have been developed for capturing coherence information. An example of practical measurements matching the requirements of our approach can be found in [16], which makes use of a Dual Phase Sagnac Interferometer (DuPSaI).
We define the coherence measurement matrix Φ_C^r, which samples the scattered coherence function along the ρ_x and ρ_y axes at a fixed r. Obtaining a discretized form of the function in (4), we can write

Φ_C^r vec{1 − G} = vec{S ∘ C_r ∘ (F_1 (H_r ∘ (1 − G)) F_2)}. (9)

Matrix S is the discretized form of the function S(ρ) defined in (5), which represents the scattering effects of the wall. Matrix C_r is the discretized form of the function C(r, ρ) defined in (2), which is one component of the free-space propagation operator. Both S and C_r are discretized along the ρ_x and ρ_y axes using the same set of points as Φ_C^r, with C_r using the same fixed r position as Φ_C^r. The other component of the free-space propagation operator is matrix H_r, which discretizes the function H defined in (3). Specifically, this matrix contains samples of H(r − r'), with r fixed, and r' falling on the same discrete grid as G.
Calculation using the measurement matrix (9) admits a tractable form, requiring only element-wise products and Fourier transforms, which may be implemented using the FFT.
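The structure of this operator can be sketched as a composition of element-wise windows, a 2D FFT, and element-wise masks; the arrays below are random stand-ins for the discretized H_r, C_r, and S, not the physical quantities:

```python
import numpy as np

n = 16
rng = np.random.default_rng(3)
x = np.linspace(-2, 2, n)
H_r = np.exp(-x[:, None] ** 2 - x[None, :] ** 2)       # Gaussian window
C_r = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))   # unit-modulus phase
S = rng.random((n, n))                                  # scattering mask

def coherence_op(G):
    """Forward map applied to the object-plane intensity 1 - G."""
    return S * C_r * np.fft.fft2(H_r * (1.0 - G))

def lin(G):
    """Linear part of the map, acting on G alone."""
    return S * C_r * np.fft.fft2(H_r * G)

G1, G2 = rng.random((n, n)), rng.random((n, n))
```

Only element-wise products and one FFT per application are required, so the operator scales well; it is affine in G, with coherence_op(G) = lin(1) − lin(G).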
The measurement vector corresponding to the coherence sample at Φ_C^r is labeled y_C^r. We define the set R containing the values of r at which the full collection of coherence samples is made. To perform the reconstruction using coherence measurements, we consider the least-squares formulation

min_g Σ_{r∈R} ‖y_C^r − Φ_C^r(1 − g)‖₂². (10)

Major factors influencing the quality of the coherence measurements are the geometry and characteristics of the wall, which determine the amount of scattering. Because these factors may vary depending on spatial position along the wall, the different sets of measurements y_C^r within the collection may vary in their quality, or some may be unusable. We will explore such a scenario in Section IV-B.
Given the geometry of the scene, the ambient light that reaches the detector will necessarily result from diffuse scattering (i.e., specularly reflected ambient light from secondary sources will not reach the detector due to unequal angles of incidence and reflection). Because there is a Fourier-transform relationship between scattered photon angle and coherence (see Appendix B for more details), the large-angle diffuse spread in the ambient light introduces a narrow peak in the coherence function at ρ = 0 [33], [35]. Recalling the relationship between intensity and coherence, I(r) = W(r, 0), we can see that the peak exactly coincides with the intensity measurements. Therefore, the ambient light tends to dominate the intensity measurements and obscure the shadow. On the other hand, this diffusely scattered ambient light has little effect on the coherence function away from ρ = 0, where the specular component of reflection (containing information about the object) dominates. For this reason, spatial coherence coordinates for which ‖ρ‖₂ < p are excluded. We remark that unlike (8), this exclusion obviates the need for an ambient term for the coherence measurements in the formulation of (10).
C. Fusion Framework
As mentioned in the previous sections, it is possible that one or another modality may be of a lower quality, and therefore it is advantageous to use both intensity and coherence modalities in the same reconstruction.
Additionally, the profile g is likely to admit a sparse representation x in a particular basis Ψ. Here, we use the two-dimensional Discrete Cosine Transform (DCT) as the sparsifying basis Ψ (in which it is well established that natural images possess a sparse representation [36]); however, another basis such as a wavelet basis could also be used. As such, the object profile can be expressed as g = Ψx. We then include ‖x‖₁ as a regularization term to promote sparsity in the reconstruction, where the ℓ₁-norm is a convex relaxation of the ℓ₀-norm [37].
To fuse information from both modalities and exploit the sparsity of the opacity profile in Ψ, we can readily formulate the convex program

min_{x,α} κ‖x‖₁ + ‖y_I − Φ_I(1 − Ψx) − αa‖₂² + µ Σ_{r∈R} ‖y_C^r − Φ_C^r(1 − Ψx)‖₂², (11)

where κ and µ are used to balance the objectives.
D. Algorithm
To solve (11), we propose an iterative algorithm based on the ADMM approach first introduced in [26]. Introducing a splitting variable z for the sparse representation and letting f(x, α) denote the data-fidelity terms of (11), the algorithm performs a dual ascent using the Augmented Lagrangian [38], which can be written as

L_β(x, α, z, y) = f(x, α) + κ‖z‖₁ + yᵀ(x − z) + (β/2)‖x − z‖₂²,

where y is the Lagrange multiplier. We solve the minimization using the following updates at each step k:

(x^{k+1}, α^{k+1}) = arg min_{x,α} L_β(x, α, z^k, y^k), (12)
z^{k+1} = arg min_z L_β(x^{k+1}, α^{k+1}, z, y^k), (13)
y^{k+1} = y^k + β(x^{k+1} − z^{k+1}), (14)

where the initial values x⁰, α⁰, z⁰, y⁰ are zero. The stopping criteria consist of thresholds placed on the residuals [26]. Specifically, the algorithm stops if the norm of the primal residual satisfies ‖x^k − z^k‖₂ < ε_pri and the norm of the dual residual satisfies ‖β(x^{k+1} − x^k)‖₂ < ε_dual. Here, ε_pri = 0.5 and ε_dual = 10⁻⁶.
Details regarding the calculation of the x and z update steps are given in Appendix A.
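The same x-/z-/dual-update pattern can be illustrated on a generic lasso-type problem min_x ‖Ax − y‖₂² + κ‖x‖₁ with the splitting x = z; this is a sketch of the update structure, not the exact subproblems of Appendix A:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 40, 20
A = rng.standard_normal((m, n))
y = A @ (rng.random(n) * (rng.random(n) > 0.7))  # sparse ground truth

kappa, beta = 0.1, 1.0
x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)                                  # scaled dual variable
x_solve = np.linalg.inv(2 * A.T @ A + beta * np.eye(n))

for _ in range(200):
    x = x_solve @ (2 * A.T @ y + beta * (z - u))            # quadratic x-update
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - kappa / beta, 0.0)  # soft-threshold
    u = u + x - z                                           # dual ascent step

primal_residual = np.linalg.norm(x - z)
```

The x-update is a linear solve, the z-update is the closed-form soft-thresholding induced by the ℓ₁ term, and the dual step accumulates the constraint violation; the primal residual ‖x − z‖₂ is the quantity thresholded in the stopping criterion above.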
E. Mapping of Null Space
Due to various factors in the propagation and scattering process, the measurement matrices Φ_I and Φ_C^r will typically possess a null space. We use the general notation Φ_i to refer to the i-th measurement matrix, which may take the form of Φ_I or Φ_C^r, depending on the enumeration order of the matrices. We can characterize the null space associated with measurement i as follows. The degree of coherence between the j-th element of the object profile g(j) and the measurement can be quantified by τ_i(j) = ‖Φ_i e_j‖₂. If τ_i(j) is close to zero, i.e., the SNR is very small, the element is considered to be in the null space of the measurement.
Similarly, we can look at the degree of coherence in the sparse domain using the analogous operator τ_i(j) = ‖Φ_i Ψ e_j‖₂. The null space map may be especially useful when an explicit model is not known, for example in data-driven approaches.
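In practice the map τ reduces to the column norms of the (possibly composed) measurement matrix, as in this small sketch with a hand-built stand-in Φ containing one dead column:

```python
import numpy as np

Phi = np.array([[1.0, 0.0, 2.0],
                [0.0, 0.0, 1.0]])   # column 1 is identically zero

tau = np.linalg.norm(Phi, axis=0)   # tau(j) = ||Phi e_j||_2, per pixel
invisible = tau < 1e-12             # pixels in the numerical null space
```

Pixels flagged this way receive no energy in the measurements, so any reconstruction of them is driven entirely by the regularizer.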
F. Sample Weighting
It may improve the results if we can exclude certain measurements from the reconstruction rather than give equal weight to all measurements in the samples. To this end, we can substitute a weighted norm ‖·‖_v in place of any of the Euclidean norms ‖·‖₂ in (11).
If the noise is known, the sample weight vector for the i-th measurement can be constructed using a decision metric in which j is the sample number, n_i is the noise level present in measurement i, and η is a calibration constant. This metric is similar to the Transform Point Spread Function found in [39]. For a given measurement sample, the metric finds other samples which are coherent with the same image pixels. A given sample is then included in the optimization if it has a higher SNR than the other measurements.
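With binary weights, the weighted norm simply drops the excluded samples from the residual, as in this sketch (residual and weight values are illustrative):

```python
import numpy as np

residual = np.array([0.1, 5.0, 0.2, 0.1])  # sample 1 is dominated by noise
v = np.array([1.0, 0.0, 1.0, 1.0])         # weight vector excluding it

weighted = np.linalg.norm(v * residual)    # ||r||_v with Diag(v) weighting
unweighted = np.linalg.norm(residual)
```

Setting a weight to zero removes the corresponding sample's contribution to the data-fidelity term, while intermediate weights de-emphasize rather than discard it.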
G. Extensions
Here, we comment on possible extensions to the framework. We are not constrained to problems in which the object is blocking light, but can also work in reflective scenarios. This can be accomplished by redefining G as the reflectivity rather than opacity of the object, and making the simple substitution 1 − G → G in (7) and (9).
The problem (11) includes a single weight µ associated with the measurements. We may instead associate a weight coefficient with each measurement matrix in (11). These could be adjusted along a continuum to control the impact of particular samples. If the magnitudes of the measurements are significantly different, these weights can maintain balance, e.g., by setting µ_i = 1/‖y_i‖₂². If there is Gaussian noise in the measurements with known magnitude, the Bayesian Compressive Sensing methodology can be used [40].
Another possible extension to the optimization problem is to incorporate an auto-scaling coefficient, e.g., to handle cases when the magnitude of measurements from different modalities are not calibrated to the same scale. To this end, we can add a scaling coefficient B to some of the measurements by making the substitution y i → By i , and updating B in step (12). With this modification, the problem (11) remains convex.
IV. NUMERICAL RESULTS
We now present examples demonstrating the proposed method laid out in Section III and making use of optimization problem (11). In all examples, the opacity profile of the actual object is as shown in Fig. 3(a) with corresponding DCT in (b). For simulated measurements, the source intensity function I(r) used in the forward model is as shown in the diagram of Fig. 2(a) (left side), with the function extended by ones to x, y ∈ (±6 m), thus representing an opaque star object surrounded by a plane of light. The extension of the function is required to properly model the significant spreading of the light after being emitted from the physical light sources and before being obstructed by the object.
Additive-white-Gaussian-noise (AWGN) with standard deviation (SD) n I is added to the intensity samples, and complex AWGN with SD n C is added to the coherence samples.
The following parameters are used in all results: λ = 525 nm, d = 6 m, p = 1 µm, β = 5 × 10 −3 , and µ = 1. The intensity image of the wall has resolution 101 × 101 pixels with domain r x , r y ∈ [±2 m]. Unless otherwise specified, the coherence measurements have resolution 51 × 51 pixels (with the domain of ρ varying depending on the example). A constant value of 100 is added to all intensity measurements to model ambient light; this offset will be absorbed by the coefficient α in (11).
A. Non-line-of-sight Imaging
We first demonstrate the potential of spatial coherence measurements to enable passive NLOS imaging when no shadow information is available. Two reconstructions are included, each with wall scattering parameters set at opposite extremes.
In this example, coherence measurements are made on the same spatial grid as shown in Fig. 2(f). The simulation parameters are σ = 5 µm, n C = 10 −3 , κ = 0, and the coherence measurements are over domain ρ x , ρ y ∈ [±15 µm]. Fig. 4(a) and (b) show the reconstructed image and DCT for a wall with relatively little scattering, where the scattering parameters are set to w = (3 µm, 18 µm). For comparison purposes, pixels in the reconstructed images with value > 1 are set to one and values < 0 are set to zero, a practice which will be used for the remainder of this section. Fig. 4(c) and (d) show the results for a wall that introduces more scattering with parameters w = (0.25 µm, 1.5 µm). The DCTs clearly show that the scattering of the wall acts as a lowpass filter, with increased scattering leading to more filtering.
B. Fusion of Intensity and Coherence Measurements
As demonstrated in the next example, by fusing intensity and coherence measurements, a better reconstruction can be made as compared to using either modality alone.
First, Fig. 5(a) shows an intensity sample. Note that the color range of the intensity plot has been constrained to a narrow range to clearly show the shadow. The light is not coherent enough to reveal the edges of the star. Fig. 5(b) shows the reconstruction results when only this sample is used. Next, the coherence samples shown in Fig. 5(c) are also included in the reconstruction to augment the intensity measurements. Fig. 5(d) shows the improved results. In the top half of the reconstruction, the coherence measurements contain more information about the high-frequency components of the object profile and therefore dominate the reconstruction, providing sharper edges. However, because these coherence measurements only cover the top half of the wall, the intensity contains more information about the bottom half of the object, albeit only at lower frequencies, thus resulting in less definition.
We will now provide some insight into the improvements which have been made based on Section III-E.
First, the spatial limitation inherent in coherence measurements is demonstrated. This limitation comes from the multiplication by the Gaussian term H(r) in (1). In the following discussion, we denote the index of the intensity sample as I, and the index set of coherence samples as C. In Fig. 6(a), we show τ_i for a single coherence sample located at r = (0, 0.8 m). Fig. 6(b) shows max_{i∈C} τ_i, which returns, for each pixel, the strongest correlation of any coherence sample with that pixel. This is the combined effect of all coherence samples, clearly demonstrating that more samples allow more spatial coverage. In contrast, Fig. 6(c) shows max_{i∈(I∪C)} τ_i, demonstrating that when all coherence measurements are used together with intensity measurements, virtually the entire object profile is covered.
We can perform a similar analysis in the sparse DCT domain. Fig. 6(d) shows max i∈C τ i , which is the combined effect of the coherence samples in the sparse basis, and Fig. 6(e) shows τ I , which is the effect of the intensity measurements in the sparse basis. It can be seen that the coherence measurements have a stronger correlation with the high frequency components, explaining why the top half of Fig. 5(b) has improved edges over the bottom half. The low pass filtering in the intensity measurements comes from the convolution in (6) due to diffraction, whereas the filtering in the coherence measurements comes from wall scattering.
C. Improved Fusion using Sample Weighting
In some cases, simply adding new measurements is insufficient: because of differing noise levels, certain parts of the reconstruction will improve while other parts degrade. In such cases, being able to exclude individual measurements as described in Section III-F may resolve the issue. We now provide such an example.
Coherence measurements and the associated reconstruction are shown in Fig. 7(a) and (b) respectively. In these panels we do not use regularization, since the measurements lack noise.
If the intensity sample shown in Fig. 7(c) is also used in addition to the coherence measurements, the results in Fig. 7(d) are obtained. Here, sparsity regularization is used due to noise in the intensity measurements. Although the bottom half of the object is now visible in the reconstruction, the top half has degraded due to the intensity noise.
To resolve this problem, we calculate sample weights for the intensity measurement using (15). The results are shown in Fig. 7(e) with black representing zeros (excluded intensity samples) and white representing ones (included samples).
The result of the reconstruction using these weights is shown in Fig. 7(f), where the top half can be seen to improve. Note that because we are regularizing in the frequency domain, noise which is spatially isolated to a particular section of the image will be coupled to other noise-free regions, and thus the top half is not as clean as it could be. Using a wavelet basis may eliminate this issue.
D. Sparsity
Fig. 3(b) confirms that the DCT of this object profile is approximately sparse (disregarding the small high-frequency components). At the same time, noise in the measurements tends to introduce relatively large high-frequency components into the reconstruction. Therefore, one use for the sparsity regularizer in (11) is to serve as a de-noising tool.
In Fig. 8(a) we show the result of a reconstruction using noisy coherence measurements where no regularization is used, i.e. κ = 0. As shown in Fig. 8(b), the noise appears mostly in the high frequency components of the DCT. Fig. 8(c) and (d) show the improved results when sparsity is enforced using κ = 5 × 10 −4 . The coherence measurements are at the same spatial locations as shown in Fig. 2(f). The simulation parameters are σ = 5 µm, n C = 10 −2 , w = (1 µm, 6 µm) and ρ x , ρ y ∈ [±15 µm] with a resolution of 25 × 25.
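The de-noising role of sparsity regularization can be illustrated in one dimension (a simplified stand-in for (11): direct soft-thresholding of DCT coefficients rather than the full ADMM solve; signal and threshold are illustrative):

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
# A signal that is approximately sparse in the DCT domain.
clean = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n)
noisy = clean + 0.3 * rng.standard_normal(n)

# Soft-threshold the DCT coefficients: the small noise coefficients are
# killed, the few large signal coefficients survive (slightly shrunk).
c = dct(noisy, norm="ortho")
tau = 1.0
c_shrunk = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)
denoised = idct(c_shrunk, norm="ortho")

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_denoised < mse_noisy)   # -> True
```

The noise spreads thinly over all DCT coefficients while the signal concentrates in a few, which is exactly why the κ term in (11) suppresses high-frequency noise in Fig. 8(c)-(d).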
In Table I, we repeat this experiment using the same parameters, except varying the noise levels and κ values. Ten trials are performed at each setting, and the average and SD of the resulting Mean Square Error (MSE) are shown.
Likewise, Table II shows the results using only intensity measurements (and no coherence measurements). Here, the coherence level used for the forward model is σ = 2.5 µm (to reduce the distinctness of the shadow).
For each noise level (column), the minimum error is bolded. We can see in both tables that a larger noise level requires a larger value of κ to achieve minimal MSE. The errors in the bottom row are roughly equal for all noise levels: beyond a certain threshold of κ, the estimates only contain low frequency components and are nearly identical.
V. DISCUSSION
Here, we considered the problem of passive NLOS imaging. This theoretical study, based on reliable models, aimed to provide a framework for solving the inverse problem, fusing multi-modal measurements, and understanding the measurement operators. The reconstruction algorithm can leverage intensity (i.e., shadow) information when available. However, during the propagation process, intensity becomes blurred. As an alternative, we can use samples of spatial coherence, which retain information during the propagation process. However, scattering may significantly attenuate this surviving information. By fusing the two modalities, all information available in both sets of measurements may be captured. For measurements with a large noise level, we presented a decision algorithm by which they may be excluded.
In our work, we assume the optical distance to be known. In [16], a technique is provided for determining the optical distance using the phase of the measurements at different spatial positions along the wall, information which is readily available in the measurements we use here. This estimation could be performed as a preprocessing step, prior to running our algorithm. The estimation of depth in the presence of scatterers has also been studied previously [11], [15], and those results may help here as well.
In our problem, we reconstruct a planar object profile. An extension of this work would be to consider three-dimensional objects, for example as was done in [41] and [42].
We conclude by noting that the measurement matrices and optimization problem, as well as the sample weighting and null space characterization, are general in nature. While only two modalities were presented here, other modalities could easily be incorporated as well.
APPENDIX A OPTIMIZATION ALGORITHM DETAILS
We define U_i := Φ_i Ψ, and let V_i = diag(v_i) be the weight matrix associated with the weighted norms (if sample weighting is not used, then V_i is the identity matrix).
The minimization step (13) takes the analytic form [26] z^{k+1} = S_{κ/β}(x^{k+1} + y^k/β), where the component-wise shrinkage (soft-thresholding) operator is S_t(a) = sign(a) max(|a| − t, 0). For simplicity, in the following equations we use a single summation over all samples, rather than separating the intensity sample from the coherence samples as was done in (11). Additionally, the weight coefficient has been indexed and moved inside the summation. For coherence samples, i.e., where i ∈ C, the ambient vector is set to a_i = 0.
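The component-wise shrinkage operator referenced here is the standard proximal operator of the ℓ1 norm used in ADMM treatments of sparse problems [26]; a minimal sketch:

```python
import numpy as np

def shrink(a, t):
    """Component-wise soft-thresholding S_t(a) = sign(a) * max(|a| - t, 0),
    the proximal operator of t * ||.||_1 used in the z-update."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

# Components with magnitude below the threshold collapse to zero;
# the rest move toward zero by exactly t.
out = shrink(np.array([-2.0, -0.1, 0.0, 0.4, 3.0]), 0.5)
print(out)
```

Applied to the DCT coefficients, this is what removes the small, noise-dominated high-frequency components while keeping the few large ones.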
We solve step (12) using a gradient descent algorithm; the gradients of L_β with respect to x and α are available in closed form. The initial conditions for the gradient descent at step k + 1 are the values calculated at the previous step, i.e., x^k and α^k. The j-th step of the gradient descent inner loop is chosen to minimize the quadratic interpolation at the points x^j − q(∇_x L_β) and α^j − q(∇_α L_β), where q ∈ {0.1, 0.5, 1}. Let f_j = L_β(x^j, α^j, z^k, y^k). The descent algorithm stops when |f_{j+1} − f_j|/f_j < ε_grad.
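The inner line search can be sketched as follows (a hypothetical reading of the quadratic-interpolation rule above, demonstrated on a plain least-squares objective rather than L_β; all names are ours):

```python
import numpy as np

def quad_step(f, x, g, qs=(0.1, 0.5, 1.0)):
    """One descent step: evaluate f along x - q*g at the candidate step
    sizes, fit a parabola in q, and jump to its minimizer; fall back to
    the largest candidate if the fit is not convex."""
    vals = [f(x - q * g) for q in qs]
    a, b, _ = np.polyfit(qs, vals, 2)
    q = -b / (2 * a) if a > 0 else max(qs)
    return x - q * g

# Demo on f(x) = ||A x - y||^2, whose gradient is 2 A^T (A x - y);
# for a quadratic, the 3-point parabola fit gives the exact line minimum.
rng = np.random.default_rng(3)
A, y = rng.standard_normal((6, 3)), rng.standard_normal(6)
f = lambda x: float(np.sum((A @ x - y) ** 2))
x = np.zeros(3)
for _ in range(300):
    x = quad_step(f, x, 2 * A.T @ (A @ x - y))
x_star, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x, x_star, atol=1e-6))
```

On the convex L_β the same three-point fit gives an inexpensive, derivative-free choice of step size at each inner iteration.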
For the x-update, we use the early termination technique described in [26, §4.3.2]. This is accomplished by splitting the ADMM algorithm into two parts: first the algorithm is run with ε_pri = 1, ε_dual = 10^−4, ε_grad = 10^−3. Then, the thresholds are set to the final values ε_pri = 0.5, ε_dual = 10^−6, ε_grad = 10^−8.
While we used gradient descent for its simplicity and robustness, a possible enhancement would be to use an optimization algorithm with faster convergence.
APPENDIX B COHERENCE DERIVATIONS
The quasi-homogeneous approximation is W(r, ρ) = I(r) exp(−‖ρ‖²/(2σ²)), where σ is termed the coherence width. In this approximation, the function is separable with regard to the "intensity" and "coherence" components [43], [44]. Under the Fresnel approximation, the impulse response function for the electric field in free space is h(r) = (e^{jkd}/(jλd)) exp(jk‖r‖²/(2d)), with k = 2π/λ. Then, the propagation of the coherence function is given by convolving W with h in one spatial argument and with h* in the other.
The Abelian Semigroup Expansion Method for Lie Algebras is briefly explained. Given a Lie Algebra and a discrete abelian semigroup, the method allows us to directly build new Lie Algebras with their corresponding non-trivial invariant tensors. The Method is especially interesting in the context of M-Theory, because it allows us to construct M-Algebra Invariant Chern-Simons/Transgression Lagrangians in d = 11.
Introduction
In the context of M-Theory, there is an interesting interplay between different supersymmetries. As a matter of fact, the M-Algebra and the osp(32|1) algebra are related through the Maurer-Cartan forms power-series expansion procedure (see Refs. [1,2]). This procedure is formulated in terms of the Maurer-Cartan forms of the Lie algebra. Because the invariant tensor is written using the generators of the algebra, finding an explicit expression for the M-Algebra invariant tensor (and therefore a Chern-Simons Lagrangian) becomes non-trivial.
This problem has been treated through several approaches in the past; for example, in Refs. [1,2], the relationship between expansion and Chern-Simons forms has been treated through free-differential algebras. In Refs. [5,6] the problem has been treated by applying the Noether Method to Chern-Simons forms; other approaches can be found in Refs. [4,3].
All these approaches have focused directly on the construction of the Chern-Simons form for the expanded algebra. Here we face the problem in a different way: we show a new general method for Lie algebra manipulation in terms of the generators, the S-Expansion, which automatically provides us with expressions for a non-trivial invariant tensor. From it, the construction of a Chern-Simons/Transgression form for the algebra becomes straightforward. The S-Expansion method requires as input an abelian semigroup S and a Lie algebra g, and gives as output a new, and in general larger, symmetry G. More details on the S-Expansion method can be found in Refs. [7,9,10]; for the construction of a gauge theory for the M-Algebra in d = 11 using this method, see Refs. [8,9,10].
Making Algebras Smaller: Reduction
Let g be a Lie algebra of the form g = V_0 ⊕ V_1, with {T_{a_0}} being the generators of V_0 and {T_{a_1}} those of V_1. When the condition [V_0, V_1] ⊂ V_1 holds, it is straightforward to show that the structure constants C^{c_0}_{a_0 b_0} satisfy the Jacobi identity by themselves, and therefore also correspond to a Lie algebra. This algebra, with structure constants C^{c_0}_{a_0 b_0}, will be called the reduced algebra of g, and denoted by |V_0|.
It is important to notice that in general |V_0| does not correspond to a subalgebra. In some sense, it could be regarded as an "ideal division" or "inverse extension", but note that V_1 in general does not need to be an ideal.
Making Algebras Bigger: S-Expansion
In order to construct larger Lie algebras, the key ingredients in the present approach are a semigroup S and a Lie algebra g. A semigroup S is a set provided with a closed, associative product. From now on, the conditions of abelianity and finiteness will also be imposed. Provided with an arbitrary Lie algebra g, it is possible to prove (see Refs. [7,9,10]) that the product G = S × g corresponds to the Lie algebra given by [T_{(a,α)}, T_{(b,β)}] = K^γ_{αβ} C^c_{ab} T_{(c,γ)}, where T_{(a,α)} := λ_α T_a, with λ_α the semigroup elements, and where the 2-selector K^γ_{αβ} encodes the semigroup product through λ_α λ_β = K^γ_{αβ} λ_γ.
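The construction can be checked numerically on a toy example (our own illustration, assuming the standard S-expansion bracket [T_{(a,α)}, T_{(b,β)}] = K^γ_{αβ} C^c_{ab} T_{(c,γ)} from Refs. [7,9,10]): expanding su(2) by the semigroup S_E^(1) and verifying that the resulting structure constants satisfy the Jacobi identity:

```python
import numpy as np
from itertools import product

# su(2): [T_a, T_b] = C^c_{ab} T_c with C^c_{ab} = eps_{abc} (Levi-Civita).
C = np.zeros((3, 3, 3))
for a, b, c in product(range(3), repeat=3):
    C[c, a, b] = (a - b) * (b - c) * (c - a) / 2.0

# Abelian semigroup S_E^(1) = {l0, l1, l2}: l_a * l_b = l_{min(a+b, 2)},
# encoded by the 2-selector K (l_a * l_b = K^g_{ab} l_g).
N = 3
K = np.zeros((N, N, N))
for a, b in product(range(N), repeat=2):
    K[min(a + b, N - 1), a, b] = 1.0

# Expanded structure constants C^{(c,g)}_{(a,x)(b,y)} = K^g_{xy} C^c_{ab},
# flattened so the pair (c, g) maps to index c*N + g.
CE = np.einsum("cab,gxy->cgaxby", C, K).reshape(3 * N, 3 * N, 3 * N)

# Jacobi identity: C^d_{ab} C^e_{dc} + cyclic permutations of (a,b,c) = 0.
t1 = np.einsum("dab,edc->eabc", CE, CE)
t3 = np.transpose(t1, (0, 2, 3, 1))   # (a,b,c) -> (c,a,b)
t2 = np.transpose(t3, (0, 2, 3, 1))   # (a,b,c) -> (b,c,a)
print(np.allclose(t1 + t2 + t3, 0))   # -> True: S x g closes as a Lie algebra
```

Abelianity and associativity of S are exactly what make the selector factor drop out of the cyclic sum, so the Jacobi identity of g is inherited by G.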
Resonant Subalgebras
In order to systematically extract subalgebras from G = S × g, it is necessary to codify the subspace structure of the original algebra g. To this end, let us consider the Lie algebra g = ⊕_{p∈I} V_p, where I is a set of indices. Let i be the mapping i : I × I → 2^I such that the subspace structure of g can be written as [V_p, V_q] ⊂ ⊕_{r∈i(p,q)} V_r. In this way, the mapping i codifies all the information on the subspace structure of g. It is possible to prove (see Refs. [7,9,10]) that when a subset decomposition S = ∪_{p∈I} S_p can be found such that the resonance condition S_p · S_q ⊂ ∩_{r∈i(p,q)} S_r is fulfilled, then G_R = ⊕_{p∈I} S_p × V_p is a subalgebra of G = S × g, called the resonant subalgebra of G.
Resonant Reduction
The systematic codification of the subspace structure of g through the mapping i : I × I → 2^I allows us to go further, defining also reduced algebras from the resonant subalgebra G_R. Let G_R = ⊕_{p∈I} S_p × V_p be a resonant subalgebra, and let S_p = Ŝ_p ∪ Š_p be a partition of the subsets S_p, Ŝ_p ∩ Š_p = ∅, such that the condition Š_p · Ŝ_q ⊂ ∩_{r∈i(p,q)} Ŝ_r is fulfilled. Then, it is possible to show (see Refs. [7,9,10]) that [Ǧ_R, Ĝ_R] ⊂ Ĝ_R, where Ǧ_R = ⊕_{p∈I} Š_p × V_p and Ĝ_R = ⊕_{p∈I} Ŝ_p × V_p. Therefore, Ǧ_R corresponds to a reduction of the resonant subalgebra G_R. Let us consider a semigroup S provided with an element 0_S such that for every λ_α ∈ S, 0_S λ_α = 0_S, and let S = ∪_{p∈I} S_p be a subset decomposition satisfying eq. (5) and such that each S_p includes the element 0_S. Then, Ŝ_p = {0_S} and Š_p = S_p − {0_S} satisfy eq. (6), and the associated reduced algebra corresponds to imposing the condition 0_S T_A = 0 on G_R; we will call this particular case the 0_S-reduced algebra.
The present approach provides us with non-trivial invariant tensors for S-expanded algebras and, in particular, for 0_S-reduced ones. It is possible to prove (see Refs. [7,9,10]) that for a 0_S-reduced algebra the invariant tensor reads ⟨T_{(a_{p_1}, i_{p_1})} ⋯ T_{(a_{p_n}, i_{p_n})}⟩ = α_j K^j_{i_{p_1} ⋯ i_{p_n}} ⟨T_{a_{p_1}} ⋯ T_{a_{p_n}}⟩, where T_{a_p} ∈ V_p, ⟨T_{a_{p_1}} ⋯ T_{a_{p_n}}⟩ corresponds to the invariant tensor of g, the indices i_p are such that λ_{i_p} ∈ Š_p, the index j is such that λ_j ≠ 0_S, K^j_{i_{p_1} ⋯ i_{p_n}} corresponds to the n-selector associated to S, and the α_j are arbitrary constants.
The chosen subset decomposition satisfies the resonant condition eq. (5), and therefore G_R = ⊕_{p=0}^{2} S_p × V_p corresponds to a resonant subalgebra of S_E^(2) × osp(32|1) [see Fig. 1 (a) and (b)]. On the other hand, let us notice that in our case, λ_3 = 0_S. Therefore, it is possible to apply the reduction procedure, choosing Ŝ_p = {0_S} and Š_p = S_p − {0_S}, or equivalently, applying the condition 0_S T_A = 0 on G_R. As a result, we obtain the M-Algebra [see Fig. 1 (c)]. In the present approach, an invariant tensor for the M-Algebra is given by the expression from eq. (8), ⟨T_{(a_{p_1}, i_{p_1})} ⋯ T_{(a_{p_n}, i_{p_n})}⟩ = α_j δ^j_{i_{p_1}+⋯+i_{p_n}} ⟨T_{a_{p_1}} ⋯ T_{a_{p_n}}⟩, where δ is the Kronecker delta and the range of the indices is given by i_0 = 0, 2, i_1 = 1, i_2 = 2 and j = 0, 1, 2. The construction of a Chern-Simons/Transgression theory for the M-Algebra using this approach has been considered in Refs. [8,9,10].
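The collapse of the n-selector to a Kronecker delta can be checked directly (a small sketch of the S_E^(2) multiplication law λ_α λ_β = λ_{min(α+β, 3)}, with λ_3 = 0_S; the sample index tuples are our own):

```python
from functools import reduce

# S_E^(2) = {l0, l1, l2, l3} with l3 = 0_S and l_a * l_b = l_{min(a+b, 3)}.
def prod(a, b):
    return min(a + b, 3)

# The n-selector K^j_{i1...in} equals 1 iff l_{i1} * ... * l_{in} = l_j.
# For this semigroup it collapses to a delta at min(i1 + ... + in, 3),
# which is the delta appearing in the M-Algebra invariant tensor above.
checks = {idx: reduce(prod, idx) for idx in [(0, 1), (1, 1), (2, 1), (0, 2, 1)]}
print(checks)
print(all(j == min(sum(idx), 3) for idx, j in checks.items()))   # -> True
```

Since the 0_S-reduction discards the j = 3 component, only index combinations with i_{p_1} + ⋯ + i_{p_n} ≤ 2 contribute non-trivially to the invariant tensor.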
Conclusions
The procedure sketched here is completely general and becomes a very practical 'tool' for constructing algebras with some special behaviour. Given an algebra, it is possible to construct bigger symmetries for different choices of semigroup, applying the resonant subalgebra and reduction theorems. The procedure of Maurer-Cartan forms power-series expansion and the İnönü-Wigner contraction can be reobtained from a particular choice of semigroup (see Refs. [7,9,10]). In the case of the M-Algebra, it is possible to observe that it belongs to a family of symmetries with similar behaviour which arise from osp(32|1) expansions; examples and a deeper analysis are provided in Refs. [7,8,9,10]. On the other hand, the procedure not only gives us a new symmetry, but also a non-trivial invariant tensor, in general different from the supertrace. For the case of the M-Algebra, this is a very important feature, since the supertrace provides us only with a trivial (Lorentz-valued only) invariant tensor, which is completely useless for constructing a supersymmetric Chern-Simons theory. In the present context, the construction of the theory is straightforward using the invariant tensor from eqs. (8) and (9); see Refs. [8,9,10].
Background Functional Electrical Stimulation (FES) can electrically activate paretic muscles to assist movement for post-stroke neurorehabilitation. Here, sensory-motor integration may be facilitated by triggering FES with residual electromyographic (EMG) activity. However, muscle activity following stroke often suffers from delays in initiation and termination which may be alleviated with an adjuvant treatment at the central nervous system (CNS) level with transcranial direct current stimulation (tDCS) thereby facilitating re-learning and retaining of normative muscle activation patterns. Methods This study on 12 healthy volunteers was conducted to investigate the effects of anodal tDCS of the primary motor cortex (M1) and cerebellum on latencies during isometric contraction of tibialis anterior (TA) muscle for myoelectric visual pursuit with quick initiation/termination of muscle activation i.e. 'ballistic EMG control’ as well as modulation of EMG for 'proportional EMG control’. Results The normalized delay in initiation and termination of muscle activity during post-intervention 'ballistic EMG control’ trials showed a significant main effect of the anodal tDCS target: cerebellar, M1, sham (F(2) = 2.33, p < 0.1), and interaction effect between tDCS target and step-response type: initiation/termination of muscle activation (F(2) = 62.75, p < 0.001), but no significant effect for the step-response type (F(1) = 0.03, p = 0.87). The post-intervention population marginal means during 'ballistic EMG control’ showed two important findings at 95% confidence interval (critical values from Scheffe’s S procedure): 1. Offline cerebellar anodal tDCS increased the delay in initiation of TA contraction while M1 anodal tDCS decreased the same when compared to sham tDCS, 2. Offline M1 anodal tDCS increased the delay in termination of TA contraction when compared to cerebellar anodal tDCS or sham tDCS. 
Moreover, online cerebellar anodal tDCS decreased the learning rate during 'proportional EMG control’ when compared to M1 anodal and sham tDCS. Conclusions The preliminary results from healthy subjects showed specific, and at least partially antagonistic effects, of M1 and cerebellar anodal tDCS on motor performance during myoelectric control. These results are encouraging, but further studies are necessary to better define how tDCS over particular regions of the cerebellum may facilitate learning of myoelectric control for brain machine interfaces.
Background
Functional electrical stimulation (FES) can electrically activate a set of muscles selected to address individual movement deficits with a pre-programmed pattern of electrical stimulation [1,2]. Users normally employ a switch to manually trigger each pre-programmed stimulation pattern, but triggering and/or modulation of the electrical stimulation using residual electromyogram (EMG) from the paretic muscle - which is an alternative option to control FES - may encourage sensory-motor integration, where the residual volitional effort is reinforced with FES-assisted functional movement [3], thus fostering re-learning of self-initiated movements. Unfortunately, the muscle activity in hemiparetic limbs often suffers from a lack of coordination and delays in initiation/termination [4]. These deficits, which likely are controlled by the central nervous system (CNS), might be alleviated with an appropriate adjuvant treatment that improves CNS function. One possibly suited adjuvant treatment at the CNS level to facilitate learning of myoelectric control for brain machine interfaces is transcranial direct current stimulation (tDCS), which induces cortical excitability changes [5][6][7][8], promotes neuroplasticity [9], and has been shown to improve motor learning in healthy humans [10,11], as well as in stroke survivors [12,13]. Therefore, tDCS in combination with rehabilitative therapy has been suggested for stroke rehabilitation [14][15][16]. However, tDCS-facilitated motor learning in lower limbs has not been explored systematically. Its effects on initiation and termination of muscle activations need further investigation to determine an appropriate adjuvant treatment with tDCS that may help in facilitating myoelectric control of FES. Tanaka and colleagues [17] found that anodal tDCS of the primary motor cortex representation of the tibialis anterior (TA) muscle (M1) had no significant effects on reaction time, but transiently enhanced maximal leg pinch force.
Also, Madhavan and colleagues [18] found that M1 anodal tDCS of the primary motor representation of TA muscle applied to the lesioned motor cortex of moderate to well recovered stroke patients enhanced voluntary control of the paretic ankle. However, Galea and colleagues [19,20] did not observe any changes of reaction time with either M1 or cerebellar anodal tDCS.
In order to understand and further investigate the effects of tDCS on EMG latencies, we followed the general feedback-error-learning model [21], where both the M1 and the cerebellum are presumed to mediate generation of force profiles during manual tasks [22,23], as illustrated in Figure 1. The model incorporates three basic elements: 1. an inverse model that captures the feedforward part, 2. a feedback controller that captures the feedback part, and 3. a learning rule that adapts the inverse model based on motor command errors. The inverse (feedforward) model is primarily associated with the cerebellum and the feedback controller is primarily associated with premotor/motor cortices [23]. In this study, we investigated volitional control of EMG during isometric conditions, which reflects muscle force quite well [24]. Specifically, we investigated the impact of anodal tDCS of M1 and cerebellum on two commonly used myoelectric control paradigms for FES control [25]: initiation/termination of muscle activation, i.e., 'ballistic EMG control' for switching FES on-off with a threshold-based classifier [26], and modulation of EMG for 'proportional EMG control' of FES [27]. The myoelectric visual biofeedback was presented with proportional system dynamics, where the subjects had to modulate the EMG activity (here, EMG is the system input) from one level to another in a finite time. In this randomized sham-controlled study, we specifically explored two cases: 1. the effects of offline anodal tDCS of M1 and cerebellum on delays in initiation (step-up response) and termination (step-down response) of muscle activity to a visual on/off cue with maximal contraction of the isometric TA muscle during 'ballistic EMG control', 2. the effects of online anodal tDCS of M1 and cerebellum on learning visual pursuit while following a sinusoidal target with EMG from TA during 'proportional EMG control'.
Subjects
Twelve healthy right leg dominant male volunteers (age: 24-36 years) provided informed consent for this study. All the experiments were approved by the local ethics committee of the University Medicine Goettingen and conducted in accordance with the Declaration of Helsinki. Since most people display a dominance of kicking ability on one side, we designated that side as the dominant side. Determination of leg dominance is relevant because it is thought to be associated with cortical movement representations, and might thus be a source of variability of lower limb motor function. Participants had no known neurological or psychiatric history, nor any contraindications to tDCS. One subject out of the total 12 subjects did not participate in Experiment 1 due to personal reasons.

Figure 2 shows the electrode montages for anodal tDCS (1 mA direct current for 15 min per session) via 2 saline-soaked 5 cm × 7 cm sponge electrodes with a DC-stimulator (NeuroConn, Germany). The stimulating anode was placed, 1) 1.5 cm left lateral and 2 cm posterior to Cz (10-20 EEG system) for targeting the primary motor cortex (M1) representation of the right leg TA muscle [28], 2) 3 cm left and lateral to the Inion (10-20 EEG system) for targeting the left cerebellar hemisphere [19]. The cathodal return electrode was placed on the forehead above the right supraorbital ridge. During sham stimulation, the current was ramped up and then down to zero in 10 sec to provide blinding effects.
Data collection and analysis
The experimental setup for myoelectric control with visual feedback is shown in Figure 3. Surface EMG was collected from the TA muscle, amplified and low-pass filtered (anti-aliasing, frequency cutoff = 1000 Hz) before being sampled at 2400 Hz by a 16-bit data acquisition system (NI USB-6215, National Instruments, USA) in a PC. Data-processing and graphical (GUI) display were performed with Matlab R2010a (The MathWorks, Inc., USA) using the Psychophysics Toolbox extensions [29][30][31]. The sampled EMG in a 400 ms moving window was digitally band-pass filtered (5th order zero-lag Butterworth, 20-500 Hz), de-trended, and rectified before being evaluated as a command signal (i.e., the TRACKING signal). A moving average of 400 ms of rectified EMG was found to provide appropriate smoothing for the EMG control task [32]. The average rectified EMG during one second of maximum voluntary isometric contraction (MVC) was used for normalization. Then, an estimate of the resting-state baseline EMG activity was set as one standard deviation over the average magnitude of the rectified EMG over one second while the subject was asked to relax the muscle. During visual pursuit tasks, the moving average of the rectified EMG was provided as visual feedback when it exceeded the resting-state baseline EMG activity. The normalized EMG was displayed as the TRACKING signal along with the TARGET signal ( Figure 3). Both the TARGET signal that goes from 0 to 1 and the TRACKING signal (i.e., normalized EMG) pursuing the TARGET signal were updated at 100 Hz accounting for software-induced delays in processing, and were projected on the wall in front of the subject at the eye level, as illustrated in Figure 3.
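A sketch of this command-signal pipeline on synthetic data (function and parameter names are ours; the on-line implementation used a causal 400 ms moving window, whereas the symmetric average shown here is only for illustration):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def emg_command(raw, fs=2400, win_s=0.4):
    """Sketch of the command-signal pipeline: band-pass 20-500 Hz,
    de-trend, full-wave rectify, then a 400 ms moving average."""
    sos = butter(5, [20, 500], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, raw)          # zero-lag band-pass
    x = np.abs(x - np.mean(x))         # de-trend + full-wave rectify
    w = int(win_s * fs)
    return np.convolve(x, np.ones(w) / w, mode="same")

# Synthetic 'EMG': a burst of broadband activity between 1 s and 2 s.
fs = 2400
rng = np.random.default_rng(4)
t = np.arange(0, 3, 1 / fs)
raw = 0.05 * rng.standard_normal(t.size)
burst = (t > 1) & (t < 2)
raw[burst] += rng.standard_normal(burst.sum())
env = emg_command(raw, fs)
print(env[(t > 1.2) & (t < 1.8)].mean() > 5 * env[t < 0.8].mean())   # -> True
```

The resulting envelope, normalized by its MVC value and gated by the resting-state baseline, is what the subjects saw as the TRACKING signal.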
In the first one-day session, the subjects learned to isometrically contract the TA muscle as quickly and as forcefully as possible, in response to a visual cue when the TARGET signal jumped from 0 to 1 (step-up response), while the ankle was kept fixed in an ankle-foot-orthosis (AFO). Then the subjects learned to relax the TA muscle as quickly as possible on termination of the visual cue when the TARGET signal jumped from 1 to 0 (step-down response). After the subjects were comfortable with this step-up and then step-down evaluation procedure, they participated in two sets of experiments, as illustrated in Figure 4, with each one-day test session separated by at least a week.
1) Experiment 1
11 subjects performed five baseline trials (i.e., baseline task block) where they responded with quick and forceful contraction of the TA muscle to a visual cue when the TARGET signal jumped from 0 to 1 (step-up response), and then quickly relaxed the TA muscle on termination of the visual cue when the TARGET signal jumped from 1 to 0 (step-down response). The visual cue duration was either 3 sec, 4 sec, or 5 sec, during which the subject had to maintain the TRACKING signal as close as possible to the TARGET signal. The visual cues were presented in a pseudorandom order with a random 3 sec, 4 sec, or 5 sec inter-cue-interval. Following the baseline task block, 15 min of 1 mA anodal tDCS was administered to M1, cerebellum, or under sham stimulation in a repeated measure counter-balanced design, after which the subjects performed 5 post-intervention trials (i.e., post-intervention task block) similar to the baseline trials. Figure 2 Electrode montages for anodal tDCS (1 mA direct current for 15 min) of 1. primary motor cortex representation area of the right leg, where a 5 cm × 7 cm saline-soaked sponge anode was placed 1.5 cm lateral and 2 cm posterior to Cz (10-20 EEG system), 2. cerebellum of left hemisphere where the 5 cm × 7 cm saline-soaked sponge anode was placed 3 cm lateral to Inion (10-20 EEG system). The 5 cm × 7 cm saline-soaked sponge cathode was placed above the right contralateral orbit.
During offline analysis in Matlab R2010a (The Mathworks Inc., USA), the raw EMG sampled during each task block of the experiment was digitally zero-phase band-pass filtered (5th order Butterworth, 3 dB bandwidth = 10-500 Hz), then full-wave rectified, and then zero-phase low-pass filtered (5th order Butterworth, 3 dB frequency cutoff = 25 Hz) to generate its linear EMG envelope (LE). The delay in initiation of the EMG LE was defined manually as the time interval between onset of the visual cue and the instant the LE crossed above the baseline LE (i.e., mean resting-state LE + 1 standard deviation). The delay in termination of the EMG LE was defined manually as the time interval between termination of the visual cue and the instant the LE crossed below the baseline LE. Each LE tracing was displayed on a PC monitor in random order without reference to subject, cue duration, or tDCS target, in order to reduce relative bias.
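The envelope extraction and threshold-crossing logic above can be sketched in Python with SciPy as an illustrative stand-in for the MATLAB pipeline. The sampling rate, the synthetic burst, and the stricter 3 SD / 50 ms sustained-crossing criterion are assumptions added to keep this toy example stable (the paper used a 1 SD threshold on real recordings):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000.0  # assumed EMG sampling rate (Hz)

def linear_envelope(emg, fs=FS):
    # zero-phase band-pass 10-500 Hz, full-wave rectify, zero-phase low-pass 25 Hz
    b_bp, a_bp = butter(5, [10 / (fs / 2), 500 / (fs / 2)], btype="band")
    rectified = np.abs(filtfilt(b_bp, a_bp, emg))
    b_lp, a_lp = butter(5, 25 / (fs / 2), btype="low")
    return filtfilt(b_lp, a_lp, rectified)

def first_sustained(mask, hold):
    # first index where `mask` stays True for `hold` consecutive samples
    runs = np.convolve(mask.astype(float), np.ones(hold), mode="valid")
    return int(np.argmax(runs == hold))

# synthetic trial: rest, then an 80 Hz burst from 1.25 s to 2.5 s;
# the visual cue is on from 1.0 s to 2.0 s (all values are assumptions)
rng = np.random.default_rng(0)
t = np.arange(0.0, 3.0, 1.0 / FS)
emg = 0.02 * rng.standard_normal(t.size)
burst = (t >= 1.25) & (t < 2.5)
emg[burst] += np.sin(2 * np.pi * 80.0 * t[burst])

le = linear_envelope(emg)
rest = le[: int(0.8 * FS)]
thr = rest.mean() + 3 * rest.std()  # 3 SD here for robustness; the paper used 1 SD
hold = int(0.05 * FS)               # require 50 ms of sustained crossing
cue_on, cue_off = int(1.0 * FS), int(2.0 * FS)

# delays relative to cue onset and cue termination, in seconds
init_delay = first_sustained(le[cue_on:] > thr, hold) / FS
term_delay = first_sustained(le[cue_off:] <= thr, hold) / FS
```

On this synthetic trial the detected initiation delay is roughly the 0.25 s gap between cue onset and burst onset, and the termination delay roughly the 0.5 s gap between cue offset and burst offset, blurred slightly by the zero-phase low-pass smoothing.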
2) Experiment 2

12 subjects performed five baseline trials (i.e., baseline task block) similar to the baseline task block of Experiment 1. After the baseline trials, the subjects were randomly divided into M1, cerebellum, and sham stimulation groups (i.e., 4 subjects per group). They then performed the myoelectric visual pursuit task for 15 min while 1 mA anodal tDCS was simultaneously administered to M1 or the cerebellum, or sham stimulation was performed. The subjects were asked to track the absolute value of a sinusoid of 0.7 amplitude and 0.01 Hz frequency over its half time-period (i.e., the TARGET signal) during each trial. A set of six such consecutive 50 sec trials, with 2 min of rest in between, was presented to the subjects during the administration of anodal/sham tDCS.
During offline analysis in Matlab R2010a (The Mathworks Inc., USA), the response latency was computed from the delay in initiation of the TRACKING signal with respect to the start of the TARGET signal where the initiation was defined as the instant the TRACKING signal crossed baseline EMG activity. Then, the response latency was normalized by subjects' respective mean baseline delay over their 5 baseline trials. Each TRACKING signal tracing was displayed on a PC monitor in random order without reference to subject or tDCS targets, in order to reduce relative bias. The absolute value of the difference between the TARGET and TRACKING signals, i.e., the tracking-error signal (ERROR signal = |TARGET signal -TRACKING signal|) was computed after removing the response latency from the TRACKING signal, where the mean of the absolute ERROR was analyzed as a measure of tracking accuracy.
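The latency and tracking-error computations can be illustrated with a NumPy sketch. The synthetic TRACKING signal, the baseline threshold, and the use of cross-correlation to estimate the shift are assumptions standing in for the paper's offline routine:

```python
import numpy as np

FS = 100.0  # assumed sampling rate of the displayed signals (Hz)
t = np.arange(0.0, 50.0, 1.0 / FS)  # one 50 s pursuit trial

# TARGET: absolute value of a 0.7-amplitude, 0.01 Hz sinusoid over its half period
target = 0.7 * np.abs(np.sin(2 * np.pi * 0.01 * t))

# synthetic TRACKING: TARGET delayed by 400 ms plus small noise (an assumption)
lag_true = int(0.4 * FS)
rng = np.random.default_rng(1)
tracking = np.concatenate([np.zeros(lag_true), target[:-lag_true]])
tracking = tracking + 0.01 * rng.standard_normal(t.size)

# response latency: first instant TRACKING crosses an assumed baseline level
baseline = 0.05
latency_s = np.argmax(tracking > baseline) / FS

# estimate the TARGET-to-TRACKING shift by cross-correlation and remove it
xc = np.correlate(tracking, target, mode="full")
shift = int(np.argmax(xc)) - (t.size - 1)
aligned = tracking[shift:]

# mean absolute ERROR before and after removing the shift
raw_error = np.abs(target - tracking).mean()
mean_abs_error = np.abs(target[: aligned.size] - aligned).mean()
```

After removing the shift, the residual mean absolute ERROR reflects only the tracking noise, so it falls below the unaligned error.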
Statistical analysis

1) Experiment 1
The delays in initiation and termination of muscle activity during baseline and post-intervention trials were tested for normal distribution by the univariate Lilliefors test ('lillietest' in Matlab R2010a, The MathWorks, Inc., USA) for sessions of each tDCS target - M1, cerebellar, sham - pooled from all subjects. Then, a balanced three-way (tDCS target: M1, cerebellar, sham × step-response type: step-up, step-down × subjects) ANOVA ('anovan' in Matlab R2010a, The MathWorks, Inc., USA) was conducted on the step-response, i.e., the delay in initiation and termination of muscle activity during the baseline trials. Also, a balanced two-way (tDCS target: M1, cerebellar, sham × step-response type: step-up, step-down) ANOVA ('anova2' in Matlab R2010a, The MathWorks, Inc., USA) was conducted on the normalized delay in initiation and termination of muscle activity during the post-intervention trials. The delay was normalized by each subject's mean baseline delay over their 5 baseline trials. To find which pairs were significantly different, post hoc tests ('multcompare' in Matlab R2010a, The MathWorks, Inc., USA) were performed with the critical values found from Scheffe's S procedure.
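The sum-of-squares decomposition behind such a balanced two-way ANOVA can be written out directly. The following NumPy/SciPy sketch uses synthetic delay data; the effect sizes and replicate counts are assumptions, and the published analysis used Matlab's 'anovan'/'anova2' rather than this code:

```python
import numpy as np
from scipy import stats

def balanced_two_way_anova(data):
    """data[i, j, k]: level i of factor A, level j of factor B, replicate k."""
    a, b, n = data.shape
    grand = data.mean()
    mean_a = data.mean(axis=(1, 2))   # marginal means of factor A
    mean_b = data.mean(axis=(0, 2))   # marginal means of factor B
    mean_ab = data.mean(axis=2)       # cell means
    ss_a = b * n * ((mean_a - grand) ** 2).sum()
    ss_b = a * n * ((mean_b - grand) ** 2).sum()
    ss_ab = n * ((mean_ab - mean_a[:, None] - mean_b[None, :] + grand) ** 2).sum()
    ss_e = ((data - mean_ab[:, :, None]) ** 2).sum()
    df = {"A": a - 1, "B": b - 1, "AB": (a - 1) * (b - 1), "E": a * b * (n - 1)}
    out = {}
    for name, ss in [("A", ss_a), ("B", ss_b), ("AB", ss_ab)]:
        f = (ss / df[name]) / (ss_e / df["E"])
        out[name] = (f, stats.f.sf(f, df[name], df["E"]))  # (F statistic, p-value)
    return out

# synthetic delays (ms): 3 tDCS targets (A) x 2 step-response types (B) x 10 replicates,
# with a large step-response effect and no tDCS-target effect (assumed values)
rng = np.random.default_rng(2)
data = rng.normal(250, 20, size=(3, 2, 10))
data[:, 1, :] += 350  # step-down delays much longer than step-up
res = balanced_two_way_anova(data)
```

With these synthetic effects, the step-response factor (B) yields an enormous F statistic while the tDCS-target factor (A) does not, mirroring the qualitative pattern reported for the baseline trials.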
2) Experiment 2
The delays in initiation and termination of muscle activity during baseline trials were tested for normal distribution by the univariate Lilliefors test ('lillietest' in Matlab R2010a, The MathWorks, Inc., USA) for each tDCS group - M1, cerebellar, sham. Then, a balanced two-way (tDCS target: M1, cerebellar, sham × step-response type: step-up, step-down) ANOVA ('anova2' in Matlab R2010a, The MathWorks, Inc., USA) was conducted on the step-response, i.e., the delay in initiation and termination of muscle activity during the baseline trials.
The normalized response latency and the mean absolute ERROR during the last 5 myoelectric visual pursuit trials (Trial# 2-6) were assessed by fitting the performance with a power law function [33] using the Levenberg-Marquardt algorithm ('cftool' in Matlab R2010a, The MathWorks, Inc., USA). The 95% confidence bounds of the coefficients of the fitted power law function were compared for the tDCS groups: M1, cerebellar, sham.
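Fitting the power law f(x) = a·x^b with the Levenberg-Marquardt algorithm can be reproduced with SciPy's curve_fit, which defaults to that algorithm for unconstrained problems. The synthetic learning-curve values below are assumptions, and the 1.96 multiplier is a normal approximation rather than the t-quantile that would be more appropriate for so few points:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * x ** b

# synthetic learning curve: mean absolute ERROR decaying over Trial# 2-6
trials = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
rng = np.random.default_rng(3)
errors = 0.30 * trials ** -0.5 + 0.005 * rng.standard_normal(trials.size)

# Levenberg-Marquardt fit (curve_fit's default when no bounds are given)
popt, pcov = curve_fit(power_law, trials, errors, p0=[1.0, -1.0])
se = np.sqrt(np.diag(pcov))
ci95 = [(p - 1.96 * s, p + 1.96 * s) for p, s in zip(popt, se)]
```

Comparing the `ci95` intervals of the exponent b across fitted groups is the overlap check used in the analysis; non-overlapping bounds indicate a different learning rate.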
Results

1) Experiment 1
The delays in initiation and termination of muscle activity during baseline trials passed the Lilliefors test for normal distribution at the 5% significance level for pooled sessions of each tDCS target - M1, cerebellar, sham. The balanced three-way (tDCS target: M1, cerebellar, sham × step-response type: step-up, step-down × subjects) ANOVA on the delay in initiation and termination of muscle activity during baseline trials showed a significant main effect of the step-response type (F(1) = 2597.11, p < 0.001), but no significant effect of the other factors, tDCS target (F(2) = 0.55, p = 0.58) and subjects (F(10) = 0.87, p = 0.56), nor of the interaction between tDCS target and step-response type (F(2) = 0.01, p = 0.99), between tDCS target and subjects (F(20) = 1.35, p = 0.15), or between step-response type and subjects (F(10) = 1.1, p = 0.36). The post hoc tests performed with the critical values found from Scheffe's S procedure confirmed that, for the step-response type, termination of muscle activity was significantly slower (95% confidence interval for mean delay: 590 ms to 603 ms) than initiation of muscle activity (95% confidence interval for mean delay: 241 ms to 255 ms) during the baseline trials.
The delays in initiation and termination of muscle activity during post-intervention trials passed the Lilliefors test for normal distribution at the 5% significance level for pooled sessions of each tDCS target - M1, cerebellar, sham. The balanced two-way (tDCS target: M1, cerebellar, sham × step-response type: step-up, step-down) ANOVA on the normalized delay in initiation and termination of muscle activity during post-intervention trials showed a marginal main effect of the tDCS target (F(2) = 2.33, p < 0.1) and a significant interaction between tDCS target and step-response type (F(2) = 62.75, p < 0.001), but no significant effect of the step-response type (F(1) = 0.03, p = 0.87). With the critical values found from Scheffe's S procedure, the 95% confidence intervals for the mean normalized delay in initiation and termination of muscle activity during post-intervention trials are shown separately in Figure 5. The normalized delay in initiation of muscle activity differed significantly between all factor levels of tDCS target, with cerebellar anodal tDCS increasing and M1 anodal tDCS decreasing it when compared to sham tDCS, whereas the normalized delay in termination of muscle activity differed significantly only for M1 anodal tDCS, which increased the normalized delay when compared to cerebellar anodal tDCS and sham tDCS. Cerebellar anodal tDCS trended towards decreasing the normalized delay in termination of muscle activity when compared to sham tDCS.
2) Experiment 2
The delays in initiation and termination of muscle activity during baseline trials passed the Lilliefors test for normal distribution at the 10% significance level for each tDCS group - M1, cerebellar, sham. The balanced two-way (tDCS target: M1, cerebellar, sham × step-response type: step-up, step-down) ANOVA on the delay in initiation and termination of muscle activity during baseline trials showed a significant main effect of the step-response type (F(1) = 506.89, p < 0.001), but no significant effect of the other factor, tDCS target (F(2) = 0.6, p = 0.55). The post hoc tests performed with the critical values found from Scheffe's S procedure confirmed that, for the step-response type, termination of muscle activity was significantly slower (95% confidence interval for mean delay: 492 ms to 514 ms) than initiation of muscle activity (95% confidence interval for mean delay: 230 ms to 253 ms) during the baseline trials.
The normalized response latency and the mean absolute ERROR during the last 5 myoelectric visual pursuit trials (i.e., Trial# 2-6) were assessed by fitting the performance with a power law function [33]. Figure 6 shows the results of the myoelectric visual pursuit task for the tDCS groups (M1, cerebellum, sham) and training durations (Trial# 2-6). The top row of Figure 6 shows the overall TARGET and TRACKING signals, the middle row shows the effects on the normalized response latency along with the fitted power law function, and the bottom row shows the effects on the mean absolute ERROR along with the fitted power law function. The 95% confidence bounds of the coefficients of the power law function fitted to the normalized response latency and the mean absolute ERROR are provided in Table 1 for each tDCS group: M1, cerebellar, sham. Here, the power law exponent for the mean absolute ERROR of the cerebellar tDCS group did not overlap with those of the other tDCS groups.
Discussion
In this study, the motor control task involved visual pursuit of a TARGET signal with EMG-based proportional control of a visual TRACKING signal. Prior work has shown that EMG reflects muscle force quite well during isometric conditions, where EMG follows a quadratic increase in its root-mean-square value across force levels [24]. In Experiment 1, the myoelectric step-response task was familiar to the subjects, but the trials were presented in an unpredictable temporal manner during baseline and post-intervention to avoid cognitive anticipation, and to identify the delay in initiation and termination of muscle activity during an open-loop 'ballistic EMG control' task. We first ruled out subject-specific effects on the initiation and termination of muscle activity during baseline trials, and then found that termination of muscle activity was significantly (p < 0.05) slower than initiation of muscle activity for all subjects over all baseline trials. We also found that cerebellar anodal tDCS increased the normalized delay in initiation of muscle activity post-intervention while M1 anodal tDCS decreased it, when compared to sham tDCS, as shown in the top panel of Figure 5. Also, M1 anodal tDCS increased the normalized delay in termination of muscle activity post-intervention when compared to cerebellar anodal tDCS and sham tDCS, as shown in the bottom panel of Figure 5. Therefore, in this study, off-line anodal tDCS of M1 decreased the delay in initiation while it increased the delay in termination during performance of the 'ballistic EMG control' task. However, Galea and colleagues [20] did not observe any changes in reaction time with either M1 or cerebellar anodal tDCS during a more complex task where the subjects had to move a digitizing pen with their right hand over a horizontal digitizing tablet. The outcome differences could be caused by the different EMG recordings in the respective studies.
In the present study, EMG latencies were obtained only for the excitation dynamics of the target muscle. Excitation and contraction processes of multiple muscles crossing a joint and the subsequent joint mechanics, as recorded in the study by Galea and colleagues [20], might limit comparability. Moreover, the premotor cortex might have played an important role in the rather complex (fine) motor task movements employed by Galea and colleagues [20], which required controlled contraction of multiple muscles compared to the control of a single muscle in this study. Here, M1 is involved in task performance in part through its reciprocal interaction with the cerebellum [34]. Cerebellar priming of M1 plasticity in shaping the impending motor command by favoring or inhibiting the recruitment of several muscle representations has been shown recently [35]. Prior work has suggested that cerebellar anodal tDCS may increase the Purkinje cells' excitability and facilitate the inhibitory tone the cerebellum exerts over M1 (cerebellar brain inhibition) [19], which explains an increase in the normalized delay in initiation of muscle activity following cerebellar anodal tDCS and a respective decrease following M1 anodal tDCS. Conversely, cerebellar-driven M1 inhibition should then facilitate sudden termination of ongoing muscle activity.

Figure 6 Results from the myoelectric visual pursuit task (i.e., Experiment 2) for Myoelectric Training Trial# 2-6 for the tDCS groups: M1, cerebellum, sham. The top row illustrates the overall TARGET and TRACKING signals during the modulation of EMG during 'proportional EMG control' trials, the middle row shows the effects on the normalized response latency, and the bottom row shows the effects on the mean absolute ERROR.
In accordance with this concept, cerebellar anodal tDCS trended towards decreasing the normalized delay in termination of muscle activity post-intervention, while M1 anodal tDCS significantly increased it, when compared to sham tDCS.
In Experiment 2, naive subjects learned a novel visual pursuit task. Subjects had to minimize the spatial ERROR between the TARGET signal and the TRACKING signal using visual feedback. The response latency was high at the start of myoelectric training during 'proportional EMG control', but decreased with training as the TRACKING response signal temporally shifted with respect to the TARGET signal, as shown in the top panel of Figure 6. The visual pursuit was initially driven primarily by a feedback motor command, where the feedback motor command also served as the motor command error for developing or modifying the inverse model for this novel visuomotor transformation [23] as the subject learned the proportional dynamics of the myoelectric visual pursuit. Feedback control is inherently slow because it uses delayed sensory (e.g., proprioceptive and visual) signals to compute the motor command [36,37], but feedforward control uses the inverse model to predict the (feedforward) motor command necessary to pursue the TARGET signal, which should increase the speed and accuracy during visual pursuit if the inverse model is accurate [21,36]. However, it was found that the absolute value of the power law exponent for the mean absolute ERROR of the cerebellar tDCS group was lower than that of the other tDCS groups (see Table 1), which indicated slower motor learning (bottom panel of Figure 6) during 'proportional EMG control'. This is in contrast with the results from Galea and colleagues [18], which may be explained in terms of a computational model of human motor learning. In the computational model of the cerebellar circuit, the simple spikes represent the feedforward motor commands and the parallel fiber inputs represent the desired trajectory (TARGET) as well as the sensory state of the TRACKING signal [21].
The climbing fiber inputs are assumed to carry a copy of the feedback motor commands where the complex spiking of the climbing fibers in the cerebellum are considered to be the biological representation of an error signal. Marko and colleagues [38] recently found that the probabilities of complex spiking declined with increasing error size and therefore they postulated that complex spiking is a representative of the sensitivity to error, and not the error itself. Therefore learning of the inverse model may be dependent on the sensitivity to error, in addition to the magnitude of the error presented during the task performance. From results of Experiment 1, it can be postulated that there was facilitation of cerebello-thalamocortical inhibitory connections at movement initiation [39] with cerebellar anodal tDCS, which might have reduced the sensitivity of the Purkinje cells to errors represented by complex spiking of the climbing fibers that resulted in a slower decrease in the mean absolute ERROR during 'proportional EMG control' trials. In fact, Galea and colleagues [20] hypothesized that cerebellar tDCS may change Purkinje cells response to the input of the climbing fibers by affecting secondary events such as long-term depression. Moreover, Popa and colleagues recently showed that modulation of the cerebellar cortex by noninvasive brain stimulation affects the response of M1 to a subsequent plasticity induction protocol that involves sensory afferent input but not otherwise [35].
In terms of clinical applications, the current study on healthy subjects showed a decrease in the normalized delay in initiation of muscle activity following M1 anodal tDCS, which may be beneficial for stroke survivors who often suffer from delays in initiation of muscle activity [4]. Therefore an adjuvant treatment with M1 anodal tDCS may facilitate appropriate myoelectric triggering of FES, where delays in initiation of muscle activity make it difficult for EMG-triggered FES to assist time-critical functional tasks such as ankle dorsiflexion during overground walking. However, the optimal positioning of the cerebellar tDCS electrode remains unclear. More studies are required to better define how tDCS over particular regions of the cerebellum affects individual cerebellar sensory-motor functions given its topographical organization [40]. In our future studies, we will investigate optimization of the electrode montage for cerebellar tDCS to target different sensory modalities (e.g., proprioceptive instead of visual), since recent studies in patients with cerebellar damage demonstrated that adaptation to proprioceptive versus visual errors relies on the integrity of different regions of the cerebellum [41,42]. Therefore anodal tDCS-induced changes in excitability of different regions of the cerebellum may differentially affect proprioceptive versus visual sensory modalities [40].

Table 1 legend: Power law function: f(x) = a*x^b; SSE: sum of squares due to error; R-square: coefficient of determination; RMSE: root mean squared error (standard error).
Conclusions
The preliminary results from healthy subjects showed specific, and at least partially antagonistic, effects of M1 and cerebellar stimulation on motor performance. An appropriate adjuvant treatment with tDCS may help to facilitate myoelectric control for brain machine interfaces; however, the neuroprosthetic and neurotherapeutic efficacy of such an adjuvant treatment needs further investigation in stroke survivors. Furthermore, an adjuvant treatment with tDCS may improve muscle recruitment and coordination during post-stroke neurorehabilitation.
HybTrack: A hybrid single particle tracking software using manual and automatic detection of dim signals
Single particle tracking is a compelling technique for investigating the dynamics of nanoparticles and biological molecules in a broad range of research fields. In particular, recent advances in fluorescence microscopy have made single molecule tracking a prevalent method for studying biomolecules with a high spatial and temporal precision. Particle tracking algorithms have matured over the past three decades into more easily accessible platforms. However, there is an inherent difficulty in tracing particles that have a low signal-to-noise ratio and/or heterogeneous subpopulations. Here, we present a new MATLAB based tracking program which combines the benefits of manual and automatic tracking methods. The program prompts the user to manually locate a particle when an ambiguous situation occurs during automatic tracking. We demonstrate the utility of this program by tracking the movement of β-actin mRNA in the dendrites of cultured hippocampal neurons. We show that the diffusion coefficient of β-actin mRNA decreases upon neuronal stimulation by bicuculline treatment. This tracking method enables an efficient dissection of the dynamic regulation of biological molecules in highly complex intracellular environments.
Here, we developed HybTrack, a novel software tool that enables the tracking of dim particles using a combination of manual and automatic detection. Rather than leaving the whole process to an automatic algorithm, HybTrack provides the user an opportunity to participate in particle tracking. To our knowledge, this is the first particle tracking software that allows switching between manual and automatic detection. We demonstrate that a little intervention using manual selection can dramatically improve the performance of particle tracking. Analysis of β-actin mRNA transport in neurons highlights the advantages of HybTrack to track heterogeneous populations of particles with a low SNR.
Results
Overview of HybTrack. Most of the automatic tracking programs generally use a two-step process: (i) detection of particles in all image frames, and then (ii) linking of the particles in consecutive images. In contrast, HybTrack performs particle detection and tracking simultaneously frame by frame (Fig. 1). In each image frame, tracking of a particle is performed with the following procedure: (i) selecting a particle scan region, (ii) detecting local maxima in the scan region and sub-pixel localization of the particle, and (iii) saving the particle position and updating the scan region for the particle in the next frame (see Supplementary Note and Supplementary Fig. S1).
At the beginning of the tracking procedure, the user provides the number of particles to track, and the initial positions of those particles are selected by clicking on the bright spots in the first image frame (Supplementary Fig. S2). Based on this information, the program defines a particle scan area whose height and width are given by the pre-defined parameters Scan row and Scan col. Because HybTrack typically works with a small scan area (Scan row ≈ Scan col ≈ 5-50 pixels), image filtering is generally not required to detect local maxima within the search region. Local maxima are found by calculating the mean intensity of all rectangles with the size parameter Window size within the search area. After a local maximum is found, the sub-pixel position of the particle is calculated by the centroid or by fitting the image with a two-dimensional (2D) Gaussian function. Finally, the particle position is saved and used to define a new scan area for the next frame.

Figure 1 (a) The user needs to annotate particles to track in the first image frame. Based on the initial positions, the tracking algorithm proceeds to search for local maxima and calculates the sub-pixel coordinates. If the local maximum is not bright enough, a pop-up window appears for manual detection of the particle. If there are overlapping particles within the scan region, two options are provided, Manual selection or Linear motion. This process is repeated for all annotated particles and image frames. (b) GUI interface of HybTrack. After setting the parameters, the tracking process is started, and the result is saved as a text file.

Scientific Reports | (2018) 8:212 | DOI:10.1038/s41598-017-18569-3
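The scan-area search and centroid localization described above can be sketched as follows. This is an illustrative Python stand-in for HybTrack's MATLAB routine, not its actual code: the parameter names `scan` and `window` echo Scan row/Scan col and Window size, the synthetic image and spot position are assumptions, and HybTrack's alternative 2D Gaussian fit is not shown:

```python
import numpy as np

def localize_in_scan_area(img, center, scan=15, window=5):
    """Find the brightest window-sized rectangle in a scan area around
    `center` (row, col), then refine the position to sub-pixel precision
    with an intensity-weighted centroid."""
    r0, c0 = center
    h = scan // 2
    rows = slice(max(r0 - h, 0), min(r0 + h + 1, img.shape[0]))
    cols = slice(max(c0 - h, 0), min(c0 + h + 1, img.shape[1]))
    patch = img[rows, cols]
    # mean intensity of every window-sized rectangle in the scan area
    w = window
    best, best_rc = -np.inf, (0, 0)
    for i in range(patch.shape[0] - w + 1):
        for j in range(patch.shape[1] - w + 1):
            m = patch[i:i + w, j:j + w].mean()
            if m > best:
                best, best_rc = m, (i, j)
    # centroid of the brightest window gives the sub-pixel position
    win = patch[best_rc[0]:best_rc[0] + w, best_rc[1]:best_rc[1] + w]
    gy, gx = np.mgrid[0:w, 0:w]
    total = win.sum()
    sub_r = rows.start + best_rc[0] + (gy * win).sum() / total
    sub_c = cols.start + best_rc[1] + (gx * win).sum() / total
    return sub_r, sub_c

# synthetic frame: a Gaussian spot at (20.3, 24.7) on a noisy background
rng = np.random.default_rng(4)
img = 10 + 2 * rng.standard_normal((48, 48))
yy, xx = np.mgrid[0:48, 0:48]
img += 100 * np.exp(-((yy - 20.3) ** 2 + (xx - 24.7) ** 2) / (2 * 1.5 ** 2))
r, c = localize_in_scan_area(img, (21, 25))
```

The recovered (r, c) lands within a fraction of a pixel of the true spot center; the residual bias comes from the constant background pulling the centroid toward the window center, which the Gaussian-fit option avoids.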
If the image data have a sufficiently high SNR and the particles exhibit small movements, HybTrack completes automatic tracking without any interruption. However, when there is an ambiguity in the automatic particle tracking, a pop-up window appears for manual tracking ( Supplementary Fig. S3). There are two representative cases where manual tracking is required. First, the particle image could be too dim or noisy to detect local maxima automatically. Even though computer algorithms fail to detect such a dim signal, human vision can sometimes distinguish a particle out of a noisy background. Therefore, we implemented the HybTrack software to provide the user an opportunity to examine the image. If no particle was detected automatically in the scan area, HybTrack offers three options: Stop, Manual detection, and Gap. By choosing Stop, the user can terminate the trajectory of the corresponding particle. If Manual detection is selected, a new window pops up so that the user can select the position of the particle manually in the image. The Gap option leaves the position of the particle as a NaN (Not-a-Number) value. The second case that requires manual tracking is when two particles are found within a scan region. In this case, HybTrack provides two options under Two-particle overlap. One option is to select each particle's position manually in the image. The other one is the Linear motion option which predicts the particle's position based on the previous velocity of the particle. Then the predicted spot is used as an approximate position for sub-pixel localization of the particle. The Linear motion option is useful when a particle exhibits directed motion with a constant velocity during an overlapping event ( Supplementary Fig. S4).
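The Linear motion option amounts to a constant-velocity prediction from the previous displacement; a minimal sketch (an assumption-level stand-in, not HybTrack's actual code):

```python
import numpy as np

def predict_linear_motion(positions):
    """Predict the next (row, col) position from the last displacement,
    as a constant-velocity stand-in during a two-particle overlap."""
    p = np.asarray(positions, dtype=float)
    if len(p) < 2:
        return p[-1]                 # no velocity estimate yet: assume no motion
    return p[-1] + (p[-1] - p[-2])   # last position plus last per-frame displacement

track = [(10.0, 5.0), (11.2, 5.1), (12.4, 5.2)]  # a constant-velocity run
predicted = predict_linear_motion(track)
```

The predicted spot then serves as the approximate position for sub-pixel localization in the next frame, as described above.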
Tracking single mRNA in live neurons.
To demonstrate the utility of the HybTrack software, we performed single particle tracking of β-actin mRNA in live hippocampal neurons. By imaging neurons cultured from MCP × MBS mice, which express GFP-labeled β-actin mRNA 9 , we observed the real-time dynamics of single β-actin mRNA molecules. In the MCP × MBS mouse, 24 repeats of the MS2 binding site (MBS) stem-loop are inserted in the 3′ untranslated region (3′UTR) of the endogenous β-actin gene 10 . The mouse also expresses the MS2 capsid protein (MCP) fused with GFP (MCP-GFP), a dimer of which binds to an MBS stem-loop with high specificity and affinity. Thus, each β-actin mRNA is labeled with up to 48 GFPs and becomes bright enough for single mRNA imaging despite the background of free MCP-GFPs in the cytoplasm. Figure 2a shows an image of a dendritic segment and a kymograph generated from a time-lapse movie of a dendrite (Supplementary Movie S1). Projected on the y plane of the x-y-time voxels, the kymograph shows single mRNA paths along the dendrite. Most of the β-actin mRNAs in the neurons are stationary, but some mRNAs show diffusive or directed motion 11 and occasionally change their motion types 12 . Moreover, a moving mRNA sometimes gets out of the imaging plane or overlaps with another mRNA. Although automatic algorithms have been developed to address these cases, manual tracking can be a more direct and easier way to handle them.
To assess the performance of HybTrack, we compared the tracking results with those from two state-of-the-art automatic tracking programs, u-Track 13 and TrackNTrace 14 . u-Track is one of the most widely used single particle tracking (SPT) programs, and TrackNTrace is one of the latest tracking programs, offering an extendable open-source framework for various applications. To compare the tracking results, we plotted the traces obtained by each program on the kymograph (Fig. 2b-d). It is evident that even highly sophisticated automatic tracking programs suffer from incomplete linking of particle trajectories that have a low SNR. Although human eyes can trace about 10 particles in the image shown in Fig. 2a, u-Track and TrackNTrace recognized them as 23 and 25 different particles (with a track length >20 frames), respectively. However, HybTrack was able to construct the full trajectories of 10 particles that had a low SNR ranging from 1.15 to 3.65 (Fig. 2d). Supplementary Fig. S5 also shows an example of tracking very dim particles.
Another advantage of HybTrack can be found when tracking particles showing directed motion with a relatively high speed. For example, in Fig. 2a, the mRNA on the far-left travels to the middle of the image (yellow box) at a speed of 1.3 μm/s. During the directed motion, the mRNA traveled much faster than the other mRNAs and left only sparse spots. By using the manual tracking option in HybTrack, those sparse spots can be linked with just a few clicks (Fig. 2d, yellow arrow). Another example of linking directed motion is shown in Supplementary Fig. S6. In this 460-frame-long time-lapse image, HybTrack successfully tracked three mRNA particles with only 6 clicks of manual detection.
In Fig. 2e, particle trajectories obtained by HybTrack are overlaid on the dendrite image shown in Fig. 2a. Most of the β-actin mRNAs were localized near the dendritic spine necks or inside the spines suggesting that the local translation of β-actin has a role in stabilizing the dendritic spines 15 . The time-averaged mean squared displacement (TA-MSD) of each mRNA is plotted in log-log scale in Fig. 2f. The TA-MSD of the mRNA that showed directed motion (blue line) has an exponent larger than 1, indicating super-diffusive motion. However, most of the mRNAs exhibited sub-diffusive motion with exponents less than 1. The heterogeneous nature of the mRNA movement is also demonstrated by the wide distribution of the diffusion coefficients in Fig. 2g.
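The time-averaged MSD and its log-log exponent can be computed as follows. This NumPy sketch uses a synthetic Brownian trajectory; the diffusion coefficient, frame interval, and lag range are assumptions, not the measured mRNA data:

```python
import numpy as np

def ta_msd(xy, dt, max_lag):
    """Time-averaged MSD of one 2-D trajectory xy (N x 2), frame interval dt."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((xy[m:] - xy[:-m]) ** 2, axis=1)) for m in lags])
    return lags * dt, msd

# synthetic 2-D Brownian trajectory with diffusion coefficient D (um^2/s)
rng = np.random.default_rng(5)
D, dt, n = 0.05, 0.1, 2000
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n, 2))  # per-axis variance 2*D*dt
xy = np.cumsum(steps, axis=0)

tau, msd = ta_msd(xy, dt, max_lag=10)
# log-log slope estimates the anomalous exponent alpha (MSD ~ 4*D*tau^alpha in 2-D)
alpha, log4D = np.polyfit(np.log(tau), np.log(msd), 1)
D_est = np.exp(log4D) / 4.0
```

An exponent alpha near 1 indicates normal diffusion, below 1 sub-diffusion, and above 1 super-diffusion, matching the interpretation of the TA-MSD slopes in Fig. 2f.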
Finally, we investigated the activity-dependent dynamics of β-actin mRNA by tracking dendritic mRNAs using HybTrack. To investigate the changes in the movement of mRNA upon stimulation, we treated hippocampal neurons from MCP × MBS mice with bicuculline, which is a GABA receptor blocker. We performed tracking of mRNAs in the same dendrite before and after stimulation and calculated the diffusion coefficients of diffusive mRNAs. After the treatment with bicuculline, there was a significant decrease in the diffusion coefficients (Fig. 2g; n = 44 mRNAs in the baseline, n = 23 mRNAs after bicuculline treatment; P KS = 0.0013, Kolmogorov-Smirnov test). The mean diffusion coefficient of the mRNAs in the dendrites also decreased upon stimulation (Fig. 2h; n = 5 dendrites; P = 0.059, pairwise t-test). This result is consistent with a previous report which showed a decrease in the diffusion coefficient of β-actin mRNA upon neuronal stimulation by KCl depolarization 9 . These observations suggest that β-actin mRNAs may be anchored by so-called synaptic tags 16 , which are expected to be identified in the future.
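The distribution comparison used here can be reproduced with a two-sample Kolmogorov-Smirnov test. The SciPy sketch below runs on synthetic diffusion coefficients; the lognormal parameters and group values are assumptions for illustration, not the measured data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# synthetic diffusion coefficients (um^2/s): stimulation shifts the
# distribution toward smaller values (illustrative parameters only),
# with the same group sizes as in the experiment (44 vs 23)
d_baseline = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=44)
d_stimulated = rng.lognormal(mean=np.log(0.015), sigma=0.5, size=23)

# the two-sample KS test compares the full empirical distributions,
# not only their means
ks_stat, p_value = stats.ks_2samp(d_baseline, d_stimulated)
```

Because the KS statistic measures the maximum distance between the two empirical cumulative distributions, it is sensitive to shifts anywhere in the distribution, which suits the wide, heterogeneous spread of diffusion coefficients seen in Fig. 2g.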
Discussion
We have developed HybTrack as a new practical tool for the analysis of single particle imaging data. Fully automated tracking often fails to capture the entire trajectory of a particle visible to the researcher. Manual tracking software such as MTrackJ 5 offers flexible track editing functionalities for trajectory inspection and curation. However, it would be extremely tedious to manually follow each particle through hundreds to thousands of image frames. HybTrack facilitates tracking of single particles that have a low SNR by combining the advantages of both automatic and manual tracking methods. The combination of these two methods enables our algorithm to give results closest to the human vision with high efficiency.
While automatic particle tracking programs generate tracks by constructing paths for all detected particles, our algorithm generates tracks starting from the initial particle positions selected by the user in the first image frame. Starting with manual selection substantially reduces the interference from noise. There have been a couple of semi-automatic particle tracking tools 17,18 that require particle annotation in the first image frame and perform automated tracking afterwards. However, HybTrack is, to our knowledge, the first single particle tracking program that enables switching between manual and automatic detection during frame-by-frame tracking of individual particles. For this reason, HybTrack is very useful for efficient tracking of a highly heterogeneous population of particles that alternate between different motion types. Such trajectories obtained from HybTrack can be subsequently processed by an objective analysis method such as the MSD-Bayes approach 12,19 to automatically classify particle motions.
A limitation of HybTrack is that it may not be suitable for high-throughput analysis of numerous particles in a single data set. While other automatic tracking algorithms would be advantageous to analyze the overall behavior of many particles, HybTrack is more useful for the precise analysis of individual particle trajectories. For example, our algorithm can be readily applied to analyze the movement of single particles with respect to subcellular components such as nuclear pores 20 , focal adhesions 21,22 , P-bodies 23,24 , cytoskeletons 25,26 , and so forth. We expect that this new software will be a valuable addition to the SPT analysis tools to solve many outstanding problems in single-cell single-molecule biology.
Methods
Software. The tracking software used in this work is available at http://github.com/bhlee1117/HybTrack/.
The software details are described in the Supplementary Note.

Primary mouse neuron cultures. All animal experiments were conducted in accordance with methods approved by the Institutional Animal Care and Use Committee (IACUC) at Seoul National University. Primary hippocampal neurons were cultured from 1-day-old pups of the MCP × MBS mice 9 using a method described previously 27 . Briefly, hippocampi were dissected out from the brains of 3-4 pups and dissociated with trypsin. Glass-bottom dishes were coated with poly-D-lysine, on which ~10^5 dissociated neurons were seeded. The neuron cultures were grown for 8-14 days in vitro in Neurobasal-A medium (Gibco) supplemented with B-27 (Gibco), GlutaMAX (Gibco) and Primocin (InvivoGen) at 37 °C and 5% CO2.
Imaging single mRNA in live neurons. Live neuron imaging experiments were performed as described previously 27 . Prior to imaging, the culture medium was removed from the neuron culture and replaced with HEPES-buffered saline (HBS) containing 119 mM NaCl, 5 mM KCl, 2 mM CaCl2, 2 mM MgCl2, 30 mM D-glucose and 20 mM HEPES at pH 7.4. Wide-field fluorescence images were taken with a U Apochromat 150x 1.45 NA TIRF objective (Olympus) on an Olympus IX73 inverted microscope equipped with an iXon Ultra 897 electron-multiplying charge-coupled device (EMCCD) camera (Andor), an MS-2000 XYZ automated stage (ASI) and a Chamlide TC top-stage incubator system (Live Cell Instrument). A 488-nm diode laser (Cobolt) was used to excite GFP, and the fluorescence emission was filtered with a 525/50 band-pass filter (Chroma). Time-lapse images were taken at 10 frames per second (fps) with the Micro-Manager software.

Image analysis. 2D Gaussian fitting was performed with a weighted overdetermined regression method 28 .
Background-subtracted fluorescence images were fit with a 2D Gaussian function, G(x, y) = A exp[−(x − x_c)²/(2w_x²) − (y − y_c)²/(2w_y²)], where A is the peak amplitude; x_c and y_c are the center position of the Gaussian function, and w_x and w_y are the standard deviations in the x and y coordinates. In HybTrack, the output intensity value is calculated as the integral of the 2D Gaussian function, 2πA·w_x·w_y. In the centroid method, the particle location was obtained as the intensity-weighted centroid of the particle image. The SNR of a particle image was calculated as the ratio of the peak amplitude to the standard deviation of the background. In Fig. 2f, the relative error of the MSD was calculated following Qian et al. 29 , in which Δρ_n denotes ρ_n − 〈ρ_n〉 and N is the total frame number. The diffusion coefficient of each mRNA was calculated by linear fitting of the MSD data.
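The centroid localization and the integrated Gaussian intensity can be sketched as follows. This is a simplified illustration, not the paper's analysis code, and the SNR definition used here (peak amplitude over the background standard deviation) is one common convention.

```python
import numpy as np

def centroid(patch):
    """Intensity-weighted centroid (x, y) of a background-subtracted patch."""
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    return xs.ravel() @ patch.ravel() / total, ys.ravel() @ patch.ravel() / total

def snr(patch, background_pixels):
    """Peak amplitude over the standard deviation of background pixels."""
    return patch.max() / np.std(background_pixels)

def integrated_intensity(A, wx, wy):
    """Integral of a 2D Gaussian with amplitude A and widths wx, wy."""
    return 2 * np.pi * A * wx * wy

# Synthetic spot centered at (x_c, y_c) = (5.3, 4.7) on an 11x11 patch
ys, xs = np.indices((11, 11))
A, wx, wy = 100.0, 1.2, 1.2
patch = A * np.exp(-((xs - 5.3)**2 + (ys - 4.7)**2) / (2 * wx**2))
xc, yc = centroid(patch)
print(f"centroid = ({xc:.2f}, {yc:.2f})")  # close to (5.30, 4.70)
```

Centroid estimation is fast but biased by noise and truncation, which is why Gaussian fitting is preferred for the final sub-pixel localization.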
Data availability. Image data of β-actin mRNA in live neurons analyzed in the current study are available from the corresponding author upon reasonable request.
Dexmedetomidine attenuates sevoflurane-induced neurocognitive impairment through α2-adrenoceptors
It has been reported that sevoflurane induces neurotoxicity in the developing brain. Dexmedetomidine is an α2 adrenoceptor agonist used for the prevention of sevoflurane-induced agitation in children in clinical practice. The aim of the present study was to determine whether dexmedetomidine could prevent sevoflurane-induced neuroapoptosis, neuroinflammation, oxidative stress and neurocognitive impairment. Additionally, the involvement of α2 adrenoceptors in the neuroprotective effect of dexmedetomidine was assessed. Postnatal day (P)6 C57BL/6 male mice were randomly divided into four groups (n=6 in each group). Mice were pretreated with dexmedetomidine, either alone or together with yohimbine, an α2 adrenoceptor inhibitor, then exposed to 3% sevoflurane in 25% oxygen. Control mice either received normal saline alone or with sevoflurane exposure. Following sevoflurane exposure, the expression of cleaved caspase-3 was detected by immunohistochemistry in hippocampal tissue sections. In addition, the levels of tumor necrosis factor-α (TNF-α), interleukin (IL)-1β, IL-6 and malondialdehyde, as well as superoxide dismutase (SOD) activity in the hippocampus were measured. At P35, the learning and memory abilities were assessed in each mouse using a Morris water maze test. Dexmedetomidine significantly decreased the expression of activated caspase-3 following sevoflurane exposure. Moreover, dexmedetomidine significantly decreased the levels of TNF-α, IL-1β and IL-6 in the hippocampus. SOD activity also increased in a dose-dependent manner in dexmedetomidine-treated mice. MDA decreased in a dose-dependent manner in dexmedetomidine-treated mice. Lastly, sevoflurane-induced learning and memory impairment was reversed by dexmedetomidine treatment. By contrast, co-administration of yohimbine significantly attenuated the neuroprotective effects of dexmedetomidine. 
These findings suggested that dexmedetomidine exerted a neuroprotective effect against sevoflurane-induced apoptosis, inflammation, oxidative stress and neurocognitive impairment, which was mediated, at least in part, by α2 adrenoceptors.
Introduction
Sevoflurane is an inhaled anesthetic introduced into clinical practice >20 years ago (1). It is a sweet-smelling agent with fast onset and recovery, a low blood-gas partition coefficient and limited cardiorespiratory depression properties (2). The use of sevoflurane for general anesthesia in the pediatric population has become common (3). However, an increasing number of studies on rodents and nonhuman primates have suggested that sevoflurane can cause neuronal apoptosis in the developing brain and result in learning and memory deficits later in adulthood (4)(5)(6)(7). Sevoflurane has also been demonstrated to inhibit the proliferation of neural progenitor cells, reduce the self-renewal capacity of neural stem cells and induce neuroinflammation through microglial cells in mice (8)(9)(10)(11). These findings have raised concern about the detrimental effects of sevoflurane on brain development and neurocognitive function in children.
Dexmedetomidine is an α2 adrenoceptor agonist that has been used as an anesthetic agent and sedative for several years (12,13). In clinical practice, dexmedetomidine is used to prevent sevoflurane-related agitation in children (14,15). Previous studies have suggested that dexmedetomidine could suppress sevoflurane-induced neuronal apoptosis and neurocognitive impairment in neonatal rats. Importantly, our preliminary study also indicated that dexmedetomidine could attenuate sevoflurane-induced learning and memory impairment in mice. However, the mechanism underlying the neuroprotective effect of dexmedetomidine remains poorly understood. Dexmedetomidine is an α2 adrenergic receptor agonist, and α2 adrenoceptors are known to act as trophic factors in the central nervous system (16). Moreover, adrenoceptors are activated by endogenous norepinephrine, which promotes cell survival, notably through the Ras-Raf-ERK pathway (17,18).
Several studies have suggested that dexmedetomidine has neuroprotective effects against ischemic cerebral injury through activation of α2 adrenergic receptors and binding to imidazoline-1 and -2 receptors (19,20). It was suggested that neuroinflammation and oxidative stress may cause synapse dysfunction, which results in cognitive dysfunction (21)(22)(23)(24)(25). The aim of the present study was to investigate the effect of dexmedetomidine on sevoflurane-induced neuroinflammation, oxidative stress and neuroapoptosis. The role of α2 adrenoceptors in the neuroprotective effect of dexmedetomidine was also examined.
Animals.
A total of 60 postnatal day 6 (P6) C57BL/6 male mice (weight, ~1.7 g) were purchased from Changzhou Cavens Laboratory Animal Co., Ltd. Mice were housed with their mothers for 4 weeks under a 12:12-h light/dark cycle at a temperature of 24±2˚C and 60±10% humidity prior to sevoflurane exposure. All animals had free access to food and water.
Experimental procedures. All animal procedures were approved by the Animal Experimental Ethics Committee of The Huai'an Maternity and Child Clinical College of Xuzhou Medical University and were performed in strict accordance with the guidelines of University Laboratory Animal Management.
In the first experiment, animals were divided into four groups: NS + Air (control group), NS + Sev, Dex20 + Sev and Dex20 + Sev + Yoh (n=6/group). P6 mice received an intraperitoneal injection of 20 µg/kg dexmedetomidine (Jiangsu Hengrui Medicine Co., Ltd.) or normal saline 2 h prior to sevoflurane exposure (26,27). The mice were then exposed to either 6 h of 3% sevoflurane in 25% oxygen or to air in a temperature-controlled chamber; mice in the Dex20 + Sev + Yoh group additionally received an injection of yohimbine (1 mg/kg) 15 min before sevoflurane exposure. A Morris water maze (MWM) test was conducted to assess hippocampal-dependent learning and memory ability from P35 to P41.
In a separate experiment, animals were allocated into six groups: NS + Air, NS + Sev, Dex10 + Sev, Dex20 + Sev, Dex20 + Sev + Yoh and Dex + Air. Normal saline or 5, 10 or 20 µg/kg dexmedetomidine, with or without injection of the α2-adrenoceptor antagonist yohimbine (1 mg/kg; Absin), was administered by intraperitoneal injection 2 h before exposure, and the mice were then exposed to either 6 h of 3% sevoflurane in 25% oxygen or to air. At the end of sevoflurane exposure, all mice were sacrificed under anesthesia (intraperitoneal injection of 100 mg/kg sodium pentobarbital) and the brain was removed. The hippocampus was then dissected out on ice for subsequent experiments.
MWM test. The MWM test was conducted in a circular tank (diameter, 1.8 m; depth, 60 cm) filled with 20˚C water opacified with titanium dioxide. In the center of the tank, an 11x11 cm platform was submerged 1.0 cm below the water surface. The mice were tested on the MWM four times a day, from P35 to P41 (7 days in total). Mice were randomly placed in the pool. If a mouse found the platform, it was allowed to stay on it for 15 sec. If the mouse was not able to find the platform within 90 sec, it was guided to the platform and allowed to stay on it for 15 sec. The swimming process was recorded by a video tracking system, and the data were captured using motion-detection software (Biobserve FST Analysis). The platform was removed from the pool after the reference training, and the mice were placed in the opposite quadrant. Both the number of crossings completed within 60 sec and the crossing time were recorded. At the end of the test, each mouse was wiped dry to prevent hypothermia.
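For illustration, platform crossings and first-entry latency can be extracted from tracked coordinates roughly as follows. This is a hypothetical post-processing sketch (the study used Biobserve FST Analysis); the platform zone is modeled as a disc, and all values below are invented.

```python
import numpy as np

def crossings_and_latency(track, t, center, radius):
    """Count platform-zone entries and first-entry latency from a 2D track.

    track: (N, 2) positions; t: (N,) timestamps in seconds; the platform
    zone is a disc with the given center and radius.
    """
    inside = np.linalg.norm(track - np.asarray(center), axis=1) <= radius
    entries = np.flatnonzero(inside[1:] & ~inside[:-1])
    if inside[0]:
        entries = np.r_[-1, entries]  # starting inside counts as an entry
    latency = t[entries[0] + 1] if len(entries) else None
    return len(entries), latency

# Toy trajectory: the animal enters a 0.3-unit platform zone three times
track = np.array([[0, 0], [0.5, 0], [1.0, 0], [1.5, 0],
                  [1.0, 0], [0.4, 0], [1.0, 0]], dtype=float)
t = np.arange(len(track)) * 0.1  # sampled at 10 Hz
n_cross, latency = crossings_and_latency(track, t, center=(1.0, 0.0), radius=0.3)
print(n_cross, latency)
```

The same entry-detection logic (a False-to-True transition of the "inside" mask) underlies crossing counts in most tracking software.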
Immunohistochemistry. The hippocampal tissue was fixed in 4% paraformaldehyde overnight at 4˚C, embedded in paraffin and cut into 5-µm sections. The sections were then de-paraffinized and rehydrated. After 24 h, the sections were dried at 37˚C, incubated with 0.3% hydrogen peroxide in methanol for 30 min at room temperature, washed in PBS and blocked with 1% bovine serum albumin (BSA; MP Biomedicals, LLC) in PBS at room temperature for 60 min. The sections were then incubated with a goat anti-cleaved caspase-3 primary antibody (1:200; cat. no. sc-166589; Santa Cruz Biotechnology, Inc.) at 4˚C overnight, followed by a Vectastain® Avidin-Biotin Complex staining kit (cat. no. PK-6100; Vector Laboratories, Inc.) for 40 min at room temperature in the dark. Tissue sections were then stained with diaminobenzidine (Vector Laboratories, Inc.), dehydrated in a gradient of ethanol solutions (70-100%) and finally covered with a coverslip using neutral resin. A light microscope (magnification, x200) was used to observe the sections, and NIS-Elements BR image processing and analysis software (cat. no. E100; Nikon Corporation) was used to quantify cleaved caspase-3-positive cells in 3 fields of the hippocampal CA1 region.
ELISA. The levels of tumor necrosis factor-α (TNF-α), interleukin (IL)-6 and IL-1β in the hippocampus were determined using ELISA kits purchased from R&D Systems (cat. nos. MTA00B, M6000B and MLB00C for TNF-α, IL-6 and IL-1β, respectively), according to the manufacturer's instructions. The hippocampal tissue was homogenized in ice-cold lysis buffer (Promega Corporation) using an electric homogenizer and centrifuged at 7,155 x g for 5 min at 4˚C, and the protein concentration was quantified using a Pierce™ BCA Protein Assay kit (Thermo Fisher Scientific, Inc.).
Superoxide dismutase (SOD) activity measurement. SOD is an enzyme that catalyzes the dismutation of superoxide radicals into oxygen and hydrogen peroxide (26). SOD activity was analyzed using an SOD kit (cat. no. bc0170; Beijing Solarbio Science & Technology Co., Ltd.). Samples were prepared and analyzed according to the kit procedure, with absorbance measured at 560 nm.
Measurement of malondialdehyde (MDA) levels. MDA is a marker of oxidative stress-mediated lipid peroxidation (26). MDA levels were measured using the thiobarbituric acid (TBA) reaction method with an MDA kit (cat. no. BC0025; Beijing Solarbio Science & Technology Co., Ltd.). A total of ~0.1 g tissue was weighed, 1 ml extraction buffer was added and the tissue was homogenized in an ice bath. Following centrifugation at 8,000 x g at 4˚C for 10 min, the samples were re-suspended and placed on ice until measurement. Absorbance was read at 450, 532 and 600 nm using a microplate reader, and the MDA levels (in nmol/mg protein) were calculated.
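As a rough sketch of how the three-wavelength readings can be turned into a concentration, one commonly cited TBA correction formula is MDA (nmol/ml) = 6.45×(A532 − A600) − 0.56×A450. The coefficients below are an assumption taken from that generic formula, not from the Solarbio kit manual, which should be followed for the authoritative calculation.

```python
def mda_nmol_per_mg(a450, a532, a600, protein_mg_per_ml, dilution=1.0):
    """Three-wavelength TBA estimate of MDA, normalized to protein content.

    Coefficients (6.45, 0.56) come from a commonly cited correction
    formula and are an assumption here; kit manuals may differ.
    """
    mda_nmol_per_ml = (6.45 * (a532 - a600) - 0.56 * a450) * dilution
    return mda_nmol_per_ml / protein_mg_per_ml

# Hypothetical absorbances and protein concentration
print(mda_nmol_per_mg(0.10, 0.35, 0.05, protein_mg_per_ml=2.0))
```

Subtracting A600 corrects for turbidity, and the A450 term corrects for interfering sucrose/soluble-sugar absorbance in the TBA reaction.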
Flow cytometry. The frequency of apoptotic cells in the brain was assessed by flow cytometry. Briefly, hippocampi were harvested on ice immediately after sacrifice and dissociated into a single-cell suspension using 10% trypsin at 37˚C for 15 min. An Annexin V-FITC and propidium iodide apoptosis detection kit (cat. no. 556547; BD Biosciences) was used to stain apoptotic cells. A total of 3x10^4 single cells per sample were analyzed by flow cytometry (BD Accuri C6) with FlowJo 8.6 software (both Becton Dickinson & Company).
Western blot analysis. Western blotting was used to examine phosphorylated (p)-cAMP response element-binding protein (CREB) levels after dexmedetomidine treatment. Immediately after the brains were dissected, hippocampi were lysed in RIPA lysis buffer (Sangon Biotech) with 1% PMSF on ice for 15 min and centrifuged at 12,000 x g at 4˚C for 5 min. A BCA assay was used to determine the protein concentration. The lysates were then heated at 95˚C for 10 min and 30 µg protein was loaded per lane on a 10% gel. All protein samples were separated by SDS-PAGE, transferred to a nitrocellulose membrane and blocked with 5% BSA in TBST (0.1% Tween-20) for 60 min. After incubation with primary antibodies (1:2,000; p-CREB, cat. no. ab32096; CREB, cat. no. ab32515; and β-actin, cat. no. ab6276; all Abcam) at 4˚C overnight and horseradish peroxidase-conjugated secondary antibodies [goat anti-rabbit and anti-mouse (1:2,000; cat. nos. ab205718 and ab205719, respectively; both Abcam)] for 1 h at room temperature, the membranes were developed with ECL reagent in a dark room and imaged using a Tanon 1600/1600R Gel Imaging System (UVP, LLC).
Statistical analysis. All data are presented as the mean ± SD. All experiments were repeated at least twice. Student's t-test was performed to compare the difference between two groups. Multigroup comparisons were performed using one-way ANOVA followed by Tukey's post hoc test. GraphPad Prism 5 (GraphPad Software, Inc.) was used to conduct the analysis. P<0.05 was considered to indicate a statistically significant difference.
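The ANOVA step can be sketched with SciPy on hypothetical data. The group values below are invented for illustration (they are not the study's measurements), and Bonferroni-corrected t-tests stand in here for Tukey's post hoc test; SciPy ≥ 1.11 provides the real thing as `scipy.stats.tukey_hsd`.

```python
from itertools import combinations
from scipy import stats

# Hypothetical escape-latency data (s), n = 6 mice per group
groups = {
    "NS+Air":    [22, 28, 25, 24, 27, 23],
    "NS+Sev":    [44, 49, 46, 42, 47, 45],
    "Dex20+Sev": [30, 33, 29, 31, 34, 28],
}

# One-way ANOVA across the three groups
f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.1f}, P = {p:.2e}")

# Pairwise comparisons with Bonferroni correction (a simple stand-in
# for Tukey's HSD used in the paper)
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_pair = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: corrected P = {min(p_pair * len(pairs), 1.0):.4f}")
```

Bonferroni is more conservative than Tukey's HSD for all-pairs comparisons; it is used here only because it is trivially expressed with `ttest_ind`.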
Results
Dexmedetomidine reverses sevoflurane-induced learning and memory impairment via α2 adrenoceptors. P6 mice were treated with 3% sevoflurane for 6 h, and neurocognitive function was tested using a MWM from P35-P41. The mice exposed to sevoflurane displayed significantly increased escape latency times from days 2-7 compared with control mice exposed to air (Fig. 1A). Moreover, sevoflurane-treated mice also completed significantly fewer crossings compared with the control group (Fig. 1B). These observations suggested that sevoflurane exposure in young mice could induce cognitive impairment after three weeks. However, mice treated with dexmedetomidine 2 h prior to sevoflurane exposure displayed significantly reduced cognitive impairment, as indicated by shorter escape latency times and increased numbers of crossings, compared with mice receiving sevoflurane alone. There were no significant differences between the dexmedetomidine treatment group and the control group. By contrast, the α2 adrenoceptor antagonist yohimbine significantly inhibited the neuroprotective effect of dexmedetomidine. No significant differences in escape latency and number of platform crossings were observed between the yohimbine-treated group and mice exposed only to sevoflurane, indicating that the neuroprotective effect of dexmedetomidine may be mediated by α2 adrenoceptors.
pCREB is a molecular marker for memory processing in spatial learning in the hippocampus (28). Western blot analysis demonstrated that pCREB levels were significantly decreased in the brain following sevoflurane exposure compared with the control group, but were restored by dexmedetomidine treatment. However, this effect was inhibited by yohimbine pretreatment (Fig. 1C). These results indicated that dexmedetomidine could reverse learning and memory impairment caused by sevoflurane.
Dexmedetomidine attenuates sevoflurane-induced neuronal apoptosis through α2 adrenoceptors in 6-day-old mice. The CA1 pyramidal cell layer has characteristic neurophysiological signatures, serving a role in the hippocampal memory circuit as an important output node (27). A 6-h exposure to 3% sevoflurane resulted in a significant increase in caspase-3-positive cells in the CA1 layer of the hippocampus, compared with air-exposed control mice. Pretreatment with dexmedetomidine significantly decreased the sevoflurane-induced increase in caspase-3-positive cells (Fig. 2A). However, yohimbine attenuated the effect of dexmedetomidine treatment on the number of caspase-3-positive cells, suggesting a role for α2 adrenoceptors in dexmedetomidine-mediated neuroprotection. Moreover, dexmedetomidine significantly reduced sevoflurane-induced apoptosis in the brain, and this effect was partially inhibited by yohimbine (Fig. 2B).
Dexmedetomidine attenuates sevoflurane-induced proinflammatory cytokine release through α2 adrenoceptors in 6-day-old mice. Mice exposed to 3% sevoflurane for 6 h displayed a significant increase in IL-1β, IL-6 and TNF-α levels compared with control mice exposed to air (Fig. 3). However, pretreatment with dexmedetomidine significantly reduced the sevoflurane-induced release of the proinflammatory cytokines. In particular, dexmedetomidine decreased the levels of IL-1β in a dose-dependent manner. Yohimbine significantly increased the levels of IL-1β, IL-6 and TNF-α, restoring the expression of these pro-inflammatory cytokines to levels comparable to sevoflurane alone. Thus, α2 adrenoceptors may mediate the anti-inflammatory effect of dexmedetomidine (Fig. 3).
Dexmedetomidine attenuates sevoflurane-induced oxidative stress through α2 adrenoceptors in 6-day-old mice. Exposure to 3% sevoflurane for 6 h significantly increased oxidative stress, as indicated by increased MDA levels and reduced SOD activity, compared with control mice exposed to air (Fig. 4). By contrast, pretreatment with dexmedetomidine significantly decreased sevoflurane-induced oxidative stress in a dose-dependent manner. However, the protective effects of dexmedetomidine on oxidative stress were inhibited by yohimbine, indicating that dexmedetomidine could modulate oxidative stress through α2 adrenoceptors.
Discussion
The present study demonstrated that pretreatment with dexmedetomidine could attenuate sevoflurane-induced neurotoxicity, as indicated by reduced learning and memory impairment, and decreased neuronal cell apoptosis, inflammation and oxidative stress. Moreover, the neuroprotective effects of dexmedetomidine were reversed by yohimbine, an α2-adrenoceptor antagonist, suggesting that the effects of dexmedetomidine may be mediated by α2 adrenoceptors.
The MWM is broadly used for the evaluation of learning and memory function in mice, particularly spatial learning and memory (28,29). In the present study, the MWM test was used to determine the effect of dexmedetomidine on sevoflurane-induced cognitive impairment. Consistent with previous studies, exposure to sevoflurane in the developing brain induced learning and memory functional impairment in adulthood (4,30). Shan et al (31) suggested that dexmedetomidine ameliorated sevoflurane-induced neurocognitive impairment. Although the methods used to evaluate neurocognitive function differed, the present study also indicated that dexmedetomidine could reverse sevoflurane-induced cognitive impairment.
Neuroapoptosis is strongly associated with neurocognitive function (32). In the present study, sevoflurane exposure significantly increased neuroapoptosis. Moreover, dexmedetomidine decreased neuroapoptosis induced by sevoflurane. These findings were consistent with previous studies by Shan et al (31) and Li et al (11), which demonstrated that dexmedetomidine ameliorated isoflurane-induced neuroapoptosis.
In addition, cognitive impairment is associated with neuroapoptosis, inflammation and oxidative stress (22,23,25). Proinflammatory cytokines, such as TNF-α and IL-6, are associated with neuroinflammation and lead to cognitive impairment following surgery under bupivacaine anesthesia (33). In the present study, sevoflurane increased IL-1β, IL-6 and TNF-α levels in the hippocampus, which was reversed by dexmedetomidine pretreatment. Microglial cells, the resident macrophages of the central nervous system, play an important role in innate immunity and neuroinflammatory processes in the brain (34). Although microglial cells can promote healing, activation of microglia can generate cytotoxic mediators, such as IL-1β, IL-6 and TNF-α, which may be toxic to neighboring neurons (35). Thus, dexmedetomidine might function by inhibiting the activation of microglia. However, in the present study, the potential effect of dexmedetomidine on microglia was not evaluated, and further study would be required to validate this hypothesis.
An imbalance between radical-generating and radical-scavenging systems causes oxidative stress (36).
Increased reactive oxygen species induce lipid peroxidation of polyunsaturated fatty acids in biofilms and plasma lipoproteins, which may lead to multiple organ dysfunction (37). Sevoflurane can impair the function and affect the morphology of immature neuronal mitochondria (38). In the present study, sevoflurane increased MDA levels and decreased SOD activity in the hippocampus. However, dexmedetomidine pretreatment decreased MDA levels and increased SOD activity, compared with mice exposed to sevoflurane alone. Thus, dexmedetomidine attenuated oxidative stress. In the present study, yohimbine, an α2 adrenoceptor inhibitor, significantly attenuated the positive effects of dexmedetomidine on neurocognitive impairment, neuroapoptosis, neuroinflammation and oxidative stress. These findings suggested that α2 adrenoceptors might mediate the protective effects of dexmedetomidine, consistent with previous studies (19,20).
In conclusion, the present study suggested that dexmedetomidine could provide neuroprotection against sevoflurane-induced neuroapoptosis, neuroinflammation, oxidative stress and neurocognitive impairment by activating α2 adrenoceptors. These findings may provide insight into the development of treatment options that could prevent neurotoxicity caused by sevoflurane.
DIFFERENCE OF THE PLASTIC STRESS AND RESIDUAL BY HOLLOMAN AND HOOKE EQUATION FOR TWO DIFFERENT STEELS
This paper presents a few applied stresses imposed on two different materials and the effect of these stresses on the material's residual stress. It also derives an equation to calculate residual stress from the combination of Hooke's and Holloman's laws. In order to calculate the residual stresses, two different materials were chosen: the first is an alloy steel, SAE 4340, and the second a high strength steel, UHB-20C. It is seen that both materials behave similarly, which means the difference between the applied stress and the residual stress is minimal when the applied stress is closer to the material's tensile strength. However, when the applied stress is closer to the material's yield strength, the difference between the applied stress and the residual stress is significantly higher.
INTRODUCTION
The evolution of engineering materials and the manufacturing processes from which mechanical parts are generated has demanded the development of new mathematical models to simulate a raw material's internal stresses and strains, generated by external loads, during the manufacturing of a part. As an example, the metal working of metallic materials produces internal stresses called residual stresses, which can result in failure either in the raw material during the manufacturing of a part or even when the final product is used. As a matter of fact, failure of parts most likely occurs when the calculation of their residual stress is neglected [1].
As part of the residual stress calculation, the elastic stress and elastic strain should be taken into consideration. Elastic stress and elastic deformation can be directly related by Hooke's Law [2] because the stress and strain are usually time independent. It means that, upon release of the load, the elastic strain is completely recovered (i.e., the strain returns to zero immediately) [3]. Hooke's model is a powerful, simple equation which simulates the behavior of most materials by relating the applied stress directly to the strain. Hooke's equation is used in most linear-elastic software which deals with the design of parts that cannot fail. Failure under the linear-elastic perspective means that a part reaches stresses that exceed the material's elastic limit when loaded with an external force or pressure [4][5].
When a material is deformed plastically there is another phase, called plastic deformation or the non-linear stress-strain regime. This phase occurs when an external load which exceeds the elastic limit of the material is applied in order to change the geometry of the material to produce parts. Therefore, plastic deformation occurs in most of the steps taken along the manufacturing processes; for example, processes like forging, drawing, stamping, bending, hydroforming, etc. generate stresses and strains within the material's plastic deformation phase [6]. When the deformation of the material exceeds its elastic limit, dislocations move within its atomic structure: the shear stress acting on the slip plane causes these dislocations to move through the atomic structure. As they move, they encounter barriers such as solute atoms, metallic particles, grain boundaries and other dislocations, which effectively stop a dislocation until the stress is increased and the dislocation can overcome the obstacle. This atomic movement can be simulated by using the Holloman equation [7]. Another possibility is that residual stresses are introduced into the material even when the process does not change the shape of the part, but applies enough pressure to add dislocations to the atomic structure; processes like machining, grinding and shot-peening fall under this category [8-10].
Residual stress is a topic of great interest in the manufacturing of parts and components because residual stress directly affects the reliability, in terms of life span, of the final parts in the field. The topic is even more relevant when fatigue is one of the main interests of a study. The definition of residual stress, according to Schijve [11], is a stress distribution present in a structure, component, plate or sheet while there is no external load applied.
This paper aims to calculate the residual stress, by combining Hooke's equation and the Holloman equation, in materials that are deformed plastically and then have the external stress released. For this purpose, Hooke's and Holloman's laws are combined in order to keep the calculation simple while providing a guideline for engineers in manufacturing processes. Additionally, two materials with different mechanical properties were chosen to demonstrate the difference between plastic stress and residual stress.
RESIDUAL STRESS CALCULATION
As described previously, residual stress is a combination of plastic deformation and material elastic recovery. Elastic deformation, or elastic recovery, also called the spring back modulus [12], is when the material returns to its original shape (geometry) just after the force is released. Elastic deformation is called elastic recovery when the material is plastically deformed but, when the force is released, the material tries to come back to its original shape and cannot, due to the permanent strain (increased dislocation density). The elastic portion of the deformation, called elastic recovery, can be written using Hooke's Law as follows:

σer = E·εe (1)

where σer is the elastic recovery stress of the material, E (210 GPa) is the Young's modulus and εe is the elastic strain. When a part passes through a process where its shape is permanently changed, this is called plastic deformation; as mentioned, processes like forging, drawing, stamping, bending, etc. promote the change of the material's shape. For permanent change (plastic deformation) the Holloman equation is commonly used; it describes the stress applied on the material when it exceeds the material's yield strength, as follows:

σp = K·εp^n (2)

where σp is the plastic stress (permanent deformation), K is the strength index or strength coefficient, εp is the plastic strain and n is the strain hardening index.
In order to determine the residual stress in the material, the combination of both equations is necessary. The plastic deformation generates a stress which is close to the residual stress but not exactly the same, because the elastic portion (elastic recovery) of the deformation has to be subtracted from the plastic stress:

σr = σp − σer (3)

where σr is the residual stress and σp is the plastic stress.
Usually the elastic deformation is calculated using Hooke's Law, as is the elastic recovery of the material when it passes through a plastic deformation:

εer = σa/E (4)

where εer is the elastic recovery strain and σa is the stress applied on the component. In regard to plastic deformation, the Holloman equation (Equation (2)) is used and can be manipulated as follows:

εp = (σp/K)^(1/n) (5)

In order to determine the residual stress in the material when deformed plastically, the elastic portion of the deformation (elastic recovery) has to be subtracted from the total deformation, as seen below:

εr = εp − εer (6)

εr = (σp/K)^(1/n) − σa/E (7)

Rearranging Equation (2) and placing Equation (7) into it leads to:

σr = K·[(σp/K)^(1/n) − σa/E]^n (8)

Because σa = σp can be assumed to have the same value, Equation (8) can be rewritten as follows:

σr = K·[(σa/K)^(1/n) − σa/E]^n (9)
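Equation (9) is straightforward to evaluate numerically. The sketch below uses illustrative K and n values (they are NOT the paper's Table 1 data, which is not reproduced here) together with the E = 210 GPa used in the paper.

```python
def residual_stress(sigma_a, K, n, E):
    """Residual stress after unloading, per Eq. (9):
    sigma_r = K * ((sigma_a / K)**(1/n) - sigma_a / E)**n
    Valid when sigma_a is above the yield strength (plastic regime).
    """
    strain_r = (sigma_a / K) ** (1.0 / n) - sigma_a / E
    if strain_r <= 0:
        return 0.0  # elastic recovery consumed all of the permanent strain
    return K * strain_r ** n

E = 210e3            # MPa, Young's modulus used in the paper
K, n = 800.0, 0.15   # hypothetical Holloman parameters for illustration
for sigma_a in (500.0, 750.0):
    sr = residual_stress(sigma_a, K, n, E)
    print(f"applied {sigma_a:.0f} MPa -> residual {sr:.2f} MPa "
          f"(difference {sigma_a - sr:.2f} MPa)")
```

Even with these made-up parameters the model reproduces the paper's qualitative trend: the gap between applied and residual stress shrinks as the applied stress moves away from the yield strength toward the tensile strength.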
Comparison of two different materials
At first glance, when comparing different materials in terms of elastic recovery and plastic deformation, which result in residual stress, the values seem to be negligible, but they are not. The values are directly related to the type of material considered. In order to elucidate the difference, two materials were chosen: the alloy steel AISI 4340 [13] and the high strength steel UHB-20C. Their mechanical properties are listed in Table 1. For the material AISI 4340, the difference between the residual stress and the plastic stress is very small, as seen in Figure 1.
Figure 1 shows the stress difference, which is the applied stress minus the residual stress calculated with Equation (9). For example, when a stress of 500 MPa is applied, the calculated remaining (residual) stress is 497.44 MPa, giving a difference of 2.56 MPa between the applied stress (500 MPa) and the calculated residual stress (497.44 MPa). As seen, the stress difference decreases as the applied stress in the material increases. At 500 MPa, for example, the stress difference is 2.56 MPa, i.e., 0.51% of the total applied stress. When the stress increases by about 50%, rising to 747 MPa, the stress difference decreases 6.5-fold, which represents a difference of 0.05%.
For the high strength steel UHB-20C, as the mechanical strength increases, the stress difference also increases.
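The trends for both materials can be reproduced with a short parameter sweep. The Holloman parameters K and n below are assumed, order-of-magnitude values chosen only to reproduce the qualitative behavior (a larger absolute stress difference for the stronger steel, shrinking as the applied stress rises), not the exact figures reported in the text.

```python
# Sweep of the stress difference (applied minus residual stress, Equation 9)
# for two hypothetical parameter sets. K (MPa) and n are ASSUMED values for
# an AISI 4340-type and a UHB-20C-type steel, not those of the original study.
E = 210_000.0  # Young's modulus in MPa (210 GPa)

materials = {
    "AISI 4340-like": {"K": 640.0, "n": 0.15, "stresses": (500.0, 747.0)},
    "UHB-20C-like": {"K": 2500.0, "n": 0.10, "stresses": (1900.0, 2300.0)},
}

for name, m in materials.items():
    for sigma_a in m["stresses"]:
        # Equation (9): residual stress after subtracting elastic recovery
        eps = (sigma_a / m["K"]) ** (1.0 / m["n"]) - sigma_a / E
        sigma_r = m["K"] * eps ** m["n"]
        diff = sigma_a - sigma_r
        print(f"{name}: {sigma_a:6.0f} MPa applied -> "
              f"difference {diff:6.2f} MPa ({100 * diff / sigma_a:.2f}%)")
```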
Figure 2 shows that, for applied stresses close to the elastic limit, the stress difference relative to the applied stress is larger than for stresses close to the material's tensile strength. The high strength material UHB-20C presents a stress difference of 55.74 MPa for an applied stress of 1,900 MPa, around 3% of it. When the material reaches a very high applied stress of 2,300 MPa, an increase of 21% in applied stress, the difference drops to 0.05% (1.23 MPa). | 2020-10-30T10:09:47.434Z | 2020-05-15T00:00:00.000 | {
"year": 2020,
"sha1": "07aa10fb8dc54f590d23dee383e7ba377fc80f16",
"oa_license": "CCBY",
"oa_url": "http://www2.ifrn.edu.br/ojs/index.php/HOLOS/article/download/9449/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "16b34ecc1d756308caa2aa954413c4cfc37c695f",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
234684139 | pes2o/s2orc | v3-fos-license | Sleep, Cognition, and Yoga
Stress is one of the major problems globally, associated with poor sleep quality and cognitive dysfunction. Modern society is plagued by sleep disturbances, due to professional demands or lifestyle or both, often leading to reduced alertness and compromised mental function, besides the well-documented ill effects of disturbed sleep on physiological functions. This pertinent issue needs to be addressed. Yoga is an ancient Indian science, philosophy, and way of life. Recently, yoga practice has become increasingly popular worldwide. Yoga practice is an effective adjunct for stress, sleep, and associated disorders. There are few well-controlled published studies in this area. We reviewed the available literature, including the effect of the modern lifestyle on children, adolescents, adults, and the geriatric population. The role of yoga and meditation in optimizing sleep architecture and cognitive functions, leading to optimal brain functioning in normal and diseased states, is discussed. We included articles published in English with no fixed time duration for the literature search. Literature was searched mainly using the PubMed and Science Direct search engines and critically examined. Studies have revealed positive effects of yoga on sleep and cognitive skills among healthy adults as well as patients with some neurological diseases. Further, on evaluating the published studies, it is concluded that sleep and cognitive functions are optimized by yoga practice, which brings about changes in autonomic function, structural changes, changes in metabolism and neurochemistry, and improved functional brain network connectivity in key regions of the brain.
activities, and diet. Modern lifestyle patterns often have negative effects on health physically, psychologically, and socially. A significant basis of healthy life is sleep. Sleep disorders have several social, psychological, economic, and health consequences. Lifestyle may have an adverse effect on sleep, which in turn has a clear influence on mental and physical health. The modern lifestyle adversely affects sleep: advances in modern technology cause later bedtimes and longer hours of nighttime arousal due to the use of electronic media devices. These challenges are common worldwide, both in the Western world and in our country, due to the rampant use of electronic media, including mobile phones, television, games, and computers. [4] The effects are commonly observed in all age groups, including toddlers, school-going children, teenagers, and adults. The usage of electronic media has been associated with shorter sleep duration and excessive body weight. Several behavioral and environmental factors that interfere with normal sleep patterns have been summarized, describing the effect of environmental factors on individual behavior, which further leads to sleep disturbance. [5] Night light exposure suppresses the secretion of the key sleep-regulatory hormone melatonin and leads to sleep disturbances, including fragmented sleep and delayed sleep onset. Moreover, stimulatory and unpleasant media content, like violent content and games needing undivided attention, can lead to disrupted sleep. Interruptions due to the ringing of phones and texting late at night disturb sleep, awaken people, and make it difficult to go back to sleep. All these issues, as well as poor parental monitoring and control, have an additive effect in children. [6] Stimulants and drugs such as caffeine, nicotine, and alcohol, used for alertness or mood elevation, interfere with normal sleep.
Altered patterns of diet and exercise and obesity are also associated with changes in sleep architecture. [7] Coffee contains caffeine, a widely used psychoactive compound that stimulates dopaminergic circuits associated with "reward," producing behavioral effects similar to other dopamine-mediated compounds, such as cocaine and amphetamine. The accumulation of the neurochemical adenosine in the basal forebrain promotes sleep by decreasing the sensitivity of dopamine receptors and helps initiate sleep. Caffeine is a nonspecific blocker of the adenosine receptor, which enhances the effect of dopamine on the D2 receptor and increases the availability of dopamine, leading to a stimulatory effect. Caffeine, due to its adenosine-antagonistic effect, brings about electroencephalographic (EEG) changes associated with decreased homeostatic sleep pressure, responsible for promoting wakefulness. [8] In certain cases, daily caffeine consumption has been linked to impaired sleep architecture, sleep fragmentation, and impaired daytime functioning. The effect is possibly due to the long half-life of caffeine, ranging from 3 to 7 h. Caffeine consumed during the afternoon or late evening would still be present in the system past sleep time and influence the physiological arousal system, hindering sleep initiation. Energy drinks too lead to sleep disturbances. Moreover, these fall in the category of "wake-inducing drug supplements" and are not subject to the regulations valid for soft drinks. The psychostimulants caffeine and modafinil fall in this category and are used to maintain alertness in patients and for professional requirements during sleep deprivation. [9] The circadian sleep-wake cycle is influenced by nutritional and hormonal signals, and mealtimes can alter sleep onset. In some situations, metabolic cues can affect the master circadian clock as well as the circadian responses to light.
Recent studies have shown that the time of feeding is important to set circadian rhythmicity, and delayed dinner time is linked to increased sleep latency, reduced sleep duration, and short total sleep time. [10] Sleeping may be thought of as counter-productive due to a lack of proper information and awareness about the deleterious effects of sleep loss. [11] The sociocultural milieu, including erratic work schedules, late-night entertainment, use of psychostimulants, and use of electronic media, compromises healthy sleep-wake schedules, giving a low priority to sleep. The busy schedules of parents add to the issue. Dinner and family activities are postponed to later hours. [12] Teenagers and children are often involved in several extracurricular activities which eat into the evening time, while demanding school events take a toll on personal time with family. Sleep loss is not currently considered a public health issue. Busy parents and children often push back sleeping in order to accomplish other activities considered a priority. With long hours at school, children are unable to obtain adequate sleep. On the other hand, the idea of greater achievement through prolonged wakefulness, accomplishing more at studies and extracurricular activities, is counter-productive, as exhausted children under-perform during the day and would accrue less from creative and extracurricular tasks. [13] Sleep is integral to academic excellence. However, its value and relevance are often ignored in the academic realm and in designing programs aimed at excelling academic performance. Moreover, sleep is rarely integrated into interventions designed to improve overall health and well-being. A weight regulation program targeting childhood obesity, for instance, would target only nutrition and exercise. Optimum sleep as an essential factor is not part of government policy or pediatric practice.
[14] Thus, there is a general lack of awareness when it comes to the serious consequences of chronic sleep insufficiency on the health and success of children and adults. The modern lifestyle affects the body's regulatory processes associated with sleep regulation and is a reflection of our personal and professional demands, leading to rampant sleep deprivation in children and adolescents.
Sleep and cognitive functions
There is ample literature available that establishes the role of sleep in brain maturation as well as in the development and maintenance of cognitive functions such as learning and memory consolidation. [15] New learning and its consolidation, i.e., the formation of long-term memories, is attributed to REM sleep, and a relationship between cholinergic function, duration and depth of REM sleep, and cognitive functioning has been observed. [16] Hippocampus-dependent declarative memory is also facilitated by specific sleep stages. Sleep spindles occurring in Stage 2 sleep are associated with verbal memory retention, which is correlated with an increase in the number of sleep spindles. It is proposed that there is a hippocampal and neocortical network for consolidation of memory wherein NREM sleep facilitates the conversion of episodic memories from hippocampus-dependent to relatively hippocampus-independent. When sleep deprivation occurs after learning, this process is hampered, so that there is a greater chance of memory retrieval from the hippocampus. [17,18] Sleep deprivation is categorized as acute, i.e., an extended single wake episode, or chronic, with inadequate sleep over several days. There is literature on the effect of chronic sleep deprivation. A study showed that shorter sleep duration and fatigue were associated with subjective rather than objective measures of cognitive function. [19] It has been shown that chronic changes in sleep architecture are associated with compromised cognitive function scores on several measures. Circadian phase alterations, which routinely occur, for instance, after returning to work after a weekend, influence cognitive function. [20] Another study also pointed out that tests of memory and verbal fluency showed reduced scores on Monday morning following longer hours of weekend sleep. [21] Sleep quality may likewise play a vital role in cognitive functions. Sleep quality refers to how well an individual sleeps during the night.
It is normally assessed by means of self-reported frequency of night-time awakenings, sleep latency, sleep duration, awakening, and feeling of freshness or tiredness, using standard instruments like the Pittsburgh Sleep Quality Index. [22] One study, however, found that while disturbed sleep was related to an impairment in cognitive functions, it was not linked to increased cognitive decline. [23] It is well documented that sleep as well as cognitive functions decline with age. Cognitive decline is linked with impairment in working and episodic memory, with a lesser impact on semantic and recognition memory. [24] The rate of cognitive decline appears to vary significantly between individuals; furthermore, the neuronal changes that are associated with cognitive decline appear to begin during middle age. In addition, poor sleep (quantity, quality, and efficiency) is also associated with cognitive decline.
Yoga to improve sleep and cognition
Yoga is rooted in Indian culture and is a way of life which promotes physical, spiritual, and mental well-being. There are different components of yoga, including postural activities (asanas), breath control (pranayama), and meditation. [25] The following text reviews the effect of yoga and meditation on sleep, cognitive functions, and mental well-being in normal adults, the elderly, and some neurological patients (epilepsy and migraine). The reviewed literature is summarized in Table 1.
Meditation for sleep quality and mental well-being in young and middle-aged adults
Meditation practice helps in maintaining homeostasis in the body by producing global changes in the brain, including in sleep and its regulation. [45] The practice of yoga improves sleep architecture and mental well-being in young and middle-aged adults. Sudershan Kriya and Vipassana practice prevented the decline in NREM sleep, and Vipassana increased REM sleep, in healthy adults aged 31-55 years. [26] In another study, Vipassana practitioners had increased NREM and REM sleep across age groups: young, middle, and older ages. [27] Yoga improves mental functions in professionals working in demanding environments. A 5-day capsule program was reported to improve the anxiety, insomnia, and mental well-being of managers. [28] The same group also showed improved emotional intelligence following a yoga course among a sample of university students. [29] In a previous study by Patra and Telles, cyclic meditation (twice a day) practice had shown positive effects on sleep quality. [46] Cyclic meditation practice has been found to be beneficial for respiratory, muscular, and cardiac variables due to yoga posture and guided relaxation, specifically during sleep. [47] In another recent study, a yoga training program for 15 days improved mental well-being and reduced anxiety among primary school teachers. [30] A recent meta-analysis on the effect of mindfulness meditation on sleep [48] observed that sleep architecture was improved and that meditation may be helpful in treating some aspects of sleep disturbance, but pointed out that further research into the area is warranted.
Studies have been conducted on experienced practitioners of meditation showing morphological changes in brain regions as well as in functional networks, pointing to changes associated with brain plasticity in meditators. EEG studies in experienced meditators have shown that alpha-band functional network topology is better integrated, but not that of the beta and theta bands. [31] In a recent study, Sevinc et al. [32] reported that hippocampal circuits have a role in reducing anxiety following meditation training by modulating fear memory.
Meditation to improve cognitive functions in stress (sleep deprivation) and neurological disease (migraine and epilepsy)
International Journal of Yoga | Volume 14 | Issue 2 | May-August 2021

All meditation practices involve two attention-guided
Panjwani et al. [25] Contd... practices. First is focused attention centered on a given object. The second one encourages the practitioner to be silent while not to responding and passively observing the thoughts. The growing scientific and clinical interest in mindfulness meditation has produced various intervention studies authenticating its benefits for managing pain; boosting immune response; regulating brain activity to enhance positive emotion, and simultaneously preventing depression, anxiety and negative affect and decreasing perceived stress, and promoting self-compassion. Literature has shown that meditation practice helps an individual in coping up with the various type of physical and mental health issues. [33,49,50] "Om" meditation practice for 2 months significantly reduced various aspects of cognitive decline including different components of memory, attention, and vigilance. During total night sleep deprivation, standard neurophysiological tests including Raven's Advanced Progressive Matrices, Auditory Evoked potential component Middle Latency Response, Event-related potential P300-ERP, and Contingent Negative Variation were recorded. All the measures of cognition were impaired during sleep deprivation. "Om" meditation practice ameliorated the standard deviation induced impairment in cognitive decline. [34,51] Meditation in addition to promoting health and well-being in normal people also has a curative function in diseased states. The frequency and intensity of headaches improved in patients of migraine who practiced Yoga in addition to conventional therapy. Yoga practice has been used by clinicians for the management of epilepsy. [35,52] Sahaj yoga meditation was found to be a beneficial intervention against epilepsy as shown in some of the electrophysiological responses of patients. 
There was a significant reduction in seizure frequency and an improvement in EEG, with an increase in percent alpha frequency, a reduction in delta frequency, increases in the ratios of alpha/delta and (alpha + beta)/(delta + theta), an increased galvanic skin response, improved visual contrast sensitivity (VCS), and reduced urinary catecholaminergic metabolites following meditation practice. On these grounds, it was suggested that meditation practice, by modulation of the limbic system, modulates the hypothalamo-hypophyseal axis and autonomic nervous system activity and regulates endocrine functions. A reduced stress level contributed to seizure reduction, since stress was a trigger for seizure episodes in most of the patients. The reduced level of stress following meditation practice also led to a better response to auditory and visual stimuli in terms of auditory evoked potentials and VCS. The changes observed after meditation practice may also be attributed to behavioral alterations. The ability to focus attention, improved concentration, and motivation are relevant. An altered lifestyle with a reduction in stress is important for the clinical and electrophysiological changes. Meditation practice as an adjunct to anti-epileptic medication helped in seizure reduction and brought about changes in electrophysiological and biochemical measures. [25,36,37]

Geriatric population and yoga

Due to the physiological process of aging, sleep architecture alters in the elderly. Furthermore, sleep disturbances also increase with age. Alterations in sleep architecture in elderly people include increased sleep latency (time to fall asleep), frequent awakenings, and sleepiness during the daytime. It has been observed that there is a direct correlation between poor sleep quality and morbidity, compromised cognitive function, and quality of life.
The most common factors influencing sleep disturbances are inadequate physical activity, poor sleep preparation, and unreasonable daytime napping. Even though sleep-related issues in old individuals put additional weight on medical services as well as an economic burden, doctors overlook this issue and consider it a part of normal aging. Different medications, like benzodiazepines and non-benzodiazepines, are available for the pharmacological treatment of sleep-related issues in the elderly. The drugs, however, have detrimental side effects, hitting the sensitive geriatric age group. The use of these compounds leads to physical as well as mental dependence, compromised psychomotor function, REM sleep disturbance, and recurrence of insomnia. [53][54][55] Hence, the adverse effects of sleep medications further compromise the quality of life of the elderly. Due to the aforementioned considerations, the relevance of yoga in improving the health and well-being of the elderly cannot be over-emphasized.
Health-beneficial effects of yoga, like improved sleep quality, reduction of blood pressure, and a better serum lipid profile, have been observed. Also, after regular yoga exercises for 6 months, shorter sleep latency, reduced night sleep disturbance, better quality of sleep, and reduced use of sleep medications were observed in geriatric people. [33] In another study, improvement in different aspects of sleep and a decrease in symptoms of depression were observed after yoga practice. Aged people who practiced yoga on a regular basis had a better quality of sleep with enhanced NREM and REM sleep, less lethargy during the daytime, reduced intake of sleep medications, and a subjective feeling of freshness in the morning. [27] Yoga practice helps in the amelioration of age-related degeneration by changing cardiometabolic risk factors, autonomic function, and BDNF in healthy males. [54] In another study, a relationship was observed between poor sleep quality and reduced oxygen saturation of <90%, which compromised physical performance in the form of decreased grip strength and walking speed. [55] Pranayama improves the strength of the respiratory muscles, which leads to better tissue perfusion and improved oxygen saturation. Since sleep apnea is associated with decreased oxygen saturation, improved oxygen saturation due to yoga may explain the reduced sleep disturbances in the yoga group. [38] Obstructive sleep apnea (snoring) increases the chances of sleep disturbances, which may be attributed to weakened upper airway muscles and narrowing of the respiratory passage. Yoga strengthens the upper airway muscles, resulting in fewer sleep disturbances. [56] Yoga encompasses asanas, pranayama, and meditation. Reduced muscular strength and muscle mass leading to a decreased exercise capacity are associated with aging. Day-to-day activities become difficult, leading to dependency on family and other support systems.
Yoga improves body flexibility, prevents decline in physical function, and improves the quality of life of elderly persons. [39] Comparable outcomes have been obtained in another study on yoga practitioners in the age group of 60 years or more. Better body flexibility and significant decreases in movement execution timings were noted in the Latin-American Development Age Group. Long-duration yoga practices, like stretching of the joints, lead to the slackening of the muscles and connective tissues encompassing the bones and joints. Regular exercise involving the joints in yoga practice prevents dystrophy of the ligaments and improves the functionality of the joints. Yoga practice is related to less sleep debt and better physiological function, and old individuals can live independently. A putative explanation for better sleep quality in individuals who perform yoga on a daily basis is that all yoga postures involve stretching and relaxing of muscles, which leads to enhanced physical and mental activity and thereby better sleep quality. [57] It also appears that the benefits of yoga are retained even after long-term yoga practice.
Some mechanisms of action of yoga to improve sleep quality have been proposed. Studies on healthy volunteers have shown an increase in vagal tone, a decrease in sympathetic discharge with a reduced postural heart rate response, and decreased catecholamine levels in plasma after yoga practice. [40] Relaxation, with reduced responsiveness to extraneous signals, may be a factor in reduced sleep disturbances with yoga practice. Some reviewers have pointed out that caution needs to be exercised in the interpretation of findings. In a review on the effect of meditation on age-related cognitive decline, it was observed that although most studies have a high risk of bias and small sample sizes, it could be interpreted that meditation could ameliorate cognitive decline in the elderly. [58] Aging is associated with structural and functional changes in the central nervous system, especially in regions associated with cognitive functions such as executive functions, attention, and working memory, including a reduction in the volume of cortical areas. [59,60] Yoga improves cognitive function; several studies have shown better cognition scores after yoga. Yoga improves mood and relieves stress. Sleep disturbance impairs psychomotor alertness, which further reduces cognitive function linked with brain regions like the prefrontal cortex. As age advances, blood flow to the brain decreases in a time-dependent manner. Habitual practice of yoga enhances parasympathetic activity with a simultaneous decrement in autonomic over-activity. [41] This reduces the decline in oxygen consumption and metabolic rate of prefrontal cortex cells, thereby ameliorating neuronal loss and preserving cognitive function. Meditation practice increases gray matter volume and glucose metabolism in the prefrontal and cingulate cortices, insula, and temporoparietal junction, suggesting that long-term meditation practice reduces age-related cognitive decline.
[42] Elderly women who were long-term yoga practitioners had a greater prefrontal cortical thickness than age-matched controls. [43] The default mode network (DMN) has been an area of interest in meditation. In elderly practitioners, better anterior-posterior brain functional connectivity in the DMN may also be present in long-term meditators. [44] Overall, regular yoga practice by the elderly improves age-associated cognitive decline through structural and functional changes in key brain regions.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2021-05-17T13:39:49.571Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "dbb8712f4a1fb29962c5dce0097cebfe4fc6d27e",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijoy.ijoy_110_20",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cf063b6fa7c94c53af270b4b4b73a7f9f2111ca9",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254814681 | pes2o/s2orc | v3-fos-license | The silver learning curve for photovoltaics and projected silver demand for net‐zero emissions by 2050
The clean energy transition could see the cumulative installed capacity of photovoltaics increase from 1 TW before the end of 2022 to 15–60 TW by 2050, creating a significant silver demand risk. Here, we present a silver learning curve for the photovoltaic industry with a learning rate of 20.3 ± 0.8%. Maintaining business as usual with a dominance of p‐type technology could require over 20% of the current annual silver supply by 2027 and a cumulative 450–520 kt of silver until 2050, approximately 85–98% of the current global silver reserves. A rapid transition to higher efficiency tunnel oxide passivated contact and silicon heterojunction cell technologies in their present silver‐intensive forms could increase and accelerate silver demand. As we approach annual production capacities of over 1 TW by 2030, addressing the silver issue requires increased efforts in research and development to increase the silver learning rate by 30%, with existing silver‐lean and silver‐free metallisation approaches including copper plating and screen‐printing of aluminium and copper.
| INTRODUCTION
In 2022, the world reached a cumulative photovoltaic (PV) installed capacity of 1 TW, 1 accounting for >4% of worldwide electricity demand. 2,3 However, techno-economic roadmaps [4][5][6] predict that to fulfil the Paris Climate Agreement and mitigate climate change, between 15 TW 6 and >60 TW 2,7 need to be installed by 2050. Annual growth rates for PV installations of 23-30% are required 2,8,9 to reach at least 15 TW by 2050, which the industry has consistently demonstrated for decades. 2 Details of the projected scenarios can be found in Figure S1 and Table S1.
The global energy transition shifts material requirements from fossil fuels to different materials such as metals. As the land area requirement of coal mining is large, the energy transition can greatly reduce this requirement, 10,11 despite more intensive metal requirements; copper demand, 12,13 for example, could increase sixfold by 2040. 14 While environmental impacts go beyond the directly affected area, the mining area correlates with overall impacts. 15,16 Present mainstream PV does not use rare-earth elements, but abundant metals such as aluminium and copper, which are required for multiple clean energy technologies. 17,18 The rapid growth of the PV market is not an option but a necessary step to mitigate climate change. However, such growth can lead to new challenges regarding material consumption. Concerns about surging material demands from PV production were raised for terawatt-level deployment of PV in 2008. 19 (Brett Hallam and Moonyong Kim contributed equally to the manuscript.) Based on the current rate of PV production, a number of studies and reports have highlighted the concern of increasing material demand, which will greatly impact supply chains and the long-term sustainability of PV manufacturing. [20][21][22][23][24][25] Currently, silver is the most critical metal, posing price and supply risks as PV production expands. 9,19 In 2020, PV used approximately 12.7% of annual silver production, 18,26 despite the fact that only approximately 3.2-8 g of silver is needed per m² of PV module. Many previous studies have highlighted that the current estimated silver consumption is too high to allow sustainable terawatt-scale production. [27][28][29] One particular study by Goldschmidt et al. highlighted material consumption learning rates (LRs) for PV, including silver.
30 However, it was not clear what data were used to establish the LR of approximately 20%, and the study did not consider the impact of cell technology on silver consumption or the impact of a transition of the industry towards high-efficiency cell technologies on silver consumption by the PV industry.
While the potential for recovering silver from PV modules is significant, the current low collection and recovery rates, coupled with the 20-30% per annum growth rate of the PV industry and 25-year module lifetime, mean that recycled silver from PV modules can contribute only marginally to the silver supply for PV for quite some time.
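The limited near-term contribution of recycling can be made concrete with a simple growth-rate argument. In the sketch below, the 20-30% annual growth rates and 25-year module lifetime come from the text, while the perfect-collection-and-recovery assumption is ours and is deliberately optimistic.

```python
# Illustrative sketch: modules reaching end-of-life today were produced one
# module lifetime (~25 years) ago. With annual production growing at rate g,
# that vintage is (1 + g)**lifetime times smaller than current production,
# which bounds the share of today's silver demand that recycling could cover
# even with 100% collection and recovery (an optimistic assumption).
LIFETIME_YEARS = 25

for g in (0.20, 0.25, 0.30):
    max_share = 1.0 / (1.0 + g) ** LIFETIME_YEARS
    print(f"growth {g:.0%}: recycled silver covers at most "
          f"{max_share:.2%} of current annual demand")
```

Under these assumptions the recyclable vintage is roughly two orders of magnitude smaller than current production, consistent with the "only marginal" contribution stated above.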
All parts of the PV systems and modules can be dismantled mechanically, 31,32 recovered chemically 33,34 or via electrowinning. 35 The PV industry has room to facilitate recycling and optimise the design for reuse. Currently, there is a very limited recycling industry for PV modules because the number of end-of-life modules is still too small. 19 This is a general problem with technologies that are scaled up, such as batteries: Recycling technology lags behind until end-of-life volumes become sufficient to ensure a profitable business. We should also bear in mind that even some established technologies have low recycling rates, such as plastics, which makes a policy directed towards the circular economy urgently necessary.
The PV community is working on addressing the silver issue by substitution with more abundant metals such as copper and aluminium. Copper has a similar conductivity to silver and is therefore a practical substitute, but there are still processing and reliability challenges to be solved before mass production. 2 Aluminium has a lower conductivity and presents challenges in forming a contact with n-type silicon without shunting the device, due to the alloying of aluminium and silicon during the high-temperature metallisation firing process, which forms a local p-type Al-doped silicon region. These approaches are nevertheless feasible and a question of production costs and complexity.
In this work, we present a silver learning curve for PV based on the current industry's global silver consumption and module production, to project silver demand under different growth scenarios towards 2050. We consider the impact of cell technology and projected technology market shares on silver requirements by transitioning from p-type technology (e.g., passivated emitter and rear contact [PERC]) to n-type technology (e.g., tunnel oxide passivated contact [TOPCon] and silicon heterojunction [SHJ]) and the subsequent impact on global silver supply and reserves. The results show that the current rate of reduction in silver consumption is not sufficient to avoid increasing silver demand from the PV industry and that the transition to high-efficiency technologies including TOPCon and SHJ could greatly increase silver demand, posing price and supply risks.
| The silver learning curve for global PV deployment
As a whole, the PV industry has demonstrated a remarkable reduction in silver consumption over the past 10 years, from 51.8-65.1 mg/W in 2010 to approximately 19.5 mg/W in 2020 (see Figure 1A). A key driver for this reduction was manufacturing cost. Silver accounts for approximately 60% of the non-wafer cost 2 and 5-10% of the module manufacturing cost. For the emerging TOPCon and SHJ cell technologies (see Table S2), the cost of silver metallisation is even higher. Predictions of technology-dependent silver consumption per cell (CPC) are given annually in the International Technology Roadmap for Photovoltaic (ITRPV), with values expected to halve over the next decade 2 through gradual improvements in printing technology (see Table S2).

FIGURE 1 (A) Silver learning curve for the photovoltaic industry, with silver consumption based on globally reported silver use by the PV industry and global installed PV capacity, also highlighting key global PV deployment scenarios. (B) Historical and projected silver consumption as a function of year under different scenarios, along with predicted values for different PV technologies from the 2021 ITRPV (IRV21). 2 For full details of scenarios for (A), see Table S1.
However, a key limitation of such projections is that they fail to fully account for the learning that comes from manufacturing an immense cumulative number of solar cells. 30 In 2021, we estimate that approximately 30 billion solar cells were fabricated globally. Figure 1A shows the silver learning curve for global PV deployment, with the silver consumption (mg/W) reducing by 20.3 ± 0.8% for every doubling of cumulative installed capacity, consistent with the estimations in reference 30, although with a substantially higher pre-factor (see Table S5). This value is based on the last 10 years of data since 2010 (LR Recent ). When we consider all historical values, the long-term silver LR for the PV industry (LR All ) is estimated to be 18.7 ± 1.3%, which is slightly lower than the recent value. If the industry continues along the LR Recent learning curve in Figure 1A, under the broad electrification scenario the estimated silver consumption in 2050 would be in the range of 5.3 ± 0.5 mg/W. For PERC, if silver is used only for the metal fingers, this translates to a relative power loss of <0.36% rel with current pastes; if silver is also required for busbars/tabbing regions, this equates to an excessive power loss of 2.76% rel . For TOPCon and SHJ, this would undesirably lead to even greater power losses (12.9-24.9% rel ). Realising this lower limit without power losses will be challenging and will require significant changes in cell metallisation and/or interconnection methods.
From this LR, we can estimate the silver consumption as a function of year under different industry growth scenarios (see Figure 1B).
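As a minimal numerical sketch of this extrapolation, the learning-curve form CPP = A × P^b with b = log2(1 − LR) can be anchored to the 2020 values and projected forward. The anchor (19.5 mg/W at ~0.76 TW cumulative) and the 63.4 TW end-point are taken from the text; the resulting pre-factor is a back-of-envelope assumption, not the paper's fitted value, so the 2050 projection differs slightly from the reported 5.3 ± 0.5 mg/W.

```python
import math

def cpp(p_total_tw, a_prefactor, lr):
    """Silver consumption per watt (mg/W) at cumulative capacity
    p_total_tw (TW): CPP = A * P^b with b = log2(1 - LR), so CPP
    drops by a factor (1 - LR) each time P doubles."""
    b = math.log2(1.0 - lr)
    return a_prefactor * p_total_tw ** b

lr = 0.203  # recent learning rate from the text
# Anchor the curve so CPP = 19.5 mg/W at ~0.76 TW cumulative (assumed anchor):
a = 19.5 / (0.76 ** math.log2(1 - lr))  # pre-factor at 1 TW

print(round(cpp(0.76, a, lr), 1))   # recovers the 2020 anchor, 19.5
print(round(cpp(63.4, a, lr), 1))   # ~4.6 mg/W under broad electrification
```

The halving property is easy to check: doubling cumulative capacity multiplies CPP by exactly (1 − LR).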
| Technology-dependent silver consumption for solar cells
Although increased solar cell efficiencies present an immediate opportunity for lower silver consumption, silver consumption is primarily driven by the solar cell technology choice ( Figure 1B). For the industry-dominating p-type PERC cell (see Figure S4b), based on the first silicon solar cell to reach 25% efficiency, 36 silver is only required for the front n-type contact, while the rear p-type contact is formed using low-cost and abundant aluminium, with small amounts of silver for interconnection tabs, 37 providing an estimated module-level silver consumption of 14.4-15.7 mg/W in 2020 (see Table 1).
Emerging next-generation high-efficiency n-type TOPCon and SHJ solar cell technologies, with record efficiencies of 25.5% 41 and 26.3% 42 for two-sided contact devices, respectively, have a substantially higher requirement for silver. The current industrial implementation of TOPCon uses silver for the rear n-type contact as well as silver/Al for the front p-type contact to balance between optical and electrical resistive losses, which results in a silver consumption of 20.4-26.0 mg/W, 30-80% higher than that of PERC. SHJ solar cells use a low-temperature silver paste for both contacts with silver consumption reported in the range of 30.3-37.4 mg/W, more than double that of PERC (see Figure 2).
| Projected future cumulative silver demand in the PV industry
Due to the higher potential efficiencies of TOPCon and SHJ compared with PERC (see Table S2), these n-type technologies are expected to gain market share (see Figure S4a). It is also important to understand the impact of PV's silver consumption on global silver reserves. Figure 3 shows the cumulative silver demand.
| Annual silver demand projections for the PV industry
The future annual silver demand required by the PV industry will depend greatly on the prior cumulative installed PV capacity (through accumulated learning), the technology choice and the annual PV capacity installed (see Figure 4). As shown in Figure 4A, only the most conservative scenario in the ITRPV (IRV21 Low), with approximately 160 GW/year in 2030, is less than the capacity installed in 2021. 44 The long lifetime of PV modules will also hinder material availability from recycling. To keep the PV industry's silver demand below 10 kt/year (approximately 43% of the annual silver supply), the silver LR must accelerate substantially to approximately 30%, and even higher, 30-40%, for a shift towards silver-intensive n-type technologies (see Figure 4B).
| DISCUSSION
The PV industry's average silver consumption in 2020 of 19.5 mg/W is approximately 20-27% higher than the estimate for PERC, despite PERC having more than 80% market share. 2 A consumption target of approximately 5 mg/W will not be feasible if silver busbars and tabbing regions are also required. 37 Even for PERC, a 5-mg/W goal may impose reliability challenges due to finger breakages with the small cross-sectional areas required. 37,47 Modified interconnection approaches to reduce silver consumption could also introduce new material challenges, such as bismuth. 37 To reduce the impact of global PV deployment on silver reserves, we must increase R&D investment in innovation for silver-lean PV technologies. Currently, a key driver for innovation in the PV sector is manufacturing cost. For PV manufacturers, increasing solar cell efficiency is one generic method to reduce manufacturing cost in terms of $/W, with follow-on benefits for the levelised cost of electricity through financial savings in balance-of-system components.
However, manufacturing cost and efficiency also interact with material consumption. Increasing efficiency reduces material consumption per unit of power (CPP, mg/W), along with the associated social and environmental impacts across the value chain of PV deployment, even for abundant materials such as copper, aluminium and steel.
The average yearly silver price has increased by 57-60% since 2019, 48 which can increase the cost of PV. To mitigate this risk, manufacturers can focus on adopting technologies that allow reduced silver consumption. The transition from the previously industry-dominating p-type aluminium back-surface field (Al-BSF) to PERC is a great example, increasing cell efficiencies from 20% to over 23% without requiring additional silver (see Figure S3). The industry must accelerate the LR substantially to greatly reduce silver demand on the path to net zero by 2050, and innovate to provide feasible routes to silver reduction. One area of innovation is to improve silver utilisation by taking into account spatially varying resistive losses in devices. 37 To avoid any decrease in the silver LR of the PV industry as a whole, any major deployment of silver-intensive screen-printed n-type TOPCon and SHJ technologies must be balanced by a substantial deployment of silver-free or silver-lean TOPCon and SHJ solar cells. For screen-printed TOPCon cells, silver consumption could be greatly reduced by replacing the silver/Al p-type contact with a pure Al contact, similar to that of PERC. For SHJ solar cells, the existing low-temperature silver paste has a lower conductivity than the high-temperature pastes used for PERC and TOPCon, and therefore requires more silver to achieve a similar resistance. Innovation for these solar cells could focus on improving the conductivity of low-temperature silver pastes. Alternatively, the use of copper pastes could greatly reduce silver consumption for both the front and rear contacts. However, despite pure copper exhibiting a similar bulk resistivity to pure silver, the conductivity of copper screen-printing pastes is substantially lower than that of silver pastes. One emerging approach is the use of silver-coated copper pastes, allowing a reduction in silver consumption of 30-50%. 49 A silver-free route of manufacturing could be based on copper plating.
It could be argued that increasing the material supply or the reserve can overcome material sustainability issues. However, although reserve-to-production ratios for silver have been roughly constant since the 1950s, 20 typically the most economical resources to develop have already been mined, and newly opened resources are more energy intensive to mine (e.g., they are deeper or the ore grade is lower). 20,57,58 This will lead 58 to a higher cost and a higher embodied energy (carbon footprint) of metal production, with a negative impact on deploying low-cost and sustainable PV. 59 Therefore, reducing material consumption is important not only to avoid shortages and cost increases but also to minimise the increase in embodied energy per unit of material. While it can be debated that clean and renewable energy could compensate for the higher embodied energy of extraction, that energy should be used to decarbonise electricity, not to offset an undesirable increase in embodied energy.
In the longer term, we must ensure that the recycling of PV panels recovers silver. With appropriate levels of recycling, and a stable long-term capacity of PV production, the embedded silver in solar panels may sustain future long-term PV production beyond 2050.
| Historical and projected silver demand in existing reports
The annual global silver consumption of the PV industry was obtained from the Silver Institute's 2020 report on the role of silver (see Table S1).
Estimates of the annual installed PV capacity were also used from the IEA PVPS.
A number of different scenarios for future PV deployment were considered in this work, with cumulative installed capacities ranging from 2.02 to 63.4 TW by 2050 (see Table S1).
Where data for scenarios are not presented for each year, such as for the four scenarios listed in the 2021 ITRPV (with data provided in 5-year increments), a linear interpolation of the annual PV installed capacity is made between the provided data points.
A limited sensitivity analysis of the impact of growth rate of the PV industry on annual silver demand is performed by changing the growth rate of hypothetical scenarios with a fixed cumulative installed capacity by 2050 (see Figure S6).
| Conversion from yearly energy yield to a nominal power output of PVs
For scenarios listing either the nominal PV installed capacity additions or the total PV energy yield, the conversion between nominal power output and annual energy yield was made assuming 1.2 PWh/TWp. For other scenarios, the published value is used.
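The conversion is a simple ratio; a sketch using the 1.2 PWh/TWp assumption stated above (the 60 PWh/yr example scenario is hypothetical):

```python
def twp_from_energy(annual_pwh, yield_pwh_per_twp=1.2):
    """Convert an annual PV energy-yield target (PWh/yr) into the
    nominal installed capacity (TWp) that would produce it."""
    return annual_pwh / yield_pwh_per_twp

# A hypothetical scenario demanding 60 PWh/yr of PV electricity implies:
print(twp_from_energy(60))  # 50.0 TWp of nominal capacity
```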
| Silver CPC, module and unit of power
From the annual PV capacity additions ( Figure S1a) and the global consumption of silver by the PV industry, 26,44 the silver CPP in terms of mg/W was estimated using Equation (1), representing system-level silver consumption data for the PV industry:

CPP = ASD / APC, (1)

where ASD is the annual silver demand and APC is the annual PV capacity installed.
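Equation (1) is a straightforward ratio, and the units work out conveniently: tonnes per gigawatt is numerically equal to milligrams per watt. The figures below are illustrative assumptions chosen to land near the reported ~19.5 mg/W, not the actual Silver Institute / IEA values:

```python
def cpp_mg_per_w(asd_tonnes, apc_gw):
    """Equation (1): system-level silver consumption per watt.
    Tonnes per GW equals mg per W numerically
    (1 t = 1e9 mg, 1 GW = 1e9 W), so no scale factor is needed."""
    return asd_tonnes / apc_gw

# Assumed 2020-like figures: ~2800 t of silver against ~144 GW installed.
print(round(cpp_mg_per_w(2800, 144), 1))  # ~19.4 mg/W
```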
Technology-dependent silver consumption was obtained from the silver CPC using Equation (2):

CPP_cell = CPC / P_Cell, (2)

where CPC is the silver consumption per cell and P_Cell is the power of a PV cell. At the module level, Equation (3) is used:

CPP_module = (n_Cell × CPC) / P_Module, (3)

where n_Cell is the number of equivalent full cells, CPC is the consumption of silver per cell and P_Module is the power of a PV module.
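Equations (2) and (3) convert per-cell silver usage into per-watt values at the cell and module level. A sketch with assumed, PERC-like illustrative numbers (90 mg of silver on a 6 W cell; 120 equivalent full cells in a 700 W module):

```python
def cpp_cell(cpc_mg, p_cell_w):
    """Equation (2): cell-level silver consumption, mg/W."""
    return cpc_mg / p_cell_w

def cpp_module(cpc_mg, n_cells, p_module_w):
    """Equation (3): module-level consumption from per-cell usage."""
    return n_cells * cpc_mg / p_module_w

print(round(cpp_cell(90, 6), 1))           # 15.0 mg/W at the cell
print(round(cpp_module(90, 120, 700), 1))  # ~15.4 mg/W at the module
```

The module-level value lands in the 14.4-15.7 mg/W PERC range quoted in the text, which is why these placeholder numbers were chosen.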
| Silver learning curve for the global PV industry
The equation for the LR of silver consumption within the PV industry is given by Equation (4):

CPP = A × (P_Total)^b, with LR_Ag = (1 − 2^b) × 100%, (4)

where A is a pre-factor for the value of the silver consumption at 1 TW of cumulative production; P_Total is the cumulative installed capacity in terawatts; and LR_Ag is the LR in per cent, representing the percentage reduction in silver CPP every time P_Total doubles.
Curve fitting with Equation (4) was also applied per technology, where i is the cell technology (PERC, TOPCon or SHJ) and CPP i,2020 is the silver CPP of each technology, taken as the averaged value from Table 1 for that technology.
In this work, 'Industry' represents the PV industry in its entirety.
The Industry silver consumption is calculated using Equation (1) with the values of the reported annual silver demand 26,44 and the annual PV production. Values for PERC, TOPCon and SHJ represent reported values for the individual technologies in the ITRPV 2 and other sources. [38][39][40] 'N-type' represents cases with a transition of the PV industry to domination by n-type technology (see Figure S4a). This assumes that two thirds of the n-type technology market share is TOPCon and one third is SHJ, consistent with forecasts in PV Magazine. 40 The overall n-type pre-factor is A n-type = A PERC × p-type% + n-type% × (2/3 × A TOPCon + 1/3 × A SHJ), where p-type% and n-type% are the PV market shares from Figure S4a.
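The market-share-weighted pre-factor above can be sketched numerically; the pre-factor values (mg/W at 1 TW) and the 60/40 share split below are assumed for illustration only:

```python
def blended_prefactor(a_perc, a_topcon, a_shj, p_share, n_share):
    """Share-weighted pre-factor for a mixed p-/n-type industry,
    assuming the n-type share splits 2/3 TOPCon and 1/3 SHJ."""
    assert abs(p_share + n_share - 1.0) < 1e-9
    return a_perc * p_share + n_share * (2/3 * a_topcon + 1/3 * a_shj)

# Assumed pre-factors for PERC/TOPCon/SHJ and a 40% n-type market share:
print(round(blended_prefactor(15.0, 23.0, 34.0, 0.6, 0.4), 2))  # 19.67
```

With a 100% p-type share the blend collapses to the PERC pre-factor, as expected.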
Curve fitting was used with Equation (4) in Origin Pro to determine the pre-factor and LR for the historical data for the global silver usage by the PV industry and cumulative installed capacity by minimising the root-mean-square error.
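The fit itself can be sketched as a least-squares regression in log-log space (the paper minimises the root-mean-square error in Origin Pro; the plain log-space regression below is a simplified stand-in). With synthetic data generated from a known learning rate, the fit recovers it exactly:

```python
import math

def fit_learning_rate(p_totals_tw, cpps):
    """Least-squares fit of CPP = A * P^b in log-log space;
    returns (A, LR) with LR = 1 - 2**b."""
    xs = [math.log(p) for p in p_totals_tw]
    ys = [math.log(c) for c in cpps]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, 1 - 2 ** b

# Synthetic data from a known 20% LR and pre-factor 18 mg/W at 1 TW:
lr_true, a_true = 0.20, 18.0
b_true = math.log2(1 - lr_true)
ps = [0.1, 0.2, 0.4, 0.8]
cs = [a_true * p ** b_true for p in ps]
a_fit, lr_fit = fit_learning_rate(ps, cs)
print(round(lr_fit, 3))  # 0.2
```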
To calculate the cumulative silver demand (Ag Total ), which accounts for production in each year n and the corresponding annual demand, Equation (6) is used:

Ag Total = Σ_n ASD_n. (6)
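Equation (6) is a running sum of the annual demands over the years considered; a sketch with assumed annual demands in kilotonnes:

```python
def cumulative_silver(annual_demands_kt):
    """Equation (6): running cumulative silver demand (kt) from a
    sequence of annual demands ordered by year."""
    total, out = 0.0, []
    for asd in annual_demands_kt:
        total += asd
        out.append(total)
    return out

# Assumed annual demands for five consecutive years:
print([round(v, 1) for v in cumulative_silver([3.0, 3.5, 4.1, 4.8, 5.6])])
# [3.0, 6.5, 10.6, 15.4, 21.0]
```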
| Technology-dependent silver consumption learning curves
For estimating technology-dependent learning curves, the premium in silver consumption for the TOPCon and SHJ solar cell technologies over PERC was calculated from the silver CPC listed in historical ITRPV reports with the first data point in each respective report, along with future projections beyond this year. Similarly, the stabilised efficiency reported in the respective ITRPV reports is used to estimate the power at the cell level by taking into account the device area. Stabilised module efficiencies and module powers for the respective technologies are also obtained from the respective ITRPV reports.
From this, silver consumption is estimated in terms of mg/W. Averages and standard deviations for the premiums of silver consumption of TOPCon and SHJ over PERC are then calculated, accounting for all historical values and predictions available at the module level. Table S2 shows the historical and projected silver consumption at the cell and module level. For Figure 4, the premiums in silver consumption were applied on top of the global silver consumption of the PV industry, given the historical dominance of p-type technologies.
"year": 2023,
"sha1": "d8e67c4696856f3a77da21359992996f39fae400",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/pip.3661",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "60c1822abbdb54e9c0ea90d3d5f601fcc8a3464d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Skeleton-Based Action Recognition With Low-Level Features of Adaptive Graph Convolutional Networks
Skeleton-based action recognition is a typical classification problem which plays a significant role in human-computer interaction and video understanding. Since the human skeleton has a natural graph structure, methods based on graph convolutional networks (GCN) are widely applied to skeleton-based action recognition. Previous studies mainly focus on structural links in GCN to generate high-level features of the human skeleton. However, low-level features are also important in many applications; for instance, low-level edge gradient and color information are important for image classification. This paper introduces a multi-branch structure to capture different low-level features of the human skeleton. We combine both high-level and low-level features to recognize human actions. We validate our method for action recognition on two skeleton datasets, NTU-RGB+D and Kinetics. Experimental results indicate that the proposed method achieves considerable improvement over some state-of-the-art methods.
I. INTRODUCTION
Human action recognition is a typical classification problem which plays a key role in computer vision. Action is an important dimension for human beings to express their feelings and intentions, just like language and facial expressions. Many applications based on action recognition have received extensive study, such as video surveillance [1], human-computer interaction [2], game control [3], virtual reality [4] and video retrieval [5]. These applications have developed rapidly over the past twenty years. Data modalities for action recognition have expanded from mainly RGB-based data [6] to a variety of modalities, including skeleton [7], point cloud [8], radar [9], WiFi [10], etc. Recently, methods based on skeleton data have attracted increasing attention due to the development of various accurate and affordable sensors, such as Leap Motion and Kinect [11]. Nowadays, skeleton data can be captured more easily than ever before. A human skeleton is a topological representation of the human body giving the locations of key joints in 3-dimensional space. Compared with other modalities, the human skeleton retains most human action information with less data. For example, an image may have a complex background, whereas a skeleton only contains joint information. Therefore, skeleton-based approaches are less computationally demanding and more robust to variations in viewpoint, motion speed, body scale, etc. [12].
In the early days, RGB videos, which contain the temporal dynamics of human motions, were easier to obtain than other modalities, so most human action recognition works were RGB-based. Just as in many other tasks, human action features from RGB videos have gradually changed from hand-crafted features [13] to deep features [14]. Even with deep learning methods, human action recognition remains challenging, because RGB videos are sensitive to viewpoint variations, illumination conditions and background. Besides, action data have a much larger file size in RGB video format than in skeleton format, which leads to a higher computational cost. Naturally, researchers turned to skeleton-based methods. For human action recognition, hand-crafted spatial and temporal features [15] of skeleton data were applied first. At present, deep learning methods have become the mainstream in this field because of their powerful feature learning ability. Skeleton data can be seen as a sequence of static skeleton frames. RNNs are suitable for learning the dynamic dependencies of sequential data, so various methods based on RNN [7], [16] have been applied to model the temporal information of skeleton data. Some researchers use CNNs to model the spatio-temporal information of skeletons; the main idea is to treat a 3D skeleton sequence as a sequence of pseudo-images [17]. Neither CNNs nor RNNs can effectively extract the complex spatio-temporal information and correlations between joints in a skeleton, due to the characteristics of these networks.
As mentioned, the human skeleton is a natural topological graph structure comprising key body joints. It is difficult to use proven models like CNN or RNN directly on a graph structure. GCN can be seen as a generalized version of CNN on an arbitrary graph structure. Therefore, some researchers try to utilize GCN to model skeleton data. ST-GCN [18], as shown in Fig.1(a), is the first work which successfully applies GCN to model dynamic graphs over large-scale human skeleton sequences. ST-GCN uses the natural connections of joints in a human body to construct a graph structure. Meanwhile, it adds temporal edges to connect the same joint across consecutive time steps. Multiple layers of GCN and temporal convolutional networks (TCN) are constructed thereon: spatial features are extracted by the GCN layers, temporal features are extracted by the TCN layers, and the two kinds of layers are stacked alternately. The main problem of ST-GCN is that the skeleton graph is predefined and the adjacency matrix represents only the physical structure of the human body. Some indirectly connected joints have semantic relationships in actions like ''walking'' and ''clapping''. ST-GCN cannot capture action features which require long-range dependencies between joints.

FIGURE 1. Illustration of deep learning frameworks for skeleton-based action recognition. From top to bottom: (a) GCN is used to capture connections within one skeleton in one frame and TCN is used to capture connections of the same joint across different frames. The alternate placement of GCN and TCN blocks has been used by subsequent methods. (b) AS-GCN constructs A-links and S-links to improve the adaptability of the adjacency matrix. A-links are generated by an encoder-decoder structure to get the predicted action. (c) 2s-AGCN constructs a parameterized adjacency matrix (C k ) to capture links between indirectly connected joints dynamically and generates a bone stream (the line skeleton) to capture the second-order information of a skeleton.
Establishing connections between indirectly connected joints is the main direction of improvement. Li et al. [19] propose an Actional-Structural GCN (AS-GCN), as shown in Fig.1(b), to connect long-distance joints using two types of links, named actional links and structural links. In AS-GCN, actional links generated by an encoder-decoder structure are used to capture latent dependencies between arbitrary joints, while structural links generated by a high-order adjacency matrix represent high-order relationships. Both actional links and structural links are essentially fixed during classification. Based on [19], Li et al. propose Sym-GNN [42] to capture body-part links. The main idea of Sym-GNN and AS-GCN is to determine the adjacency matrix with segmentation, but this kind of segmentation is still based on the physical structure of the body. Almost at the same time, an adaptive graph convolutional network, namely 2s-AGCN, was proposed in [20]. 2s-AGCN takes advantage of a feature integration strategy: a bone stream is generated from the joints to improve recognition performance, as shown in Fig.1(c). Besides, an adaptive adjacency matrix is designed to capture links between indirectly connected joints dynamically. Based on [20], Shi et al. [21] add an attention mechanism to further improve recognition performance. After that, Shi et al. [43] propose DSTA-Net, which looks closely into the spatial-temporal structure of human action sequences and uses the idea of the transformer [44] to decouple spatial-temporal features. MS-G3D [41] utilizes a high-order adjacency matrix to enhance adaptability and proposes a unified spatial-temporal graph convolution to capture cross-spacetime correlations; however, it has a high computational complexity because of the high-order adjacency matrix. Cheng et al. [45] propose a shift graph convolutional network (Shift-GCN) based on Shift CNN [46] to reduce computational complexity.
As seen in Fig.1, most deep learning frameworks are linear stacks of GCN and TCN blocks, and final features are extracted from the last layer of the networks. Based on graph theory, information propagated through a multi-layer graph convolutional network finally reaches a convergent state over the joints of a human body. Features in the last layer are high-level; low-level features are gradually integrated into high-level features during propagation. We try to utilize low-level features for two reasons. The first reason is that experience with CNNs shows that low-level information is also critical for classification [22]. In [18]-[21], all frameworks apply a local residual structure to improve performance. In some cases, global low-level features are also helpful for classification; for example, in [23], different levels of features represent different areas of a vehicle. Another reason we try to use low-level features is to solve the degradation problem. As is well known, as network depth increases, accuracy degrades rapidly in CNNs. In [22], He et al. introduce a residual structure to solve this problem; its core idea is to utilize low-level features directly. Based on both considerations, we propose a novel framework taking advantage of global low-level features, built on [20]. Fig.2 presents the pipeline of our framework. We directly capture features from different low-level layers and concatenate them together before the last layer. Besides, we apply a preprocessing strategy [17] to improve action recognition performance.

FIGURE 2. The pipeline of the proposed framework. In each branch, we apply a GCN-TCN block to capture different low-level features directly. The input dimension of each branch is different, and the output dimension is the same. Before classification, we concatenate the different low-level features and the backbone features together.
The core idea of this strategy is to determine whether a frame is valid by calculating the variance of its joints, so as to delete invalid frames. The details of this strategy are introduced in the experiments section. To verify the effectiveness of the proposed model, namely the low-level adaptive graph convolutional network (LAGCN), we conduct extensive experiments on two large-scale datasets: NTU-RGB+D [24] and Kinetics-Skeleton [25]. The experiments demonstrate that LAGCN achieves state-of-the-art performance.
The main contributions of our work lie in three aspects: (1) A human action recognition framework with a multi-branch structure is proposed to learn the low-level features of skeleton data. (2) A better preprocessing strategy is used to improve action recognition performance. (3) On two large-scale datasets for skeleton-based action recognition, the proposed LAGCN exceeds the state-of-the-art.
II. RELATED WORK
A. NEURAL NETWORKS ON GRAPHS
Classical neural networks have achieved great success in processing structured data; for example, images can be seen as a grid and texts can be embedded into fixed-length vectors. Recently, researchers have begun to pay attention to unstructured data, and graph-based methods are a hot topic in deep learning research. GNNs [26] were the first work to combine graphs and recurrent neural networks for graph representation learning. Scarselli et al. [26] prove mathematically that graph representation learning using a recurrent neural network and the Almeida-Pineda algorithm [27] is convergent. After that, GGNN [28], based on GRU, was proposed. Although GGNN cannot guarantee convergence from an arbitrary initial state, it is more flexible and practical in applications. Subsequently, spectral and spatial GCNs appeared. Spectral GCN [29], based on spectral graph theory, transforms graph signals with the Laplacian in the graph spectral domain. Because of its simplicity as a mean neighborhood aggregator, many subsequent spatial GCN frameworks have been developed from it. Spatial GCNs [15], [30] apply a convolution operation on each node and its neighbors to compute a new feature vector directly. References [18]-[21] and this work all adopt the layer-wise update rule in [29].
B. SKELETON-BASED ACTION RECOGNITION
Feature extraction methods for skeleton-based action recognition have gradually changed from early hand-crafted features [13], [15] to deep learning, as in many other applications. Although traditional manual methods have good interpretability, their performance is hardly satisfactory. With large amounts of data becoming easier to acquire, data-driven methods based on deep learning have become the mainstream. Since skeleton videos can be seen as sequences of frames, methods based on RNN [7], [30], [31] were introduced into action recognition; most of these methods try to convert the human action video classification problem into a sequence classification problem. Methods based on CNN [32]-[34] treat the human action classification problem as a 2D or 3D pseudo-image classification problem. None of the methods mentioned above take into account the graph characteristics of the human skeleton. Since GCN can better extract neighborhood features between joints, GCN-based methods have become a hot research direction [18]-[21].
A. GRAPH CONVOLUTIONAL NETWORKS
It is natural, from an intuitive point of view, to use graph convolutional networks on skeleton data. The most essential reason is that GCN can extract irregular neighborhood features; the number of neighboring vertexes is variable, as shown in Fig.3. We can use the Laplacian to characterize the degree of difference between vertexes with

(Lf)(v_i) = Σ_{j∈N_i} (f(v_i) − f(v_j)), (1)

where N_i is the neighborhood of vertex v_i. If we apply (1) to all vertexes in the graph, then we have

Lf = (D − A)f, (2)

where D = diag(d_1, …, d_N) with d_i = Σ_{j∈N_i} A_ij the degree of vertex v_i, and A is the adjacency matrix. Then, we have

f_out = H(L) f_in, (3)

where H is called the filter function. Kipf and Welling [29] replace H with the first-order approximation

H = θ(I + D^{−1/2} A D^{−1/2}). (4)

In practice, θ is set to 1 and L is replaced by L̃ = D̃^{−1/2} Ã D̃^{−1/2}, where Ã = A + I and D̃ is the degree matrix of Ã.

B. ST-GCN

There are two main contributions of ST-GCN [18]. The first is that they utilize OpenPose [37] to process action videos to get skeleton data. Each skeleton video consists of a 4-dimensional data block, the 4 dimensions being the number of joint feature dimensions, frames, joints and humans. For example, a (3,150,18,1) block means that there is one subject with 18 3D joint coordinates over 150 frames. By observing this data form, we can see that this is a normalized matrix form of data. Unlike the traditional graph convolution problem, the graph structure of the human skeleton is fixed. The process of ST-GCN is very similar to CNN, as shown in Fig.1(a):

f_out(v_ti) = Σ_{v_tj∈B(v_ti)} (1/Z_ti(v_tj)) f_in(v_tj) · w(l_ti(v_tj)), (5)

where f_in is a feature map function on the vertex v_tj within the 1-distance neighborhood B(v_ti), Z_ti is a normalizing term, l_ti is a labeling function and w is a weighting function.
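The Kipf-Welling update rule with the renormalization trick can be sketched in a few lines. The toy 3-joint "skeleton" (a path graph), the feature values and the identity weight matrix below are all placeholders for illustration:

```python
import numpy as np

def gcn_layer(a, x, w):
    """One graph-convolution layer using the renormalization trick:
    X' = D~^{-1/2} (A + I) D~^{-1/2} X W."""
    a_hat = a + np.eye(a.shape[0])                       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt @ x @ w

# Toy 3-joint path graph with 2 features per joint and identity weights:
a = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
x = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
out = gcn_layer(a, x, np.eye(2))
print(out.shape)  # (3, 2): one aggregated feature vector per joint
```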
C. ADAPTIVE GCN
The skeleton graph used in ST-GCN is just the physical structure of the human body; there are no links between long-distance joints. Shi et al. [20] try to solve this problem with an adaptive adjacency matrix of the graph. In ST-GCN, (1) is transformed into

f_out = Σ_{k=1}^{K_v} W_k f_in (A_k ⊙ M_k), (8)

where K_v is the number of subsets, A_k is the adjacency matrix of the corresponding subset, W_k is a C_out × C_in × 1 × 1 weight vector, C denotes the number of features of a joint, M_k is an N × N weight matrix that indicates the importance of each vertex, and ⊙ is the element-wise (dot) product. In (8), A_k is just the skeleton adjacency matrix, which represents the physical structure of the human body. To make the adjacency matrix adaptive, they introduce another two types of adjacency matrices, as shown in

f_out = Σ_{k=1}^{K_v} W_k f_in (A_k + B_k + C_k), (9)

where A_k is the same as in (8). B_k is a parameterized adjacency matrix which indicates the existence of connections between arbitrary pairs of joints; because the values in B_k can be arbitrary, they can also indicate the importance of connections, like M_k in (8). C_k is a data-dependent graph which learns a unique graph for each sample.
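A forward pass of the adaptive form in (9) can be sketched for a single subset k. All sizes and values below are toy placeholders; in particular, A_k is an identity stand-in for the normalized physical skeleton graph, and C_k is built as a row-normalized similarity of two joint embeddings, mimicking the data-dependent graph:

```python
import numpy as np

rng = np.random.default_rng(0)
n_joints, c_in, c_out = 5, 3, 4

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

a_k = np.eye(n_joints)                                   # stand-in physical graph
b_k = rng.normal(scale=0.01, size=(n_joints, n_joints))  # freely learned offsets
x = rng.normal(size=(n_joints, c_in))                    # per-joint features
theta = rng.normal(size=(c_in, 2))                       # embedding projections
phi = rng.normal(size=(c_in, 2))
c_k = softmax((x @ theta) @ (x @ phi).T)                 # unique graph per sample
w_k = rng.normal(size=(c_in, c_out))

f_out = (a_k + b_k + c_k) @ x @ w_k                      # Equation (9), one subset k
print(f_out.shape)  # (5, 4)
```

Each row of C_k sums to 1, so it behaves like a normalized attention graph over the joints.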
D. LOW-LEVEL FEATURES OF ADAPTIVE GRAPH CONVOLUTIONAL NETWORKS
The pipeline of the proposed work is illustrated in Fig. 2. The workflow of our framework is mainly based on [20]. The main difference between our work and [20] is that we take advantage of low-level features. In [20] and [18], there are similar structures in the blocks of their networks, but they only utilize local low-level features, like ResNet [22]. Some works [23], [35] have shown that low-level features are helpful for many applications. Kong et al. [36] have shown that a well-trained convolutional neural network is capable of producing well-organized features consisting of abundant semantic and fine-grained information. Although skeleton data has removed a lot of irrelevant information, it still carries multi-view information, different kinds of adjacency and the semantics of an action. According to classic convolutional neural network theory, shallow layers are more effective at capturing subtle, fine features that represent delicate structures, while deep layers extract high-level semantic features. Utilizing both high-level and low-level features is essential for human action recognition. In human action recognition, information reaches a stable state after multi-level propagation, but in some cases the original information is more discriminative. For example, for ''taking off shoes'' and ''putting on shoes'', the final features encode the entire process, yet their main difference is the start position. Therefore, we argue that information like the ''start position'' exists in low-level features. Based on this idea, we propose a multi-level feature extraction framework to fully exploit the complementary information in skeleton data. First, we feed different low-level features into branches before they enter the backbone network. Then, we aggregate features from different low-level layers using GCN-TCN blocks of different sizes. More specifically, the feature dimension of a skeleton frame gradually increases from 3 to 64, 128 and 256.
We put low-level features into branches before using the stride structure to increase the feature dimension; thus, we save multi-stage low-level features. We utilize the GCN-TCN block for two purposes. The first is to capture low-level spatial and temporal features in the skeleton, like the backbone network. The second is to enlarge the low-level features to the size of the backbone network's output, so the kernel sizes of the different branches differ. At last, we add all enlarged features directly:

$$f_{final} = f_{out} + \sum_{i=1}^{n_b} \alpha_i f_i, \quad (10)$$

where $f_i$ is the output feature of the $i$th branch, $n_b$ is the number of branches, and $\alpha_i$ is the weight of the corresponding low-level features. The remaining parts form a standard fully connected network, and $f_{out}$ is the same as in (9).
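The aggregation in Eq. (10) amounts to adding a weighted sum of enlarged branch outputs to the backbone output. A minimal sketch, with hypothetical channel counts and branch weights:

```python
import numpy as np

rng = np.random.default_rng(1)
C, T, n_b = 256, 75, 3                      # output channels, temporal length, number of branches

f_backbone = rng.normal(size=(C, T))        # output of the backbone network (f_out)
# branch outputs, assumed already enlarged to C channels by their GCN-TCN blocks
branches = [rng.normal(size=(C, T)) for _ in range(n_b)]
alpha = [0.5, 0.3, 0.2]                     # per-branch weights (hypothetical values)

# Eq. (10): add the weighted, enlarged low-level features to the backbone output
f_final = f_backbone + sum(a * f for a, f in zip(alpha, branches))
```

Since the addition is element-wise, every branch must first be brought to the backbone's output shape, which is exactly why the branch GCN-TCN blocks use different kernel sizes.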
IV. EXPERIMENTS A. DATASETS
In this section, we evaluate the performance of our method with two large-scale action recognition datasets: Kinetics [25] and NTU-RGB+D [24]. Both of them are benchmark datasets in human action recognition.
1) NTU-RGB+D.
This is the largest in-house captured action recognition dataset. It provides several data modalities, such as RGB video, depth maps and 3D joint positions. This dataset has both video and skeleton data for human action recognition; in this paper, we use the skeleton data.
2) KINETICS
The DeepMind Kinetics human action dataset has 300,000 video clips in 400 classes retrieved from YouTube. Yan et al. [18] used the OpenPose [37] toolbox to generate the locations of 18 joints in each frame; we use the skeleton data generated by [18] directly. The dataset is split into a training set with 240,436 clips and a validation set with 19,796 clips. Following the evaluation method in [18], we report the top-1 and top-5 accuracies on the validation set.
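The reported top-1 and top-5 metrics are instances of top-k accuracy, which can be computed as below. This is a generic sketch with toy scores, not the evaluation code of [18]:

```python
import numpy as np

def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]   # indices of the k largest scores per sample
    hits = [labels[i] in topk[i] for i in range(len(labels))]
    return float(np.mean(hits))

# Toy class scores for 3 samples over 3 classes
scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.2, 0.6]])
labels = np.array([1, 2, 1])
top1 = topk_accuracy(scores, labels, 1)   # only sample 0 is correct at k=1
top5 = topk_accuracy(scores, labels, 2)   # k=2 here because the toy example has only 3 classes
```

Top-5 accuracy is always at least top-1 accuracy, which is why the gap between the two is informative about near-miss confusions between similar classes.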
B. NETWORK ARCHITECTURE AND TRAINING DETAILS
The backbone network is based on [20]. The whole model is composed of 10 layers of GCN-TCN blocks. The first layer has 3 input channels and 64 output channels, and the next three layers have the same number of output channels as the first layer. The stride in layer 5 is set to 2, acting as a pooling layer, and the output channels change to 128; the following two layers also have 128 output channels. The stride in layer 8 is 2, and the output channels of the last two layers are 256. Before layers 1, 2 and 6, we add three branches; the input channels of the three branches are 3, 64 and 128, and their output channels are all 256. All experiments are conducted with the PyTorch deep learning toolbox on 2 Tesla V100 GPUs. Stochastic gradient descent with Nesterov momentum of 0.9 is used for optimization, and the learning rate is set to 0.0001. Cross-entropy is selected as the loss function for backpropagating gradients. For the NTU-RGB+D dataset, the batch size is 32, and we decay the learning rate by 0.1 at epochs 30, 40 and 50. The maximum number of frames in each sample is 300; we pad a video to 300 frames if it has fewer. For the Kinetics dataset, the batch size is 128, and we decay the learning rate at epochs 45, 55 and 65. The input tensor is set the same as in [20], with 150 frames and 2 subjects in each frame. Preprocessing is also a critical factor for performance. In the NTU-RGB+D dataset, the first 50 action classes contain clips with only one subject, and the last 10 action classes contain clips with two subjects. The body tracker of Kinect is prone to detecting more than 2 bodies, so the incorrect bodies need to be filtered out. The preprocessing strategy in [17] is used only to process clips containing more than two subjects. First, if the number of valid frames in the raw skeleton sequence is less than a predefined threshold, we delete the subject. Then, if the spread along the y-axis is smaller than that along the x-axis in a frame, that is, the body width is greater than the body height, the frame is considered invalid.
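The frame-padding step might be sketched as follows, assuming NTU-style (C, T, V, M) tensors and zero-padding; repeating the clip instead of zero-padding is another common choice, and the paper does not state which one it uses:

```python
import numpy as np

def pad_to_length(clip, max_frames=300):
    """Zero-pad a (C, T, V, M) skeleton clip along the frame axis to max_frames."""
    C, T, V, M = clip.shape
    if T >= max_frames:
        return clip[:, :max_frames]          # truncate overly long clips
    padded = np.zeros((C, max_frames, V, M), dtype=clip.dtype)
    padded[:, :T] = clip                     # copy the real frames, leave the rest zero
    return padded

clip = np.ones((3, 120, 25, 2))   # 3 coordinates, 120 frames, 25 NTU joints, 2 subjects
out = pad_to_length(clip)
```

Padding to a fixed length lets clips of different durations be stacked into a single batch tensor for the GPU.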
If the percentage of invalid frames is greater than the predefined threshold, we also delete the subject. At last, we sort the subjects according to joint variance and select the two with the lowest variance. For data consistency, if only one subject remains, the second subject is padded with zeros. This preprocessing is denoted as VA. Another preprocessing step is normalization and translation, following [18], [20].
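The final selection step can be sketched as below, following the text's rule of keeping the two subjects with the lowest joint variance; the (T, V, 3) array layout and the zero-padding of a missing subject are assumptions for illustration:

```python
import numpy as np

def select_two_subjects(bodies):
    """bodies: list of (T, V, 3) joint arrays, one per detected subject.
    Sort by total joint variance and keep two, zero-padding if only one remains."""
    ranked = sorted(bodies, key=lambda b: b.var())   # lowest variance first, per the text
    kept = ranked[:2]
    while len(kept) < 2:                             # pad a missing second subject with zeros
        kept.append(np.zeros_like(kept[0]))
    return np.stack(kept)                            # shape (2, T, V, 3)

rng = np.random.default_rng(2)
# three hypothetical tracked bodies with increasing joint noise
bodies = [rng.normal(scale=s, size=(50, 25, 3)) for s in (0.1, 2.0, 0.5)]
selected = select_two_subjects(bodies)
```

Fixing the subject count at two keeps the (C, T, V, M) tensor shape constant across the single-person and two-person action classes.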
C. COMPARISON WITH THE STATE-OF-THE-ART
Because the 3-dimensional feature in the Kinetics dataset consists of a 2-dimensional position vector and a confidence score, we cannot preprocess the skeleton data with VA. First, we compare our best configuration, denoted as VA+LAGCN, with the state of the art on the NTU-RGB+D dataset. We only compare the performance using joint data, which is the most basic feature. Results are shown in Table 1. In Table 1, reference [15] is a hand-crafted method; references [24], [30]-[32] and the RNN method in [17] are RNN-based methods; references [38]-[40] and the CNN method in [17] are CNN-based methods; and references [18]-[20], [41]-[43], [45] and our method are GCN-based methods. First, we validate the effectiveness of the VA preprocessing. The result denoted as VA+2sAGCN shows an improvement over existing algorithms. Then, we add low-level features to 2s-AGCN (denoted as VA+LAGCN), and the performance is further improved. Our method achieves state-of-the-art performance in the X-Sub evaluation of the NTU-RGB+D dataset.
D. EFFECTIVENESS OF THE LOW-LEVEL FEATURES
In order to verify the effectiveness of low-level features, we perform a head-to-head comparison with 2s-AGCN on both the Kinetics and NTU-RGB+D datasets. Results are shown in Table 2 and Table 3. Integrating multiple features is a common strategy in machine learning: 2s-AGCN calculates bone features from the original skeleton data and integrates both joint and bone features to improve performance. We also conduct our experiments with joints, bones and both. Using low-level features shows an obvious improvement on X-Sub with just joint data, which is consistent with the results in Table 1 and shows that low-level features are more effective in extracting joint features. In most cases on the NTU-RGB+D dataset, our method improves the accuracies. On the Kinetics dataset in particular, the top-5 accuracies show an obvious improvement, which indicates that low-level features can widen the gap between similar classes. To further analyse the proposed algorithm, we generate two confusion matrices for the CS and CV evaluations of the NTU-RGB+D dataset, as shown in Fig. 4 and Fig. 5. The accuracy of most categories is high, but there are obvious misclassifications in classes 10, 11 and 29, whose corresponding actions are reading, writing and playing with a phone/tablet. Intuitively, the hand movements of these classes are similar. In future work, we should focus on how to further distinguish fine-grained hand actions.
V. CONCLUSION
In this work, we review the development of human action recognition based on skeleton data. After analysing the network structure of previous GCN-based methods, we propose a novel low-level adaptive graph convolutional neural network (LAGCN) for skeleton-based action recognition. It constructs multiple branches from a global view to improve classification performance; these branches extract low-level features of the network. The final network is evaluated on two large-scale action recognition datasets, NTU-RGB+D and Kinetics, and achieves state-of-the-art performance on both.
NGF-BMSC-SF/CS composites for repairing knee joint osteochondral defects in rabbits: evaluation of the repair effect and potential underlying mechanisms
Background: With the rapid growth of the ageing population, chronic diseases such as osteoarthritis have become one of the major diseases affecting the quality of life of elderly people. The main pathological manifestation of osteoarthritis is articular cartilage damage. Alleviating and repairing damaged cartilage has always been a challenge. The application of cartilage tissue engineering methods has shown promise for articular cartilage repair. Many studies have used cartilage tissue engineering methods to repair damaged cartilage and obtained good results, but these methods still cannot be used clinically. Therefore, this study aimed to investigate the effect of incorporating nerve growth factor (NGF) into a silk fibroin (SF)/chitosan (CS) scaffold containing bone marrow-derived mesenchymal stem cells (BMSCs) on the repair of articular cartilage defects in the knees of rabbits and to explore the possible underlying mechanism involved.

Materials and methods: Nerve growth factor-loaded sustained-release microspheres were prepared by a double emulsion solvent evaporation method. SF/CS scaffolds were prepared by vacuum drying and chemical crosslinking. BMSCs were isolated and cultured by density gradient centrifugation and adherent culture. NGF-SF/CS-BMSC composites were prepared and implanted into articular cartilage defects in the knees of rabbits. The repair of articular cartilage was assessed by gross observation, imaging and histological staining at different time points after surgery. The repair effect was evaluated by the International Cartilage Repair Society (ICRS) score and a modified Wakitani score. In vitro experiments were also performed to observe the effect of different concentrations of NGF on the proliferation and directional differentiation of BMSCs on the SF/CS scaffold.

Results: In the repair of cartilage defects in rabbit knees, NGF-SF/CS-BMSCs resulted in higher ICRS scores and lower modified Wakitani scores.
The in vitro results showed that there was no significant correlation between the proliferation of BMSCs and the addition of different concentrations of NGF. Additionally, there was no significant difference in the protein and mRNA expression of COL2a1 and ACAN between the groups after the addition of different concentrations of NGF.

Conclusion: NGF-SF/CS-BMSCs improved the repair of articular cartilage defects in the knees of rabbits. This repair effect may be related to the early promotion of subchondral bone repair.
Introduction
With the rapid growth of the ageing population, chronic diseases such as osteoarthritis have become one of the major diseases affecting the quality of life of elderly people [1]. The main pathological manifestation of osteoarthritis is articular cartilage damage. Alleviating and repairing damaged cartilage has always been a challenge [2]. The application of cartilage tissue engineering methods has shown promise for articular cartilage repair [3]. Articular cartilage has no direct blood supply and lacks nerves and lymphatics [4]. The joint cavity is a hypoxic environment, and its nutritional status mainly depends on the diffusion of surrounding synovial fluid and the blood supply to the subchondral bone; therefore, it is challenging for articular cartilage to repair itself once damaged [5]. There are multiple ways to treat cartilage damage [6][7][8], but clinically, no single treatment can completely treat this condition [9]. With the development of cross-disciplinary medicine in recent years, research and progress in the field of tissue engineering have shown promise for articular cartilage repair [10]. Tissue engineering is a discipline that uses engineering and life science technologies to repair or replace damaged human tissues and organs [11]. Several basic experimental studies have confirmed that cartilage tissue engineering has a certain effect on the repair of articular cartilage [12]. In our previous experimental study, we found that a silk fibroin (SF)/chitosan (CS) scaffold combined with bone marrow-derived mesenchymal stromal/stem cells (BMSCs) promoted the repair of articular cartilage defects in rabbits [13]. To further promote articular cartilage repair, we wanted to determine whether adding cytokines to the SF/CS scaffold-BMSC composite can increase the repair effects and achieve better therapeutic results.

The repair of articular cartilage damage can be promoted by the addition of cytokines in tissue engineering [14], such as transforming growth factors (TGFs) [15], insulin-like growth factor (IGF) [16], and bone morphogenetic protein (BMP) [17]. These factors stimulate the synthesis of the extracellular matrix through different pathways, promote the expression of aggrecan (ACAN) and type II collagen (COL2A1), promote the differentiation of cartilage precursor cells into mature chondrocytes, maintain the chondrocyte phenotype, and inhibit the decomposition of the extracellular matrix [18].

Nerve growth factor (NGF) is a crucial cytokine in the nervous system that strongly influences the growth, development, differentiation, and survival of neurons and has clinical importance due to its ability to regulate nerve regeneration and repair following injury [19]. Adverse effects, such as rapidly progressive osteoarthritis, may occur in some patients treated with anti-NGF antibodies, indicating that NGF plays an important role in maintaining the process of repairing damaged bone and cartilage [20][21][22][23]. NGF has a short half-life in the body and is easily inactivated [24]. Systemic or local administration of a single dose of NGF is not effective and has obvious side effects, for example, pain at the injection site, transient aminotransferase elevation, dizziness, and insomnia; continuous administration is required to maintain its effect. The use of biomaterials for sustained release can maintain the biological activity of NGF in a physiological environment. The double emulsification-solvent evaporation method is the most common and mature method for preparing cytokine-loaded sustained-release microspheres [25,26].

In this study, NGF-loaded sustained-release microspheres were prepared and incorporated into SF/CS scaffolds carrying BMSCs to obtain NGF-SF/CS-BMSC composites, which were then implanted into cartilage defects in rabbit knees. This study aimed to investigate the effect of NGF-SF/CS-BMSCs on the repair of rabbit knee articular cartilage defects.
Preparation of the SF/CS scaffold
As shown in Fig. 1, SF powder (Zhejiang, China) was dissolved in a CaCl₂ (Macklin, Shanghai, China)/H₂O/C₂H₅OH (Macklin, Shanghai, China) ternary solution at a molar ratio of 1:8:2. After extraction and purification, an SF solution at a concentration of 3% was obtained. CS powder (Aladdin, Shanghai, China) was dissolved in 2% acetic acid (Kelong, Chengdu, China) to prepare a 3% CS solution. Then, 50 mL of SF solution and 50 mL of CS solution were mixed and stirred continuously with a magnetic stirrer for 1.5 h. The mixture was then poured into a mould and frozen at -80 °C for 24 h, followed by drying in a vacuum freeze dryer (Ningbo Xinzhi Biochemical Co., Ltd., Ningbo, China). The SF/CS scaffolds were immersed in a solution (75% methanol (Kelong, Chengdu, China) and 1 mol/L NaOH (Kelong, Chengdu, China) at a volume ratio of 1:1; 50 mmol/L EDC (Aladdin, Shanghai, China) and 20 mmol/L NHS (Aladdin, Shanghai, China) at a volume ratio of 1:1), crosslinked twice, and then dried under a vacuum. The morphology of the SF/CS scaffold was observed by gross observation, scanning electron microscopy (SEM) and pathological examination.
Isolation, purification and culture of rabbit BMSCs
All animal procedures were approved by the Institutional Animal Care and Use Committee of The First People's Hospital of Zunyi (No. 2020-1-17). As shown in Fig. 2, 5- to 6-month-old New Zealand white rabbits (Jinan Jinfeng, China) were anaesthetized. After the operative field was routinely disinfected and covered with a sterile drape, 4 mL of bone marrow was extracted from the bilateral femur and tibia with a syringe containing heparin. Four millilitres of phosphate-buffered saline (PBS) was added, and a cell suspension was prepared to isolate mononuclear cells using mononuclear cell isolation solution (Anhui Biosharp, China). Then, the cells were seeded in culture flasks at a density of 1.4 × 10⁴ cells/cm² and placed in a cell culture incubator. The medium was changed every 2-3 days. When the cells reached 80-85% confluence, they were digested with trypsin and subcultured at a ratio of 1:2. The subculture methods were the same as those used for the primary culture. The growth of BMSCs at different time points was observed under a microscope. Third-generation BMSCs were used for subsequent experiments [27,28].
Differentiation of BMSCs on the SF/CS scaffold
The sterile SF/CS scaffold was placed in 24-well plates. Well-grown third-generation BMSCs were harvested, and a cell suspension was prepared by adding medium. Then, the cell suspension was seeded onto the scaffold at a density of 5 × 10⁵ cells and cultured in an incubator with 5% CO₂ at 37 °C. After 4 h of culture, osteogenic differentiation medium (Procell, Wuhan, China) or chondrogenic differentiation medium (ScienCell, USA) was added, and the medium was changed every 2-3 days. After 21 days of culture, the SF/CS scaffolds were removed and fixed in 4% paraformaldehyde for 24 h. After dehydration in graded alcohol, embedding in paraffin wax and sectioning, the samples were stained with haematoxylin and eosin (H&E), Alcian blue, and 2% Alizarin red S, after which they were photographed and analysed.
Preparation and performance evaluation of NGF-loaded sustained-release microspheres
As shown in Fig. 3, 100 µg of NGF (Wuhan, China) and 100 mg of bovine serum albumin (Gibco, USA) dissolved in 100 µL of deionized water were used as the internal aqueous phase (W₁). One hundred milligrams of PLGA (Aladdin, Shanghai, China) dissolved in 2 mL of dichloromethane (Kelong, Chengdu, China) was used as the oil phase (O). The external aqueous phase (W₂) contained 10 mL of 2% polyvinyl alcohol solution. W₁ was poured into O, and the mixture was sonicated for 30 s in an ice bath to create the primary emulsion. The primary emulsion was poured into W₂ and stirred with a magnetic stirrer at 1000 r/min for 30 min in an ice bath to obtain a W₁/O/W₂ double emulsion. Then, the double emulsion was poured into 400 mL of deionized water containing 10% sodium chloride and stirred with a magnetic stirrer (800 r/min) at room temperature for 4 h to volatilize residual organic solvents. Sustained-release microspheres were collected by centrifugation at 3000 r/min at 4 °C, washed with 100 mL of deionized water 5 times, and freeze-dried in a vacuum freeze dryer for 4 h to obtain NGF-loaded sustained-release microspheres. The above procedure was repeated, and a total of 3 batches of microspheres were prepared. The NGF-loaded sustained-release microspheres were stored at 4 °C (refrigerator) for further experiments. The morphology of the NGF-loaded sustained-release microspheres was observed, and the particle size, encapsulation efficiency, drug loading rate and cumulative release percentage of the microspheres were determined. The NGF-loaded sustained-release microspheres were mixed with PC12 cells to evaluate the biological activity of NGF by counting the number of axons.
Preparation of NGF-SF/CS-BMSC composites
One hundred milligrams of NGF sustained-release microspheres was dissolved in 1 mL of distilled water and thoroughly mixed. The SF/CS scaffold was placed in a 24-well plate, and 200 µL of the NGF sustained-release microsphere suspension was dropped onto the scaffold to ensure that the scaffold was wetted completely. Then, the NGF-SF/CS scaffold composite was frozen in a -80 °C refrigerator for 24 h, dried in a vacuum dryer for 12 h, removed and sterilized with ethylene oxide. Then, the NGF-SF/CS scaffold was placed in a 24-well plate. After the third-generation BMSCs were digested and centrifuged, the cell density was adjusted to 2 × 10⁶ cells/mL. Two hundred microlitres of cell suspension was seeded onto the scaffold and cultured for 4 h. Then, basal medium was added, and the cells were cultured for 24 h. The prepared NGF-SF/CS scaffold-BMSC composite was used in the following experiments.
Animal experiments
All animal experiments were approved by the Institutional Animal Review Committee of the Third Affiliated Hospital of Zunyi Medical University (No. 2020-1-17). All animal experimental procedures were performed under pentobarbital anaesthesia, and efforts were made to minimize animal suffering.

A total of 24 healthy New Zealand white rabbits (male and female, 5-6 months old, weighing 2.5 ± 0.4 kg) were divided into three groups (8 rabbits per group): the NGF-SF/CS-BMSC group (experimental group, treatment of osteochondral defects with the NGF-SF/CS-BMSC composite), the SF/CS-BMSC group (control group, treatment of osteochondral defects with the SF/CS-BMSC scaffold composite), and the SF/CS group (blank group, treatment of osteochondral defects with the SF/CS scaffold). The sample size was determined from the degrees of freedom required for ANOVA.

All rabbits were anaesthetized with 2% pentobarbital sodium (1 mL/kg) (Wuhan, China) via the ear vein. The operative field was routinely disinfected and covered with a sterile drape. A 2 cm incision was made over the lateral side of the patella of the bilateral knees. The skin and subcutaneous fascia were incised layer by layer, and the patella was dislocated medially to expose the non-load-bearing region between the femoral condyles. An osteochondral defect (5 mm diameter, 4 mm depth) [29,30] was drilled in the non-weight-bearing region of the bilateral femoral condyles using an electric drill to establish a rabbit osteochondral defect model. After washing, NGF-SF/CS-BMSC, SF/CS-BMSC, and SF/CS composites were implanted into the defect regions of rabbits in the corresponding groups. After the patella was repositioned, the wound was closed in layers and disinfected with iodophor solution (Fig. 4). After surgery, all rabbits were housed in individual cages, allowed to move freely and had free access to water and food. Gentamicin (4 mg/kg) was injected once daily for 3 days after surgery.
Observation indicators
At 4, 8, and 12 weeks, computed tomography (CT) (GE, USA), magnetic resonance imaging (MRI) (Siemens, USA) and morphological observation were used to assess the repair of articular cartilage defects, and images were acquired. Then, the knees of the rabbits in each group were harvested. Gross observation of the samples was scored using the International Cartilage Repair Society (ICRS) scoring system. Pathological stains, such as H&E and Alcian blue, were histologically graded using the modified Wakitani cartilage repair scoring system.
In vitro experiments

The effect of NGF on the proliferation of BMSCs on the SF/CS scaffold
Well-grown third-generation BMSCs were harvested, digested and centrifuged. A cell suspension was made with medium containing 10% serum. Subsequently, the cell concentration was adjusted to 1 × 10⁵ cells/mL, the cells were seeded on the SF/CS scaffold in 96-well plates, and the edge wells were filled with sterile PBS. The control wells contained medium only. The plates were then incubated for 4 h, and different concentrations of NGF were added, with 5 replicate wells for each concentration. Cells in the control wells were cultured with complete culture medium containing 10% serum without NGF. After 24, 48 and 72 h and 4, 5, 6, 7 and 8 days of culture, one 96-well plate was removed each day, the medium was changed, and 10 µL of Cell Counting Kit-8 (KeyGEN, China) solution was added to each well, followed by incubation for 1 h at 37 °C. The medium was then aspirated and transferred to another 96-well plate. The absorbance (OD) at 450 nm was determined with a microplate reader. The growth curves of BMSCs cultured with NGF were drawn with the OD value as the ordinate and the time (days) as the abscissa.
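Each point on such a growth curve is the mean OD of the 5 replicate wells at one time point, usually reported as mean ± SD. A minimal sketch with hypothetical absorbance readings:

```python
import numpy as np

# Hypothetical CCK-8 absorbance readings: rows = time points, columns = 5 replicate wells
od = np.array([[0.21, 0.20, 0.22, 0.19, 0.21],
               [0.35, 0.33, 0.36, 0.34, 0.35],
               [0.52, 0.55, 0.50, 0.53, 0.51]])

mean_od = od.mean(axis=1)        # one point per time point on the growth curve
sd_od = od.std(axis=1, ddof=1)   # sample standard deviation, for the mean ± SD error bars
```

Averaging replicate wells smooths pipetting and seeding variation, which is why 5 replicates per concentration were used.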
The effect of NGF on the directed chondrogenic differentiation of BMSCs on the SF/CS scaffold
Third-generation BMSCs were harvested, digested and centrifuged. A cell suspension was made with medium containing 10% serum. Then, the cell concentration was adjusted to 5.0 × 10⁶ cells/mL, the cells were seeded on the SF/CS scaffold in 96-well plates, and the edge wells were filled with sterile PBS. The cell suspensions (100 µL) were added to the SF/CS scaffold and cultured for 4 h. Chondrogenic differentiation medium containing different concentrations of NGF was then added, and the medium was changed every 2-3 days. After 7 and 21 days of culture, COL2A1 and ACAN protein expression was determined by western blot analysis, and the mRNA expression of ACAN, SOX9 and COL2a1 was determined by real-time PCR.
Statistical analysis
Statistical analysis was performed using the SPSS 18.0 software package. All the data are expressed as the mean ± standard deviation (SD). Comparisons between two groups were performed using a t-test. Comparisons among multiple groups were performed by one-way ANOVA. A value of P < 0.05 was considered to indicate statistical significance.
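The study used SPSS for these tests; as a sketch of what the one-way ANOVA computes, the F statistic (between-group mean square over within-group mean square) can be reproduced directly, here with hypothetical score data for three groups:

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic of a one-way ANOVA: between-group MS / within-group MS."""
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    k = len(groups)                 # number of groups
    n = len(all_data)               # total number of observations
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical scores for three groups of 4 animals each
g1 = np.array([3.1, 2.9, 3.3, 3.0])
g2 = np.array([3.2, 3.1, 2.8, 3.0])
g3 = np.array([4.1, 4.3, 4.0, 4.2])
F = one_way_anova_F(g1, g2, g3)
```

A large F (here driven by the clearly shifted third group) corresponds to a small P value when compared against the F distribution with (k-1, n-k) degrees of freedom.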
Observation of the prepared scaffold
Gross observation revealed that the SF/CS scaffold was white or milky white, with a regular shape similar to that of the mould. The SF/CS scaffold had no special odour, was extremely light, and had obvious pressure resistance and elasticity (Fig. 5A). Transverse SEM images of the SF/CS scaffold showed that the scaffold exhibited a honeycomb shape with irregularly shaped pores (such as polygons and circles), and the pores were highly interconnected (Fig. 5B and C). Histological examination of H&E-stained sections revealed irregular hollow structures inside the scaffold, with pores of varying sizes and thin walls (Fig. 6).
Morphological observation of rabbit BMSCs
The adherent BMSCs were predominantly triangular, spindle-shaped or fusiform. After 24 h of culture, a few cells adhered to the bottom of the culture flask, and the cells then gradually became denser and more homogeneous in morphology. After 5 days of culture, the cells gradually approached confluence, covering the bottom of the flask after 9 days of culture. After passaging, second- and third-generation cells were more homogeneous, predominantly spindle-shaped or fusiform, and the cells were arranged in parallel and grew in a school-of-fish-like or swirling pattern (Fig. 7).
Osteogenic and chondrogenic differentiation of BMSCs on the SF/CS scaffold
In terms of osteogenic differentiation (Fig. 8A), H&E staining did not reveal differences in the amount of newly formed bone between the experimental and control groups. Alizarin red staining revealed that the number of calcification nodules in the experimental group was significantly greater than that in the control group, indicating that rabbit BMSCs showed good compatibility with the SF/CS scaffold and could differentiate into osteocytes and express osteogenesis-related proteins under osteogenic induction.

In terms of chondrogenic differentiation (Fig. 8B), H&E staining revealed no significant difference between the experimental and control groups. Alcian blue-positive staining was significantly greater in the experimental group than in the control group. Immunohistochemical staining showed that COL2A1 expression was greater in the experimental group than in the control group. These results also suggested that rabbit BMSCs showed good compatibility with the SF/CS scaffold, could differentiate into chondrocytes, and expressed chondrogenesis-related proteins under chondrogenic induction.
Microsphere morphology observed under light microscopy and SEM
As shown in Fig. 9, the sustained-release microspheres displayed spherical shapes of different sizes. A high-magnification view of a local area in the light microscopy images showed that the microspheres had cores, with a clear boundary between the core and the outer shell. The outer shell of each microsphere was translucent in solution, nearly circular, and intact, providing full encapsulation.
The size of NGF sustained-release microspheres
The particle size of the microspheres was determined with a laser particle size analyser (Mastersizer, Malvern, UK). The size of the sustained-release microspheres varied from 0.461 μm to 756 μm, with a mean particle size of 180 μm, and the particle size distribution was relatively concentrated.
Encapsulation efficiency, drug loading rate and cumulative release percentage of the microspheres
As shown in Table 1, the mean mass of the prepared NGF sustained-release microspheres was 99.312 ± 2.725 mg. The encapsulation efficiency and drug loading rate were 31.15% and 31.36%, respectively (Fig. 10).
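Encapsulation efficiency and drug loading rate are conventionally defined as the encapsulated drug relative to the total drug fed, and relative to the total microsphere mass, respectively. The paper does not state its exact formulas, so the definitions below are an assumption, and the example masses are hypothetical:

```python
def encapsulation_efficiency(drug_encapsulated, drug_fed):
    """EE (%) = encapsulated drug / total drug fed * 100 (standard definition, assumed here)."""
    return drug_encapsulated / drug_fed * 100.0

def drug_loading(drug_encapsulated, microsphere_mass):
    """DL (%) = encapsulated drug / total microsphere mass * 100 (standard definition, assumed here)."""
    return drug_encapsulated / microsphere_mass * 100.0

# Hypothetical batch: 50 ug of 200 ug fed drug encapsulated in 1 mg of microspheres
ee = encapsulation_efficiency(50.0, 200.0)
dl = drug_loading(50.0, 1000.0)
```

Both quantities are typically measured by dissolving a weighed microsphere sample and assaying the released protein, e.g. by ELISA.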
Biological activity of NGF
As shown in Fig. 11A and B, after PC12 cells were cultured for 7 days, statistically significant differences in NGF biological activity were found between the blank group and the 50 ng/mL NGF group and between the blank group and the NGF sustained-release microsphere group. There was no statistically significant difference between the 50 ng/mL NGF group and the NGF sustained-release microsphere group, indicating that the NGF released from the sustained-release microspheres retained its biological activity.
Results of animal experiments
Imaging findings

MRI (Fig. 12A) revealed no obvious cartilage tissue repair in any group at 4 weeks after surgery. At 8 weeks, a small amount of newly formed cartilage tissue covered the defects of the knee joints in the NGF-SF/CS-BMSC group, but the repair was incomplete. In the SF/CS-BMSC group, a small amount of repair tissue was also observed in the cartilage defects, but the amount of locally repaired cartilage was less than that in the NGF-SF/CS-BMSC group. In the SF/CS group, the defect was not filled with new bone or cartilage. At 12 weeks, the defect was fully covered by new cartilage tissue in the NGF-SF/CS-BMSC group. Slight depressions were observed in the centre of the cartilage defect in the SF/CS-BMSC group. In the SF/CS group, the defect was not covered by cartilage. CT images (Fig. 12B) showed that at 4 weeks after surgery, a sporadic and scattered distribution of bone tissue was observed in the defects in the NGF-SF/CS-BMSC group. No bone tissue was observed in the defects in either the SF/CS-BMSC or the SF/CS group. At 8 weeks after surgery, the newly formed bone tissue completely covered the bottom of the articular cartilage defects in the NGF-SF/CS-BMSC group. Only a small amount of bone tissue was scattered in the defects in the SF/CS-BMSC group. No bone tissue was observed in the defects in the SF/CS group.

At 12 weeks after surgery, cartilage tissue repair in the NGF-SF/CS-BMSC group was similar to that observed at 8 weeks. More bone tissue was observed at the bottom of the knee joint defects in the SF/CS-BMSC group than at 8 weeks after surgery. There was almost no bone tissue at the bottom of the defects in the SF/CS group.
Gross observation of knee joints of rabbits
Gross observation of articular cartilage repair at 4, 8, and 12 weeks after surgery is shown in Fig. 13. At 4 weeks, there was no significant difference in the ICRS score between the NGF-SF/CS-BMSC and SF/CS-BMSC groups, whereas significant differences were found between the NGF-SF/CS-BMSC and SF/CS groups. At 8 weeks and 12 weeks, the ICRS scores were significantly greater in the NGF-SF/CS-BMSC group than in the SF/CS-BMSC and SF/CS groups (Fig. 14).
As shown in Fig. 15A, B, H&E staining and Alcian blue staining revealed no cartilage tissue repair in any of the groups at 4 weeks, but the subchondral bone and underlying trabecular bone tissue were obviously repaired in the NGF-SF/CS-BMSC group. The defects were not repaired in the SF/CS-BMSC or SF/CS group. Cartilage and subchondral bone were not repaired in the SF/CS group. At 12 weeks, the cartilage in the NGF-SF/CS-BMSC group was completely repaired, and the layers of cartilage and subchondral bone were clear. In the SF/CS-BMSC group, the cartilage was partially repaired, and the cartilage tissues were embedded into part of the subchondral bone, with unclear layers. No repair of cartilage or subchondral bone was observed in the SF/CS group.
Cartilage repair was histologically graded using the modified Wakitani cartilage repair scoring system (Fig. 16). At 4 weeks, there was no statistically significant difference in Wakitani scores between the NGF-SF/CS-BMSC and SF/CS-BMSC groups, while statistically significant differences were found between the NGF-SF/CS-BMSC and SF/CS groups and between the SF/CS-BMSC and SF/CS groups. At 8 and 12 weeks, the Wakitani scores in the NGF-SF/CS-BMSC group were significantly lower than those in the SF/CS-BMSC and SF/CS groups.
NGF does not affect the proliferation of BMSCs on the SF/CS scaffold
The proliferation of BMSCs on the SF/CS scaffold fluctuated, but the fluctuation patterns were the same among all groups, and there was no significant correlation between the proliferation of BMSCs and the addition of different concentrations of NGF, indicating that NGF had no effect on the proliferation of BMSCs on the SF/CS scaffold (Fig. 17).
NGF exerts no effect on directed chondrogenic differentiation of BMSCs on the SF/CS scaffold
After 21 days of culture, the expression levels of COL2a1 and ACAN secreted by chondrocytes differentiated from BMSCs on the SF/CS scaffold were detected by western blot analysis, and the results showed that there was no significant difference in the protein expression of COL2a1 and ACAN between the groups after the addition of different concentrations of NGF (Fig. 18A, B).
Similarly, RT-PCR revealed no significant differences in the mRNA expression of COL2a1, ACAN, or SOX9 between the groups. These results indicated that NGF has no obvious promoting or inhibitory effect on the expression of the chondrogenic matrix genes (COL2a1 and ACAN) by chondrocytes differentiated from BMSCs on the SF/CS scaffold (Fig. 18C).
Discussion
Repairing articular cartilage after injury is challenging, and tissue engineering is a potential solution [10]. Previous experiments have demonstrated that SF/CS scaffolds, when combined with bone marrow mesenchymal stem cells, exhibit a beneficial effect on the repair of rabbit articular osteochondral defects. To enhance this effect, the cytokine NGF was added, and our experimental grouping was determined by this rationale. In this study, a composite of slow-release microspheres containing nerve growth factor (NGF), silk fibroin/chitosan (SF/CS) scaffolds, and bone marrow-derived mesenchymal stem cells (BMSCs) was implanted into the knee joints of rabbits with osteochondral defects to evaluate its effects. Imaging and pathology revealed that at 8 and 12 weeks, the experimental group exhibited significantly better cartilage repair than the control and blank groups. The repaired cartilage tissue in the experimental group was similar in structure and thickness to the original articular cartilage. Our findings suggest that SF/CS scaffolds with NGF slow-release microspheres and BMSCs had better reparative effects on rabbit joint defects, providing a promising approach for articular cartilage repair.
In vitro experiments revealed that NGF did not significantly promote or inhibit the proliferation of BMSCs, and NGF did not promote the expression of cartilage matrix by BMSCs or differentiated chondrocytes. As shown by our in vivo results, especially at 4 weeks, no obvious cartilage tissue repair was observed in any of the groups, but in the experimental group there was obvious subchondral bone tissue generation in the cartilage defects. At 8 and 12 weeks, articular cartilage repair gradually occurred, whereas in the control group, although there was repair of subchondral bone at 8 weeks, incomplete articular cartilage repair was observed at 12 weeks. Therefore, we speculate that NGF-SF/CS-BMSCs promote the repair of rabbit knee joints probably not by promoting the proliferation of BMSCs and directed chondrogenic differentiation with subsequent expression of COL2a1 and ACAN, but rather by promoting the repair of subchondral bone at an early stage, which in turn promotes the repair of articular cartilage.
Many experimental studies have confirmed that NGF plays a role in promoting osteogenesis and can promote bone tissue regeneration, chondrocyte hypertrophy and differentiation, and the formation of osteoblasts, but few studies have investigated the role of NGF in promoting cartilage repair. At present, it is considered that biomimetic composite scaffolds combined with mesenchymal stem cells can achieve cartilage tissue repair through several routes: direct promotion of the transformation of BMSCs into chondrocytes; maintenance of the chondrocyte phenotype; delay of chondrocyte hypertrophic differentiation; or promotion of COL2a1 and ACAN expression. Our in vivo and in vitro experiments showed that NGF did not promote the transformation of BMSCs into chondrocytes and had no promotional effect on the chondrocyte expression of COL2a1 or ACAN. Instead, NGF may promote articular cartilage repair indirectly, by promoting subchondral bone repair.
It has long been believed that degeneration and wear of articular cartilage are the main causes of cartilage damage and defects, and many studies on the repair and treatment of cartilage defects have focused on the cartilage layer, often ignoring the importance of the subchondral bone [31,32]. Clinical studies have shown that the subchondral bone of patients with osteoarthritis is altered earlier than the articular cartilage [33,34], which suggests that the integrity of the subchondral bone and its remodelling process are important.
In particular, the new drug tanezumab (an anti-NGF antibody) has been found to cause rapidly progressive osteoarthritis, which requires immediate joint replacement in severe cases. The specific reason for this is still unclear; considered together with our experiments, we speculate that tanezumab may mainly interfere with the homeostasis of the subchondral bone, which in turn affects the stability of the subchondral bone, leading to the rapid progression of osteoarthritis. Therefore, future studies need to focus more on the reconstruction and repair of the subchondral bone [35].
This study has several limitations. First, the grouping design was based on a preliminary analysis of the reparative effect of NGF on articular cartilage with the SF/CS scaffold combined with BMSCs, so a separate NGF slow-release microsphere group and blank control group were not established. Second, the slow-release microspheres were not further tested in vivo, mainly because they are affected by various factors in vivo. Third, this study investigated the reparative effect of NGF on articular cartilage, but NGF is an important biological factor in pain generation [36], and no pain-related indices were measured.
Conclusions
In summary, this study clarified that SF/CS scaffolds containing NGF slow-release microspheres combined with BMSCs have better effects on the repair of articular osteochondral defects, and this reparative effect may be related to the promotion of subchondral bone repair at an early stage.
Fig. 1 Flow chart of SF/CS scaffold preparation
Fig. 2 Flow chart of the isolation and culture of BMSCs
Fig. 3 Flow diagram showing the preparation of NGF sustained-release microspheres
Fig. 5 Gross observation and scanning electron microscopy images of the prepared SF/CS scaffold. A: gross observation of the SF/CS scaffold; B: SEM image of the SF/CS scaffold, 1 mm; C: SEM image of the SF/CS scaffold, 50 μm
Fig. 8 Identification of osteogenic differentiation (A) and chondrogenic differentiation (B) of BMSCs on the SF/CS scaffold
Fig. 10 In vitro release profile of NGF sustained-release microspheres
Fig. 11 A: Effects of NGF sustained-release microspheres on PC12 cell differentiation; B: Number of axons in PC12 cells in each group
Fig. 12 MR images (A) and CT images (B) of knee joints at 4, 8, and 12 weeks after surgery in each group
Fig. 15 A: Haematoxylin and eosin staining of knee sections from each group at 4, 8, and 12 weeks after surgery; B: Alcian blue staining of knee sections from each group at 4, 8, and 12 weeks after surgery
Fig. 18 A: Western blot analysis of COL2a1 and ACAN protein expression in chondrocytes differentiated from BMSCs on the SF/CS scaffold after the addition of different concentrations of NGF; B: Grey values of the COL2a1 and ACAN protein bands; C: mRNA expression of ACAN, COL2a1, and SOX9 detected by RT-PCR
Estimation of brain receptor occupancy for trazodone immediate release and once a day formulations
Abstract Trazodone is approved for the treatment of major depressive disorders, marketed as immediate release (IR), prolonged release, and once a day (OAD) formulation. The different formulations allow different administration schedules and may be useful to facilitate patients’ compliance to the antidepressant treatment. A previously verified physiologically‐based pharmacokinetic model based on in vitro and in vivo information on trazodone pharmacokinetics was applied, aiming at predicting brain receptor occupancy (RO) after single and repeated dosing of the IR formulation and repeated dosing of the OAD formulation in healthy subjects. Receptors included in the simulations were selected using static calculations of RO based on the maximum unbound brain concentration (Cmax,brain,u) of trazodone for each formulation and dosing scheme, resulting in 16 receptors being simulated. Seven receptors were simulated for the IR low dose formulation (30 mg), with similar t onset and duration of coverage (range: 0.09–0.25 h and 2.1–>24 h, respectively) as well as RO (range: 0.64–0.92) predicted between day 1 and day 7 of dosing. The 16 receptors evaluated for the OAD formulation (300 mg) showed high RO (range: 0.97–0.84 for the receptors also covered by the IR formulation and 0.73–0.48 for the remaining) correlating with affinity and similar duration of time above the target threshold to the IR formulation (range: 2–>24 h). The dose‐dependent receptor coverage supports the multimodal activity of trazodone, which may further contribute to its fast antidepressant action and effectiveness in controlling different symptoms in depressed patients.
INTRODUCTION
Trazodone hydrochloride is a triazolopyridine derivative, defined as the first member of the class of serotonin antagonist/reuptake inhibitors (SARIs) developed for the treatment of depression. Trazodone is currently approved for the treatment of major depressive disorder (MDD), with or without anxiety.1 Trazodone acts as a potent antagonist of the serotonin (5-HT) receptor 5-HT2A and shows moderate affinity for the 5-HT1A, 5-HT2C, and 5-HT7 receptors, acting as a weak agonist at the first and as a weak antagonist at the latter two.2 Trazodone also shows moderate affinity for the serotonin transporter SERT. Moreover, it has been demonstrated that trazodone binds with high affinity to adrenergic receptors, where it blocks α1-adrenoceptors and moderately antagonizes α2-adrenoceptors.3,4 On the other hand, trazodone has very low affinity for acetylcholine muscarinic, dopaminergic, or GABA/benzodiazepine receptors, whereas there is no full consensus about trazodone affinity for H1-histaminic receptors.5 Trazodone can be defined as a multifunctional drug owing to its dose-dependent pharmacological activity. Different clinical trials suggest that low doses of trazodone (i.e., 30-50 mg per day) may be useful for controlling insomnia, probably due to the antagonism of 5-HT2A/2C and α1 that provides a hypnotic effect.6 When used at proper antidepressant doses (e.g., starting from 100 to 150 mg per day, up to 300 mg per day), trazodone is able to exert additional pharmacological actions, such as SERT blockade. This layered response allows for full antidepressant efficacy, with a complex mixture of pharmacological functions due to the simultaneous inhibition of the serotonin transporter (SERT) and the 5-HT2A, 5-HT2C, and 5-HT7 receptors, together with partial agonism of 5-HT1A receptors. The combination of these pathways allows for the antidepressant action of trazodone.
Considering the trazodone pharmacokinetic profile, and in particular the variability of the plasma concentration, trazodone should be administered at the target dose of 300 mg/day to achieve therapeutically effective levels for managing major depressive disorder episodes.1 Two trazodone formulations are currently available: immediate-release (IR) tablets requiring multiple administrations daily and, in Europe, prolonged-release tablets for twice-daily administration. Trazodone once a day (OAD) is a prolonged-release formulation of trazodone for once-daily administration. The IR formulation has a rapid onset and short duration of action, whereas the prolonged-release (PR) formulation is characterized by an absorption boost as soon as it is administered and has a comparatively delayed maximum concentration (Cmax). Conversely, the OAD formulation provides a controlled release of trazodone over 24 h without the early high peak plasma concentration seen with the IR and PR formulations.7,8 To better support and clarify the multimodal mechanism of action of trazodone and its relevance in managing MDD symptoms, it is important to understand the occupancy at the receptors implicated in MDD, the time at which a significant RO (tonset) is reached, and the duration for which the target RO is maintained. Therefore, the aim of this project was to simulate unbound brain concentrations in adults using an existing physiologically-based pharmacokinetic (PBPK) model for trazodone, refined with a series of human in vitro parameters, and to apply a dynamic pharmacodynamic (PD) model to determine RO over time. The 30 mg IR (single dose) and 300 mg OAD (steady-state) formulations, standing at the extremes of the available dosing range, were assessed with the aim of simulating the widest possible range of exposures achievable by trazodone.

Study Highlights

WHAT IS THE CURRENT KNOWLEDGE ON THE TOPIC?

Trazodone is approved for the treatment of major depressive disorder and is marketed as immediate-release (IR), prolonged-release (PR), and once a day (OAD) tablets. The IR formulation has a rapid onset and short duration of action, whereas the PR formulation is characterized by an absorption boost as soon as it is administered and has a comparatively delayed maximum concentration (Cmax). Conversely, the OAD formulation provides a controlled release of trazodone over 24 h without the early high peak plasma concentration seen with the IR and PR formulations.

WHAT QUESTION DID THIS STUDY ADDRESS?

This work aims to identify the brain receptors reaching a threshold occupancy of 50% through static predictions and determine the occupancy versus time profile for those of interest following administration of short- and long-acting trazodone formulations.

WHAT DOES THIS STUDY ADD TO OUR KNOWLEDGE?

Brain receptor occupancy (RO) for key targets was predicted based on free drug concentrations, allowing for a physiologically relevant assessment of the different pathways affected by each formulation and dose.

HOW MIGHT THIS CHANGE CLINICAL PHARMACOLOGY OR TRANSLATIONAL SCIENCE?

The presented physiologically-based pharmacokinetic approach to assess RO can be used to guide formulation selection and dosing in clinical studies.
Development of trazodone PBPK models
The Simcyp Population-Based Simulator (version 18, release 2; Simcyp Ltd., Sheffield, UK) and its Caucasian Healthy Volunteer population were used for all simulations. A PBPK model for IR trazodone has previously been developed.9 Plasma CLiv was used directly as a model input. A Kp scalar was incorporated to accurately predict the observed volume of distribution at steady state (Vss).
For OAD trazodone, this model was updated to include the mechanistic absorption model, the Advanced Dissolution, Absorption and Metabolism (ADAM) model. The ADAM model within Simcyp has been described previously.10 In brief, the dissolution profile of OAD trazodone was estimated using the Weibull cumulative dissolution function parameters (Fmax, α, and β, respectively standing for the maximum % dissolved in vivo, the dissolution profile scale factor, and the dissolution profile shape factor) for each subject in the clinical trial (Equation 1). The mean deconvoluted in vivo dissolution profile was then compared with the optimized in vitro dissolution (Supplementary Methods).11 The dissolution profile for the commercial formulation (manufactured by Aziende Chimiche Riunite Angelini Francesco S.p.A.) was input as a discrete profile with a coefficient of variation (%CV) at each time point, as estimated from in vivo deconvolution, to capture population variability; a fixed value of %CV was also assumed. Simulations were then performed to reflect the reference clinical study for a 300 mg single dose. The mean area under the curve (AUClast and AUCinf) and Cmax were calculated for each of the simulated subjects and then compared with those from the clinical trial (Supplementary Methods).11 The updated model also includes a perfusion-limited, one-compartment brain model. Given that in vitro studies show trazodone is not a substrate for relevant transporters (P-gp,12 BCRP, MRPs, OATPs, and OCTs), this model is valid (Table S1). Simulated total brain concentrations were corrected for tissue binding using human fu,brain values from the literature.13 A maximum effect (Emax) sigmoid model was applied to compute receptor occupancy using the simulated unbound brain concentration and the in vitro Ki values for the identified receptors of interest. The final model input parameters are shown in Table 1.
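As a sketch of the dissolution input, the Weibull cumulative dissolution function can be written as F(t) = Fmax·(1 − exp(−t^β/α)). The form and the parameter values below are illustrative assumptions only; the study estimated per-subject parameters by deconvolution, and Simcyp's exact parameterization may differ.

```python
import math

def weibull_dissolution(t, f_max, alpha, beta):
    """Cumulative % dissolved at time t (h), assuming the common Weibull form
    F(t) = Fmax * (1 - exp(-(t**beta) / alpha))."""
    return f_max * (1.0 - math.exp(-(t ** beta) / alpha))

# Hypothetical parameters sketching a prolonged-release profile
profile = {t: round(weibull_dissolution(t, f_max=100.0, alpha=8.0, beta=1.0), 1)
           for t in (0, 2, 4, 8, 24)}
```

With these illustrative parameters the profile rises from 0% at t = 0 toward Fmax, mimicking a controlled-release shape.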
Simulations for IR and OAD trazodone model verification
To verify the developed models for trazodone IR and OAD formulations, simulated plasma concentrations were compared with observed clinical data for a single 30 mg dose of IR trazodone and a single 300 mg dose of OAD trazodone. Simulation study design was matched to the clinical studies (number of subjects, demographics, etc.). For the IR formulation, 10 trials of 23 subjects per trial (age 22-54, 49% female) were simulated, whereas for the OAD formulation, a single trial of 43 subjects aged 18-56 years (46.5% women) was used. Model predictions were determined to be acceptable if the simulated parameters fell within 1.5-fold of the observed values. A comparison of the observed and predicted concentration-time profiles (visual check) for each formulation was also performed.
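The 1.5-fold acceptance criterion described above amounts to a symmetric ratio check. A minimal sketch (the function name and example values are ours, not from the study):

```python
def within_fold(predicted, observed, fold=1.5):
    """True if predicted/observed falls within [1/fold, fold]."""
    ratio = predicted / observed
    return 1.0 / fold <= ratio <= fold

# e.g., a simulated AUC of 12 vs. an observed 10 passes; 16 vs. 10 does not
checks = [within_fold(12.0, 10.0), within_fold(16.0, 10.0)]  # [True, False]
```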
Target identification
Measured Ki values for several human molecular targets related to activity in the central nervous system (CNS) were available. These Ki values were determined using competitive radioligand binding assays in recombinant (CHO or HEK-293) cells expressing human receptors (58 targets, mainly GPCRs and transporters, reported in Table S1). Trazodone was tested at seven log dilutions starting from a 10 µM concentration. From the resulting competition curve, half-maximal inhibitory concentration (IC50) values were determined by nonlinear least squares regression analysis. The inhibition constant (Ki) was calculated for each receptor according to the Cheng-Prusoff equation.14 Affinity toward the different receptors was determined in the range 10-1000 nM.

The list of targets was refined by identifying those with greater than or equal to 50% predicted receptor occupancy (RO ≥0.5) at the maximum unbound brain concentration (Cmax,brain,u) for each dose/formulation level. This threshold was selected as it is the minimal SERT RO achieved for another multimodal drug, vortioxetine, in its clinically effective dose range.15 Free drug concentrations in the brain were used for final calculations, as only protein-unbound drug concentrations are considered pharmacologically active. The RO was calculated using an Emax sigmoid model following the formula:

RO = [trazodone]brain / (Ki + [trazodone]brain)    (2)

with Ki being the affinity constant of trazodone HCl for the several molecular targets (Table S1). Trazodone maximum free base brain concentrations were taken from Simcyp V18r2 simulations for each dose (30 mg IR single dose, 30 mg IR at steady-state, and 300 mg OAD at steady-state). The reported literature value for human fu,brain of 0.077 was used to correct for the unbound concentration.13

Receptor occupancy simulations

The previously developed and validated trazodone model was used for all simulations.9 To determine the RO for the IR trazodone formulation, 10 virtual trials of 10 healthy subjects (50% women) aged 20-50 years receiving either a single oral dose or multiple oral doses (q.d., 8 days) of 30 mg IR trazodone were generated. Simulations for the OAD formulation were completed in the same manner, with 10 virtual trials of 10 healthy subjects (50% women) aged 20-50 years receiving multiple oral doses of 300 mg OAD trazodone.

Effect of in vitro variability on RO

As the trazodone Ki values used for computations were the result of repeated in vitro assays on human receptors of interest, the effect of standard variability was evaluated for SERT, whose RO was determined for both formulations using two reported Ki values (160 and 280 nM; Table S2). The difference in the resulting RO, onset time, and duration of time at or above the target threshold between the two Ki values was assessed to determine the impact of the observed variability on RO predictions.
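The static screening step can be sketched as follows, applying Equation 2 to the unbound concentration obtained with the fu,brain of 0.077 quoted above. The Ki values are those quoted in the text for four example targets, while the total brain Cmax used here is a hypothetical number for illustration only:

```python
F_U_BRAIN = 0.077  # literature human unbound fraction in brain

def static_ro(c_brain_total_nM, ki_nM, fu_brain=F_U_BRAIN):
    """Equation 2 applied to the unbound brain concentration."""
    c_u = c_brain_total_nM * fu_brain  # unbound (pharmacologically active) concentration
    return c_u / (ki_nM + c_u)

# Ki values (nM) quoted in the text; Cmax,brain is hypothetical
ki_values = {"5-HT2A": 14.0, "alpha1B": 15.0, "alpha1A": 98.0, "SERT": 160.0}
c_total = 2000.0  # hypothetical total brain Cmax (nM)
covered = {name: round(static_ro(c_total, ki), 2)
           for name, ki in ki_values.items()
           if static_ro(c_total, ki) >= 0.5}
```

With this illustrative concentration, the high-affinity targets clear the 50% threshold while SERT (Ki = 160 nM) falls just below it, mirroring the dose-dependent coverage described in the text.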
Model validation
The observed and simulated pharmacokinetic (PK) parameters for the 30 mg single dose IR trazodone and 300 mg single dose OAD trazodone are summarized in Table S3. The concentration-time profiles for the IR and OAD formulations are shown in Figure S1.
Target identification
The simulated plasma and brain concentrations of trazodone following single and multiple oral doses of 30 mg IR trazodone and multiple doses of 300 mg OAD trazodone are shown in Figures S2-S4. The PK parameters for all dosing intervals and formulations are summarized in Table 2.
Using predicted brain C max,brain,u , static calculations were completed for previously identified targets to determine those reaching at least 50% occupancy for each dose and formulation of interest (Table 3). A total of 16 targets were found to be at or above the threshold value at least in one of the conditions evaluated (i.e., 300 mg trazodone OAD at steady-state) and were included in subsequent receptor occupancy modeling.
Immediate release formulation
Single dose receptor occupancy
Mean brain receptor occupancy following a single oral dose of 30 mg (IR formulation, adjusted to free base concentration) to healthy subjects (10 trials of 10 subjects) was simulated. RO was predicted for the seven targets through the application of the Emax sigmoid model using unbound brain concentrations. Simulations were used to estimate the mean tonset, that is, the time to reach RO greater than or equal to 0.5, and the duration of coverage at or above the target RO threshold for each target of interest.
Mean peak RO ranged from 0.59 to 0.91 for the targets of interest, with an average tonset of 0.12 h (Figure 1; Table 4). As would be expected, time at or above the target RO threshold (TAT) of 0.5 decreased with increasing Ki, with a TAT of 27 h for the most potent target (5-HT2A, Ki = 14 nM) dropping to just over 2 h for the target with the highest Ki (α1A, Ki = 98 nM).

Table 2 Summary of total plasma and brain PK parameters for 30 mg IR trazodone following single and multiple oral doses and 300 mg OAD trazodone following multiple oral doses

Multiple dose receptor occupancy
Ten trials of 10 healthy subjects were simulated for each of the seven targets showing static RO greater than or equal to 50% following multiple doses of 30 mg IR trazodone. The Emax sigmoid model was applied to compute RO using simulated dynamic brain unbound concentration-time profiles at steady-state (day 7 of 8 days of dosing). The predicted mean peak receptor occupancies for the targets of interest were then used to estimate tonset (relative to the first dose) and the duration of time at or above the threshold, shown in Table 4 and Figure 2.

Mean peak RO at steady-state was similar to that observed following a single IR dose, ranging from 0.64 to 0.92. Similarly, the average tonset to reach the target RO was 0.15 h (range: 0.09-0.25 h). Targets with high affinity showed TAT greater than 24 h (5-HT2A and α1B; Ki = 14 nM and 15 nM, respectively), with TAT ranging from 18.0 h to 2.10 h for the remaining receptors evaluated.
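The tonset and time-above-threshold (TAT) metrics reported above can be derived from any sampled RO-time profile. The sketch below uses simple linear interpolation on hypothetical sample points (the study derived these metrics from Simcyp outputs):

```python
def onset_and_tat(times, ro, threshold=0.5):
    """Return (t_onset, TAT): first time RO reaches the threshold, and total
    time spent at or above it, interpolating linearly between samples."""
    t_onset = times[0] if ro[0] >= threshold else None
    tat = 0.0
    for i in range(1, len(times)):
        t0, t1, r0, r1 = times[i - 1], times[i], ro[i - 1], ro[i]
        # linear-interpolated crossing time within [t0, t1] (if any)
        tc = t0 + (threshold - r0) / (r1 - r0) * (t1 - t0) if r1 != r0 else t0
        if r0 >= threshold and r1 >= threshold:
            tat += t1 - t0                      # whole interval above threshold
        elif r0 < threshold <= r1:
            tat += t1 - tc                      # rising crossing
            if t_onset is None:
                t_onset = tc
        elif r1 < threshold <= r0:
            tat += tc - t0                      # falling crossing
    return t_onset, tat

# Hypothetical hourly samples of a receptor-occupancy profile
t_on, tat = onset_and_tat([0, 1, 2, 3, 4], [0.2, 0.6, 0.8, 0.6, 0.2])  # 0.75 h, 2.5 h
```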
Once a day formulation
Multiple dose receptor occupancy
Ten trials of 10 healthy subjects were simulated for each of the 16 targets showing static RO greater than or equal to 50% following daily administration of 300 mg OAD trazodone. The Emax sigmoid PD model was applied to compute receptor occupancies at steady-state (day 7), as shown in Figure 3. The predicted mean receptor occupancy, tonset, and TAT for the targets of interest are also summarized.

Table 3 Calculated RO for proposed targets of interest by unbound brain concentration (Cmax,brain,u). For normalization from molar to ng/ml, the molecular weight of the trazodone free base was considered. Values in bold indicate RO greater than or equal to 50%.

To determine whether the average brain concentration at steady-state (Cave,brain,u) could be used to identify targets at or above the RO threshold, receptor occupancies were also computed using Cave,brain,u. The simulated difference in RO between the two methods ranged from 10% to 17% for targets with relatively low binding affinity (Ki > 100 nM), whereas minimal difference was observed for those with high affinity (≤5%; Table S4). It is important to note that the simulated difference in RO also reflects the difference between mean RO (Cave,u as input) and peak RO (dynamic brain concentration as input), both calculated on the seventh dosing day. For targets with lower affinity, the target RO (tonset, calculated from the first dose) is reached only after the second dose (i.e., >24 h), once the drug has accumulated in the brain. This is supported by the same targets not reaching the threshold value when the mean steady-state concentration was used to determine occupancy, and can be observed in the PD profiles presented in Figure S5.

In vitro variability
Ten virtual trials of 10 healthy subjects, as described previously, were generated for each dose and formulation of interest; results are shown in Table S5 and Figure S6. For the IR formulation, a difference of ~1.4-fold in RO was observed between the low and high Ki values. There was no significant difference in RO between day 1 and day 7. A similar difference was seen for the OAD formulation.
DISCUSSION
Using the updated trazodone PBPK model, brain receptor occupancy was determined for the 30 mg IR (single dose and at steady-state) and 300 mg OAD (at steady-state) formulations. As it is difficult to determine brain concentrations, and therefore receptor occupancies, in vivo, this approach allows for a better mechanistic understanding of the mechanism of action of trazodone when administered as an IR versus OAD formulation. This analysis demonstrated the utility of a PBPK-based approach for generating initial estimates of in vivo occupancy, particularly when clinical data are unavailable. Simulating the widest possible range of exposures for trazodone, from 30 mg IR (single dose) to 300 mg OAD (q.d.), further supported the assessment of exposure-related differentiation of brain receptor activation by trazodone between formulations.

Figure 3 Simulated mean receptor occupancy following 7 days of dosing of once a day trazodone (300 mg q.d.; mean: black line; 95th and 5th percentiles: grey lines) in healthy subjects
For the 30 mg IR formulation, seven targets (5-HT2A, α1B, 5-HT1D, α1D, 5-HT2B, 5-HT1A, and α1A) were identified through static predictions from the unbound brain Cmax to meet the threshold of 50% RO. Simulation shows that all targets exhibit rapid onset (tonset ≤0.25 h) after the first dose, with time above the target threshold (i.e., RO ≥50%) ranging from 2.4 to 26.9 h. It is important to note that, in the current study, the kinetics of association and dissociation of drug binding to a specific target is not considered due to the lack of data on kon and koff rates. Therefore, the duration of coverage at or above the target RO threshold may be overestimated. In this work, a "time above threshold" is reported rather than a "duration of occupancy" to address this difference. No significant differences were observed in the RO between day 1 and day 7.
Sixteen targets were identified from static modeling to reach 50% RO for the OAD formulation (300 mg q.d.). Simulations showed geometric mean peak RO at steady-state ranging from 0.48 to 0.97, with higher-affinity targets (Ki ≤300 nM) reaching the threshold occupancy after the first dose (tonset range: 0.42-3.44 h). Targets with poor affinity did not reach RO greater than or equal to 0.5 until the second or third day of dosing, and in one case did not reach the threshold value at all, depending on the Ki value.
Comparing the mean RO determined from Cave,brain,u to the mean peak RO simulated using the dynamic brain unbound concentration-time profiles as PK inputs, differences of 6% or less in the predicted RO were observed for targets with relatively tight binding affinity (Ki < 100 nM). The simulated difference in RO increased to 10-17% for targets with relatively poor binding affinity (Ki > 100 nM). Given the similar results obtained from the two methods, it can be assumed that an adequate estimate of RO can be determined from average concentrations when dynamic concentration-time data are unavailable.
It is frequently noted that there are multiple reported values for parameters such as Ki, with little information on how this affects subsequent clinical predictions. Here, the utility of PBPK for assessing the impact of this variability is shown through simulations with two reported Ki values for SERT, a key target for trazodone efficacy. This also serves to evaluate the effect of PD variability, as it is important to note that the simulator only accounts for variability in the PK simulation through built-in covariates (i.e., age, weight, sex, and plasma protein levels), not in the PD calculations. Using the different experimental Ki values, the RO for a specific formulation and dosing scheme differed by an average of 1.4-fold. For repeat doses of 30 mg trazodone IR, this resulted in predicted ROs that reached the target threshold in one scenario but not in the other (RO = 0.52 and 0.38 for the low and high Ki value, respectively). This difference appears to be more pronounced for borderline occupancy cases. In contrast, the difference in RO was markedly reduced in the case of OAD trazodone, where SERT RO was generally higher (RO = 0.76 and 0.65, respectively).
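The SERT sensitivity to the assumed Ki can be reproduced from Equation 2 alone: back-calculating a hypothetical unbound brain concentration from the reported low-Ki occupancy of 0.52 recovers the reported 0.38 at the higher Ki, illustrating why borderline occupancies are most affected:

```python
def ro(c_unbound, ki):
    """Equation 2: hyperbolic (Emax-type) receptor occupancy."""
    return c_unbound / (ki + c_unbound)

# Invert Equation 2 at the low SERT Ki (160 nM) and the reported IR RO of 0.52
c_ir = 160.0 * 0.52 / (1.0 - 0.52)   # ~173 nM (hypothetical back-calculation)
ro_low, ro_high = round(ro(c_ir, 160.0), 2), round(ro(c_ir, 280.0), 2)  # 0.52, 0.38
```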
Although previous work has been performed to determine brain RO for trazodone targets, the approach presented here differs in three key areas. 16 First, a comprehensive evaluation of potential targets was completed using initial static calculations, with over 50 targets evaluated compared to ~30 previously explored. Next, the RO calculations used here were based on unbound trazodone concentrations in the brain rather than total brain trazodone concentrations, as only the free drug is able to bind with the receptors. Finally, RO was determined using a full PBPK model with a perfusion-limited brain compartment, as supported by data showing no significant involvement of brain transporters, compared to a two-compartment model using a rodent K_p,uu (i.e., a ratio of unbound brain to unbound plasma concentration) value to adjust concentrations. These key differences allow for an updated prediction of RO that better captures the processes involved in vivo.
It should be noted that the simulations presented here for the IR formulation do result in a 1.6-fold overprediction of trazodone AUC compared to the clinical value, which could affect the interpretation of the simulated RO. In model development for the IR and OAD formulations, two studies reporting trazodone clearance were identified, with reported CL_IV values differing by approximately twofold (5.3 L/h vs. 10.0 L/h). 17,18 From the data presented, there is no obvious explanation for the discrepancy, as the only difference in study design appears to be PK sampling duration (24 h vs. 26 h). In the final model, CL_IV has been set to 5.0 L/h for both formulations to ensure the developed models can adequately capture the observed drug exposures for both formulations. Also of importance for this discussion is that the simulated F differs between the formulations, primarily due to the difference in the predicted value for F_a. For the IR formulation, F_a is ~1, in agreement with the findings from a human mass balance study in healthy volunteers, whereas the simulated F_a for the OAD formulation is 0.5, predicted from a mechanistic absorption model and in vivo dissolution profile. To improve the prediction of the IR formulation, a higher CL_IV or lower F_a would be required; the former would result in poor prediction of the OAD formulation, and the latter is not supported by the high permeability and reasonable solubility of the drug.
As described by Morgan et al., it is important to understand drug exposure at the site of action, target binding, and expression of functional pharmacological activity. 19 In turn, achieving sufficient exposure, target binding, and pharmacology modulation for efficacy is key to clinical success. Although further work, including imaging studies to visualize the in vivo occupancy during treatment, would serve as confirmation for these results, this modeling exercise was able to simulate exposure at the site of action (brain) and occupancy at the relevant receptors. Overall, the results are consistent with the dose-dependent pharmacological activity of trazodone. The lower dose (i.e., 30 mg of the IR formulation, both after single and repeated q.d. dosing) acts via the most potent functional properties, 5-HT_2A and α_1-adrenergic antagonism and 5-HT_1A partial agonism, and achieves reasonable levels of occupancy for these receptors (>0.59). The proper antidepressant dose of trazodone (i.e., 300 mg of the OAD formulation after repeated q.d. dosing) recruits additional pharmacological actions, such as SERT blockade and antagonism of histaminergic H_1, 5-HT_2C, and 5-HT_7 receptors, and is able to achieve reasonable levels of occupancy at these additional receptors (>0.56). These findings form a strong foundation to further evaluate the multifunctional and multimodal mechanism of action of trazodone in achieving full antidepressant efficacy at the target daily dose of 150-300 mg.
Generalized Uncertainty Principle, Modified Dispersion Relations and Early Universe Thermodynamics
In this paper, we study the effects of the Generalized Uncertainty Principle (GUP) and Modified Dispersion Relations (MDRs) on the thermodynamics of ultra-relativistic particles in the early universe. We show that the limitations imposed by the GUP and the particle horizon on measurement processes lead to certain modifications of early universe thermodynamics.
Introduction
The Generalized Uncertainty Principle is a common feature of all promising candidates of quantum gravity. String theory, loop quantum gravity and noncommutative geometry (with deeper insight into the nature of spacetime at the Planck scale) all indicate a modification of the standard Heisenberg principle [1][2][3][4][5][6][7][8][9][10]. Recently it has been indicated that within quantum gravity scenarios, a modification of the dispersion relation (the relation between energy and momentum of a given particle) is unavoidable [11][12][13]. There are some conceptual relations between the GUP and MDRs. These possible relations have been studied recently [14,15].
These quantum gravity effects, in spite of being small, are important since they can modify experimental results. There have been several efforts to provide experimental evidence of these small effects. For example, Amelino-Camelia et al., by investigating the potential sensitivity of Gamma-Ray Burster observations to wave dispersion in vacuo, have outlined aspects of an observational programme that could address possible detection of these quantum gravity effects [16]. Amelino-Camelia and Piran have argued that a Planck-scale deformation of Lorentz symmetry can be a solution to the Ultra High Energy Cosmic Rays (UHECR) with energies above the GZK threshold and to the TeV-γ paradoxes [17]. Gambini and Pullin have studied light propagation in the picture of semi-classical spacetime that emerges in canonical quantum gravity in the loop representation [18]. They have argued that in such a picture, where spacetime exhibits a polymer-like structure at microscales, it is natural to expect departures from the perfect non-dispersiveness of the ordinary vacuum. They have evaluated these departures by computing the modifications to Maxwell's equations due to quantum gravity, and have shown that under certain circumstances non-vanishing corrections appear that depend on the helicity of the propagating waves. These effects could lead to observable cosmological predictions of the discrete nature of quantum spacetime. They have then used observations of non-dispersiveness in the spectra of gamma-ray bursts at various energies to constrain the type of semi-classical state that describes the universe. Jacobson et al. have shown that threshold effects and Planck-scale Lorentz violation can be jointly constrained by high-energy astrophysics [19]. These works provide possible experimental schemes for the detection of small quantum gravity effects.
However, there are two extreme domains, black hole structure and the early stages of the universe's evolution, where these quantum gravity effects are dominant. Corrections to black hole thermodynamics due to the quantum gravitational effects of the minimal length and the GUP have been studied extensively (see [20] and references therein). On the other hand, part of the thermodynamical implications of the GUP and MDRs have been studied by Amelino-Camelia et al. [21] and Nozari et al. [22]. Thermodynamics of the early universe within the standard Heisenberg principle has been studied by Rahvar et al. [23]. Since quantum gravitational effects are very important in the early stages of the universe's evolution, it is natural to investigate early universe thermodynamics within the GUP and MDR frameworks. Here we are going to formulate the thermodynamics of ultra-relativistic particles in the early universe within the GUP and MDR frameworks. In the first step, using the GUP as our primary input, we calculate the thermodynamical properties of ultra-relativistic particles in the early universe. In formulating early universe thermodynamics within the GUP framework, two main points should be considered, owing to the limitations imposed on measurement processes: first, due to the causal structure of spacetime, the maximum distance for causal relations is the particle horizon radius; second, there is a minimum momentum imposed by the GUP which restricts the minimum value of the energy. In the next step, for a general gaseous system composed of ultra-relativistic particles, we find the density of states using MDRs with Bose-Einstein or Fermi-Dirac statistics, and the thermodynamics of the system then follows. In each step we discuss the ordinary limits of our equations and we compare the consequences of the two approaches.
Preliminaries
The emergence of the generalized uncertainty principle can be motivated and finds support in the direct analysis of any quantum gravity scenario. This means that the GUP itself is a model-independent concept. Generally, the GUP can be written as [24]

δx δp ≥ ħ[1 + κ(δx)² + η(δp)² + γ],    (1)

where κ, η and γ are positive and independent of δx and δp (but may in general depend on the expectation values of x and p). This GUP leads to a nonzero minimal uncertainty in both position and momentum for positive κ and η [24]. If we set κ = 0 we find

δx δp ≥ ħ[1 + η(δp)² + γ].    (2)

Since we are going to deal with absolutely smallest uncertainties, we set γ = 0 from now on. So we find

δx δp ≥ ħ[1 + η(δp)²].    (3)

This relation leads to a nonzero minimal observable length of the order of the Planck length, (δx)_min = ħ√η. Any position measurement in quantum gravity has at least (δx)_min as its lower limit of position uncertainty. This relation has an immediate consequence for the rest of statistical mechanics: it modifies the fundamental volume ω₀ of accessible phase space for representative points. In ordinary statistical mechanics, it is impossible to define the position of a representative point in the phase space of the given system more accurately than the situation which is given by (δq δp)_min ≥ ħ. In other words, around any point (q, p) of the (two-dimensional) phase space there exists an area of the order ħ within which the position of the representative point cannot be pin-pointed. In ordinary statistical mechanics we therefore have the fundamental volume ω₀ = ħ. Since in the quantum gravity era δp ∼ p, we can interpret equation (3) as a generalization of ħ, namely ħ_eff = ħ(1 + ηp²).
Therefore, we find the following generalization of the fundamental volume:

(ω₀)_eff = ħ(1 + ηp²).

Since the total number of microstates is given by Ω = ω/(ω₀)_eff (here ω is the volume of the accessible phase space), we see that the GUP leads to a reduction of accessible microstates and therefore a reduction of entropy. In other words, as we approach the Planck scale regime with high energy and momentum particles, the volume of the fundamental cell increases in such a way that eventually the number of microstates tends to unity and therefore the entropy vanishes. This is a novel prediction of quantum gravity. Recently we have calculated the microcanonical entropy of an ideal gaseous system and we have observed an unusual thermodynamics of systems at very short distances or, equivalently, in the very high energy regime [22]. Another consequence of the GUP in the form of relation (3) has been formulated by Kempf et al. [24]. They have shown that within the momentum representation, the generalization of the scalar product reads

⟨φ|ψ⟩ = ∫ dp/(1 + ηp²) φ*(p) ψ(p),    (6)

where φ and ψ are momentum space state functions. For ultra-relativistic particles with E = pc, we should consider the following generalization:

⟨φ|ψ⟩ = ∫ dE/(1 + ηE²) φ*(E) ψ(E),    (7)

where we have set c = 1.
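The reduction of microstates implied by ħ_eff = ħ(1 + ηp²) can be illustrated numerically. In the sketch below, ħ = η = 1 (Planck units) and the accessible phase-space volume ω is an arbitrary illustrative value:

```python
import math

# Effective fundamental cell (ω0)_eff = ħ(1 + η p²) grows with momentum,
# so for a fixed accessible phase-space volume ω the microstate count
# Ω = ω / (ω0)_eff shrinks and the entropy S = ln Ω decreases.
# Planck units: ħ = 1, η = 1; ω is an arbitrary illustrative value.

hbar, eta = 1.0, 1.0
omega = 1.0e6  # accessible phase-space volume (illustrative)

def cell(p):
    return hbar * (1.0 + eta * p * p)

def entropy(p):
    return math.log(omega / cell(p))

for p in (0.0, 1.0, 10.0, 100.0, 1000.0):
    print(f"p = {p:7.1f}  S = ln Ω = {entropy(p):8.3f}")
```

For the chosen ω, the entropy drops to essentially zero once the cell size ħ(1 + ηp²) becomes comparable to ω, which is the "number of microstates tends to unity" behavior described above.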
On the other hand, if we set η = 0 in (1), we find

δx δp ≥ ħ[1 + κ(δx)²],

which for positive κ leads to a nonzero minimal uncertainty in momentum. This statement leads to a space-dependent generalization of ħ. This type of generalization has nothing to do with dynamics and there is no explicit physical interpretation of it, at least up to now. From another perspective, in scenarios which consider the spacetime foam intuition in the study of quantum gravity phenomena, the emergence of modified dispersion relations takes place naturally [25]. As a consequence, wave dispersion in the spacetime foam might resemble wave dispersion in other media. Since the Planck length fundamentally sets the minimum allowed value for wavelengths, a modified dispersion relation can also be favored. Recently it has been shown that a modified energy-momentum dispersion relation can also be introduced as an observer-independent law [26]. In this case, the Planckian minimum-wavelength hypothesis can be introduced as a physical law valid in every frame. Therefore, the analysis of some quantum-gravity scenarios has shown some explicit mechanisms for the emergence of modified dispersion relations. For example, in the framework of the noncommutative geometry and loop quantum gravity approaches such modified dispersion relations have been motivated (see for example [21] and references therein). In most cases one is led to consider a dispersion relation of the type (note that from now on we set ħ = c = 1)

p² = f(E) ≃ E² − µ² + α₁ l_p E³ + α₂ l_p² E⁴ + O(l_p³ E⁵),    (9)

where f is the function that gives the exact dispersion relation, and on the right-hand side we have assumed the applicability of a Taylor-series expansion for E ≪ 1/l_p. The coefficients α_i can take different values in different quantum-gravity proposals. Note that m is the rest energy of the particle and the mass parameter µ on the right-hand side is directly related to the rest energy, but µ ≠ m if the α_i do not all vanish.
Since we are working in the Planck regime, where the rest mass is much smaller than the particle's kinetic energy, there is no risk of confusion between m and µ. While in the parametrization of (9) we have included a possible correction term suppressed only by one power of the Planck length, in the GUP such a linear-in-l_p term is assumed not to be present. For the MDR a large number of alternative formulations, including some with the linear-in-l_p term, are being considered, as they find support in different approaches to the quantum-gravity problem, whereas all discussions of a GUP assume that the leading-order correction should be proportional to the square of l_p (as has been indicated by Amelino-Camelia et al. [21], a linear-in-l_p term in the MDR has no support in the string theory analysis of the black hole entropy-area relation, and therefore it seems that this term should not be present in the MDR; recently we have shown that the coefficients of all odd powers of E in the MDR should be zero [15]). Within quantum field theory, the relation between particle localization and its energy is given by E ≥ 1/δx, where δx is the particle position uncertainty. It is obvious that due to both the GUP and the MDR this relation should be modified. In a simple analysis based on the familiar derivation of the relation E ≥ 1/δx [27], one can obtain the corresponding generalized relation. Since we need this generalization in the forthcoming arguments, we give a brief outline of its derivation here. We focus on the case of a particle of mass M at rest, whose position is being measured by a procedure involving a collision with a photon of energy E_γ and momentum p_γ. According to Heisenberg's uncertainty principle, in order to measure the particle position with precision δx, one should use a photon with momentum uncertainty δp_γ ≥ 1/δx. Following the standard argument [28], one takes this δp_γ ≥ 1/δx relation and converts it into the relation δE_γ ≥ 1/δx using the special relativistic dispersion relation.
Finally, δE_γ ≥ 1/δx is converted into the relation M ≥ 1/δx, because the measurement procedure requires δE ≤ M in order to ensure that the relevant energy uncertainties are not large enough to allow the production of additional copies of the particle whose position is being measured. If indeed our quantum-gravity scenario hosts a Planck-scale modification of the dispersion relation of the form (9), then clearly the relation between δp_γ and δE_γ should be rewritten as follows:

δp_γ ≃ [1 + α₁ l_p E_γ + ((3/2)α₂ − (3/8)α₁²) l_p² E_γ²] δE_γ.    (10)

This relation will modify the density of states for statistical systems. Note that one can use the GUP to find such a relation between δp_γ and δE_γ [15].
GUP and Early Universe Thermodynamics
Now we are going to calculate the thermodynamical properties of ultra-relativistic particles in the early universe, using the generalized uncertainty principle. We consider the following GUP as our primary input:

δx δp ≥ 1 + ξ² l_p² (δp)²,    (11)

where ξ is a dimensionless constant. Consider the early stages of the universe's evolution. Analogous to a particle inside a box, in the case of the early universe one can consider a causal box (i.e., the particle horizon) within which any observer in the universe has to perform measurements [29]. In the language of wave mechanics, if Ψ denotes the wave function of a given particle, the probability of finding this particle by an observer outside its horizon is zero, i.e., |Ψ(x > horizon)|² = 0. From the theory of relativity, the measurement of a stick's length can be done by sending simultaneous signals to the observer from its two endpoints, where for scales larger than the causal size those signals would need more than the age of the universe to be received. Looking back at the history of the universe, the particle horizon after the Planck era grows as H⁻¹, but inflates to a huge size by the beginning of the inflationary epoch. Here H is the Hubble parameter. In the pre-inflationary epoch, the maximum uncertainty in the location of a particle, δx = H⁻¹, results in an uncertainty in the momentum of the particle which is given by

H⁻¹ δp ≥ 1 + ξ² l_p² (δp)².

This leads to a minimum uncertainty in momentum,

(δp)_min = 2H/(1 + √(1 − 4ξ² l_p² H²)) ≃ H.

Therefore, we can conclude (assuming that p ∼ δp) that there is a minimum momentum of order H, which leads to a minimum energy E_min for ultra-relativistic particles in three space dimensions. Now, suppose that where ϑ is given by To obtain the complete thermodynamics of the system, we calculate the partition function of the system and then use standard thermodynamical relations.
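Taking the GUP in the standard string-inspired form δx δp ≥ 1 + ξ² l_p² (δp)² (an assumption consistent with the dimensionless ξ above; ħ = c = 1), the minimum momentum uncertainty allowed by the horizon δx = H⁻¹ follows from solving the resulting quadratic. The sketch below checks numerically that for H⁻¹ ≫ l_p it reduces to δp_min ≈ H:

```python
import math

# Minimum momentum uncertainty from the GUP  δx·δp ≥ 1 + ξ²·l_p²·(δp)²
# (ħ = c = 1), with the maximum position uncertainty set by the particle
# horizon, δx = 1/H. Values are illustrative Planck-unit numbers; for
# H⁻¹ ≫ l_p the minimum momentum uncertainty reduces to δp_min ≈ H.

def dp_min(H, xi=1.0, l_p=1.0):
    # Smaller root of ξ²l_p²(δp)² − (δp)/H + 1 = 0, written in a
    # numerically stable form (avoids catastrophic cancellation).
    dx = 1.0 / H
    disc = dx * dx - 4.0 * xi**2 * l_p**2
    if disc < 0.0:
        raise ValueError("horizon too small: no real solution")
    return 2.0 / (dx + math.sqrt(disc))

H = 1.0e-6  # Hubble parameter in Planck units (illustrative)
print(dp_min(H) / H)  # ≈ 1: δp_min ≈ H when H⁻¹ ≫ l_p
```

Note that the exact root is always slightly larger than H, with the correction controlled by (ξ l_p H)², so the Planck-scale term only matters when the horizon itself approaches the Planck length.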
In classical statistical mechanics, the partition function for a system composed of ultra-relativistic noninteracting monatomic particles (fermions or bosons) is given by the standard expression. In our case, due to the limitations imposed by the GUP and the particle horizon, this expression must be generalized, using relations (7), (15) and (16) respectively. By definition, the entropy of the system is given by

S = −(∂F/∂T)_V,

where F is the free energy of the system, defined as

F = −(1/β) ln Z.

So the entropy of the system can be written as

S = ∂(T ln Z)/∂T.    (22)

For ultra-relativistic fermions this relation leads to the explicit expression (23), and for bosons to the expression (24); in these equations D is defined in terms of p and H, and B = πH. Note that both (23) and (24) are well behaved in the high- and low-temperature limits. In the standard situation, we have ξ = 0, η = 0 and E_min = 0, so we find the well-known standard results for the entropy of the corresponding ultra-relativistic fermionic or bosonic systems: from (22) we recover the familiar expressions for fermions and bosons respectively. The pressure of the ultra-relativistic gas is given by P = (1/(βV)) ln Z, with corresponding explicit expressions for fermions and bosons; in the standard situation these reduce to the familiar results. The specific heat of the system, defined as C_V = (∂U/∂T)_V, can be written in the closed form (33), C_V = T(∂S/∂T)_V. One can obtain the explicit form of C_V for fermions and bosons using relation (33) together with (23) and (24), and the standard case follows in the same limits. Figure 1 shows the values of the entropy in different situations. In the standard thermodynamics of an ultra-relativistic fermionic or bosonic gas, the entropy of the system tends to zero at T₀ = 0. This situation is shown in Figure 1, (a) and (b).
Within the GUP framework, the entropy tends to zero at a nonzero temperature, that is, at some T > T₀. This is a result of the quantum fluctuations of spacetime itself. Figure 2 shows the corresponding behavior of the pressure as a function of temperature. Note that these figures are plotted in arbitrary units and show only the general behaviors of the functions. Figure 3 shows the behavior of the specific heat of the system under various conditions. In the GUP framework, the general behavior of C_V departs considerably from its standard counterpart in the high-temperature regime.
MDR and Early Universe Thermodynamics
Now we are going to formulate early universe thermodynamics within the MDR framework. We consider a gaseous system composed of ultra-relativistic monatomic, non-interacting particles. First we derive the density of states. Consider a cubical box with edges of length L (and volume V = L³) containing black-body radiation (photons). The wavelengths of the photons are subject to the boundary condition 1/λ = n/(2L), where n is a positive integer. This condition implies, assuming that the de Broglie relation is left unchanged, that the photons have (space-)momenta that take the values p = n/(2L). Thus momentum space is divided into cells of volume V_p = (1/(2L))³ = 1/(8V). From this point, it follows that the number of modes with momentum in the interval [p, p + dp] is given by

g(p) dp = 8πV p² dp.    (39)

Assuming an MDR of the type parameterized in (9), one then finds that (m = 0 for photons)

p ≃ E[1 + (α₁/2) l_p E + ((α₂/2) − (α₁²/8)) l_p² E²]    (40)

and

dp ≃ [1 + α₁ l_p E + ((3/2)α₂ − (3/8)α₁²) l_p² E²] dE.    (41)

Using this relation in (39), one obtains

g(E) dE = 8πV E²[1 + 2α₁ l_p E + ((5/2)α₂ + (5/8)α₁²) l_p² E²] dE.    (42)

This is the density of states which we use in our calculations. Note that we have not set α₁ = 0, to ensure the generality of our discussion, but we will address the corresponding situation at the end of our calculations.
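The expanded density of states can be cross-checked against the exact massless MDR numerically; the parameter values below are illustrative, and the agreement should hold for E ≪ 1/l_p:

```python
import math

# Cross-check: exact g(E) = 8πV p(E)² dp/dE from the massless MDR
#   p² = E² + a1*lp*E³ + a2*lp²*E⁴
# versus the second-order expansion
#   g(E) ≈ 8πV E² [1 + 2 a1 lp E + (5/2 a2 + 5/8 a1²) lp² E²].
# Illustrative parameters; the expansion is valid for E ≪ 1/lp.

a1, a2, lp, V = 1.0, 1.0, 1.0e-3, 1.0

def p_of_E(E):
    return math.sqrt(E * E + a1 * lp * E**3 + a2 * lp**2 * E**4)

def g_exact(E, h=1e-6):
    dpdE = (p_of_E(E + h) - p_of_E(E - h)) / (2 * h)  # central difference
    return 8 * math.pi * V * p_of_E(E) ** 2 * dpdE

def g_expanded(E):
    corr = 1 + 2 * a1 * lp * E + (2.5 * a2 + 0.625 * a1**2) * lp**2 * E**2
    return 8 * math.pi * V * E * E * corr

E = 10.0  # lp*E = 0.01, well inside the expansion's validity
print(g_exact(E) / g_expanded(E))  # ≈ 1, agreement up to O((lp E)³)
```

This also makes the origin of the coefficients transparent: the 2α₁ term comes from multiplying the expansions of p² and dp/dE, and the quadratic coefficient collects the cross terms.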
To obtain the thermodynamics of the system under consideration, we start with the partition function for fermions and bosons,

ln Z± = ± ∫_{E_min}^∞ g(E) ln(1 ± e^{−βE}) dE,    (43)

where + and − stand for fermions and bosons respectively, and β = 1/T since k_B = 1.
Using equation (42) in the form

g(E) dE = 8πV E²(1 + aE + bE²) dE,

where for simplicity we have defined a = 2α₁l_p and b = 5((1/2)α₂ + (1/8)α₁²) l_p², one can compute the integral of equation (43) to find the expressions (47) and (48) for the entropy of fermions and bosons respectively. One may ask about the relation between these two results and the corresponding results of the GUP, that is, relations (23) and (24). Although these results seem to differ in their β dependence, note that if we set α₁ = 0 (which is reasonable given the argument presented earlier), we find a = 0, and then the β dependences of our findings coincide with each other. The only difference which remains is a difference in numerical factors. This argument shows that the results of the GUP and MDRs for the thermodynamics of the early universe do not essentially differ from each other in their temperature dependence and overall behavior.
In the standard situation, we have a = b = 0 and E_min = 0, and we recover the standard expressions for the entropy of fermions and bosons. In the presence of the MDR, the pressures of the corresponding fermionic and bosonic systems follow in the same way. In the standard situation we find the well-known relation P = U/(3V), which for bosons leads to P_b = 8π(π⁴/45)T⁴, with the fermionic counterpart reduced by the usual factor of 7/8. The specific heat of the system can again be written in the closed form (33), and one can use relations (33), (47) and (48) to find explicit results for fermions and bosons, which reduce to the standard expressions in the usual limit. As has been indicated, there are severe constraints on the functional form of the MDR, which are motivated when one compares the black hole entropy-area relation from different points of view [15,21]. In this case we should set α₁ = 0, which leads to a = 0. We then find from (47) and (48) the expressions for the entropy of fermions and bosons respectively: to leading order, S_f = 8π(7/90)π⁴VT³ and S_b = 8π(4/45)π⁴VT³, with corrections controlled by b′ = (5/2)α₂l_p². These statements for the partition function are more realistic, since black hole thermodynamics within MDRs, when compared with the exact solution of string theory, suggests the vanishing of α₁. It is important to note that the formalism presented in this section is not restricted to the early universe. Actually, it can be applied to any statistical system composed of ultra-relativistic monatomic noninteracting particles which has a minimum accessible energy. The possible relation between the GUP and MDRs is itself under investigation [14,15]. Generally these two features of quantum gravity scenarios are not equivalent, but as Hossenfelder has shown, they can be related to each other [14] (see also [15]). As a result, it is natural to expect that under special circumstances our results for early universe thermodynamics within the GUP and MDRs should transform into each other.
This transformation involves the coefficients of our equations, while the overall behaviors of the thermodynamical quantities, especially their temperature dependence, are similar.
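In the standard limit (a = b = 0, E_min = 0), the entropy integrals reduce to textbook blackbody results, with the fermionic values 7/8 of the bosonic ones. This can be verified numerically from the density of states g(E) = 8πVE²:

```python
import numpy as np

# Standard-limit check (a = b = 0, E_min = 0): with g(E) = 8πV E²,
#   U_b = 8πV T⁴ ∫ x³/(eˣ−1) dx = 8πV T⁴ · π⁴/15,
#   U_f = 8πV T⁴ ∫ x³/(eˣ+1) dx = 8πV T⁴ · 7π⁴/120,
# and S = (4/3) U/T for an ultra-relativistic gas, giving the familiar
# fermion/boson entropy ratio of 7/8.

def trapz(y, x):
    """Composite trapezoidal rule (avoids np.trapz, removed in NumPy 2.0)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(1e-8, 60.0, 400_001)
I_boson = trapz(x**3 / np.expm1(x), x)
I_fermion = trapz(x**3 / (np.exp(x) + 1.0), x)

print(I_boson / (np.pi**4 / 15))         # ≈ 1
print(I_fermion / (7 * np.pi**4 / 120))  # ≈ 1
print(I_fermion / I_boson)               # ≈ 7/8
```

Any GUP or MDR modification enters only through the measure and the lower limit of these integrals, which is why the corrected entropies above share the same leading T³ behavior.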
Summary
The GUP and MDRs have found strong support from string theory, noncommutative geometry and loop quantum gravity. There are many implications, originating from the GUP and MDRs, for the rest of physics. From a statistical mechanics point of view, the GUP changes the volume of the fundamental cell of phase space in a momentum-dependent manner. On the other hand, the MDR leads to a modification of the density of states. These quantum gravity features have novel implications for the statistical properties of thermodynamical systems. Here we have studied the thermodynamics of the early universe within both the GUP and MDRs. We have considered the early universe as a statistical system composed of ultra-relativistic particles. Since both the particle horizon distance and the GUP impose severe constraints on measurement processes, the statistical mechanics of the system should be modified to contain these constraints. Since the GUP and MDRs are quantum gravitational effects, the modified thermodynamics within the GUP and MDRs tends to standard thermodynamics in the classical limits. There are severe constraints on the functional form of MDRs from string theory considerations. When we take these constraints into account, the results of MDRs and the GUP for the thermodynamics of the early universe tend to each other in their general temperature dependence and differ only in their numerical factors. This fact may be interpreted as indicating that the GUP and MDRs are essentially not different concepts within the quantum gravity proposal. Although the exact relation between the GUP and MDRs is not known yet, our formalism for the early universe shows a very close relation between these two aspects of quantum gravity. In the standard statistical mechanics of bosonic and fermionic gases, the entropy of the system tends to zero at T₀ = 0. As our equations and the corresponding numerical results show, within the GUP framework the entropy of the system tends to zero at a temperature larger than zero (T > T₀). This is a consequence of relation (5).
The volume of the fundamental cell of phase space increases due to the GUP. Note that MDRs give an entropy-temperature relation which does not differ from the GUP result in its general behavior. Figure 2 shows the pressure of the system versus temperature. The pressure tends to zero at a temperature larger than T₀ = 0. The same behavior is repeated by the specific heat of the system. So, our analysis shows an unusual thermodynamics for statistical systems in quantum gravity eras. These unusual behaviors have been seen in other contexts such as black hole thermodynamics [30,31].
Distance Learning for Food Security and Rural Development: A Perspective from the United Nations Food and Agriculture Organization
This article introduces the work of the United Nations Food and Agriculture Organization (FAO), and describes its interest in the application of distance learning strategies pertinent to the challenges of food security and rural development around the world. The article briefly reviews pertinent examples of distance learning, both from the experience of FAO and elsewhere, and summarises a complex debate about the potential of distance learning in developing countries. The paper elaborates five practical suggestions for applying distance learning strategies to the challenges of food security and rural development. The purpose of publishing this article is both to disseminate our ideas about distance learning to interested professional and scholarly audiences around the world, and to seek feedback from those audiences.

Introduction: FAO and Distance Learning

The mission of the Food and Agriculture Organization of the United Nations (FAO) is to help build a food-secure world for present and future generations. The achievement of this mission depends upon the capacities and actions of a globally distributed set of individuals, organisations and communities. While a range of factors determines such capacities and actions, education and learning are widely recognised as important components of development. Since its inception, FAO has played a significant role in producing, managing and disseminating knowledge for processes of education and learning of importance to food security around the world. The Organization has adopted five corporate strategies to guide its activities over the next fifteen years:

1. Contributing to the eradication of food insecurity and rural poverty.
2. Promoting, developing and reinforcing policy and regulatory frameworks for food, agriculture, fisheries and forestry.
3. Creating sustainable increases in the supply and availability of food and other products from crop, livestock, fisheries and forestry sectors.
4. Supporting conservation, improvement and sustainable use of natural resources for food and agriculture.
5. Improving decision-making through the provision of information and assessments and fostering of knowledge management for food and agriculture.

The accomplishment of this strategic agenda will necessarily involve processes of education and learning. Over the past decade, there has been a resurgence of international interest in distance learning as a potentially useful strategy for addressing human development issues. This resurgence has been rooted, in part, in the evolution of new information and communications technologies, and, in part, in the improvement of pedagogical and administrative models for facilitating learning at a distance. United Nations agencies have contributed to the resurgence of international interest in distance learning. UNESCO (1997) has issued a policy document encouraging the use of distance learning, at all levels of educational systems, for purposes of development. The World Bank (1999) promotes "innovative delivery" as one of its global priorities for the educational sector. The World Health Organization (1998) promotes the use of "telematics," including distance health education, in support of its Health-for-All agenda. Both UNESCO [http://www.unesco.org/education/e learning/index.html] and the World Bank [http://www1.worldbank.org/disted] host Internet sites providing information to promote the appropriate use of distance learning. In addition to such policy advocacy and information dissemination functions, many United Nations agencies have employed distance learning strategies through their own programmatic interventions, and provided financial or technical assistance to a multitude of national and regional distance learning projects in developing countries.
FAO has accumulated significant experience in the field of distance learning. Since the 1960s, FAO has contributed to the development of rural radio as a medium of information exchange and learning in many African countries. More recently, FAO has used distance learning strategies both for formal education and information dissemination purposes. One example is the ongoing collaboration between FAO and the REDCAPA network. REDCAPA is the "Network of Institutions Dedicated to Teaching Agricultural and Rural Development Policies for Latin America and the Caribbean" (Red de Instituciones Vinculadas a la Capacitación en Economía y Políticas Agrícolas en América Latina y el Caribe). REDCAPA was founded in 1993 through the initiative of the FAO Policy Assistance Division in collaboration with organisations from eleven Latin American and Caribbean countries, and financial support from the government of Italy. The REDCAPA network currently involves 66 universities and other organisations concerned with teaching agricultural economics and policies and sustainable rural development [http://www.redcapa.org.br]. Most members are from the region, although several European and American universities take part. REDCAPA's main objectives are to contribute to the improvement of teaching and research in agricultural economics, rural development and the environment, to support institution building, and to improve national and international cooperation among its members. Among the various activities implemented to accomplish these objectives, the network coordinates regular distance learning courses on pertinent topics.
In addition to its role in the establishment of REDCAPA, FAO has assisted the network financially, and provided training materials and direct support for a number of distance learning courses offered in the areas of food security policy, macroeconomics and gender analysis. The Information Network on Post-harvest Operations (INPhO) is a second example of FAO experience with distance learning. INPhO is managed and facilitated by the Agro-Industries and Post-Harvest Management Service on behalf of a range of international partners. INPhO provides three basic services [http://www.fao.org/inpho]:

1. Information and databases concerned with a range of post-harvest issues (e.g., storage, transportation, processing, marketing and food safety).
2. Interactive communication services connecting users with one another and with resource people.
3. Links to other electronic sources of post-harvest information.

INPhO's long-term objective is to contribute to food security and rural development by enhancing post-production systems around the world. The more immediate objectives are to disseminate selected information in a user-friendly way, to facilitate communication between post-harvest actors, and to support decision makers. INPhO's targeted beneficiaries are small farmers, small enterprises and consumers. These beneficiaries are influenced through intermediary target groups including governmental institutions, research centres, universities, schools, non-governmental organisations, extension workers and entrepreneurs. INPhO was developed in 1997, became operational in 1998, and has grown into an important network for information dissemination and learning. The INPhO website is a busy one, with over 8,000 hits per day (and 800 user sessions per day) recorded in October 2000.
In addition to the website, INPhO disseminates CD-ROM versions of its information services (some 8,000 copies have been produced to date). The interactive communication services involve a question-and-answer service on post-harvest issues, as well as a structure for moderated and non-moderated email conferences. In addition to these existing initiatives, FAO is developing several projects with distance learning components. The Fisheries Industries Division is developing a series of three correspondence courses aimed at building local technical knowledge and management skills for sustainable artisanal fisheries. The Outreach Programme of the World Agricultural Information Centre (WAICENT) is launching a CD-ROM-based Information Management Resource Kit to share tools and methodologies with member nations to build their capacity to manage agricultural information. In the context of its own experiences and growing international interest in the field, FAO is exploring how distance learning could be most usefully applied to the achievement of its mission. This paper represents an important step in such an exploration. It summarises various arguments that have been made concerning the potential of distance learning in developing countries, and then makes five practical suggestions for applying distance learning strategies to the challenges of food security and rural development. The purpose of publishing this article is both to disseminate our ideas about distance learning to interested professional and scholarly audiences around the world, and to seek feedback from those audiences.

Distance Learning and the Developing Countries

The use of distance learning strategies in developing countries is by no means novel. The potential connections between distance learning and development processes have been recognised for decades, as the following passage from Kabwasa and Kaunda (1973, p. 8) demonstrates: Correspondence education has yet to make an impact in Africa.
We feel it is our responsibility to give it as much publicity as we can, so that our people know its potentialities and possibilities, and how they can go about making greater use of it in the development of our continent.

In a recent overview, Hilary Perraton (2000) organises distance learning experiences in developing countries into four categories: (1) non-formal and adult education, (2) primary and secondary schooling, (3) teacher training, and (4) higher education. He provides numerous examples to indicate that countries in Africa, Asia and Latin America have had significant experience with distance learning since at least the 1960s. The following four examples of distance learning programmes related to agriculture have all reached substantial numbers of learners in developing countries, and have been sustained for at least a decade.

First, since the 1960s, "INADES-formation" (Institut Africain pour le développement économique et social) has provided non-formal distance learning opportunities to tens of thousands of farmers, extension agents and other agents of rural development in Africa (Dodds, 1999; Perraton, 2000). Courses for farmers include those on agricultural production and animal husbandry, as well as those on basic mathematics, management, marketing, credit and cooperatives. For extension agents and other development workers, additional courses are available on communication, extension methods, management and the rural economy. The delivery strategy for "INADES-formation" courses is a combination of print-based correspondence packages with local study groups and tutorial support.

Second, since 1973, the G.B. Pant University of Agriculture and Technology has offered a Correspondence Course Programme to farmers and rural youth in Uttar Pradesh, India (M.P. Singh, 1992, 1999). About 500 learners each year select four courses from a list of seventeen options (fourteen concern the cultivation of particular crops, and one each concerns dairy production, insecticide use and fertiliser use). The Programme's delivery strategy is print-based correspondence. Each course comprises five or six lessons, written in elementary Hindi. Course scheduling is timed to coincide with the seasonal production of the various crops under study. The University has twenty District Extension Centres that students may contact for personalised guidance and study support. Non-credit certificates are issued to all students passing end-of-term examinations in each course.

Third, since 1986, the Women's Secondary Education Programme of Allama Iqbal Open University has been providing rural women in Pakistan with courses to meet secondary school equivalency and to increase income-generating opportunities through building practical skills (Batool & Bakker, 1997). The range of practical courses includes Selling of Home Made Products, Garment Making, Poultry Farming, Food and Nutrition, First Aid, Home and Farm Operations, and General Home Economics. The content of all courses has been designed to reflect the priorities, needs, and prior experiences of adult rural women. All courses are delivered through print-based correspondence methods, and learners receive tutorial support through local study centres. As of 1996, the Programme enrolled about 4,000 learners per semester.

Fourth, since 1988, Wye College of the University of London has delivered an External Programme that uses distance learning to provide learners around the world with opportunities for graduate study in agricultural development (Bryson & Hakimian, 1992; Pearce & Sharrock, 2000).
Currently, over 1,000 learners from over 100 countries are enrolled in a range of programmes rooted in agricultural and environmental economics, management and planning. The Programme initially used traditional correspondence methods, and has recently added an Internet-based learning system for delivery of learning materials, tutorial support, assignment submission and feedback, and opportunities for learner-learner interaction.

The fact that distance learning is an established form of educational delivery in many developing countries does not mean that distance learning is necessarily an effective tool in development efforts. Understanding the past influence and future potential of distance learning for challenges related to food security and rural development is not an easy task. A substantial literature has emerged that either describes or evaluates the past experiences and future potential of distance learning in developing countries (Arger, 1985, 1990; Bilham & Gilmour, 1995; Daniel, 1990; Dodds, 1996; Farrell, 1999; Guy, 1991; McAnany et al., 1983; Perraton, 2000; Shrestha, 1997a, 1997b; UNESCO, 1997; Young et al., 1980) or in particular regions such as Africa (Chale & Michaud, 1997; Fillip, 2000; John, 1996a, 1996b; Saint, 1999; UNESCO, 1990, 1991, 1995). In these and other publications, a range of general claims has been made about the strengths and limitations of distance learning in developing countries. Many of these claims contradict one another. Table 1 indicates that there is no overall consensus about distance learning in developing countries.

Table 1. The Case For and Against Distance Learning in Developing Countries

What can we conclude about distance learning as a means to promote rural development and food security? With regard to its track record, distance learning has had both successes and failures in developing countries.
The lengthy list of problems and disappointments identified by critics of distance learning would lead to a pessimistic conclusion, unless one recognises that conventional alternatives in developing countries have also, at times, been unable to provide adequate levels of educational access, equity and quality (Perraton, 2000, p. 198). With regard to its future potential, distance learning seems to be a promising response to certain educational challenges, but it should not be seen as a panacea. Many institutions in developing countries are steadily increasing their capacity to engage in distance learning, and appropriate technological innovations are being used in many contexts.

Practical Suggestions for Distance Learning

The appropriateness and effectiveness of distance learning depend on why, how, and how well it is designed and delivered. Distance learning initiatives should be undertaken for appropriate reasons, and in a manner that is suitable to the stakeholders of the initiative. Organisations undertaking distance learning initiatives must have the capacity to do so, and must invest or obtain the necessary resources in order to do it well. The claims listed in Table 1 are rooted in specific experiences of distance learning in contexts pertinent to food security and rural development in developing countries. Some of these experiences are from within FAO, but most are described in the literature cited in the last section of this article. By analysing these past experiences, it is possible to distil important lessons that have been learned. Paying attention to those lessons is a first step in creating an approach to distance learning that would enable FAO to act appropriately in this challenging field.

Distance Learning for the Right Reasons

Meacham (1993, p. 227) suggests that distance learning initiatives have been undertaken in developing countries for political or commercial purposes: "Apart from the obvious purpose of teaching more people more effectively, distance learning systems have been used to impress donors, placate ministers, justify consultancies, and even sell technologies." In the context of the contemporary development of new information and communication technologies, there is a danger that distance learning initiatives can be driven by the availability of innovative technologies (and the desire to be seen using them), rather than by the educational needs of individuals and communities. Fillip (2000, p. 42) argues: Starting with the real needs of communities cannot be stressed enough. There is a strong tendency in the donor community to start with the technology rather than with the needs of the community and to ask the wrong questions. The important question is not "Can the Internet be used to provide distance learning to communities?" The important question is "What is the most appropriate, cost-effective and sustainable way to address the educational needs of communities?" The FAO undertakes distance learning initiatives in support of its strategic objectives. In the struggle for food security and rural development around the world, distance learning should be conceptualised as a means to an end, and not an end in itself.

Distance Learning that is Sensitive to Context

There is no universally appropriate model for designing and delivering distance learning initiatives.
The potential target audiences for distance learning initiatives in which FAO might become involved are broad indeed, ranging from agricultural producers and marginalised rural populations to relatively privileged urban professionals such as policy makers and information managers. It is essential that the form of distance learning selected be appropriate to the particular context in which it is being applied. In a study of South Africa, Geidt (1996) identifies significant practical challenges indicating that adult basic education at a distance cannot function on an open university model adopted from the United Kingdom. Communities most in need of adult basic education provision in South Africa tend to have the following characteristics: slow and unreliable postal systems, few and unreliable telephones, lack of access to television, lack of electrification, poor road conditions, few and inadequate libraries, and inadequate school or other public facilities for studying. In addition to these infrastructure challenges, Geidt (1996, pp. 16-19) identifies several social and economic characteristics of disadvantaged communities in South Africa that make an open university style of distance learning unlikely to succeed. First, many people live in crowded housing conditions, and as a result learners do not have easy access to appropriate conditions in which to study. Second, written texts are not commonly used in day-to-day life; as a result, learners are not accustomed to critically interpreting textual messages and constructing written responses. Third, previous school experiences of most learners are of rote learning, and as a result learners must make a difficult transition to become independent and critical learners. Fourth, there is tremendous cultural and linguistic diversity; as a result, many learners may have difficulty with the language and culture of standardised instructional materials. Geidt (1996, pp. 14-15) concludes that distance learning can only be effective when its delivery system and curriculum are appropriately matched to the social and political context of the learners. In the case of adult basic education in South Africa, Geidt (1996, pp. 19-20) suggests that a substantial component of face-to-face support is essential, and identifies several means through which such support could be provided (e.g., community-based tutors, community learning centres, and regional study centres). One model of distance learning cannot be appropriate to all potential target groups of interest to FAO. Distance learning models and practices must be adapted to the social, cultural, economic and political circumstances of learners and their environment. As with other forms of educational activity, it is important to integrate gender analysis into the planning and implementation of distance learning initiatives.

Distance Learning that Uses Existing Infrastructure and Has Sustainable Costs

One disturbing tendency in the history of distance learning in developing countries is the large number of initiatives that demonstrate significant learning outcomes and programmatic success during pilot projects, but are not sustained or replicated on a larger scale after the pilot project is complete and donor funding is withdrawn. While the lack of sustainability and scalability may reflect a number of variables, it is frequently related to the use of inappropriate delivery strategies. The failure of many educational television projects in developing countries in the 1970s and 1980s is an example of what Meacham (1993, p. 227) calls "technological overkill" in distance learning.
This phenomenon refers to the use of expensive and complex delivery strategies when inexpensive and simple alternatives could be pedagogically effective. Fillip (2000, p. 25) argues that when it comes to choosing technologies for distance education “...it is essential to take a careful look at the level of infrastructure that the target populations have access to, and the extent to which the same target populations can afford to make use of that infrastructure for educational purposes.” When donors have tried to provide a communication infrastructure for distance learning programmes, such programmes have very rarely been sustainable. Given challenges with the costs and servicing of equipment, educational projects should use technologies that have already been established through entertainment and commercial sectors (Perraton & Creed, 2000, p. 17). With regard to sustainable technology choices, Dodds’ (1972, p. 46) conclusion from nearly thirty years ago is still pertinent: “The installation of new and glamorous media at great expense may be less effective than the careful integration of existing resources.” The question of technologies and delivery strategy is related to the more general question of the cost-effectiveness of distance learning. Distance learning is sometimes presented as universally more cost-effective than conventional education. Past experiences in both developed and developing countries indicate that this is not necessarily the case. Distance learning has the potential to be, but is not necessarily, more cost-effective than conventional education (Perraton, 2000, pp. 136-138; Rumble, 1997, pp. 203-204; Rumble, 1999, p. 133; UNESCO, 1997, pp. 33-34). 
A range of factors contributes to substantial cost differences between distance learning initiatives: the number of learners enrolled, the mixture of communication technologies, media and learning materials, the degree of learner support and interaction, the salaries and employment conditions of distance learning staff, production standards, and institutional working practices and overhead costs. A general conclusion that can be drawn is that distance learning tends to be more economically attractive at higher levels of education (Perraton, 2000, p. 196). This is because the costs of distance learning are relatively similar at all levels, whereas the costs per student of conventional education are higher at higher levels. Distance learning is not simply an inexpensive alternative to other forms of educational programming or field interventions. In some cases, distance learning may provide a cost-effective means of reaching target groups of learners, but in other cases conventional face-to-face contact may be more cost-effective. The assumption that distance learning is a low-cost alternative can undermine the quality and impact of distance learning programmes by systematically depriving them of necessary resources. In the field of food security, organisations should not endeavour to establish independent systems of communication for the delivery of distance learning initiatives. Rather, in each specific case, delivery strategies for distance learning initiatives should be developed according to the communication infrastructure that is currently available, reliable and affordable to the learners who will take part in the initiative.
This does not mean that Internet-based delivery strategies must be universally rejected in favour of simpler alternatives such as print and radio. It does mean that the pedagogical strengths of any potential delivery strategy must be carefully assessed against the practical constraints facing each group of learners. Some target audiences will have ready access to computers and the Internet, while others will not even have electrical power or reliable telephone service.

Distance Learning that Engages Stakeholders

Many of the problems with previous distance learning programmes in developing countries relate to a lack of participation, in the design and delivery of the programmes, on the part of those individuals and communities who were supposedly the beneficiaries. Guy (1991, p. 169) argues that an appropriate conception of distance education "would require a focus on programs in which participants have control over not only what is taught, but how and where distance education takes place. It is dependent on the participation of people, who through participatory planning and action, develop a deeper understanding of their lives and the structures which surround them in time and space." The need for participatory and empowering educational practice has been identified by FAO in its work in the fields of agricultural education, extension and communication for development. FAO (1999) has published a guide entitled Participatory Curriculum Development in Agricultural Education. The guide (FAO, 1999, pp. 70-73) categorises general groups of stakeholders in curriculum development processes as the "insiders" (i.e., leaders within training organisations, teachers, students, producers of educational materials) and the "outsiders" (i.e., policy-makers, politicians, educational administrators, educational experts, employers, professional bodies, clients, funders, parents, past students and special interest groups).
Early in the analysis of a potential educational intervention, it is important to identify the stakeholders, understand those stakeholders' diverse interests, and develop a process through which such stakeholders will be represented in the planning, implementation and evaluation of the intervention. The process of identifying, understanding and involving stakeholders helps ensure that distance learning initiatives are undertaken for the right reasons, are sensitive to the contexts of learners and their environments, and are sustainable.

Distance Learning Based on Sound Pedagogical and Administrative Models

The substantial number and range of distance learning experiences accumulated in developing countries can help FAO craft pedagogical and administrative models that avoid replicating some of the fundamental mistakes that have been made in the past. While ideal models and practices have yet to be developed, practitioners and scholars in both the Northern and Southern hemispheres have done much to critically examine distance learning and make its application more appropriate to diverse circumstances around the world. Over the past decade, the practice of distance learning in both developed and developing countries has evolved substantially. In developing countries, Perraton (2000, p. 197) suggests: The best-run programmes are probably better, more effective, and more interesting for their students than they were a generation back. There is a reasonable consensus on good practice, which will include using a combination of media, ensuring that there is effective tutoring and student support, having an efficient administrative system, and developing clear and well-produced teaching material.
In developed countries, technological change has led to what Garrison (1997) calls the "post-industrial age" of distance education. In higher education, mainstream research universities in the Northern hemisphere are creating models of "distributed education" and "little distance education" as they use networked learning environments that blend distance education with face-to-face instruction (Garrison & Anderson, 1999). There is now increasing sensitivity to gender issues as important variables in the practice of distance education (Burge, 1998). Any organisation contemplating the application of distance learning strategies to the challenges of food security and rural development should be aware of the pedagogical innovations of the past decade. Table 2 identifies a basic outline of best practices in distance learning.

Table 2. Best Practices in Distance Learning

Conclusion: Looking toward the Future

The Food and Agriculture Organization can be an international catalyst for the learning of a diverse and globally distributed set of individuals, organisations and communities whose capacities and actions influence the achievement of food security and rural development. In collaboration with a wide range of partners, and in conjunction with other methods of intervention, the Organization can employ innovative and appropriate distance learning methods to accomplish its strategic objectives.
Introduction: FAO and Distance Learning
The mission of the Food and Agriculture Organization of the United Nations (FAO) is to help build a food-secure world for present and future generations.The achievement of this mission depends upon the capacities and actions of a globally distributed set of individuals, organisations and communities.While a range of factors determines such capacities and actions, education and learning are widely recognised as important components of development.Since its inception, FAO has played a significant role in producing, managing and disseminating knowledge for processes of education and learning of importance to Distance Learning for Food Security and Rural Development: A Perspective from the United Nations Food and Agriculture Organization 2 food security around the world.The Organization has adopted five corporate strategies to guide its activities over the next fifteen years: 1. Contributing to the eradication of food insecurity and rural poverty.
2. Promoting, developing and reinforcing policy and regulatory frameworks for food, agriculture, fisheries and forestry.
3. Creating sustainable increases in the supply and availability of food and other products from crop, livestock, fisheries and forestry sectors.
4. Supporting conservation, improvement and sustainable use of natural resources for food and agriculture.
5. Improving decision-making through the provision of information and assessments and fostering of knowledge management for food and agriculture.
The accomplishment of this strategic agenda will necessarily involve processes of education and learning.Over the past decade, there has been a resurgence of international interest in distance learning as a potentially useful strategy for addressing human development issues.This resurgence has been rooted, in part, in the evolution of new information and communications technologies, and, in part, in the improvement of pedagogical and administrative models for facilitating learning at a distance.United Nations agencies have contributed to the resurgence of international interest in distance learning.UNESCO (1997) has issued a policy document encouraging the use of distance learning, at all levels of educational systems, for purposes of development.The World Bank (1999) promotes "innovative delivery" as one of its global priorities for the educational sector.The World Health Organization (1998) promotes the use of "telematics," including distance health education, in support of its Health-for-All agenda.
Both UNESCO [http://www.unesco.org/education/elearning/index.html]and the World Bank [http://www1.worldbank.org/disted]host Internet sites providing information to promote the appropriate use of distance learning.In addition to such policy advocacy and information dissemination functions, many United Nations agencies have employed distance learning strategies through their own programmatic interventions, and provided financial or technical assistance to a multitude of national and regional distance learning projects in developing countries.
FAO has accumulated significant experiences in the field of distance learning.REDCAPA's main objectives are to contribute to the improvement of teaching and research in agricultural economics, rural development and the environment, support institution building, and improve national and international cooperation among its members.Among the various activities implemented to accomplish these objectives, the network coordinates regular distance learning courses on pertinent topics.In addition to its role in the establishment of REDCAPA, FAO has assisted the network financially, and provided training materials and direct support for a number of distance learning courses offered in the areas of food security policy, macroeconomics and gender analysis.In the context of its own experiences and growing international interest in the field, FAO is exploring how distance learning could be most usefully applied to the achievement of its mission.This paper represents an important step in such an exploration.It summarises various arguments that have been made concerning the potential of distance learning in developing countries, and then makes five practical suggestions for applying distance learning strategies to the challenges of food security and rural development.The purpose of publishing this article is both to disseminate our ideas about distance learning to interested professional and scholarly audiences around the world, and to seek feedback from those audiences.
Distance Learning and the Developing Countries
The use of distance learning strategies in developing countries is by no means novel. The potential connections between distance learning and development processes have been recognised for decades, as the following passage from Kabwasa and Kaunda (1973, p. 8) demonstrates: Correspondence education has yet to make an impact in Africa. We feel it is our responsibility to give it as much publicity as we can, so that our people know its potentialities and possibilities, and how they can go about making greater use of it in the development of our continent.
International Review of Research in Open and Distance Learning
Distance Learning for Food Security and Rural Development: A Perspective from the United Nations Food and Agriculture Organization

In a recent overview, Hilary Perraton (2000) organises distance learning experiences in developing countries into four categories: (1) non-formal and adult education, (2) primary and secondary schooling, (3) teacher training, and (4) higher education. He provides numerous examples to indicate that countries in Africa, Asia and Latin America have had significant experience with distance learning since at least the 1960s. The following four examples of distance learning programmes related to agriculture have all reached substantial numbers of learners in developing countries, and have been sustained for at least a decade.
First, since the 1960s, "INADES-formation" (Institut Africain pour le développement économique et social) has provided non-formal distance learning opportunities to tens of thousands of farmers, extension agents and other agents of rural development in Africa (Dodds, 1999; Perraton, 2000). Courses for farmers include those on agricultural production and animal husbandry, as well as those on basic mathematics, management, marketing, credit and cooperatives. For extension agents and other development workers, additional courses are available on communication, extension methods, management and the rural economy. The delivery strategy for "INADES-formation" courses is a combination of print-based correspondence packages with local study groups and tutorial support.
Second, since 1973, the G.B. Pant University of Agriculture and Technology has offered a Correspondence Course Programme to farmers and rural youth in Uttar Pradesh, India (M.P. Singh, 1992, 1999). About 500 learners each year select four courses from a list of seventeen options (fourteen concern the cultivation of particular crops, and one each concerns dairy production, insecticide use and fertiliser use). The Programme's delivery strategy is print-based correspondence. Each course comprises five or six lessons, written in elementary Hindi.
Course scheduling is timed to coincide with the seasonal production of the various crops under study. The University has twenty District Extension Centres that students may contact for personalised guidance and study support. Non-credit certificates are issued to all students passing end-of-term examinations in each course.
Third, since 1986, the Women's Secondary Education Programme of Allama Iqbal Open University has been providing rural women in Pakistan with courses to meet secondary school equivalency and to increase income generating opportunities through building practical skills (Batool & Bakker, 1997). The range of practical courses includes Selling of Home Made Products, Garment Making, Poultry Farming, Food and Nutrition, First Aid, Home and Farm Operations, and General Home Economics. The content of all courses has been designed to reflect the priorities, needs, and prior experiences of adult rural women. All courses are delivered through print-based correspondence methods, and learners receive tutorial support through local study centres. As of 1996, the Programme enrolled about 4,000 learners per semester. Fourth, since 1988, Wye College of the University of London has delivered an External Programme that uses distance learning to provide learners around the world with opportunities for graduate study in agricultural development (Bryson & Hakimian, 1992; Pearce & Sharrock, 2000). Currently, over 1,000 learners from over 100 countries are enrolled in a range of programmes rooted in agricultural and environmental economics, management and planning. The Programme initially used traditional correspondence methods, and has recently added an Internet-based learning system for delivery of learning materials, tutorial support, assignment submission and feedback, and opportunities for learner-learner interaction.
The fact that distance learning is an established form of educational delivery in many developing countries does not mean that distance learning is necessarily an effective tool in development efforts. Understanding the past influence and future potential of distance learning for challenges related to food security and rural development is not an easy task. Substantial literature has emerged that either describes or evaluates the past experiences and future potential of distance learning in developing countries (Arger, 1985, 1990; Bilham & Gilmour, 1995; Daniel, 1990; Dodds, 1996; Farrell, 1999; Guy, 1991; McAnany et al., 1983; Perraton, 2000; Shrestha, 1997a, 1997b; UNESCO, 1997; Young et al., 1980) or in particular regions such as Africa (Chale & Michaud, 1997; Fillip, 2000; John, 1996a, 1996b; Saint, 1999; UNESCO, 1990, 1991, 1995). In these and other publications, a range of general claims has been made about the strengths and limitations of distance learning in developing countries. Many of these claims contradict one another. Table 1 indicates that there is no overall consensus about distance learning in developing countries. What can we conclude about distance learning as a means to promote rural development and food security? With regard to its track record, distance learning has had both successes and failures in developing countries. The lengthy list of problems and disappointments identified by critics of distance learning would lead to a pessimistic conclusion, unless one recognises that conventional alternatives in developing countries have also, at times, been unable to provide adequate levels of educational access, equity and quality (Perraton, 2000, p. 198). With regard to its future potential, distance learning seems to be a promising response to certain educational challenges, but it should not be seen as a panacea. Many institutions in developing countries are steadily increasing their capacity to engage in distance learning, and appropriate technological innovations are being used in many contexts.
Practical Suggestions for Distance Learning
The appropriateness and effectiveness of distance learning depends on why, how, and how well it is designed and delivered. Distance learning initiatives should be undertaken for appropriate reasons, and in a manner that is suitable to the stakeholders of the initiative. Organisations undertaking distance learning initiatives must have the capacity to do so, and must invest in or obtain the necessary resources in order to do it well.
The claims listed in Table 1 are rooted in specific experiences of distance learning in contexts pertinent to food security and rural development in developing countries. Some of these experiences are from within FAO, but most are described in the literature cited in the last section of this article. By analysing these past experiences, it is possible to distil important lessons that have been learned. Paying attention to those lessons is a first step in creating an approach to distance learning that would enable FAO to act appropriately in this challenging field.
Distance Learning for the Right Reasons

Meacham (1993, p. 227) suggests that distance learning initiatives have been undertaken in developing countries for political or commercial purposes: "Apart from the obvious purpose of teaching more people more effectively, distance learning systems have been used to impress donors, placate ministers, justify consultancies, and even sell technologies." In the context of the contemporary development of new information and communication technologies, there is a danger that distance learning initiatives can be driven by the availability of innovative technologies (and the desire to be seen using them), rather than by the educational needs of individuals and communities. Fillip (2000, p. 42) argues: Starting with the real needs of communities cannot be stressed enough.
There is a strong tendency in the donor community to start with the technology rather than with the needs of the community and to ask the wrong questions. The important question is not "Can the Internet be used to provide distance learning to communities?" The important question is "What is the most appropriate, cost-effective and sustainable way to address the educational needs of communities?" FAO undertakes distance learning initiatives in support of its strategic objectives. In the struggle for food security and rural development around the world, distance learning should be conceptualised as a means to an end, and not an end in itself.
Distance Learning that is Sensitive to Context
There is no universally appropriate model for designing and delivering distance learning initiatives. The potential target audiences for distance learning initiatives in which FAO might become involved are broad indeed, ranging from agricultural producers and marginalised rural populations to relatively privileged urban professionals such as policy makers and information managers. It is essential that the form of distance learning selected be appropriate to the particular context in which it is being applied.
In a study of South Africa, Geidt (1996) identifies significant practical challenges that indicate adult basic education at a distance cannot function on an open university model adopted from the United Kingdom. Communities most in need of adult basic education provision in South Africa tend to have the following characteristics: slow and unreliable postal systems, few and unreliable telephones, lack of access to television, lack of electrification, poor road conditions, few and inadequate libraries, and inadequate school or other public facilities for studying. In addition to these infrastructure challenges, Geidt (1996, pp. 16-19) identifies several social and economic characteristics of disadvantaged communities in South Africa that make an open university style of distance learning unlikely to succeed. First, many people live in crowded housing conditions, and as a result learners do not have easy access to appropriate conditions in which to study. Second, written texts are not commonly used in day-to-day life; as a result learners are not accustomed to critically interpreting textual messages and constructing written responses. Third, previous school experiences of most learners are of rote learning, and as a result learners must make a difficult transition to become independent and critical learners. Fourth, there is tremendous cultural and linguistic diversity; as a result, many learners may have difficulty with the language and culture of standardised instructional materials. Geidt (1996, pp. 14-15) concludes that distance learning can only be effective when its delivery system and curriculum are appropriately matched to the social and political context of the learners. In the case of adult basic education in South Africa, Geidt (1996, pp. 19-20) suggests that a substantial component of face-to-face support is essential, and identifies several means through which such support could be provided (e.g., community-based tutors, community learning centres, and regional study centres).
One model of distance learning cannot be appropriate to all potential target groups of interest to FAO. Distance learning models and practices must be adapted to the social, cultural, economic and political circumstances of learners and their environment. As with other forms of educational activity, it is important to integrate gender analysis into the planning and implementation of distance learning initiatives.
Distance Learning that Uses Existing Infrastructure and Has Sustainable Costs

One disturbing tendency in the history of distance learning in developing countries is the large number of initiatives that demonstrate significant learning outcomes and programmatic success during pilot projects, but are not sustained or replicated on a larger scale after the pilot project is complete and donor funding is withdrawn. While the lack of sustainability and scalability may reflect a number of variables, it is frequently related to the use of inappropriate delivery strategies. The failure of many educational television projects in developing countries in the 1970s and 1980s is an example of what Meacham (1993, p. 227) calls "technological overkill" in distance learning. This phenomenon refers to the use of expensive and complex delivery strategies when inexpensive and simple alternatives could be pedagogically effective. Fillip (2000, p. 25) argues that when it comes to choosing technologies for distance education "...it is essential to take a careful look at the level of infrastructure that the target populations have access to, and the extent to which the same target populations can afford to make use of that infrastructure for educational purposes." When donors have tried to provide a communication infrastructure for distance learning programmes, such programmes have very rarely been sustainable. Given challenges with the costs and servicing of equipment, educational projects should use technologies that have already been established through entertainment and commercial sectors (Perraton & Creed, 2000, p. 17). With regard to sustainable technology choices, Dodds' (1972, p. 46) conclusion from nearly thirty years ago is still pertinent: "The installation of new and glamorous media at great expense may be less effective than the careful integration of existing resources." The question of technologies and delivery strategy is related to the more general question of the cost-effectiveness of distance learning. Distance learning is sometimes presented as universally more cost-effective than conventional education. Past experiences in both developed and developing countries indicate that this is not necessarily the case. Distance learning has the potential to be, but is not necessarily, more cost-effective than conventional education (Perraton, 2000, pp. 136-138; Rumble, 1997, pp. 203-204; Rumble, 1999, p. 133; UNESCO, 1997, pp. 33-34). Factors that contribute to substantial cost differences between distance learning initiatives include: numbers of learners enrolled, mixture of communication technologies, media and learning materials, degree of learner support and interaction, salaries and employment conditions of distance learning staff, production standards, and institutional working practices and overhead costs. A general conclusion that can be drawn is that distance learning tends to be more economically attractive at higher levels of education (Perraton, 2000, p. 196). This is because the costs of distance learning are relatively similar at all levels, whereas the costs per student of conventional education are higher at higher levels. In some cases, distance learning may provide a cost-effective means of reaching target groups of learners, but in other cases conventional face-to-face contact may be more cost-effective. The assumption that distance learning is a low-cost alternative can undermine the quality and impact of distance learning programmes by systematically depriving them of necessary resources.
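The fixed-versus-variable cost logic behind this conclusion can be made concrete with a toy calculation: distance learning typically carries high up-front course-development costs but low per-learner costs, so its average cost per learner falls as enrolment grows, eventually undercutting conventional provision. The figures below are invented for illustration only; they are not drawn from the studies cited.

```python
def average_cost_per_learner(fixed_cost: float, variable_cost: float, learners: int) -> float:
    """Average cost per learner, given fixed (development) and variable (per-learner) costs."""
    return fixed_cost / learners + variable_cost

def break_even_enrolment(fixed_dl, var_dl, fixed_conv, var_conv):
    """Smallest enrolment at which distance learning (dl) becomes cheaper per learner
    than conventional provision (conv). Assumes var_dl < var_conv."""
    n = 1
    while average_cost_per_learner(fixed_dl, var_dl, n) >= average_cost_per_learner(fixed_conv, var_conv, n):
        n += 1
    return n

# Hypothetical figures: 100,000 development cost and 50 per learner (distance)
# versus 10,000 fixed and 300 per learner (conventional face-to-face).
n = break_even_enrolment(100_000, 50, 10_000, 300)
```

With these invented numbers, distance learning only becomes the cheaper option above a few hundred learners, which mirrors the point made above: cost-effectiveness depends heavily on enrolment, not on the delivery mode alone.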
In the field of food security, organisations should not endeavour to establish independent systems of communication for the delivery of distance learning initiatives. Rather, in each specific case, delivery strategies for distance learning initiatives should be developed according to the communication infrastructure that is currently available, reliable and affordable to the learners who will take part in the initiative. This does not mean that Internet-based delivery strategies must be universally rejected in favour of simpler alternatives such as print and radio. It does mean that the pedagogical strengths of any potential delivery strategy must be carefully assessed against the practical constraints facing each group of learners. Some target audiences will have ready access to computers and the Internet, while others will not even have electrical power or reliable telephone service.
Distance Learning that Engages Stakeholders
Many of the problems with previous distance learning programmes in developing countries relate to a lack of participation, in the design and delivery of the programmes, on the part of those individuals and communities who were supposedly their beneficiaries. Guy (1991, p. 169) argues that an appropriate conception of distance education would require a focus on programs in which participants have control over not only what is taught, but how and where distance education takes place. It is dependent on the participation of people who, through participatory planning and action, develop a deeper understanding of their lives and the structures which surround them in time and space.
The need for participatory and empowering educational practice has been identified by FAO in its work in the fields of agricultural education, extension and communication for development. As FAO (1999) suggests, it is important to identify the stakeholders, understand those stakeholders' diverse interests, and develop a process through which such stakeholders will be represented in the planning, implementation and evaluation of the intervention. The process of identifying, understanding and involving stakeholders helps ensure that distance learning initiatives are undertaken for the right reasons, are sensitive to the contexts of learners and their environments, and are sustainable.
Distance Learning Based on Sound Pedagogical and Administrative Models
The substantial number and range of distance learning experiences accumulated in developing countries can help FAO craft pedagogical and administrative models that avoid replicating some of the fundamental mistakes that have been made in the past. While ideal models and practices have yet to be developed, practitioners and scholars in both the Northern and Southern hemispheres have done much to critically examine distance learning and make its application more appropriate to diverse circumstances around the world. Over the past decade, the practice of distance learning in both developed and developing countries has evolved substantially. In developing countries, Perraton (2000, p. 197) suggests: The best-run programmes are probably better, more effective, and more interesting for their students than they were a generation back.
There is a reasonable consensus on good practice, which will include using a combination of media, ensuring that there is effective tutoring and student support, having an efficient administrative system, and developing clear and well-produced teaching material.
In developed countries, technological change has led to what Garrison (1997) calls the "post-industrial age" of distance education. In higher education, mainstream research universities in the Northern hemisphere are creating models of "distributed education" and "little distance education" as they use networked learning environments that blend distance education with face-to-face instruction (Garrison & Anderson, 1999). There is now increasing sensitivity to gender issues as important variables in the practice of distance education (Burge, 1998).
Any organisation contemplating the application of distance learning strategies to the challenges of food security and rural development should be aware of the pedagogical innovations of the past decade.

Conclusion: Looking toward the Future

The Food and Agriculture Organisation can be an international catalyst for the learning of a diverse and globally distributed set of individuals, organisations and communities whose capacities and actions influence the achievement of food security and rural development. In collaboration with a wide range of partners, and in conjunction with other methods of intervention, the Organisation can employ innovative and appropriate distance learning methods to accomplish its strategic objectives.
Since the 1960s, FAO has contributed to the development of rural radio as a medium of information exchange and learning in many African countries. More recently, FAO has used distance learning strategies both for formal education and information dissemination purposes. One example is the ongoing collaboration between FAO and the REDCAPA network. REDCAPA is the "Network of Institutions Dedicated to Teaching Agricultural and Rural Development Policies for Latin America and the Caribbean" (Red de Instituciones Vinculadas a la Capacitacion en Economia y Políticas Agricolas en America Latina y el Caribe). REDCAPA was founded in 1993 through the initiative of the FAO Policy Assistance Division in collaboration with organisations from eleven Latin American and Caribbean countries, and financial support from the government of Italy. The REDCAPA network currently involves 66 universities and other organisations concerned with teaching agricultural economics and policies and sustainable rural development [http://www.redcapa.org.br]. Most members are from the region, although several European and American universities take part.
Table 1. The Case For and Against Distance Learning in Developing Countries

Distance learning is not simply an inexpensive alternative to other forms of educational programming or field interventions.
Table 2. Best Practices in Distance Learning

Table 2 identifies a basic outline of best practices in distance learning.
Vector and Host C-Type Lectin Receptor (CLR)–Fc Fusion Proteins as a Cross-Species Comparative Approach to Screen for CLR–Rift Valley Fever Virus Interactions
Rift Valley fever virus (RVFV) is a mosquito-borne bunyavirus endemic to Africa and the Arabian Peninsula, which causes diseases in humans and livestock. C-type lectin receptors (CLRs) represent a superfamily of pattern recognition receptors that were reported to interact with diverse viruses and contribute to antiviral immune responses but may also act as attachment factors or entry receptors in diverse species. Human DC-SIGN and L-SIGN are known to interact with RVFV and to facilitate viral host cell entry, but the roles of further host and vector CLRs are still unknown. In this study, we present a CLR–Fc fusion protein library to screen RVFV–CLR interaction in a cross-species approach and identified novel murine, ovine, and Aedes aegypti RVFV candidate receptors. Furthermore, cross-species CLR binding studies enabled observations of the differences and similarities in binding preferences of RVFV between mammalian CLR homologues, as well as more distant vector/host CLRs.
Introduction
Hundreds of arthropod-borne viruses (acronym: arboviruses) such as Zika virus, Dengue virus, or Rift Valley fever virus (RVFV) are transmitted to humans via the bite of infected mosquitos. They cause severe diseases or even death in endemic areas [1,2]. RVFV (Order Bunyavirales, Family Phenuiviridae [3]), as one of these arboviruses, is endemic to Africa and the Arabian Peninsula [4,5]. Besides public health issues in humans, Rift Valley fever poses a major threat to livestock and agricultural productivity. Sheep and goats are among the most susceptible farm animals [6]. After experimental infection, the mortality rate of newborn lambs and the abortion rate of pregnant ewes reached nearly 100% [7,8]. Consequently, periodically occurrent Rift Valley fever outbreaks result in high economic losses, in addition to illnesses and deaths.
Mosquito CLR-hFc Fusion Protein Expression
To evaluate RVFV-CLR interactions in a cross-species approach, mosquito vector CLR-hFc fusion proteins were expressed in addition to the available murine and ovine CLR-hFc fusion protein libraries [33-35]. Initially, a maximum likelihood phylogenetic analysis of the CRDs was performed to compare the CLRs across the different species. In a recent publication, we showed that ovine, bovine, and caprine CLRs have higher degrees of morphological similarity to human than to murine homologues [35]. In this study, we further compared the mammalian CLRs with mosquito CTLDcps. Alignment (Figure 1) and subsequent phylogenetic tree analysis (Figure 2) of the mammalian and mosquito CRD amino acid sequences confirmed that mammalian CLR homologues possess highly conserved residues, while it was predicted with more than 70% accuracy that all five Aedes CTLDcps share close roots (Figure 2).
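The "mean pairwise identity" reported for such alignments can be illustrated with a minimal calculation over pre-aligned sequences. The sequence fragments below are invented toy examples, not actual CRD sequences, and this is a sketch of the underlying metric, not the analysis pipeline used in the study.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two equal-length, pre-aligned sequences.
    Alignment columns where both sequences carry a gap ('-') are ignored."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    compared = matches = 0
    for a, b in zip(seq_a, seq_b):
        if a == "-" and b == "-":
            continue  # skip columns that are gaps in both sequences
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared

# Toy aligned fragments (hypothetical, for illustration only)
identity = percent_identity("WND-EPN", "WNDAEPN")
```

A full phylogenetic analysis would instead align all CRDs jointly and infer a maximum likelihood tree, but the per-pair identity above is the basic quantity behind alignment colouring schemes such as the one described for Figure 1.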
To identify the CRD of the mosquito CLR sequences, an alignment to known mammalian CLRs was performed (Figure 1). The CRD of CLRs comprises 110-130 amino acid residues, with conserved disulphide bonds and up to four Ca2+ binding sites [36]. This CRD could be found in all mammalian and mosquito proteins used in this study and showed high sequence similarities (Figure 1, Supplementary Figure S1). In comparison to the human, murine, and ovine myeloid CLRs, the selected mosquito proteins aeCTLMA15, aeCTLGA6, aeCTLMA14, aeCLSP2, and aeCTL23 did not show any hydrophilic part (Figure 1). To roughly estimate possible CTLDcp ligands, the CRD amino acid sequences were screened for conserved Ca2+-dependent glycan-binding residues. The WND motif (Trp-Asn-Asp), known for galactose and N-acetylgalactosamine (GalNAc) binding [36-38], was present at the C-terminal region of all five CTLDcps (Figure 3A; Supplementary Figure S1). Furthermore, aeCTLMA15 and aeCTLMA14 contained EPN motifs (Glu-Pro-Asn), described as a mannose-, N-acetylglucosamine (GlcNAc)- and glucose-binding motif, while aeCTLGA6 and aeCLSP2 displayed the QPD motif (Gln-Pro-Asp), known to interact with galactose [37,38] (Figure 3A; Supplementary Figure S1). Additionally, all Aedes aegypti CLRs had one or more putative N-glycan sequons (Asn-X-Ser/Thr, where X can be any amino acid except proline [39,40]) (Figure 3A).
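The motif screen described above — the EPN/QPD/WND sugar-binding motifs plus Asn-X-Ser/Thr sequons with X ≠ Pro — can be sketched as a simple regular-expression scan. The input sequence below is an invented fragment, not a real CTLDcp, and this is an illustration of the motif definitions, not the tool used in the study.

```python
import re

def find_motifs(seq: str) -> dict:
    """Locate Ca2+-dependent sugar-binding motifs and putative N-glycosylation
    sequons (Asn-X-Ser/Thr, where X is any residue except Pro) in a protein sequence.
    Returns 0-based start positions for each motif class."""
    return {
        "EPN": [m.start() for m in re.finditer("EPN", seq)],  # mannose/GlcNAc/glucose binding
        "QPD": [m.start() for m in re.finditer("QPD", seq)],  # galactose binding
        "WND": [m.start() for m in re.finditer("WND", seq)],  # galactose/GalNAc binding
        # lookahead keeps overlapping sequons; [^P] enforces X != Pro
        "N-glycan sequon": [m.start() for m in re.finditer(r"N(?=[^P][ST])", seq)],
    }

# Invented toy sequence for illustration
hits = find_motifs("MKEPNAANQSLLWNDQPDNPS")
```

Note the final "NPS" in the toy sequence is correctly rejected as a sequon because the residue after Asn is proline, matching the rule cited from [39,40].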
Figure 1. Amino acid similarity analysed with Score Matrix Blosum62 (Threshold 1) and symbolised in light grey (60-79% similarity), dark grey (80-99%), and black (100%). The hydrophobic part, indicating a transmembrane region, is marked in dark red below the sequence. Above, the consensus and mean pairwise identity over all amino acid sequences are shown (green: 100% identity; yellow: 30-99%; red: <30%). Mosquito amino acid sequences were obtained from VectorBase and mammalian sequences from NCBI (Tables 1 and 2). Sequence analysis was performed, and figures were prepared, using the Geneious Prime software.

As the correct CRD folding and processing is crucial for subsequent analysis of CLR-hFc fusion protein-ligand interaction, the full CRD sequence with its potential glycan-binding sites and N-linked glycosylation motifs was ligated into the pFUSE-hIgG1-Fc2 vector (Figure 3A,B). After expression in CHO-S cells and protein purification, protein identity and purity were confirmed by SDS-PAGE and subsequent Western blot analysis. CLR-hFc fusion proteins were purified from cell culture supernatant and detected by anti-hFc staining (Figure 3C,D). The fact that some ovine and mosquito CLR-hFc fusion proteins showed two bands or had an apparently higher molecular weight than calculated based on their amino acid sequence (Table 1) may indicate the presence of different glycoforms [35], thereby increasing the protein mass. CLR-hFc fusion protein preparations were highly pure, as no further protein bands were detectable by protein staining (Supplementary Figure S2A,B).
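The "calculated" molecular weight referred to above can be estimated from the amino acid sequence by summing average residue masses and adding one water molecule per chain. The sketch below uses standard average residue masses (in daltons) and a toy peptide, not an actual fusion-protein sequence; glycan mass, which explains the apparent shift discussed above, is deliberately not modelled.

```python
# Average residue (not free amino acid) masses in daltons
RESIDUE_MASS = {
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
    "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
    "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
}
WATER = 18.02  # one water per peptide chain

def protein_mw(seq: str) -> float:
    """Approximate average molecular weight (Da) of an unmodified polypeptide.
    Post-translational modifications such as N-glycosylation add mass not captured here."""
    return sum(RESIDUE_MASS[aa] for aa in seq.upper()) + WATER
```

Comparing such a sequence-based estimate against the apparent mass on SDS-PAGE is the basis for the glycoform interpretation above: a band running noticeably heavier than the calculated value suggests attached glycans.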
ELISA-Based Binding Studies
Given the observed CRD similarities between mosquito and mammalian CLRs, as well as the sugar-binding motifs in the amino acid sequence of mosquito CLRs, we hypothesised that known mammalian CLR ligands may be recognised by mosquito CLRs as well. Mannan, an α-1,6-linked mannose homopolysaccharide and a known Dectin-2/Langerin ligand [41,42], and zymosan, a β-1,3-linked glucose homopolysaccharide and Dectin-1 ligand [43,44], were coated on ELISA plates and probed with mosquito CLR-hFc fusion proteins. AeCTL23 and aeCLSP2 bound to both mannan and zymosan (Figure 4A,B). These interactions were abrogated by adding 10 mM EDTA to the binding buffer (Figure 4A,B), indicating Ca2+-dependent binding.
To identify novel RVFV-CLR interactions, the purified virus was immobilised on the ELISA plate. The purified mock control served to determine the level of unspecific CLR-hFc fusion protein binding to remaining host cell proteins from the virus preparation. Human DC-SIGN was used as a positive control, as its role in RVFV recognition and host cell entry was already shown [24,45]. mLangerin, mDectin-1, mDectin-2, mMincle, and mMicl all showed increased absorption, compared with respective mock controls ( Figure 5A). In the presence of EDTA, binding of the fusion proteins mMicl-hFc and hDC-SIGN-hFc to RVFV was abolished, while binding of mDectin-2-hFc and mMincle-hFc remained unaltered.
Ovine CLRs were tested in the same assay. While mLangerin-hFc and mDectin-1-hFc bound substantially to RVFV ( Figure 5A), oLangerin-hFc and oDectin-1-hFc did not ( Figure 5B). In summary, we could observe differences in the RVFV interaction of murine and ovine CLR homologues. Despite their similarities in their amino acid sequences ( Figure 2), they exhibited differential ligand binding and varied in the calcium dependency of the interactions ( Figure 5A,B). The ELISA-based screening thus identified novel ovine RVFV-CLR interactions such as oMcl and oDcir. To analyse RVFV-mosquito CTLDcps interactions, Aedes aegypti CLR-hFc fusion proteins were tested accordingly. Compared with the positive control hDC-SIGN-hFc [24,45] and respective mock controls, aeCTL23-hFc and aeCLSP2-hFc showed marked binding to RVFV ( Figure 5C). These binding studies suggest that mosquito CTLDcps may be involved in RVFV recognition; however, their role in RVFV infection and immunity in mosquitoes needs to be further investigated.
Figure 4: ELISA-based screening of (A) mannan and (B) zymosan with five mosquito CLR-hFc fusion proteins, in the presence and absence of EDTA. Known mannan-binding CLRs served as positive control for mannan [35,41]; mDectin-1/oDectin-1, known to recognise zymosan, served as positive control for zymosan [35,43]; hFc-empty was employed as negative control. One representative of n = 4 (EDTA: n = 2) independent experiments is shown. Figure 5: hDC-SIGN served as positive control and hFc-empty as negative control; one representative of n = 3 (EDTA: n = 2) independent experiments is shown. In all panels, to discard possible false positives, the dotted line represents the cutoff, defined as the threefold margin of the absorbance value relative to hFc, based on previous screenings with the CLR-hFc fusion protein library [34,46]. Data are depicted as mean + SEM of duplicates. One-way ANOVA with subsequent pairwise Tukey tests was performed to compare the binding of the CLR-hFc fusion proteins above the threshold to the PBS/mock control and to the EDTA supplementation; ** p < 0.01, *** p < 0.0005, **** p < 0.0001.
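The binder-calling rule from these screens (a signal counts only if it exceeds three times the hFc-empty absorbance) can be written as a small helper. The readings below are illustrative numbers, not measured data:

```python
def call_binders(absorbance, hfc_empty, factor=3.0):
    """Flag CLR-hFc fusion proteins whose mean A495 exceeds factor x hFc-empty."""
    cutoff = factor * hfc_empty
    return {clr: a495 > cutoff for clr, a495 in absorbance.items()}

readings = {                 # mean A495 of duplicates (made-up values)
    "aeCTL23-hFc": 0.91,
    "aeCLSP2-hFc": 0.74,
    "aeCTLMA14-hFc": 0.12,
}
print(call_binders(readings, hfc_empty=0.10))
# -> {'aeCTL23-hFc': True, 'aeCLSP2-hFc': True, 'aeCTLMA14-hFc': False}
```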
Discussion
Myeloid CLRs play important roles in virus recognition, resulting in diverse immune responses or host cell entry [17]. In this study, we used murine and ovine CLR-hFc fusion proteins [33][34][35] to identify RVFV binding candidates and compared the binding of CLR homologues in a cross-species approach. Since only human DC-SIGN and L-SIGN were previously reported to interact with RVFV and subsequently facilitate viral host cell entry [24,45], the role of many mammalian CLRs in RVFV recognition has not been investigated so far. Murine Langerin, Dectin-1, Dectin-2, Mincle, and Micl (Clec12a) seem to be CLR-binding candidates for RVFV ( Figure 5A). Except for Langerin, their role in viral recognition is largely unknown. Micl has been shown to interact with monosodium urate crystals [47,48], and plasmodial hemozoin [49], whereas Dectin-1 and Dectin-2 are well known to recognise fungal ß-glucans and α-mannans, respectively, and both contribute to host defence [41,43,44,50,51]. Only a limited number of publications highlight the role of these CLRs in viral recognition. Whole-blood transcriptome analysis identified Micl to be upregulated during SARS-CoV-2 infection in humans [52]. Dectin-2 senses influenza virus hemagglutinin, which initiates IL-12p40 and IL-6 production in murine bone marrow-derived dendritic cells [53]. In a previous study, we showed Mincle to interact with La Crosse virus; however, its role in early antiviral responses against this bunyavirus was limited in vitro [34]. Further approaches are needed for the validation of the identified CLR-RVFV interactions and for a functional characterisation to investigate their potential biological relevance. Thus, it remains to be determined whether Langerin, Dectin-1, Dectin-2, Mincle and/or Micl specifically interact with RVFV and/or contribute to antiviral responses.
Langerin (CD207), among those identified as RVFV-binding candidates, is expressed by resident dendritic cells of the epidermis, known as the so-called Langerhans cells [54]. Consequently, this CLR is present at the anatomical site of initial RVFV infection after a mosquito bite. Langerin has been reported to be a receptor preventing Langerhans cells from human immunodeficiency virus I infection by internalising this virion and subsequently initiating its degradation [55]. Furthermore, Langerin is an attachment and entry receptor for influenza A virus [56]. As Langerin is known as a viral receptor [55,56], and to recognise high-mannose glycans [57], which are also present on RVFV surface glycoproteins [58], this RVFV-binding candidate may be functionally involved in host RVFV recognition.
While soluble CLR-hFc fusion proteins are useful to screen for pathogen-CLR interactions, the mode of presentation of the CRD may markedly affect ligand recognition [19,34,35,46,59]. By using a library of dimeric CLR-hFc fusion proteins with the CRD N-terminally fused to the Fc fragment of human IgG1, we may miss RVFV interactions with CLRs for which a multimeric presentation is crucial, thereby leading to false-negative results. However, dimeric CLR-hFc fusion proteins have proven useful for initial screenings to identify novel CLR-pathogen interactions [19,34,35,46]. Dimeric DC-SIGN-hFc, for instance, also recognises low-affinity ligands in comparative screenings with monomeric and/or tetrameric DC-SIGN presentation [60,61].
All binding studies were performed with RVFV particles matured in the Aedes albopictus cell line C6/36. Thus, the viral particles are likely similar to RVFV from virus-containing saliva injected into the mammalian host. Previous publications already reported structural differences in the composition of envelope lipids [62,63], glycans [24,64], and even surface proteins [65] of arboviruses replicating in mammalian versus insect cells. Sindbis virus, for example, was shown to bind more efficiently to hDC-SIGN and hL-SIGN, when produced in insect cells [66]. Furthermore, experimental RVFV infection in goats varied in early viral replication and immune response when animals were infected with RVFV maturated in C6/36 cells versus Vero cells (African green monkey cells) [67]. Consequently, RVFV derived from insect cells may interact with CLRs in a different manner than mammalian cell-derived virions. Previous studies on the binding of RVFV to hDC-SIGN investigated the interaction of hDC-SIGN with RVFV maturated in BHK-21, Vero, or other mammalian cells [24,45,58]. Here, we showed that RVFV matured in mosquito cells was bound by hDC-SIGN as well. Thus, host CLRs may be involved in recognition of initial virus infection after mosquito bites but may further interact with RVFV proliferating in the host. Whether species-specific differences in glycosylation, envelope lipids, or surface protein composition of bunyaviruses have a direct impact on CLR binding and downstream signalling has to be further investigated in future studies.
Besides finding murine RVFV binding candidates, we could observe differences between the binding affinity of murine and ovine CLR homologues. Different ligands [68,69] and diverse functionalities between CLR homologues [69,70] were already described in the context of bacterial infections. Even though the Langerin sequence is highly conserved among mammalian species, human and murine homologues showed miscellaneous binding to numerous bacterial polysaccharides [68]. The binding site of both receptors is highly similar, but even subtle changes besides the glycan-binding site seemed to yield diverse ligand specificity [68], as was also reported for the closely related human CLRs DC-SIGN and L-SIGN [71]. Consequently, homology in protein sequences of CLRs does not necessarily result in similar ligand-binding preferences, as also seen in our cross-species CLR-RVFV binding study for ovine and murine CLR homologues. Moreover, CLR homologues can bind different ligands, and their downstream signalling pathways and effector functions can vary among different species as well [69,70,72,73].
As arboviruses such as RVFV circulate between insect vectors and mammalian hosts, they are able to interact with very different host systems. On the one hand, they replicate in poikilothermic insects with innate cellular and humoral immune responses [74,75]; on the other hand, the same virus infects vertebrates and has to cope with their complex innate and adaptive immune mechanisms. Given that a virus only encodes a limited number of structural proteins, the question remains as to how arboviruses can interact with host and vector PRRs to maintain their cross-species transmission cycle. As RVFV is known to interact with host CLRs [24,45], and C-type lectin domain-containing proteins (CTLDcps) were described in Aedes aegypti [28], we hypothesised that RVFV may interact with these insect receptors as well. In total, 57 CTLDcps genes of Aedes aegypti are known so far [28]. In this study, we focused on five selected CTLDcps (CTLMA14, CTLMA15, CTLGA6, CTL23, and CLSP2), as their expression was upregulated after West Nile virus and/or Japanese encephalitis virus infection [29,30], suggesting a role in mosquito antiviral responses. While the CRDs of mosquito CTLDcps deviate from mammalian CLRs, conserved motifs, known for calcium-dependent glycan binding in myeloid CLRs, were found in all five mosquito CTLDcps. In our ELISA-based binding study, Aedes aegypti CTL23 (also named mosGCTL-11) and CLSP2 (also named CTLGA9) bound to α-1,6-linked mannose (mannan) and β-1,3-linked glucose (zymosan) in a calcium-dependent manner. This finding indicates that mosquito CTLDcps may recognise similar pathogen-associated molecular patterns as mammalian CLRs do, thereby highlighting the phylogenetic relevance of this PRR class for the innate recognition of viral pathogens. Whether these CRD motifs coevolved due to the high evolutionary pressure in host-pathogen interactions, or remained unchanged since the last common ancestor, remains unanswered. Xia et al. hypothesised that insect CTLDcps have undergone species-specific expansion, as the CTLDcps of Aedes aegypti, A. gambiae, A. pisum, and further insects form species-related clusters in a cross-species phylogenetic analysis [76].
Virus Cell Culture and Purification
Rift Valley fever virus strain MP12 was produced in seven T-175 flasks by infecting 80% confluent C6/36 cells at a multiplicity of infection (MOI) of 1. Mock infection was performed in the exact same manner by cultivating seven T-175 flasks of uninfected C6/36 cells. Virus-containing supernatant as well as mock supernatant were collected at 3 dpi, pooled, and cleared by centrifugation (1200× g; 20 min; 4 °C). Afterwards, supernatants were concentrated and cleared from host-cell-derived proteins via ultracentrifugation in an Optima XPN (Beckman Coulter, Brea, CA, USA) using SW32Ti and SW60Ti rotors (Beckman Coulter). This RVFV purification method was adapted from a protocol published for UUKV concentration [77]. First, 35 mL of virus or mock supernatant were filled into a 38.5 mL polyallomer centrifuge tube (Seton Scientific, Petaluma, CA, USA) and underlayered with 3 mL of 25% sucrose solution in 1× HNE buffer (10 mM HEPES, 150 mM NaCl, 1 mM EDTA, pH 7.3). Centrifugation was performed at 96,000× g and 4 °C for 2 h. The resulting pellets were resuspended in 1× HNE buffer. A sucrose density gradient centrifugation was performed to further purify the virus and remove host-cell-derived proteins. In a 4.4 mL clear ultracentrifuge tube (Seton Scientific), 600 µL each of 60%, 45%, 30%, and 15% sucrose solution in 1× HNE were layered from highest to lowest density. After adding 1.5-2.0 mL of virus or mock solution, the density gradient centrifugation was performed at 96,000× g and 4 °C for 90 min with deceleration set to minimum. The resulting virus band, and the mock fraction at the same density, were collected. To further clear the virus of remaining sucrose, the virus band was transferred into a 4.4 mL clear ultracentrifuge tube (Seton Scientific), underlayered with 0.5 mL of 25% sucrose solution, and centrifuged at 96,000× g and 4 °C for 90 min. The resulting pellet was resuspended in 250 µL of 1× HNE buffer and stored at −80 °C.
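The gradient solutions above can be prepared with a simple mass calculation, assuming the percentages are weight/volume (g per 100 mL) in 1× HNE buffer — an assumption, since the protocol does not state the convention:

```python
def sucrose_grams(percent_wv, volume_ml):
    """Grams of sucrose needed for a % w/v solution of the given volume."""
    return percent_wv / 100.0 * volume_ml

# Amounts for 50 mL of each gradient layer (60%, 45%, 30%, 15%)
for pct in (60, 45, 30, 15):
    print(f"{pct}% w/v: {sucrose_grams(pct, 50.0):.1f} g per 50 mL")
```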
The 50% tissue culture infective dose (TCID50) was determined using Vero E6 cells. Plaque forming units (PFU/mL) were estimated using the formula 0.69 × TCID50, as described previously [78].
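The conversion used above can be sketched directly; the factor 0.69 comes from the Poisson relationship between TCID50 and plaque-forming units [78]:

```python
def tcid50_to_pfu(tcid50_per_ml):
    """Estimate PFU/mL from a TCID50/mL titre (PFU ~ 0.69 x TCID50)."""
    return 0.69 * tcid50_per_ml

print(tcid50_to_pfu(1e8))  # a titre of 1e8 TCID50/mL corresponds to ~6.9e7 PFU/mL
```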
CLR-hFc Fusion Protein Production
Human, murine, and ovine CLR-hFc fusion proteins were produced, as described earlier [33][34][35]. Following these protocols, the mosquito CTLDcps were transiently expressed as chimeric hFc fusion proteins. In short, proteinogenic sequences of each mosCTLDcps, obtained from VectorBase AaegL5.1 genome assembly (AaegL5.1 IDs shown in Table 1), were synthesised by Eurofins Genomics (Ebersberg, Germany) and served as template DNA. To define the CRD, mosquito CTLDcps were aligned with known murine and ovine CLRs (Figure 1). The carbohydrate recognition domain (CRD) and the C-terminal parts of the proteins were amplified by PCR (primers shown in Table 1; amplified region shown in Figure 3A) and ligated into the pFUSE-hIgG1-Fc2 expression vector (InvivoGen, Toulouse, France). The sequences were verified by Sanger sequencing with a Mix2seq Kit (Eurofins Genomics) according to the manufacturer's protocol. For protein expression, the pFUSE-hIgG1-Fc2 plasmids encoding the CRDs ( Figure 3B) were transiently transfected with 25 kDa linear polyethylenimine (Polysciences, Warrington, PA, USA) into FreeStyle™ CHO-S cells (Thermo Fisher Scientific). The negative control (hFc empty) was produced in the exact same manner, by transfecting the empty pFUSE-hIgG1-Fc2 plasmid. After 96 h, secreted fusion proteins were purified from cell supernatant using HiTrap protein G affinity chromatography columns (GE Healthcare, Danderyd, Sweden) and final protein concentrations were calculated using a Micro BCA™ Protein Assay Kit (Thermo Fisher Scientific) according to the manufacturer's protocol.
Western Blot
Western blot analysis and ROTI®-Blue staining of SDS-PAGE gels were performed to verify the size, integrity, and purity of the produced fusion proteins. A total of 0.3 µg of each protein was separated by denaturing SDS-PAGE and transferred to a nitrocellulose membrane. After blocking with milk powder overnight and staining with a goat anti-human IgG-horseradish peroxidase antibody (Dianova, Hamburg, Germany) for 1 h, the membrane was covered with SuperSignal™ West Dura solution (Thermo Fisher Scientific) as described in the manufacturer's manual. Chemiluminescence was detected using a ChemiDoc™ MP System (Bio-Rad Laboratories, Hercules, CA, USA). For purity control, the gel was stained with ROTI®-Blue (Carl Roth, Karlsruhe, Germany) overnight and imaged using a ChemiDoc™ MP System.
ELISA-Based RVFV MP12-CLR Binding Studies
Overnight, wells of a medium-binding half-area 96-well plate (Greiner Bio-one GmbH, Frickenhausen, Germany) were coated with either 1 µg of mannan (Sigma-Aldrich, MO, USA) or zymosan (Sigma-Aldrich) in 50 µL PBS, or 50 µL of 1 × 10 8 PFU/mL purified RVFV MP12 or mock (see Section 2.2. Virus Cell Culture and Purification). The coated wells were washed three times, each with 150 µL of washing buffer (1X PBS, 0.05% Tween-20), and then blocked with 150 µL of 1% BSA (fraction V, IgG free, fatty acid poor, Thermo Fisher Scientific, Darmstadt, Germany) in 1X PBS for 2 h at room temperature, to prevent unspecific binding. The plate was again washed, followed by the addition of 0.25 ng/well fusion proteins diluted in 50 µL lectin binding buffer (50 mM HEPES, 5 mM MgCl 2 , 5 mM CaCl 2 , pH 7.4) for one hour. To test for calcium dependency, the fusion proteins were diluted in EDTA buffer (10 mM EDTA, 50 mM HEPES, pH 7.4) instead of lectin binding buffer. The plate was washed again, and 50 µL of anti-human IgG-horseradish peroxidase (HRP) antibody (Dianova, Hamburg, Germany) diluted 1:5000 in 1X PBS with 1% BSA and 0.05% Tween-20 was added. After one hour of incubation, the plate was finally washed before adding 50 µL of substrate solution (O-phenylenediamine dihydrochloride substrate tablet (Thermo Fisher Scientific), 24 mM citrate buffer, 50 mM phosphate buffer, and 0.04% H 2 O 2 ). After 5 min of colour development, the reaction was stopped with 2.5 M sulfuric acid, and absorbance was measured at 495 nm using a Multiskan GO microplate spectrophotometer (Thermo Fisher Scientific).
Statistical and Phylogenetic Analysis
Nucleotide and amino acid sequence analyses, as well as alignments and primer design, were performed using Geneious Prime 2020.1.2 software (San Diego, CA, USA). Biophysical properties of the CLR-hFcs, such as protein molecular weight and transmembrane domain position, were assessed using the Sequence Manipulation Suite [79] and TMHMM-2.0 [80,81], respectively. The phylogenetic analysis was carried out with MEGA 11.0.10 software (www.megasoftware.net, 13 December 2021) with 500 bootstrap replicates. Protein and nucleotide sequences were obtained from the National Centre for Biotechnology Information (NCBI) genome database (www.ncbi.nlm.nih.gov/genome, last access date: 13 December 2021) or from VectorBase (www.vectorbase.org, last access date: 13 December 2021) (Tables 1 and 2). ELISA data plots were generated with GraphPad Prism 7 software (San Diego, CA, USA), and metric data are represented as mean + SEM (standard error of the mean) for all experiments.
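The one-way ANOVA used for the ELISA comparisons can be illustrated without external libraries; this minimal sketch computes only the F statistic (the subsequent pairwise Tukey tests are omitted):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over lists of replicate measurements."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two toy groups of duplicate absorbances
print(one_way_anova_f([[1.0, 2.0], [3.0, 4.0]]))  # -> 8.0
```

In practice a statistics package would also report the p-value and run the Tukey post-hoc comparisons, as done here with GraphPad Prism.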
Conclusions
In this study, we identified candidate CLRs from human, mouse, sheep, and Aedes aegypti binding to RVFV. Furthermore, species-specific differences in CLR homologues in RVFV binding were observed. Our findings may present a first step towards a better understanding of virus-CLR interactions across species and may help to develop novel strategies for interfering with such interactions.
Data Availability Statement:
The data presented in this study are available on request from the corresponding authors.
"year": 2022,
"sha1": "e5fd687e1ef2818df4b119d3c8ecd5fa814de645",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/6/3243/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c91342751ec31c88300305ecb2bcc64fe155d9ca",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The objective of this article is to introduce the topic of Student Relationship Management (SRM) in Germany. The concept has been derived from the idea of Customer Relationship Management (CRM), which has already been successfully implemented in many enterprises. Its objective is to canvass for customers, obtain their loyalty towards the company and, if necessary, win them back. Furthermore, potential uses of SRM within the context of Higher Education Management will be demonstrated by means of examples of German universities and by applying new methods.
Motivation
Students in Germany have become more and more demanding. At the same time, the motives of high school graduates for choosing their study place(s) have changed over the past years. The German university information system HIS ascertained that 65% of first-year students chose their university because of their place of residence and/or the "hotel mummy". Good equipment of the university is an important criterion for 58% of high school graduates, as is the reputation of the university (52%). Nevertheless, for 90% of high school graduates it is above all important that the courses offered correspond to their specialized interests (Heine et al. 2005: 193f.).
Furthermore, information offered by university rankings is used more and more frequently. Not only German but also foreign students select German universities on the basis of ranking results, which are available in English as well. It is therefore not surprising that faculties with good ranking results register more students in the following term than in previous years. However, neither university managements nor education politicians are aware of this competition trend, because they associate competition with research areas and professors rather than with students. Good universities, though, are not only characterized by outstanding researchers but by excellent students as well (Spiewak 2005). Thus it is not surprising that several universities have already taken first steps to bind potential students, e.g. by offering workshops for pupils interested in mathematics (University of Munich) or by "children's universities" (Technische Universität Dresden).
Consequently, the goal of this article is to introduce the topic of Student Relationship Management (SRM). The concept deals with the holistic, systematic care of the "business relationship" between university and student, whereby service quality in particular becomes an increasingly important aspect (Göpfrich 2002: 97). Thereby, the satisfaction of students can be increased and the commitment between students and their university can be intensified, also beyond the final degree.
In addition to a lasting relationship between students and their university, student satisfaction is a crucial factor. Therefore, the general concept of Customer Relationship Management (CRM) will be defined. Based on this general theory, a model describing the idea of a SRM will be designed. Moreover, applicable measures in the Management of Higher Education are presented. Furthermore, the article intends to show the potential of SRM by discussing appropriate applications focused on increasing student satisfaction.
Customer Relationship Management as basis for SRM
In order to discuss the possibilities of applying the framework of CRM -as used in the economy -to Higher Education, there has to be a basic understanding of the concept itself. Therefore, definitions of CRM as well as SRM need to be pointed out. Furthermore, three modules of a holistic CRM as well as the framework of a customer relationship lifetime cycle (CRLC), representing the basis of the model proposed for managing university-student-relationships, are to be described in their basic elements.
Defining Customer Relationship Management
Many economists have addressed the concept of CRM in theoretical models, which has led to an abundance of definitions of the concept in the economic literature. Since the different approaches seem rather contradictory in some cases (for example Homburg/Bruhn 2003: 8; Schumacher/Meyer 2004: 19), formulating a working definition appears reasonable. Customer Relationship Management, as understood in this article, is a fundamental strategic orientation which is pursued by all members of a company in order to increase customer satisfaction, customer loyalty and the benefit for the consumer as well as for the company during the entire supplier-customer relationship.
Holistic Customer Relationship Management
The concept of the holistic CRM consists of three components - analytical, operational and collaborative CRM - which are built upon each other. The particular modules of an analytical CRM are, on the one hand, a data warehouse as data pool and, on the other hand, the analysis of the data, provided that the data have already been collected and are thus available. In the operational CRM step - built upon the analytical step - precise activities can be derived on the basis of the gained knowledge. The collaborative CRM is the last step and contains the intensive and more or less individual contact with the student (Töpfer 2004: 228).
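As a toy illustration of the analytical CRM step, (hypothetical) student records from a data warehouse can be aggregated into segments on which the operational CRM could then act. All field names and thresholds below are illustrative assumptions, not part of the SRM concept itself:

```python
students = [   # records as they might come from a student data warehouse (made up)
    {"id": 1, "semester": 2, "events_attended": 5, "credits_per_term": 30},
    {"id": 2, "semester": 9, "events_attended": 0, "credits_per_term": 8},
    {"id": 3, "semester": 6, "events_attended": 3, "credits_per_term": 25},
]

def segment(s):
    """Assign each student to a rough lifetime-cycle segment (thresholds are illustrative)."""
    if s["credits_per_term"] < 15 and s["events_attended"] == 0:
        return "at-risk (churn management)"
    if s["semester"] <= 2:
        return "new student (socialization)"
    return "established (loyalty management)"

print({s["id"]: segment(s) for s in students})
```

The point of the sketch is only the division of labour: the analytical step produces segments, and the operational and collaborative steps decide which concrete measures each segment receives.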
While in chapter 3.4 the implementation of the operational and collaborative CRM is described, chapter 4 is dedicated to a possible implementation of the analytical CRM in Higher Education.
Introduction to the Customer Relationship Lifetime Cycle
Due to the cyclical development of the supplier-customer-relationship, Stauss (2000): 15, proposes the transfer of the concept of the product life cycle to customers. In his words, many companies nowadays are facing a change in perspective, moving from product to consumer lifetime focus of the cycle. As illustrated in Stauss (2000): 16, the CRLC is divided into several phases, each of them being assigned to one of the three kinds of customer-oriented management tasks -management of prospective customers, customer loyalty or customer win-back.
According to Stauss (2000): 16, during the initiation stage the customer shows particular interest in products or services offered by a company, although he does not contribute to the financial profit of the company. In this stage an efficient management of prospective customers -with the goal of initiating new business relations -is essential. Deciding to buy a product or a service offered by the company enables the customer to gain first experiences concerning the range of products and the support provided by the supplier. Consequently, he enters the socialization stage -the first stage of the management of customer loyalty -in which the goal of the company should be the consolidation of this new business contact by using an efficient management of new customers (Stauss 2000: 16 ff).
In analogy to the rising sales figures in the chronological sequence of the product life cycle, the customer enters a growth stage by generating rising revenues through follow-up purchases. At the end of this stage, the consumer still contributes high sales volumes, but with stagnating expansion rates. He therefore enters the maturity stage of the model. During these two phases, the company needs to apply a management of customer satisfaction in order to strengthen already stable business contacts (Stauss 2000: 16 ff).
Following the description of the customer lifetime cycle by Stauss (2000): 16 ff, the decline stage is characterized by a stagnating or even declining intensity of contacts between the customer and the company. For the company it becomes essential to identify endangered relationships by applying an effective complaint management and to impede abrogation by means of churn management 1 . If the company is not successful in communicating its advantages to the customer, he enters the phase of termination. This does, however, not imply that he is to be considered as a lost customer. Rather, the company has to convince him to withdraw his notice by management of termination (Stauss 2000: 16 ff). Should the customer nevertheless abort the relationship, Stauss (2000): 16 ff emphasizes, that the company may try to revitalize it. However, after considering the termination reasons, an abstinence stage may be appropriate.
After describing the CRM as a basis for the following discussion, the focus of the subsequent chapters will be the introduction of the SRM.
Student relationship
In order to create a common understanding, the concept of SRM is to be defined in accordance with the working definition of CRM proposed in chapter 2.1. In addition, the reason why institutions of Higher Education in Germany should, or indeed will have to, introduce such a strategic orientation will be explained.
The content of the following pages is furthermore the development of a SRM concept based on the CRLC described in section 2.3. The intention is to provide a model on which institutions of Higher Education can base its efforts to bind students as early and as long-dated as possible.
Reasons for introducing SRM in German institutions of higher education
Due to the shift in the self-conception of German institutions of Higher Education towards being a service provider in research and teaching, and due to the increasing stress of competition between academies (Heiling 2003: 4), the question arises how the concept of a successful CRM can be applied to German universities. Another fact contributing to the increasing interest in possibilities of introducing a SRM is decreasing public funds in general and savings in education in particular. Trends show that expenditures for teaching and research have been continuously rising over the past years, while financial support by the state remained stable (Hahlen 2005: 6f). Therefore, institutions of Higher Education need to apply fundraising for further financial support in addition to governmental financial aid. Potential financiers are - next to the state and public trusteeships - graduates of the academy (Giebisch/Langer 2005: 16). The prerequisite is, however, that they have been successfully bound to their educational institution during their time of studies (Ziegele/Langer 2000: 46). Accordingly, German universities need to learn from the economy and modify the concept of a CRM in order to fit their particular needs. As Lemon (2004: 2) discovered, students increasingly see themselves as customers who purchase education services from competing providers. Consequently, German institutions of Higher Education have recognized that offering education is not sufficient to distinguish a university from other academies and to attract prospective students (Oetker 2000: 4).
1 Churn Management is an artificial word, consisting of "change" and "turn" respectively "return". It describes the attempt to avoid the loss of customers and to reduce the rate of shifts to competitors. It is also sometimes referred to as "Regain Management".
Due to increased competition between educational institutions, current and prospective students are increasingly able to enforce their requirements (Spiewak 2005). Therefore, the main advantage for them, arising from the implementation of a SRM, is the consideration of their interests. Furthermore, an effective SRM can improve the quality of teaching by integrating alumni and their practical experience into the lectures, thereby discussing and analyzing real-life cases. Consequently, graduates will be better prepared for their duties on the job, which in turn earns their university a better reputation among companies. This improvement of the image of the educational institution leads to improved prospects for alumni once they enter the labour market. Students interested in highly paid employment will therefore choose the institution in order to establish the basis for their subsequent success in professional life. Increasing numbers of matriculations lead to a higher allocation of funds to the university by the state, allowing a better financial endowment. Once the academy has acquired such a status, it can create better study conditions, e.g. by improving teaching material, offering a better ratio of students to employees and increasing the number of books available at the library. These improved conditions initiate another cycle starting with the attraction of more students (Spiewak 2005).
Huber (2003) has mentioned that academic perception is about to become an investment-relevant asset securing a welfare state's wealth in the long run. Therefore, it is important to attract and bind new students to German educational institutions using an effective SRM to ensure the prosperity of the nation. Huber (2003: 99) has pointed out that today about 14 percent of all German graduates leave their country - and almost one third of them stay abroad. Consequently, the country loses its most brilliant people, which leads to a deterioration of teaching as well as research. To be able to compete on the global market, however, a country needs smart and educated people.
Transferability of CRM to institutions of higher education
In order to apply the model of CRM to the concept of SRM, it is necessary to prove the conformity of the services of institutions of Higher Education with the term services as understood economically. Since the intention of the present paper is an introduction to a student orientation in academies, services offered to companies, research establishments and other institutions are not considered. Haller (2005: 6 f) has identified two basic characteristics of services. The first feature - immateriality - is closely related to the concept of intangibility, implying that a service is neither tangible nor perceptible through the senses of the customer. Therefore, services are characterized by a high proportion of confidence and expert-knowledge attributes compared to search attributes, which can be appraised by the customer in advance. Furthermore, services are neither storable nor portable (Haller 2005: 7). These aspects are also applicable in the context of education. When choosing an educational institution, the prospective student is neither able to estimate the quality of the corresponding education nor its capability to impart professional competence useful to his career.
The second characteristic - the integration of the external factor - implies that without integrating either the customer himself or an object belonging to him, the service delivery is impossible. Consequently, supply and demand have to be synchronous in time and space, a feature called the "uno-actu" principle (Haller 2005: 8). When applying these aspects to institutions of Higher Education, the student is to be considered the external factor receiving the service "education". Through his dedication during the lectures, or even as a student assistant, the student essentially influences the result of the service (Langer et al. 2001: 8). In contrast to the customer in a traditional service process, university education is a central factor in the life of a student. Furthermore, to accomplish a successful graduation, the student needs to exhibit considerable intellectual capability as well as high learning motivation (Hennig-Thurau et al. 2001: 332). As the discussion above shows, the service offered by institutions of Higher Education to their students is comparable to the term service as used within the economic context. Therefore, the principles of an efficient CRM can be applied to the sector of Higher Education.
Development of a model for SRM
Since the concept of Student Relationship Management is a relatively new approach in German Higher Education research, most German definitions existing at this time (e.g. Pausits 2005: 145) are based on English proposals. In order to create a common understanding, it seems appropriate to define the term "Student Relationship Management" as a basis for the following illustration.
In accordance with the working definition of CRM proposed in chapter 2.1, the concept of SRM used in this article is to be understood as a fundamental strategic orientation of the entire academy, aiming at increasing student satisfaction and creating additional value for the students as well as for the academy. The goal is to bind students to the academy not only during their years of study but particularly after their graduation.
Even though many scholars have discussed the issue of student loyalty and its significance for institutions of Higher Education, a generally acknowledged model of SRM has not been published so far. Recognizing this fact, Hennig-Thurau et al. (2001: 332) proposed a description of student loyalty as an idealistic multi-phase concept, covering all stages from enrolment to graduation. For Langer et al. (2001: 63), the prerequisite for an effective linkage between graduates and their academy - generating positive effects for both sides - is the conception of a model of loyalty management.
Both approaches, however, consider neither initiating relationships with prospective students nor staying in contact with graduates. In order to include these aspects, the CRLC presented in chapter 2.3 will be modified and applied to institutions of Higher Education. The result is an idealistic cycle of a student-university relationship covering all stages from the management of prospective students to churn management. When applying the concept of CRM to Higher Education, the student is to be considered the customer requesting the service "education" (Weinberger 2004: 38).
The main difference between the CRLC and the cycle proposed for students lies in the different stages of the process. As figure 1 shows, the student relationship lifetime cycle (SRLC) covers only the stages initiation, socialization, risk of churn, growth, maturity, abstinence and revitalization. The maturity stage ends with the graduation of the student, which implies that he no longer requests the service "education". Therefore, neither the decline stage nor the phase of termination needs to be considered. Furthermore, the idealistic course of the supplier-customer relationship has been adjusted to fit the target group. Due to the missing phase of termination, the intensity of the relationship between alumni and academy decreases faster than in the case of customer and service provider. Moreover, the possibilities for graduates to do a doctorate or to apply for postgraduate professional education at the university have to be considered. In this case, the relationship remains on a high level, which represents a prolongation of the maturity stage. Most students, however, leave their alma mater after graduation. In this case, the institution's objective should be not to completely lose contact with its graduates. This circumstance is considered by inserting the dotted line into the model, which represents the ideal progression of the relationship after implementing an effective SRM. As already described for the CRLC, the first issue which institutions of Higher Education have to consider is the management of prospective students. In this stage, an adaptation to the requirements of universities is not necessary. Once the university has been successful in attracting the application of a prospective student, the fragile relationship needs to be further developed and strengthened.
The manifold offers provided for students at American Universities could be an example for German academies, because institutions of Higher Education in the United States have been successful in binding their students for more than 200 years. In particular, Erhardt (2000): 7 emphasizes the abundance of services available to students and the excellent personal supervision as well as institutionalized rituals, ceremonies and traditions, and the universities' corporate identities.
In analogy to the CRLC, the satisfaction of the students needs to be increased during the growth and maturity stages of the relationship. According to a study conducted by Langer in cooperation with the "Centrum für Hochschulentwicklung Gütersloh", the perceived quality of teaching has by far the highest impact on student loyalty. Other important factors are emotional commitment, meaning the perceived responsibility of students towards their university and faculty (Ziegele/Langer 2000: 47), and the academic integration of the students as well as their ambition to reach their individual objectives (Langer et al. 2001: 47). According to Ziegele/Langer (2000: 48), there are many factors influencing the quality of teaching. They named, for example, a vast range of teaching offered, the supervision of students by university employees, and the modalities concerning examinations as well as recreation offers.
If a student enters the abstinence stage by matriculating at another university or by terminating his university education prematurely, he will at first seem out of reach for the educational institution. Consequently, his potential for a longer relationship decreases (Krempkow/Pastohr 2003: 14). Keeping the student from entering this stage should be the objective of an effective loyalty management. Therefore, it is necessary to identify the reasons for shifting to another university or dropping out. In addition, students could be informed about further perspectives when staying at the university. If the student decides to leave the institution of Higher Education despite all efforts, the university can try to reinitiate the relationship by management of revitalization.
Particular measures applicable to the model
What can an institution of Higher Education do to bind its students during and after their time of studies? Which measures can be applied so that students identify themselves with the university and are ready to stand up for it - either financially or actively? In this section, measures will be proposed to help institutions of Higher Education answer these questions. According to Langer et al. (2001: 64), only measures compatible with the universities' mission to educate will be conceptualized.
Management of prospective students
In the first stage of the SRLC, prospective students or their parents start looking for an appropriate educational institution. Due to the immateriality of the service education, surrogates are used to estimate its quality. Therefore, it is important to emphasize the institution's strengths and competencies during the initial steps of the SRLC.
Hilbert, Schönbrunn, Schmode: Student Relationship Management in Germany
One possibility to create the impression of a high-quality institution is to design the web page and information leaflets using a uniform layout. Many universities, such as the TU Dresden, have started to implement a corporate design in order to facilitate orientation when using the web pages to find information. According to Heiling (2003: 2), the allocation of responsibilities at a university is usually intransparent, especially for prospective students. Reasons are often the autonomy of the faculties and the absence of coordination between them. Such circumstances give the impression of a mismanaged institution, leading to a negative perception of the university. In order to avoid such undesirable effects, it is necessary to implement a corporate, mutually agreed information strategy stating the competencies and the channels of information used. Another possibility, already widely used to spark potential students' interest in the offerings of the university, are so-called information events. These events serve as a contact point for prospective students to receive a first impression of the educational institution. Since even today - in the era of the internet - it is not possible to experience the adventure "university" online, these events have lost none of their importance. Some examples to be mentioned in this context are the "open house day", the so-called "Schnupperstudium" and the "Schüleruniversität" offered by the TU Dresden. The intention of the initiative introduced as open house days is to present every faculty with its educational offers. On these days, the Technische Universität Dresden offers the possibility to attend lectures, to hold conversations with either students or scholars, and to tour the different research establishments (Zentrale Studienberatung 2006c). The Schnupperstudium aims at reaching high-school students as well as teachers who want to find out more about the university.
During these days, prospective students are invited to attend lectures in order to get an impression of the everyday life of a university student. Furthermore, information on various branches of study is offered, next to the possibility to talk to scholars (Zentrale Studienberatung 2006b). The last example is the Schüleruniversität at the Technische Universität Dresden, aiming at high-school students with high intellectual potential who are interested in continuing their education. Admitted junior and senior high-school students are allowed to attend preselected lectures - especially in mathematics, natural sciences, engineering, and computer science - thereby obtaining their first credit points counted toward a university degree. The intention is to patronize high-school students in order to bind young talents closer to the TU Dresden (Zentrale Studienberatung 2006a). From the point of view of the institutions of Higher Education, the selection of prospective students is an important factor in improving their subsequent loyalty. Müller-Böling (2000: 10 f) has stated that criteria and methods used to select students should be utilized to communicate the values as well as objectives of the university and to create a connection between those values and objectives and the goals of the prospective students.
At the Albert-Ludwigs-University Freiburg, for example, prospective students are able to use a self-assessment tool to decide whether or not their idea of their chosen field of study corresponds to the actual requirements - before applying online for a college place. This shows that the university has already acknowledged the importance of student selection, defining it as a strategic process. The final goal is to minimize the risk of wrong decisions during the recruiting process (Albert-Ludwigs-Universität 2006: 3ff).
Management of student loyalty
According to Müller-Böling (2000: 10), relevant factors for binding students to their university are especially supervision during the years of study and assistance when entering professional life. As Heiling (2003: 4 ff) argued, service quality, next to an excellent reputation in teaching and research, is an important criterion when competing for the most talented students. However, it is service quality which is still neglected in the German Higher Education sector. The number of matriculated students increased slightly during the winter term 2005/2006 (October - March), remaining at a level of about 1.98 million (Statistisches Bundesamt 2006: 6). Considering this high number, it is quite obvious that personalized services are a challenge for educational institutions which cannot be appropriately met facing a situation of decreasing financial state support. One means to meet the service needs of such a high number of matriculated students is the implementation and usage of information technology. Heiling (2003: 7 f) proposes the introduction of portal systems, thereby reducing the number of employees in charge as well as the multitude of different points of contact. The objective is to offer an institutional service platform which allows personalized access according to the needs of the student. In this context, one starting point could be the integration of different application and information systems, for example the administration of examinations, e-learning systems, e-mail services and library catalogues. However, the integration of these different systems requires a transmission of personal data, which has to be treated with care due to the strict regulations of the Bundesdatenschutzgesetz (BDSG) - German Federal Law for Data Protection - concerning the elicitation and disposition of data.
In §3(1) BDSG, personal data are defined as "Einzelangaben über persönliche oder sachliche Verhältnisse einer bestimmten oder bestimmbaren natürlichen Person" (English translation: "Particulars about personal circumstances or artefacts of a specific or a determinable natural person.") (BDSG 2006: 5). Name, gender, birth date and education are examples of information affected by this definition. In addition, under §4 BDSG, personal data may only be used for further processing if this has either been permitted by law or by another legal provision, or if the party concerned has agreed by means of a contract (BDSG 2006: 5 ff). To meet these restrictions when offering the different functionalities of an implemented portal system, the students' authorization concerning the collection, usage and archiving of these personal data needs to be obtained. Since users might only want to take advantage of the basic functionalities out of the wide range of possibilities offered by the system, a global gathering of the data is to be avoided. Students should be able to authorize the concession of their personal data according to the functionalities which are of particular interest to them. Such a system impedes the usage of personal data for undesired services and ensures the protection of the personal rights of every individual. Major disadvantages of the introduction of such a comprehensive and global information portal are the dimensions and the complexity of the system. Since this can be seen as one of the reasons for the existence of many information and communication channels which are neither standardized nor synchronized, Heiling (2003: 9) proposes a multi-channel access. It enables students to use different media in order to obtain information and to contact the staff of the university. When implementing such a multi-channel access, however, it is necessary to guarantee the conformance of the contents of the different channels offered.
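The per-functionality consent scheme described above can be sketched as a simple data model. This is only an illustrative assumption, not part of any actual portal system: the class name, method names and functionality labels below are invented. The point is merely that each portal service checks its own consent flag before releasing any personal data.

```python
# Minimal sketch of per-functionality consent as motivated by §4 BDSG:
# personal data may only be processed for purposes the student has
# explicitly authorized. All names here are hypothetical.

class StudentProfile:
    def __init__(self, name):
        self.name = name
        self._consents = set()  # functionalities the student has authorized

    def grant_consent(self, functionality):
        self._consents.add(functionality)

    def revoke_consent(self, functionality):
        self._consents.discard(functionality)

    def data_for(self, functionality):
        """Release personal data only if consent for this exact purpose exists."""
        if functionality not in self._consents:
            raise PermissionError(f"no consent for '{functionality}'")
        return {"name": self.name}

profile = StudentProfile("Erika Mustermann")
profile.grant_consent("exam-administration")

print(profile.data_for("exam-administration"))  # authorized purpose
try:
    profile.data_for("alumni-mailing")          # never authorized
except PermissionError as e:
    print("blocked:", e)
```

Because consent is stored per functionality rather than globally, revoking one authorization leaves the others intact, which mirrors the requirement that data must not be gathered or reused beyond the purposes the student actually selected.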
Heiling (2003: 9) adds that concentrating on a small number of communication channels, for example the internet or e-mail, is not desired by all students. Therefore, contact possibilities such as visiting times, telephone conversations and postal services should remain in existence.
Many software providers have already identified the potential of portal solutions for institutions of Higher Education. SunGard Higher Education, for example, offers a system designed to fit the students' needs throughout their whole university enrolment. This system, which is currently being implemented at the University of Iowa, for example, offers students all the information they need concerning their field of study - ranging from the curriculum to contact information - and thus improves the perceived quality of service (SunGard Higher Education 2005 and SunGard Higher Education 2007).
Next to the improvement of services, another crucial point is the variety and quality of the teaching offered. For Langer et al. (2001: 5, 48), advancing the quality of teaching is the most important element in the process of binding students. Primarily, they say, an institution of Higher Education should concentrate on the configuration of the teaching offered and the supervision of students by professors and other staff. The organization of examinations, services and infrastructure as well as spare-time offers are secondary.
An evaluation of these facts leads to priorities concerning the implementation of quality-improving measures. In addition, Langer et al. (2001: 5) detected that improving the quality of teaching by itself is not enough to bind students in the long run. Therefore, on the one hand, the university should offer events and activities on a facultative basis. On the other hand, it is important to support students' initiatives and to broadly integrate students into teaching and research. Since these approaches to academic integration create a higher emotional commitment - defined as voluntary commitment - they are more effective in achieving the objective of binding students than social integration achieved through university sports programs and parties. The basis of a high commitment towards the university is a high degree of integration of the student into the workflows of the academy.
Management of retrieval
After students leave the university, the institution should attempt not to lose contact with its alumni. By doing so successfully, it retains the possibility of convincing them to return to their alma mater for further cooperation. In this context it is not only important to regain them as prospective students, but also to keep them integrated in scientific activities. Consequently, valuable practical experience can be integrated to improve the quality of teaching, for example through visiting lecturers. This mode of cooperation reflects the definition of SRM elaborated in section 3.3, which stated that the relationship should also generate value for the university. The prerequisite for this positive collaboration is to keep alumni informed about postgraduate professional education on the one hand and up to date concerning the university's research on the other. Alumni networks are a central contact point for alumni who remain interested in events taking place at their alma mater. These networks are designed to meet the requirements and needs of graduates. According to a study conducted by Krempkow/Pastohr (2003: 75), top earners, for instance, are willing to keep in contact with their faculty, but usually they live and work far away. Therefore, one of their needs is the availability of information through media which are able to overcome this distance. The alumni network of the TU Dresden, for example, consists of different associations, each of which is assigned to a faculty. Their members regularly receive information and benefit from extensive possibilities of sharing technology and know-how. Responsible for the supervision of alumni is an office which keeps graduates informed about events according to their field of study.
Furthermore, next to an alumni platform providing, for example, information about alumni-specific news, job offers and friends of the university, as well as the online newspaper "Kontakt-online", printed media containing up-to-date information are also published on a regular basis. In addition, an alumni day is organized every year in conjunction with the university day (Kokenge 2006: 8 f). In the year 2000, the "Stifterverband für die deutsche Wissenschaft" 3 initiated a challenge called "Alumni-Netzwerke" (alumni networks). Competing with other universities in terms of organization and service, nexus - the alumni association of the faculty of economics of the TU Dresden - was rated third (Albrighton 2000: 52). This excellent ranking shows that the faculty of economics is already successful in working with its alumni and is therefore considered an ideal for other alumni organizations (Meyer-Guckel 2000: 60 ff). The examples listed above show that if an institution of Higher Education aims at reaching its former students - graduates as well as college dropouts - it has to keep them informed about further offerings of the university.
Prospect: Data mining and its range of applications in higher education management
The previous results deal with the operational CRM - in terms of a student-oriented adjustment of educational processes for strategically student-group-oriented behaviour - and the collaborative CRM - in terms of intensive and individual contact with the students. In the following chapter, the possibilities of the analytical CRM are shown exemplarily, such as the improvement of service quality for students on the basis of the results of data mining analyses. Data mining is an integral part of knowledge discovery in databases (KDD), which is the overall process of converting raw data into useful information (Tan et al. 2006). This process is represented in Figure 2 and consists of five steps. During data preprocessing, the input data are prepared (e.g. by means of dimension reduction). Afterwards, the individual data mining methods are applied, followed by the postprocessing of the data (e.g. the visualization of gained information) and the extraction of the gained information.
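The preprocessing, data mining and postprocessing stages described above can be illustrated with a toy pipeline. The student records, field names and the trivial "mining" statistic below are invented for illustration only; the point is how each stage hands its output to the next, mirroring the KDD flow.

```python
# Toy KDD pipeline: raw data -> preprocessing -> data mining -> postprocessing.
# All records and field names are hypothetical.
raw_records = [
    {"student": "A", "credits": 30, "semester": 2},
    {"student": "B", "credits": None, "semester": 4},   # incomplete record
    {"student": "C", "credits": 90, "semester": 6},
]

def preprocess(records):
    """Data preprocessing: drop incomplete records, derive a simple feature."""
    clean = [r for r in records if r["credits"] is not None]
    for r in clean:
        r["credits_per_semester"] = r["credits"] / r["semester"]
    return clean

def mine(records):
    """Data mining step (here reduced to a trivial descriptive statistic)."""
    rates = [r["credits_per_semester"] for r in records]
    return sum(rates) / len(rates)

def postprocess(avg_rate):
    """Postprocessing: turn the raw result into presentable information."""
    return f"average study pace: {avg_rate:.1f} credits/semester"

print(postprocess(mine(preprocess(raw_records))))
```

In a real analysis the `mine` step would of course apply one of the methods discussed below (clustering, classification, association analysis) rather than a simple average, but the staging of the pipeline stays the same.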
In the data mining step, a multitude of trans-sectoral data analysis methods can be used, which are derived from the fields of statistics, artificial intelligence, and machine learning (Petersohn 2005: 57 ff). Table 1 shows possible questions in the field of Higher Education Management which can be analysed and answered with data mining methods (e.g. "What type of courses will attract more students?"). These are equivalent to the corresponding questions in the economy. Real and hypothetical (case) studies in the context of Higher Education Management have already been conducted, especially in the USA. One example is the identification of student typologies, which are used to divide students into different groups of learning typologies. In order to do so, students were identified and clustered by means of different data mining methods. Furthermore, forecasts of the successful completion of studies or drop-out can be generated and, based on these, students can be supervised. Luan (2004: 6) shows how data mining helps universities to focus on the alumni most likely to make pledges and to optimize mailing costs, especially when considering outliers (e.g. unexpectedly high donations of alumni). In summary, the results of the data mining analysis can be used to better allocate resources and staff, proactively manage student outcomes, and improve the effectiveness of alumni development in educational institutions (Luan 2004: 4-7).
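The clustering of students into learning typologies mentioned above can be sketched with a minimal k-means implementation. The features (hours of self-study per week, lectures attended per week) and their values are invented for illustration; the cited studies used richer data and more elaborate methods.

```python
import random

# Tiny k-means sketch for grouping students into "learning typologies".
# Each tuple is a hypothetical (self-study hours/week, lectures attended/week).
students = [(2, 1), (3, 2), (2.5, 1.5),      # low-engagement profile
            (12, 9), (11, 10), (13, 8)]      # high-engagement profile

def kmeans(points, k, iterations=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return clusters

for i, cl in enumerate(kmeans(students, k=2)):
    print(f"typology {i}: {sorted(cl)}")
```

On this well-separated toy data the algorithm recovers the two engagement profiles; a university could then, for example, target supervision offers at the cluster whose profile correlates with drop-out risk.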
One data mining method is association analysis, in which correlations between jointly occurring objects are analysed. This may concern, e.g., the products of a supermarket which customers bought together - the so-called basket analysis (Bollinger 1996: 257).
In the field of Higher Education Management, it is not products but the frequently selected major fields of study of students that can be examined. Within the framework of SRM, students can benefit from the results of such analyses, for example by reducing overlaps in their curriculum and thus minimizing their duration of study. Consequently, attention could be paid during the development of the timetable to avoiding the overlapping of courses of two major fields of study. Thus, students have the possibility to enrol for courses of both major fields of study in the same semester and do not have to wait for one or two semesters. Furthermore, students can be supported by faculty staff according to their individual needs. For prospective students, it offers the possibility to survey combinations of major fields of study whose relevance has been proven in practice - derived from historical data. Based on these individual offers for students, the university is able to improve its image, which leads to higher numbers of student enrolments and in turn to a higher allocation of funds. Another benefit of data mining in the framework of SRM arises from analyzing the donation behaviour of alumni - using the obtained information for the individual mentoring of alumni. As Luan (2004: 2) points out, one "way to effectively address [..] student and alumni challenges is through the analysis and presentation of data, or data mining".
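A basket-style association analysis over major-field combinations can be sketched by simply counting pair co-occurrences and computing their support. The enrolment data and field names below are invented for illustration; real studies would work on actual enrolment records and use full frequent-itemset algorithms such as Apriori.

```python
from collections import Counter
from itertools import combinations

# Hypothetical "baskets": each set is the major fields one student enrolled in.
enrolments = [
    {"Finance", "Accounting"},
    {"Finance", "Accounting", "Statistics"},
    {"Marketing", "Statistics"},
    {"Finance", "Accounting"},
    {"Marketing", "Finance"},
]

# Count how often each pair of major fields is chosen together.
pair_counts = Counter()
for basket in enrolments:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Support = share of students who chose both fields of the pair.
n = len(enrolments)
for pair, count in pair_counts.most_common(3):
    print(f"{pair}: support = {count / n:.2f}")
```

High-support pairs are exactly the combinations for which timetable overlaps should be avoided, so that students can attend courses of both major fields in the same semester.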
However, in this context, acting in conformance with the German data protection act is problematic. Further analyses and methods require data which are available but may not be used (e.g. enrolment data). According to §4 paragraph 2 of the Federal Law for Data Protection (in the version of 2003) (Gola/Schomerus 2005), data may only be used for the purpose they were collected for. A possible solution to this restriction may be to set up a data warehouse, as already implemented in the United States. Facing a similar legal situation, in which data protection concerns presented comparable constraints to data analysis, researchers there collect their data using survey portals such as the NSSE (National Survey of Student Engagement). The corresponding basis for data analysis is a data warehouse which contains a large data set of about 100 colleges and resembles the data warehouses used in the economy.
Result and prospect
As shown above, potential uses of CRM can be found throughout the field of Higher Education. Due to changes in the educational system - e.g. the increasing international competition in the education market - and persistent financial shortages, most German universities have already recognized that they have to strive for the satisfaction of students as well as alumni (Ewers 2000: 24 f). First approaches to introducing a SRM - enhancing the satisfaction of students and keeping up contact with alumni - are already identifiable. However, a common model as well as a clearly defined concept for an effective realization of student orientation is still missing.
The presented article also shows future possibilities for the use of data mining methods in German Higher Education Management and for the use of the yielded results, particularly in the SRM. Applying the results of such analyses supports the service quality for students as part of a SRM. The university can determine the needs of its students and consequently improve the relationship.
Altogether, the improvement of the connection between students and university can lead to increasing student numbers in the future and thus also to increasing revenues for the university. By adjusting event management to the needs of the students, the time of study can be reduced. As a final conclusion, it is to be stated that German institutions of Higher Education need to differentiate themselves from rivalling academies to be able to obtain the desired number of matriculations in future times. Therefore, the importance of implementing an effective SRM will continue to rise within the next years. | 2019-05-28T13:14:35.575Z | 2007-04-01T00:00:00.000 | {
"year": 2007,
"sha1": "0be211c5171d1fd642581109bbc072514fd6510f",
"oa_license": "CCBY",
"oa_url": "http://www.nomos-elibrary.de/10.5771/0935-9915-2007-2-204.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "59e6f287d142d00b5892bd8d20490cfa87408657",
"s2fieldsofstudy": [
"Education",
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
18732336 | pes2o/s2orc | v3-fos-license | The Complete Plastid Genome of Lagerstroemia fauriei and Loss of rpl2 Intron from Lagerstroemia (Lythraceae)
Lagerstroemia (crape myrtle) is an important plant genus used in ornamental horticulture in temperate regions worldwide. As such, numerous hybrids have been developed. However, DNA sequence resources and genome information for Lagerstroemia are limited, hindering evolutionary inferences regarding interspecific relationships. We report the complete plastid genome of Lagerstroemia fauriei. To our knowledge, this is the first reported whole plastid genome within Lythraceae. This genome is 152,440 bp in length with 38% GC content and consists of two single-copy regions separated by a pair of 25,793 bp inverted repeats. The large single copy and the small single copy regions span 83,921 bp and 16,933 bp, respectively. The genome contains 129 genes, including 17 located in each inverted repeat. Phylogenetic analysis of genera sampled from Geraniaceae, Myrtaceae, and Onagraceae corroborated the sister relationship between Lythraceae and Onagraceae. The plastid genomes of L. fauriei and several other Lythraceae species lack the rpl2 intron, indicating an early loss of this intron within the Lythraceae lineage. The plastid genome of L. fauriei provides a much needed genetic resource for further phylogenetic research in Lagerstroemia and Lythraceae. Highly variable markers were identified for use in phylogenetic, barcoding and conservation genetic applications.
Introduction
The Lythraceae include approximately 620 species in 31 genera; most are herbs, with some trees and shrubs adapted to a wide variety of habitats. The four largest genera (Cuphea, Diplusodon, Lagerstroemia, and Nesaea) include three-fourths of all species in Lythraceae [1]. The family has been traditionally classified in the order Myrtales and closely allied with the Onagraceae based on morphological, anatomical, and embryological evidence [2,3]. Within the Lythraceae, Lagerstroemia ("crape myrtle") is the most economically important and well-known genus. Lagerstroemia comprises about 55 species [4][5][6] and its center of diversity is in southeast Asia and Australia [7], mainly in tropical and sub-tropical habitats of southern China, Japan, and northeast Australia. Most Lagerstroemia species are easily propagated, resistant to multiple pathogens, grow rapidly, and have colorful flowers that open from summer to fall [8]. Given the importance of Lagerstroemia as an ornamental, more than 260 cultivars have been created and registered (http://www.usna.usda.gov/Research/Herbarium/Lagerstroemia/index.html). Due to the ornamental and economic value of Lagerstroemia, research programs have been initiated to develop hybrid cultivars, study the genetic diversity of cultivars, and evaluate germplasm [9][10][11][12][13]. Molecular tools have been employed to identify Lagerstroemia cultivars and interspecific hybrids [14,15]. Despite the development of microsatellite markers and subsequent research in Lagerstroemia, no complete chloroplast (plastid) genomes have been described from Lythraceae.
Phylogenomic-related research in Lythraceae is limited. Within the Myrtales, Lythraceae was resolved as sister to Onagraceae using the plastid gene rbcL [16]. Within Lythraceae, Lagerstroemia and Duabanga are supported as sister groups based on atpB-rbcL, psaA-ycf3, rbcL, trnK-matK, trnL-trnF, and ITS (internal transcribed spacer region of the nuclear genome) data [1,17]. Phylogenetic inferences within Lagerstroemia and the Lythraceae could be improved if plastid genomes are made available, potentially providing dozens of valuable molecular markers for further research.
In contrast to huge nuclear genomes, the plastid genome, with uniparental inheritance, has a highly conserved circular DNA arrangement ranging from 115 to 165 kb [18,19], and the gene content and gene order are conserved across most land plants [20]. With the development of next-generation sequencing approaches, sequencing whole plastid genomes has become cheaper and faster [21]. To date, more than 900 land-plant species' completed plastomes can be accessed through the National Center for Biotechnology Information (NCBI) public database [22]. Such genetic resources have provided a useful set of tools for researchers interested in species identification by using DNA barcoding [23], genetic data used for plastid transformation [24], and designing molecular markers for systematic and population studies [25,26]. All of these research areas have benefitted from the conserved sequences and structure as well as the lack of recombination found in plastid genomes to simplify analyses. For example, plastids maintain a positive homologous recombination system [27][28][29][30], which enables precise transgene targeting into a specific genome region during transformation. Different plastid loci have been used for evaluating phylogenetic relationships at different taxonomic levels, including the interspecific and intraspecific levels [31]. Recently, phylogenomic approaches [32] to study plant relationships have employed complete-plastid-genome sequences for studying phylogenetic relationships.
In an effort to comprehensively understand the organization of the Lagerstroemia plastid genome, we present the first complete plastid genome sequence of L. fauriei, which was generated using Illumina sequencing. The three aims of our study are to: deepen our understanding of the structural diversity of the complete L. fauriei plastid genome, compare molecular evolutionary patterns of the L. fauriei plastid genome with other plastid genomes in the Myrtales, and provide a set of genetic resources for future research in Lagerstroemia and the Lythraceae.

manufacturer's instructions (Illumina Inc., San Diego, CA). Paired-end (PE) sequencing libraries with an insert size of approximately 300 bp were sequenced on an Illumina HiSeq 2000 sequencer at the Beijing Genomics Institute (BGI) and 30,887,628 clean reads were obtained, each with a read length of 100 bp.
Plastid genome assembly and annotation
The raw Illumina reads were demultiplexed, trimmed and filtered by quality score with Trimmomatic v0.3 [34] using the following settings: leading: 3, trailing: 3, sliding window: 4:15 and minlen: 50. Then the CLC Genomics Workbench v7 (CLCbio; http://www.clcbio.com) was used to conduct de novo assembly of reads from L. fauriei with the default parameters. The following three separate de novo assemblies were made: PE reads, single-end forward reads and single-end reverse reads [22]. These three separate assemblies were then combined into a single assembly. Assembled contigs (≥0.5 kb) with > 100× coverage from the complete CLC assembly were compared to several Myrtales species with completed plastid genomes, including Oenothera argillicola (Onagraceae; NC_010358), Syzygium cumini (Myrtaceae; GQ870669), and Eucalyptus aromaphloia (Myrtaceae; NC_022396). Local BlastN [35] searches were used to match the contigs from the plastid genomes. Based on the conserved features of the plastid genome [19,22], the mapped contigs were orientated onto the related plastid genomes [36] and those separate contigs were connected into a single contig to construct the circular map of the genome using Informax Vector NTI Contig Express 2003 (Invitrogen, Carlsbad, CA). Seven short gaps (≤100 bp) were filled by aligning individual Illumina sequence reads that overlapped at the contig ends. Longer gaps (>100 bp) between contigs were filled by designing primers in flanking regions, conducting PCR amplifications, and closing the gap regions by adding sequence data generated from Sanger sequencing (by BGI).
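As a rough illustration of what the quoted trimming settings (leading: 3, trailing: 3, sliding window 4:15, minlen: 50) do to a single read, here is a simplified Python sketch of the rules. It is not Trimmomatic itself and deliberately simplifies details such as how the tool cuts at a failing window:

```python
def trim_read(quals, leading=3, trailing=3, window=4, min_avg=15, minlen=50):
    """Return the (start, end) slice of a read kept under simplified
    quality-trimming rules, or None if the survivor is shorter than minlen.

    `quals` is a list of per-base Phred quality scores.
    """
    start, end = 0, len(quals)
    # LEADING: drop low-quality bases from the 5' end
    while start < end and quals[start] < leading:
        start += 1
    # TRAILING: drop low-quality bases from the 3' end
    while end > start and quals[end - 1] < trailing:
        end -= 1
    # SLIDINGWINDOW: cut once a window's mean quality drops below min_avg
    for i in range(start, end - window + 1):
        if sum(quals[i:i + window]) / window < min_avg:
            end = i
            break
    # MINLEN: discard reads whose surviving fragment is too short
    if end - start < minlen:
        return None
    return start, end
```

For example, a 65-base read ending in five quality-2 bases keeps its first 60 bases, while a 40-base read is discarded outright by the minlen filter.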
We designed additional primers (S1 Table) to test for correct sequence assembly. PCR was conducted in 40 μl volumes containing 4 μl 10× Taq buffer, 0.8 μl dNTP (10 mM), 0.4 μl Taq polymerase (5 U/μl), 0.5 μl each primer (20 pmol/μl; all from Sangong Biotech (Shanghai, China)), 0.5 μl DNA template, and 33.3 μl ddH2O. The amplification program consisted of an initial heating at 94°C for 5 min, then 32 cycles including denaturation at 94°C for 45 s, annealing at 55°C for 45 s, elongation at 72°C for 2 min, and a final elongation at 72°C for 10 min. After incorporation of the Sanger results, the finished plastid genomes were applied as the reference to map the previously unincorporated short reads in order to iteratively refine the assembly based on evenness of sequence coverage.
DOGMA v1.2 [37] was employed for genome annotation of the protein-coding genes, transfer RNAs (tRNAs) and ribosomal RNAs (rRNAs). To accurately confirm the start and stop codons and the exon-intron boundaries of genes, the draft annotation was subsequently inspected and adjusted manually based on plastomes from a related species, Syzygium cumini [36], from the NCBI database. Additionally, both tRNA and rRNA genes were identified by BLASTN searches against the same database of plastomes. Finally, tRNAscan-SE v1.21 [38] was also used to further verify the tRNA genes. The schematic diagram of the plastid genome map was generated using OGDraw [39].
Comparative plastid genomic analysis
Expansion and contraction of four junction regions. Genome-size variation among different photosynthetic species is generally caused by different junctions between the two inverted-repeat regions (IRA and IRB) and the two single-copy regions (LSC and SSC) [36]. There are four junctions (JLA, JLB, JSA, and JSB) in the plastid genome between the two single copy (LSC and SSC) regions and the two IRs (IRA and IRB) [40]. The detailed IR border positions and the adjacent genes among seven Myrtales species plastomes (Lagerstroemia fauriei, Oenothera argillicola, Angophora costata, Corymbia eximia, Eucalyptus aromaphloia, Stockwellia quadrifida, and Syzygium cumini) were compared in this study.
Survey for loss of the rpl2 intron. In the process of annotation and comparison with other species in the Myrtales, we found that the intron of rpl2 is absent in the plastome of L. fauriei. In order to infer the history of this intron loss, we designed a pair of primers (Forward-CAAAACTTCTACCCCAAGCA; Reverse-TCTTCTTCCAAGTGCAGGAT) to amplify the whole rpl2 region and then applied them to 11 Lagerstroemia species and three species (Cuphea hyssopifolia, Punica granatum, and Lythrum salicaria) from other Lythraceae genera, as well as the outgroups Oenothera albicaulus and Catha edulis. In L. fauriei, the target rpl2 fragment without the intron is about 750 bp, whereas it is about 1,400 bp in species containing the intact intron. PCR was used to amplify the rpl2 region and the amplicons were run out on 1% agarose gels. Fragment sizes were determined by comparison to DNA size standards [41]. Sanger sequencing of the forward and reverse strands of the rpl2 gene was done for Cuphea hyssopifolia, Punica granatum, L. salicaria, L. fauriei, L. limii and Oenothera albicaulus at the Proteomics and Metabolomics Facility of Colorado State University.
Repetitive sequence analysis. Repetitive elements were investigated using two different approaches. In order to avoid redundancy, repeat-sequence analysis was carried out using just one IR region [42]. Tandem Repeat Finder [43] was used with the minimum-alignment score and maximum-period size set at 50 and 500, respectively, with default parameters for all other search criteria, to find small tandem repeats from 15 to 30 bp in length. The numbers of forward, reverse, complementary and palindromic repeats were quantified using REPuter [44], setting the Hamming distance equal to three and the minimum repeat size to ≥30 bp. Overlapping repeats were merged into one repeat motif where possible. Microsatellites (SSRs) were detected using SSR Hunter v1.3 [45]. We identified SSRs as mononucleotides with ≥8 repeats, dinucleotides with ≥4, trinucleotides with ≥3, and tetranucleotides and pentanucleotides both with ≥3.
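The SSR thresholds above (mononucleotides ≥8 repeats, dinucleotides ≥4, tri- through pentanucleotides ≥3) can be sketched as a naive regex scan over a sequence. This is an illustrative stand-in for SSR Hunter, not its actual algorithm:

```python
import re

# Minimum repeat counts by motif length, following the thresholds in the text
MIN_REPEATS = {1: 8, 2: 4, 3: 3, 4: 3, 5: 3}

def is_primitive(motif):
    """True if the motif is not itself a repetition of a shorter unit
    (e.g. 'AT' is primitive, while 'AA' and 'ATAT' are not)."""
    L = len(motif)
    return all(motif != motif[:d] * (L // d) for d in range(1, L) if L % d == 0)

def find_ssrs(seq):
    """Naive perfect-SSR scan: returns (start, motif, repeat_count) tuples."""
    hits = []
    for motif_len, min_n in MIN_REPEATS.items():
        # a motif of motif_len bases followed by at least (min_n - 1) copies
        pattern = re.compile(r"(.{%d})\1{%d,}" % (motif_len, min_n - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            if is_primitive(motif):  # skip e.g. 'AA' reported as dinucleotide
                hits.append((m.start(), motif, len(m.group(0)) // motif_len))
    return hits
```

For instance, an eight-base poly-A run is reported as a mononucleotide SSR, and "ATATATAT" as a dinucleotide SSR with four repeats.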
Dot-plot analysis. We compared plastomes of the other six Myrtales species to L. fauriei with dot-plot analysis using Perl scripts to visualize arrangement recurrences and structural differences in two-dimensional plots (S1 Fig).
Informative variables analysis from coding and non-coding regions. To identify divergent regions that may be highly informative for phylogenetic analyses, each region, including CDS (coding regions), introns, and IGS (intergenic regions) from seven Myrtales plastid genomes, was individually examined. For the longer genes (>1500 bp), we employed the sliding window method to divide the gene into shorter fragments to detect the most informative portions, using a 1000 bp sliding window and 500 bp increments. These regions were aligned using Clustal X 2.0 [46] and adjusted manually using the similarity criterion [47]. The aligned sequences were analyzed using parsimony in PAUP* 4.0b10 [48] with tree-bisection-reconnection branch-swapping. The ensemble retention index (RI) [49] was calculated for each of the 78 coding regions and 128 non-coding regions. The 10 coding and 10 non-coding regions with the highest percentages of parsimony-informative characters were then selected as candidates for phylogenetic markers.
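The 1000 bp window / 500 bp increment scheme can be sketched as follows; how a short final tail window is handled is our own assumption, not stated in the text:

```python
def sliding_windows(length, size=1000, step=500):
    """Yield (start, end) window coordinates over a gene of `length` bp,
    using the 1000 bp window / 500 bp increment scheme described above."""
    start = 0
    end_covered = 0
    while start + size <= length:
        yield (start, start + size)
        end_covered = start + size
        start += step
    # assumption: keep one shorter final window so the gene tail is scored too
    if end_covered < length:
        yield (start, length)
```

A 2200 bp gene, for example, yields windows (0, 1000), (500, 1500), (1000, 2000) and a shorter tail window (1500, 2200).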
Phylogenetic analysis. The 73 shared protein-coding genes from the plastid genomes of the seven Myrtales species and the three Geraniaceae outgroup species were aligned in Clustal X using the default settings, followed by manual adjustment to preserve the reading frames. The data matrix is posted as S1 Matrix. Three phylogenetic-inference methods were used to infer trees from these 73 concatenated genes. Parsimony analysis was implemented in PAUP* 4.0b10 [48], maximum likelihood (ML) in PHYML v 2.4.5 [50], and Bayesian inference (BI) in MrBayes 3.1.2 [51] using the settings from [22].
Sequencing, assembly and annotation
The whole plastid genome of Lagerstroemia fauriei was found to be 152,440 bp in length after combining the Sanger and Illumina sequence data. Through mapping the paired reads onto the finished genome, we verified our assembled length for the finished plastid genome with 1,473,293 mapped reads (5% of the total reads) across the whole genome and at least 951 reads per position. Based on this number of reads we consider the assembled genome to be of high quality. Our annotated plastid genome of L. fauriei is available from GenBank (KT358807).
Plastid genome features
In most land plants, the plastid genome is a single circular structure of 115-165 kb in length that consists of one large single-copy (LSC) region, one small single-copy (SSC) region, and a pair of inverted repeats (IRs). Although gene order and content are highly conserved in plastid genomes, they differ in the extent of gene duplication, size of intergenic spacers, presence or absence of introns, as well as the length and number of small repeats [52]. Such differences not only leave molecular patterns that allow for the inference of evolutionary history, but can also influence the molecular functioning of the cell as a whole (e.g., [20,32]).
The plastid genome of L. fauriei is composed of two single-copy regions separated by a pair of 25,793 bp IRs (Fig 1, Table 1), which account for 34% of the whole plastid genome. The LSC and SSC regions span 83,921 bp and 16,933 bp, respectively. The proportions of the LSC and SSC lengths in the total plastid genome are 55% and 11%, respectively (Table 1). The L. fauriei plastid genome consists of protein coding genes, transfer RNA (tRNA), ribosomal RNA (rRNA), intronic and intergenic regions (Table 2).
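A quick arithmetic check confirms that the reported region lengths tile the circular genome exactly and reproduce the stated proportions:

```python
# Region lengths reported above for the L. fauriei plastome (bp)
LSC, SSC, IR, TOTAL = 83_921, 16_933, 25_793, 152_440

# The four regions tile the circle exactly: LSC + SSC + 2 * IR = total length
assert LSC + SSC + 2 * IR == TOTAL

for name, length in [("LSC", LSC), ("SSC", SSC), ("IRs", 2 * IR)]:
    print(f"{name}: {length / TOTAL:.0%}")
# prints LSC: 55%, SSC: 11%, IRs: 34%
```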
The plastid genome of L. fauriei contains 129 coding genes, including 84 protein-coding genes, 37 tRNA genes, and eight rRNA genes. Among the 129 genes, 4 rRNA genes, 7 tRNA genes and 6 coding genes are duplicated in the two IR regions (Fig 1; Table 3). Of the 112 unique genes, 82 are located in the LSC region (60 protein-coding genes, 22 tRNA genes), 13 in the SSC region (12 protein-coding genes, 1 tRNA gene), and 17 in both IR regions (6 coding genes, 4 rRNA genes, 7 tRNA genes). The following four genes span regional plastid boundaries: ycf1 spans the SSC and IRB regions; rps12 spans the LSC and two IR regions (the 5' end exon is in the LSC and the two 3' end exons are duplicated in the IR regions); ndhF spans the IRA and SSC regions; and rps19 spans the LSC and IRA regions (Fig 1). In the whole plastid genome, 17 genes contain introns, including eight protein-coding genes with a single intron each (atpF, ndhA, ndhB, petB, petD, rpl16, rpoC1, rps16), six tRNA genes with a single intron each (trnA-GUC, trnG-UCC, trnI-GAU, trnK-UUU, trnL-UAA, trnV-UAC), and three protein-coding genes with two introns each (clpP, rps12 and ycf3). Among the 17 genes with introns, 13 are located in the LSC, one in the SSC, and three in both IRs (S2 Table). The rps12 gene is a trans-spliced gene with a 5' end exon in the LSC region and two duplicated 3' end exons in the IR regions. The 2,497 bp intron of trnK-UUU is the longest, but 1,491 bp of it codes for the matK gene.
Comparison of the plastid genomes with six other Myrtales
We compared the plastid genome of L. fauriei (Lythraceae) to six other species in the Myrtales with dot-plot analysis. The plastid genomes in these species possess identical gene order with the exception of O. argillicola, which contains a large inversion of about 56 kb in the LSC region (S1 Fig) [53,54]. These results further verified the conserved features of the plant plastid genome and partial lineage-specific variation [19]. The seven plastid genomes vary in length from 152,440 to 165,055 bp. From the comparative results (Table 1), the plastid genome of O. argillicola is the longest of the seven species, which is explained partly by expansion of intergenic regions in the SSC and IR regions. However, the plastome of L. fauriei is the shortest because of reduction of intergenic regions, which occupy only 41% of the genome (Table 2). These comparisons demonstrate that the dynamic variation of the intergenic regions is the main cause of length differences between plastid genomes [19,22]. The GC content of the plastid genome is stable across most land plants [19]. The GC content of the entire L. fauriei plastid genome is 38%, with 36% GC content in the LSC region, 31% in the SSC region and 43% in the IR regions. These percentages are generally similar to other plastid genomes [55]. The overall GC contents of the seven Myrtales plastid genomes range from 37% to 39%, with O. argillicola having the highest GC content and A. costata having the lowest (Table 1). The GC content of protein-coding regions in the seven Myrtales species ranges from 37% to 40%, of which O. argillicola has the highest and C. eximia has the lowest (Table 1).
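GC content as used in these comparisons is a simple base count over a region; a minimal sketch:

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

print(f"{gc_content('ATGCGC'):.0%}")  # prints 67%
```

Applied per region (LSC, SSC, each IR) this yields the region-wise percentages reported above.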
From these cross-species comparisons, we verified that the Myrtales plastid genomes are highly conserved in genome content, gene order and overall genomic structure relative to L. fauriei. They have similar gene orders at the IR-SSC and IR-LSC borders, with the exception of O. argillicola (Fig 2).
Expansion and contraction of four junction regions
The typical quadripartite structure of plastomes includes two single-copy regions and two inverted repeat regions, though length of the IRs differ between plant species because of contraction and expansion in these regions [19]. We examined the four junctions (J LA , J LB , J SA , and J SB ) across the seven Myrtales species to assess the junction variation between the IRs and single-copy regions following Wang [40] and Wu [22]. The length of the IRs ranged from 25,792 to 28,772 bp, and the positions of all four IR boundaries (J LA , J LB , J SA , and J SB ) varied (Fig 2) [56]. The LSC/IR A junctions in plastid genomes of L. fauriei, O. argillicola, and S. quadrifida were located in the coding region of rps19, which extended into the IR B region 75 bp, 106 bp, and 37 bp, respectively. In the other four species the LSC includes an intact rps19 gene together with 8 bp (A. costata, C. eximia), 22 bp (E. aromaphloia), or 6 bp (S. cumini) of non-coding region beyond the LSC/IR A border. The IR B /LSC border in these four species is located in the intergenic spacer between rpl2 and trnH. The trnH gene of S. cumini is 56 bp away from the IR B /SSC border, whereas in L. fauriei and S. quadrifida Table 3. List of genes in the L. fauriei plastid genome.
In O. argillicola, the ycf1 gene does not extend into the IRB region at the border of SSC/IRA. Rather, in contrast to the other six species wherein ycf1 extends across the border, ycf1 in O. argillicola is separated from the border by 257 bp. Hence the SSC/IRB junction resulted in the duplication of the 3' end region of ycf1 in these six species, and consequently a pseudogene with variable length at the IRA/SSC border (Fig 2) [49].
Variable gene composition was found at the IRA/SSC border. In O. argillicola the ψycf1 gene is absent, and instead the IRA/SSC border was positioned in the ndhF gene, which had 115 bp in the SSC region and 2,203 bp in the IRA region. Similarly, ndhF extends 38 bp into the IRA region in L. fauriei, which also has 20 bp overlap with ψycf1. The entire ndhF gene is located in the SSC region in the other five species and is separated by 82-225 bp from the IRA/SSC border. The IR/LSC border region has been used extensively for phylogenetic studies in Eucalyptus [36,57] and given the variation we observed, this region could be similarly useful for resolving the relationships between L. fauriei and its relatives.
Loss of the rpl2 intron from Lagerstroemia and Lythraceae
The distribution and number of introns in the L. fauriei plastid genome are similar to other Myrtales plastid genomes (S2 Table), with the exception of the intron of rpl2. The structure and the length of the rpl2 intron are conserved across all other Myrtales and also present in the more distant Arabidopsis thaliana (NC_000932; Fig 3A). The length of this intron is approximately 660 bp in the other six sampled Myrtales species and the two exons are also highly conserved. To verify the loss of the rpl2 intron across the whole of Lagerstroemia, or even broadly within Lythraceae as a whole, we designed a pair of primers in the flanking exons to amplify and sequence the region spanning the intron among different species. From the rpl2 gene alignment, the intron was absent among all 14 Lythraceae species sampled (S2 and S3 Figs). These results indicate that the intron was lost after the divergence of the Lythraceae from the Onagraceae (S2B and S3 Figs) but prior to the divergence of the four Lythraceae genera sampled.
Plastid introns have been lost numerous times in other species, such as those reported from the legume tribe Desmodieae [58,59], and losses have been documented in both monocots and dicots [60]. Specifically, rpl2 intron loss has been reported from five other lineages of dicotyledons: Saxifragaceae, Convolvulaceae, Menyanthaceae, two genera of Geraniaceae, and one genus of Droseraceae [59]. The discovery of this intron loss indicates a structural difference between Lythraceae and the six other Myrtales species sampled, and confirms that instances of independent intron loss have happened many times in the history of plastid genome evolution. Two different theories have been proposed to explain loss of the rpl2 intron [61,62]. First, through homologous recombination, the full rpl2 transcript (cDNA) could replace the rpl2 gene by a reverse-transcriptase-mediated mechanism, precisely deleting the entire intron. Alternatively, rpl2 intron loss could be caused by unknown processes involving intron removal by DNA-level deletion or gene conversion between an intron-containing gene and its spliced transcript. In the near future, denser sampling within Lythraceae and Onagraceae, together with data from both RNA and DNA, could resolve the history of this intron loss within the family.
Long repetitive sequences
Long repetitive sequences have an important role in structural variation in plastid genomes via recombination and rearrangement [63]. Tandem repeats (≥15 bp), and forward and palindromic repeats (≥30 bp), were compared across the seven Myrtales species (S4B Fig). Most of these repeats are located in intergenic spacers, except for some that are distributed in the shared coding regions of ycf2 and psaB. L. fauriei has the fewest (22) repeats, which is consistent with its small genome size compared with the six other Myrtales species sampled (S4B Fig). Repeated sequences have been demonstrated to affect genome length [64]. Our data are consistent with these findings, given that the length and number of repeats in O. argillicola and L. fauriei (S4 Fig) are correlated with their genome sizes. Forward-repeat sequences are often associated with transposons [65], which can proliferate during episodes of cellular stress [66,67]. The origins and proliferation of large tandem repeats are not as well understood as interspersed repetitive sequences [68]. Forward repeats can cause genomic reconfiguration, and therefore have potential to be useful markers in phylogenetic studies.
Plastid SSRs
Simple sequence repeats (SSRs) in the plastid genome can be highly variable at the intraspecific level, and are therefore valuable markers for population-genetic studies [56]. We identified 204 SSRs in the plastid genome of L. fauriei, of which 132 are located in non-coding regions and 72 in coding regions. These SSRs include 115 mononucleotide SSRs (homopolymers; 56%), 35 dinucleotide SSRs (17%), 46 trinucleotide SSRs (23%), seven tetranucleotide SSRs (3%), and one pentanucleotide SSR (1%). Of the 204 SSRs, 143 are in the LSC region, 35 in the SSC, and 26 in the IRA region, accounting for 70%, 17%, and 13% of the total SSRs, respectively. Among the 115 homopolymer SSRs, 113 (98%) are of the A/T type with a repeat number from 8 to 14. Among the coding regions, ycf2 was found to possess 13 SSRs, followed by ycf1 with eight SSRs. This result is consistent with previous studies which found that these genes are highly variable in other species [67,68,69]. Based on this result, ycf1 and ycf2 are potential candidates for species-level DNA barcoding [70].
Among the seven Myrtales species sampled, L. fauriei has the fewest SSRs (S4C Fig). The total length of SSRs in these species does not have a strong overall correlation with genome size. However, L. fauriei has the shortest chloroplast genome and the smallest contribution from SSRs. Thus, reduction in the size and number of SSRs may contribute somewhat to the short chloroplast genome of L. fauriei [71].
We aligned all coding and non-coding regions ≥200 bp separately to identify the regions with the highest percentage of parsimony-informative sites, and the highest ensemble retention index, among the seven Myrtales species sampled (Table 4, S3 Table). Among the coding regions, rpoA and matK have the highest percentages of parsimony-informative characters (7% and 6%, respectively). Among non-coding regions, trnR-UCU-atpA and trnK-UUU-rps16 have the highest percentages (20% and 14%, respectively). These non-coding regions should be particularly informative for DNA barcoding and species-level phylogenetic analyses within the Myrtales, given the high percentage of variable sites (S3 Table). In order to better understand the variation in the longer genes (>1500 bp) and make them usable in practical applications, we employed the sliding-window method (S4 Table). By applying this method, we identified the most variable regions within each gene, which would be valuable as molecular markers in phylogenetic or marker-assisted breeding analyses. For example, the most variable region of ycf1, which is over 7000 bp in length, is located from 5 to 6 kb downstream from the start.
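The standard criterion behind "parsimony-informative characters" - an alignment column with at least two states, each present in at least two taxa - can be sketched directly. This is an illustrative re-implementation of the concept, not PAUP* itself:

```python
from collections import Counter

def parsimony_informative_fraction(alignment):
    """Fraction of alignment columns that are parsimony-informative:
    at least two character states, each present in >= 2 sequences.
    Gaps ('-') and missing data ('?') are ignored."""
    n_cols = len(alignment[0])
    informative = 0
    for col in zip(*alignment):
        counts = Counter(c for c in col if c not in "-?")
        if sum(1 for n in counts.values() if n >= 2) >= 2:
            informative += 1
    return informative / n_cols
```

For the toy alignment ["AAT", "AAT", "ACC", "ACC"], the second and third columns are informative (two states, each shared by two taxa), giving a fraction of 2/3.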
Shaw [25,78] evaluated the phylogenetic utility of noncoding plastid regions and found that those that are most commonly used for phylogenetic analyses (e.g., trnL intron, trnL-trnF spacer) are among the least variable. Thus, our identification of ten more variable noncoding regions provides a valuable resource for future phylogenetic studies within Myrtales, including our focal genus, Lagerstroemia.
Phylogenetic analysis
Phylogenetic analyses using plastid sequences have resolved numerous lineages within the angiosperms [79,80]. Furthermore, atpF-atpH, matK, psbK-psbI, rbcL and trnH-psbA have been used successfully as species-level barcodes [76,81,82]. Phylogenetic relationships within Lythraceae have been inferred using morphology and DNA sequences from the rbcL gene, the trnL-F region, and the psaA-ycf3 intergenic spacer from the plastid genome, together with ITS from the nuclear genome [1,17]. Our phylogenetic analyses included seven Myrtales species together with three outgroups from Geraniaceae. These analyses all corroborated the sister relationship between Lythraceae and Onagraceae based on 73 shared protein-coding genes (Fig 4). From the branch-length differences between the two main Myrtales clades, we infer that both Lythraceae and Onagraceae have undergone a more rapid rate of nucleotide substitution than their Myrtaceae sister group. This more rapid nucleotide-substitution rate was also accompanied by more structural differences in the Onagraceae and Lythraceae. | 2018-04-03T02:00:27.977Z | 2016-03-07T00:00:00.000 | {
"year": 2016,
"sha1": "d77d4ba4e6329cf111fb7cb63c469b4747347567",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0150752&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d77d4ba4e6329cf111fb7cb63c469b4747347567",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
56162936 | pes2o/s2orc | v3-fos-license | Single large or several small? The influence of prey size on feeding performance of Philodryas nattereri (Squamata: Serpentes)
This study aimed at evaluating the energetic return and feeding time of Philodryas nattereri kept in captivity. Snakes were fed biweekly for 60 days (four feeding trials) under two different feeding treatments (single and multiple prey items). The energetic return revealed no significant difference between the feeding treatments; however, we found a negative relationship between snake size and prey handling time during feeding on multiple prey items. In P. nattereri, when large prey are as easy to find as small ones, there seems to be no difference in energetic return.
Introduction
According to the Optimal Foraging Theory (OFT), animals search for, capture, and consume prey containing the maximum nutritional value, spending as little energy as possible during this process (MacArthur & Pianka, 1966; Pyke, 1984). Large animals tend to ingest large prey, except when smaller prey is abundant (Schoener, 1971). However, this tendency of animals to ingest larger prey does not seem to be general in snakes, an exception that seems to occur due to low handling costs for large and small prey, allowing snakes to consume items they find regardless of size (Shine, 1991).
Handling and eating time may be negatively related to head length and body size in most snakes (Shine, 1991; Vincent & Mori, 2008; Vincent, Vincent, Irschick, & Rossell, 2006). A possible explanation is that small snakes eat prey almost as large as their maximum prey size, which is limited by gape size, whereas this is not a problem for large snakes (Shine, 1991). Furthermore, large snakes have no difficulty handling and ingesting small prey (Shine, 1991). Hence, larger animals, which are less constrained by gape limitation than smaller ones, are expected to manipulate prey faster than smaller individuals.
The snake Philodryas nattereri (Steindachner, 1870) is distributed along the Caatinga and Cerrado from central Brazil to Paraguay (Vanzolini, Ramos-Costa, & Vitt, 1980). Because of its high abundance, foraging abilities, and fecundity, P. nattereri is classified as a key predator in the Brazilian semi-arid region (Mesquita, Borges-Nojosa, Passos, & Bezerra, 2011). It is diurnal, semi-arboreal, and active throughout the year, with a generalist diet and an activity peak during periods of rainfall and maximum temperature (Mesquita et al., 2011).
This study compared the energetic return and feeding time of P. nattereri in captivity, analyzing whether single and multiple prey are equally profitable to individual snakes when search time is reduced. We also evaluated the influence of snake size and head length on prey handling time.
Material and methods
We used 17 snakes (nine females and eight males), with an average snout-vent length (SVL) of 99.29 ± 10.21 cm (mean ± standard deviation) and an average head length (HL) of 3.02 ± 0.33 cm. Snakes were housed at the "Núcleo Regional de Ofiologia da Universidade Federal do Ceará" (NUROF-UFC) and kept in wooden vivariums (51 x 37 x 37 cm) with water offered ad libitum and clay pots in which the animals could hide. The snakes had been taken from the wild and held in captivity for periods of two to seven years.
All snakes were fed four times at fifteen-day intervals. During the experiments, all objects that could distract the snakes, such as the water container and the clay pots, were removed from the enclosure. All snakes had been subjected to a 15-day fast; in the first feeding trial they were weighed and fed one live adult mouse weighing approximately ten percent of the snake's mass. After fifteen days, the snakes were fed again, this time with three live subadult mice that together weighed approximately 12.5 percent of the snake's total weight. The small mice were offered consecutively: each prey item was offered as soon as the snake had eaten the previous one. All P. nattereri used in this study were routinely fed an adult mouse weighing approximately 10-12.5 percent of their mass at NUROF-UFC; the prey weight percentages used here therefore followed the feeding protocol already in use at NUROF-UFC, in order to avoid additional stress to the animals. The two procedures were repeated in the second month, for a total of four feeding trials. Using a stopwatch, we measured, in seconds, the time required for the animal to recognise, capture, and ingest the prey, from the moment the first prey was introduced into the wooden box until the snake finished eating the last prey offered. The snakes were left with their prey for a maximum of 40 minutes in order to avoid discrepancies in sampling times. All feeding events were carried out between 8:00 a.m. and 12:00 p.m., respecting the diurnal activity pattern of P. nattereri (Mesquita et al., 2011).
We estimated the energy return using a simplified form of the formula proposed by Schoener (1971). Total prey mass was used as a measure of its "potential energy". Time and costs related to pursuit were greatly reduced in our sampling design because of the restricted space of the vivarium (snakes rapidly found and captured the mice as soon as they were introduced); therefore, we did not consider these variables when calculating energetic return. Furthermore, handling and eating costs tend to be very low in snakes (below one percent of the energy provided by the ingested prey) (Cruz, Andrade, & Abe, 1999; Feder & Arnold, 1982; Shine, 1991) and were not considered in this approximation either. Accordingly, energy return was calculated as total ingested prey mass divided by feeding time.
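The simplified energy-return calculation above (total ingested prey mass divided by feeding time) can be sketched as follows; the function and parameter names are our own illustrative choices, not from the paper:

```python
# Energy-return calculation as described in the text: total ingested prey
# mass divided by feeding time. Names here are illustrative assumptions.

def energy_return(total_prey_mass_g: float, feeding_time_s: float) -> float:
    """Return energetic return in g/min, converting stopwatch time in
    seconds to minutes, as done in the paper's analyses."""
    if feeding_time_s <= 0:
        raise ValueError("feeding time must be positive")
    return total_prey_mass_g / (feeding_time_s / 60.0)

# Example: a 25 g mouse eaten in 900 s (15 min) gives 1.67 g/min,
# in line with the mean returns reported in the Results.
print(round(energy_return(25.0, 900.0), 2))
```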
The energetic returns of the two feeding procedures were compared with paired Wilcoxon tests for each month, since the data were not normally distributed. Since the time elapsed between the second and third feedings was similar to that between the two trials within each month, we also compared these events using the paired Wilcoxon test. The relationships between feeding time and snake size (SVL) and between feeding time and HL were assessed using Spearman rank correlations for each feeding trial. We used HL because the snake feeding process is gape-limited and HL is the measure most commonly applied in studies of snake feeding performance (Shine, 1991; Vincent & Mori, 2008). We also performed the analyses using SVL in addition to HL, since P. nattereri may use constriction when handling its prey. For the statistical analyses, we converted time measurements from seconds into minutes in order to reduce data variance. When a snake had not consumed its prey within the allotted time, it was excluded from the statistical analysis for that trial. We performed the analyses using R software ver. 2.15.3 (R Development Core Team, 2014) with a significance level of five percent (p < 0.05).
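The Spearman rank correlations used to relate feeding time to SVL and HL can be illustrated with a small pure-Python re-implementation (the paper used R; this sketch, including its tie-averaging ranking helper, is our own and not part of the original analysis):

```python
# Minimal Spearman rank correlation: rank both variables (ties get the
# average rank) and compute Pearson's r on the ranks.

def _ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0          # average rank for a tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Perfectly opposite rankings give rho = -1, the pattern expected when
# larger snakes finish their prey faster (hypothetical SVLs and times).
print(spearman_rho([90, 95, 100, 110], [30, 25, 20, 10]))
```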
Results and discussion
The energy returns per feeding trial were as follows: first, 1.70 ± 0.80 g min⁻¹ (n = 14); second, 1.66 ± 0.67 g min⁻¹ (n = 15); third, 1.64 ± 0.83 g min⁻¹ (n = 17); and fourth, 1.65 ± 0.38 g min⁻¹ (n = 12). We found no significant difference between the single-prey and multiple-prey feeding groups in the first month (U = 53.5, p = 0.27) or in the second (U = 32, p = 0.62). Likewise, we observed no significant difference in energy return between the second and third feeding trials (U = 47, p = 0.49). We found negative correlations between feeding time and snake size, and between feeding time and head length, only in the second round of feedings, which corresponded to a multiple-prey feeding (Table 1, Figure 1).
Table 1. Relationships between the morphological measurements (snout-vent length, SVL, and head length) and the feeding times in the four feeding events of P. nattereri kept in captivity. Feedings 1 and 3 correspond to single-prey feedings, 2 and 4 to multiple-prey feedings.

Despite the high diversity of snakes in Brazil and extensive natural history studies on these animals, to the best of our knowledge this was the first time that predictions from foraging theory have been experimentally tested on a Brazilian snake species. P. nattereri has a generalist diet and explores many different microhabitats (Mesquita et al., 2011), which characterises an opportunistic predator. Our results indicate that several small prey, when more accessible than a single large one in a given environment, are a viable option for P. nattereri.
Feeding time is known to be negatively associated with the body size and head length of snakes (Shine, 1991; Vincent & Mori, 2008; Vincent et al., 2006). This pattern has been found, for example, in terrestrial diurnal snakes (P. porphyriacus and M. spilota; Shine, 1991) and even in aquatic species (Nerodia fasciata; Vincent et al., 2006), reinforcing its generality. In our study, however, this effect was observed only for the multiple-prey feeding treatment. According to Shine (1991), large snakes are able to handle prey of different sizes better than smaller snakes, a statement corroborated by our results. Feeding time commonly increases with prey size, probably because of a general positive correlation between prey size and the number of upper jaw movements used to ingest the prey, and because of gape-size limitation (Hampton, 2013; Shine, 1991). This difference in handling seems to be large enough to give large snakes these advantages. Such a result was not observed in the fourth feeding trial, in which the smallest snakes did not eat, reducing the sample size and possibly explaining the lack of significance in the comparisons between morphological measurements (SVL and HL) and feeding time in that trial. The high similarity between the results using HL and SVL is expected, since the two measures were highly correlated (Pearson correlation: r = 0.80, df = 15, p < 0.001; data were normally distributed).
Animals in captivity can sometimes show behaviours that do not correspond to the behavioural repertoire of the species in nature. For example, in the wild, hunting three small prey would probably require more time and energy than hunting a single prey of "ideal weight". Thus, further studies under natural conditions are important to assess the results found here.
Conclusion
Finally, for P. nattereri, single- and multiple-prey feedings yield equal energy returns when search costs are reduced. Abundant small prey can be as energetically viable as scarce large prey; these animals can therefore choose their prey based on availability and practicality. We also found a negative relationship between feeding time and both the body size and head length of snakes. Hence, larger size in this species (in both body and head) may provide energetic advantages related to the handling time of small prey.
Figure 1. Relationships between feeding time and morphological measurements of P. nattereri from the second feeding trial. a) Negative correlation between feeding time and snout-vent length; b) Negative correlation between feeding time and head length. Both panels indicate that feeding time was reduced for snakes with larger body size and head length. min = minutes. | 2018-12-05T11:26:15.962Z | 2016-07-21T00:00:00.000 | {
"year": 2016,
"sha1": "312748d92fed9e13d72f5979beeca75df842df1a",
"oa_license": "CCBY",
"oa_url": "http://periodicos.uem.br/ojs/index.php/ActaSciBiolSci/article/download/28774/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "312748d92fed9e13d72f5979beeca75df842df1a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
153498486 | pes2o/s2orc | v3-fos-license | Evaluation as an organizational growth of a contemporary employee
In order to survive and prosper, organizations need to respond to change in a timely and flexible way. Organizations increasingly recognize that the key to their success is largely contingent upon the capabilities of their employees, their human capital. To achieve the expected results from its human resources, an organization must have a training department, one that identifies and measures the need for training so that management can take the steps necessary to improve the organization's economic position.
Regarding needs: whatever the route into training, there should be a clear link between the needs which have been identified and corporate objectives. In reality, such a level of sophistication is unlikely to exist in many organizations. Nevertheless, by adopting some of the approaches and perspectives described here, trainers can make a positive contribution to organizational effectiveness even when the corporate mission and objectives have not been conveyed to them directly. Furthermore, the trainer may be limited by a number of unforeseen or uncontrollable constraints which may not allow a fully comprehensive study of behavioral problems or training needs to be undertaken. In such circumstances, the trainer may have to employ very basic methodology and take a number of shortcuts that may not be very satisfying. However, there are many situations in which a series of swift, professionally conducted interviews is all that is needed to identify the critical issues and needs. Success on short projects that have an impact can earn the trainer a considerable amount of credibility, which can be put to good use when asking for more time and resources for other projects. Trainers should not always give in to the 'quick and dirty' approach. Whatever the outcome of training projects, trainers are accountable at all levels of their involvement; memories are short when it comes to remembering success, and trainers are quick and easy targets when it comes to directing blame. It is not unrealistic to recommend that trainers be responsible for familiarizing senior managers with the tools of the trainer's trade. The ability to recognize the systems and subsystems of an organization is an important element in all training and development activities. Training and development exists to promote individual and organizational excellence by providing opportunities to develop workplace skills. The design and implementation of effective training interventions cannot be accomplished without first identifying the various processes operating within the system. One way of looking at it is to envision training as the subsystem that acquaints people with the material and the technology. It helps them learn how to use the material in an approved fashion that allows the organization to reach its desired output.
Because growth and change are inherent in organizations, they create a plethora of training needs. The term "learning organization" has become a popular buzzword to describe the way organizations must cope with their dynamic nature. A learning organization is based on the principle of continuous learning, a systematic method designed to increase learning within an organization and thereby enable a more effective response to organizational change. Learning organizations emphasize the importance of learning at the individual, team, and organizational levels, increasing the likelihood of developing a competent and competitive workforce. Peter Senge defines the term as an organization that is "continually expanding its capacity to create its future." Doing so requires that individuals improve existing skills as well as learn new ones. Collectively, these newly acquired or refined skills can accomplish shared organizational goals. And, by anticipating future changes and working toward the knowledge and skills necessary to meet the resulting demands, the organization can systematically expand its capacity. Able people may grow to a point where they are ready for responsibilities beyond their initial assignments. When this happens, the organization can profitably help them develop new, larger capabilities. Training has become concerned not only with helping individuals fill their positions adequately but also with helping entire organizations and subdepartments grow and develop. Thus the title has changed from "Training and Development" to titles reflecting missions such as "Employee Development," "Organization Development," or "Human Resource Development." This trend makes it wise to look more closely at the interrelationship of the four inputs: people, technology, materials, and time.
Training and development, though primarily concerned with people, is also concerned with technology and processes, that is, the precise way an organization does business. To accomplish the desired final output, an organization requires work. That work is divided among positions; positions are divided into tasks; and tasks are assigned to people. And there we have our second input: people! To perform their assigned tasks properly, all workers need to master and apply the unique technology governing their tasks. This is where training enters the picture. Civilization has not yet found a way to conceive and run an employee-free organization, nor has it found a magic technology-and-skill potion that can be injected into people. Training is concerned primarily with the meeting of two inputs to organizational effectiveness: people and technology. Since organizations can rarely find people who are, at the time of employment, total masters of the unique requirements of specific jobs, organizations need a subsystem called "training" to help new employees master the technology of their tasks. Training changes uninformed employees into informed employees; training changes unskilled or semiskilled workers into employees who can perform their assigned tasks the way the organization wants them done; employees become workers who do things "the right way." This "right way" is called a standard, and one major function of training is to produce people who do their work "at standard." In fact, one simple way to envision how training contributes is to look at the steps by which people come to control their positions:

Step 1. Define the right (or standard) way of performing all the tasks needed by the organization.
Step 2. Secure people to perform these tasks.
Step 3. Find out how much of the task they can already perform. (What is their "inventory" of the necessary technology?)

Step 4. Train them to close skill gaps: the difference between what they can already do and the standard for performing the task.
Step 5. Test them to make certain they can perform their assigned tasks to minimum standards.
Step 6. Give them the resources necessary to perform their tasks.

From this six-step process we can also identify the two remaining inputs: time and material. People can't be miracle workers who create something from nothing. Management usually makes some statement about quality; it specifies what the finished product must look like; management also sets quantity standards. The job of the training department is to "output" people who can meet those standards, in both quality and quantity. This description may imply that all training takes place after people are hired but before they are assigned to their jobs. That's obviously not true: just look at the rosters of training programs and you'll see the names of lots of old-timers. One legitimate reason for including old-timers in training programs is that the organization has undergone a major change, such as changes in equipment, processes, policies, or procedures. Thus veteran employees and new employees alike need training initiatives and benefit from them. When change occurs, an organization will have incumbent workers who no longer know how to do their jobs the new, right way. When people do not know how to do their jobs the right way, there is a training need. People do not usually know how to do the "next job" properly; thus transfers, or the promotions implied in some career-planning designs, imply potential education needs. Some organizations have training departments that help prepare for the future. But sometimes we find people in training programs even when the technology hasn't changed, or when they aren't preparing for new responsibilities. Training is a remedy for people who do not know how, not for people who do know how but for one reason or another are no longer doing it. These other problems are performance problems, not truly training problems; therefore, training is not an appropriate solution to them.
The function once known as "training" has had to expand its own technology, strategies, and methodologies. Organizations get outputs because people perform tasks to a desired standard. Before people can perform their tasks properly, they must master the special technology used by the organization. Training is the acquisition of the technology that permits employees to perform to standard. Thus training may be defined as an experience, a discipline, or a regimen that causes people to acquire new, predetermined behaviors. Whenever employees need new behaviors, we need a training department. But as we have already noted, training departments do more than merely fill the gaps in peoples' repertoires for carrying out assigned tasks; training specialists are now also involved in career development: developing people for "the next job," for retirement, and for their roles in society outside the employing organization. Training should never be undertaken without an assessment of training needs. Even though a training activity almost always has a positive effect on the people involved, it is a costly activity that diminishes the profit of an organization. It is helpful to consider two classes of training needs: individual and organizational. The difference is very simple, but it has a heavy impact on the response made by the training department. An individual training need exists for just one person, or for a very small population. Organizational training needs exist in a large group of employees, such as the entire population with the same job classification. That happens, for example, when all clerks must be trained in a new procedure, or all managers in a new policy. A manager in a specialized department, however, may develop an individual training need when some new technology is introduced into that field, or when performance as a manager reveals the noncomprehension of one facet of good
managerial practice. When new employees enter the organization, it is assumed that they know nothing of policies and procedures, nothing of organizational goals or structures. These deficiencies of knowledge are assumed to apply to all new people. However, there may also be individual needs involving special tasks the newcomer will perform; it is a good idea to "take inventory" to see whether the individual meets the standards for some of the skills necessary for satisfactory performance of a position. Because there may be serious lapses in such areas, some organizations use "certification testing." These might be written exams, performance demonstrations, or both: tests requiring employees to demonstrate their ability to perform a specific task or job duty.
One training manager describes the process of certification testing this way: "They are used as predictors of job performance to assure the company that employees are ready to perform job responsibilities safely and accurately following the completion of their training." In other words, there is individual testing or assessment before there is organizational assessment. It must be quite apparent that the training manager has many sources of data about potential training needs. Training managers keep their eyes on the operation, on key communications, and on personnel moves even as they poll their client population. If there are many signals from many sources, the training needs (or the need for some performance-problem solution) may exceed the resources available to meet those needs. At such moments, a written policy statement comes in mighty handy. But on what basis does that policy rest? In most organizations, at least four criteria must be considered: cost-effectiveness, legal requirements, executive pressure, and the population to be served. The cost of a performance problem can usually be determined; it's relatively easy if one immediately knows the cost of a defective unit. For either the deficiency or the undecided grievances, it is then necessary to compute the cost of the solution: development costs, salary costs, special expenses. A second criterion is the legal requirement. Numerous government statutes dictate some of the decisions about what training to offer, such as equal-employment legislation and occupational safety and health acts. It may be necessary to introduce programs for which no immediate tangible cost saving can be computed: it's the law. Executive pressure is a third criterion. It usually comes from within the organization, and it's a criterion that smart training managers do not ignore. When training managers complain that they don't get support from the top, they should ask themselves how many suggestions from chief executives, vice-presidents, or
directors they have turned down recently, or even in recent years. Finally, there is the criterion of population. Sometimes this simply means that training goes to the most extensive problem: macro needs may take priority over individual needs. Fortunately, it doesn't always work that way. The factors of influence and impact must also enter the decision table. Possibly the people who perform defectively occupy positions that affect the entire operation, for example, senior managers. Performance problems that affect many workers, that are costly, that are related to the law, or that interest executives all deserve attention. Actual or potential knowledge deficiencies (DK) deserve training. Problems stemming from lack of practice (DP) should produce drill, or enforced on-the-job application. Problems stemming from other causes are probably deficiencies of execution (DE), and non-training solutions are in order. Determining the need for training in an organization is part of the Human Resources Department's job description. This task is not taken lightly, considering the effort needed to sustain such an activity: financial cost, time spent, and people involved. To determine training needs efficiently, an organization can apply one or more formulas to obtain the results needed. We cannot claim to know all the formulas that might be used to determine this factor, but we can suggest a simple way to determine it, one that every organization, regardless of its size, can use. Although we tend to assume that practices from other cultures are not always applicable in our country, we believe that in this particular matter they are. Skill in writing performance standards, or at least in describing human behavior, is a must for all training managers and specialists. Some organizations have begun to train managers from all departments in how to define performance standards. One such firm
(Kemper Insurance Companies) has conducted workshops in which line managers become trainers for further workshops at which other line managers learn how to develop standards for their subordinates. Kemper stresses the importance of developing the actual standards as a joint effort between the manager and the subordinate, not as a product of staff trainers. Once the standards are agreed upon by key people in the client department, the training specialist is ready to ask the all-important question: "Do the people who must meet these standards possess the knowledge and skill to do so right now?" If the answer is yes, no training is indicated. For newcomers, that seldom happens: they rarely know how to do their new jobs perfectly. For them, we have discovered a training need. It does not follow, however, that newcomers need training in all facets of their positions. Even newcomers have some ability and some knowledge, and we call this their "inventory." If we match the inventory against the standard we have set, we have a possible training need. What the employee must do to meet the standard can be represented by the letter M, for minimum mastery or "must do." From this M we subtract the inventory to discover what the newcomer needs to learn in order to perform properly. The test is somewhat different for employees who are already incumbent in their positions. We can again let M represent what the worker must do; from that we still subtract the I, or inventory. But this time the inventory is what the worker is actually doing now. The difference between the M and the I is a potential training need. We now have a formula for potential training needs: M - I = a potential training need.
Studies and Scientific Researches -Economic Edition, no. 15, 2010
The word "potential" is accurate. Why? Because with incumbents we are not yet certain that the reason for the difference is a lack of knowledge or skill. We don't yet know that they do not know how; only if the reason for the difference is their not knowing how do we have a training need. It's helpful to regard the distance between the "must do" and the "is doing" as a deficiency. We can put this into our formula by assigning it the letter D, so the formula now reads: M - I = D. At this stage we are ready to consider several different types of deficiency. When employees don't know how, we call this DK, for "deficiency of knowledge." All DK's are regarded as training needs. If the difference between the "must do" and the "is doing" stems from other causes, we consider it a "deficiency of execution" and call it a DE. What "other causes" might there be? To name a few: lack of feedback, badly engineered jobs, or punishing consequences. DE's are not solvable through training. Sometimes people know how to do the job but have so little practice that they cannot maintain a satisfactory level of performance. This might be called a DP, or "deficiency of practice;" training in the form of drill may solve DP problems. As we have often noted, there is no sense in training people to do what they can already do.
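The needs-assessment logic above (the deficiency D = M - I, then classification into DK, DP, or DE) can be sketched as a small decision function. The function name, return strings, and `reason` labels are our own illustrative assumptions; M, I, and the DK/DP/DE categories come from the text:

```python
# Sketch of the needs-assessment logic: compute the deficiency D = M - I,
# then classify it by its cause to pick an appropriate response.

def assess(must_do: float, inventory: float, reason: str) -> str:
    """Classify a performance gap and suggest a response."""
    deficiency = must_do - inventory            # D = M - I
    if deficiency <= 0:
        return "no training need"               # already meets the standard
    if reason == "lacks knowledge":
        return "DK: train"                      # DK: a true training need
    if reason == "lacks practice":
        return "DP: drill"                      # DP: drill / on-the-job practice
    return "DE: non-training solution"          # DE: feedback, job design, etc.

print(assess(10, 6, "lacks knowledge"))
print(assess(10, 10, "lacks knowledge"))
```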
Training is an appropriate solution to job-related problems for people who have what we call a DK (deficiency of knowledge) or a DP (deficiency of practice), both of which cause performance problems and deficiencies in knowledge, skills, or abilities. Feedback systems are attractive alternatives to training because they motivate workers, are inexpensive, and can be part of the regular management reporting system. First, let's see why feedback is in itself something of a motivator. When employees are able to see their own accomplishments, they have more reason to be interested in their work, more reason to be satisfied with their assignments, a greater sense of being needed, and a keener awareness of their contributions. Motivation is a fundamental component of performance. Supervisors and managers are responsible for achieving the goals of the organization by leading the performance or efforts of their employees. Individual job performance can be summarized as follows:

Performance = Ability x Motivation (effort)

In this model, performance is the product of ability times motivation:
Ability = Aptitude x Training x Resources
• Aptitude refers to current skills and capabilities, education, and previous job experience.
• Resources are the tools that an employee needs to do the work (e.g., equipment, supplies, the work of other employees, time to complete the tasks, etc.).
Motivation = Desire x Commitment
• Desire means wanting to perform the job, but desire by itself is not enough. An employee who wants to complete a task but who is easily distracted or discouraged cannot perform well (high desire/low commitment).
• Commitment means being persistent, trying hard to complete a task. However, without desire, an employee could be committed to his or her work but proceed slowly and produce only adequate results (low desire/high commitment).
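The multiplicative model just described (Performance = Ability x Motivation, with Ability = Aptitude x Training x Resources and Motivation = Desire x Commitment) can be expressed directly in code. The 0-1 scaling and function names are our own illustrative choices, since the text speaks only in percentages:

```python
# The multiplicative performance model: every factor is essential, so a
# single zero factor zeroes the whole product. Factors are on a 0-1 scale.

def ability(aptitude: float, training: float, resources: float) -> float:
    return aptitude * training * resources

def motivation(desire: float, commitment: float) -> float:
    return desire * commitment

def performance(aptitude: float, training: float, resources: float,
                desire: float, commitment: float) -> float:
    return ability(aptitude, training, resources) * motivation(desire, commitment)

# A fully motivated employee with 75% of the needed aptitude still performs
# well; one with full motivation but no resources cannot perform at all.
print(performance(0.75, 1.0, 1.0, 1.0, 1.0))
print(performance(1.0, 1.0, 0.0, 1.0, 1.0))
```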
The multiplication symbol (x) demonstrates that all elements are essential. Someone with 100 percent of the motivation and 75 percent of the ability needed for a job can perform at an above-average level. However, an individual with only 10 percent of the ability would not be able to perform acceptably regardless of how motivated he or she is. Once the need for training has been determined, the first step is complete; what follows is the hard part: TRAINING. Such an activity can look very different depending on the people to be trained (their number, age, and education level), the trainers (their personality, state of mind, and knowledge of the subject of training, all of which can be set aside if an organization prefers to engage a specialized training company), and, not least, the type of information the trainees will have to know at the end of the course. This part, even though it is usually considered the hardest part of the training process, will not be discussed here, given the extent of the subject. We will instead try to give entrepreneurs the means to take the matter into their own hands and measure both the training need and its effect: alpha and omega. Measurement has some other effects, too. The very process of measuring tends to increase the use of the new behaviors. An old adage says, "In organizations, what is important gets measured," with the corollary, "What's measured becomes important." If any one element is more important than the others in effective measurement, it is selecting the proper thing to count. There is a significant dilemma there: if measurement doesn't count, if it isn't quantitative, then it isn't really measurement; if it counts the wrong things, it is an inappropriate measurement. First, when we measure training achievements, the things we count should represent what we are seeking. That's true whether we measure perceptions, learning, or performance units. Next, those things should be
inherently valuable. Finally, the search itself should develop an increasingly satisfactory performance of those inherently valuable units. The thrust of effective evaluation is to make responsible judgments about important questions. If an improved operation is what the Training department wants to contribute, the inquiry must focus on hard data, and the evaluation must indicate whether or not the problem has been eliminated or significantly diminished. It breaks down into these steps:
1. Identify an unbearably deficient performance.
2. Identify specific units that characterize the problem.
3. Count the number of unacceptable units to establish a baseline.
► Compute the application quota: divide the number of successful demonstrations by the number of graduates.
► Evaluate. Do the retentions of the new behavior equal the goals established?
T&D officers who want to be relevant and accountable seek the hardest possible data from the widest possible range of representative sources. The purpose of training is to change employees: their behavior, opinions, knowledge, or level of skill. The purpose of evaluation is to determine whether the objective was met and whether these changes have taken place. One way to consider the importance of evaluation is to recognize the feedback it provides. Feedback can be obtained through self-reporting or by observing the learner. Various kinds of evaluations provide feedback to different people, such as:
♦ Employees, regarding their success in mastering new knowledge, attitudes, and skills.
♦ Employees, concerning their work-related strengths and weaknesses. Evaluation results can be a source of positive reinforcement and an incentive for motivation.
♦ Trainers, for developing future interventions and program needs or creating modifications in the current training efforts.
♦ Supervisors, as to whether there is an observable change in employees' effectiveness or performance as a result of participating in the training program.
♦ The organization, regarding return on investment in training.
The world of work continues to become more and more complex, and for everyone, including trainers, there are many learning curves ahead. The demand inferred by Senge (1990), 'As the world becomes more interconnected and business becomes more complex and dynamic, work must become more learningful', indicates that there will be a crucial and demanding role for training in the future. There are many techniques, approaches and theories which can be applied in training, and no single volume can do justice to them all. What we have done here was an attempt (successfully, we hope) to draw attention to one of the most important parts of an organization, and that is: TRAINING.
4. Establish quantitative goals: a post-program baseline objective.
5. Conduct the change program.
6. Count the satisfactory and unsatisfactory units after the program.
7. Evaluate. Is the number of satisfactory units equal to the objective established in step 4? In other words, did the program produce the desired results?
If the production of new behaviors is the extent of the Training purpose, the evaluation will focus on the demonstrated acquisition and the perseverance of those behaviors. The successive steps are:
1. Establish the performance (learning) objectives.
2. Establish a desired achievement quota (the number of behaviors acquired successfully divided by the number of trainees).
3. Conduct the training or install the change program.
4. Test each trainee over each learning objective.
5. Compute the actual achievement quota.
6. Evaluate. Does the actual achievement quota equal or surpass the desired achievement quota?
When mere acquisition isn't what the department wants to evaluate, there are additional steps to evaluate the on-the-job application of the new behaviors:
► Wait until a predetermined time and retest the graduates on each of the learning objectives.
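The two quotas above reduce to a couple of divisions; the sketch below works one hypothetical case through the evaluation step. All figures are invented for illustration, not taken from the article.

```python
# Worked example of the two quotas described in the evaluation steps above.
# All figures are hypothetical and only illustrate the arithmetic.

def achievement_quota(behaviors_acquired: int, trainees: int) -> float:
    """Behaviors acquired successfully per trainee (acquisition checklist)."""
    return behaviors_acquired / trainees

def application_quota(successful_demonstrations: int, graduates: int) -> float:
    """Successful on-the-job demonstrations per graduate (application check)."""
    return successful_demonstrations / graduates

# Suppose 40 trainees graduate and, at the retest, 30 of them still
# demonstrate the new behavior on the job.
desired = 0.80                       # goal fixed before the program
actual = application_quota(30, 40)   # 30 / 40 = 0.75

# Evaluation: did retention of the new behavior meet the goal?
met_goal = actual >= desired
print(actual, met_goal)
```

With these numbers the script prints `0.75 False`: retention fell short of the pre-set goal, so the program would be revisited.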
Fourier transforms on the basic affine space of a quasi-split group
We extend the Gelfand and Graev construction of generalized Fourier transforms on basic affine space from split groups to quasi-split groups over a local non-archimedean field $F$.
• Let F be a local non-archimedean field with the norm | · | = | · | F , the ring of integers O and a fixed uniformizer ϖ such that |ϖ| = q −1 , where q is the cardinality of the residue field.
• We fix a non-trivial additive character ψ throughout the paper. The self-dual Haar measure dx on F with respect to ψ defines the Haar measure d × x = dx/|x| on F × .
• For a quadratic extension K of F we denote by χ K the quadratic character of F × associated to K by class field theory. We also denote by χ 0 the trivial character of F × .
• For a space Y over F we denote by S ∞ (Y ) (resp. S c (Y )) the space of locally constant (resp. locally constant, compactly supported) functions on Y .
• Throughout this paper we use boldface characters for group schemes over F , such as H, and plain text characters for their groups of F -points, such as H.
• Let G be a simply-connected quasi-split group defined over F with a maximal F -split torus T ′ and the maximal torus T = Z G (T ′ ). We fix a Borel subgroup B of G containing T , so that B = T · U. We write U op for the unipotent radical of the opposite Borel subgroup.
• The Weyl group W = N G (T ′ )/T acts on T by conjugation and we write t w for w −1 tw for all t ∈ T , w ∈ W .
• The quotient X = U \G is called the basic affine space of G. For any g ∈ G we write [g] for the element U g in X. The space X admits a unique, up to a scalar, G-invariant measure ω X . The precise choice of ω X is not important for general G, but will be fixed for groups of rank 1.
1.1. Fourier transforms on the basic affine space of a quasi-split group. We define a unitary representation θ of the group G×T on L 2 (X, ω X ) , where δ B is the modular character.
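In the split case treated in [GG73] and [BK99], the representation θ has the following standard shape; the normalization below, with the modular character δ_B entering through its square root, is our reconstruction rather than a quotation from those sources:

```latex
\bigl(\theta(g,t)f\bigr)([x]) \;=\; \delta_B^{1/2}(t)\, f\bigl([\,t\,x\,g\,]\bigr),
\qquad g \in G,\quad t \in T,\quad [x] \in X = U\backslash G .
```

Since T normalizes U, the class [txg] is well defined, and the factor δ_B^{1/2}(t) compensates for the scaling of the measure ω X under the action of T, making θ(g, t) unitary on L 2 (X, ω X ).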
For split groups Gelfand and Graev in [GG73], see also [KL88], [Kaz95], extended the action θ of G × T to a representation of G × (T ⋊ W ), so that every element w of W acts on L 2 (X, ω X ) by an operator Φ w , called a generalized Fourier transform. Our paper has two goals: • To extend the construction by Gelfand and Graev to quasi-split groups. • To show that the Whittaker map intertwines the action of W on a dense subspace S 0 (X) in L 2 (X) with the natural action of W on the space of Whittaker vectors. We show (see Theorem 1.2) that this property characterizes uniquely the operators Φ w .
defines an isomorphism S c (X) U op ,Ψ ≃ S c (T ). We define an action of W on S c (T ). For split groups set w · ϕ(t) = ϕ(t w ).
For quasi-split groups see Definition 5.6. We define (see 6.1) a G × T submodule S 0 (X) that is dense in L 2 (X) and put S 0 (T ) = W Ψ (S 0 (X)) ≃ S 0 (X) U op ,Ψ . There is a natural map κ Ψ : End G (S 0 (X)) → End C (S 0 (X) U op ,Ψ ) = End C (S 0 (T )) such that for every B ∈ End G (S 0 (X)) the following diagram is commutative.
We prove in Proposition 6.2 that the map κ Ψ is injective.
1.1.2. Main Theorem. With this notation we formulate our main result.
(1) First consider a quasi-split, almost simple, simply-connected group G 1 of rank one. The group G 1 is isomorphic to either Res L SL 2 or Res L SU 3 for a finite extension L of F . Without loss of generality we can assume that L = F . In both cases the Weyl group W = {e, s} consists of two elements. We shall define the generalized Fourier operator Φ s , separately for these two cases.
• In the case G 1 = SL 2 the set X can be identified with V − 0 for a symplectic two dimensional plane V . In this case Φ s ∈ Aut(L 2 (X)) = Aut(L 2 (V )) is defined to be the classical Fourier transform with respect to the symplectic form on V . Theorem 1.2 in this case is proven in Section 3. • In the case G 1 = SU 3 , the set X can be identified with the set of non-zero isotropic vectors in a 6 dimensional quadratic space. The treatment of this case is the crux of the paper.
In [GK22] we have defined a unitary operator Φ on L 2 (X) of order 2, commuting with G 1 and anti-commuting with T ′ , and provided an explicit formula for the restriction of Φ to the space S c (X). We put Φ s = Φ and prove Theorem 1.2 in this case in Section 4.
(2) For a general quasi-split group G and any simple reflection s, using the results for groups of rank 1, we define a unitary involution Φ s ∈ Aut(L 2 (X)) satisfying the required equivariance property. (3) For arbitrary w ∈ W with a presentation w = s 1 · s 2 · . . . · s n as a product of simple reflections we define Φ w = Φ s 1 • Φ s 2 • . . . • Φ sn . Hence the operators Φ w are unitary and possess the desired equivariance properties. It remains to prove that Φ w does not depend on the presentation. For every ϕ ∈ S 0 (T ) one has κ Ψ (Φ w )(ϕ) = w · ϕ, and so κ Ψ (Φ w ) does not depend on the presentation of w. Since κ Ψ is injective, the operator Φ w does not depend on the presentation of w either. In particular, Φ w 1 • Φ w 2 = Φ w 1 w 2 for w 1 , w 2 ∈ W and the operators {Φ w , w ∈ W } satisfy 1.3.
Remark 1.4. We expect that a similar strategy can be applied to prove Theorem 1.2 for F = R.
Acknowledgment. The research of the second author is partially supported by the ERC grant No 669655. We thank the referee for carefully reading the paper and pointing out several inaccuracies in the first version.
2. On the space S 0 (X) In [BK99] the authors have defined, for split groups, the spaces S 0 (X) and S(X). In particular S 0 (X) ⊂ S c (X) ⊂ S(X) ⊂ L 2 (X, ω X ) and the spaces S 0 (X), S(X) are preserved by the family of operators Φ w , w ∈ W . The space S(X), called the Schwartz space, is potentially important for the construction of integral representations of L-functions.
Describing the Schwartz space S(X) explicitly is a deep problem. For example, for G = SL 2 one has S(X) = S c (V ), and for G = SU 3 the space S(X) can be identified with the space of smooth vectors in the unitary minimal representation of a group SO(8) containing SU 3 inside its Levi subgroup GL 1 × SO(6), see [GK22].
The space S 0 (X) in this paper is contained in the space S 0 (X) of [BK99]. Let us highlight its useful properties: • It is explicitly given as an intersection of kernels of certain partial Mellin transforms. • The Fourier transforms corresponding to simple reflections preserve this space and can be written as integral operators with explicitly given continuous kernels. • The family {Φ w } is unique for a given S 0 (X). On the other hand this space is not canonical and can easily be replaced by other subspaces of S c (X) that are dense in L 2 (X, ω X ) and preserved by the Φ w , for example by the space S 0 (X) of [BK99].
The space S 0 (X) will be defined separately for the groups of rank one, and, based on this, for general group.
The density of S 0 (X) in L 2 (X, ω X ) is a consequence of Proposition 2.1 below.
Consider a finite set B = {(L i , a i , χ i )}, where L i is a finite extension of F , a i : L × i ֒→ T is an embedding and χ i is a character of L × i . For each (L i , a i , χ i ) consider a partial Mellin transform, and let S B (X) be the intersection of the kernels of these transforms. It is a G × T invariant subspace of S c (X). The following proposition will be repeatedly used in the paper.
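Presumably the partial Mellin transform attached to a triple (L_i, a_i, χ_i) takes the following shape, with d^×λ a Haar measure on L_i^×; this display is our reconstruction, and S_B(X) is then the common kernel:

```latex
P_i(f)(x) \;=\; \int_{L_i^{\times}} f\bigl(a_i(\lambda)\cdot x\bigr)\,\chi_i(\lambda)\, d^{\times}\lambda ,
\qquad
S_B(X) \;=\; \bigcap_i \ker P_i \;\subset\; S_c(X) .
```

This matches the expression P_χ(f) = ∫_T θ(t)f · χ(t) dt used in the proof of Proposition 6.2.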
Proposition 2.1. The space S B (X) is dense in L 2 (X, ω X ).
Proof. Let us prove this first in the case where none of the characters χ i is unitary; precisely, assume that all χ i satisfy |χ i (ϖ i )| ≠ 1. To show that the space S B (X) is dense, assume there exists a non-zero function f ∈ S B (X) ⊥ ⊂ L 2 (X, ω X ). Since S c (X) is dense in L 2 (X) there exists a function g ∈ S c (X) such that ⟨f, g⟩ ≠ 0.
Denote by ϖ i a uniformizer of L i . For any n ∈ N define operators E i n , E n on S c (X). Clearly, E n (g) ∈ S B (X). Set g n = g − E n (g). Note that |χ i (ϖ i )| n tends to 0 or to ∞ as n → ∞, which gives a contradiction. Now let us treat a general set of characters B. For any compact subset K of X let S B (X; K) be the space of functions in S B (X) supported on K.
Since the action of T on X is free, for any character χ of T there exists a smooth function h on X that transforms under T by χ. Multiplication by h defines a T -equivariant isomorphism between S B (X) and S B ′ (X), where B ′ = {(L i , a i , χ i · χ • a i )}, which is also a homeomorphism between S B (X; K) and S B ′ (X; K) for all compact K ⊂ X. Hence S B (X) is dense if and only if S B ′ (X) is dense in L 2 (X, ω X ). By choosing an appropriate χ we can ensure that B ′ does not contain unitary characters. We are done.
The group G 1 acts on V on the right, preserving the symplectic form. Let B 1 = T 1 · U 1 be the Borel subgroup stabilizing the line F e 2 . The space X = U 1 \G 1 is identified with V − 0. The G 1 -invariant measure ω X on X is fixed to be the self-dual measure |dv| on V with respect to the additive character ψ and the symplectic form on V.
The Fourier transform Φ ∈ Aut(S c (V )) is defined by the Fourier integral with respect to the symplectic form. The following properties of Φ are well-known: Proposition 3.1.
(1) Φ extends to a unitary involution on L 2 (V, |dv|) = L 2 (X, ω X ). For a function f on X the argument will be denoted either as a class [g] or as a vector (x, y) = xe 1 + ye 2 ∈ V − 0.
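The defining formula for Φ is the Fourier transform attached to the symplectic form ⟨·, ·⟩ on V; written with our (assumed) normalization it reads

```latex
\Phi(f)(v) \;=\; \int_{V} f(u)\,\psi\bigl(\langle u, v\rangle\bigr)\, |du| ,
\qquad f \in S_c(V),\ v \in V .
```

Because the pairing is antisymmetric, a direct computation gives Φ² = id on S_c(V), consistent with Φ being an involution as in Proposition 3.1.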
We define certain typical elements of G 1 : the torus elements t(a) = diag(a, a −1 ) for a ∈ F × , the standard unipotent elements, and a representative n s of the non-trivial Weyl element. One has α(t(a)) = a 2 for the unique positive root α of G 1 with respect to T 1 .
3.1. The space S 0 (X). Let B be a set of two triples as in Section 2. We define S 0 (X) to be S B (X); see Section 2 for the definition. It is obviously a G 1 × T 1 representation and is dense in L 2 (X, ω X ) by Proposition 2.1.
Proof. First note that for any
For any character χ of T 1 one has Since f is of compact support, the integral defining Φ(f ) is taken over a compact set in X, and hence the integral over T 1 can also be replaced by an integral over a compact set. By interchanging the order of integration we see that if f ∈ Ker P (χ −1 ) then Φ(f ) ∈ Ker P (χ).
Hence for f ∈ S 0 (X) the function Φ(f ) belongs to S 0 (X). This proves the Lemma.
Proof. See the proof of 6.2 for a general quasi-split G.
Definition 3.4. We define an action of W on S c (T 1 ) by Indeed, once this is proven one has for t ∈ T 1 and F ψ : S c (F ) → S c (F ) denotes the one-dimensional Fourier transform with respect to ψ and the self-dual measure dx on F .
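The one-dimensional Fourier transform F_ψ appearing in Definition 3.4 is the usual one,

```latex
\mathcal{F}_{\psi}(g)(y) \;=\; \int_{F} g(x)\,\psi(xy)\, dx ,
\qquad g \in S_c(F),
```

taken with the self-dual measure dx, so that F_ψ ∘ F_ψ(g)(y) = g(−y).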
Theorem 3.8. There exists a unique unitary operator Φ s ∈ Aut(L 2 (X, ω X )), that preserves the space S 0 (X) and satisfies Proof. The injectivity of κ Ψ implies the uniqueness of the operator Φ s , hence it is enough to construct such an operator. We define Φ s to be Φ. The properties follow from Propositions 3.1, 3.2, 3.5.
The structure and compatibility of measures.
4.1.1. The field. Let K be a quadratic field extension of F with the Galois involution x → x̄, the norm Nm and the trace Tr. We write | · | K for the absolute value on K, such that |x| K = | Nm(x)| F . The space K admits the quadratic form x → Nm(x), and the associated bilinear form on K is (x, y) → Tr(xȳ). We fix on K a self-dual measure dx with respect to ψ and Nm. The Fourier transform on K is denoted by F ψ,K , to distinguish it from the Fourier transform F ψ with respect to ψ and the self-dual measure on F . 4.1.2. The unitary group. Let (W, h) be a fixed three-dimensional Hermitian space over K. The group G 1 = SU (W, h) is the group of automorphisms of W, acting on the right, preserving the Hermitian form h and having determinant 1. Its elements are 3 × 3 matrices over K.
We denote by B 1 = T 1 · U 1 the Borel subgroup of G 1 preserving the line K(0, 0, 1) in W. The unipotent radical U 1 is the stabilizer of the vector (0, 0, 1). The space X = U 1 \G 1 is naturally identified with the set W 0 of h-isotropic non-zero vectors in the space W. We write T ′ for the maximal split torus of T 1 . 4.1.3. The measures. The space W with dim F (W) = 6 admits the F -bilinear form ⟨v 1 , v 2 ⟩ = Tr h(v 1 , v 2 ), and q denotes the corresponding quadratic form q(v) = ⟨v, v⟩. We fix the self-dual measure dw on W with respect to ψ and q. It gives rise to a measure on the cone W 0 and hence to a measure on X which we denote by ω X .
We fix bijections K × √ τ F ≃ U op 1 and K × ≃ T 1 . We also fix a representative n s of the Weyl element s. The Haar measures on K × √ τ F and K × define the measures on U op 1 and T 1 respectively. By the Bruhat decomposition for G 1 , there is an embedding j : T 1 × U op 1 ֒→ X with dense image. It is straightforward to check the resulting integration formula for any f ∈ S c (X). The root system with respect to the torus T ′ is non-reduced, of type BC 1 . The operator Φ s for the group G 1 is defined using the normalized Radon transform on the cone X. Below we recall the definition and the relevant properties; we refer to [GK22] for proofs. 4.2. Mellin transform. Let χ be a character of O × , extended to F × by setting χ(ϖ) = 1. We write χ s for the character χ| · | s of F × . The Mellin transform can also be computed on functions in S ∞ (X), not necessarily of compact support, provided the integral converges.
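The Mellin transform referred to above presumably pairs a function against the characters χ_s; a standard shape (our reconstruction) is

```latex
\mathcal{M}(f)(\chi_s) \;=\; \int_{F^{\times}} f(x)\,\chi(x)\,|x|^{s}\, d^{\times}x .
```

For f of compact support in F^× this is a finite Laurent sum in q^{−s}; for f merely locally constant it converges on a suitable half-plane in s, which is the convergence proviso in the text.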
The following statement is obvious and will be used later.
The Radon transform.
Recall that X can be identified with the space W 0 of non-zero isotropic vectors in W. In this section elements in X will be denoted by u, v, w . . ., isotropic vectors in W.
For any vector w ∈ W 0 = X consider the algebraic map v → ⟨v, w⟩. The measure ω X defined above and the measure dx on F give rise to a well-defined measure ω w,a on the fiber of this map over a ∈ F . For any a ∈ F we define the Radon transform R(a) by integrating over these fibers against ω w,a ; in addition we set R̃ to be its normalized version. Below we list the properties of the operators R(a) and R̃, all proven in [GK22], Section 3. The quadratic space (V K , q K ) in loc. cit. is isomorphic to the quadratic space (W, q) and the results proven in loc. cit. hold in our setting.
Let us fix terminology for convergence of integrals of locally constant functions, not necessarily of compact support, on X. By properties (4), (5) the integral converges absolutely. By property (2) it satisfies the equivariance property for G × T ′ .
In [GK22], using the minimal representation of the group O(8), we proved that Φ extends to a unitary involution on L 2 (X, ω X ). The operator Φ is our candidate for the Fourier transform. To prove Theorem 1.2 for G 1 it remains • to show that Φ enjoys the equivariance property with respect to T 1 , • to define a space S 0 (X) ⊂ S c (X), preserved by Φ and dense in L 2 (X, ω X ), and • to compute κ Ψ (Φ) on the space S 0 (T 1 ) = W Ψ (S 0 (X)).
4.5. The space S 0 (X). Define the space S 0 (X) = S B (X), where B is the following finite set of characters of T ′ ≃ F × : Proposition 4.4. The operator Φ preserves S 0 (X).
Proof. We start by showing that for f ∈ S 0 (X) one has Φ(f ) ∈ S c (X). Since Φ(f ) has bounded support, it is enough to show that the germ [Φ(f )] 0 at zero vanishes.
Proposition 4.5. For f ∈ S 0 (X) one has Proof. For a ∈ F × , the integral defining L stabilizes both at zero and at infinity. In particular, there exists a compact set K 1 in F × such that For f ∈ S 0 (X), the function a → R(a)(f )(w) is of compact support on F × by Lemma 4.2, part (2). We can assume that the support is contained in K 1 .
By the Fubini theorem We can change the order of integration over compact sets. This gives as required.
Proof. This is a straightforward computation and is very similar to the proof of the equivariance property of the classical Fourier transform.
One has v, tw = (t s ) −1 v, w for all t ∈ T 1 . Applying the change of variables v → (t s ) −1 v and taking the measure into account, we get that the integral equals as required.
Proposition 4.8. Let f ∈ S 0 (X). The proof occupies the rest of this subsection. We start with the following technical Lemmas, whose proofs are postponed to the end of this subsection. Lemma 4.9. For any x ∈ F and g ∈ S c (K) one has According to Weil [Wei64], there exists a constant γ(χ K , ψ), which is a fourth root of unity, satisfying (4.10) For all t ∈ F × denote by ψ t the additive character ψ t (x) = ψ(tx). One has Lemma 4.11. For any g ∈ S c (F ) one has (4.12) Proof of Proposition 4.8. It is easy to see that the first part implies the second. Indeed, assuming part (1), for any t ∈ T 1 and f ∈ S 0 (X) one has Using the Bruhat decomposition for G 1 this equals.
To ease notation we write f̃ ∈ S c (F × ) for the function b → W Ψ (f )(t(b)). One has Hence the above equals Writing explicitly the expression for L from 4.6 and rearranging the order of integration, this equals We apply Lemma 4.9 to the middle line, i.e. for g(b) = f̃ (b)ψ(− Tr(bx Nm(r)/2)).
It remains to prove the Lemmas.
Proof of Lemma 4.9. We fix the isomorphism of vector spaces The self-dual measure on K with respect to (ψ, Nm) is transported under this isomorphism to |2||τ | 1/2 db 1 db 2 .
It is enough to prove the Lemma for g = g 1 ⊗ g 2 , where g 1 , g 2 ∈ S c (F ). Let us write y = √ τ y ′ for y ′ ∈ F , so that dy = |τ | 1/2 dy ′ . Consider the Haar measures on the fibers of Nm : K × → F × . All the fibers are compact and have the same measure C. By the Fubini theorem, for any function h ∈ L 1 (F × ) one has (4.14) Applying this to the integral over F × in the LHS of 4.12 we obtain g(Nm(y))ψ(− Nm(r/y)) − g(c Nm(y))ψ(−c −1 Nm(r/y))d × xψ(Tr(r))dr.
After the change of variables r → rȳ this equals where g c (x) = g(cx) for any x. This equals by 4.10 Applying equation 4.14 again this equals The restriction W Ψ : S 0 (X) → S c (T ), whose image we denote by S 0 (T ), gives rise to the homomorphism κ Ψ : End G (S 0 (X)) → End C (S 0 (T )).
Proof. See the proof of 6.2 for the general case.
Let us define the action of W on S 0 (T 1 ).
Definition 4.16. The action of W on S 0 (T 1 ) is defined by Theorem 4.17. There exists a unique unitary involution Φ s ∈ Aut(L 2 (X, ω X )) that preserves the space S 0 (X) and satisfies Proof. The injectivity of κ Ψ implies the uniqueness of such an operator, and hence it is enough to construct such a Φ s . We put Φ s = Φ. The properties follow from Theorem 4.3 and Propositions 4.7, 4.4 and 4.8, part (2).
Quasi-split groups
We recall below the structure of reductive quasi-split groups. Our main reference is [BT84]. 5.1. Relative and absolute root systems. Let G be a reductive, connected, simply-connected quasi-split group over F with a maximal split torus T ′ . We denote by Lie(G) the Lie algebra of G and by Ad the adjoint action of G on Lie(G). Let T be the centralizer of T ′ and N be the normalizer of T ′ , both defined over F .
The root datum of G with respect to T ′ is a quadruple (X * (T ′ ), R, X * (T ′ ), R ∨ ), where the set of roots R ⊂ X * (T ′ ) consists of the weights that appear in the representation Ad : T ′ → Aut(Lie(G)).
The root system R is not necessarily reduced. For any root α ∈ R, its root ray is defined as R ∩ R >0 α, taken in R ⊗ X * (T ′ ). Each root ray contains one or two elements. We denote by R the set of root rays.
The choice of a Borel subgroup B, containing T and defined over F determines the decomposition R = R + ∪ R − into the set of positive and negative roots and the subset ∆ ⊂ R + of simple roots. We call a root ray positive (resp. negative, resp. simple) if it contains a positive (resp. negative, resp. simple) root.
The groups G and T are split over the separable closure F s of F . There exists a minimal extension F ⊂ E ⊂ F s over which T , and hence G, splits. The extension E/F is Galois. We denote the split E-group by G̃. It has a root datum (X * (T̃), R̃, X * (T̃), R̃ ∨ ). Note that all root rays in X * (T̃) ⊗ R >0 are singletons.
The Borel subgroup B̃ of G̃ containing B determines the set R̃ + of positive roots and the set ∆̃ of simple roots. The Galois group Γ = Gal(E/F ) acts on X * (T̃), R̃, R̃ + and ∆̃.
There is a bijection β ↔ R̃ β between the set R of roots and the set of Γ-orbits in R̃. The restriction of every root in R̃ β to T ′ equals β.
Definition 5.1. Let α ∈ R̃. The field L α = E Γα is called the field of definition of α, where Γ α ⊂ Γ is the stabilizer of α.
(2) For α ∈ R̃, if α| T ′ is a divisible root in R, then there exist roots α 1 , α 2 ∈ R̃ such that In addition, L α 1 = L α 2 is a quadratic extension of L α .
The Chevalley-Steinberg pinning.
For any a ∈ R there exists a maximal connected subgroup U a of G, defined over F , such that the weights that appear in the representation Ad : T ′ → Aut(Lie(U a )) belong to a. The group U a is called the root subgroup corresponding to a ∈ R. For any simple root ray a in R, let G a be the group generated by U a and U −a . Since the group G is simply-connected, the group G a is a simply-connected group of rank 1 over F . We denote by T a and T ′ a the maximal torus and the maximal split torus of G a respectively. The group G̃ a in G̃ is G a considered as a group over E.
The following proposition describes G a andG a .
Proposition 5.3. Let a be a root ray. There are two possible cases. • a = {α}. In this case the group G̃ a is isomorphic over E to a product of copies of the group SL 2 , indexed by R̃ α . There exists an isomorphism φ a : SL 2 (L α ) → G a . • a = {α, 2α}. In this case the group G̃ a is isomorphic to a product of copies of SL 3 indexed by the set I of subsets {α 1 , α 2 } ⊂ R̃ α , such that α 1 + α 2 ∈ R̃. The field L α 1 = L α 2 is a quadratic extension of L α 1 +α 2 with a non-trivial automorphism x → x̄. Let SU 3 be the group of automorphisms of the Hermitian space L 3 α 1 preserving the form h(x, y, z) = Tr(xz̄) + Nm(y) and having determinant 1. It is a quasi-split group of rank 1 over L α 1 +α 2 .
There exists an isomorphism φ a : SU 3 (L α 1 +α 2 ) → G a . From now on we fix a family of isomorphisms φ a , a ∈ R, such that the φ a define a Chevalley-Steinberg pinning of the group G; see [BT84], page 78. 5.3. The Weyl group. The Weyl group W is isomorphic to N/T . For any a ∈ R the image of the element n sa = φ a (n s ) in W is denoted by s a . These elements, called simple reflections, generate W .
The roots in the same W orbit have the same field of definition.
For any w ∈ W we denote by l(w) the length of a reduced presentation of w as a product of simple reflections.
For any w ∈ W we define R(w) = R + ∩ w −1 (R − ). Then l(w) = |R(w)|. We denote by w 0 the longest element of W , and by n 0 its representative in N .
The action of W on S c (T ).
Definition 5.4. Define for any w ∈ W the element t w = ∏ a∈R(w) t a , where t a = φ a (t(−1)) for a = {α, 2α} and t a = 1 otherwise.
Lemma 5.5. t w 2 · (w −1 2 t w 1 w 2 ) = t w 1 w 2 . Proof. The set R(w 1 w 2 ) can be written as a disjoint union, and the union is R(w 1 w 2 ). Besides, R(w) = −wR(w −1 ). Writing out the definitions we conclude that t w 2 w −1 2 t w 1 w 2 = t w 1 w 2 .
Proposition 5.6. The map W × S c (T ) → S c (T ) defined by is an action of W on S c (T ).
For groups of rank 1 this action was defined in 3.4 and 4.16.
Generalized Fourier transforms
In this section we generalize Theorems 3.8 and 4.17 that concern the quasi-split groups of F -rank one to a general quasi-split group G. We keep the notation of Section 5.
For any root ray a of the group G we fix the isomorphisms φ a : G 1 → G a , where G 1 is a quasi-split group of rank 1.
To formulate the main result we introduce the spaces S 0 (X), S 0 (T ) and the homomorphism κ Ψ : End G (S 0 (X)) → End C (S 0 (T )).
6.0.1. The space S 0 (X). We define for each positive root ray a a set of triples B a as in section 2 as follows.
(1) Assume that a = {α} and L α be the field of definition of α. Then Definition 6.1. Define S 0 (X) = S B (X), where B = ∪ a B a and the union is taken over all positive root rays.
In particular S 0 (X) = ∩ a S Ba (X). The Weyl group acts naturally on the set B by w(L α , φ a • t, χ i ) = (L w(α) , φ w(a) • t, χ i ), where α ∈ a and L w(α) = L α . Note that under this action w(B a ) = B wa .
For groups of rank one, the definition of the space S 0 (X) coincides with the definition given in 3.1 and 4.5. 6.0.2. Whittaker map and the map κ Ψ . We define a distinguished non-degenerate character Ψ : U op → C that is compatible with the fixed family of isomorphisms {φ a } from section 5.
Let Ψ be the unique character of U op such that for every simple root ray a the restriction of Ψ to U −a equals Ψ a 1 = Ψ 1 • φ −1 a . For this Ψ the Whittaker map W Ψ : S c (X) → S c (T ), defined as in the introduction, gives rise to an isomorphism S 0 (X) U op ,Ψ ≃ S 0 (T ), where S 0 (T ) = W Ψ (S 0 (X)). This isomorphism induces the map κ Ψ : End G (S 0 (X)) → End C (S 0 (T )). Proposition 6.2. The map κ Ψ is injective. Proof. Let us show that Ker W Ψ does not contain non-zero G-modules. Indeed, assume that V ⊂ Ker W Ψ ⊂ S 0 (X) is a non-zero G-module. For any character χ of T the space of coinvariants S 0 (X) T,χ −1 is naturally isomorphic to the normalized principal series representation Ind G B (χ). The functor of coinvariants induces a map V T,χ −1 → Ind G B (χ). For every character χ in a Zariski-open set one has: • for some f ∈ V the Mellin transform P χ (f ) = ∫ T θ(t)f · χ(t)dt ≠ 0, • the representation Ind G B (χ) is irreducible. We pick such a χ. Since f does not belong to the kernel of P χ , the map V T,χ −1 → Ind G B (χ) is non-zero, thus surjective. The functor of coinvariants with respect to (U op , Ψ) is exact, and hence there is a surjection on (U op , Ψ)-coinvariants. This is a contradiction.
Let B ∈ End G (S 0 (X)) such that κ Ψ (B) = 0. Then W Ψ •B = 0, and Im(B) is a G-module, contained in Ker W Ψ and hence is zero. So B = 0 and κ Ψ is injective.
We have defined all the notation, mentioned in Theorem 1.2. It states: There exists a unique family of unitary operators Φ w ∈ Aut(L 2 (X)), w ∈ W that preserves the space S 0 (X) and satisfies We begin with the construction of the operators Φ s for simple reflections, based on the results for the groups of rank one. 6.0.3. The definition of Φ sa . The space L 2 (X) is the unitary completion L 2 -ind G U 1 of the space S c (X) = ind G U 1. For a simple root ray a of G consider a parabolic subgroup P a = M a · U a , with the derived group P ′ a = M ′ a U a , where M ′ a = G a is a semisimple group of rank 1. We denote by B a = T a · U a the Borel subgroup of G a and put X a = U a \G a .
Consider the isomorphism, implied by the transitivity of induction, The isometry Φ s on L 2 (X a ), defined in sections 3 and 4 gives rise to an isometry on L 2 (X) by functoriality of induction. We continue to denote this isometry by Φ sa .
Proof. The only non-trivial statement is the equivariance with respect to T , which it is enough to prove for f ∈ S 0 (X).
Consider an embedding with dense image j : T a × U a ֒→ X a , (t, u) → t −1 n sa u.
For f ∈ S 0 (X) the Fourier transform is given by Proposition 6.6. For any simple root ray a the operator Φ sa preserves S 0 (X).
Proof. We have defined for any root ray a the set of triples B a such that S 0 (X) ⊂ S Ba (X) ⊂ S c (X). In fact S Ba (X) = ind G P ′ a S 0 (X a ), which is preserved by Φ sa by Definition 6.1 and by Theorems 3.8 and 4.17. In particular, Φ sa (S 0 (X)) ⊂ S c (X).
Let a be a positive root ray.
We use the decomposition U op = U ′ −a U −a , where U ′ −a is the product of all root subgroups corresponding to the negative root rays except −a. The character Ψ restricted to U −a equals Ψ a 1 by the definition of Ψ. The inner integral equals ∫ U −a Φ sa (ι a (f )(u 2 ))([u 1 ])Ψ a 1 (u −1 1 )du 1 = W Ψ a 1 (Φ sa (ι a (f )(u 2 )))(1).
Now we are ready to prove Theorem 1.2.
Proof. The injectivity of κ Ψ implies the uniqueness of the family Φ w , w ∈ W .
To prove the Theorem it is enough to construct the operators Φ w . For any w ∈ W there is a presentation w = s a 1 · . . . · s an as a product of simple reflections. We define the operator Φ w ∈ Aut(L 2 (X)) by Φ w = Φ sa 1 • . . . • Φ san . The operator Φ w is unitary, preserves S 0 (X) and satisfies θ(g, t) • Φ w = Φ w • θ(g, t w ) for g ∈ G, t ∈ T .
Gonadotropins Activate Oncogenic Pathways to Enhance Proliferation in Normal Mouse Ovarian Surface Epithelium
Ovarian cancer is the most lethal gynecological malignancy affecting American women. The gonadotropins, follicle stimulating hormone (FSH) and luteinizing hormone (LH), have been implicated as growth factors in ovarian cancer. In the present study, pathways activated by FSH and LH in normal ovarian surface epithelium (OSE) grown in their microenvironment were investigated. Gonadotropins increased proliferation in both three-dimensional (3D) ovarian organ culture and in a two-dimensional (2D) normal mouse cell line. A mouse cancer pathway qPCR array using mRNA collected from 3D organ cultures identified Akt as a transcriptionally upregulated target following stimulation with FSH, LH and the combination of FSH and LH. Activation of additional pathways, such as Birc5, Cdk2, Cdk4, and Cdkn2a identified in the 3D organ cultures, were validated by western blot using the 2D cell line. Akt and epidermal growth factor receptor (EGFR) inhibitors blocked gonadotropin-induced cell proliferation in 3D organ and 2D cell culture. OSE isolated from 3D organ cultures stimulated with LH or hydrogen peroxide initiated growth in soft agar. Hydrogen peroxide stimulated colonies were further enhanced when supplemented with FSH. LH colony formation and FSH promotion were blocked by Akt and EGFR inhibitors. These data suggest that the gonadotropins stimulate some of the same proliferative pathways in normal OSE that are activated in ovarian cancers.
cells express FSH and LH receptors, but normal TECs do not proliferate in response to gonadotropins [8]. FSH and LH signal together in vivo: in post-menopausal women and during ovulation, these hormones are almost always present at the same time. However, very little has been published regarding the combined effects of gonadotropins on signaling in OSE or ovarian cancer.
The purpose of this study was to identify the pathways downstream of the gonadotropins in normal OSE and their contribution towards proliferation and oncogenesis. Many in vitro studies using SV40T immortalized OSE cells or in vivo studies using animal models have been reported to evaluate the role of FSH and LH, but these systems fail to separate ovulation and the effects of gonadotropins, do not use completely normal cells, or separate the cells from their microenvironment [7,10,24]. This study used two different model systems to evaluate the actions of gonadotropins on normal OSE function. A three-dimensional (3D) organ culture system was employed to study the role of gonadotropins in normal cells grown within their microenvironment in the absence of ovulation [25]. Simultaneously, the effects of gonadotropins on the OSE alone were studied using a normal mouse OSE cell line. FSH, LH and the combination of FSH and LH (FSH+LH) enhanced cellular proliferation by activating Akt signaling and upregulating pro-proliferative cyclin dependent kinases and anti-apoptotic Birc5.
Gonadotropins Enhance Proliferation of Normal OSE
Gonadotropins have been reported to have widely variable growth stimulatory properties on OSE in vitro [26][27][28], but in vivo they seem to enhance proliferation [18,19]. Therefore, to further characterize the contribution of the gonadotropins to OSE proliferation, a 3D organ culture system that propagates normal OSE in an alginate hydrogel was employed. Ovarian organoids were cultured for 8 days with FSH, LH and the FSH+LH at a dose of 1, 10 or 100 mIU/mL, representing a range of physiologically relevant concentrations. To determine the percentage of proliferating cells, BrdU was incorporated into the culture media 24 h prior to fixation. FSH at all three doses in 3D significantly increased proliferation of OSE as compared to basal media, while LH and the combination of FSH and LH only significantly increased proliferation at 10 and 100 mIU/mL (Figure 1a). In order to compare the effects of FSH, LH and FSH+LH on OSE proliferation in vitro, 2D mouse ovarian surface epithelial cells (MOSE) were analyzed for proliferation after stimulation with gonadotropins for 8 days [29]. FSH at 1, 10 and 100 mIU/mL increased proliferation above basal levels, LH increased proliferation at 1 and 10 mIU/mL, and FSH+LH increased proliferation at 10 and 100 mIU/mL (Figure 1b).
Figure 1.
Gonadotropins increase ovarian surface epithelium (OSE) proliferation in 3D organoids and mouse ovarian surface epithelial (MOSE) cell line. (a) Using BrdU as a marker of DNA synthesis, the gonadotropins increased proliferation of OSE in organoids after 8 days. (b) Proliferation of MOSE cells in response to gonadotropin stimulation was measured by sulforhodamine B (SRB) assay after 8 days. * different than basal p < 0.05.
Gonadotropins Regulate Oncogenic Signal Transduction Pathways in Normal Mouse OSE Cultured in 3D
To investigate the signal transduction pathways altered in normal mouse OSE cultured in 3D that may be involved in the proliferative response observed following culture with the gonadotropins, organoids were cultured for 3 days in basal media followed by a 24 h incubation with 10 mIU/mL FSH, LH or FSH+LH. The organoids were incubated with collagenase to collect an enriched OSE cell preparation and the mRNA was subjected to a Cancer Pathway Finder qPCR array. The array identified several signal transduction pathways in OSE that were altered in response to FSH, LH and FSH+LH as compared to OSE from organoids cultured in basal media. The gonadotropins increased gene expression of some pro-proliferative factors, including Akt. Although both FSH and LH significantly amplified the Akt pathway, LH and FSH+LH amplified both Akt1 and Akt2 isoforms, while FSH only amplified the Akt2 isoform. The pro-proliferative epidermal growth factor receptor (EGFR) was upregulated more by FSH and FSH+LH than LH alone. FSH and FSH+LH treated OSE amplified cyclin-dependent kinase 2 (Cdk2) as well as Cdk4 mRNA expression when compared to basal cultured OSE. The anti-apoptotic factor, Birc5, was amplified more when treated with FSH and FSH+LH than LH alone (Table 1). The gonadotropins alone and combined also upregulated expression of angiopoietin 1, which is involved in vascularization, and reduced the expression of pro-apoptotic caspase 8. Expression of mRNA for cyclin-dependent kinase inhibitor 2A (Cdkn2a), which slows progression through the cell cycle, was increased in response to FSH, LH and FSH+LH.
Gonadotropins Enhance Akt Expression in Normal OSE
In order to evaluate the effects of gonadotropins on Akt expression in the OSE, MOSE cells were employed [29]. MOSE cells were treated with FSH, LH or FSH+LH at 100 mIU/mL each for 5 min, 15 min, 1 h, and 24 h. A significant increase in the expression of phosphorylated Akt (p-Akt) was noted after 5 min of stimulation with FSH or LH (Figure 2). FSH and FSH+LH also enhanced p-Akt expression after 24 h. Gonadotropin treatment did not affect the expression of total Akt in any of the treatment groups. The phosphatase PTEN inactivates Akt, and loss of heterozygosity or mutation of this gene has been noted in ovarian cancer and endometrioid ovarian cancer, respectively [30,31]. Therefore, PTEN expression was analyzed to determine if its loss contributed to the activation of Akt. Levels of PTEN and p-PTEN were not altered upon stimulation with FSH, LH or FSH+LH (Figure 2). Because p-ERK expression has previously been noted downstream of FSH and LH in human immortalized OSE and in human ovarian cancer cells, the activation of p-ERK was investigated [26]. Expression of p-ERK in all three groups was elevated after 5 min, but did not persist after 24 h (Figure 2).
Figure 2.
Gonadotropins upregulate pAkt and pERK expression in MOSE cells. Activated pAkt and pERK expression was observed in MOSE after treatment with 100 mIU/mL of (a) FSH; (b) LH or (c) FSH+LH. Activated p-PTEN and total PTEN levels were similarly probed. Protein was normalized to actin. All gels were run in 3 independent experiments with a representative image shown.
The gonadotropins increased proliferation of the OSE in 3D organ culture (Figure 1). To determine if proliferation of the OSE induced by the gonadotropins could be blocked by inhibiting the Akt pathway, organoids or MOSE cells were cultured with gonadotropins in the presence of MK-2206, a chemical inhibitor of Akt phosphorylation. First, 5 µM MK-2206 was added to the MOSE culture media to validate that it decreased p-Akt expression (Supplemental Figure 1a). "a" is different than "b"; p < 0.05, "c" is different than groups treated with gonadotropins in absence of the inhibitor; p < 0.05.
The transcription pathway array indicated that EGFR mRNA was upregulated by gonadotropins, which has been observed in other studies using human ovarian cancer and normal OSE cells [26]. First, 100 nM AG1478 was added to the MOSE culture media for 5 min in the presence of 100 mIU/mL of FSH or LH to validate that AG1478 decreased p-ERK expression, which is downstream of EGFR (Supplemental Figure 1b). To determine if inhibition of EGFR signaling blocked OSE proliferation, organoids were cultured for 8 days with FSH, LH or FSH+LH alone or with AG1478. AG1478 decreased OSE proliferation stimulated by FSH, LH, or FSH+LH in 3D organ cultures (Figure 4a) and in MOSE cells treated with the gonadotropins (Figure 4b).
Figure 4.
Proliferation of the OSE caused by the gonadotropins is blocked after 8 days of culture in the presence of the EGFR inhibitor AG1478 (100 nM) in both 3D cultured organoids and 2D cultured MOSE cells. "a" is different than "b"; p < 0.05, "c" is different than groups treated with gonadotropins in absence of the inhibitor; p < 0.05.
Gonadotropins Increase Expression of Proliferative and Anti-Apoptotic Proteins in Normal OSE
To determine if the mRNA expression levels identified from the transcription array correlated with protein expression, western blot analyses were performed for Birc5, Cdk2, Cdk4 and Cdkn2a. MOSE cells were treated with serum-free basal media or the gonadotropins for 15 min, 1 h and 24 h. Birc5, also known as survivin, is an anti-apoptotic protein that could modulate the total number of cells as measured in the growth assays. Birc5 protein was upregulated by FSH, LH and FSH+LH at 24 h when compared with basal conditions (Figure 5). Incubation with the Akt inhibitor, MK-2206, was able to block Birc5 induction by FSH, LH and FSH+LH. The EGFR inhibitor, AG1478, did not antagonize gonadotropin induction of Birc5, indicating that FSH and LH regulate Birc5 expression through an Akt-dependent pathway. Despite the fact that DMSO enhanced basal levels of protein expression, the Akt and EGFR inhibitors were able to block DMSO and gonadotropin induced expression of proteins [32]. Cdk2 protein was increased by FSH, LH and FSH+LH at 24 h (Figure 5). Both the Akt inhibitor MK-2206 and the EGFR inhibitor AG1478 mitigated the induction of Cdk2 by FSH and FSH+LH. However, only MK-2206 reduced the expression of Cdk2 in LH treated MOSE cells. Cdk4 was induced by FSH, LH and FSH+LH (Figure 5). MK-2206 and AG1478 mitigated the gonadotropin induced Cdk4 expression, suggesting that Cdk4 is located downstream of Akt and EGFR. Protein expression for cyclin-dependent kinase inhibitor 2A (Cdkn2a) was elevated at shorter time points, but was repressed after 24 h when treated with FSH and LH, despite the increase in mRNA expression identified by the transcription array. In the presence of FSH and LH alone, MK-2206 and AG1478 failed to relieve this repression of Cdkn2a at 24 h. FSH+LH failed to repress Cdkn2a expression at 24 h.
Gonadotropins Enhance Soft Agar Colony Formation
To determine if the gonadotropins enhanced growth of colonies in soft agar, OSE from 3D organoids were cultured in 10 and 100 mIU/mL FSH, LH or FSH+LH for 3 days. Following stimulation with gonadotropins, the OSE from the organoids was collected using collagenase as previously described [33]. Only OSE from the organoids cultured in 100 mIU/mL LH demonstrated a significant increase in the number of colonies compared to those cultured in basal medium (Figure 6a). FSH and the combination of FSH+LH did not enhance colony formation. In order to investigate if the LH-induced increase in colony formation was dependent on Akt and EGFR signaling pathways, MK-2206 and AG1478 were added to 3D organoids cultured with LH. Both inhibitors significantly reduced the number of colonies compared to those cultured without the inhibitors. To determine if gonadotropins promoted colony formation, organoids were cultured in 1 mM H2O2 for 3 days, a condition that previously supported soft agar colony formation [33], followed by addition of OSE to soft agar overlaid with FSH, LH or FSH+LH medium. H2O2-induced OSE colonies in soft agar overlaid with 100 mIU/mL FSH showed a significant increase in number of colonies compared to those cultured in basal overlay media (Figure 6b). MK-2206 and AG1478 in the presence of FSH in the media overlay significantly reduced the number of colonies formed compared to those formed by the OSE cultured with FSH alone.
Figure 6.
LH-treated OSE increased anchorage-independent growth. (a) OSE from 3D organoids cultured in 100 mIU/mL LH demonstrated an increase in colony formation when compared to basal cultured organoids that could be blocked with MK-2206 and AG1478, whereas (b) OSE cultured in 1 mM H2O2 and overlaid with FSH showed an increase in colony formation compared to OSE overlaid with basal medium that could be blocked with MK-2206 and AG1478. * different than basal; p < 0.05, "#" is different than *; p < 0.05.
Discussion
Ovarian cancer is commonly diagnosed in post-menopausal women, when the levels of FSH and LH are elevated [10]. The intersection between high levels of gonadotropins and the average age of incidence of ovarian cancer is the basis of a hypothesis suggesting a relationship between gonadotropins and an increased risk and incidence of ovarian cancer. Gonadotropins induced OSE proliferation in an in vitro 3D mouse model of primary OSE cells and in a normal mouse OSE cell line. Elevated levels of cell cycle regulatory and anti-apoptotic proteins that regulate proliferation were observed in MOSE cells treated with gonadotropins. LH stimulated colony formation of 3D cultured OSE in soft agar. FSH+LH was not able to completely mimic either hormone alone and reduced colony formation as compared to LH. Proliferation and colony formation could be blocked with both Akt and EGFR inhibitors, indicating that these are important regulators of growth in normal OSE. FSH, LH, and the combination of FSH+LH induced Birc5, which was blocked when Akt was inhibited. FSH and FSH+LH induced Cdk2, which was reduced by Akt and EGFR inhibition. Overall these data indicate that the gonadotropins individually and in combination regulate proliferation, but the mechanisms of regulation by each hormone are different.
Proliferation of the OSE and its association with ovulation has been suggested to play a role in OSE transformation and cancer progression [18]. The results from this study are similar to in vivo findings of increased OSE proliferation in response to the gonadotropins in different animal models [17][18][19][34]. When comparing the 2D and 3D systems, OSE grown in 2D and stimulated with gonadotropins began to proliferate much faster (day 2; data not shown) than in 3D (day 8). However, both 2D and 3D systems displayed enhanced proliferation after 8 days. The discrepancy between model systems implies that the architecture of the ovarian microenvironment likely impacts the proliferation of normal OSE. This report did not evaluate estrogen receptor (ER) signaling in the ovarian surface, which potentially could occur if the gonadotropins are stimulating the follicles to secrete estrogen in the organoid [6,35]. Further, the 2D proliferation assay and the CK8/BrdU labeled immunohistochemistry on 3D organ culture did not account for the ERα-induced OSE signaling that could have occurred in addition to proliferation.
Investigating pathways involved in carcinogenesis allowed for the detection of a series of specific genes regulated in the OSE by FSH, LH and FSH+LH. Akt is a serine/threonine kinase that is activated in roughly 68% of ovarian cancers [30]. Akt1 is activated in ovarian cancer, Akt2 is overexpressed in primary tumors as well as human ovarian carcinoma cell lines [36][37][38][39], and Akt3 expression is elevated in 20% of ovarian cancers [40]. The qPCR array identified transcriptional upregulation of Akt1 and Akt2 by LH and FSH+LH, while FSH increased expression of only Akt2. The transcription array data indicated that total Akt expression was enhanced, but it did not correlate with the 2D MOSE cell line data, likely because the mRNA was from an enriched OSE preparation that contained some underlying stromal cells. In addition to stromal cells, the OSE preparation is likely to be contaminated with a small percentage of theca and granulosa cells that express gonadotropin and EGF receptors [14,41]. FSH and LH signaling via these receptors could account for the discrepancy in the fold change observed in Akt1 and Cdk4 in response to FSH, LH and FSH+LH. Immunohistochemistry indicated that the stroma had abundant expression of Akt after stimulation with gonadotropins (data not shown). However, the 2D cell line data does support that Akt is phosphorylated by the gonadotropins. The Cancer Genome Atlas Network did not identify mutation of Akt as the primary mechanism for its activation, suggesting that perhaps stimulation by gonadotropins is one possible mechanism for the elevated p-Akt levels observed in ovarian cancer.
Epidermal growth factor receptor (EGFR) is a transmembrane glycoprotein that contains an external binding domain and an intracellular tyrosine kinase domain [42,43]. EGFR has been implicated in growth and progression of ovarian cancer, and may represent a prognostic indicator or potential therapeutic target [42]. EGFR overexpression has been shown to correlate with poor survival outcomes in women that have advanced ovarian cancer and have undergone cytoreductive surgery and combined therapy [43][44][45]. Inhibition of EGFR signaling by AG1478 in organ culture and MOSE cells significantly decreased OSE proliferation likely due to reduced expression of Cdk2 and Cdk4.
A soft agar assay demonstrated that post-menopausal concentrations of LH induced growth of OSE in soft agar. Initiation of anchorage-independent growth was recently demonstrated in non-tumorigenic ovarian epithelial cells that overexpressed β-hCG, which is a ligand that also activates LHR [46]. Overexpression of the hormone specific β-subunit of hCG induced cell cycle progression through elevated cyclin D1, Cdk2 and Cdk4 expression, similar to the results reported by this study [46]. Interestingly, when LH was combined with FSH it was no longer capable of inducing transformation, suggesting that FSH may somehow block LH induced transformation. Furthermore, when colony formation was initiated by exposure to oxidative stress, LH did not further enhance colony growth. However, FSH increased the formation of colonies that were derived from oxidative stress. We have recently demonstrated that oxidative stress transforms OSE by activating Akt, damaging DNA, and stimulating secretion of an ovarian stromal factor [33]. An interesting future direction will be to determine if FSH functions similarly to the stromal factor to enhance colony formation downstream of DNA damage and Akt activation.
While the gonadotropins are typically studied as individual hormones, the current study attempted to monitor OSE proliferation when exposed to both hormones simultaneously. In postmenopausal women as well as during ovulation, FSH and LH are almost always circulating at the same time. Since the circulating levels of FSH and LH fluctuate periodically during the month, it is challenging to recapitulate the exact ratio of FSH and LH experimentally. Western blot analysis revealed that FSH+LH regulated the expression of proliferative proteins differently than the individual treatments of FSH or LH. FSH, LH, and FSH+LH were all able to stimulate the anti-apoptotic protein Birc5, which possibly accounts for the overall increase in survival at day 8. Birc5 was blocked by MK-2206, but not AG1478, indicating that Birc5 is more heavily regulated by Akt compared to EGFR. Intriguingly, average proliferation of OSE cultured in both 2D and 3D was lower when the gonadotropins were given simultaneously as compared to FSH alone. Furthermore, the combination of FSH and LH led to growth of fewer colonies in soft agar than LH alone. This may be reflective of the observation that the combination of gonadotropins reduced p-Akt expression after 5 minutes as compared to FSH alone. Interestingly, FSH and LH alone repressed the cell cycle inhibitor Cdkn2a expression at 24 h, which was not seen with the combination of FSH+LH. Taken together, these findings support the hypothesis that gonadotropins affect specific oncogenic signaling pathways that enhance proliferation in normal mouse OSE.
Animals
Day 16 female CD1 mice were used for organ culture experiments. Animals were not primed with eCG. All mice were acquired through in-house breeding, and all breeders were purchased from Harlan (Indianapolis, IN, USA). Animals were treated in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and the established Institutional Animal Care and Use protocol at the University of Illinois at Chicago. Animals were housed in a temperature and light-controlled environment (12 h light:12 h darkness) and were provided food and water ad libitum.
Organ Culture
Ovaries from pre-ovulatory day 16 mice were dissected as previously described [25,47]. Briefly, ovaries were dissected in dissection media composed of Leibovitz media with L-glutamine, 100 U penicillin (Gibco), and 100 μg/mL streptomycin. The bursa was removed with forceps and ovaries were cut with a scalpel into two or four pieces, termed organoids. Each organoid was placed into a 0.5% w/v alginate/PBS droplet formed on mesh fiber. The alginate-encapsulated organoid was placed into 50 mM CaCl2 for 2 min to cross-link the alginate, forming a gel around the organoid. The organoid was then placed in growth media that consisted of alpha-MEM (Invitrogen), 100 U penicillin (Gibco), and 100 μg/mL streptomycin. To study the effects of the gonadotropins, 1, 10 or 100 mIU/mL of human FSH (Sigma-Aldrich, St. Louis, MO, USA), human LH (Sigma-Aldrich) or FSH+LH, each at the same concentration, was added to the basal culture media. Bromodeoxyuridine (BrdU; 10 μM) was added into culture media 24 h prior to fixation to label proliferating cells.
Immunohistochemistry
Organoids were removed from culture media and fixed as previously described [25] using reagents from Vector Labs Inc. (Burlingame, CA, USA) unless otherwise noted. Immunohistochemistry was performed as previously described [8,18,25,47,48]. Section thickness for the BrdU/CK8-labeled tissue was 5 microns. Sections were mounted on Superfrost® Plus Microscope slides (Fisher Scientific, Hampton, NH, USA). Briefly, heat-induced antigen retrieval was performed using 10 mM sodium citrate. Tissues being stained for BrdU were treated with 4 M hydrochloric acid for 10 min followed by a 10 min incubation with 0.1 M sodium tetraborate. Tissues were blocked with 3% H2O2, avidin and biotin for 15 min each. Control slides received serum block instead of primary antibody. The primary antibodies against BrdU (rat, 1:200; Abcam, Cambridge, MA, USA) and cytokeratin 8 (CK8) (TROMA-1 antibody, rat, 1:100; Developmental Studies Hybridoma Bank, Iowa City, IA, USA) were incubated overnight at 4 °C. Slides were washed and incubated with biotinylated secondary antibody in 3% BSA-TBS. Following three washes in TBS-Tween, slides were incubated for 30 min in avidin/biotin complex (ABC). The antigen-antibody-HRP complex was visualized using diaminobenzidine reagent for 3-5 min and counterstained with hematoxylin. All conditions had a minimum of 5 organoids and the experiment was performed in triplicate.
RNA Isolation and RT-PCR
After culture of organoids in basal media for 3 days followed by 24 h treatment with FSH, LH, or FSH+LH, OSE was collected by collagenase digestion as described previously [47]. RNA was isolated using a Qiagen QIAshredder column and Qiagen RNeasy (Valencia, CA, USA) kit according to the manufacturer's protocol. cDNA was synthesized using the SABiosciences RT² First Strand Kit and was added to an RT² Profiler PCR Mouse Cancer PathwayFinder Array (SABiosciences, Frederick, MD, USA). Cycling conditions for reactions were 95 °C for 10 min; 40 cycles of 95 °C for 15 s, and 60 °C for 1 min. Gene expression was calculated using the ∆∆Ct method and expressed as fold change compared to basal conditions from duplicate arrays.
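As a sketch of how the ∆∆Ct calculation above turns raw Ct values into fold changes relative to basal conditions (the function name and all Ct values below are illustrative, not data from this study):

```python
# Minimal sketch of the 2^-ΔΔCt fold-change calculation (illustrative values only).

def fold_change(ct_gene_treated, ct_ref_treated, ct_gene_basal, ct_ref_basal):
    """Relative expression by the 2^-ΔΔCt method.

    ΔCt  = Ct(gene of interest) - Ct(reference gene), within each condition
    ΔΔCt = ΔCt(treated) - ΔCt(basal)
    fold change = 2 ** (-ΔΔCt)
    """
    d_ct_treated = ct_gene_treated - ct_ref_treated
    d_ct_basal = ct_gene_basal - ct_ref_basal
    dd_ct = d_ct_treated - d_ct_basal
    return 2 ** (-dd_ct)

# Example: a gene's Ct drops from 25.0 to 23.0 after treatment while the
# reference gene stays at 18.0 -> 4-fold upregulation relative to basal.
print(fold_change(23.0, 18.0, 25.0, 18.0))  # 4.0
```

A lower Ct means earlier amplification, i.e. more template, which is why the exponent carries a negative sign.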
Cell Viability Assay
MOSE cells were seeded into 96-well plates at 1 × 10⁴ cells/100 μL of media with serum. The next day cells were treated with FSH, LH, or a combination of FSH and LH at 1, 10 and 100 mIU/mL. The cells were allowed to grow for 8 days. Proliferation was measured using the sulforhodamine B (SRB) colorimetric assay as described previously [49]. Media was aspirated and cellular proteins were fixed to the plate with 20% trichloroacetic acid for 1 h. Cells were washed with water and stained for 30 min with sulforhodamine B. Excess dye was washed with 1% (v/v) acetic acid. The protein-bound dye was re-suspended in 10 mM Tris buffer. Spectrophotometric analysis was completed using a Biotek Synergy 2 multi-mode microplate reader (Biotek, Winooski, VT, USA). All conditions were tested in three replicates in triplicate experiments.
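SRB absorbance is commonly reported as percent of the untreated (basal) mean; a minimal sketch of that normalization, with absorbance values invented for illustration:

```python
# Minimal sketch: normalize SRB absorbance readings to the basal-group mean.
# All absorbance values below are invented for illustration.
from statistics import mean

basal = [0.40, 0.42, 0.38]       # untreated wells
fsh_100 = [0.60, 0.63, 0.57]     # hypothetical 100 mIU/mL FSH wells

basal_mean = mean(basal)
percent_of_basal = [100 * a / basal_mean for a in fsh_100]
print(round(mean(percent_of_basal)))  # 150
```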
Soft Agar Transformation Assay
OSE were collected from organoids cultured for 3 days in media containing FSH, LH, or FSH+LH, followed by analysis of anchorage-independent growth as measured by growth in soft agar. The base layer of the agar consisted of DMEM (Gibco) and 0.5% agarose (Sigma). The top layer consisted of DMEM, 0.35% agarose and 15 × 10³ cells/well in a 24-well plate. The agar was overlaid with DMEM, 4% FBS, and penicillin-streptomycin. After 14 days, colonies were imaged on a Nikon Eclipse TS100 using a DS-Ri1 digital camera and counted using NIS Elements software. All conditions were tested in three replicates in triplicate experiments.
Imaging and Counts
Immunohistochemistry images were captured with a Nikon E600 microscope using a DS-Ri1 digital camera and NIS Elements software (Nikon Instruments, Melville, NY, USA). Using ImageJ software (National Institutes of Health, Bethesda, MD, USA), the number of CK8-positive OSE cells that were also positive for BrdU were counted and expressed as percentage of total CK8-positive OSE cells. Soft agar colony images were taken on a Nikon Eclipse TS100 using a DS-Ri1 digital camera. NIS elements software was used to determine the number of colonies.
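The BrdU/CK8 counting described above reduces to a simple proportion; a sketch with invented counts:

```python
# Sketch: percent proliferating OSE = (BrdU+ and CK8+ cells) / (all CK8+ cells).
# The counts below are invented for illustration.
ck8_positive = 240        # total OSE cells (CK8-positive)
brdu_ck8_positive = 36    # OSE cells that also incorporated BrdU

percent_proliferating = 100 * brdu_ck8_positive / ck8_positive
print(percent_proliferating)  # 15.0
```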
Statistical Analysis
Values were expressed as the mean ± S.E.M. Dunnett's multiple comparison test was used to assess differences between control groups and experimental groups. A Student's t-test was used for comparison between two groups. p < 0.05 was considered statistically significant.
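A rough sketch of the summary statistics: mean ± S.E.M. per group and a two-sample (Welch) t statistic, using only the Python standard library. The data values are invented for illustration, and Dunnett's test itself needs a multiple-comparison routine (e.g. from a statistics package) that is not shown here.

```python
# Sketch: mean ± SEM and Welch's t statistic with stdlib only (invented data).
from math import sqrt
from statistics import mean, stdev

def sem(xs):
    """Standard error of the mean."""
    return stdev(xs) / sqrt(len(xs))

def t_statistic(a, b):
    """Welch's unequal-variance two-sample t statistic."""
    return (mean(a) - mean(b)) / sqrt(sem(a) ** 2 + sem(b) ** 2)

basal = [10.0, 12.0, 11.0]      # hypothetical control measurements
treated = [18.0, 21.0, 19.5]    # hypothetical treated measurements

print(round(mean(treated), 1), round(sem(treated), 2))  # 19.5 0.87
print(round(t_statistic(treated, basal), 2))            # 8.17
```

The t statistic would then be compared against the t distribution (or handed to a library routine) to obtain the p-value.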
Conclusions
Our study supports that gonadotropins induce proliferation in normal OSE cells of ovarian organ culture as well as MOSE cells. Gonadotropins regulate proliferation, cell cycle progression, apoptosis, and growth in soft agar. An improved understanding of molecular signaling mechanisms in normal OSE may help to identify novel targeted therapeutic approaches to slowing the growth of ovarian cancers derived from this cell type.
Figure S1.
Gonadotropin-induced (a) p-Akt protein expression in MOSE cells was blocked in the presence of the Akt inhibitor MK-2206 at 5 min, while (b) p-Erk protein expression was suppressed in the presence of the EGFR inhibitor AG1478 at 5 min. | 2014-10-01T00:00:00.000Z | 2013-02-28T00:00:00.000 | {
"year": 2013,
"sha1": "9ee334d1629cb64022abbf0f3f6bba76d912c1c4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/14/3/4762/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9ee334d1629cb64022abbf0f3f6bba76d912c1c4",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
54001581 | pes2o/s2orc | v3-fos-license | Tilting, cotilting, and spectra of commutative noetherian rings
We classify all tilting and cotilting classes over commutative noetherian rings in terms of descending sequences of specialization closed subsets of the Zariski spectrum. Consequently, all resolving subcategories of finitely generated modules of bounded projective dimension are classified. We also relate our results to Hochster's conjecture on the existence of finitely generated maximal Cohen-Macaulay modules.
Introduction
It is well known that the Zariski spectrum of a commutative noetherian ring R can be used to classify various structures over R. For example, it was shown by Gabriel in 1962 that the hereditary torsion pairs in the module category Mod-R are parametrized by the subsets of Spec(R) that are closed under specialization. An analogous result holds true at the level of the derived category: based on work of Hopkins, a one-to-one correspondence between the specialization closed subsets of Spec(R) and the smashing subcategories of the unbounded derived category D(R) was established by Neeman in 1992.
In the present paper, we restrict to specialization closed subsets of Spec(R) that do not contain associated primes of R, and show that they parametrize all 1-cotilting classes of R-modules. We then use this approach to give for each n ≥ 1 a complete classification of n-tilting and n-cotilting classes in Mod-R in terms of finite sequences of subsets of the Zariski spectrum of R (see Theorem 4.2 below).
While classification results of this kind are usually proved by considering the tilting setting first and then passing to the cotilting one by a sort of duality, the approach applied here is the very opposite. The key point rests in an analysis of the associated primes of cotilting classes and their cosyzygy classes. The classification of the tilting classes comes a posteriori, by employing the Auslander-Bridger transpose. For n = 1, we prove an additional result: In Theorem 2.10, we show that all 1-cotilting modules over one-sided noetherian rings are of cofinite type, that is, equivalent to duals of 1-tilting modules.
We also prove several results for tilting and cotilting classes in the setting of commutative noetherian rings which fail for general rings: (i) For each n ≥ 1, the elementary duality gives a bijection between n-tilting and n-cotilting classes of modules. (For general rings, there are more 1-cotilting classes than duals of 1-tilting classes: Bazzoni constructed such examples for certain commutative non-noetherian rings in [6].) (ii) All n-cotilting classes are closed under taking injective envelopes by Proposition 3.11(ii). In particular, 1-cotilting classes are precisely the torsion-free classes of faithful hereditary torsion pairs (Theorem 2.7). (Note that 1-cotilting classes over general rings need not be closed under injective envelopes; see [15, Theorem 2.5].) (iii) Up to adding an injective direct summand, a minimal cosyzygy of an n-cotilting module is (n − 1)-cotilting (Corollary 3.17). (Again, this typically fails for non-commutative rings, even for finite dimensional algebras over a field, since the cosyzygy often has self-extensions.) Although the tilting and cotilting modules over commutative rings are inherently infinitely generated in all non-trivial cases, our results have consequences for finitely generated modules as well.
First, as a side result we classify all resolving subcategories of finitely generated modules of bounded projective dimension in Corollary 4.4 1 and prove that they hardly ever provide for approximations.
Secondly, we relate our results to a conjecture due to Hochster claiming the existence of finitely generated maximal Cohen-Macaulay R/p-modules for regular local rings R and give information about the structure of these hypothetical modules in Theorem 5.16. The sequence 0 → M → E^0(M) → E^1(M) → E^2(M) → ⋯ will stand for the minimal injective coresolution of M, and the image of E^{i−1}(M) → E^i(M) for i ≥ 1 will be denoted by ✵^i(M). That is, ✵(M) = ✵^1(M). We refrain from the usual notation Ω^{−i}(M) for the i-th cosyzygy, for we require the following convention: ✵^0(M) = M and ✵^i(M) = 0 for all i < 0. Thus, we need to distinguish between syzygies and negative cosyzygies.
Given a class S of right modules, we denote:
S^⊥ = {M ∈ Mod-R | Ext^i_R(S, M) = 0 for all S ∈ S and i ≥ 1},
^⊥S = {M ∈ Mod-R | Ext^i_R(M, S) = 0 for all S ∈ S and i ≥ 1}.
If S = {S} is a singleton, we shorten the notation to S^⊥ and ^⊥S. A similar notation is used for the classes of modules orthogonal with respect to the Tor functor: S^⊺ = {M ∈ R-Mod | Tor^R_i(S, M) = 0 for all S ∈ S and i ≥ 1}. [Footnote 1, added in proof: An alternative description of resolving subcategories of finitely generated modules of bounded projective dimension in terms of grade consistent functions on Spec(R) has recently been obtained by Dao and Takahashi [14].] Given a class S ⊆ Mod-R and a module M, a well-ordered chain of submodules 0 = M_0 ⊆ M_1 ⊆ M_2 ⊆ ⋯ ⊆ M_α ⊆ M_{α+1} ⊆ ⋯ ⊆ M_σ = M
is called an S-filtration of M if M_β = ⋃_{α<β} M_α for every limit ordinal β ≤ σ and, up to isomorphism, M_{α+1}/M_α ∈ S for each α < σ. A module is called S-filtered if it has at least one S-filtration.
Further, given an abelian category A (in our case typically A = Mod-R, or A = mod-R if R is right noetherian), a pair of full subcategories (T, F) is called a torsion pair if (i) Hom_A(T, F) = 0 for each T ∈ T and F ∈ F; (ii) for each M ∈ A there is an exact sequence 0 → T → M → F → 0 with T ∈ T and F ∈ F. In such a case, T is called a torsion class and F a torsion-free class. If A = Mod-R, it is well known that F is the torsion-free class of a torsion pair if and only if F is closed under submodules, extensions and direct products. Similarly, torsion classes are precisely those closed under factor modules, extensions and direct sums. For A = mod-R and R right noetherian, any torsion-free class F is closed under submodules and extensions (so also under finite products), but some caution is due here, as these closure properties do not characterize torsion-free classes: consider for instance R = Z and the class F of all finite abelian groups.
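A minimal worked instance over R = Z, added here to make the definitions and the cautionary example concrete:

```latex
% Torsion pair (\mathcal{T},\mathcal{F}) in Mod-\mathbb{Z}:
% \mathcal{T} = torsion abelian groups, \mathcal{F} = torsion-free abelian groups.
% For M = \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}, the sequence required in (ii) is
\[
0 \longrightarrow \mathbb{Z}/2\mathbb{Z} \longrightarrow
\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z} \longrightarrow \mathbb{Z} \longrightarrow 0,
\]
% with \mathbb{Z}/2\mathbb{Z} \in \mathcal{T} and \mathbb{Z} \in \mathcal{F}.
% Why the finite abelian groups are not a torsion-free class in mod-\mathbb{Z}:
% a nonzero finitely generated N with Hom(N, F) = 0 for all finite F would have to be
% divisible, hence N = 0; so the torsion class would be 0, forcing the torsion-free
% class to be all of mod-\mathbb{Z} -- but \mathbb{Z} is not finite.
```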
Let us conclude this discussion with two more properties which torsion pairs in Mod-R can possess: a torsion pair (T, F) is called hereditary if the torsion class T is closed under taking submodules, and faithful if R ∈ F. For p ∈ Spec(R), we denote by R_p the localization of R at p, and by k(p) = R_p/pR_p the residue field.
If M ∈ Mod-R, p ∈ Spec(R) and i ≥ 0, the Bass invariant µ_i(p, M) is defined as the number of direct summands isomorphic to E(R/p) in a decomposition of E^i(M) into indecomposable direct summands (see e.g. [17, §9.2] or [9, §3.2]). That is, E^i(M) ≅ ⊕_{p ∈ Spec(R)} E(R/p)^{(µ_i(p,M))}. The relation of associated primes to these invariants is captured by the following lemma due to Bass:

Lemma 1.3. Let M be an R-module, p ∈ Spec(R) and i ≥ 0. Then µ_i(p, M) = dim_{k(p)} Ext^i_{R_p}(k(p), M_p), and we have the following equivalences: µ_i(p, M) ≠ 0, if and only if p ∈ Ass ✵^i(M), if and only if E(R/p) is a direct summand of E^i(M).

Proof. For the equality above we refer for instance to [17, §9.2].

As a consequence, we can extend classic relations between associated prime ideals of the terms of a short exact sequence to their cosyzygies:

Lemma 1.4. Let 0 → A → B → C → 0 be a short exact sequence of R-modules and i ∈ Z. Then the following hold: (i) Ass ✵^i(A) ⊆ Ass ✵^i(B) ∪ Ass ✵^{i−1}(C); (ii) Ass ✵^i(B) ⊆ Ass ✵^i(A) ∪ Ass ✵^i(C); (iii) Ass ✵^i(C) ⊆ Ass ✵^i(B) ∪ Ass ✵^{i+1}(A).

Proof. Given any p ∈ Spec(R), we consider the long exact sequence of Hom and Ext groups which we obtain by applying the functor Hom_{R_p}(k(p), −) to the localized short exact sequence 0 → A_p → B_p → C_p → 0. The lemma is then an easy consequence of Lemma 1.3.
In particular, we obtain information on associated primes of syzygy modules.

Corollary 1.5. Let M be an R-module, ℓ ≥ 1 and K be an ℓ-th syzygy of M. Then for any i ∈ Z we have: Ass ✵^i(K) ⊆ Ass ✵^{i−ℓ}(M) ∪ Ass ✵^i(R) ∪ Ass ✵^{i−1}(R) ∪ ⋯ ∪ Ass ✵^{i−ℓ+1}(R).

Remark 1.6. We stress that according to our convention, ✵^{i−ℓ}(M) = 0 for i − ℓ < 0. Thus, the right-hand term does not depend on M for i < ℓ.
Proof. This is easily obtained from Lemma 1.4(i) by induction on ℓ. We also use that Ass ✵ j (P ) ⊆ Ass ✵ j (R) for any j ∈ Z and any projective module P .
We finish by recalling a well-known property of the residue field considered as an R-module (see e.g. [23, Theorem 18.4]), and its consequences.

Lemma 1.7. Let p ∈ Spec(R). Then: (1) E_R(R/p) coincides with E_{R_p}(k(p)), and it is a {k(p)}-filtered R_p-module; (2) Ass ✵^i(k(p)) ⊆ {p} for each i ≥ 0.

1.3. Tilting and cotilting modules and classes. Next, we recall the notion of an (infinitely generated) tilting module from [13, 1]:

Definition 1.8. Let R be a ring. A module T is tilting provided that (T1) T has finite projective dimension. (T2) Ext^i_R(T, T^{(κ)}) = 0 for all i ≥ 1 and all cardinals κ. (T3) There is an exact sequence 0 → R → T_0 → T_1 → ⋯ → T_r → 0 where T_0, T_1, . . . , T_r ∈ Add T.
The class T^⊥ = {M ∈ Mod-R | Ext^i_R(T, M) = 0 for each i ≥ 1} is called the tilting class induced by T. Given an integer n ≥ 0, a tilting module as well as its associated class are called n-tilting provided the projective dimension of T is at most n. We recall that in such a case we can choose the sequence in (T3) so that r ≤ n (see [5, Proposition 3.5]). If T and T′ are tilting modules, then T is said to be equivalent to T′ provided that T^⊥ = (T′)^⊥, or equivalently by [18, Lemma 5.1.12], T′ ∈ Add T.
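For orientation, the classical 1-tilting example over R = Z (added here as an illustration; it is the standard example of an infinitely generated tilting module):

```latex
% T = \mathbb{Q} \oplus \mathbb{Q}/\mathbb{Z} is 1-tilting over \mathbb{Z}:
% (T1): \operatorname{proj.dim}_{\mathbb{Z}} T = 1.
% (T2): T^{(\kappa)} is divisible, hence injective, so \operatorname{Ext}^1_{\mathbb{Z}}(T, T^{(\kappa)}) = 0.
% (T3): the exact sequence
\[
0 \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Q} \longrightarrow \mathbb{Q}/\mathbb{Z} \longrightarrow 0
\]
% has both nonzero terms in \operatorname{Add} T. The induced tilting class is
\[
T^{\perp} = \operatorname{Gen}(T) = \{\, M \in \operatorname{Mod}\text{-}\mathbb{Z} \mid M \text{ divisible} \,\}.
\]
% Note that T is not finitely generated, in accordance with Lemma 1.9 below.
```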
The structure of tilting modules over commutative noetherian rings is rather different from the classic case of artin algebras. The key point is the absence of non-trivial finitely generated tilting modules: Lemma 1.9. [12,25] Let R be a commutative noetherian ring and T be a finitely generated module. Then T is tilting, if and only if T is projective.
Even though the tilting module T is infinitely generated, the tilting class T^⊥ is always determined by a set S of finitely generated modules of bounded projective dimension. This was proved in [8], based on the corresponding result [7] for 1-tilting modules. We will call a subclass S of mod-R resolving in case S is closed under extensions, direct summands, kernels of epimorphisms, and R ∈ S. If S consists of modules of projective dimension ≤ 1, the requirement of S being closed under kernels of epimorphisms is redundant by [18, Lemma 5.2.22]. Using results from [2, 7, 8], we learn that resolving subclasses of mod-R parametrize tilting classes (and hence also the tilting modules up to equivalence):

Lemma 1.10. The assignment S ↦ S^⊥ gives a bijection between the resolving subclasses S of mod-R consisting of modules of bounded projective dimension and the tilting classes in Mod-R; the inverse assignment is given by T ↦ ⊥T ∩ mod-R.

The dual notions of a cotilting module and a cotilting class are defined as follows:

Definition 1.11. Let R be a ring. A module C is cotilting provided that (C1) C has finite injective dimension. (C2) Ext^i_R(C^κ, C) = 0 for all i ≥ 1 and all cardinals κ. (C3) There is an exact sequence 0 → C_r → ⋯ → C_1 → C_0 → W → 0 where W is an injective cogenerator of Mod-R and C_0, C_1, . . . , C_r ∈ Prod C. The class ⊥C = {M ∈ Mod-R | Ext^i_R(M, C) = 0 for all i ≥ 1} is the cotilting class induced by C. Again, if the injective dimension of C is at most n, we call C and ⊥C an n-cotilting module and class, respectively. If C and C′ are cotilting modules, then C is said to be equivalent to C′ provided that ⊥C = ⊥C′, or equivalently, C′ ∈ Prod C (see [18]).

If T is an n-tilting right R-module, then the character module T^+ = Hom_Z(T, Q/Z) is an n-cotilting left R-module; see [2, Proposition 2.3]. By Lemma 1.10, the induced tilting class T = T^⊥ equals S^⊥ where S = ⊥T ∩ mod-R is a resolving subclass of mod-R. The cotilting class C induced by T^+ in R-Mod is then easily seen to be S^⊺. We will call C the cotilting class associated to the tilting class T.
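Continuing the running example over R = Z (an illustration added here, under the assumption that the tilting class in question is the class of divisible groups):

```latex
% The cotilting side of the divisibility example: with
% S = \{\mathbb{Z}/n\mathbb{Z} \mid n \ge 2\} \subseteq \operatorname{mod}\text{-}\mathbb{Z} one has
\[
S^{\perp} = \{\text{divisible groups}\}, \qquad
S^{\intercal} = \{\, M \mid \operatorname{Tor}^{\mathbb{Z}}_1(\mathbb{Z}/n\mathbb{Z}, M) = 0
   \text{ for all } n \,\} = \{\text{torsion-free groups}\},
\]
% since \operatorname{Tor}^{\mathbb{Z}}_1(\mathbb{Z}/n\mathbb{Z}, M) is the n-torsion of M.
% Thus the cotilting class associated to the tilting class of divisible groups is the
% class of torsion-free abelian groups, induced by the character module
% (\mathbb{Q} \oplus \mathbb{Q}/\mathbb{Z})^{+}.
```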
It follows that tilting modules T and T′ are equivalent, if and only if the character modules T^+ and (T′)^+ are equivalent as cotilting left R-modules; see [18, Theorem 8.1.13]. Therefore, the assignment T ↦ T^+ induces an injective map from equivalence classes of tilting to equivalence classes of cotilting modules. For R noetherian, this map, as we will show, is a bijection, but for non-noetherian commutative rings the surjectivity may fail; see [6]. Let us summarize the properties we need.

Lemma 1.12. Let R be a right noetherian ring and n ≥ 0. Then the following hold: (i) If S ⊆ mod-R is a class of finitely generated modules of projective dimension bounded by n, then S^⊥ is an n-tilting class in Mod-R and S^⊺ is the associated n-cotilting class in R-Mod.

Proof. For (i), S^⊥ is an n-tilting class by [8].
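To make Lemma 1.12(i) concrete, here is a one-prime illustration over R = Z, added as an example:

```latex
% S = \{\mathbb{Z}/p\mathbb{Z}\} \subseteq \operatorname{mod}\text{-}\mathbb{Z},
% with \operatorname{proj.dim}_{\mathbb{Z}} \mathbb{Z}/p\mathbb{Z} = 1. Then
\[
S^{\perp} = \{\, M \mid \operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/p\mathbb{Z}, M) = 0 \,\}
          = \{\, M \mid M = pM \,\}
\]
% is the 1-tilting class of p-divisible groups, while
\[
S^{\intercal} = \{\, M \mid \operatorname{Tor}^{\mathbb{Z}}_1(\mathbb{Z}/p\mathbb{Z}, M) = 0 \,\}
            = \{\, M \mid pm = 0 \Rightarrow m = 0 \,\}
\]
% (the groups without p-torsion) is the associated 1-cotilting class.
```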
The one-dimensional case
From this point on, unless explicitly specified otherwise, we will assume that our base ring R is commutative and noetherian.
We will treat separately the case of 1-tilting and 1-cotilting modules. We have chosen such presentation for two reasons. First, the arguments for this special situation are simpler and more transparent. Second, the one-dimensional case is tightly connected to the classical notion of Gabriel topology and the abelian quotients of the category Mod-R. We refer to [31] for details on the latter concepts.
To start with, we recall [18, Lemma 6.1.2]: T ∈ Mod-R is 1-tilting if and only if T ⊥ = Gen (T ) where the latter denotes the class of all modules generated by T . In particular, T ⊥ is a torsion class in Mod-R. Dually by [18,Lemma 8.2.2], a module C is 1-cotilting if and only if ⊥ C = Cog (C) where the latter denotes the class of all modules cogenerated by C. Thus, ⊥ C is a torsion free class.
Our aim is to show that a torsion pair in Mod-R is of the form (T , Cog (C)) for a 1-cotilting module C if and only if it is faithful and hereditary. Moreover, we are going to classify such torsion pairs in terms of certain subsets of Spec(R). To this end, we introduce the following terminology: Definition 2.1. For any subset X ⊆ Spec(R) we say that X is closed under generalization (under specialization, resp.) if for any p ∈ X and any q ∈ Spec(R) we have q ∈ X whenever q ⊆ p (q ⊇ p, resp.). In other words, X is a lower (upper, resp.) set in the poset (Spec(R), ⊆). Further, we recall that Gabriel established a one-to-one correspondence between the subsets of Spec(R) closed under specialization and certain linear topologies on R. On the other hand, there is a bijective correspondence between these Gabriel topologies and hereditary torsion pairs in Mod-R. Let us look closer at this relationship.
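To fix intuition for Definition 2.1, a small illustration over Z, added here:

```latex
% Specialization vs. generalization in
% \operatorname{Spec}(\mathbb{Z}) = \{(0)\} \cup \{(p) \mid p \text{ prime}\},
% where the only containments are (0) \subseteq (p):
\[
\text{specialization closed: any set of maximal ideals } (p),
\text{ and } \operatorname{Spec}(\mathbb{Z}) \text{ itself,}
\]
% since a specialization closed set containing (0) must contain every (p);
\[
\text{generalization closed: } \emptyset \text{ and the sets } \{(0)\} \cup X
\text{ with } X \text{ a set of maximal ideals,}
\]
% since a generalization closed set containing some (p) must contain (0).
```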
Then G_Y ∩ Spec(R) = Y, and the set Y also determines a hereditary torsion pair (T(Y), F(Y)), where: T(Y) = {M ∈ Mod-R | Supp M ⊆ Y} and F(Y) = {M ∈ Mod-R | Ass M ∩ Y = ∅}. We further have the following:
Proposition 2.3. There are bijective correspondences between the subsets of Spec(R) closed under specialization, the Gabriel topologies on R, and the hereditary torsion pairs in Mod-R.
Proof. For the fact that G_Y is a Gabriel topology we refer to [31, Theorem VI.5.1 and §VI.6.6]. Next, T(Y) defined as above is clearly closed under submodules, factor modules, extensions and direct sums, so it is a torsion class in a hereditary torsion pair. We claim that F(Y) is the corresponding torsion-free class. Indeed, given M ∈ F(Y), we have t(M) = 0 by [17, 2.4.3], so M is torsion-free. Conversely, if M is torsion-free, we must have Ass M ∩ Y = ∅. This is because for any p ∈ Ass M we have an embedding R/p ֒→ M, but if p ∈ Y, we have R/p ∈ T(Y) owing to the fact that Y is closed under specialization and Supp R/p = V(p) ⊆ Y. This proves the claim, showing that the latter correspondence is well-defined. For statement (i), note that the inverse of Y ↦ G_Y is given by the assignment G ↦ G ∩ Spec(R), where G is a Gabriel topology. This follows from the equality G_Y ∩ Spec(R) = Y and [31, VI.6.13 and VI.6.15]. It is well known that Gabriel topologies are in bijection with hereditary torsion pairs; the hereditary torsion pair corresponding to G_Y is precisely (T(Y), F(Y)). Finally, for (iv), we know from [18, Lemma 4.5.2] that (T(Y) ∩ mod-R, F(Y) ∩ mod-R) is a torsion pair in mod-R and that it generates a torsion pair in Mod-R. Note that both T(Y) and F(Y) are closed under taking direct limits. In the case of F(Y) this follows from (iii). Hence, by Lemma 1.1, we have equalities.
Remark 2.4. The bijections from Proposition 2.3 can be reinterpreted in terms of a one-to-one correspondence established by Hochster; there is also an alternative description of the class F(Y).

For our classification, we need to decide which of the classes in mod-R closed under submodules and extensions are torsion-free classes in mod-R. These again correspond bijectively to subsets of Spec(R) closed under specialization, as has recently been shown in [30, Theorem 1]. We prefer to give a simple direct argument here:

Proposition 2.5. Using the notation from Proposition 2.3, the assignment Y ↦ (T(Y) ∩ mod-R, F(Y) ∩ mod-R) gives a bijective correspondence between subsets Y ⊆ Spec(R) closed under specialization and torsion pairs in mod-R.
Proof. The pair (T(Y) ∩ mod-R, F(Y) ∩ mod-R) is clearly a torsion pair in mod-R for every specialization closed set Y, and the assignment is injective since p ∈ Y if and only if R/p ∈ T(Y). We must prove the surjectivity.
To this end, suppose that (T, F) is a torsion pair in mod-R. By [32, Theorem 4], each R/p, p ∈ Spec(R), belongs either to T or to F; put Y = {p ∈ Spec(R) | R/p ∈ T} and X = {p ∈ Spec(R) | R/p ∈ F}. Since T is closed under factor modules, Y is closed under specialization. We claim that Supp N ⊆ Y for each N ∈ T. Indeed, given p ∈ X, we have R/p ∈ F. Then for any N ∈ T, Hom_R(N, R/p) = 0 implies Hom_{R_p}(N_p, k(p)) = 0, so the finitely generated R_p-module N_p has no maximal submodules. That is, N_p = 0 by the Nakayama Lemma (see e.g. [17, 1.2.28]). In particular, Supp N is specialization closed and disjoint from X, hence Supp N ⊆ Y. This proves the claim. We have shown that T ⊆ T(Y) ∩ mod-R; since every finitely generated module with support in Y is filtered by modules R/p with p ∈ Y, also T(Y) ∩ mod-R ⊆ T, whence (T, F) = (T(Y) ∩ mod-R, F(Y) ∩ mod-R).

Let us now give a relation to 1-cotilting modules, using results of Bazzoni, Buan and Krause.
Proposition 2.6. Let R be a (not necessarily commutative) right noetherian ring. Then the 1-cotilting classes C in Mod-R correspond bijectively to the torsion-free classes F in mod-R containing R. The correspondence is given by the assignments C ↦ C ∩ mod-R and F ↦ lim−→ F.

Proof. This follows from [10, Theorem 1.5], since all 1-cotilting modules are pure-injective by [4]. See also [18, Theorem 8.2.5].
As a direct consequence, we get a characterization and a classification of 1-cotilting classes in Mod-R for R commutative. Note that for R non-commutative, the torsion pair whose torsion-free class is a 1-cotilting class need not be hereditary; see [15, Theorem 2.5].
Theorem 2.7. Let R be a commutative noetherian ring and C ⊆ Mod-R. Then C is 1-cotilting if and only if C is the torsion-free class in a faithful hereditary torsion pair (T, C). In particular, the 1-cotilting classes C in Mod-R are parametrized by the subsets Y of Spec(R) closed under specialization with Ass R ∩ Y = ∅. The parametrization is given by Y ↦ F(Y).

Proof. By Proposition 2.6, 1-cotilting classes in Mod-R correspond bijectively to torsion-free classes in mod-R containing R, which by Propositions 2.3 and 2.5 and [18, Lemma 4.5.2] correspond bijectively to faithful hereditary torsion pairs in Mod-R. Composing the two assignments amounts to identifying a cotilting class C with the torsion-free part of the hereditary torsion pair. This shows the first part.
For the parametrization, we can use Proposition 2.3, as soon as we prove that R ∈ F(Y) if and only if Ass R ∩ Y = ∅; but this is immediate from the definition of F(Y).

Now, we will give a connection to tilting classes. For this purpose, we recall the concept of a transpose from [3].
Definition 2.8. Let C be a finitely generated left R-module with a projective presentation P_1 →^f P_0 → C → 0, where P_0, P_1 are finitely generated. Then an Auslander-Bridger transpose of C, denoted by Tr(C), is the cokernel of f*, where (−)* = Hom_R(−, R). That is, we have an exact sequence P_0* →^{f*} P_1* → Tr(C) → 0. By [3, Corollary 2.3], Tr(C) is uniquely determined up to adding or splitting off a projective direct summand. The following lemma gives some homological formulae for the transpose.

Lemma 2.9. Let R be a (not necessarily commutative) left noetherian ring, and let 0 ≠ U ∈ R-mod and n ≥ 0 be such that Ext^i_R(U, R) = 0 for all i = 0, 1, . . . , n. Then we have:

Proof. Denoting as in Definition 2.8 by (−)* the functor Hom_R(−, R), we get a sequence as above. (ii), (iii) These parts follow immediately using well-known natural isomorphisms.

It follows that all 1-cotilting classes over a one-sided noetherian ring are of cofinite type, that is, they are associated to 1-tilting classes by the elementary duality:

Theorem 2.10. Let R be a (not necessarily commutative) left noetherian ring. The assignment T ↦ T^+ induces a bijection between equivalence classes of 1-tilting right R-modules and equivalence classes of 1-cotilting left R-modules.
In particular, given a 1-cotilting class C in R-Mod, there is a class U ⊆ R-mod with U* = 0 for all U ∈ U such that C = {Tr(U) | U ∈ U}^⊺. The preimage of C under the assignment above is then the 1-tilting class {Tr(U) | U ∈ U}^⊥.

Proof. By a left-hand version of Proposition 2.6 there is a torsion pair (U, F) in R-mod such that R ∈ F and C = lim−→ F by [18, Theorem 4.5.2]. By Lemma 2.9(i) and (ii) for n = 0, the class S = {Tr(U) | U ∈ U} ⊆ mod-R consists of finitely presented modules of projective dimension one, and C = S^⊺. Now apply Lemmas 1.12 and 2.9(iii).

Now we summarize our findings for the one-dimensional setting over commutative noetherian rings in the main theorem of the section.
Theorem 2.11. Let R be a commutative noetherian ring. Then there are bijections between the following sets:

Proof. Let us first explicitly state the bijections.

We close this section with an equivalent, but more straightforward, parametrization of 1-tilting classes in terms of coassociated prime ideals and divisibility:

Definition 2.12. Let R be a commutative noetherian ring.
(1) Given an R-module M, a prime ideal p ∈ Spec(R) is said to be coassociated to M provided that p = Ann_R(M/U) for some submodule U of M such that the module M/U is artinian over R. We denote by Coass M the set of all prime ideals coassociated to M. For a class M ⊆ Mod-R, we set Coass M = ⋃_{M∈M} Coass M.
General cotilting classes
In this section, we classify all n-cotilting classes in Mod-R where R is an arbitrary commutative noetherian ring. In the next section, we will apply this classification to characterize all n-tilting classes in Mod-R.
Unfortunately, our methods do not seem to provide much information on the corresponding n-(co)tilting modules. Except for special classes of examples in [18,Chapters 5,6 and 8] and [25, §5], the only known way to construct, say, a cotilting module for a cotilting class C, seems to be as in the proof of [18, Theorem 8.1.9], using so-called special C-precovers.
Let us first introduce the sequences of subsets of Spec(R) which will parametrize both n-tilting and n-cotilting classes for given n ≥ 1.
Definition 3.1. In the following, (Y_1, . . . , Y_n) will always denote a sequence of subsets of Spec(R) such that (i) Y_i is closed under specialization for all 1 ≤ i ≤ n; (ii) Y_1 ⊇ Y_2 ⊇ ⋯ ⊇ Y_n; (iii) Y_i ∩ Ass ✵^{i−1}(R) = ∅ for all 1 ≤ i ≤ n. For any such (Y_1, . . . , Y_n) we define the class of modules

C_{(Y_1,...,Y_n)} = {M ∈ Mod-R | Ass ✵^{i−1}(M) ∩ Y_i = ∅ for all 1 ≤ i ≤ n}.

Remark 3.2. Throughout, we put X_i = Spec(R) \ Y_i for 1 ≤ i ≤ n. Conditions (i)-(iii) then say that each X_i is closed under generalization, that X_1 ⊆ X_2 ⊆ ⋯ ⊆ X_n, and that Ass ✵^{i−1}(R) ⊆ X_i for all 1 ≤ i ≤ n; moreover, M ∈ C_{(Y_1,...,Y_n)} if and only if Ass ✵^{i−1}(M) ⊆ X_i for all 1 ≤ i ≤ n.

For i ≥ 1, denote by P_i the set of all prime ideals in R of height i − 1. Since P_1 ⊆ Ass R, the well-known properties of Bass invariants of finitely generated modules imply that P_i ⊆ Ass ✵^{i−1}(R) ⊆ X_i for all 1 ≤ i ≤ n (see e.g. [17, Proposition 9.2.13]). In other words, (iii) implies (iii*), where (iii*) P_i ⊆ X_i for all 1 ≤ i ≤ n. Since Gorenstein rings are characterized by the equality P_i = Ass ✵^{i−1}(R) for each i ≥ 1 by [23, Theorem 18.8], it follows that (iii) is equivalent to (iii*) when R is Gorenstein. However, for general commutative noetherian rings condition (iii) may be more restrictive. In an extreme case, it may prevent the existence of any non-trivial sequences (Y_1, . . . , Y_n), as in the following example.

Example 3.3. Let k be a field, S = k[x, y]/(x², xy), and let (R, m, k) be the localization of S at the maximal ideal (x, y). It is easy to check that the ideal (x) ⊆ R is simple, so m ∈ Ass R. Hence given any (Y_1, . . . , Y_n) as in Definition 3.1, we necessarily have Y_i = ∅ for all 1 ≤ i ≤ n and C_{(Y_1,...,Y_n)} = Mod-R. In view of the main theorem below, this implies that there are no non-trivial tilting or cotilting classes over this ring R.
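At the other extreme, the simplest non-trivial case can be worked out over R = Z (an illustration added here):

```latex
% Over R = \mathbb{Z}, the minimal injective coresolution of R is
\[
0 \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Q} \longrightarrow \mathbb{Q}/\mathbb{Z} \longrightarrow 0,
\]
% so Ass ✵^0(\mathbb{Z}) = \{(0)\} and Ass ✵^1(\mathbb{Z}) = \{(p) \mid p \text{ prime}\}.
% Hence Y_1 may be any set of maximal ideals (it must avoid (0)), while Y_2 must avoid
% every (p) as well, so Y_2 = \emptyset. This reflects the fact that \mathbb{Z} has
% global dimension 1: only the one-dimensional (1-cotilting) data survive.
```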
Our next task is to prove that C (Y1,...,Yn) are precisely the n-cotilting classes in Mod-R. The following definition and lemma will allow us to use induction on n.
Definition 3.4. For any cotilting module C ∈ Mod-R, the corresponding cotilting class C = ⊥C and j ≥ 1, we define the class C^{(j)} = {M ∈ Mod-R | Ext^i_R(M, C) = 0 for all i ≥ j}.

Remark 3.6. If D is another module with C = ⊥D, then we can also use D to compute C^{(j)} for each j ≥ 1. Indeed, by dimension shifting, for each M ∈ Mod-R we have M ∈ C^{(j)}, if and only if Ω^{j−1}(M) ∈ C. So C^{(j)} is uniquely determined by the class ⊥C = ⊥D.
In particular, performing the construction from 3.4 for the cotilting class C^{(2)}, we obtain (C^{(2)})^{(j)} = C^{(j+1)} for all j ≥ 1.

Theorem 3.7. Let R be a commutative noetherian ring and n ≥ 1. Then the assignments Ψ: (Y_1, . . . , Y_n) ↦ C_{(Y_1,...,Y_n)} and Φ: C ↦ (Spec(R) \ Ass C^{(1)}, . . . , Spec(R) \ Ass C^{(n)}) give mutually inverse bijections between the sequences of subsets (Y_1, . . . , Y_n) of Spec(R) satisfying the three conditions of Definition 3.1, and the n-cotilting classes C in Mod-R.
We will prove the theorem in several steps. We start by proving that the map Ψ is injective, but we postpone the proof of the fact that Ψ is well-defined in the sense that each class of the form C (Y1,...,Yn) is cotilting.
Lemma 3.8. The assignment Ψ from Theorem 3.7 is injective.

Proof. Let (Y_1, . . . , Y_n) and (Y′_1, . . . , Y′_n) be two distinct sequences as in Definition 3.1; say, p ∈ Y′_i \ Y_i for some 1 ≤ i ≤ n. Denoting by M an (i − 1)-th syzygy module of k(p), we claim that M ∈ C_{(Y_1,...,Y_n)} \ C_{(Y′_1,...,Y′_n)}. Indeed, by Lemma 1.7(2) the only possible associated prime of a cosyzygy of k(p) is p, so Corollary 1.5 and Remark 1.6 give us for each 0 ≤ j ≤ n − 1: Ass ✵^j(M) ⊆ {p} ∪ Ass ✵^j(R) ∪ ⋯ ∪ Ass ✵^{j−i+2}(R).

Lemma 3.9. Let R be a commutative ring. Let C be a cotilting class in Mod-R, and let M ∈ C and F be a flat R-module. Then M ⊗_R F ∈ C. In particular, M_p ∈ C for any M ∈ C and p ∈ Spec(R).
Proof. By Lazard's theorem (see e.g. [18, Corollary 1.2.16]), we can express F as a direct limit F = lim−→_{i∈I} F_i of finitely generated free modules F_i. In particular, M ⊗_R F_i ≅ M^{n_i} ∈ C for each i ∈ I. Since C is closed under taking direct limits by [18], also M ⊗_R F = lim−→ (M ⊗_R F_i) ∈ C. The last assertion follows since M_p ≅ M ⊗_R R_p and R_p is flat as an R-module.
The next observation gives us a relation between C and C^{(2)} (cf. Definition 3.4 and Remark 3.6).

Lemma 3.10. Let C be a cotilting class in Mod-R and 0 → K → E → M → 0 be an exact sequence with E an injective module belonging to C. Then K ∈ C if and only if M ∈ C^{(2)}.

Proof. Let C be a cotilting module for C. Then Ext^i_R(K, C) ≅ Ext^{i+1}_R(M, C) for each i ≥ 1. The conclusion follows directly from the definition.

Now we prove another part of Theorem 3.7, namely that Ψ ◦ Φ = id. Again, we postpone for the moment the proof that the map Φ is well defined in the sense that the sequence (Spec(R) \ Ass C^{(1)}, . . . , Spec(R) \ Ass C^{(n)}) of subsets of Spec(R) satisfies for each n-cotilting class C the conditions in Definition 3.1.
Proposition 3.11. Let n ≥ 1 and C be an n-cotilting class. Then the following hold: (i) If p ∈ Ass C, then k(p) ∈ C.
(ii) C is closed under taking injective envelopes.
(iii) M ∈ C if and only if Ass E^{i−1}(M) ⊆ X_i for each 1 ≤ i ≤ n, where X_i = Ass C^{(i)}.
Proof. We will prove the statement by induction on n. More precisely, we will first show that (i) and (iii) hold for n = 1, and that (i) ⇒ (ii) for each n ≥ 1. Then we will prove the statements (i) and (iii) simultaneously by induction. The proof of (i) for n = 1: Suppose that p ∈ Ass C. That is, R/p ⊆ M for some M ∈ C. Lemma 3.9 then gives k(p) ⊆ M p ∈ C. By Theorem 2.7, C is a torsion-free class, so C is closed under submodules and k(p) ∈ C.
(iii) for n = 1 is a straightforward consequence of Theorem 2.7.
(i) ⇒ (ii) for each n ≥ 1: By Lemma 1.3 for i = 0, for each M ∈ Mod-R, E(M ) is a direct sum of copies of the modules E(R/p) for p ∈ Ass M . So if p ∈ Ass C, then k(p) ∈ C by (i), and since E(R/p) is k(p)-filtered by Lemma 1.7, also E(R/p) ∈ C. Thus C is closed under injective envelopes.
(iii) for n > 1: Using conditions (i) and (ii) for n and Lemma 1.7, we obtain the implications E(R/p) ∈ C ⇒ p ∈ Ass C ⇒ k(p) ∈ C ⇒ E(R/p) ∈ C.
Also, condition (ii) for n and Lemma 3.10 imply that a module M belongs to C, if and only if E(M ) ∈ C and ✵(M ) ∈ C (2) . Since for each module M , the indecomposable direct summands of E(M ) are precisely the E(R/p) for p ∈ Ass M , we infer that E(M ) ∈ C if and only if Ass M ⊆ Ass C = X 1 .
We now apply condition (iii) to the (n − 1)-cotilting class C^{(2)}. By Remark 3.6 we obtain (C^{(2)})^{(i−1)} = C^{(i)} for 2 ≤ i ≤ n. In particular, ✵(M) ∈ C^{(2)} if and only if Ass E^{i−1}(M) ⊆ X_i for all 2 ≤ i ≤ n, and the conclusion follows.
Let us summarize what has been done so far. We have proved that the assignment Ψ in Theorem 3.7 is injective, and that Ψ • Φ = id. We are left to show that each sequence of subsets in the image of Φ meets the requirements of Definition 3.1, and that each class obtained by an application of Ψ is actually cotilting. We start with the former statement, which is easier. Lemma 3.12. Let n ≥ 1 and C be an n-cotilting class. If we put X i = Ass C (i) and Y i = Spec(R) \ X i for 1 ≤ i ≤ n, then the sequence (Y 1 , . . . , Y n ) of subsets of Spec(R) satisfies conditions (i)-(iii) in Definition 3.1.
Proof. Condition (ii) is clear from the inclusions C = C (1) ⊆ · · · ⊆ C (n) . Condition (iii) holds for i = 1 because R ∈ C; for 1 < i ≤ n it follows by induction using Lemma 3.10 and Proposition 3.11(ii).
In order to show (i), we prove that each X i is closed under generalization. Let p ∈ X i . Then k(p) ∈ C (i) by Proposition 3.11(i). Hence E(k(p)) ∈ C (i) and E R (k(p)) ∼ = E Rp (k(p)) = E R (R/p), by Lemma 1.7. This implies that C (i) contains an injective cogenerator for Mod-R p . Given any q ⊆ p in Spec(R), E(R/q) is an injective R p -module (see e.g. [17, Theorem 3.3.8(1)]), so E(R/q) is a direct summand in E R (R/p) I for some set I. But C (i) is closed under arbitrary direct products and direct summands, hence also E(R/q) ∈ C (i) and q ∈ X i = Ass C (i) .
Finally, we are going to prove that each class C = C_{(Y_1,...,Y_n)} as in Definition 3.1 is n-cotilting. We require a few definitions first. Recall that a class of modules D ⊆ Mod-R is definable provided that D is closed under taking direct products, direct limits and pure submodules. The following characterization of n-cotilting classes will be useful for completing our task:

Proposition 3.14. Let n ≥ 0 and C be a class of modules. Then C is n-cotilting, if and only if all of the following conditions are satisfied: (i) C is definable, (ii) R ∈ C and C is closed under taking extensions and syzygies (in conjunction with (i), this only says that C is resolving in Mod-R), (iii) each n-th syzygy module belongs to C.
Proof. If C is n-cotilting, then C is definable by [18,Theorem 8.1.7]. Clearly R ∈ C, and there is a hereditary cotorsion pair of the form (C, C ⊥ ) such that the class C ⊥ consists of modules of injective dimension ≤ n by [18,Theorem 8.1.10]. This implies conditions (ii) and (iii).
Assume on the other hand that (i)-(iii) hold. Using [18, Lemma 1.2.17], we can construct for each M ∈ C a well-ordered chain in C consisting of pure submodules of M such that |M_{α+1}/M_α| ≤ |R| + ℵ_0 for each α < σ and M_β = ⋃_{α<β} M_α for every limit ordinal β ≤ σ. Note that definable classes are closed under taking pure epimorphic images by [26, Theorem 3.4.8]. Thus also each subfactor M_{α+1}/M_α belongs to C. In particular, it follows easily that M ∈ C if and only if M is S-filtered, where S is a representative set for the modules in C of cardinality ≤ |R| + ℵ_0. Since clearly R ∈ S, we can use [18].

Proposition 3.15. Let (Y_1, . . . , Y_n) be a sequence of subsets of Spec(R) as in Definition 3.1. Then C = C_{(Y_1,...,Y_n)} is an n-cotilting class.

Proof. We use the characterization of n-cotilting classes from Proposition 3.14. Clearly, R ∈ C by the assumptions on (Y_1, . . . , Y_n). Conditions (ii) and (iii) of Proposition 3.14 then follow easily from Lemma 1.4 and Corollary 1.5 (see also Remark 1.6). Thus, it only remains to prove that C is definable.
To this end, note first that for a family of modules, the product of injective coresolutions of the modules is a (possibly non-minimal) injective coresolution of the product of the modules. Using the fact that Y_i is closed under specialization for every i, Proposition 2.3 tells us that the class of all injective R-modules contained in the torsion-free class F(Y_i) is closed under products for every i. Hence C is closed under products itself, using Definition 3.1 and Lemma 1.3.
Assume next that M ∈ C and K ⊆ M is a pure submodule. To prove that K ∈ C, we must show that for each 1 ≤ i ≤ n and p ∈ Y_i, we have Ext^i_{R_p}(k(p), K_p) = 0. Since the embedding K ⊆ M is a direct limit of split monomorphisms and localizing at p preserves direct limits, also the embedding K_p ⊆ M_p is pure. The conclusion that Ext^i_{R_p}(k(p), K_p) = 0 then follows from the fact that k(p) is a finitely generated R_p-module, and thus the class {N ∈ Mod-R_p | Ext^i_{R_p}(k(p), N) = 0} is closed under pure submodules.

The proof that C is closed under direct limits is similar. Namely, for each 1 ≤ i ≤ n and p ∈ Y_i, the class {M ∈ Mod-R | Ext^i_{R_p}(k(p), M_p) = 0} is the kernel of the composition of two direct limit preserving functors: the localization at p and the functor Ext^i_{R_p}(k(p), −); and C is the intersection of all these classes.
Proof of Theorem 3.7. Lemma 3.12 and Proposition 3.15 show that Φ assigns to each n-cotilting class a sequence satisfying the conditions of Definition 3.1, and conversely that Ψ assigns to each such sequence an n-cotilting class. Further, we have proved in Lemma 3.8 and Proposition 3.11 that Ψ is injective and Ψ ◦ Φ = id. Thus, Φ and Ψ are mutually inverse bijections.

We conclude our discussion with two consequences. First, we clarify the effect of passing from C to C^{(j)} in the sense of Definition 3.4 on the corresponding filtrations of subsets of the spectrum:

Corollary 3.16. Let C = C_{(Y_1,...,Y_n)} be an n-cotilting class and 1 ≤ j ≤ n. Then C^{(j)} = C_{(Y_j,...,Y_n)}.

Proof. Since we now know that C_{(Y_1,...,Y_n)} is an n-cotilting class, the statement follows directly from Remarks 3.2 and 3.6.
Further, we show that the dimension shifting in the sense of Definition 3.4 works nicely also at the level of cotilting modules.
Corollary 3.17. Let n ≥ 1, let C be an n-cotilting module with induced cotilting class C = ⊥C, and put D = ✵(C) ⊕ E(C). Then D is an (n − 1)-cotilting module with ⊥D = C^{(2)}.

Proof. Obviously, D has injective dimension ≤ n − 1, so (C1) holds. Condition (C2) also holds for D, since for any i ≥ 1 and any cardinal κ we have D ∈ C^{(2)} by Lemma 3.10 and Proposition 3.11, hence D^κ ∈ C^{(2)} = ⊥D, and Ext^i_R(D^κ, D) = 0. To prove (C3), it is by [5, Lemma 3.12] enough to show that C^{(2)} ⊆ Cog D, that is, each M ∈ C^{(2)} is cogenerated by D. We will show more, namely that each module M with Ass M ⊆ X_2 is cogenerated by D. Indeed, taking any M with Ass M ⊆ X_2, we have Ass E(M) ⊆ X_2.
The main theorem
We are now going to prove that the correspondence T → T + induces a bijection between the equivalence classes of n-tilting and n-cotilting modules. This correspondence together with Theorem 3.7 will then rather quickly yield a proof of our main classification result.
We first need a translation of the definition of C_{(Y_1,...,Y_n)} into a homological condition.

Lemma 4.1. Let Y ⊆ Spec(R) be closed under specialization, M ∈ Mod-R and i ≥ 0. Then the following are equivalent: (i) µ_i(p, M) = 0 for each p ∈ Y; (ii) Ext^i_R(R/p, M) = 0 for each p ∈ Y.

Proof. The implication (ii) ⇒ (i) follows by localizing at p and applying Lemma 1.3. Conversely, suppose that µ_i(p, M) = 0 for each p ∈ Y and consider the beginning of an injective coresolution of M. Then each element of Ext^i_R(R/p, M) is represented by a coset of some homomorphism f ∈ Hom_R(R/p, E^i(M)). If p ∈ Y, then on one hand Im f is an R/p-module.

Since C is a cotilting class by Proposition 3.15, the expression of C in terms of the Tor-groups follows from Lemma 2.9(ii) (applied for U = R/p, where i = 1, . . . , n and p ∈ Y_i). The fact that we have a bijection between (i) and (iii) is an immediate consequence of Theorem 3.7. The bijection between (ii) and (iii) is a consequence of Lemmas 2.9(iii) and 1.12.
In fact, the Ext and Tor orthogonals above for T and C, respectively, can be taken with respect to (typically considerably smaller) sets of finitely generated modules. For a given sequence (Y_1, . . . , Y_n), let us denote for each i by Ȳ_i the set of minimal elements in Y_i with respect to inclusion. Since (Spec(R), ⊆) satisfies the descending chain condition, for each p ∈ Y_i there exists q ∈ Ȳ_i such that q ⊆ p. We claim that:

Proof. Let us provisionally denote the above candidate for C = C_{(Y_1,...,Y_n)} by C′. We shall prove that C′ = C by induction on the length n of the sequence (Y_1, . . . , Y_n).
First of all, we claim that C′ is n-cotilting. If n = 1, then C′ = {Tr(R/p) | p ∈ Ȳ_1}^⊺, since proj.dim_R Tr(R/p) ≤ 1 by Lemma 2.9(i); hence C′ is a 1-cotilting class by Lemma 1.12(i). Hence C′ is n-cotilting by Lemmas 2.9(i) and 1.12(i) again. Now clearly C′ ⊇ C_{(Y_1,...,Y_n)}. Thus, Theorem 3.7 implies that C′ = C_{(Y′_1,...,Y′_n)} for some sequence (Y′_1, . . . , Y′_n) as in Definition 3.1 with Y′_i ⊆ Y_i for each i. On the other hand, since Ȳ_i ⊆ Y_i and R ∈ C, we have Ext^{i−1}_R(R/p, R) = 0 for all 1 ≤ i ≤ n and p ∈ Ȳ_i. Combining Lemma 2.9(ii) with the proof of (ii) ⇒ (i) in Lemma 4.1, we infer that µ_{i−1}(p, M) = 0 for each M ∈ C′ and p ∈ Ȳ_i. In particular, Y′_i = Spec(R) \ Ass (C′)^{(i)} ⊇ Ȳ_i for each i = 1, . . . , n by Remark 3.6 and Corollary 1.5. Since the Y′_i are specialization closed, it follows that Y′_i = Y_i. The claim for T is a consequence of Lemma 1.12(ii).
In view of Lemma 1.10, Theorem 4.2 also yields a classification of the resolving classes in mod-R consisting of modules of bounded projective dimension:

Corollary 4.4. Let n ≥ 1. Then the resolving subclasses of mod-R consisting of modules of projective dimension ≤ n are parametrized by the sequences (Y_1, . . . , Y_n) of subsets of Spec(R) satisfying the conditions of Definition 3.1.

Proof. Let T be the n-tilting class corresponding to the sequence (Y_1, . . . , Y_n) by Theorem 4.2. By Lemma 1.10, T also corresponds to the resolving class S = ⊥T ∩ mod-R. Using Corollary 4.3, we have T = {M ∈ Mod-R | Ext^1_R(E, M) = 0}. Hence ⊥T is the class of all direct summands of E-filtered modules by [18, 3.2.4]. Then S is the class of all direct summands of finitely E-filtered modules by Hill's Lemma [18, 4.2.6].
Another consequence of Theorem 4.2 reveals a remarkable lack of module approximations by resolving classes in mod-R in the local case.
Given two classes A ⊆ C ⊆ Mod-R, we say that A is special precovering in C provided that for each module M ∈ C there exists an exact sequence 0 → B → A → M → 0 with A ∈ A and Ext^1_R(A′, B) = 0 for each A′ ∈ A. Special precovering classes in Mod-R are abundant: for example, if T is any tilting class, then the class ⊥T is special precovering in Mod-R, see [18, 5.1.16]. One might expect that S = ⊥T ∩ mod-R will then be special precovering in mod-R. However, if R is local, then this occurs only in the trivial cases when T = Mod-R or S = mod-R:

Let C ∈ mod-R. By (i), we have an exact sequence 0 → B → A → C → 0 with A ∈ S and B ∈ T ∩ mod-R, hence B = 0 and C ∈ S. Thus S = mod-R, and R has finite global dimension.
Remark 4.6. In the particular case of henselian Gorenstein local rings, there is a more complete picture available. By [33], the only resolving (special) precovering classes in mod-R are (1) the class of all free modules of finite rank, (2) the class of all maximal Cohen-Macaulay modules, and (3) mod-R.
Cotilting over Gorenstein rings and Cohen-Macaulay modules
In this final section, we will restrict ourselves to the particular setting of Gorenstein rings, and later even regular rings. We generalize some results from [34], but our main concern is the relation to the existence of finitely generated Cohen-Macaulay modules and, in particular, to Hochster's Conjecture E from [21]. The main outcome here is Theorem 5.16, which gives new information on properties of the (conjectural) maximal Cohen-Macaulay modules.
5.1.
Cotilting classes over Gorenstein rings. We start by considering torsion products of injective modules over Gorenstein rings. Recall that R is Gorenstein if R is commutative noetherian and inj.dim Rp R p < ∞ for each p ∈ Spec(R). If r ∈ p \ q, then the multiplication by r is locally nilpotent on E(R/p), but an isomorphism on E(R/q). So both are true of the endomorphism of Tor R i (E(R/p), E(R/q)) given by the multiplication by r. This is only possible when Tor R i (E(R/p), E(R/q)) = 0. (Note that this argument does not need the Gorenstein assumption.)
For the remaining case of p = q, we can assume that R is local by [17,Theorem 3.3.3]; then the result is a consequence of [17,Theorem 9.4.6].
(ii) This is proved in [
So Tor Rp k (E(k(p)), E(k(p))) = 0, because E(k(p)) is a {k(p)}-filtered R p -module by Lemma 1.7, in contradiction with part (i) for the local Gorenstein ring R p .
(iv) Notice that by (i) and (iii) we have Tor R k−i+j (E(R/p), E j (M )) = 0 = Tor R k−i+j+1 (E(R/p), E j (M )) for every 0 ≤ j < i, where E j (M ) is the j-th term of a minimal injective coresolution of M . Indeed, the right hand side equality for j = i − 1 follows as in (iii) with ✵ i−1 (M ) in place of M , together with the assumption that µ i−1 (p, M ) = 0. Now, the short exact sequences 0 → ✵ j (M ) → E j (M ) → ✵ j+1 (M ) → 0, where j again ranges from 0 to i − 1, give rise to exact sequences of torsion products. Thus, Tor R k−i+j+1 (E(R/p), ✵ j+1 (M )) ∼ = Tor R k−i+j (E(R/p), ✵ j (M )) for each j < i, and the claim follows by induction. The second claim is an immediate consequence of part (iii) applied to ✵ i (M ) and of Lemma 1.3.
A direct consequence is another expression of an n-cotilting class over a Gorenstein ring, which is alternative to the ones in Theorem 4.2 and follows directly from Lemma 5.1(iii) and (iv). Specializing Theorem 4.2 and Corollary 4.3 to Gorenstein rings, we almost immediately get a formula as in Proposition 5.2, but with finitely generated modules. Some price must be paid for this, however, in terms of associated prime ideals, as we will see later in Remark 5.7. Recall that as in Corollary 4.3 we denote for a set Y ⊆ Spec(R) by Ȳ the set of all minimal elements of the poset (Y, ⊆). We also introduce a notation which we will use in the rest of the paper:

Definition 5.3. Let R be Gorenstein and p ∈ Spec(R) of height ≥ 1. We denote L(p) = Tr(Ω ht p−1 (R/p)).

Proof. Given a prime p of height k = ht p ≥ 1, note that Ext i R (R/p, R) = 0 for all i = 0, . . . , k − 1. Indeed, this follows from the shape of the injective coresolution of R (see [17, Theorem 9.2.27]) and the fact that Hom R (R/p, E(R/q)) = 0 for every q ∈ Spec(R) \ V (p). Thus, proj.dim R L(p) = k by Lemma 2.9(i). Note also that we have for every i = 1, . . . , k: The statements on C and T follow from Corollary 4.3, using the isomorphisms of functors Tor k−i+1 R (L(p), −) ∼ = Tor 1 R (Tr(Ω i−1 (R/p)), −) and similarly for Ext.

In connection with Cohen-Macaulay modules and Hochster's conjecture below, we shall be interested in the associated prime ideals of the modules L(p), or more generally in their Bass invariants. A step toward the goal is to understand what the classes L ⊺ look like for finitely generated modules L of finite flat (hence projective) dimension. Such classes are cotilting classes thanks to Lemma 1.12(i), so in particular they are of the form C (Y1,...,Yn) for a sequence of subsets of Spec(R) as in Definition 3.1. Hence the problem reduces to computing Y 1 , . . . , Y n , which for Gorenstein rings amounts to the following general lemma: Lemma 5.5.
Let R be Gorenstein and L be a finitely generated non-projective R-module of finite projective dimension n. Then L ⊺ is an n-cotilting class, and in view of the correspondence from Theorem 3.7 we have L ⊺ = C (Y1,...,Yn) .

First of all, L(p) is only defined uniquely up to adding or splitting off a projective summand; recall Definition 2.8 and the comment below it. There is a more substantial problem, however. If Ass L(p) = {p} for a particular choice of L(p), then we have Hom R (L(p), R) = 0 since Supp L(p) ∩ Ass R = ∅. This would imply that proj.dim R R/p ≤ ht p by the very construction of L(p). As far as we are concerned, this is a trivial situation. In that case, we could replace L(p) by R/p in the formula in Proposition 5.4, as we will see below in Theorem 5.10. In fact, R/p would then be a Cohen-Macaulay module by Lemma 5.11. The latter is certainly not true in general.
5.2.
Cohen-Macaulay modules and Hochster's conjecture. In Propositions 5.2 and 5.4 we get two different expressions of cotilting classes over Gorenstein rings. Now we are going to discuss the possibility of combining these two approaches. Namely, we would like to find a finitely generated module K(p) for each p ∈ Spec(R) \ Ass R such that proj.dim R K(p) = ht p, Ass K(p) = {p}, and such that these modules can be used to express any cotilting class. We will see later that the last property follows from the other two, and that this attempt leads to the question of existence of certain Cohen-Macaulay modules. Let us recall some relevant definitions and results. [17, 9.2.20] implies that either M has infinite projective dimension, or else M is free.
If R is a general commutative noetherian ring and M ∈ mod-R, then M is Cohen-Macaulay if M m is a Cohen-Macaulay R m -module for each maximal ideal m ∈ Supp M . The ring R is called Cohen-Macaulay if it is Cohen-Macaulay as a module over itself. Lemma 5.9. Let R be a Gorenstein ring, p ∈ Spec(R), and K ∈ mod-R be such that Ass K = {p}. Then the following are equivalent: (i) K is a Cohen-Macaulay module such that proj.dim R K < ∞; (ii) proj.dim R K = ht p.
Proof. If (i) holds, then for each maximal ideal m ∈ Supp K, K m is a Cohen-Macaulay R m -module of finite projective dimension, and the Auslander-Buchsbaum formula [17, 9.2.20] gives proj.dim Rm K m = ht m − Kdim K m . Since Ass K = {p}, we get Kdim K m = Kdim (R/p) m = ht m − ht p. This proves that proj.dim R K = ht p. Conversely, if (ii) holds then for each maximal ideal m ∈ Supp K, we have depth K m = Kdim R m − proj.dim Rm K m ≥ Kdim R m − ht p = Kdim (R/p) m = Kdim K m , so K m is a Cohen-Macaulay module. Now we shall show how to express any cotilting class using Cohen-Macaulay modules as in the latter lemma. Using the convention of Corollary 4.3, given a set Y ⊆ Spec(R), we denote by Ȳ the set of all minimal elements of the poset (Y, ⊆).
Theorem 5.10. Let R be a Gorenstein ring and assume that for each p ∈ Spec(R) \ Ass R there exists a Cohen-Macaulay module K(p) ∈ mod-R such that proj.dim R K(p) = ht p and Ass K(p) = {p}. Then for each (Y 1 , . . . , Y n ) as in Definition 3.1, the n-tilting class corresponding to (Y 1 , . . . , Y n ) by Theorem 4.2 can be expressed using the modules K(p). We can w.l.o.g. assume that R is a regular domain. We must then prove that R/p is Cohen-Macaulay for each 0 ≠ p ∈ Spec(R). This is trivial when p has height 3. The cases of height 1 and 2 are proved by localization: if p has height 2, then the localization of R/p at any maximal ideal is a 1-dimensional local domain which is necessarily Cohen-Macaulay [9, p.64]. Finally, each regular local ring is a UFD, so its prime ideals of height 1 are principal, hence R/p is even Gorenstein for p of height 1, see [9, 3.1.19(b)].
However, the existence of Cohen-Macaulay modules K(p) as in Lemma 5.9 in broader generality is closely related to long standing open problems in commutative algebra. One of them is: Hochster's Conjecture. Each complete local ring possesses a finitely generated maximal Cohen-Macaulay module. Since factors of complete local rings are complete, and each complete local ring is a factor of a complete regular local ring, Hochster's Conjecture can equivalently be stated as follows: for each complete regular local ring R and each p ∈ Spec(R) there exists a maximal Cohen-Macaulay R/p-module K(p). In [21, §3], Hochster's Conjecture is proved for rings of Krull dimension ≤ 2. In fact, the canonical R/p-module K(p) = Ext 2 R (R/p, R) satisfies depth K(p) = Kdim K(p) = Kdim R/p = 2, so K(p) is a maximal Cohen-Macaulay R/p-module in that case (see [28, Example 3.2(b)]). In general, however, the conjecture remains wide open.
The following lemma shows that in the complete local case, Hochster's Conjecture implies the existence of Cohen-Macaulay modules as in Lemma 5.9 for each p ∈ Spec(R): Lemma 5.13. Let R be a regular local ring and p ∈ Spec(R). Assume there exists a maximal Cohen-Macaulay R/p-module K(p). Then viewed as an R-module, K(p) is Cohen-Macaulay and satisfies Ass K(p) = {p}.
Proof. The maximality of K(p) implies that K(p) is a torsion-free R/p-module by [16, 21.9]. So K(p) ⊆ (R/p) n for some n < ω by [11,Proposition VII.2.4]. Considered as an R-module, K(p) thus satisfies Ass K(p) = {p} which implies that K(p) is a Cohen-Macaulay R-module.
To see how limiting the assumption of existence of modules K(p) from Theorem 5.10 is, we relate it to Serre's Positivity conjecture. In order to state it, we recall the notion of the intersection multiplicity: Definition 5.14. Let R be a regular local ring of Krull dimension d and M, N ∈ mod-R be such that M ⊗ R N has finite length. Then the intersection multiplicity of M and N is defined as χ(M, N ) = Σ d i=0 (−1) i length(Tor R i (M, N )).
Serre's Conjectures. [29] Assume that R is a regular local ring of Krull dimension d, and M, N ∈ mod-R are such that M ⊗ R N has finite length. Then (1) Kdim M + Kdim N ≤ Kdim R;
Design and performance evaluation of a novel ranging signal based on an LEO satellite communication constellation
ABSTRACT Driven by improvements in satellite internet and Low Earth Orbit (LEO) navigation augmentation, the integration of communication and navigation has become increasingly common, and further improving navigation capabilities based on communication constellations has become a significant challenge. In the context of the existing Orthogonal Frequency Division Multiplexing (OFDM) communication systems, this paper proposes a new ranging signal design method based on an LEO satellite communication constellation. The LEO Satellite Communication Constellation Block-type Pilot (LSCC-BPR) signal is superimposed on the communication signal in a block-type form and occupies some of the subcarriers of the OFDM signal for transmission, thus ensuring the continuity of the ranging pilot signal in the time and frequency domains. Joint estimation in the time and frequency domains is performed to obtain the relevant distance value, and the ranging accuracy and communication resource utilization rate are determined. To characterize the ranging performance, the Root Mean Square Error (RMSE) is selected as an evaluation criterion. Simulations show that when the number of pilots is 2048 and the Signal-to-Noise Ratio (SNR) is 0 dB, the ranging accuracy can reach 0.8 m, and the pilot occupies only 50% of the communication subcarriers, thus improving the utilization of communication resources and meeting the public demand for communication and location services.
Introduction
With the development of satellite navigation and mobile communication technologies, the demand for high-precision location services has increased. Currently, the positioning technology used in satellite navigation systems is widely applied in various fields, but navigation signals have weak anti-interference ability (Chen et al. 2018). In urban canyons with indoor and outdoor occlusion and electromagnetic interference, problems such as weak signals and poor penetration arise, and highly accurate location services cannot be provided (Jia et al. 2020; Zhu et al. 2018). An OFDM signal has wide coverage and strong anti-multipath, anti-fading, and anti-interference capabilities, which can compensate for the shortcomings of navigation signals (Piccinni et al. 2020). Therefore, the integration of communication and navigation systems based on LEO satellite communication constellations can improve the positioning accuracy and coverage of navigation systems (Cao 2020; Cui and Shi 2013; Gaur and Prasad 2020; Guo et al. 2022; Yin et al. 2019; Wang et al. 2019).
In recent years, the integration and development of LEO navigation augmentation systems and satellite communication constellations have greatly promoted the integration of communication and navigation (Deng et al. 2021; Qu et al. 2017; Reid et al. 2018; Wang et al. 2018a, 2019b). The integration of communication and navigation is important for continuously improving Positioning, Navigation, and Timing (PNT) services and is a popular research topic around the world (Benzerrouk et al. 2019; Leyva-Mayorga et al. 2020). In China, a team led by Professor Deng Zhongliang from Beijing University of Posts and Telecommunications has proposed Time and Code Division-Orthogonal Frequency Division Multiplexing (TC-OFDM), a design scheme for integrating communication and navigation at the signal level. The scheme superimposes a low-power directly spread navigation signal with high-precision ranging capability and an OFDM communication signal in the same time slot to form a TC-OFDM positioning signal (Yu 2013). A TC-OFDM positioning signal can achieve a positioning accuracy of 3 m in the horizontal direction and 1 m in the vertical direction for high-precision seamless indoor and outdoor positioning over wide areas. TC-OFDM is an integration method in the time domain (Liu 2019). A study has also proposed a signal integration method in the frequency domain, which superimposes navigation and communication signals, providing new concepts for integrating communication and navigation and continuously improving the overall performance of integrated systems (Liang 2019). In addition, satellite navigation augmentation ranging signals can be broadcast to users through mobile communication networks, assisting users in obtaining global high-precision positioning services.
In addition, with the increasing number of satellites and the scarcity of available frequency band resources, the integration of communication and navigation can not only provide high-precision navigation and positioning services but also save frequency band resources by using satellite communication signals to broadcast navigation augmentation signals (Meng et al. 2018).
Therefore, to integrate communication and navigation and improve the utilization of communication resources, this paper proposes the design of a new ranging signal based on an LEO communication constellation. The distance estimate is obtained by a joint time-frequency domain estimation method. The factors that affect the ranging accuracy are analyzed, and the ranging performance of the new signal is simulated, with the aim of continuously improving the utilization of communication resources while preserving ranging accuracy. The realization of the new ranging signal based on the LEO communication constellation will effectively improve multiple service capabilities and greatly promote the development of the location services industry (Wang et al. 2018b, 2019a; Wu, Sun, and Xie 2020a; Wu et al. 2020b).
Iridium STL signal
The Iridium STL signal system combines satellite navigation and communication. The STL signal is a dedicated pulse signal obtained by modifying the paging channel of the original Iridium satellite communication system. The designed signal containing navigation and positioning information is a spread-spectrum Radio Frequency (RF) signal generated by radio frequency modulation of the communication satellite signal pulse. The transmitted burst signal includes three components: a Continuous-Wave (CW) mark, the PRN pilot code, and the navigation and positioning information. All components require specific coding, which provides the signal with high gain, strong landing power, strong penetration, and high resistance to interference. The satellites transmit signals in one direction, thereby not only improving the navigation and positioning performance of the signal but also greatly enhancing GPS positioning, navigation, and timing capabilities (Xie 2019). The Iridium STL signal system is shown in Table 1.
In the above satellite communication system, the Iridium STL signal is transmitted over the paging channel, and the communication information is expressed in the same way as in the Iridium system. The design uses some of the time-frequency resources, divides the signal into basic time-frequency units, and carries out the physical-layer design of the signal, on the basis of which a ranging function is added to meet different PNT performance requirements. When a user initiates a call request, the user terminal is provided with PNT services. The design of the Iridium STL signal lays the foundation for designing integrated signals based on new satellite communication systems.

OFDM is a key 4th-Generation (4G) technology and the main basis for the signal system design of satellite internet. As the main modulation method in 4G mobile communication systems, OFDM is the most widely used multicarrier modulation transmission scheme (Huang and Wang 2018; Li, Wang, and Ding 2020). In an OFDM system, the transmitting end maps the digital signal to subcarrier amplitudes and phases and performs an Inverse Fast Fourier Transform (IFFT) to convert the spectral representation of the data to the time domain. The IFFT is an efficient implementation of the Inverse Discrete Fourier Transform (IDFT); it produces the same result while improving computational efficiency and can be applied widely across systems. The receiving end performs the opposite operation, using the Fast Fourier Transform (FFT) for decomposition, and the amplitudes and phases of the subcarriers are finally converted back to a digital signal. The OFDM system block diagram is shown in Figure 1.
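As a minimal illustration of this transmit/receive chain (a sketch assuming NumPy, not the paper's implementation; the subcarrier count and QPSK mapping are example choices), the IDFT/DFT round trip can be written as:

```python
import numpy as np

def ofdm_modulate(symbols):
    """Map frequency-domain subcarrier symbols to time-domain samples (IFFT)."""
    return np.fft.ifft(symbols)

def ofdm_demodulate(samples):
    """Recover the subcarrier symbols from time-domain samples (FFT)."""
    return np.fft.fft(samples)

rng = np.random.default_rng(0)
N = 64  # number of subcarriers (example value)
bits = rng.integers(0, 2, size=(N, 2))
# QPSK mapping of the bit pairs onto the N subcarriers
tx = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

rx = ofdm_demodulate(ofdm_modulate(tx))
assert np.allclose(rx, tx)  # subcarrier orthogonality: symbols recovered exactly
```

In a real system a cyclic prefix, channel, and equalizer would sit between the two transforms; they are omitted here to show only the orthogonality property.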
OFDM is a technology that divides the given channel into many orthogonal subchannels in the frequency domain and uses a subcarrier for modulation on each subchannel. The subcarriers transmit in parallel and can thereby effectively combat multipath effects, eliminate Inter-Symbol Interference (ISI), combat frequency-selective fading, and achieve high channel utilization. OFDM can transmit data at high speeds over multipath and Doppler-shifted wireless mobile channels. Therefore, using the existing OFDM system in designing the navigation component of an LEO communication constellation can not only improve the ability of the navigation signal to resist interference and multipath, but also effectively save spatial frequency band resources, thereby greatly facilitating the integration of communication and navigation (Voronkov et al. 2018; Wu et al. 2022).
LSCC-BPR signal
With the continuous updating of mobile communication systems, the OFDM system has become the main basis for the design of integrated communication and navigation signals. Through serial-to-parallel conversion, the input high-speed data are converted into multiple parallel low-speed data streams; this conversion greatly improves the information transmission rate and spectrum utilization. In the design of new ranging signals based on LEO communication constellations, the integration of the OFDM signal and the ranging pilot signal can not only improve the anti-interference and anti-multipath capabilities of the navigation signal, but also effectively reduce the use of frequency band resources. The realization of integrated communication and navigation based on LEO communication constellations will continue to enhance satellite navigation capabilities (Zhou 2020).
Building on the existing OFDM communication system, the LSCC-BPR signal is composed of the OFDM signal and a Constant Amplitude Zero Auto Correlation (CAZAC) signal superimposed in the same time slot. The CAZAC signal is transmitted in block-type form on some of the OFDM subcarriers and remains continuous in the time and frequency domains. The LSCC-BPR signal thus improves the ranging accuracy of the system and the utilization rate of communication resources without consuming excessive communication resources.
Ranging pilot signal
In integrated communication and navigation systems, the Zadoff-Chu sequence, the most widely used CAZAC sequence, is selected as the ranging pilot. For an odd length M it is defined as

a r (k) = e −jπrk(k+1)/M , (1)

where M denotes the length of the Zadoff-Chu sequence, r is an integer relatively prime to M (the root index), and k = 0, 1, . . . , M − 1. A CAZAC sequence is characterized by sharp correlation peaks and zero side lobes, and it remains a CAZAC sequence after a Fourier transform. In addition, a CAZAC sequence has constant amplitude, so its Peak-to-Average Power Ratio (PAPR) is always 0 dB; such sequences therefore reduce the impact of the PAPR on the system, making them a good choice for the design of integrated communication and navigation signals.
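A minimal sketch (assuming NumPy; the odd-length form of the sequence and the length 63 are illustrative choices) of generating a Zadoff-Chu pilot and numerically checking the two CAZAC properties described above:

```python
import numpy as np

def zadoff_chu(M, r=1):
    """Zadoff-Chu sequence of odd length M with root index r coprime to M."""
    k = np.arange(M)
    return np.exp(-1j * np.pi * r * k * (k + 1) / M)

zc = zadoff_chu(63)

# Constant amplitude: every sample has unit magnitude, so PAPR = 0 dB.
assert np.allclose(np.abs(zc), 1.0)

# Zero autocorrelation: the cyclic autocorrelation vanishes at all nonzero lags.
acf = np.fft.ifft(np.fft.fft(zc) * np.conj(np.fft.fft(zc)))
assert np.isclose(abs(acf[0]), 63.0)
assert np.max(np.abs(acf[1:])) < 1e-9
```

The sharp single correlation peak is what makes the sequence suitable for delay estimation at the receiver.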
Generation of the LSCC-BPR signal
Driven by the enhancement of satellite internet and LEO navigation, and to further realize integrated communication and navigation with improved ranging accuracy, this paper proposes a new ranging signal design scheme for LEO satellite communication constellations on top of the OFDM communication technology. The LSCC-BPR signal superimposes an OFDM signal and a CAZAC signal with a block-type structure so that a signal with ranging capability is fused into the OFDM signal. Using part of the subcarriers of the OFDM signal, positioning signals are transmitted to provide communication and high-accuracy positioning services to users while ensuring that the two do not interfere with each other. A block diagram of LSCC-BPR signal generation is shown in Figure 2.
The ranging pilot is integrated with the OFDM signal in a block-type structure, and the LSCC-BPR signal structure differs according to the insertion position of the ranging pilot. This paper focuses on the ranging accuracy of the LSCC-BPR signal when the ranging pilot is inserted at the front, middle, and back of the subcarrier band. The time-frequency structure and the structure diagram of the signal are shown in Figures 3 and 4(a,b,c). As shown in Figures 3 and 4, the difference between the LSCC-BPR signal and the Positioning Reference Signal (PRS) in traditional 4G is that the ranging pilot adopts a block-type structure. It is also integrated with the communication signal, so the on-board ranging signals occupy the same time and frequency resources as the communication information and remain continuous in the time-frequency domain. This ensures normal communication and ranging accuracy while greatly reducing the occupancy of the communication carriers. The realization of the LSCC-BPR signal will not only enable navigation augmentation of medium-orbit and high-orbit satellites, but also enable assisted ground base station positioning, thereby helping to solve indoor positioning problems.
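The block-type pilot insertion of Figure 4 can be sketched as follows (a NumPy illustration; the subcarrier counts, the `position` argument, and the pilot sequence are assumptions made for the example, not the paper's parameters):

```python
import numpy as np

def lscc_bpr_symbol(data, pilot, position="front"):
    """Assemble one OFDM symbol whose subcarriers carry a block-type ranging
    pilot plus communication data, then IFFT to the time domain."""
    N = len(data) + len(pilot)
    grid = np.empty(N, dtype=complex)
    if position == "front":                      # pilot block first (Figure 4(a))
        grid[:len(pilot)] = pilot
        grid[len(pilot):] = data
    elif position == "back":                     # pilot block last (Figure 4(c))
        grid[:len(data)] = data
        grid[len(data):] = pilot
    else:                                        # pilot block in the middle (Figure 4(b))
        half = len(data) // 2
        grid[:half] = data[:half]
        grid[half:half + len(pilot)] = pilot
        grid[half + len(pilot):] = data[half:]
    return np.fft.ifft(grid)

rng = np.random.default_rng(1)
N, M = 64, 32                       # pilot occupies 50% of the subcarriers
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N - M) / np.sqrt(2)
k = np.arange(M)
pilot = np.exp(-1j * np.pi * k * (k + 1) / M)   # length-M ranging pilot

sym = lscc_bpr_symbol(data, pilot, "front")
# the pilot block is recoverable from its subcarriers at the receiver
assert np.allclose(np.fft.fft(sym)[:M], pilot)
```

Because the pilot occupies a contiguous block rather than every subcarrier, the remaining carriers stay free for communication data, which is the resource saving the text describes.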
Traditional OFDM signal ranging system model
In an ideal OFDM signal ranging system, the transmit sequence is modulated onto the different subcarriers by the IFFT, and continuous ranging signals are generated. A block diagram of the OFDM ranging system is shown in Figure 5. The data transmitted by the OFDM ranging system can be expressed as

s(t) = Σ N−1 i=0 X i e j2πit/T , 0 ≤ t < T,

where X i denotes the transmit sequence symbol, and symbols are assigned to all OFDM data; i denotes the i-th data message; N denotes the number of subcarriers; and T denotes the signal duration.
In the OFDM ranging system, the ranging pilot information is allocated to every subcarrier so that the ranging pilot is continuous in the time and frequency domains. This method can be used to perform distance estimation and effectively suppress the fast-fading phenomenon caused by multipath and other factors. However, because the ranging pilot is allocated to every communication subcarrier, it occupies too many communication resources and greatly reduces their utilization. To improve the utilization of communication resources while maintaining the same ranging accuracy, a ranging system model for LSCC-BPR signals based on the original ranging model is proposed to reduce the proportion of communication resources occupied by ranging pilots.
Ranging system model of the LSCC-BPR signal
In integrated communication and navigation systems, communication resources are used to complete ranging tasks. To save communication resources, the LSCC-BPR signal ranging system first uses a block-type structure to send the ranging pilots, modulating the ranging pilot information onto some of the subcarriers. Second, the generated LSCC-BPR signal is used for distance estimation. Sending the ranging pilots in a block-type structure does not occupy all the communication subcarriers while still ensuring the continuity of the pilots in the time and frequency domains. Ranging estimation therefore uses communication resources much more efficiently. The ranging system of the LSCC-BPR signal is shown in Figure 6.
Compared with traditional OFDM, the ranging system based on the LSCC-BPR signal mainly changes the structure of the transmitted ranging pilot by fusing it with the OFDM signal in a block-type structure, so that only a part of the communication subcarriers is occupied; this ensures ranging accuracy while greatly improving the utilization rate of communication resources. Since the ranging system will be applied to LEO satellites, which move quickly and produce large Doppler shifts, the orthogonality of the signal is strongly affected. The processing unit added to the ranging system mainly estimates, predicts, and compensates for the Doppler shift, thus reducing its impact on the ranging accuracy. Therefore, it is necessary to estimate and compensate for the Doppler shift before ranging (Hsu and Jan 2014; Li et al. 2016; Li, Zhao, and Pei 2018; Niu 2020).
LEO satellites usually revolve around the Earth at a constant speed in a circular orbit between 500 km and 2000 km in altitude. The Doppler shift observed at a ground point P can be calculated from the ground position information as

Δf (t) = (f c /c) · ω F (t) r E r sin[φ(t) − φ(t 0 )] η(θ max ) / √(r E 2 + r 2 − 2 r E r cos[φ(t) − φ(t 0 )] η(θ max )), (3)

where f c denotes the carrier frequency, c the speed of light, r E the Earth's radius, r the orbital radius, φ(t) − φ(t 0 ) the estimated angular distance travelled along the satellite's ground track, θ max the maximum elevation angle, η(·) a geometric factor of the maximum elevation angle, and ω F (t) the satellite's angular velocity, which can be approximated as a constant.
The angular velocity can be approximated as ω F ≈ ω S − ω E cos i, where ω S denotes the average angular velocity of the satellite's motion, ω E denotes the angular velocity of the Earth's rotation, and i denotes the inclination of the satellite's orbit. According to Equation 3, the downlink Doppler shift in the L-band can range from −35 kHz to 35 kHz. The motion of the mobile terminal changes the Doppler shift further; even for a high-speed terminal, the maximum Doppler shift produced by the satellite geometry is approximately 36 kHz. Ephemeris data can be used for precompensation before ranging, thus reducing the interference between subcarriers and the impact of the Doppler shift on ranging accuracy.
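The order of magnitude quoted above can be checked with the simple bound f d ≈ f c · v/c (the carrier frequency and line-of-sight velocity below are illustrative values for an L-band LEO downlink, not parameters from the paper):

```python
C = 299_792_458.0   # speed of light, m/s
F_C = 1.6e9         # assumed L-band carrier frequency, Hz
V_REL = 7.0e3       # assumed bound on LEO line-of-sight velocity, m/s

def max_doppler(fc, v_rel, c=C):
    """Upper bound on the Doppler shift: fd = fc * v / c."""
    return fc * v_rel / c

fd = max_doppler(F_C, V_REL)
# fd is about 37 kHz, on the order of the +/-35 kHz range quoted in the text
```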
After Doppler frequency offset compensation, the block-type pilot can occupy subcarriers at different locations; this article discusses only the three forms shown in Figure 4. For Figure 4(a), with the ranging pilots P i placed on the first M subcarriers, the data transmitted by the LSCC-BPR ranging system can be expressed as

s(t) = Σ M−1 i=0 P i e j2πit/T + Σ N−1 i=M X i e j2πit/T , 0 ≤ t < T, (9)

where M denotes the number of ranging pilots, M ≤ N, and T denotes the signal duration. For Figure 4(b) and Figure 4(c), the transmitted data are expressed analogously, with the pilot block placed in the middle and at the end of the subcarrier band, respectively (Equations (10) and (11)). For data transmitted through a Gaussian white noise channel, after a transmission delay τ, the data received at the receiving end can be expressed as

y(t) = A s(t − τ ) + n(t),

where A denotes the amplitude of the data after transmission through the channel, τ denotes the transmission time delay, and n(t) denotes Gaussian white noise.
Ranging algorithm based on the LSCC-BPR signal
Ranging based on the LSCC-BPR signal relies mainly on Time of Arrival (TOA) algorithms, which estimate the time delay of the continuously received signal in the time and frequency domains and then convert it into the corresponding distance value (Li et al. 2022). When the ranging pilot adopts the block-type structure, it not only remains continuous in the time-frequency domain but also greatly saves communication resources. The algorithm flow is shown in Figure 7. As shown in Figure 7, when the ranging system uses the generated LSCC-BPR signal for ranging estimation, integer delay estimation is first performed in the time domain to obtain the integer delay estimate; the fractional delay cannot be estimated there. After the integer delay is estimated, the signal is therefore converted to the frequency domain, where the fractional delay estimate is obtained. Finally, an estimate of the distance is computed.
Time delay estimation in the time domain
The received data corresponding to Equations (9), (10), and (11) are sampled at intervals of T s = T /N. The sampled received data are expressed as y k = y(kT s ), where T s = T /N denotes the sampling interval.
The local data generated at the receiver side are expressed as: where m denotes the sampling start position and τ denotes the transmission delay.
The sampled data are correlated with the local data to obtain the correlation function R(m). The value of m at which R(m) reaches its peak must be determined. The time-domain delay estimate is then an integer number of samples, corresponding to a delay of m T s .
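A minimal sketch of the time-domain integer-delay search (assuming NumPy; the pilot length, delay, and noise level are illustrative values, not the paper's simulation parameters):

```python
import numpy as np

def integer_delay(rx, local):
    """Integer delay estimate: lag of the cross-correlation peak between the
    received samples and the locally generated pilot replica."""
    corr = np.abs(np.correlate(rx, local, mode="full"))
    return int(np.argmax(corr)) - (len(local) - 1)

rng = np.random.default_rng(2)
M = 255
k = np.arange(M)
pilot = np.exp(-1j * np.pi * k * (k + 1) / M)   # local ranging pilot replica

true_delay = 7                                   # integer delay, in samples
rx = np.concatenate([np.zeros(true_delay), pilot])
rx = rx + 0.05 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))

assert integer_delay(rx, pilot) == true_delay
```

The sharp CAZAC correlation peak makes the argmax robust to the added noise; multiplying the estimated lag by T s gives the integer part of the delay.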
Time delay estimation in the frequency domain
Since only integer multiples of the sampling interval can be estimated in the time domain, the fractional part of the time delay is estimated in the frequency domain. The received data in Equations (12), (13), and (14) are sampled at intervals of T s = T /N and expressed as x k and y k , where k denotes the serial number of the sampling point and τ 2 denotes the fractional transmission delay. The FFT is performed separately on the time-domain sample sequences x k and y k . Because of the time delay between the data in the time domain, x k and y k exhibit a phase difference in the frequency domain, and fractional time delay estimation is performed based on this phase difference.
After the integer delay is estimated, the FFT is performed on the time-domain sampled signals x k and y k to obtain the frequency-domain data X(M) and Y(M). The frequency-domain sequences are then conjugate-multiplied, W(M) = Y(M)X*(M), and the phase of W is extracted; φ denotes the estimated difference between the phase offsets of subcarriers separated by L, where L denotes the frequency-domain correlation interval, which can be chosen freely.
The distance is then calculated as d = cτ, where c denotes the speed of light and τ is the total delay obtained by combining the integer and fractional estimates. The LSCC-BPR signal ranging algorithm finds the integer delay estimate in the time domain by searching for the correlation peak of the signal. In the frequency domain, the fractional delay is determined from the phase difference caused by the delay. After joint estimation in the time and frequency domains, a more accurate distance estimate can be obtained.
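The frequency-domain fractional-delay step can be sketched as follows (a noise-free NumPy illustration in units of samples; the signal length, correlation interval L, and delay are example values): a delay of τ samples multiplies subcarrier m by e^{−j2πmτ/N}, so the phase difference between subcarriers L apart is −2πLτ/N, from which τ is recovered.

```python
import numpy as np

def fractional_delay(rx, local, L):
    """Fractional delay (in samples) from the phase ramp across subcarriers:
    a delay tau multiplies subcarrier m by exp(-j*2*pi*m*tau/N), so the phase
    difference between subcarriers L apart is -2*pi*L*tau/N."""
    N = len(local)
    W = np.fft.fft(rx) * np.conj(np.fft.fft(local))
    phi = np.angle(np.sum(W[L:] * np.conj(W[:-L])))  # averaged phase increment
    return -phi * N / (2 * np.pi * L)

N, L, tau = 256, 16, 0.37
k = np.arange(N)
local = np.exp(-1j * np.pi * k * (k + 1) / N)        # local reference signal
# impose a fractional delay of tau samples in the frequency domain
rx = np.fft.ifft(np.fft.fft(local) * np.exp(-1j * 2 * np.pi * k * tau / N))

assert abs(fractional_delay(rx, local, L) - tau) < 1e-6
```

Averaging the conjugate products over all subcarrier pairs separated by L reduces the variance of the phase estimate when noise is present; L must be small enough that |2πLτ/N| stays below π to avoid phase wrapping.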
Distance measurement simulation results and performance analysis
To determine reasonable simulation parameters for the LSCC-BPR signal ranging system and reduce the ranging error, this paper analyses the factors affecting the ranging accuracy. After determining reasonable simulation parameters, a ranging accuracy performance analysis is carried out. In the ranging simulation, the system parameters are bandwidth B = 20 MHz, sampling interval Ts = 0.05 µs, signal duration T = N·Ts, fractional transmission delay τ2, and L/N taking any value between 0.1 and 1.
Influence of m on integer delay estimation
Because the value of m can be an integer or a fraction when sampling, the following simulations analyze the effect of different values of m on time-domain delay estimation. The number of subcarriers is 512, 1024, 2048, or 4096; the integer part of m is taken as 256, 512, 1024, or 2048; and the fractional part is varied from 0.1 to 0.5 to observe the influence of fractional sampling on time-domain delay estimation and thereby determine the admissible fractional range of m. The effect of fractional sampling on delay estimation for 512, 1024, 2048, and 4096 subcarriers is shown in Figure 8(a,b,c,d), respectively.
As shown in Figure 8, regardless of the number of subcarriers, when the fractional part of m is in the range 0.1 to 0.4, the correlation peak in the time-domain estimation is unaffected, and the integer time delay is estimated accurately; the fractional part, however, cannot be estimated. When the fractional part of m is 0.5, the correlation peak is distorted, and the integer time delay can no longer be estimated accurately. Therefore, time-domain estimation is reliable only when the fractional part of the delay lies between 0.1 and 0.4: delays at integer multiples of the sampling interval can then be estimated, whereas a larger fractional component introduces errors into the integer estimate. Thus, fractional time delay estimation must be performed in the frequency domain.
Simulations of time-domain delay estimation are performed for different SNRs; the integer part of m is set to 1000, the fractional part of m ranges from 0.1 to 0.4, and the number of subcarriers is 4096. The results are shown in Figure 9.
As shown in Figure 9, the autocorrelation of the ranging pilot is unaffected across the different SNRs. When the integer part of m is set to 1000, the correlation peak appears at position 3096; that is, the integer component of the delay is 1000 sampling intervals. Therefore, when frequency-domain delay estimation is performed, the time-domain estimate contributes no error, and the ranging error stems mainly from the frequency-domain estimation. Equation (24) shows that L is a factor affecting the frequency-domain delay estimate: when L is too small, the fractional delay estimate may exceed the actual value, and when L is too large, it may be much smaller than the real value. It is therefore important to choose L appropriately for frequency-domain delay estimation. This requires assessing the impact of different values of L on the ranging accuracy in the simulation stage, and then fixing L to provide reasonable parameter support for the subsequent simulation experiments.
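One way to see the trade-off in L (my interpretation, not the paper's derivation): the phase accumulated across L subcarriers is 2πLτ2/(N·Ts) and must stay within ±π to avoid wrapping, so the unambiguous fractional-delay range shrinks as L grows, while a small L amplifies phase noise in the estimate. The bound can be tabulated directly:

```python
# Unambiguous fractional-delay bound |tau2| < N*Ts/(2L) for several L/N choices
N, Ts = 4096, 0.05e-6   # subcarriers and sampling interval from the simulation

for L_over_N in (0.1, 0.55, 1.0):
    L = int(L_over_N * N)
    tau_max = N * Ts / (2 * L)   # phase across L bins wraps beyond this delay
    print(f"L/N={L_over_N}: |tau2| < {tau_max / Ts:.2f} * Ts")
```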
Influence of the frequency-domain correlation interval L on the ranging accuracy
According to the above analysis, the fractional transmission delay chosen for the simulation is τ2 = 0.4 × Ts = 0.02 µs. To determine the effect of the frequency-domain correlation interval L on the accuracy of the frequency-domain estimation, the ranging error is simulated for different values of L/N at SNRs of −5 dB, −10 dB, and −15 dB, and the appropriate value of L/N is identified. The relationship between L/N and the ranging error for a non-block-type structure at different SNRs is shown in Figure 10.
As shown in Figure 10, for a fixed value of L/N, the smaller the SNR, the larger the error. For a given SNR, as the value of L/N increases, the ranging error remains relatively constant but is still affected by noise. When the SNR is low (i.e., the signal is noisy), the error increases, but the choice of the best L/N value is largely unaffected. At an SNR of −15 dB and L/N between 0.3 and 0.7, the ranging error fluctuates little, staying at 3 to 4 m; here the L/N value with the least influence on the range is 0.51. At an SNR of −10 dB and L/N between 0.3 and 0.7, the ranging error is lowest, at approximately 1 m; here the best L/N is 0.62. At an SNR of −5 dB, the ranging error stays below 1 m for all values of L/N; here the best L/N is 0.53. Overall, across SNRs, the ranging error is lowest at L/N = 0.55. Therefore, when the block-type pilot is used for ranging, L/N = 0.55 is selected in the simulations.
Influence of the number of subcarriers on the ranging accuracy
The bandwidth, sampling interval, and fractional time delay are the same as those in the above simulation, and L/N = 0.55. For the OFDM ranging signal, when the pilot signal is assigned to every subcarrier, the ranging accuracy at different SNRs is evaluated for various numbers of subcarriers: N = 4096, 2048, 1024, and 512. The simulation results are shown in Figure 11. As shown in Figure 11, under otherwise identical conditions, the ranging error increases as the SNR decreases, and, for the same SNR, the ranging accuracy improves as the number of subcarriers increases. Therefore, to ensure high ranging accuracy, 4096 subcarriers are chosen for the simulation with the block-type structure for range measurement.
Ranging simulation based on the LSCC-BPR signal
The following simulations are based on the ranging algorithm for LSCC-BPR signals.
Simulation of ranging accuracy with different numbers of pilots at the same position
The value of L/N is 0.55; the bandwidth B = 20 MHz; the number of subcarriers N = 4096; the sampling interval Ts = 0.05 µs; the signal duration T = N·Ts = 204.8 µs; the fractional transmission delay τ2 = 0.4 × Ts = 0.02 µs; and the number of ranging pilots is 2048, 1024, or 512. The ranging accuracy is evaluated for different SNRs, with the ranging pilot information inserted into the front, middle, and back of the subcarriers. The ranging accuracies for 2048, 1024, and 512 pilots at different SNRs are shown in Figures 12, 13, and 14, respectively.
As shown in Figure 12, at a given SNR the ranging error is lowest when the number of pilots is 2048; for a fixed number of pilots, the ranging accuracy degrades as the SNR decreases. Across the different pilot counts, the ranging errors are essentially the same at an SNR of 0 dB; at this point, with 2048 pilots, the ranging accuracy reaches 1 m, essentially matching the accuracy obtained without the block-type ranging pilots, while the utilization rate of communication resources is improved by 50%. As shown in Figure 13, when the pilots are inserted into the middle of the subcarriers, the ranging error decreases steadily as the number of pilots increases at a given SNR; for a fixed number of pilots, the accuracy degrades as the SNR decreases. At an SNR of 0 dB, the ranging error is below 3 m regardless of the number of pilots; with 2048 pilots, the accuracy reaches 0.8 m, the same as without block-type ranging pilots, and the utilization rate of communication resources is improved by half.
As shown in Figure 14, when the ranging pilots are inserted into the back of the subcarriers, the main factors affecting the ranging error are still the SNR and the number of ranging pilots, which is basically the same trend as that shown in Figures 8 and 9. At an SNR of 0 dB, the ranging error is within the acceptable range for all pilot counts, and the ranging pilots occupy only a few communication subcarriers, thereby greatly improving the utilization of communication resources.
Simulation of ranging accuracy for the same number of pilots at different positions
Parameters, such as bandwidth, sampling interval, and fractional delay, are the same as in the above simulation. When the range pilot information is inserted into the front, middle, and back of the subcarrier, range accuracy simulations are carried out at different locations with different SNRs for the same number of pilots. The ranging accuracies at different SNRs for 2048, 1024, and 512 ranging pilots are shown in Figures 15, 16 and 17, respectively.
As shown in Figure 15, across SNRs the ranging accuracy is essentially independent of where the pilot data are inserted and remains almost constant. Between 0 dB and −10 dB, the accuracy at the different positions is much the same; between −10 dB and −15 dB, the error is slightly lower when the pilots are inserted at the front of the subcarriers, but the differences among the curves are small; between −15 dB and −20 dB, the differences likewise remain small. Therefore, with 2048 pilots, the ranging error at the different positions is essentially the same, and the insertion position has little influence on the ranging error. At an SNR of 0 dB, the ranging error at the different positions reaches 0.8 m, communication resource utilization is improved by 50%, and the accuracy matches that obtained without the block-type ranging pilots. As shown in Figure 16, with 1024 pilots, the insertion position likewise has little effect on the ranging accuracy as the SNR changes, and the ranging errors are approximately the same. At 0 dB, the error reaches 1 m for all insertion positions; communication resource utilization increases by 75%, and the accuracy is essentially the same as that without the block-type structure.
As shown in Figure 17, with 512 pilots the ranging accuracy is reduced compared with 2048 and 1024 pilots at all SNRs, while the insertion position again has little effect on the accuracy. At an SNR of 0 dB, the ranging error reaches 2 m for all insertion positions, with the pilots occupying only 12.5% of the subcarriers. Clearly, at low SNRs the ranging error is dominated by noise and falls outside the acceptable range.
Effect of Doppler frequency bias on ranging accuracy
The bandwidth, sampling interval, and fractional time delay are the same as in the above simulations, and a ±36 kHz Doppler shift is added. The ranging accuracy is simulated at the same insertion position for different SNRs with 2048, 1024, and 512 ranging pilots. The simulated effect of the Doppler shift on the ranging accuracy is shown in Figure 18.
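Compensation of this kind amounts to de-rotating the received samples by the estimated frequency offset before correlation. The noiseless sketch below (assumed parameters) shows that a 36 kHz shift over a 4096-sample pilot at Ts = 0.05 µs spans several full phase turns, suppressing the correlation peak, and that de-rotation restores the integer-delay estimate.

```python
import numpy as np

N, Ts, fd = 4096, 0.05e-6, 36e3          # pilot length, sample time, Doppler shift
rng = np.random.default_rng(3)
pilot = rng.choice([-1.0, 1.0], size=N).astype(complex)

m_true = 700
n = np.arange(N + m_true)
rx = np.concatenate([np.zeros(m_true, complex), pilot])
rx = rx * np.exp(2j * np.pi * fd * n * Ts)   # Doppler rotation from the channel

def peak_lag(local, received):
    corr = np.abs(np.correlate(received, local, mode="full"))
    return int(np.argmax(corr)) - (local.size - 1)

uncompensated = peak_lag(pilot, rx)          # peak suppressed, estimate unreliable
compensated = peak_lag(pilot, rx * np.exp(-2j * np.pi * fd * n * Ts))
print(compensated)  # -> 700
```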
As shown in Figure 18, when a Doppler shift is present in the ranging system, the ranging error is reduced by a certain margin once the Doppler shift is compensated. As the number of ranging pilots decreases, the effect of the Doppler shift on the ranging accuracy increases. At an SNR of 0 dB with 2048 ranging pilots, the ranging error increases from 0.8 m to approximately 3 m because of the effect of the Doppler shift on the phase of the LSCC-BPR signal. Therefore, Doppler shift compensation should be performed before ranging to reduce the effect of the Doppler shift on ranging.

Simulation of ranging accuracy for the traditional OFDM signal and the LSCC-BPR signal

Figures 15, 16, and 17 show that, with the block-type pilot and 2048 pilots, the ranging accuracy can match that obtained without the block pilot. Therefore, the ranging accuracies of the traditional OFDM signal and the LSCC-BPR signal are compared, and the ranging accuracy and the carrier occupancy rate are analyzed. With 4096 and 2048 subcarriers and 4096 and 2048 pilots, the simulated ranging accuracy with and without the block-type structure is shown in Figure 19.
As shown in Figure 19, with 4096 subcarriers, the ranging accuracy based on the traditional OFDM signal is compared with that based on the LSCC-BPR signal; at 0 dB the accuracies are essentially the same, while the ranging pilots occupy only 50% of the communication resources, thereby reducing the carrier occupancy rate. With 2048 subcarriers, as the SNR changes, the ranging accuracies with the LSCC-BPR signal remain essentially the same. At 0 dB, using the block-type structure for ranging can therefore improve the utilization rate of communication resources while maintaining the ranging accuracy.
Comparison of ranging based on the CSS signal and LSCC-BPR signal
The Chirp Spread Spectrum (CSS) signal technique is a common spread-spectrum technique and an important way to achieve TOA localization (Jiang 2018; Daniel et al. 2021). The principle is to compress the pulse of a linear Frequency Modulation (FM) signal using a matched-filtering technique similar to autocorrelation; the signal forms an energy peak at one moment, and the time of signal reception is determined from that peak to achieve ranging (Qian, Ma, and Liang 2019; Sha 2016; Wu 2013). The simulation parameters are consistent with those above: the integer time delay is 50 µs, the fractional time delay is 0.02 µs, and the CSS signal is simulated for ranging. The estimated time delay of the CSS signal at an SNR of 0 dB is shown in Figure 20.
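A minimal matched-filter (pulse-compression) demonstration of CSS-style TOA ranging, with assumed parameters and not the paper's simulation code:

```python
import numpy as np

fs = 20e6                       # sample rate (Hz); assumed demo parameter
T = 50e-6                       # chirp duration (s)
B = 10e6                        # swept bandwidth (Hz); assumed
t = np.arange(int(T * fs)) / fs
tx = np.exp(1j * np.pi * (B / T) * t**2)      # linear-FM (chirp) pulse

m_true = 1000                                  # 50 us at fs -> 1000-sample delay
rng = np.random.default_rng(2)
rx = np.concatenate([np.zeros(m_true, complex), tx])
rx = rx + 0.5 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))

# pulse compression: matched filter = correlation with the known chirp
corr = np.abs(np.correlate(rx, tx, mode="full"))
m_hat = int(np.argmax(corr)) - (tx.size - 1)
print(m_hat / fs * 1e6)   # estimated delay in microseconds -> 50.0
```

Consistent with the text, the peak location resolves the delay only to the nearest sample (here 1/fs = 0.05 µs), so a 0.02 µs fractional component is not recovered.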
As shown in Figure 20, at an SNR of 0 dB, the integer time delay can be accurately read off as 50 µs from the location of the peak, but the fractional time delay is not estimated. Because the time resolution is limited by the bandwidth, the fractional delay estimation is affected.
When the SNR is 0 dB, Table 2 compares the ranging performance of the CSS and LSCC-BPR signals. As Table 2 shows, the two signals have their own advantages and disadvantages in ranging; to save space-frequency resources, the LSCC-BPR signal can be selected for ranging.
LSCC-BPR signal resource utilization analysis
In ranging based on the new OFDM ranging signal, the ranging pilots adopt the block-type structure and are inserted into the subcarriers at different positions to complete the ranging task. The ranging pilots then occupy only part of the subcarriers, and the remaining empty subcarriers transmit other communication information. The proportions of subcarriers occupied by pilot data for 512, 1024, and 2048 pilots are shown in Figure 21. The ranging accuracy will still be affected by the Doppler shift. Since the position of pilot insertion does not affect the ranging accuracy, the above simulation results show that, at an SNR of 0 dB, the ranging accuracy improves as the number of pilots increases. Table 3 shows the ranging accuracy at an SNR of 0 dB for different numbers of pilots.
As shown in Figure 21 and Table 3, the fewer subcarriers the ranging pilots occupy, the more subcarriers are available to transmit communication information, the lower the carrier occupancy rate, and the higher the communication resource utilization. When the pilot data occupy 50% of the subcarriers, the ranging accuracy is essentially the same as when all subcarriers are occupied by pilots. Thus, while the ranging accuracy is maintained, the share of the communication carriers taken up by ranging pilots is reduced, and communication resource utilization is greatly improved.
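The occupancy proportions in Figure 21 follow directly from the pilot counts over N = 4096 subcarriers; a quick check:

```python
# Subcarrier occupancy for each pilot count (N = 4096 total subcarriers)
N = 4096
for pilots in (512, 1024, 2048):
    occupied = pilots / N
    free_for_data = 1 - occupied
    print(f"{pilots} pilots: {occupied:.1%} of subcarriers for ranging, "
          f"{free_for_data:.1%} free for communication data")
```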
Conclusions
This article has proposed the design of the LSCC-BPR signal and, based on this signal, analyzed its impact on ranging accuracy and resource utilization. The LSCC-BPR signal combines the block-type ranging pilot with the OFDM signal, which can not only complete the ranging task but also transmit communication information normally. Compared with the traditional ranging signal, the ranging information in the LSCC-BPR signal occupies only part of the communication resources, thereby greatly improving their utilization rate. The simulation results show that, with the LSCC-BPR ranging system, 2048 ranging pilots, and an SNR of 0 dB, the ranging accuracy is the same as that of a traditional OFDM ranging system, reaching 0.8 m while occupying only 50% of the communication subcarriers and thereby greatly reducing the occupancy rate of the communication carriers. The LSCC-BPR signal can not only help alleviate the shortage of frequency resources but also assist the navigation system in high-precision positioning. It is therefore applied to the LEO satellite constellation to provide inspiration for the further integration of communication and navigation.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes on contributors
Jingfang Su is a postgraduate in Information Science and Engineering, Hebei University of Science and Technology. Her research interests are wireless communication technology, integration of communication and navigation, and LEO navigation.
Jia Su is an associate professor in the School of Information Science and Engineering, Hebei University of Science and Technology.

Figure 21. Proportions of pilots, information, and subcarriers.
Data availability statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Clinical implications of lymphadenectomy for invasive ductal carcinoma of the body or tail of the pancreas
Abstract

Aim: The appropriate extent of lymphadenectomy for pancreatic cancer of the body/tail has not been standardized worldwide. The present study evaluated the optimal extent of harvesting lymph nodes.

Methods: Patients who underwent distal pancreatectomy for invasive ductal carcinoma of the pancreas between 2007 and 2018 were retrospectively reviewed. Patients were subclassified into three groups depending on the tumor location: pancreatic body (Pb), proximal pancreatic tail (Ptp), and distal pancreatic tail (Ptd). The pancreatic tail was further divided into even sections of Ptp and Ptd. Patterns of lymph node metastasis and the impact of lymph node metastasis on the prognosis were examined.

Results: A total of 120 patients were evaluated. Fifty-eight patients had a tumor in the Pb, 38 in the Ptp, and 24 in the Ptd. No patients with a Ptd tumor had metastasis beyond the peripancreatic and splenic hilar lymph nodes (LN-PSH). All patients with metastasis to the lymph nodes along the common hepatic artery (LN-CHA) or along the left lateral superior mesenteric artery (LN-SMA) also had metastasis to the LN-PSH. Recurrence after surgery occurred significantly earlier in this population. In a multivariate analysis, metastasis to the LN-CHA or LN-SMA (hazard ratio [HR] 3.3; P = .04) was an independent risk factor for overall survival. Furthermore, a high level of preoperative serum CA19-9 (HR 10.9; P = .013) was a predictive factor for metastasis to the LN-CHA or LN-SMA.

Conclusions: Metastasis to the LN-CHA or LN-SMA was rare but a significant prognostic factor in patients with pancreatic body/tail cancer.
| INTRODUCTION
Lymph node status is well known to be a significant prognostic marker in patients with pancreatic cancer. [1][2][3][4][5] Pancreatectomy with lymphadenectomy has been the standard procedure for treating pancreatic cancer. [6][7] However, the optimal extent of lymphadenectomy has been controversial. Based on previous randomized controlled trials, extended lymphadenectomy with pancreatoduodenectomy has not been recommended for pancreatic head cancer. [8][9][10][11][12][13][14] Particularly for patients with adenocarcinoma in the body or tail of the pancreas, few studies have focused on the influence of lymph node involvement on the prognosis.
The recommended extent of lymph node dissection during distal pancreatectomy (DP) for pancreatic cancer differs somewhat between the seventh edition of the rules of the Japan Pancreas Society (JPS) 15 and the consensus statement by the International Study Group on Pancreatic Surgery (ISGPS). 7 The JPS recommends harvesting lymph nodes along the common hepatic artery and the celiac axis for both pancreatic body and tail cancers. In contrast, the ISGPS recommends that the lymph nodes around the celiac axis be resected, particularly when the tumor is close to the celiac axis in the body of the pancreas, and that the lymph nodes along the common hepatic artery not be dissected for pancreatic body or tail cancers, as resection of these lymph nodes has been considered to constitute extended lymphadenectomy. 7 Clarifying the incidence of metastasis in a specific regional lymph node station and the impact of lymph node metastasis on the prognosis has proven useful for understanding the patterns of tumor spread and examining the extent of lymph node dissection. However, to our knowledge, few studies have investigated the rates of lymph node metastasis, especially for distal pancreatic cancer. [16][17] The present study evaluated the patterns of lymph node metastasis in patients with pancreatic cancer in the body or tail and assessed the validity of the current extent of lymphadenectomy during DP.
| Patients
From January 2007 to December 2018, 305 consecutive patients underwent DP, including 17 who underwent DP with celiac axis resection (DP-CAR), in Shizuoka Cancer Center, Japan. Among them, 135 patients who were histologically proven to have invasive ductal carcinoma of the pancreatic body or tail were included in this study. Of these, patients who underwent R2 resection (n = 1), those who underwent DP as total remnant pancreatectomy (n = 9), and those with double cancers (n = 5) were excluded from this study.
Ultimately, 120 patients were included as subjects in this study. The clinical data of these patients were obtained from a prospectively collected database.
This study was approved by the Institutional Review Board of the Shizuoka Cancer Center (approval number: J2020-164-2020-1-3).
| Surgical procedures
All surgical procedures were performed with an open approach.
No laparoscopic surgery was conducted during the study period.
Peritoneal lavage cytology and sampling of the para-aortic lymph nodes were performed after laparotomy. If unresectable factors were found, the planned procedure was abandoned. The surgical procedures performed for DP and DP-CAR were described previously. 19 Indications for DP-CAR in our institution included (a) involvement of the celiac axis while the aorta, superior mesenteric artery, and gastroduodenal artery remained free from the tumor; or (b) preserving the splenic artery root being technically or oncologically difficult. 19 To achieve complete lymph node dissection around the splenic artery and the splenic hilum, the spleen was routinely resected in both procedures. The extent of lymph node dissection was either equal to or greater than that recommended by the ISGPS. 7 In detail, the extent of dissection, including the lymph nodes along the common hepatic artery, is illustrated in Figure 1A. The intraoperative histological evaluation of the stump of the pancreas was always performed by pathologists to ensure that the surgical margin remained negative for cancer cells.
| Histological evaluation and numbering of lymph nodes
A histological assessment was carried out by at least two specialists.
| Subclassification of the tumor location
A schematic illustration of the subclassification of the tumor location is also described in Figure 1B. Tumors located at the tail of the pancreas were classified into two groups: proximal pancreatic tail (Ptp) and distal pancreatic tail (Ptd). The boundary between Ptp and Ptd was defined as the line that equally divided the left border of the abdominal aorta and the end of the pancreatic tail. If the tumor was located in more than two areas, classification was performed according to the location of the center of the tumor.
Preoperative computed tomography (CT) images were used for this analysis.
| Statistical analyses
Categorical variables were compared using the chi-square test or Fisher's exact test, as appropriate. Continuous variables were compared using the Mann-Whitney U-test. Survival was analyzed using Kaplan-Meier curves and the log-rank test. The optimum cutoff values of each continuous parameter for the overall survival (OS) and for predicting metastasis to the LN-CHA or LN-SMA were determined using the minimum P values calculated with the log-rank test. In particular, for the tumor markers, the cutoff values were 15.0 ng/mL for CEA (P = .0029) and 400 U/mL for CA19-9 (P = .00047) (Figure S1A, B). Hazard ratios were estimated by univariate and multivariate survival analyses using the Cox regression model. Variables with P < .05 in the univariate log-rank test were further explored in the multivariate setting. Differences were considered statistically significant at P < .05. All analyses were performed using the SPSS software program, v. 25.0 (IBM, Armonk, NY, USA).
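The minimum-P cutoff search can be sketched generically. The study used SPSS; the snippet below is only an illustration on synthetic data, exploiting the fact that minimising the log-rank P value (1 degree of freedom) is equivalent to maximising the log-rank chi-square statistic. All names and data here are hypothetical.

```python
import numpy as np

def logrank_stat(t1, e1, t2, e2):
    """Two-group log-rank chi-square statistic (1 df).

    Minimising the log-rank P value is equivalent to maximising this
    statistic, which avoids needing a chi-square CDF inside the scan.
    """
    if len(t1) == 0 or len(t2) == 0:
        return 0.0
    t = np.concatenate([t1, t2])
    e = np.concatenate([e1, e2]).astype(bool)
    g1 = np.concatenate([np.ones(len(t1), bool), np.zeros(len(t2), bool)])
    O = E = V = 0.0
    for ti in np.unique(t[e]):          # distinct observed event times
        at_risk = t >= ti
        n = at_risk.sum()
        n1 = (at_risk & g1).sum()       # group-1 patients still at risk
        d = ((t == ti) & e).sum()       # events at this time
        d1 = ((t == ti) & e & g1).sum()
        O += d1
        E += d * n1 / n
        if n > 1:
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (O - E) ** 2 / V if V > 0 else 0.0

# synthetic data: survival is worse when a hypothetical marker exceeds 400
rng = np.random.default_rng(0)
marker = rng.uniform(0, 1000, size=100)
time = np.where(marker > 400, rng.exponential(12, 100), rng.exponential(40, 100))
event = np.ones(100)                    # all deaths observed, no censoring

cutoffs = np.arange(100, 801, 50)
stats = [logrank_stat(time[marker <= c], event[marker <= c],
                      time[marker > c], event[marker > c]) for c in cutoffs]
best_cutoff = int(cutoffs[int(np.argmax(stats))])
print(best_cutoff)
```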
| RESULTS
Patients' demographics and operative characteristics are summarized in Table 1. Fifty-eight patients had tumors in the Pb, 38 in the Ptp, and 24 in the Ptd. Patients with tumors in the Ptd were younger than those with tumors in the Pb (P < .05). All patients with tumors in the Ptd had resectable lesions and underwent DP. DP-CAR was performed in 17 patients with tumors in the Pb or Ptp. There were no other significant differences among these three groups.
No significant difference was shown in the OS and the diseasefree survival (DFS) for patients in the Pb, Ptp, and Ptd groups ( Figure S2). Pathologic characteristics are also shown in Table 1. Nodal involvement was observed in 64 (53%) patients. The median number of examined regional lymph nodes was 16. R1 resection was
| Prognostic factors for OS and DFS
Multivariate analyses revealed that lymph node metastasis to the LN-CHA or LN-SMA, serosal invasion, portal venous system invasion, and a lack of adjuvant chemotherapy were risk factors for OS (Table 3).
Similarly, a high level of serum CA19-9, large tumor, lymph node metastasis, portal venous system invasion, and no adjuvant chemotherapy were shown to be risk factors for DFS by multivariate analyses (Table S1).
| Predictive factors for metastasis to the LN-CHA or LN-SMA
Univariate analysis showed that high levels of preoperative serum CA19-9 were a predictive factor for lymph node metastasis to the LN-CHA or LN-SMA ( Table 4).
| DISCUSSION
LN-CHA and LN-SMA are considered appropriate for dissection, regardless of tumor location, according to the classification of pancreatic carcinoma in Japan. 15 However, few studies have described the metastasis rate of those stations and the effect of dissection of those lymph nodes, especially for pancreatic tail cancer. [16][17] This study describes the patterns of lymph node metastasis for patients with pancreatic body/tail cancer who underwent DP.
Specifically, it revealed that LN-CHA and LN-SMA metastasis was rare but still a significant prognostic factor in patients with pancreatic body/tail cancer. According to the mapping of the meta- cancer without lymph node metastasis as a preoperative diagnosis, a low extent of lymphadenectomy has been recommended. 23 For breast cancer without clinically lymph node metastasis, as confirmed by a sentinel node biopsy, axillary lymph node dissection has been omitted. 24 These treatments have been supported by an accurate diagnosis for tumor staging. Regarding pancreatic cancer, in general, the concept of the sentinel lymph node hypothesis has not been adopted, and a preoperative diagnosis for staging is sometimes difficult to make, compared to cases of stomach or breast cancer. Further advances in imaging studies along with the accumulation of evidence will help resolve this issue.
The pancreatic resection line during DP is determined by con- surgery. This might also be associated with our institutional policy, where the LN-SMA is usually dissected only in cases with Pb tumors. Thus, given these potential biases, we recognize that we cannot draw any absolute conclusions from these data. To confirm the current results, a further multicenter study including data from high-volume centers should be conducted. Nevertheless, we believe that the results of the study will help refine classical procedures.
In conclusion, metastasis to the LN-CHA or LN-SMA was rare but still a significant prognostic factor in patients with pancreatic body/tail cancer.
Biallelic mutations in NRROS cause an early onset lethal microgliopathy.
Microglia are tissue-resident macrophages playing essential roles in central nervous system development and homeostasis. The importance of microglia for brain health in humans has been highlighted by the definition of Mendelian disorders associated with dysfunction of microglia-related proteins. These microgliopathies comprise a diverse set of neurological phenotypes, including disease due to mutations in CSF1R, DAP12 and TYROBP/TREM2, USP18, and IRF8. Here, we describe a novel early onset lethal encephalopathy due to mutations in the microglial-associated protein NRROS.
Microglia are tissue-resident macrophages playing essential roles in central nervous system (CNS) development and homeostasis [14,17]. The importance of microglia for brain health in humans has been highlighted by the definition of Mendelian disorders associated with dysfunction of microglia-related proteins. These so-called microgliopathies [20] comprise a diverse set of neurological phenotypes including disease due to mutations in CSF1R [5,8,13], DAP12 and TYROBP/TREM2 [4,7], USP18 [3,11,16,18], and IRF8 [2,6]. Here, we describe a novel early onset lethal encephalopathy due to mutations in the microglial-associated protein NRROS.
We ascertained three patients demonstrating a stereotyped clinical and neuroradiological phenotype (Supplementary material). Patients 1 (P1) and P2, both females, were the first and third children of non-consanguineous parents of Maori descent (family F1442), whilst P3 (family F2382), a male, was the first child of first cousin south Asian parents (Supplementary Figure 1). All three children were born after a normal pregnancy and delivery, and early development was unremarkable. However, in the second year of life, they experienced the onset of refractory seizures and neurodegeneration, leading to death between the ages of 27 and 36 months. Metabolic testing, including for mitochondrial dysfunction, was non-contributory. Neuroimaging initially demonstrated fine calcification at the depths of the cerebral gyri, with normal white matter (Fig. 1). As disease progressed, repeat imaging revealed increased calcification, severe generalized atrophy with ventricular dilatation, and diffuse signal changes in cerebral and cerebellar white matter.
Exome sequencing identified homozygous NRROS variants in the affected children from both families: a c.1777C > T/p.(Gln593*) and a c.1257del/p.(Gly420AlafsTer14) in F1442 and F2382, respectively. Cellular material was not available from any of the patients. However, both of these variants are predicted to result in a truncated protein, and both are very rare, with the p.(Gln593*) not previously recorded, and the p.(Gly420AlafsTer14) reported on only 1 of 251,438 alleles on gnomAD.
Detailed pathological examination was undertaken on P3, demonstrating abnormalities confined to the CNS. Gross examination indicated a significant cerebral atrophy (Supplementary Figure 2). Histologically, there was both grey and white matter pathology throughout the cerebrum, cerebellum, and brainstem. Focal calcification was noted in the neuropil. There was widespread neuronal loss with reactive gliosis throughout the grey matter (Supplementary Figure 3). The most striking pathological finding was the accumulation of foamy macrophages, predominantly in a perivascular distribution, throughout the white matter, extending from frontal to occipital white matter ( Fig. 2a-c), and through cerebellar white matter and descending corticospinal pathways in the basis pontis. These foamy cells immunoreacted with CD68, MHC Class II (CR3/43), and p22phox ( Fig. 2c-f), but did not immunoreact with CD163, Iba1, NRROS, CD3, P2Y12, or TMEM119 (Supplementary Figure 3). There was reduced myelin basic protein (MBP) expression in the white matter ( Fig. 2g) compared to age-matched controls, although there was preservation of U fibers. Occasional axonal spheroids were noted, albeit this was not a prominent feature (Supplementary Figure 3).
We assessed the cellular expression of NRROS, and the mouse homolog Nrros (Lrrc33), in human and mouse brain respectively, by mining curated transcriptomic data sets (Supplementary Figure 4). In fresh post-mortem human cortical microglia and brain samples, NRROS was highly expressed in isolated microglia, although less abundantly than established microglial signature genes. The expression of NRROS was enriched > 50-fold in microglia compared to whole brain, indicating that microglia are the major cell type expressing NRROS in human brain parenchyma. A similar pattern of highly enriched expression of Lrrc33/Nrros was observed in CD11b + microglia/macrophages in mouse brain relative to brain extracts, and in microglia versus other parenchymal cell types. Comparison of parenchymal microglia with CNS perivascular macrophages (PVMs) showed significantly greater expression in the latter.
The clinical features observed in our patients recapitulate those in mice with Nrros/Lrrc33 deficiency. Nrros−/− mice exhibited progressive neurological decline, including motor defects and abnormal locomotor activity, from age 2-3 months and death by 6 months of age [15,21]. Neuropathology in these mouse models includes neuronal loss, demyelination, axonal pathology, astrogliosis, and the increased presence of foamy macrophages, all of which were seen in our case. Of note, there was no indication of immune-mediated inflammation in our case or either of these mouse models.
NRROS is a leucine-rich repeat containing transmembrane protein localized to the endoplasmic reticulum, and preferentially expressed in myeloid cells. Reported functions include the regulation of reactive oxygen species (ROS) production through control of NOX2 stability [12], responsiveness of Toll-like receptor signaling [19], and processing/activation of transforming growth factor (TGF)-β via physical interactions with the latent complex [10,15]. NRROS expression is restricted to microglia within the CNS parenchymal compartment in humans and mice. The present case showed disruption to the distribution, density, and cell morphology of IBA1 cells alongside loss of P2Y12 staining and weak TMEM119 immunoreactivity, indicative of marked parenchymal microglial abnormalities. Both Nrros−/− mouse studies observed a loss of homeostatic gene expression profile which included suppression of P2ry12 and Tmem119 expression, and a shift towards a phenotype resembling PVMs [15,21]. Although Nrros is expressed in peripheral mononuclear cells, a series of crosses and bone marrow transplant experiments showed a negligible contribution of peripheral macrophages to the onset of the Nrros−/− phenotype [21]. Of note, selective deletion of Nrros in microglia during pregnancy indicated a cell-intrinsic role for NRROS. In contrast, Nrros deletion induced in 3-week-old mice did not cause neuropathological changes or neurological abnormalities [21], implying that NRROS is important during microglial establishment at embryonic/ postnatal stages, but may be dispensable for maintenance of adult microglia. Functions of NRROS proposed above, notably in ROS and TGFβ regulation, may be important in disease pathogenesis. p22phox was markedly up-regulated in PVMs in our case, suggesting that an absence of functional NRROS may result in increased p22phox-NOX2 binding, with the potential for increased superoxide radical formation. 
However, a cross of Nrros−/− and Cybb−/− (encoding NOX2) mice did not rescue the Nrros−/− phenotype [21]. Mice with CNS or microglial-restricted disruption during development of other key nodes in the TGFβ activation/ signaling pathway, including deletion of αVβ8 integrin or TGFBR2 [1], develop highly similar pathological, microglial, and neurological abnormalities to Nrros−/− mice. Moreover, human TGFβ1 loss-of-function mutations causing early onset leukoencephalopathy were described recently [9].
Taken together with the mouse data, our findings indicate that NRROS is indispensable in controlling the early development of a homeostatic microglial population and/or its ongoing preservation in the postnatal brain, thereby suggesting a loss of NRROS function as a novel microgliopathy in humans.
Abstraction and Idealization in Biomedicine: The Nonautonomous Theory of Acute Cell Injury
Neuroprotection seeks to halt cell death after brain ischemia and has been shown to be possible in laboratory studies. However, neuroprotection has not been successfully translated into clinical practice, despite voluminous research and controlled clinical trials. We have suggested these failures may be due, at least in part, to the lack of a general theory of cell injury to guide research into specific injuries. The nonlinear dynamical theory of acute cell injury was introduced to ameliorate this situation. Here we present a revised nonautonomous nonlinear theory of acute cell injury and show how to interpret its solutions in terms of acute biomedical injuries. Solutions of the theory demonstrate the complexity of possible outcomes following an idealized acute injury and indicate that a "one size fits all" therapy is unlikely to be successful. This conclusion is offset by the fact that the theory can (1) determine whether a cell has the possibility of surviving a specific acute injury, and (2) calculate the degree of therapy needed to cause survival. To appreciate these conclusions, it is necessary to idealize and abstract complex physical systems to identify the fundamental mechanism governing the injury dynamics. The path of abstraction and idealization in biomedical research opens the possibility for medical treatments that may achieve engineering levels of precision.
Introduction
Many important clinical conditions continue to elude effective treatments. Stroke is a notable example where over 100 clinical trials have failed to find a means to prevent cell death by neuroprotection [1]. Similar strings of clinical trial failure have occurred with diseases such as myocardial infarction [2] and acute nephrotic ischemia [3]. We have argued that a key factor behind these failures is the lack of a general theory of biological cell injury.
We therefore introduced a nonlinear dynamical theory of acute cell injury [4]. However, the original form of the theory possessed limitations as detailed in Ref. [5]. This led us to reformulate the theory, which was introduced elsewhere [5] but for which we here provide a deeper analysis. Technically, the original theory consisted of a system of autonomous nonlinear ordinary differential equations. The reformulation of the theory consists of a system of nonautonomous, nonlinear differential equations. The technical mathematical differences impact how the equations are solved and how the resulting solutions are interpreted in terms of acute cell injury. It is the purpose of this paper to illustrate our procedure for solving and interpreting the solutions of the nonautonomous theory of acute cell injury.
As the title indicates, constructing and understanding the theory, whatever mathematical form it takes, requires abstracting and idealizing the real world. Biology and the subdisciplines of biomedicine are generally descriptive. Advanced mathematics are not widely used in these sciences. On the other hand, the sciences that have allowed the most effective development of technology, notably physics, do not seek to literally describe its subject matter. Instead, physics idealizes and abstracts reality to mathematically formulate how things change. The second major aspect of the present work is to illustrate how idealization and abstraction can be used in biomedical research to model acute injury to biological cells.
In this regard, a key idea we wish to convey is that mathematically abstracting a system allows all possible states of the system to be understood. Experiments usually involve high costs of time and resources and can only measure a finite number of parameter combinations. On the other hand, a theory allows us to see how the system behaves under all parameter conditions. Then, the goal of science is to assure congruency between specific solutions of the theory and specific experimental measurements. Experimental measurements made only to describe a phenomenon in the absence of mathematical theory are incomplete and are descriptive science [6]. Rather, the measurements should seek to test a theory [6].
Below we study one idealized cell type injured by one idealized injury mechanism. We shall see the enormous complexity in this one example. However, the complexity is not incomprehensible. The mathematics provide a systematic framework, a catalog of sorts, allowing all possible states of the system to be understood in an organized fashion. This has major implications for developing therapies for acute injuries such as stroke. We shall show that injuring one ideal cell type by one ideal injury mechanism, varying only the intensity of the injury, produces a continuum of states for which there is little appreciation in the paradigms that currently dominate biomedical research.
Cell Injury Idealized
There are two main categories of how cells become injured. There is either some identifiable injury applied to the cells or there is not. Examples of identifiable injuries would include mechanical trauma, chemical trauma (e.g., a poison), metabolic trauma (such as ischemia), and so on. The criterion is that there is a clearly identifiable exogenous agent or circumstance that injures the cell. Further, the intensity of the damage mechanism can be quantified: the minutes of ischemia, the concentration of a poison, the amount of force, and so on. We can abstract the quantitative aspect of injury intensity as a parameter I, and thereby abstract it from any specific injury mechanism. This stands in contrast to injuries such as cell transformation, or chronic neurodegenerative diseases in which the cause, let alone intensity, of the cell injury is not clearly identifiable, if even known. Thus, I represents the intensity of a clearly identifiable injury mechanism and we term this "acute cell injury".
From these considerations, we can begin to construct an idealized picture of acute cell injury. A cell is injured by some acute injury mechanism, and in response it either lives or dies. This is an idealization because we do not specify what type of cell nor the specific injury. The cell and the injury are tokens that interact, and their interaction is quantified by the parameter I.
From the vast amount of empirical, descriptive studies, it has been shown that many biomolecular changes occur in cells after they have been acutely injured. These are generally expressed in terms of pathways: changes in phosphorylation or other signaling events, increases or decreases in the activity of specific proteins or pathways, changes in localization or amounts of ions, specific proteins, transcription factors, micro-RNAs, etc. It is these specific molecular events that constitute the complex network of changes in the injured intracellular milieu. It is a case of Humpty Dumpty: we injure cells (or tissues) then grind them up and identify the hundreds of changes in the biomolecules, but how to put them back together again to reconstruct how the cell dies? To date, such reconstruction efforts have generally been unsuccessful as attested by failed clinical trials in many fields of biomedicine.
Instead of a literal reconstruction of the events, we idealize as follows. We know a priori that some of these biomolecular changes harm the cell but that others serve to protect the cell. Let all the cell-damaging changes be represented by the variable D, the total damage in the cell. Likewise, represent all the pro-survival changes in the cell by the variable S, the total stress response. Theoretically, there is a third category of changes in the acutely injured cell: those changes that have no effect on damaging the cell or helping it survive, which we denote by an empty set [ ]. Thus, all the complex molecular changes in the cell fall into these three general categories: D, S, or [ ]. There are no other possibilities. Since [ ] has no effect on outcome, we can ignore it here.
The concepts of D and S are key abstractions and idealizations of acute cell injury. We assert they provide an incontrovertible generalization that covers all possibilities. In mainstream biomedical research, the goal is to discover which molecules or pathways damage, and which enhance cell survival. Further, do some molecules that, at one point in the post-injury time course foster survival, transform into damaging influences? We submit that such questions lead down blind alleys. As abstractions, D and S already implicitly subsume these possibilities. From a theoretical perspective, we do not need to know the specific molecules of which D and S consist at any moment in time any more than we need to know the exact velocity of each air molecule when we measure air temperature.
Thus, we can fill in our ideal picture further: There is a cell. It is acutely injured with intensity I. In response, inside the cell, D and S assume nonzero values that change with time. The changes in D and S directly determine the survival or death outcome.
What Causes Cell Death?
The main use to which the concepts of D and S are applied is to define the cause of cell death. The core assumption of the mathematical idealization of acute cell injury is this: if S > D, the cell lives but if D > S, the cell dies.
A metaphor may help illustrate the concept. Imagine kicking a ball up a hill. The ball starts at a position at the bottom of the hill. This is the uninjured cell. A force is applied to the ball and it rolls up the hill. The force is the application of injury I to the cell. If the force is weak, the ball will roll to some point on the facing hillside, then roll back down to where it started. This is the survival case where S > D. However, if the force of I is greater than some specific value, the ball will go over the top of the hill and roll down the other side. This is the state D > S, and the other side of the hill is the state of death.
To be more precise, we are suggesting that acute cell injury is a tipping point phenomenon. This is already recognized in other terms by the concept of "cell death threshold". The "cell death threshold" is the amount of injury that causes the cell to die. The "cell death threshold" concept does not describe a threshold but instead indicates a tipping point between survival and death. Thresholds and tipping points are different mathematical entities, as illustrated below. The tipping point between survival and death is quantitatively determined by the intensity of injury I.
Another useful way to give meaning to D and S is to recognize that, before injury, the cell is in a homeostatic steady-state. Application of injury intensity, I, "knocks" the cell out of homeostasis. The variable S represents changes inside the cell seeking to bring the cell back into homeostasis. D represents changes that disrupt homeostasis. If the disruption of homeostasis is greater than the cell's ability to re-achieve homeostasis (e.g., D > S), the cell dies.
Survival and Death Outcomes after Acute Injury
These ideas intimately link sublethal and lethal injuries, which, however, are generally studied separately in mainstream biomedical studies and generally treated as different phenomena. On the sublethal side is survival after injury, often accompanied by a preconditioning effect [7][8][9][10][11]. Preconditioning is specifically defined here to mean that a cell subjected to a sublethal injury can, after a specified time, survive an otherwise lethal injury. On the lethal side is cell death, which can take on different qualitative forms.
With respect to cell death after brain ischemia, necrosis and delayed neuronal death (DND) [12,13] are observed. They appear different by many criteria and therefore are thought to be due to different causes. Thus, one finds terms such as apoptosis, necrosis, and necroptosis, and other variants, described by lists of qualitative features such as cell appearance during death or which molecular pathways are activated [14][15][16][17][18][19][20][21][22][23].
With respect to survival and preconditioning, empirical evidence shows that the time between the sublethal and lethal injuries required to achieve optimal survival is finely tuned [24][25][26]. Thus, there are degrees of preconditioning, and the preconditioning effect is not permanent but fades with time after the sublethal injury. Further, different tissues display different forms of preconditioning (e.g., rapid preconditioning in heart, and delayed preconditioning in brain [27]).
Rapid or delayed preconditioning, necrosis and DND are qualitative distinctions. By defining acute cell injury in terms of the dynamics of D and S we build a framework where these seemingly different phenomena are points on a continuum of responses of cells to acute injury. The continuum is injury intensity, I. The dynamics of D and S vary quantitatively as a function of I as we will show below. This allows us to understand clearly that the specific qualitative forms of preconditioning or cell death are but cross sections of a continuum of dynamical behaviors.
Thus, through idealization and abstraction, we define the cause of cell death in a general fashion, independent of any specific cell type or specific injury mechanism, by focusing on and abstracting features common to all cells. By converting these ideas into a mathematical theory, we intimately link survival and death outcomes in a single continuum of dynamical behaviors.
A Mathematical Theory of Acute Cell Injury
We have developed the refined theory elsewhere [5] and therefore give only a brief summary here. We begin with our idealized picture of acute cell injury: there is a cell which is acutely injured with intensity I. D and S accumulate in the cell and change with time. If D > S, the cell irreversibly exits homeostasis and dies.
By definition, D and S are mutually antagonistic. Stress responses seek to eliminate damage, but damage reactions can destroy the mediators of stress responses. This has a specific mathematical meaning: D and S are inversely related. The inverse relationships between D and S can be quantified using the Hill function, which is the function that gives S-shaped curves. The Hill function defines a threshold, Θ, where a 50% effect occurs (e.g., like the LD50). Thus, there is some amount of S, Θ_S, which causes a 50% decline in D. Similarly, there is some amount of D, Θ_D, which causes a 50% reduction in S. Θ_D and Θ_S are true mathematical thresholds, meaning they are the values of D and S at the 50% reduction point.
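For concreteness, the inhibitory form of the Hill function invoked here can be written as follows (a notational sketch; Θ and X are generic stand-ins for a threshold and the inhibiting variable):

```latex
h(X) = \frac{\Theta^{n}}{\Theta^{n} + X^{n}}, \qquad h(\Theta) = \tfrac{1}{2}
```

A formation rate scaled by h(X) therefore falls off in an S-shaped fashion as the inhibitor X accumulates, with exactly a 50% effect at X = Θ.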
The key assumption of our theory is that Θ D and Θ S change as a function of injury intensity, I.
Equation (1) posits specific functional forms: Θ_D increases and Θ_S decreases exponentially with I. These constitute assumptions that require empirical verification. However, for the sake of theory building, they provide a simple, plausible relationship. The parameters c_D, c_S, λ_D, and λ_S are constants of proportionality, required to correctly express the proportionalities between the thresholds and I. Their meanings are discussed in the next section.
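From this verbal description (exponential increase of Θ_D and exponential decrease of Θ_S with injury intensity, with proportionality constants c_D, λ_D, c_S, λ_S), one plausible reconstruction of Equation (1), to be checked against Ref. [5], is:

```latex
\Theta_D(I) = c_D\, e^{\lambda_D I}, \qquad \Theta_S(I) = c_S\, e^{-\lambda_S I} \quad (1)
```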
To model the changes in D and S with time, Equation (2) is used: a well-known system of differential equations expressing the common-sense understanding that the net rate of change equals the rate of formation minus the rate of decay [28]. The rates of formation are given by Hill functions expressing the inverse relationship between D and S, scaled by a velocity parameter v. In Equation (2), the rates of decay are assumed to be linear, with a decay parameter k.
The original theory substituted Equation (1) into Equation (2) [4]. We present here a refined theory using two additional assumptions. In addition to the assumption embodied by Equation (1) we now assume that (a) the velocity parameter decreases exponentially with time (Equation (3)), and (b) the decay parameter is a function of the instantaneous difference of D and S (Equation (4)). The rationalization of these assumptions was discussed previously [5].
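Assembling the verbal descriptions of Equations (2)-(4) gives one self-consistent reading, written here as a hedged reconstruction rather than a quotation of Ref. [5]. The subscript convention below places Θ_D in the D equation, so that, via Equation (1), increasing I makes damage formation harder to suppress and stress-response formation easier to suppress, as the tipping-point behavior requires; the explicit form of k is likewise only one way to make the decay parameter depend on the instantaneous difference of D and S:

```latex
\frac{dD}{dt} = v(t)\,\frac{\Theta_D^{n}}{\Theta_D^{n} + S^{n}} - kD,
\qquad
\frac{dS}{dt} = v(t)\,\frac{\Theta_S^{n}}{\Theta_S^{n} + D^{n}} - kS \quad (2)

v(t) = v_0\, e^{-c_1 t} \quad (3)

k = c_2\,\lvert D - S \rvert \quad (4)
```

Substituting Equations (1), (3), and (4) into Equation (2) yields the full nonautonomous system referred to in what follows as Equation (5).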
Preliminary Considerations
When the term "prediction" is used in a scientific context, it does not refer to qualitative statements. The term "prediction" specifically means that one of the mathematical solutions of a mathematical theory fits a dataset intended to measure the theory. A necessary precondition to data fitting is to interpret the theory in terms of the relevant physical system. Our goal now is to display a typical solution of Equation (5) and how it can be interpreted with respect to experimental situations.
To summarize what is detailed below: Equation (5) takes input numbers (the "input vector") and outputs time courses of D and S. A time course can begin at D = 0 and S = 0, corresponding to an uninjured system, or it can begin at any value of D or S. Where the time courses start are the initial conditions, (D_0, S_0), which are part of the input vector. For a given pair of D and S time courses output by Equation (5), either D or S achieves a maximum value (D_max and S_max, respectively). Above we spoke of S > D or D > S. In the time courses output by Equation (5), these inequalities take the form S_max > D_max or D_max > S_max, corresponding to survival and death outcomes, respectively. The control parameter is I, injury intensity; all other parameters are held constant to examine how the system behaves as a function of injury intensity. The series of time courses along the continuum of I we have termed an injury course [4]. We show below how to express the solutions of Equation (5) as injury courses calculated across a range of initial conditions.
The Input Vector
Equation (5) has 9 parameters and two initial conditions (D_0, S_0), giving the input vector [v_0, c_1, c_2, c_D, λ_D, c_S, λ_S, n, I, D_0, S_0]. A brief description of the parameters now follows. v_0 is the initial velocity of D and S formation at time zero. As Equation (3) indicates, the velocity decreases exponentially with time, meaning that the rate at which D and S form decreases with time after the injury. c_1 is a decay constant indicating how quickly v_0 decreases with time; the larger c_1, the faster v_0 decreases. c_2 sets the rate that D and S decay; the larger c_2, the faster are the D and S decay rates. In general, if v_0 and c_2 are set to 1, the solutions to Equation (5) stay in the unit plane (i.e., D and S range only between 0 and 1).
Four parameters, c_D, λ_D, c_S, and λ_S, characterize the qualitative aspects of the system. As described in detail elsewhere [29], c_D and λ_D represent the injury mechanism, and c_S and λ_S represent the cell type. The parameter n, typically called the Hill coefficient, can, in the context of our theory, be taken to represent how "tightly" the nodes of the molecular networks defined by D and S are linked. As stated above, I, injury intensity, is the control parameter. All these parameters can vary over large ranges and produce sensible output from Equation (5).
Initial Conditions
Initial conditions (D_0, S_0) were described above but merit further discussion because they (1) link to experimental designs commonly encountered in biomedical studies, and (2) provide an example where a mathematical theory can study situations that are limited by time and resources in the laboratory.
In the laboratory, there is generally an uninjured control condition that is compared against the injured cells. A typical example would be a cell culture given a poison (e.g., thapsigargin, which inhibits the endoplasmic reticulum SERCA pump). In this instance, thapsigargin is the injury mechanism, and its concentration is the injury intensity, I, which can be sublethal or lethal. For such a study there will always be a control cell culture given only the vehicle in which the thapsigargin is dissolved. Prior to administering thapsigargin, the experimental cells are identical to the control cells. This is an example of beginning an injury from initial conditions (D_0, S_0) = (0, 0). Prior to drug administration, there is no cell damage and no active stress responses in the cell culture.
However, what if the cells were first transfected to express, say, HSP70 protein? HSP70 is an important pro-survival stress response protein. One would hypothesize that the transfected cells should survive a higher concentration of thapsigargin than the un-transfected cells. Increasing HSP70 protein before administering thapsigargin means that S_0 is no longer zero but is now a positive value. Transfecting HSP70 therefore represents a change in the S initial conditions of the cells. In the typical case, the experimental group would be transfected plus thapsigargin, and the control would be transfected plus vehicle. However, the transfected control is different from the untreated, un-transfected control. It is generally assumed that it is good enough to take the transfected plus vehicle as the control, and the effect of transfecting with HSP70 is not taken into account in the study design. However, within the scope of our model, the transfection of the control cells is a change in initial conditions. As we show below, changing initial conditions can radically alter the dynamics.
In general, experimental manipulations such as the example given above are not recognized as changes in initial conditions, and hence, these manipulations are not systematically accounted for in the empirical biomedical literature. The study of varied initial conditions provides a systematic handle on such circumstances. Equation (5) can be studied over ranges of initial conditions that would otherwise require prohibitive amounts of time and resources to study empirically in the laboratory. Thus, at any value of I, we also study the behavior of Equation (5) over a range of initial conditions. It is theoretically relevant to ask: what dictates the range of initial conditions? Across an injury course, there will be one time course that gives the maximum possible value for D and another that gives the maximum possible value of S. The maximum value of S across all time courses can be interpreted as the maximum possible total stress response for the specific cell type. Therefore, S cannot exceed this value. Thus, any initial condition of S must be less than or equal to this maximum value. For example, if the maximum S across all time courses = 1, then the initial condition for S must range as 0 ≤ S_0 ≤ 1. The same logic holds for D_0. There are other possibilities for setting the initial condition ranges, but this one will serve in the present analysis.
Summary of Input Vector
For the present exercise, we hold [v_0, c_1, c_2, c_D, λ_D, c_S, λ_S, n] constant. We then vary I over the range 0 < I < I_max, where I_max is the injury intensity beyond which the cells are incapable of mustering any stress response (e.g., S ≈ 0) for the entire post-injury time course. The behavior of the system over the range 0 < I < I_max constitutes the injury course. The I-range is determined relative to I_X, the tipping point value of injury intensity [4]. Finally, at each value of I within the I-range, we study Equation (5) over ranges of initial conditions. To repeat, if v_0 and c_2 each equal 1, then the values of D and S over time never exceed 1, and so our initial conditions can be confined to the range 0-1. From initial conditions (0,0), all time courses from 0 < I < I_X will have S_max > D_max (survival outcome), and all time courses from I_X < I < I_max will have D_max > S_max (death outcome). This statement, however, does not hold in general at other initial conditions. We show below how to represent outcomes across ranges of initial conditions at each value of I.
To summarize, the values of parameters and initial conditions used in our example are: (1) a fixed parameter vector [v_0, c_1, c_2, c_D, λ_D, c_S, λ_S, n]; and (2) an I-range centered at I_X = 0.6161 (as calculated from the parameters in (1)).
Solutions to the Theory of Acute Cell Injury
Equation (5) was custom programmed into Matlab (Mathworks, Natick, MA, USA, version 9.0) and solved using the ode45 solver which implements a variant of the Runge-Kutta method. In this section, we proceed in the following stages:
1. Display time courses (and the corresponding trajectories) at specific values of I.
2. Display the injury course from initial conditions (0,0).
3. Display time courses at a specific value of I and a range of initial conditions.
4. Display the injury course across a range of initial conditions.

Figure 2B plots D_max and S_max vs. I from the time courses shown in Figure 1A and provides a summary of the injury dynamics of this system from initial conditions (0,0). The maximum curves cross at I_X. To the left of I_X, S_max > D_max, and to the right of I_X, D_max > S_max. Figure 2C extends the I-range to 0 < I < 5I_X to illustrate that I_max ≈ 3, where I_max is defined as the I value at which the S time course is always zero. I_max is of formal value in setting the upper bound of the I-range for a given input vector. The lower bound of I is of course zero. In this example, I_max is ~6 times larger than I_X.
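The solution procedure described above (solve the nonautonomous system numerically with a Runge-Kutta method, then classify the outcome by comparing S_max with D_max) can be sketched in Python using SciPy's solve_ivp in place of MATLAB's ode45. The right-hand side below uses the hedged equation reconstruction discussed earlier, a constant decay rate in place of Equation (4), and entirely illustrative parameter values; none of these numbers are the paper's actual input vector.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (assumed) parameter values -- NOT the paper's input vector.
v0, c1, c2, n = 1.0, 0.1, 1.0, 4   # initial velocity, its time decay, decay rate, Hill coefficient
cD, lamD = 0.5, 1.0                # injury-mechanism constants (Eq. (1))
cS, lamS = 2.0, 1.0                # cell-type constants (Eq. (1))

def rhs(t, y, I):
    """Reconstructed nonautonomous system; k is held constant (= c2) for simplicity."""
    D, S = y
    thD = cD * np.exp(lamD * I)    # Theta_D grows exponentially with injury intensity
    thS = cS * np.exp(-lamS * I)   # Theta_S shrinks exponentially with injury intensity
    v = v0 * np.exp(-c1 * t)       # Eq. (3): formation velocity decays with time
    dD = v * thD**n / (thD**n + S**n) - c2 * D   # Hill-type formation minus linear decay
    dS = v * thS**n / (thS**n + D**n) - c2 * S
    return [dD, dS]

def outcome(I, D0=0.0, S0=0.0, t_end=20.0):
    """Classify one time course: survival if S_max > D_max, otherwise death."""
    sol = solve_ivp(rhs, (0.0, t_end), [D0, S0], args=(I,), max_step=0.05)
    D_max, S_max = sol.y[0].max(), sol.y[1].max()
    return "survival" if S_max > D_max else "death"
```

With these particular constants the two thresholds coincide at I = ln(c_S/c_D)/(λ_D + λ_S) ≈ 0.69, so, from initial conditions (0,0), intensities below that value give survival and larger intensities give death, reproducing the tipping-point behavior; a positive D_0 can flip a sublethal intensity to a death outcome, as described for Figure 3.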
Time Courses over Ranges of Initial Conditions
The effect of initial conditions on the injury dynamics is an important factor because it can potentially reverse outcome. Figure 3 illustrates the effect of altered initial conditions on the sublethal time course from I = 0.43129 (shown in Figure 1). In Figure 3A, the nine time courses correspond to the initial conditions marked by dots in Figure 3B. The green dots in Figure 3B indicate S max > D max (the survival outcome) and the red dots indicate D max > S max (the death outcome) for the corresponding time courses. Three of the nine initial conditions caused the survival outcome to "flip state" to death outcomes.
The result is sensible. The three time courses that flipped outcome started with D 0 positive, which is interpreted as inducing some form of damage in the cells before applying injury I, i.e., a pretreatment. For example, one could imagine applying a mitochondrial inhibitor in sublethal doses before applying a second drug, for example, thapsigargin. The increase in D 0 would correspond to increasing sublethal doses of the mitochondrial inhibitor. This will certainly weaken the cells. Then, application of the thapsigargin at a sublethal dose induces the injury dynamics in cells with pre-existing damage. The sum of the pre-existing damage and damage from the thapsigargin kills the cells, even though application of the thapsigargin by itself would not. From this example, we gain insight into how application of multiple treatments to cells is by no means neutral and that application of any agent intimately affects the cell's injury dynamics.
In Figure 3C, instead of nine time courses, 2500 time courses (50 D 0 by 50 S 0 ) were calculated and outcome plotted as indicated above, filling in the plane of initial conditions. Calculating a large number of time courses caused survival and death regions to become visible on the initial conditions plane. The region size can be quantified by the ratio of death outcomes to all outcomes. In this example, 47.25% of initial conditions resulted in a death outcome (the remaining 52.75% gave survival outcomes). We emphasize that the injury is sublethal from initial conditions (0, 0). However, given a range of meaningful initial conditions, almost half of the time courses result in killing the cells under supposedly sublethal conditions. This example illustrates how theory allows us to explore conditions that present practical obstacles to measure. While measuring 9 time courses is feasible, measuring 2500 would require some type of automated method and could not be performed by hand.
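Filling the plane of initial conditions can be sketched with the same hypothetical stand-in system used above (again, not the paper's Equation (5)): fix one injury intensity that is sublethal from (0, 0), solve one time course per (D 0 , S 0 ) grid point, and record each outcome. A coarse 12 x 12 grid is used here for speed in place of the paper's 50 x 50.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Same illustrative stand-in dynamics as before (NOT the paper's Equation (5)).
def rhs(t, y, I):
    D, S = y
    return [I * np.exp(-0.5 * t) - 0.3 * D - S * D,
            2.0 * D / (1.0 + D) - 0.5 * S]

def dies(I, D0, S0, t_end=30.0):
    """True if the time course from (D0, S0) has D_max > S_max (death)."""
    t = np.linspace(0.0, t_end, 1000)
    sol = solve_ivp(rhs, (0.0, t_end), [D0, S0], args=(I,),
                    t_eval=t, rtol=1e-8, atol=1e-10)
    return sol.y[0].max() > sol.y[1].max()

# An injury that is sublethal from (0, 0) in this toy system.
I_SUB = 0.5
grid = np.linspace(0.0, 3.0, 12)
# plane[i, j] = outcome starting from D0 = grid[i], S0 = grid[j].
plane = np.array([[dies(I_SUB, d0, s0) for s0 in grid] for d0 in grid])
pct_death = plane.mean()      # fraction of initial conditions that die
```

In this toy system the point (0, 0) survives, while heavy pretreatment damage (large D 0 with S 0 = 0) flips the same sublethal injury to a death outcome, so the plane splits into survival and death regions exactly as described for Figure 3C.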
Injury Course over Ranges of Initial Conditions
We can construct initial condition "outcome planes" at each value of I in the injury course. Figure 4A shows such "outcome planes" for one sublethal and two lethal values of I. We saw above that a sublethal plane can produce death outcomes under some initial conditions. Similarly, lethal planes can produce survival outcomes at some initial conditions. Again, the result is sensible. When S 0 is increased over D 0 , the pre-activated stress responses mitigate the injury I and cause survival. This is essentially a preconditioning phenomenon expressed by the theory solutions. We did not seek to specifically make a theory of preconditioning. Instead, from our idealization of acute cell injury in terms of D and S, and the mathematical forms chosen to represent the idealization, preconditioning emerges as a natural consequence of the dynamics.
As before, we can calculate the percentage of death outcomes on each plane and plot this vs. I ( Figure 4B). The number of death outcomes increases with I, as would be expected. Significantly, from a therapeutic point of view, there are substantial areas of the plane with survival outcomes for lethal injuries. A therapy that could access those regions of the plane would be able to convert a lethal outcome to a survival outcome. The theory thus provides a systematic and quantitative way to assess therapy. Further, the therapeutic region changes as a function of I, which means "one size does not fit all". The theory can calculate the required therapy for any specific circumstance. In this example, "therapy" refers to the ranges of initial conditions leading to survival outcomes at I > I X .
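The preconditioning effect, in which pre-activated stress responses convert a lethal injury into a survival outcome, can be sketched in the same hypothetical stand-in system (again, not the paper's Equation (5)). The injury value I = 8 below is lethal from (0, 0) in this toy system, and "therapy" is found by scanning for values of S 0 that flip the outcome.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Same illustrative stand-in dynamics as before (NOT the paper's Equation (5)).
def rhs(t, y, I):
    D, S = y
    return [I * np.exp(-0.5 * t) - 0.3 * D - S * D,
            2.0 * D / (1.0 + D) - 0.5 * S]

def outcome(I, D0, S0, t_end=30.0):
    """'death' if D_max > S_max, else 'survival'."""
    t = np.linspace(0.0, t_end, 1000)
    sol = solve_ivp(rhs, (0.0, t_end), [D0, S0], args=(I,),
                    t_eval=t, rtol=1e-8, atol=1e-10)
    return "death" if sol.y[0].max() > sol.y[1].max() else "survival"

I_LETHAL = 8.0                         # lethal from (0, 0) in this toy system
baseline = outcome(I_LETHAL, 0.0, 0.0)

# "Therapy": initial conditions that flip the lethal injury to survival.
# Scan pre-activated stress-response levels S0 (a preconditioning analog).
s0_levels = np.linspace(0.0, 4.0, 9)
flipped = [s0 for s0 in s0_levels
           if outcome(I_LETHAL, 0.0, s0) == "survival"]
```

In this sketch the untreated cell dies, while sufficiently large pre-activated stress responses (the upper end of the scanned S 0 range) rescue it, which is the preconditioning phenomenon emerging from the dynamics.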
Percent Death Plots
We can calculate as many outcome planes as desired. Figure 5A calculates 20 outcome planes over the range 0 < I < 2I X and the corresponding plot of percent death outcomes is shown in Figure 5B. It is noted that within the 2I X range, the dynamics are, roughly, 50:50 survival to death outcomes at each I. Figure 5C extends the I-range to 0 < I < 5I X and the percent death outcome is shown in Figure 5D. Now we see that as I increases, death outcomes predominate. At I max , which is approximately 5I X , there are no longer any survival outcomes. Thus, I max indicates the end of any potential therapy because all outcomes beyond I max are death.

Finally, we wish to briefly illustrate how the injury dynamics can vary from system to system. A given system is mainly defined by the four qualitative parameters (c D , λ D , c S , λ S ). Varying (c S , λ S ) indicates a different cell type, and varying (c D , λ D ) indicates a different injury mechanism. To model a different cell type, c S is increased from 0.4 to 4, corresponding to a cell with stronger stress responses. The initial condition outcome planes ( Figure 5E) and percent death outcome plot ( Figure 5F) exhibit different dynamics from the c S = 0.4 case. Notably, the percent death outcome plots are different between the two systems. For c S = 0.4, below I X , the outcomes are roughly 50:50. For the c S = 4 case, the results are nonmonotonic and unexpected: there is (1) a narrow range below I X with close to 100% survival outcomes, and (2) a range at very low I where death outcomes increase as I decreases. Also, for c S = 4, I max occurs at approximately 2I X vs. 5I X for the c S = 0.4 case, meaning it has a relatively more compressed I-range.
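The cell-type comparison can also be sketched with the toy stand-in system by treating the stress-response gain as the analog of c S (to be clear, this gain and the dynamics are features of the illustrative model, not the paper's parameterization). At the same injury intensity, a "cell type" with a stronger stress response survives where the weaker one dies.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative stand-in dynamics (NOT the paper's Equation (5)); the
# stress-response gain g plays the role of the cell-type parameter c_S.
def rhs(t, y, I, g):
    D, S = y
    return [I * np.exp(-0.5 * t) - 0.3 * D - S * D,
            g * D / (1.0 + D) - 0.5 * S]

def outcome(I, g, t_end=30.0):
    """Outcome from (0, 0): 'death' if D_max > S_max, else 'survival'."""
    t = np.linspace(0.0, t_end, 1000)
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], args=(I, g),
                    t_eval=t, rtol=1e-8, atol=1e-10)
    return "death" if sol.y[0].max() > sol.y[1].max() else "survival"

# Same injury intensity applied to two "cell types":
weak_cell = outcome(I=8.0, g=2.0)    # baseline stress-response gain
strong_cell = outcome(I=8.0, g=6.0)  # 3x stronger stress response
```

The identical injury is lethal to the weak cell type but survivable for the strong one, quantitatively illustrating why injury dynamics measured in one cell type cannot be assumed to transfer to another.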
Comparing the two input vectors illustrates two points. First, it underscores the need to systematically study how Equation (5) behaves as the four qualitative parameters vary. We are currently undertaking this task and it will be the subject of a future report. Second, the comparison illustrates how two different cell types can exhibit very different injury dynamics. In practical terms, this means one cannot assume that because cell type A behaves in a certain fashion when injured, cell type B will behave similarly. This is a well-appreciated insight with respect to tissue differences, for example, injured brain versus injured heart. However, it is less acknowledged in cell culture studies. Our results quantitatively demonstrate that cell type variations can widely alter injury dynamics.
Discussion
We showed here how to solve the nonautonomous theory of cell injury dynamics and interpret solutions of Equation (5) in terms of acute cell injury. The results obtained illustrate that the theory produces output that is both sensible and insightful with respect to known phenomena associated with acute cell injury, such as preconditioning, or variations in the length of time it takes a system to die after injury (e.g., necrosis vs. DND in stroke). Our nonautonomous theory demonstrates that outcome is a function of injury intensity and that D and S time courses are, in general, different for different injury intensities I. We also demonstrated that, in general, the range of initial conditions resulting in survival outcomes at lethal I > I X decreases as I increases, until I max , after which all outcomes are death. These results clearly indicate that a "one size fits all" therapy will be unsuccessful at effecting survival at all lethal injury values. Below, we compare the output of Equation (5) to our previous autonomous version of the theory, and then conclude with statements about the value of abstraction and idealization for the biomedical sciences.
Outcomes in the Autonomous vs. Nonautonomous Theories
The technical differences between autonomous and nonautonomous differential equations mean there is not a direct one-to-one mapping of the solutions. However, there are features of the solutions that are analogous in terms of how they are interpreted. For example, the autonomous theory output fixed points (D*, S*) at each value of I, and injury courses were expressed as plots of fixed points vs. I [4]. Such plots are called bifurcation diagrams. Technically, there is only one fixed point for all solutions to the nonautonomous theory: (D*, S*) = (0, 0) as t → ∞. The nonautonomous theory was designed to have this feature, which is a necessary condition for closed loop trajectories ( Figure 1B,D). Therefore, fixed points cannot be directly compared between the two versions of the theory. Instead, the maximum points of the time courses (D max , S max ) from the nonautonomous version are functionally analogous to the fixed points of the autonomous theory. Plots of maximum points vs. I ( Figure 2B,C) resemble a bifurcation diagram. However, Figure 2B,C are not bifurcation diagrams because a bifurcation diagram accounts for all initial conditions. The percent death plots ( Figures 4B and 5B,D,F) are thus the analogs of bifurcation diagrams for the nonautonomous theory because they incorporate all different initial conditions. It needs to be stated that the percent death plots are not to be interpreted as meaning that, given a specific value of I, there is a probability of X% that the cell will die. There is nothing statistical about the theory. It must be firmly kept in mind that any time course from a specific initial condition is a deterministic outcome. The percent death plots are meant only as summaries of all the time courses from all the initial conditions at a given value of I. Given some value of I and specific initial conditions, one can then calculate the specific deterministic time course.
Do we expect that the precision to identify point-like initial conditions is possible in biomedical studies? Certainly not with today's technology. However, we do not need point precision. As the "outcome planes" indicate, outcome is associated with ranges of initial conditions, and such ranges are likely good enough to map to current experimental technologies.
Injury Courses in the Nonautonomous Theory
The functional injury course for the nonautonomous theory is the percent death plot (e.g., Figures 4B and 5B,D,F). A percent death plot allows assessment, at a glance, of survival across the entire range of injury intensities. From the two examples calculated above ( Figure 5D,F) we conclude that, in general, different percent death plots are obtained from different input vectors. Knowledge of how the percent death plots vary with (c D , λ D , c S , λ S ) will be important to fully systematize the nonautonomous theory. For the autonomous version, only four qualitative types of bifurcation diagrams were observed [4]. Thus, in the scope of the autonomous model, there were only four basic forms of acute injury dynamics. It remains to be seen if a similar simplification of injury dynamics occurs in the nonautonomous theory and therefore a parameter sweep study is important to undertake.
However, even without a complete understanding of the dynamics of Equation (5), we can still make important and relevant comments about the meaning of the percent death plots and how they compare to the injury courses of the autonomous theory. Figure 6 shows two of the four types of bifurcation diagrams obtained from the autonomous version (the other two types are variants of Figure 6B and are not discussed here). Figure 6A is monostable, meaning there is only a single pair of fixed points (D*, S*) at each I. The interpretation of this type of injury course is that for all I < I X , the cell always survives, and for all I > I X , the cell always dies. No therapy is possible in this type of injury dynamics. It is unrealistic because a sublethal injury combined with pretreatment damage (e.g., D 0 > 0) will not kill the cell, no matter how strong the pretreatment damage (e.g., even D 0 → ∞ would lead to survival of the cell). Similarly, no pre-inhibition of damage (D 0 < 0) or pre-activation of stress responses (S 0 > 0) will halt cell death when I > I X . The monostable case is completely ideal and does not correspond to reality. It does capture, however, the common idea of a "cell death threshold": the cell survives below the threshold and dies above it.
On the other hand, the bistable injury course does allow for death at I < I X , and survival for I > I X , in the bistable regions where both solutions simultaneously exist in the system dynamics ( Figure 6B, where the area marked in yellow is the bistable region). This was, in fact, the central insight of the autonomous model: that therapy was only possible when the dynamics were bistable.
This was a major finding with respect to the link between injury dynamics and therapeutics. We have stated elsewhere and repeat here that this is perhaps the most important insight provided by our theoretical study because it opens the possibility to calculating therapy for any given situation.
What is of great interest, and is the main finding of the present study, is the following: With respect to the nonautonomous theory, the system is, in general, "bistable" at all values of I < I max . Again, because of the technical mathematical differences, the term bistable is inappropriate and used only by analogy in the following sense. What is demonstrated in Figure 5D,F is that, at each value of I < I max , there exist time courses across the initial conditions with both survival and death outcomes. At a given I, the "outcome plane" was clearly demarcated into a survival region and a death region, and access to each region is granted by application of the appropriate initial conditions. Further, the area of the death region on the outcome planes increased with I, until it subsumed 100% of the plane at I > I max . This result provides a considerably more realistic model of cell injury dynamics.
Sublethal and Lethal Conditions Form a Continuum
Above we stated that specific qualitative responses to cell injury, such as rapid or delayed preconditioning, or necrosis or delayed neuronal death, are cross sections of a continuum of injury dynamics. This point is made succinctly in Figure 3A showing different time courses at the same value of I. The time courses are distinguished by their initial conditions, which again, correspond to the variety of manipulations performed on cells or tissues in the laboratory. It is clearly seen that the time courses can have different forms and durations. This kind of complexity is not intuitive to the current biomedical paradigms.
The continuum of responses is also illustrated in Figure 2A, which shows a series of time courses across the I-range starting from (D 0 , S 0 ) = (0, 0). With respect to sublethal effects, the area under the curve of each S time course (i.e., the accumulation of S over time) is greater than the area under the corresponding D time course. This indicates there is excess stress response beyond that needed to inhibit the actual damage. This excess stress response is what causes the preconditioning response. If a second injury is applied at some later time before the excess stress response fades, the excess stress response is available to combat the second injury and ameliorate its effect, allowing the cell to survive. Further, the area under the S time courses decreases from I = 0 to I = I X . This indicates that preconditioning will be a graded response and a function of I. This is well known in the empirical literature: there is an optimal sublethal injury required to produce the optimal preconditioning effect. Now consider the lethal side of the time courses in Figure 2A. For those time courses close to I X , it takes a much longer time for the system to decay to death, which would be a DND phenotype in the case of stroke. At the highest I values, the time course returns to (0, 0) very rapidly, and this would correspond to necrotic forms of cell death. Thus, the theory reveals that these are not different forms of cell death, but cross sections along the continuum of injury dynamics that are a function of I.
Therefore, a bona fide mathematical theory of cell injury dynamics demonstrates that the variations in survival responses (e.g., preconditioning), and death phenotypes are intimately interlinked and form a continuum of states.
Measuring the Theory
The main purposes of this paper have been: (1) to show how to solve the nonautonomous theory and interpret the solutions in terms of acute cell injury, and (2) to illustrate how to think of injured cells in abstract, ideal terms. We have not focused on how we would measure or test the theory. In the following, we make a few comments along these lines.
The link between the theory and real injured cells or tissues is provided by the concepts of D and S. When a brain or heart is injured by ischemia, or when cultured cells are injured by thapsigargin (or any other injury mechanism), one must envision that D and S are real phenomena occurring inside the cells and that the amount of each follows time courses as calculated by the theory. Ideally, one would then measure how D and S change with time and determine if the theory accurately predicts the D and S time courses.
The empirical question thus comes down to measuring D and S. Recalling their definitions, D is the total damage and S is the total-induced stress responses. Thus, to measure them would be to measure every single form of post-injury damage at a point in time and sum them together to obtain D at that time point. Similarly, every induced stress response would be measured and summed to obtain S. In practice, given today's technology and our incomplete understanding of cell physiology, this is impossible.
However, taking this approach is analogous to measuring temperature by attempting to measure the velocity of every single particle in the medium and average them, which is also impossible. Instead, we measure temperature via e.g., the expansion volume of a liquid, typically mercury. This provides a surrogate measure that correlates perfectly with the average velocity of the particles making up the medium whose temperature we seek to know.
A similar approach is required to estimate D and S. There must be specific changes inside the injured cells that estimate or track the real values of D and S. We have hypothesized that the gene changes inside the cell track S and that some general form of cell damage, such as protein aggregates, track the total damage D. We have made these measurements and are in the process of preparing our results for publication. We have not included our empirical work here because it is outside the scope of the present work, and the present work is a necessary prelude for reporting our empirical results.
The important point to emphasize here is that there is a chasm between the mainstream descriptive approach to cell injury that assumes it can discover some qualitative feature that causes cell death, and our theoretical approach that dictates a priori what needs to be measured to characterize cell injury. In our theory, D and S are a priori concepts, and the theory indicates that empirical work needs to be directed first to discover how to quantitatively estimate D and S, and then determine their time courses. Ultimately, it is the failure of the descriptive approach that has motivated us, and we believe, in the long run, both approaches will be necessary much in the same manner that theory and experiment co-exist in physics.
Conclusions
We have shown here a plausible and effective way to express solutions from the nonautonomous theory of acute cell injury dynamics. The solutions are sensible and capture important aspects of real behaviors observed in acutely injured biological systems. The solutions to the nonautonomous version are more realistic than the autonomous version, as discussed above. Both versions of the model possess important implications for therapy and indicate the possibility to calculate therapies for specific acute injuries with engineering-like precision.
Our goals here were two-fold: first, to show how to solve and interpret the nonautonomous version of the dynamical theory of acute cell injury and second, to use this to illustrate the value of abstraction and idealization for biomedical science. In a sense, it is immaterial whether Equation (5) is correct or not. We have been explicit in our assumptions of the mathematical forms used and these can be modified as necessary to better fit real data from real injured systems. In an upcoming work we will discuss our attempt to experimentally measure D and S time courses and fit them to the nonautonomous cell injury theory. Any weakness in the specific mathematical formulation is offset by the framework and the potential it provides to organize and systematize the study of cell injury across all biomedical fields that involve acute injury.
"year": 2018,
"sha1": "71e803ca3b9cd6d021559bf3f0d0b5e719644ae7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3425/8/3/39/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b2de15eea228ceab762eaa706c57c3533ddba9b9",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
Inhibition of miR-19a-3p decreases cerebral ischemia/reperfusion injury by targeting IGFBP3 in vivo and in vitro.
BACKGROUND
Inflammation and apoptosis are considered to be two main factors affecting ischemic brain injury and the subsequent reperfusion damage. MiR-19a-3p has been reported to be a possible novel biomarker in ischemic stroke. However, the function and molecular mechanisms of miR-19a-3p remain unclear in cerebral ischemia/reperfusion (I/R) injury.
METHODS
The I/R injury model was established in vivo by middle cerebral artery occlusion/reperfusion (MCAO/R) in rats and in vitro in oxygen-glucose deprivation and reperfusion (OGD/R)-induced SH-SY5Y cells. The expression of miR-19a-3p was determined by reverse transcription quantitative PCR. Infarction volumes, neurological deficit scores, cell viability, pro-inflammatory cytokines and apoptosis were evaluated using the Longa and Bederson scores, TTC and TUNEL staining, CCK-8, ELISA and flow cytometry assays. A luciferase reporter assay was utilized to validate the target gene of miR-19a-3p.
RESULTS
We first found that miR-19a-3p was significantly up-regulated in rat I/R brain tissues and in OGD/R-induced SH-SY5Y cells. Using the in vivo and in vitro I/R injury models, we further demonstrated that a miR-19a-3p inhibitor exerted a protective role against cerebral I/R injury, reflected by reduced infarct volume, improved neurological outcomes, increased cell viability, and inhibited inflammation and apoptosis. Mechanistically, miR-19a-3p binds to the 3'UTR region of IGFBP3 mRNA. Inhibition of miR-19a-3p increased the expression of IGFBP3 in OGD/R-induced SH-SY5Y cells. Furthermore, we showed that IGFBP3 overexpression imitated, while knockdown reversed, the protective effects of the miR-19a-3p inhibitor against OGD/R-induced injury.
CONCLUSIONS
In summary, our findings showed that miR-19a-3p regulates I/R-induced inflammation and apoptosis by targeting IGFBP3, which might provide a potential therapeutic target for cerebral I/R injury.
Background
As the most common type of stroke, ischemic stroke is characterized by the sudden loss of blood circulation to an area of the brain and represents a major public health problem [1]. Currently, rapid restoration of the blood supply has been the most effective treatment for ischemic stroke. However, further brain injury and dysfunction following ischemia may be aggravated by blood reperfusion, which is known as cerebral ischemia/reperfusion (I/R) injury [2]. Therefore, there is an urgent need to elucidate the underlying molecular mechanisms in order to improve functional recovery after cerebral I/R injury.
According to an increasing number of studies, the mechanisms of cerebral I/R injury are complex, among which inflammation and apoptosis are considered the main factors inducing nerve cell injury after I/R [3][4][5][6]. It is known that microRNAs (miRs), small non-coding RNA molecules (19-24 nt), modulate diverse biological processes, including cell proliferation, apoptosis and neuroinflammation, by binding to the 3′-UTR region of their target mRNAs [7][8][9]. With the development of ischemic stroke studies, investigation of the role of miRs in cerebral I/R injury has increased. For example, miR-132 has been reported to attenuate cerebral injury by protecting against blood-brain barrier disruption in ischemic stroke [10]. MiR-224-3p may protect N2a cells from cerebral I/R injury by targeting FAK family-interacting protein (FIP200) [11]. On the contrary, miR-27b inhibition promotes recovery after ischemic stroke by regulating AMP-activated protein kinase (AMPK) activity [12]. All the above-mentioned reports strongly suggest that miRs play an important role in the process of I/R injury. Recently, the broadly conserved miR-19a-3p, a crucial component of the miR-17-92 cluster, has been shown to be a mediator of the cell proliferation-inhibitory effect in breast cancer [13], of cell apoptosis in the chemosensitivity of osteosarcoma [14] and of inflammatory responses [15]. Interestingly, our attention was drawn to miR-19a-3p, reported by Eyileten et al. [16] as one of the most widely modulated miRNAs and a novel biomarker in ischemic stroke. However, the possible mechanisms of miR-19a-3p in inflammation and apoptosis during cerebral I/R injury remain understudied.
Insulin-like growth factor-1 (IGF-1) is a mediator of growth hormone that promotes human growth by directly acting on the growth hormone receptor [17]. As the main binding protein of IGF-1, insulin-like growth factor binding protein-3 (IGFBP3) has been reported to be linked to the pathogenesis of cancers, exerting tumor suppressor activity in breast cancer [18] and pro-tumor effects in oral squamous cell carcinoma [19] and lung cancer [20]. According to the report by Krakowska-Stasiak et al. [21], the levels of IGF-1/IGFBP3 were lower in patients with inflammatory bowel disease. Notably, IGF-1 and IGFBP-3 concentrations after acute cerebral ischemia were strikingly lower than those in control subjects and healthy individuals, as reported by Schwab et al. [22], Denti et al. [23] and Johnsen et al. [24]. This evidence indicates that IGFBP-3 might exert neuroprotective effects against cerebral I/R injury.
In this study, we investigated the role of miR-19a-3p in inflammation and apoptosis in a middle cerebral artery occlusion (MCAO) rat model and an in vitro oxygen and glucose deprivation/reoxygenation (OGD/R)-induced SH-SY5Y cell model. Moreover, a new target of miR-19a-3p, IGFBP3, was identified using bioinformatics software and validated by luciferase reporter assay. We further provided direct evidence that miR-19a-3p regulated OGD-induced SH-SY5Y cell injury by targeting IGFBP3. Our findings might provide new insight into the mechanism of cerebral I/R injury.
Animal groups
Healthy male Sprague-Dawley rats, weighing 200-250 g, were purchased from the Experimental Animal Center of College of Medicine, Zhejiang University (Zhejiang, China). Rats were housed in standard cages (22-25 °C and 45-50% humidity) with a 12-h light/dark cycle and allowed free access to food and water. Rats were randomly divided into the following three groups (n = 6 each group): (1) Sham group; (2) Middle cerebral artery occlusion (MCAO) group; (3) MCAO + inhibitor group, in which the miR-19a-3p inhibitor (GCT CAA ACT GTT TAT CTT CCA TGC GAG TTT G), a chemically synthesized inhibitor of the mature miR-19a-3p sequence, was supplied by GenePharma Co., Ltd. (Shanghai, China) and diluted with Entranster™ in vivo transfection reagent (Engreen, Beijing, China). Rats were then administered an intracerebroventricular injection of miR-19a-3p inhibitor using a microsyringe (Hamilton, Nevada, USA) 3 days prior to MCAO. All animal experiments in this study were approved by the Ethics Committee of The First Affiliated Hospital, College of Medicine, Zhejiang University (No. 2018-642, Date: 20180516) and followed the guidance of the National Institutes of Health Guide for the Care and Use of Laboratory Animals (No. 80-23, revised 1996).
MCAO treatment
A rat model of cerebral I/R injury was established by 2 h of MCAO via the intraluminal filament method as described previously [25]. Briefly, rats were anesthetized with chloral hydrate (400 mg/kg, i.p.) and placed in the supine position on the operating table. The right common carotid artery and the external and internal carotid arteries were exposed by a midline skin incision. Then, a heparinized intraluminal filament with a rounded tip (diameter 0.22 ± 0.02 mm) was inserted from the external carotid artery through the internal carotid artery to reach and block the origin of the MCA. After 2 h of occlusion, the filament was withdrawn and the surgical site was sutured, followed by 24 h of reperfusion. The rats in the sham group underwent the same surgery except that the intraluminal filament was not inserted to the MCA origin. Brain infarct volumes, neurological scores and TUNEL staining were evaluated at 72 h after reperfusion.
2, 3, 5-Triphenyltetrazolium chloride (TTC) staining
TTC staining was performed to histologically verify the success of the model. In brief, brain tissues from the different groups were collected and frozen for 30 min at − 20 °C. The brain tissues were then sliced into 2-mm-thick sections and incubated with 2% TTC solution (Sigma-Aldrich, St. Louis, MO, USA) at 37 °C for 20 min; staining was terminated by rinsing with PBS. Subsequently, the sections were fixed with 4% paraformaldehyde for 2 h and photographed. The infarct volume was expressed as a percentage (total infarct volume/total brain volume × 100%) using Image-Pro Plus 6.0 analysis software.
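The slice-based volume calculation described above can be sketched numerically; the per-slice areas below are hypothetical stand-ins for the Image-Pro Plus measurements.

```python
# Sketch of the infarct-volume calculation from TTC-stained coronal slices.
# Volumes are approximated by summing area x slice thickness (2 mm here).
# All per-slice areas are hypothetical example values.

SLICE_THICKNESS_MM = 2.0

def infarct_percentage(infarct_areas_mm2, brain_areas_mm2,
                       thickness_mm=SLICE_THICKNESS_MM):
    """Return infarct volume as a percentage of total brain volume."""
    infarct_vol = sum(a * thickness_mm for a in infarct_areas_mm2)
    brain_vol = sum(a * thickness_mm for a in brain_areas_mm2)
    return 100.0 * infarct_vol / brain_vol

# Hypothetical measurements for six 2-mm coronal sections (mm^2):
infarct = [0.0, 8.5, 14.2, 12.8, 6.1, 0.0]
whole = [55.0, 60.0, 62.0, 61.0, 58.0, 54.0]
print(round(infarct_percentage(infarct, whole), 1))
```

Because thickness cancels when it is uniform, the result depends only on the area ratio; unequal slice thicknesses would require per-slice thickness values.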
Neurological deficit evaluation
Neurological deficit evaluation was performed after reperfusion using the modified Longa score [26] and the Bederson score [27] by an assessor who was blinded to the experimental groups. The Longa score was assessed on a scale of 0 to 4: 0 = no observable deficits; 1 = failure to fully extend the left forepaw; 2 = circling to the left; 3 = falling to the left side; 4 = no spontaneous walking with a decreased level of consciousness. The Bederson score was graded on a 5-point scale as follows: 0 = no deficits; 1 = lost forelimb flexion; 2 = lost forelimb flexion with lower resistance to lateral push; 3 = unidirectional circling; 4 = longitudinal spinning or seizure activity; 5 = no movement. For both scores, the higher the score, the more severe the damage.
In vitro model of OGD/R
The in vitro model simulating I/R injury was constructed by oxygen and glucose deprivation/reoxygenation (OGD/R). Here, we used a neuron-like human-derived neuroblastoma cell line, SH-SY5Y, purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA). Cells in the normoxia group were cultured in DMEM medium (HyClone, Logan, UT, USA) with 10% fetal bovine serum (FBS, Gibco, USA) and incubated at 37 °C in a humidified atmosphere containing 5% CO₂. For OGD/R, cells were cultured for 8 h in culture medium deprived of glucose and serum under oxygen-free conditions at 37 °C. Subsequently, the cells were returned to normal medium under normoxic conditions to allow reoxygenation for 24 h.
Analysis of cell viability
Cells were seeded into 96-well plates at a density of 5 × 10³ cells per well and cultured overnight at 37 °C. The next day, Cell Counting Kit-8 solution (10 μL per well, Beyotime Biotechnology) was added to each well and cells were incubated for 1 h at 37 °C. The optical density at 450 nm was measured with a microplate reader (Bio-Rad, Hercules, CA, USA).
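As an illustration of how such OD450 readings translate into relative viability, the following sketch normalises blank-corrected sample wells to the control mean; all readings and the blank value are hypothetical.

```python
# Minimal sketch of relative viability (%) from CCK-8 OD450 readings.
# Wells are blank-corrected, then normalised to the mean of control wells.
# All OD values below are hypothetical examples.

def relative_viability(sample_ods, control_ods, blank_od=0.05):
    """Return per-well viability as % of the control mean."""
    control_mean = sum(od - blank_od for od in control_ods) / len(control_ods)
    return [100.0 * (od - blank_od) / control_mean for od in sample_ods]

control = [1.25, 1.30, 1.28]   # e.g. normoxia wells
ogdr = [0.68, 0.71, 0.65]      # e.g. OGD/R wells
print([round(v, 1) for v in relative_viability(ogdr, control)])
```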
Cell apoptosis analysis
Cell apoptosis was measured using the Annexin V-FITC apoptosis detection kit (BD Biosciences, San Jose, CA) according to the manufacturer's instructions. Briefly, approximately 5 × 10⁴ cells were collected, washed twice with PBS and subjected to Annexin V-FITC/PI double staining at room temperature for 20 min in the dark. Apoptotic cells (Annexin V positive) were detected by flow cytometry (BD Biosciences, San Jose, CA).
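The apoptotic rate reported by such an assay is essentially the fraction of Annexin V-positive events across the gated quadrants; a minimal sketch with hypothetical event counts:

```python
# Sketch: apoptotic rate (%) from Annexin V/PI quadrant counts.
# Quadrants: AV-PI- (viable), AV+PI- (early apoptotic),
# AV+PI+ (late apoptotic), AV-PI+ (necrotic).
# The event counts below are hypothetical; real gating comes from
# the cytometer analysis software.

def apoptotic_rate(counts):
    """Percentage of Annexin V-positive (early + late apoptotic) events."""
    total = sum(counts.values())
    apoptotic = counts["AV+PI-"] + counts["AV+PI+"]
    return 100.0 * apoptotic / total

events = {"AV-PI-": 8200, "AV+PI-": 950, "AV+PI+": 600, "AV-PI+": 250}
print(round(apoptotic_rate(events), 1))
```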
Western blot analysis
Total protein was extracted using the RIPA Lysis and Extraction Buffer and protein concentrations were measured by the BCA Protein Assay reagent kit (both from Beyotime Biotechnology) according to the manufacturer's protocol. Equal amounts of protein samples (30 μg) were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto PVDF membranes. The membranes were blocked with 5% non-fat dry milk in TBST for 2 h at room temperature and then incubated overnight at 4 °C with primary antibodies against IGFBP3, Bcl-2, Bax and GAPDH, followed by incubation with horseradish peroxidase-conjugated secondary antibody for 2 h at room temperature. Blots were visualized with an enhanced chemiluminescent substrate (Thermo Fisher Scientific).
Statistical analysis
All data were analyzed with SPSS 21.0 software (SPSS Inc., Chicago, IL, USA) and expressed as the mean ± SD. Values of p less than 0.05 were considered statistically significant. Quantitative data were analyzed using Student's t test for comparisons between two groups, while one-way ANOVA followed by Tukey's post hoc test was used for multiple comparisons.
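For two-group comparisons of this kind, the equal-variance Student's t statistic can be computed by hand as below (the study used SPSS; the data here are hypothetical cytokine readings):

```python
# Hand-rolled two-sample Student's t statistic (equal-variance form),
# matching the two-group comparison described above.
# The group values below are hypothetical examples.
from statistics import mean, variance

def t_statistic(a, b):
    """Equal-variance two-sample t statistic for groups a and b."""
    na, nb = len(a), len(b)
    # Pooled sample variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

sham = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]   # e.g. TNF-alpha, pg/mL, n = 6
mcao = [9.8, 10.4, 9.5, 10.1, 9.9, 10.6]
t = t_statistic(mcao, sham)
print(round(t, 2))  # a large positive t indicates elevation in the MCAO group
```

In practice the statistic would be compared against the t distribution with na + nb − 2 degrees of freedom to obtain the p value, which SPSS (or scipy.stats.ttest_ind) does directly.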
Down-regulation of miR-19a-3p protected rat brain against cerebral I/R injury
To investigate the potential role of miR-19a-3p in brain I/R injury, rats randomly received an intracerebroventricular injection of miR-19a-3p inhibitor prior to MCAO treatment, with Sham as control. As shown in Fig. 1a, an infarct region was clearly observed in the brains of the MCAO group compared with the Sham group. However, the infarct volume was significantly reduced in MCAO rats treated with the miR-19a-3p inhibitor. Meanwhile, the neurological function deficits in the MCAO + inhibitor group were significantly improved compared to those in the MCAO group in terms of the Longa score (Fig. 1b, n = 6 each group) and the Bederson score (Fig. 1c, n = 6 each group). We further examined the expression of miR-19a-3p associated with cerebral I/R injury. RT-qPCR analysis showed that the expression of miR-19a-3p was significantly increased in the MCAO group compared with the Sham group, but notably decreased after intracerebroventricular injection of miR-19a-3p inhibitor (Fig. 1d). These data indicate that cerebral I/R injury can be ameliorated by miR-19a-3p silencing.
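Relative miRNA expression in RT-qPCR experiments like this one is commonly derived with the 2^(−ΔΔCt) method; the sketch below uses hypothetical Ct values and assumes a small-RNA reference such as U6 (the reference gene is not specified in the text).

```python
# Sketch of the 2^(-delta-delta-Ct) method for RT-qPCR relative expression.
# All Ct values are hypothetical; a reference such as U6 is assumed here,
# as the manuscript does not name the normaliser.

def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of target vs control group, normalised to a reference."""
    d_ct = ct_target - ct_ref                  # sample delta-Ct
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # control delta-Ct
    return 2.0 ** -(d_ct - d_ct_ctrl)          # fold change vs control

# miR-19a-3p in MCAO vs Sham (hypothetical Ct values):
fold = rel_expression(ct_target=24.0, ct_ref=18.0,
                      ct_target_ctrl=26.0, ct_ref_ctrl=18.0)
print(fold)
```

A lower Ct means earlier amplification, i.e. more template, so the two-Ct drop in the target here corresponds to a four-fold up-regulation relative to the control group.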
Down-regulation of miR-19a-3p suppressed inflammation and apoptosis caused by I/R injury
To clarify the downstream mechanism of the protection from cerebral I/R injury mediated by miR-19a-3p knockdown, we analyzed the effects of miR-19a-3p knockdown on inflammation and apoptosis, known indicators of I/R injury. ELISA showed that the massive production of pro-inflammatory cytokines, including TNF-α (Fig. 2a), IL-1β (Fig. 2b), and IL-6 (Fig. 2c), in the MCAO group could be significantly decreased by injection of miR-19a-3p inhibitor. In addition, TUNEL assay showed that more TUNEL-positive cells were observed in brain sections from the MCAO group, whereas miR-19a-3p inhibitor treatment induced a significant decrease in TUNEL-positive cells, which was also reflected by the fluorescence intensity of TUNEL staining (Fig. 2d). Furthermore, a significant decrease in anti-apoptotic Bcl-2 and an increase in pro-apoptotic Bax were observed in the MCAO group compared with the sham group. A significant elevation in Bcl-2 expression and reduction in Bax expression were noticed after miR-19a-3p knockdown in the MCAO group (Fig. 2e). Collectively, these findings indicate that down-regulation of miR-19a-3p could exert a protective role against cerebral I/R injury by suppressing inflammation and apoptosis.
Down-regulation of miR-19a-3p protected SH-SY5Y cells against OGD/R-induced injury
To determine the role of miR-19a-3p in cellular OGD/R injury, an OGD/R cell model was established in SH-SY5Y cells. The expression of miR-19a-3p was measured using RT-qPCR. As shown in Fig. 3a, miR-19a-3p expression in the OGD/R group was significantly higher than that in the normoxia group. Subsequently, the miR-19a-3p inhibitor was successfully transfected into the OGD/R cells, as demonstrated by remarkably reduced miR-19a-3p expression (Fig. 3b). The effects of miR-19a-3p on cell viability, inflammation and apoptosis were then evaluated in the OGD/R cell model. CCK-8 assay showed that cell viability was significantly reduced in OGD/R cells compared to normoxic cells, and that miR-19a-3p knockdown notably alleviated this decrease (Fig. 3c). On the contrary, the levels of TNF-α (Fig. 3d), IL-1β (Fig. 3e), and IL-6 (Fig. 3f) were significantly increased when SH-SY5Y cells were under OGD/R conditions. These observed increases were reversed after transfection of the miR-19a-3p inhibitor into the OGD/R cells (Fig. 3d-f). In addition, OGD/R-induced SH-SY5Y cells exhibited a notable increase in the apoptotic cell population, which was partly abolished by miR-19a-3p silencing (Fig. 3g).
IGFBP3 was a potential target of miR-19a-3p
To explore the mechanisms by which miR-19a-3p silencing modulated inflammation and apoptosis, we identified potential gene targets of miR-19a-3p using the TargetScan program. Among all of the predicted gene targets, IGFBP3 was chosen as a candidate because it is reported to be associated with ischemic stroke. As shown in Fig. 4a, a potential binding site in the 3′-UTR of the IGFBP3 mRNA was identified as being targeted by miR-19a-3p. A dual luciferase reporter assay was then performed to obtain direct evidence that IGFBP3 is a target of miR-19a-3p. It was found that the miR-19a-3p inhibitor significantly increased luciferase activity in SH-SY5Y cells transfected with the WT IGFBP3 reporter, but not in cells transfected with the MUT IGFBP3 reporter (Fig. 4b). Moreover, we observed that the down-regulation of IGFBP3 mRNA and protein expression by OGD/R exposure was significantly reversed by miR-19a-3p inhibitor transfection in SH-SY5Y cells (Fig. 4c-d). In addition, we determined the expression of IGFBP3 in the I/R injury rat model. The results showed that the expression of IGFBP3 mRNA and protein was significantly down-regulated in the MCAO group compared with the Sham group. Notably, miR-19a-3p inhibitor injection remarkably increased the expression of IGFBP3 mRNA and protein in the MCAO group (Fig. 4e-f). These results suggest that miR-19a-3p directly binds to the 3′-UTR of IGFBP3 in OGD/R-induced SH-SY5Y cells.
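TargetScan-style predictions such as the one above rest on complementarity between the miRNA seed (nucleotides 2-8) and the target 3′-UTR. The sketch below derives the expected 7mer-m8 site for miR-19a-3p (mature sequence from miRBase) and searches a hypothetical UTR fragment; the fragment is illustrative, not the actual IGFBP3 3′-UTR.

```python
# Sketch of the seed-pairing logic behind TargetScan-style predictions.
# The mature miR-19a-3p sequence is from miRBase (MIMAT0000073); the UTR
# fragment below is a hypothetical DNA example, not the real IGFBP3 3'-UTR.

MIR_19A_3P = "UGUGCAAAUCUAUGCAAAACUGA"  # mature miRNA, 5'->3'

def seed_site(mirna, utr):
    """Return the DNA 7mer-m8 site complementary to the seed (positions
    2-8 of the miRNA) and its position in the UTR (-1 if absent)."""
    comp = {"A": "T", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]  # nucleotides 2-8
    site = "".join(comp[nt] for nt in reversed(seed))
    return site, utr.find(site)

utr_fragment = "AGCTGACCTTTGCACTGGAATCA"  # hypothetical 3'-UTR DNA
site, pos = seed_site(MIR_19A_3P, utr_fragment)
print(site, pos)
```

Real prediction tools additionally weigh site context (conservation, flanking AU content, position in the UTR), so a raw seed match is necessary but not sufficient.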
IGFBP3 is a functional regulator involved in the protective role of miR-19a-3p silencing against OGD/R-induced injury
To further investigate whether IGFBP3 was involved in miR-19a-3p regulation of OGD/R-induced injury, SH-SY5Y cells were transfected with empty vector, IGFBP3, inhibitor or inhibitor plus si-IGFBP3, respectively, followed by OGD/R exposure. Western blot analysis confirmed that IGFBP3 was up-regulated after transfection of IGFBP3 alone or inhibitor alone, but the elevation of IGFBP3 expression by the inhibitor was abrogated by si-IGFBP3 co-transfection (Fig. 5a). Results from the CCK-8 assay showed that IGFBP3 overexpression significantly improved cell viability in OGD/R-induced cells. However, IGFBP3 knockdown obviously reversed the effects of the miR-19a-3p inhibitor on cell viability (Fig. 5b). On the contrary, ELISA (Fig. 5c-e) and flow cytometry (Fig. 5f-g) further demonstrated that the levels of pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6) and the cell apoptotic rate were significantly decreased by IGFBP3 overexpression, and that the reduction of these pro-inflammatory cytokines and of apoptosis by inhibition of miR-19a-3p was remarkably reversed by IGFBP3 knockdown in SH-SY5Y cells after OGD/R exposure. These findings suggest that inhibition of miR-19a-3p exerts protective roles against OGD/R-induced injury, possibly through up-regulating IGFBP3.

Fig. 2 The effect of miR-19a-3p on pro-inflammatory cytokines and apoptosis in I/R rat brain. The levels of TNF-α (a), IL-1β (b), and IL-6 (c) in the Sham, MCAO, and MCAO + inhibitor groups were measured by ELISA (n = 6 per group). (d) The apoptosis of cortical neurons was evaluated by TUNEL staining. (e-f) Measurement of Bcl-2 and Bax protein levels in Sham, MCAO and MCAO + inhibitor groups using western blotting. The experiments were performed in triplicate and each value represents mean ± SD. ***p < 0.001, compared with Sham; ##p < 0.01, ###p < 0.001, compared with MCAO
Discussion
Neuroinflammation and apoptosis occupy a crucial role in the complicated pathologies that lead to ischemic brain injury and the subsequent reperfusion damage [3][4][5][6][28]. Previous studies have demonstrated that specific miRNAs are considered potential targets against I/R injury. In the present study, RT-qPCR showed that the expression of miR-19a-3p was rapidly increased in brain tissue after I/R. Subsequently, rats were given an intracerebroventricular injection of miR-19a-3p inhibitor, followed by MCAO treatment. According to the results, inhibition of miR-19a-3p effectively reduced brain infarct size and ameliorated neurological deficits. We also found that inhibition of miR-19a-3p significantly decreased the levels of pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6) and apoptosis after I/R injury in vivo. Similarly, miR-19a-3p was identified as a key regulator altered in multiple system atrophy, a rare neurodegenerative disorder [29]. Through bioinformatics analysis, Eyileten et al. [16] consistently revealed that miR-19a-3p might be proposed as a diagnostic and prognostic biomarker in ischemic stroke.

Fig. 3 SH-SY5Y cells were transfected with miR-19a-3p inhibitor or miR-NC, followed by OGD/R exposure. The expression of miR-19a-3p was determined in (a) the Normoxia and OGD/R groups, as well as (b) the OGD/R + miR-NC and OGD/R + inhibitor groups. (c) CCK-8 assay was utilized to analyze cell viability. The levels of TNF-α (d), IL-1β (e), and IL-6 (f) were measured by ELISA. (g) Cell apoptosis was determined with Annexin V/PI double staining followed by flow cytometry. The experiments were performed in triplicate and each value represents mean ± SD. ***p < 0.001, compared with Normoxia; ##p < 0.01, ###p < 0.001, compared with OGD/R + miR-NC
To further confirm the protective role of miR-19a-3p down-regulation against cerebral I/R injury, we constructed the in vitro OGD/R SH-SY5Y model to analyze the effects of miR-19a-3p on cell inflammation and apoptosis. Consistent with most studies on ischemic stroke, SH-SY5Y has frequently been chosen as the most commonly used tool in five models of ischemia-related injury, including oxygen and glucose deprivation, H₂O₂-induced oxidative stress, oxygen deprivation, glucose deprivation and glutamate excitotoxicity, because of its human origin, catecholaminergic neuronal properties, and ease of maintenance [30]. Here, we observed that the miR-19a-3p inhibitor could promote cell proliferation, and suppress the production of pro-inflammatory cytokines (TNF-α, IL-1β and IL-6) and cell apoptosis in OGD/R-induced SH-SY5Y cells. Our results complement nicely a previous report showing that miR-19a-3p inhibited cell proliferation and promoted cell apoptosis in rheumatoid arthritis fibroblast-like synoviocytes [31]. Notably, a recent study by Ge et al.

Fig. 4 IGFBP3 3′-untranslated region (UTR) was directly targeted by miR-19a-3p. (a) Schema of the WT and mutated IGFBP3 3′-UTR indicating the interaction sites between miR-19a-3p and the 3′-UTR of IGFBP3. (b) Dual luciferase assay in SH-SY5Y cells co-transfected with the miR-19a-3p inhibitor and reporter vectors containing either the wild-type or mutated 3′-UTR of IGFBP3. (c) RT-qPCR and (d) western blot analysis were used to determine the expression of IGFBP3 in SH-SY5Y cells transfected with miR-19a-3p inhibitor or miR-NC and then exposed to OGD/R conditions. **p < 0.01, compared with miR-NC; ###p < 0.001, compared with Normoxia. The expression levels of IGFBP3 mRNA (e) and protein (f) were determined in Sham rat brains and I/R rat brains treated with miR-19a-3p inhibitor. The experiments were performed in triplicate and each value represents mean ± SD. ***p < 0.01, compared with Sham; ##p < 0.01, compared with MCAO
[32] reported that elevated miR-19a-3p promoted cerebral ischemic injury by modulating glucose metabolism and neuronal apoptosis. Different from this, our study focused on the effect of miR-19a-3p on neuroinflammation and apoptosis in OGD/R-induced SH-SY5Y cells. Moreover, we found that different cell types may play different roles in brain injury induced by I/R. As demonstrated by Ge et al. [32], the expression level of miR-19a-3p in rat neurons was significantly lower than in astrocytes, and induction of I/R in vivo in astrocytes or OGD in vitro in neuronal cells significantly induced miR-19a-3p expression. Here, we used SH-SY5Y cells as the most commonly used tool in five models of ischaemia-related injury. In addition, another study showed that miR-19a-3p acts as an oncogene in myeloma by promoting cell proliferation/invasion and inhibiting apoptosis [33]. MiR-19a-3p also plays an important role in pancreatic β cell function by enhancing cell proliferation and inhibiting cell apoptosis [34]. These differences in the regulatory functions of miR-19a-3p on cell apoptosis might be ascribed to different disease backgrounds.

Fig. 5 Silenced miR-19a-3p exerted protective roles against OGD/R-induced injury by up-regulating IGFBP3. SH-SY5Y cells were transfected with empty vector, IGFBP3, inhibitor or inhibitor plus si-IGFBP3, respectively, followed by OGD/R exposure. (a) The protein level of IGFBP3 was detected by western blot analysis. (b) Cell viability, (c-e) pro-inflammatory cytokines and (f-g) apoptosis were evaluated in SH-SY5Y cells after the above treatments using CCK-8, ELISA and flow cytometry assays, respectively. The experiments were performed in triplicate and each value represents mean ± SD. **p < 0.01, ***p < 0.001, compared with Vector; ###p < 0.01, compared with inhibitor
Insulin-like growth factor binding proteins (IGFBPs) are a family of proteins that bind insulin-like growth factors and have been identified as useful prognostic biomarkers in various malignancies [35]. Recently, IGFBP3 has been reported to be associated with ischemic stroke, with significantly decreased levels in studies from Schwab et al. [22], Denti et al. [23] and Johnsen et al. [24]. Our data demonstrated that IGFBP3 is regulated by miR-19a-3p at the post-transcriptional level and is a direct target of miR-19a-3p, as shown by luciferase reporter assay. In the in vitro OGD/R SH-SY5Y model, IGFBP3 was significantly decreased, opposite to miR-19a-3p expression. In agreement with our findings, decreased IGFBP3 expression was negatively correlated with miR-27-3p level in blood samples drawn from ischemic stroke patients [36]. Moreover, lower plasma concentrations of IGF-1/IGFBP3 increased the risk of prevalent and incident dementia [37]. Furthermore, we found that IGFBP3 overexpression imitated, while knockdown reversed, the protective effects of miR-19a-3p down-regulation against OGD/R-induced injury, which further indicates that IGFBP3 acts as a downstream effector in the miR-19a-3p-mediated function in the OGD/R SH-SY5Y model. Of course, many other target genes of miR-19a-3p have also been reported, including PITX1 in gastric cancer [38], SOCS3 in pancreatic β cell function [34], and adiponectin receptor 2 (ADIPOR2) in cerebral I/R injury [32]. We believe that more target genes of miR-19a-3p will be explored and confirmed in cerebral I/R injury based on different molecular mechanisms.
Conclusions
In conclusion, we reported for the first time that a mild activation of IGFBP3 by inhibition of miR-19a-3p induced a neuroprotective role in cerebral I/R injury by suppressing inflammation and apoptosis. This study will further enhance our understanding of the inflammatory and apoptotic mechanisms after cerebral I/R injury and also provides a strong experimental basis for targeting IGFBP3 by miR-19a-3p as a potential therapeutic option.
Safety Evaluation of Fermotein: Allergenicity, Mycotoxin Production, Biochemical Analyses and Microbiology of a Fungal Single-cell Protein Product
Original Research Article. Van der Spiegel et al.; EJNFS, 12(10): 146-155, 2020; Article no.EJNFS.63064

Aim: Single-cell proteins (SCPs) are considered innovative and sustainable alternatives to animal-based products. Fermotein is an innovative SCP obtained from fermentation of the filamentous fungus Rhizomucor pusillus. The toxicity, capability to produce secondary metabolites and allergenic potential of this fungus have never been assessed before. As with other filamentous fungi, there is a lack of information on this species to assess its safety for human consumption. The objective of the current study was to investigate the safety of Fermotein and its source Rhizomucor pusillus regarding toxicity, capability to produce secondary metabolites and allergenicity. In addition, possible contaminants were also examined. Methodology: The genome of Rhizomucor pusillus was sequenced and annotated in order to screen for production of common mycotoxins, antibiotic synthesis pathways, mucormycosis-related virulence factors and in silico potential cross-reactivity with known food allergens. The presence of mycotoxins and allergens was validated by laboratory analysis. The levels of RNA, heavy metals and microbiological contaminants were also determined. Results: No mycotoxin production-related genes were identified in the genome of Rhizomucor pusillus, nor were mycotoxins found in Fermotein. Six proteins present in Fermotein showed high homology with five known food allergens. No gene clusters were found that corresponded with antibiotic synthesis pathways. Although 10 proteins in the genome of Rhizomucor pusillus may represent mucormycosis-related virulence factors, no cases of mucormycosis after oral intake have been reported. The levels of heavy metals and microbiological contaminants were below legislative limits, whereas the RNA content was 4.9 ± 0.2% of dry matter.
Conclusion: No safety concerns were identified for Fermotein or its source Rhizomucor pusillus, except the potential for cross-reactivity with five known food allergens. This should be taken into account for communication with consumers. Information from the current study contributes to the body of evidence for determination of Qualified Presumption of Safety status of Rhizomucor pusillus.
INTRODUCTION
Globally, it is expected that the population will reach 9 billion individuals by 2042, which may result in challenges to provide food [1]. Insufficient amounts of animal-based proteins will be available for the high number of people, whereas more consumption will have negative effects on climate change [2]. Therefore, introduction of sustainable alternative protein sources is of major importance [3][4][5][6]. Examples of these protein sources are legumes, duckweed, insects and single-cell proteins [7,8]. Single-cell protein (SCP) refers to protein biomass from microbial sources, including microalgae, bacteria and fungi [9]. More specifically, mycoprotein is the term used for a fungal SCP.
Considering human consumption, not all fungal species are suitable for SCP production [9]. A safety assessment is therefore needed when a new SCP product is placed on the market. In the European Union (EU), a pre-market safety assessment or a history of safe use before 1997 is required under the Novel Food Regulation for novel food products [10]. In the USA, all substances that will be added to food are subject to pre-market approval by the FDA unless such substance is generally recognized as safe (GRAS) among qualified experts under the conditions of its intended use [11].
A well-known market example of mycoprotein is Quorn, obtained from the filamentous fungus Fusarium venenatum [9]. Quorn is considered to be GRAS in the USA for use as food in general except meat products, poultry products and infant formula [12]. It also has a history of safe use as a meat replacer in the EU [9]. History of safe use in the EU before 1997 has been established for other fungal species, including Rhizopus oryzae, Aspergillus sojae and Aspergillus oryzae for production of tempeh (products), soy sauce and as an alternative mineral source in foods and food supplements respectively [13][14][15]. These three fungi are consumed in low amounts in Western countries.
The filamentous fungus Rhizomucor pusillus is a promising microorganism that produces a new food protein source called Fermotein. The fungus has no history of use as a SCP, but it has been used for the production of food enzymes [16][17][18][19][20][21]. Despite their safe use as enzyme-producing fungi, Rhizomucor spp. and other filamentous fungi could not be granted a Qualified Presumption of Safety (QPS) status by the European Food Safety Authority (EFSA) due to insufficient literature information on toxicity, capability to produce secondary metabolites and allergenicity [22][23][24]. Therefore, the safety of Rhizomucor pusillus needs to be assessed in more detail before the biomass Fermotein can be used as a food ingredient in both compressed and powder forms, for broad food applications like bakery products, meat replacers, pasta and fermented milk products.
The objective of the current study is to investigate the potential toxicity, the capability to produce secondary metabolites and the allergenic potential of Fermotein and its source Rhizomucor pusillus. In addition, the levels of chemical and microbial contaminants in Fermotein were investigated. This information contributes to the body of knowledge regarding the safety of Rhizomucor pusillus and its derived products.
Fermotein Production
Fermotein is a SCP product obtained from a wild type filamentous fungus Rhizomucor pusillus. Rhizomucor pusillus cells were plated onto PDA plates and incubated at 46°C for at least 16 hours. A shake flask was inoculated with Rhizomucor pusillus spores from the PDA plate and incubated at 46°C, 180 rpm for at least 16 hours. The shake flask medium was composed of 20 g/L glucose, minerals, tartaric acid as buffer and ammonium sulphate.
For the aerobic, submerged, temperature and pH-controlled fermentation process, 95 DE glucose syrup from maize (C*Sweet™ D 027R3 from Cargill), glucose syrup from maize (Sirodex 321 from Tereos Starch & Sweeteners Europe), dextrose from wheat (Meritose 200 from Tereos Starch & Sweeteners Europe) or cane sugar (Western ruwe rietsuiker) were used as nutrients together with nitrogen sources (Ammonium salts, aqueous NH 3 ) and minerals, while olive oil was used to prevent foaming. The biomass was harvested using a solid liquid separation and further processed by adding antioxidants to prevent oxidation of unsaturated lipids, followed by pasteurisation and dewatering by compression. No solvents, pesticides, antimicrobials or anti-parasitic agents were used during the production process. The biomass was compressed to obtain Fermotein Wet (27 -30% dry weight) and dried to obtain Fermotein Dry (93 -97% dry weight). Fermotein Wet was frozen and stored at -18°C, whereas Fermotein Dry was stored at 20°C. Products were stored under these conditions until analysis.
Raw materials were processed and handled according to general food safety principles, food contaminant requirements and microbiological requirements as laid down in EU regulations. Processing occurred based on ISO standards and quality control checks were performed on the final product.
Five independently produced batches of Fermotein were used for analyses. All analyses were conducted by accredited laboratories (Nutri Control, Veghel, the Netherlands; NutriLab, Giessen, the Netherlands; SYNLAB Analytics & Services Oosterhout B.V., Oosterhout, the Netherlands) and according to validated methods. All analyses were performed with both Fermotein Wet and Dry, except for RNA levels and allergenicity analysis, which were only performed with Fermotein Dry.
Whole-Genome Sequencing (WGS)
The whole genome of Rhizomucor pusillus was de novo sequenced using a combination of PacBio and Illumina technology (Baseclear, Leiden, The Netherlands) and annotated (Biomax Informatics AG, Planegg, Germany). Sequence data were used to screen for the presence of genes encoding proteins (enzymes, transporters etc.) of mycotoxin production pathways, mucormycosis-related virulence factors and antibiotic synthesis pathways. A Blast search was performed in 2020 according to Altschul et al. [25] to investigate the presence of mycotoxin production-related genes in the genome. A keyword algorithm was used to search sequence data (Pedant Pro platform, Biomax AG) for mucormycosis-related virulence factors.
The keywords 'virulence' and 'mucormycosis' or 'mucormycosi' were used. The Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database (https://www.genome.jp/kegg) was used to investigate the presence of biosynthetic pathways of 12 main classes of antibiotics in Rhizomucor pusillus.
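The two-stage keyword screen described above can be sketched as a simple filter over annotation records. The annotation strings below are invented placeholders, not actual Pedant Pro output; only the filtering logic is illustrated.

```python
# Sketch of the two-stage keyword screen over genome annotation entries:
# first select entries containing 'virulence', then keep those that also
# contain a mucormycosis-related keyword. Records here are toy examples.

def keyword_screen(annotations, first, second):
    """Return (hits, refined): entries matching `first`, and the subset
    also matching any keyword in `second` (case-insensitive substring)."""
    hits = [a for a in annotations if first.lower() in a.lower()]
    refined = [a for a in hits
               if any(k.lower() in a.lower() for k in second)]
    return hits, refined

annotations = [
    "putative virulence factor, mucormycosis-associated protein",
    "virulence-related ADP-ribosylation factor",
    "glycolysis enzyme, hexokinase",
]

hits, refined = keyword_screen(annotations, "virulence",
                               ["mucormycosis", "mucormycosi"])
print(len(hits), len(refined))  # 2 entries mention virulence, 1 also mucormycosis
```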
Allergenicity Testing
The translated predicted open reading frames (ORFs) of R. pusillus genome, obtained from the WGS, were used to screen for amino acid sequence homology with known food allergens registered in the Allergen Online database (http://www.allergenonline.org) to predict in silico potential cross-reactivity. In case of a high sequence homology, a mRNA analysis was performed on Fermotein Wet using reverse transcriptase polymerase chain reaction (RT-PCR) to confirm the presence of the corresponding gene transcript in the end product. Subsequently, the presence of the protein was confirmed using liquid chromatography-mass spectrometry (LC-MS/MS; Proteome Factory AG, Berlin, Germany). The protein extraction of Fermotein was performed using a sequential extraction method as described by Broekman et al. [26].
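In silico allergen screening is commonly based on a sliding-window identity criterion (e.g. more than 35% identity over any 80-amino-acid window, following the Codex Alimentarius guideline). Whether AllergenOnline applied exactly these thresholds here is an assumption, and real screens use full FASTA alignments rather than the same-offset windows of this simplified sketch:

```python
# Illustrative sliding-window homology criterion for allergen screening.
# Simplification: ungapped comparison at identical offsets only; real
# tools align query and allergen with FASTA before windowing.

def max_window_identity(query, allergen, window=80):
    """Highest percent identity over any aligned window of `window` residues."""
    n = min(len(query), len(allergen))
    w = min(window, n)
    best = 0.0
    for start in range(n - w + 1):
        matches = sum(q == a for q, a in zip(query[start:start + w],
                                             allergen[start:start + w]))
        best = max(best, 100.0 * matches / w)
    return best

def flags_cross_reactivity(query, allergen, threshold=35.0):
    """Flag potential cross-reactivity when any window exceeds the threshold."""
    return max_window_identity(query, allergen) > threshold
```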
Data Analysis
All data are presented as means ± SDs (standard deviations) of five representative batches of either Fermotein Wet or Dry. Microbial data are provided for each batch of Fermotein Wet and Dry. SDs are not provided when values were below the detection limit. Measured concentrations of contaminants and other components were compared to advisory or legislative limits for human food from the USA and EU if available. It should be noted that limits are occasionally general and not specific for a single-cell protein product. Therefore, the strictest limits for food products with similar food applications were used, except those specifically intended for infants.
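The batch bookkeeping described above amounts to computing a mean and SD over the five batches and comparing against the strictest applicable limit. The batch values and the limit below are illustrative placeholders, not Fermotein measurements:

```python
# Sketch of the batch-analysis summary: mean +/- SD over five batches,
# checked against the strictest applicable limit. Numbers are invented.
import statistics

def summarize(batches, limit=None):
    """Return (mean, sd, compliant); sd is None for a single value,
    compliant is None when no limit applies."""
    mean = statistics.mean(batches)
    sd = statistics.stdev(batches) if len(batches) > 1 else None
    compliant = None if limit is None else mean <= limit
    return mean, sd, compliant

lead_ug_per_kg = [12.0, 10.5, 11.2, 9.8, 11.0]  # hypothetical batch results
mean, sd, ok = summarize(lead_ug_per_kg, limit=50.0)
```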
Mycotoxin Production
No mycotoxin production-related genes were identified in the genome of Rhizomucor pusillus in the Blast search. Results were confirmed by the analyses of common mycotoxins in Fermotein (Table 1). Concentrations of all mycotoxins were below the detection limit, and well below advisory levels and legislative limits in the USA and EU, respectively. The levels of diacetoxyscirpenol, nivalenol, HT2-toxin, T2-toxin comply with the EU Tolerable Daily Intake (TDI values) when the anticipated intake is applied.
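The TDI comparison mentioned above reduces to a simple exposure calculation: estimated daily exposure equals concentration times anticipated intake divided by body weight, checked against the tolerable daily intake. All numbers in this sketch are illustrative placeholders, not values from the study:

```python
# Hedged sketch of an intake-vs-TDI check. Concentration is in ug per kg
# of food, intake in g of food per day, TDI in ug per kg body weight per
# day; the 70 kg default body weight is a conventional assumption.

def exposure_ug_per_kg_bw(conc_ug_per_kg_food, intake_g_per_day,
                          body_weight_kg=70.0):
    """Daily exposure in ug per kg body weight."""
    return conc_ug_per_kg_food * (intake_g_per_day / 1000.0) / body_weight_kg

def within_tdi(conc, intake_g, tdi_ug_per_kg_bw, bw=70.0):
    """True when the estimated exposure does not exceed the TDI."""
    return exposure_ug_per_kg_bw(conc, intake_g, bw) <= tdi_ug_per_kg_bw
```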
The analysis of mycotoxins in Fermotein confirms the in silico prediction that Rhizomucor pusillus does not produce mycotoxins. This is in line with previous studies investigating the production of mycotoxins and other secondary metabolites by micromycetes growing on food raw materials and plant-based products. Paterson et al. [30] and Lugauskas [31] reported no mycotoxin production by Rhizomucor pusillus. Mycotoxin production is therefore not considered a safety concern for Fermotein.
Allergenicity
In silico analysis revealed that six proteins from Fermotein show high homology with known food allergens. mRNA analysis showed that all six genes encoding the potentially allergenic proteins are actively expressed during the fermentation of the fungus, and proteomic analysis confirmed that the six proteins are actually present in the final product. This means that there is a chance of cross-reactivity with salmon, tuna, chicken, pistachio nut, carrot and some shrimp and crab species. Chicken and carrot are not regarded as food allergens in either the EU or the USA.
Fungi are known to elicit allergic responses through inhalation of spores. Among the eight phyla of fungi, three are associated with the production of known allergens, including Zygomycota [32]. However, among the Zygomycota only allergens from Rhizopus species have been officially characterized [32,33]. Spore formation, and consequently allergic reactions, are not considered a concern for Fermotein production: spores are generally not produced under the fermentation conditions applied, laboratory analysis showed that the fungus is inactivated during the production process and not viable, and no spores are present at the end of the production process. In addition, to our knowledge, no case reports of allergic reactions or sensitization to Rhizomucor pusillus have been described. The in silico, mRNA and proteomic analyses did, however, show potential cross-reactivity for five known food allergens present in Fermotein. This potential risk of cross-reactivity should be clearly communicated to consumers via labelling. Furthermore, post-market monitoring is necessary to follow the introduction of de novo sensitizations, given the novelty of the food product.
Mucormycosis-related Virulence Factors and Antibiotic Synthesis Pathways
Invasive infections in humans, known as mucormycosis, can be induced by different fungal species. WGS data and its annotated proteins were used to assess the potential of Rhizomucor pusillus to induce mucormycosis. The keyword 'virulence' resulted in 189 entries, of which 10 contained the words 'mucormycosis' or 'mucormycosi'. These 10 proteins may represent mucormycosis-related virulence factors encoded by the genome of Rhizomucor pusillus.
In the literature, fewer than 40 cases of human infection caused by Rhizomucor pusillus have been reported [34,35]. Most of the cases were associated with profound neutropenia and leukemia in the host. Indeed, mucormycosis primarily occurs in immunocompromised subjects via inhalation of spores [36,37]. There is no evidence of mucormycosis caused by ingestion of foods containing fungi, which makes it highly unlikely that invasive infections can occur via consumption of Fermotein. Since cases of mucormycosis caused by Rhizomucor pusillus have been identified, it cannot be excluded that the 10 proteins identified in the genome represent mucormycosis-related virulence factors. However, due to the validated pasteurisation step in the production process of Fermotein, Rhizomucor pusillus is not viable at the end of the production process and spores are inactivated. Therefore, mucormycosis is not considered a safety issue for Fermotein.
None of the 12 major antibiotic biosynthetic pathway gene clusters were found in the genome of Rhizomucor pusillus. It has been reported that other fungi used for food production, such as Aspergillus oryzae, may produce antibacterial metabolites [38]. However, Zygomycetes are not capable of antibiotic production according to literature [39]. Results of the genome screening support this statement and antibiotics production is therefore not considered a safety concern for Fermotein.
Heavy Metals and RNA Content
Concentrations of arsenic, cadmium, lead, and mercury were low and well below EU legislative limits (Table 2). No limits for human food categories with similar food applications compared to Fermotein were identified for the USA.
Filamentous fungi are known for their capability to absorb heavy metals and minerals [40]. Although raw materials used in the production of Fermotein are processed according to international standards and requirements, traces of heavy metals could be introduced. Results show that concentrations of the analysed heavy metals are below limits and therefore do not pose a safety threat for the consumption of Fermotein. The accumulation of minerals (data not shown) was also not considered to be a safety issue.
The average concentration (n = 5) of RNA in Fermotein is 4.9 ± 0.2% of dry matter. RNA content is expected to be high in fungal SCP (7 -10%), which is of concern when used for human consumption since high purine intake can affect health negatively [9]. Compared to those numbers, RNA content of Fermotein is relatively low.
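The measured RNA fraction translates directly into a maximum advisable daily intake once a cap on nucleic acid intake from SCP is assumed. The 2 g/day figure below is a commonly cited guideline value and is used here purely for illustration, not a limit stated in this study:

```python
# Back-of-the-envelope: maximum daily Fermotein Dry intake implied by an
# assumed cap on RNA intake from single-cell protein. The 2 g/day cap is
# an assumption (an often-cited guideline), not a measured or legal limit.
rna_fraction = 0.049        # 4.9% of dry matter (measured average, n = 5)
rna_cap_g_per_day = 2.0     # assumed guideline value

max_dry_matter_g = rna_cap_g_per_day / rna_fraction
print(round(max_dry_matter_g, 1))  # ~40.8 g dry matter per day
```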
Microbiological Contamination
Concentrations of microorganisms that could cause a food safety hazard or adversely affect shelf life, were low in all batches of Fermotein Wet and Dry (Table 3), indicating that the pasteurisation processing step and the storage conditions of Fermotein assure that microbial contamination is not a concern.
Legislative limits in the EU are set for specific foods and foodstuffs, but SCP products are not included in any food category [43]. The microorganisms were either absent or their levels were within generally accepted standards for food ingredients. Some colony-forming units of the spore-forming Bacillus cereus were detected. In the EU, limits are only set for strictly controlled food products, such as baby formulae and foods for special medical purposes, where the maximum allowed concentration is 500 cfu/g for 1 out of 5 batches [43]. EFSA has suggested that producers of new products should ensure that 10³-10⁵ cfu/g are not reached at the stage of consumption [44]. Levels of Bacillus cereus are well below this threshold after production, provided that handling and storage are controlled. Therefore, microbiological contamination poses no safety risk for human consumption of Fermotein with the current production process and quality control checks in place.
CONCLUSIONS
Fermotein is a SCP product obtained from the filamentous fungus Rhizomucor pusillus, for which no safety data or QPS status are currently available. Our studies identified no safety concerns for Fermotein or for Rhizomucor pusillus as its source, except for a potential cross-reactivity with a few known food allergens. The risk of cross-reactivity can, however, be communicated to consumers via labelling. Due to the absence of legislative limits, the RNA content should be taken into account when determining maximum use levels of Fermotein. When labelling for potential cross-reactivity is not preferred, or to ensure that a certain group of consumers is excluded, allergenicity testing is an option to investigate the actual potential of cross-reactivity of Fermotein. Post-market monitoring should follow de novo sensitisations. Based on the current data on Fermotein and the controlled production process, there is no necessity to perform toxicity studies.
Addition to the QPS list would be preferable, but it is not a prerequisite for market authorisation. Fungi with a history of safe use (Aspergillus oryzae, Aspergillus sojae and Rhizopus oryzae) are likewise not on the QPS list. The information from this study contributes to the body of knowledge that can be used to assess the QPS status of Rhizomucor pusillus and derived products.
"year": 2020,
"sha1": "a6f33e7419cf8fafc11d2463c4a7c2a1771078c3",
"oa_license": null,
"oa_url": "https://www.journalejnfs.com/index.php/EJNFS/article/download/30311/56870",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d8fea8c13325ff5a771c62fd28287360b8f14caa",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Transport properties of the hot quark-gluon plasma
A phase where quarks and gluons are the relevant degrees of freedom is expected for nuclear matter at energy density ε ≥ 1 GeV/fm³ and a temperature T > 160 MeV. A transient state of such matter can be created by means of ultra-relativistic heavy-ion collisions. We briefly overview some main results on the properties of the quark-gluon plasma, emphasizing the necessity to develop a transport theory for quarks and gluons able to incorporate the main developments in lattice QCD and perturbative QCD. First results show that Boltzmann-Vlasov transport theory correctly predicts the elliptic flow observed at both RHIC and LHC energies.
Introduction
The study of the fundamental theory of strong interactions, Quantum Chromo Dynamics (QCD), under extreme conditions of temperature and density has been one of the most challenging problems in physics during the last 20 years, capturing increasing experimental and theoretical attention. There are several reasons underlying such a vivid interest. QCD is a quantum field theory with an extremely rich dynamical content (asymptotic freedom, confinement, chiral symmetry, nontrivial vacuum, ...). Moreover, heavy-ion collisions at ultrarelativistic energies (√s > 10 AGeV) provide the unique possibility to create a transient state of matter at energy densities and temperatures similar to those of the Universe in the first 10⁻⁶-10⁻⁵ s after the Big Bang, when the most dramatic event of the first second was the quark-to-hadron phase transition, associated to a reduction in the number of degrees of freedom by about a factor of three [1]. Finally, in recent years the discovery of a duality between gauge and string theory has led to the development of a new field of intense research [2].
In 1965 Hagedorn conjectured the existence of a limiting temperature T_c ∼ 160 MeV for the hadronic system, due to an envisaged exponential increase of the density of hadronic states. The existence of a matter made of quarks in a deconfined state was suggested for the first time in 1975 by Cabibbo and Parisi [3], soon after the Nobel Prize paper on the asymptotic freedom of non-abelian gauge theory [4]. They pointed out that the so-called Hagedorn limiting temperature was associated to a divergence in the hadronic gas partition function, and hence a sign of a phase transition to quark matter. However, first evidence of the possibility to realize a transient state of such matter by means of heavy-ion collisions came only in the 1990s, thanks to the SPS facility able to realize heavy-ion collisions up to √s = 17 AGeV. It was however only with the RHIC project conducted at Brookhaven National Laboratory (BNL) that it became possible to create a quark-gluon plasma (QGP) phase (expected at energy density ε_c > 1 GeV/fm³) lasting for about 4-5 fm/c at a maximum initial temperature T ∼ 2 T_c: a temperature and a duration sufficiently long for strong interactions to have made possible several discoveries about the properties of the QGP and its hadronization [5,6].
The present knowledge about the properties of the QGP is mainly based on three complementary sources: QCD calculations on the lattice (lQCD) and in the perturbative regime (pQCD), theoretical and phenomenological models, and empirical information from heavy-ion experiments. The lQCD computations have clearly shown that a phase transition occurs at energy density ε_c ∼ 1 GeV/fm³ and temperature T_c ∼ 160 MeV, being most likely a cross-over in the case of realistic quark masses. Furthermore, the energy density and entropy density reach about 80% of their ideal-gas value already relatively close to the critical temperature T_c, but the full value is reached only at asymptotically large temperatures. Even more interestingly, a large deviation from non-interacting-gas behavior is found in the large trace anomaly T^μ_μ = ε − 3P up to T ∼ 2 T_c, indicating a system far from a mere gas of quarks and gluons and hinting at a strongly interacting one. The development of pQCD calculations at high temperature has shown a slow convergence with both temperature and quark mass: one must go to temperatures T > 3-4 T_c [7], or to masses larger than m_c ∼ 1.3 GeV [8], for a higher-order pQCD scheme to be applicable.
Our focus in these Proceedings is on the phenomenological models, and in particular on the development of a transport theory for quarks and gluons able to embed the information coming from lQCD and pQCD on the one hand, and to provide a tool for a direct comparison with the experimental observables on the other.
Main results at RHIC
The theoretical and experimental efforts around RHIC in the last decade have allowed a first important breakthrough in the knowledge of the properties of the QGP, at least up to T ∼ 2 T_c. It became soon quite clear that a new state of matter had been created, and there are several novel discoveries and results. We will discuss some of them, focusing on two main types of observables that have allowed a first survey of the QGP. The first observable, called the nuclear modification factor, provides a measure of the modification of the hadron momentum spectra in ion-ion (AA) collisions with respect to pp collisions, through the ratio of the respective spectra rescaled by the number of collisions N_coll according to a geometrical Glauber model:

R_AA(p_T) = (dN_AA/dp_T) / (N_coll · dN_pp/dp_T).

It is clear that R_AA = 1 means that AA collisions are merely a superposition of nucleon-nucleon collisions. First observations at RHIC, and more recently at LHC, have shown R_AA(p_T) ∼ 0.2 for most central collisions (see figure 2 for the case of heavy quarks), corresponding to a strong interaction of the initially created partons, as one would expect if a QGP fireball has really been created. The other main observable that allows to characterize several properties of the QGP medium is the elliptic flow. Its origin is the initial space eccentricity of the QGP coming from the non-central overlap of the colliding ions, quantified by ε = ⟨y² − x²⟩/⟨y² + x²⟩. Due to the pressure gradients of the QGP, such a space eccentricity is converted into an anisotropic momentum distribution with respect to the azimuthal angle φ_p, which can be expressed by means of a Fourier expansion:

dN/dφ_p ∝ 1 + 2 Σ_n v_n(p_T) cos(n φ_p),

where the first coefficient v_1 vanishes on average due to the space symmetry of the system, while the second coefficient v_2, namely the elliptic flow, is the dominant one. Both at RHIC and LHC values of v_2 up to about 0.25 have been observed (see figure 1), which means that in the φ_p = 0 direction the abundance of hadrons can be about three times larger than at φ_p = π/2.
It is clear that this is a strong effect, carrying information on both the EoS and the shear viscosity of the QGP.
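The elliptic flow coefficient is simply the second azimuthal Fourier moment, v_2 = ⟨cos 2φ_p⟩. A minimal numerical sketch of its extraction, using a toy anisotropic distribution rather than real data:

```python
# v2 as the second azimuthal Fourier moment, <cos(2*phi)>, estimated from
# sampled particle angles. The sampling below draws from a toy distribution
# dN/dphi ~ 1 + 2*v2*cos(2*phi); it is not experimental data.
import math
import random

def v2_from_angles(phis):
    """Estimate v2 = <cos(2*phi)> from a list of azimuthal angles."""
    return sum(math.cos(2.0 * p) for p in phis) / len(phis)

def sample_phi(v2_true, rng):
    """Accept-reject sampling of dN/dphi ~ 1 + 2*v2*cos(2*phi)."""
    envelope = 1.0 + 2.0 * v2_true  # maximum of the target distribution
    while True:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.uniform(0.0, envelope) <= 1.0 + 2.0 * v2_true * math.cos(2.0 * phi):
            return phi

rng = random.Random(0)
phis = [sample_phi(0.2, rng) for _ in range(20000)]
estimate = v2_from_angles(phis)  # close to the input v2 = 0.2
```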
We now briefly focus on three main results relevant to the study of the transport properties of the QGP and its hadronization: • The QGP is a nearly perfect fluid with very low viscosity - The plasma created at temperatures T ∼ 200-300 MeV and small baryon chemical potential exhibits a nearly perfect fluid behavior in the bulk of the system, opposite to the asymptotic freedom expectations. Such a statement is mainly corroborated by the observation of large anisotropic flows that develop due to the initial anisotropy of the created fireball. Similarly to what has been observed in the same years for ultra-cold trapped atoms [9], the elliptic flow has values close to the ideal hydrodynamical predictions. A first estimate would suggest η/s ≤ 0.4, very close to the lower bound conjectured for supersymmetric gauge theories in the infinite coupling limit [2] and to that suggested by quite general quantum-mechanical considerations [10]. What remains to be determined is the value of η/s of what could be the most ideal fluid ever observed, and in particular its microscopic origin. As for the latter, a possible explanation is the presence of quark-antiquark resonances reminiscent of hadronic-like or gluonic states [8], or a more subtle competition between electrically charged quasiparticles (quarks and gluons) and magnetically charged ones (magnetic monopoles) [11]. Lattice results trying to identify and isolate these objects and their contribution to thermodynamics are also becoming available [12]. In the next Section we will describe in more detail the issue of the QGP shear viscosity. • Hadronization is modified with respect to the vacuum one - The statement is justified by the fact that the ratio of baryons to mesons is up to a factor of 4 larger than the one in pp collisions in the intermediate range p_T ∼ 2-6 GeV.
The most convincing explanation of this phenomenon is that most of the hadrons come from a coalescence of the quarks in the plasma [13,14]. The basic idea is that, instead of popping up quarks from the vacuum as in the standard fragmentation picture, one can hadronize by combining the quarks of the medium. In such a picture, calling f_q(p) the (anti-)quark distribution function, the spectrum of a hadron H is given by two different mechanisms, schematically

dN_H/d³p_H = Σ_q D_{q→H}(z) ⊗ dN_q/d³p_q + g_H ∫ dΣ(n) Π_{i=1..n} f_q(p_i) Φ_H(p_1, ..., p_n),    (3)

where the first term represents the standard fragmentation contribution while the second is the coalescence one. Here Σ(n) is the n-particle phase space, D(z) is the fragmentation function giving the probability that a parton q will give a hadron H of momentum p_H = z p_q, and Φ_H is the hadron wave function. It has been shown in several works that fragmentation dominates parametrically at high p_T [15,16,13], but with the expected density and temperature of the quark plasma there is a dominance of quark coalescence up to p_T ∼ 5-6 GeV. It is easy to understand that in a coalescence process baryons can be produced more abundantly than in fragmentation: the quarks are already present and, because the distribution functions at low p_T are exponentials, f_q ∼ e^{−p/T}, combining n quarks each carrying p_T/n in the integrand of Eq. (3) is favoured over fragmenting a single hard parton into hadronic states. This can therefore account for the observed enhancement of the baryon-over-meson ratio shown in figure 1 for proton/pions (squares) and lambda/kaons (circles); the solid lines are the corresponding predictions of a coalescence plus fragmentation model [14,15,17]. A coalescence mechanism brings with it another and even more peculiar feature, namely the scaling of the elliptic flow with the number of constituent quarks, a property firmly observed at RHIC and more recently also at the LHC energy. The understanding of this quark number scaling (QNS) is straightforward in the simplified version of collinear quarks coalescing with momenta p_H/n_q. Each quark distribution can be written as

f_q(p_T, φ_p) = f_q(p_T) [1 + 2 v_{2q}(p_T) cos(2φ_p)].

Under this approximation, substituting in the second term of Eq. (3), it can be easily shown that

v_2^H(p_T) ≃ n_q v_{2q}(p_T/n_q).

In figure 1 (right) it is shown how this coalescence scaling is able to predict, for example, the kaon and Lambda v_2(p_T) once the quark v_{2q}(p_T) has been fixed by fitting the pion v_2.
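The constituent-quark scaling can be illustrated numerically. The quark-level v_2 curve below is a toy saturating parameterization, not a fit to data; only the scaling relation v_2^H(p_T) = n_q v_{2q}(p_T/n_q) is taken from the text:

```python
# Number-of-constituent-quark scaling sketch: hadron elliptic flow from a
# parton-level curve via v2_H(pT) = n_q * v2_q(pT / n_q). The quark-level
# parameterization is a toy choice for illustration.

def v2_quark(pt):
    """Toy saturating quark-level v2(pT) curve (illustrative only)."""
    return 0.08 * pt / (1.0 + 0.5 * pt)

def v2_hadron(pt, n_quarks):
    """Hadron v2 from constituent-quark scaling."""
    return n_quarks * v2_quark(pt / n_quarks)

# At the same pT per quark, the baryon/meson v2 ratio is 3/2:
ratio = v2_hadron(3.0, 3) / v2_hadron(2.0, 2)
```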
There are several other observables that appear to be consistent with a quark coalescence mechanism, like triggered angular correlations, charge fluctuations, and the R_AA and v_2 of heavy mesons with charm and bottom quarks. Of course, what has been briefly described is a quite simplified version of realistic coalescence models, which also have to include coalescence in r-space, the possibility of quarks with different momenta, the radial flow, the feed-down from resonance decays, the contribution from higher Fock states, and so on. A review of this aspect of QGP physics can be found in Refs. [13,14]. The lesson is that the microscopic scale is important, and the specific mechanism of hadronization can modify the observables when going from the partonic to the hadronic case. • Heavy quarks strongly interact with the medium - The trivial expectation was that, due to the large mass with respect to the plasma temperature, m_Q >> T, and a presumably perturbative behavior due to m_Q >> Λ_QCD, their in-medium interaction would be relatively weak and the relaxation time of heavy quarks much larger than the light-quark one. Furthermore, the main mechanism responsible for the in-medium energy loss, gluon bremsstrahlung, should be suppressed by a dead-cone effect in the gluon radiation. Despite such pQCD expectations, the experimental data [18] revealed a strong suppression of the spectra, i.e. a small R_AA, and a quite large elliptic flow, both nearly comparable with the light-quark ones, see figure 2. Models based on jet quenching or upscaled pQCD corrections failed to explain the measured R_AA(p_T) and v_2(p_T). Only a non-perturbative approach to the heavy-quark dynamics, based on the solution of the T-matrix scattering with a potential derived from lQCD, has been capable of accounting for the data [19,20]. The main ingredients are the presence of a resonant scattering, which leads to a peak in the imaginary part of the T-matrix especially in the color singlet channel, and again the presence of a coalescence mechanism for hadronization.
In figure 2 the prediction of the T-matrix approach is shown by the solid line, while the dashed line shows the result if one discards the coalescence mechanism and assumes hadronization only by parton fragmentation. Predictions for the LHC also appear to be quite successful, showing a fairly good agreement with early results from the ALICE Collaboration [21,22].
We have briefly discussed three main surprising and relevant discoveries at RHIC; of course, there are several other aspects that could be discussed, such as the evidence of a strong jet quenching at high momenta, first possible signs of Color Glass Condensate matter, and the confirmation of the enhancement of strangeness. We note that the main findings about the properties of the quark-gluon plasma, both in the light- and heavy-quark sector, ask for the development of a transport theory of quarks and gluons, as we discuss in more detail in the next section.
Transport Theory for the Quark-Gluon Plasma
A first comparison with preliminary data showed an agreement of both the p_T spectra and the elliptic flow of different hadrons with the predictions of ideal hydrodynamics [24]. This led to the announcement of the creation of an almost ideal fluid. Nonetheless, thanks to a more accurate comparison, it has been found that dissipative effects cannot be neglected, and even a small shear viscosity to entropy ratio η/s produces sizeable effects, increasing with the transverse momentum p_T of the particles [25]. This has triggered a lot of activity in developing a relativistic theory of viscous hydrodynamics. The basic idea is to add a dissipative part to the energy-momentum tensor by means of a first-order expansion in the space-momentum gradients,

T^{μν} = T^{μν}_{ideal} + π^{μν},   π^{μν} = η (∇^μ u^ν + ∇^ν u^μ − (2/3) Δ^{μν} ∂_α u^α),

where Δ^{μν} ≡ g^{μν} − u^μ u^ν and ∇^μ ≡ Δ^{μν} ∂_ν. However, viscous corrections to ideal hydrodynamics are indeed large, and a simple relativistic extension at first order, the so-called Navier-Stokes theory, is affected by causality and stability pathologies [26]. It is therefore necessary to go to second order in the gradient expansion; in particular, the Israel-Stewart theory has been implemented to simulate the RHIC collisions, providing an upper bound η/s ≤ 0.4 [27]. Such an approach, apart from the present limitation to 2+1D simulations, has the more fundamental problem of a limited range of validity in η/s and in transverse momentum, for p_T > 1.5 GeV. In this p_T region viscous hydrodynamics loses its validity, because the relative deviation from the equilibrium distribution function, δf/f_eq, increases (probably like p_T²), becoming large already at p_T ≥ 3T ∼ 1 GeV. In fact, viscous terms have two main effects: one is the dissipative correction to the flow velocity u^μ(x) and to the density and temperature evolution; the other is the non-equilibrium correction to the distribution function, f → f_eq + δf.
It has to be realized, however, that there is no biunivocal correspondence between the non-equilibrium distribution and the non-equilibrium energy-momentum tensor. Therefore hydrodynamics cannot determine δf, and an ansatz has to be chosen; typically the quadratic Grad ansatz is used,

δf = f_eq (1 ± f_eq) p^μ p^ν π_{μν} / [2 (ε + P) T²].

In this context, the development of a more complete transport theory for quarks and gluons appears important: it would have a wider range of validity, recovering hydrodynamics as a limiting case. This is relevant not only for the issue of the viscosity of the QGP, but more in general also because a transport theory has a microscopic scale that can be essential to treat consistently the hadronization mechanism. Furthermore, heavy quarks significantly deviate from full thermalization and cannot be described by viscous hydrodynamics, while they can be self-consistently included in the transport theory and treated on an equal footing with the light quarks.
We are therefore developing a relativistic Boltzmann-Vlasov transport theory for on-shell particles [28,29,30]. Such a transport approach has the advantage of being a 3+1D approach, not based on a gradient expansion in viscosity, that is valid also for large η/s and for out-of-equilibrium momentum distributions, allowing a reliable description also of the intermediate-p_T range where the important property of quark number scaling (QNS) of v_2(p_T) has been observed [13].
Furthermore, the Boltzmann-Vlasov transport theory distinguishes between the short-range interaction associated to collisions and the long-range interaction associated to the field, responsible for the change of the Equation of State (EoS) with respect to that of a free gas. This last feature allows to unify two main ingredients that are relevant for the formation of collective flow. In ideal hydrodynamics the v_2(p_T) depends only on the EoS, namely on the sound velocity c_s² = dP/dε, while the mean free path λ is assumed to be vanishing. In the parton cascade approach the EoS is fixed to be the one of a free gas, c_s² = 1/3 = P/ε, while the mean free path λ = 1/ρσ is finite. In the first stage of RHIC the two different approaches were able to account for the large v_2 observed; in particular, the parton cascade with large scattering cross section predicted the saturation of v_2 vs p_T [31]. Anyway, once the viscosity is finite, both a finite λ and the EoS are important for the generation of the momentum anisotropies, and this is naturally present in the Boltzmann-Vlasov transport approach. The basic transport equation for the (anti-)quark phase-space distribution function f_±, for the case of a mean-field interaction that generates massive quasi-particles, can be written as

[p^μ ∂_μ + m(x) ∂_μ m(x) ∂^μ_p] f_±(x, p) = C[f](x, p),

where the first term is related to the free streaming, the second term represents the effect of a scalar field modifying the ε = 3P relation (giving a finite interaction measure), and the right-hand side is the effect of the collisions, directly associated to a finite λ and therefore to a finite η/s. The collision term, if only two-body collisions are considered, can be written as

C[f](x, p) = ∫_2 ∫_1' ∫_2' (f_1' f_2' − f f_2) |M_{1'2' → 12}|² δ⁴(p + p_2 − p_1' − p_2'),

where ∫_j = ∫ d³p_j / [(2π)³ 2E_j], M denotes the transition matrix for the elastic processes and f_j are the particle distribution functions.
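The competition between free streaming and collisions can be illustrated with a relaxation-time toy model, a crude stand-in for the full two-body collision integral: df/dt = −(f − f_eq)/τ, where τ ∼ 1/(ρσ) plays the role of the mean collision time. This is only a pedagogical sketch, not the approach of the paper:

```python
# Relaxation-time toy for the collision term: df/dt = -(f - f_eq)/tau.
# This is a stand-in for the full 2<->2 collision integral; tau and the
# distribution values are illustrative numbers only.
import math

def relax(f0, f_eq, tau, t):
    """Exact solution of df/dt = -(f - f_eq)/tau with f(0) = f0."""
    return f_eq + (f0 - f_eq) * math.exp(-t / tau)
```

At t = 0 the solution returns the initial distribution, and for t >> τ it approaches the equilibrium value, mirroring how a finite mean free path drives the system toward local equilibrium.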
The relevance of the transport equation for quasi-particles with a space-time-dependent mass resides in the success of quasi-particle models in correctly describing the behavior of the energy density and pressure of the QGP as computed in the lQCD approach.
Quasiparticle model
A successful way to account for non-perturbative dynamics is a quasi-particle approach, in which the interaction is encoded in the quasi-particle masses.
The model is usually completed by introducing a finite bag pressure that can account for further non-perturbative effects and could be directly linked to the gluon condensate at least in the pure gauge case [32]. It is already well known that, in order to be able to describe the main features of lattice QCD thermodynamics, a temperature-dependent mass has to be considered. This also implies that the bag constant has to be temperature-dependent, in order to ensure thermodynamic consistency.
The temperature-dependent effective masses for quarks and gluons can be evaluated in a perturbative approach, which suggests the following relations [33]: where n_f is the number of flavors considered, N_c is the number of colors, and m_u,d is the mass of the light quarks. The coupling g is generally temperature-dependent. However, as mentioned in the introduction, the calculation of such a T-dependence by means of perturbation theory does not yield a good description of lattice QCD thermodynamics. Therefore, g(T) is usually left as a function to be determined through a fit to lattice QCD data.
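Since the explicit mass formulas are not reproduced above, the sketch below uses one commonly quoted perturbative (HTL-inspired) parameterization in which both masses scale as m ∝ g(T) T; the exact coefficients of the relations cited in [33] may differ, and the light-quark mass contribution m_u,d is neglected here for simplicity.

```python
import numpy as np

def quasiparticle_masses(g, T, n_f=3, N_c=3):
    """Thermal quasi-particle masses in one standard perturbative form
    (an illustrative assumption; the paper's exact coefficients may
    differ).  Both scale as m ~ g(T) * T; natural units."""
    m_g = np.sqrt(g**2 * T**2 / 6.0 * (N_c + n_f / 2.0))      # gluon
    m_q = np.sqrt((N_c**2 - 1) / (8.0 * N_c) * g**2 * T**2)   # light quark
    return m_g, m_q
```

For a fixed coupling the masses grow linearly with T, which is why m ∼ T at high temperature; the non-trivial behavior near T_c enters entirely through the fitted g(T).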
The pressure of the system can then be written as the sum of independent contributions coming from the different constituents, which have a T-dependent effective mass, plus a bag constant.

Figure 3. Left panel: lattice data [34] for the pressure and trace anomaly as functions of the temperature, together with the quasi-particle model curves. Right panel: quark and gluon quasi-particle masses as functions of T/T_c [34,35].
In order to have thermodynamic consistency, the stationarity of the pressure with respect to the effective masses, ∂P/∂m_i |_T = 0, has to be satisfied, which gives rise to a set of equations, one for each constituent. Only one of these equations is independent, since the masses of the constituents all depend on the coupling g through relationships of the form m_i(T, µ = 0) = α_i g(T) T, where the α_i are constants depending on N_c and N_f according to Eqs. (8). The energy density of the system is then obtained from the pressure through the thermodynamic relationship ǫ(T) = T dP(T)/dT − P(T). In the model there are therefore two unknown functions, g(T) and B(T), but they are not independent: they are related through the thermodynamic consistency relationship (10). Therefore, only one function needs to be determined, which we do by imposing that the quasi-particle energy density reproduce the lattice one. We performed our fit to the lattice data for the energy density [34]; in figure 3 we show the good agreement between our curves and lattice results for other quantities like the pressure and the trace anomaly.
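As a numerical illustration of the relation ǫ(T) = T dP(T)/dT − P(T), the sketch below (a toy under stated assumptions, not the fitted model: Boltzmann statistics, a fixed mass, no bag term) integrates the kinetic pressure of a massive ideal gas and checks that the energy density obtained from T dP/dT − P agrees with the direct phase-space integral, recovering ǫ = 3P in the massless limit.

```python
import numpy as np

def _integrate(y, x):
    # simple trapezoid rule over the momentum grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def pressure(T, m, g_dof=16.0, n=4000):
    """P = g/(2 pi^2) * Int dp p^4/(3E) exp(-E/T), Boltzmann statistics,
    natural units (hbar = c = k_B = 1)."""
    p = np.linspace(1e-6, 40.0 * T + 10.0 * m, n)
    E = np.sqrt(p**2 + m**2)
    return g_dof / (2 * np.pi**2) * _integrate(p**4 / (3 * E) * np.exp(-E / T), p)

def energy_density_direct(T, m, g_dof=16.0, n=4000):
    """eps = g/(2 pi^2) * Int dp p^2 E exp(-E/T)."""
    p = np.linspace(1e-6, 40.0 * T + 10.0 * m, n)
    E = np.sqrt(p**2 + m**2)
    return g_dof / (2 * np.pi**2) * _integrate(p**2 * E * np.exp(-E / T), p)

def energy_density_thermo(T, m, g_dof=16.0, h=1e-4):
    """eps = T dP/dT - P, with dP/dT from a central finite difference
    (at fixed m; in the full model the bag term absorbs dm/dT)."""
    dPdT = (pressure(T + h, m, g_dof) - pressure(T - h, m, g_dof)) / (2 * h)
    return T * dPdT - pressure(T, m, g_dof)
```

In the full quasi-particle model the same identity holds with m(T) and B(T) included, precisely because of the consistency condition discussed above.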
We notice that, at sufficiently high temperature, m ∼ T, as we can expect because T remains the only scale of the problem. When approaching the phase transition, there is instead a tendency to increase the correlation length of the interaction: the quasi-particle model tells us that this can be described as a plasma of particles with larger masses. This essentially determines the fall and rise behavior of m(T) seen in all quasi-particle model fits to lattice data in SU(N) gauge theories, including the SU(3) case of QCD. We notice a considerably smoother behavior of the masses when the Wuppertal-Budapest lQCD data are considered with respect to older data of the HotQCD collaboration [35], indicating that the strength of such correlation is significantly reduced when lattice simulations are performed at the physical quark masses and the continuum limit is taken. The main point for our purposes here is that the direct application of such a simple quasi-particle model in transport theory supplies the possibility to include the correct equation of state of the QGP as computed in lattice QCD.
Transport at fixed shear viscosity
Our aim is to exploit the transport approach fixing the value of η/s, in order to make possible a direct comparison with viscous hydrodynamic approaches and, more generally, to have a tool to directly estimate the viscosity of the plasma. To this end we do not calculate the cross section from a microscopic model (which could very well be an impossible task) but determine the local cross section σ in order to have the wanted local viscosity. Here we illustrate the procedure for the simplest case of a massless gas; the extension to the finite-mass case is easily achieved [36]. In kinetic theory under ultra-relativistic conditions the shear viscosity can be expressed as [10]: η = (4/15) ρ ⟨p⟩ λ, with ρ the parton density, λ the mean free path and ⟨p⟩ the average momentum. Therefore, considering that the entropy density for a massless gas is s = ρ(4 − µ/T), µ being the chemical potential or fugacity, we get η/s = (4/15) ⟨p⟩ / [(4 − µ/T) ρ σ_tr], where σ_tr is the transport cross section, i.e. the sin²θ-weighted cross section. From Eq. (14) we see that, assuming local thermal equilibrium, the wanted η/s can be obtained by evaluating in each cell the cross section according to σ_tr = (4/15) ⟨p⟩ / [(4 − µ/T) ρ (η/s)], with 4πη/s set in the range 1 − 4. This approach is equivalent to having a total cross section of the form σ_Tot = K(ρ, T) σ_pQCD > σ_pQCD, where K takes into account the non-perturbative effects responsible for that value of the viscosity. This approach has been shown to recover the viscous hydrodynamics evolution of the bulk system [26]. We notice that a guideline on the temperature and time dependence of the cross section can be obtained considering the simple case of a free massless gas, for which s = (g 2π²/45) T³; therefore, neglecting µ in Eq. (15), one gets σ_tr ∼ T⁻² for 4πη/s = 1.
Furthermore, a simple Bjorken expansion, which means T ∼ τ^(−1/3), gives σ_tr ∝ τ^(2/3), which is the approximate prescription adopted in [21]. In figure 4, σ_tr(τ), evaluated locally in space in a cylinder of radius 3 fm, is shown as a function of time for Au + Au at √s = 200 AGeV; we see on the left the approximate τ^(2/3) scaling and on the right the agreement with the estimated T⁻² behavior.
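The two scalings just quoted can be checked in a few lines. Assuming a massless Boltzmann gas at µ = 0 (so s = 4ρ, ⟨p⟩ = 3T and ρ = g T³/π², natural units; the degeneracy g and the initial conditions below are illustrative choices, not values from the calculation), the cell-by-cell cross section reduces to σ_tr = ⟨p⟩/[15 ρ (η/s)] ∝ T⁻², and Bjorken cooling T ∝ τ^(−1/3) then gives σ_tr ∝ τ^(2/3):

```python
import numpy as np

def sigma_tr(T, four_pi_eta_s=1.0, g_dof=40.0):
    """Transport cross section enforcing a given eta/s in a massless
    Boltzmann gas (mu = 0): sigma_tr = <p> / (15 rho eta/s), in
    natural units (GeV^-2)."""
    eta_s = four_pi_eta_s / (4.0 * np.pi)
    rho = g_dof * T**3 / np.pi**2      # parton density
    mean_p = 3.0 * T                   # average momentum
    return mean_p / (15.0 * rho * eta_s)

def sigma_tr_bjorken(tau, tau0=0.6, T0=0.340, four_pi_eta_s=1.0):
    """Bjorken cooling T = T0 (tau0/tau)^(1/3) => sigma_tr ~ tau^(2/3)."""
    T = T0 * (tau0 / tau)**(1.0 / 3.0)
    return sigma_tr(T, four_pi_eta_s)
```

Doubling the temperature reduces σ_tr by a factor 4, while increasing τ by a factor 8 raises it by a factor 4, reproducing the T⁻² and τ^(2/3) trends of figure 4.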
First preliminary results
In our calculation the initial conditions are longitudinally boost invariant, with the initial parton density dN/dη(b = 0) = 1250 at RHIC and dN/dη(b = 0) = 2250 at LHC. The partons are initially distributed in coordinate space according to the Glauber model, while in momentum space at RHIC (LHC) the partons with p_T ≤ p_0 = 2 GeV (p_T ≤ p_0 = 4 GeV) are distributed according to a thermalized spectrum with a maximum temperature in the center of the fireball of 2T_c (3.5T_c), while for p_T > p_0 we take the spectrum of non-quenched minijets according to standard NLO-pQCD calculations. We start our simulation at the time t_0 = 0.6 fm/c at RHIC and t_0 = 0.3 fm/c at LHC.
In order to study the effect of the kinetic freeze-out on the generation of the elliptic flow we have performed two calculations: one with a constant 4πη/s = 1 during the whole evolution of the system (red dashed line of figure 5), the other (black solid line in figure 5) with 4πη/s = 1 in the QGP phase and an increasing η/s in the cross-over region towards the estimated value for hadronic matter, 4πη/s ∼ 8. Such an increase allows for a smooth, realistic realization of the kinetic freeze-out.

Figure 5. Left: the η/s scenarios employed as a function of temperature; the shaded area takes into account the quasi-particle model predictions for η/s [35]. Right: differential elliptic flow v_2(p_T) at mid-rapidity for 20% − 30% collision centrality. The red dashed line is the calculation with 4πη/s = 1 during the whole evolution of the fireball and without the freeze-out condition, while the black, blue and green lines are calculations with the inclusion of the kinetic freeze-out and with 4πη/s = 1, 4πη/s ∝ T and 4πη/s ∝ T², respectively, in the QGP phase, as shown in the left panel.

In figure 5 (right) the elliptic flow v_2(p_T) at mid-rapidity for 20% − 30% centrality is shown for both RHIC Au+Au at √s = 200 GeV and LHC Pb+Pb at √s = 2.76 TeV. As we can see at RHIC energies (left panel of figure 5), the v_2 is sensitive to the hadronic phase, and the effect of the freeze-out is to reduce the v_2 by about 20%, from the red dashed line to the black solid line in the left panel of figure 5 (see also figure 6). For the p_T range shown we get a good agreement with the experimental data for a minimal viscosity η/s ≈ 1/(4π) once the freeze-out condition is included. At LHC energies (right panel of figure 5) the scenario is different: the v_2 is less sensitive to the increase of η/s at low temperature in the hadronic phase. The effect of a large η/s in the hadronic phase is to reduce the v_2 by less than 5%, as shown by the solid line for LHC in figure 6, while the RHIC case is shown by the red dashed line and compared to the case at 4πη/s = 2 but without the freeze-out dynamics. This different behaviour of v_2 between RHIC and LHC energies can be explained by looking at the lifetime of the fireball. In fact, at RHIC energies the lifetime of the fireball is smaller than at LHC energies: about 5 fm/c at RHIC against about 10 fm/c at LHC. Therefore, at RHIC the elliptic flow does not have enough time to fully develop in the QGP phase, while at LHC the v_2 can develop almost completely in the QGP phase. Due to this large lifetime of the fireball at LHC and the larger initial temperature, it is interesting to study the effect of a temperature dependence of η/s.
In the QGP phase η/s is expected to have a minimum of η/s ≈ (4π)⁻¹ close to T_C, as suggested by lQCD calculations, while at high temperature quasi-particle models (see previous Section) seem to suggest a temperature dependence of the form η/s ∼ T^α with α ≈ 1 − 1.5 [35]. To analyze these possible scenarios for η/s in the QGP phase we have considered two different situations: one with a linear dependence, 4πη/s = T/T_0 = (ǫ/ǫ_0)^(1/4) (blue line), and the other with a quadratic dependence, 4πη/s = (T/T_0)² = (ǫ/ǫ_0)^(1/2) (green line), where ǫ_0 = 1.7 GeV/fm³ is the energy density at the beginning of the cross-over region, where η/s has its minimum; see figure 5.
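The QGP-phase scenarios (constant, linear and quadratic in T) together with the large hadronic value can be written as a single piecewise function. The sketch below is a sharp-step simplification of the smooth cross-over interpolation used in the actual calculations, and T_0 here is an illustrative cross-over temperature, not a fitted value.

```python
def four_pi_eta_over_s(T, alpha=0, T0=0.175, hadronic=8.0):
    """4*pi*eta/s: ~8 in the hadronic phase, a minimum of 1 at T0, and
    (T/T0)**alpha above it (alpha = 0, 1 or 2 for the constant, linear
    and quadratic scenarios).  Sharp step instead of the smooth
    cross-over increase described in the text."""
    if T < T0:
        return hadronic
    return (T / T0) ** alpha
```

Since ǫ ∝ T⁴ in the massless estimate, (T/T_0)^α is equivalent to (ǫ/ǫ_0)^(α/4), matching the parameterizations quoted above.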
At RHIC energies the v_2 is essentially not sensitive to the dependence of η/s on temperature in the QGP phase; see the blue and green lines in the left panel of figure 5. The effect on average is to decrease the value of v_2, but at low p_T < 1.5 GeV the v_2(p_T) appears to be insensitive to η/s(T), while a quite mild dependence appears at higher p_T, where, however, the transport approach always tends to overpredict the elliptic flow observed experimentally. In any case, even a strong temperature dependence of η/s has a small effect on the generation of v_2: we found that with a constant, or at most linearly dependent, η/s(T) the transport approach can describe the data at both RHIC and LHC, at least up to p_T ∼ 2 GeV. It is quite likely that a more detailed analysis of all the measurable anisotropic harmonics, up to v_5 = ⟨cos(5φ_p)⟩, will allow to better constrain η/s(T), as anticipated for v_4/v_2² in Ref. [37].
Perspectives and conclusions
We have reviewed some of the main results on the QGP at high temperature created in ultrarelativistic heavy-ion collisions (HIC) at both RHIC and LHC energies. We are developing a transport approach to study the properties of the QGP and to have the possibility of interpreting the rich phenomenology coming from ultrarelativistic HIC. We have shown that a Boltzmann-Vlasov transport approach has the potential to properly include both the dynamics associated with an EoS as evaluated in lattice QCD and the viscosity dissipation mechanisms. First results indicate that, without any parameter tuning, the approach is able to correctly predict the behavior of the elliptic flow going from Au + Au collisions at √s_NN = 200 GeV to Pb + Pb at √s_NN = 2.76 TeV, showing its validity. A first important result is that at LHC a key observable like the elliptic flow is much less contaminated by the hadronic phase, allowing a better study of the QGP properties.
In the near future, the capability to naturally extend transport theory to self-consistently include the developments in quark-gluon quasi-particle models, the possibility to extend the study of the collective flows to higher harmonics up to v_5 = ⟨cos(5φ)⟩, and the extension of the transport approach to heavy-quark dynamics and to the long-standing issue of J/Ψ suppression-regeneration will potentially allow us to obtain a deeper insight into the QGP properties and their microscopic origin.
Genetic Structure of the Aphid, Chaetosiphon fragaefolii, and Its Role as a Vector of the Strawberry yellow edge virus to a Native Strawberry, Fragaria chiloensis in Chile
The monoecious anholocyclical aphid, Chaetosiphon fragaefolii (Cockerell) (Homoptera: Aphididae), was collected on a native strawberry, Fragaria chiloensis (L.) Duchesne (Rosales: Rosaceae), from different sites in Chile. The presence of this aphid was recorded during two consecutive years. F. chiloensis plants were collected from seven natural and cultivated growing areas in central and southern Chile. Aphids were genotyped by cross-species amplification of four microsatellite loci from other aphid species. In addition, the aphid-borne virus Strawberry mild yellow edge virus was confirmed in F. chiloensis plants by double-antibody sandwich ELISA and RT-PCR. Genetic variability and structure of the aphid populations were assessed from the geo-referenced individuals through AMOVA and a Bayesian assignment test. The presence of C. fragaefolii during the two-year study was detected in only four of the seven sites (Curepto, Contulmo, Chillán and Cucao). Genetic variation among these populations reached 19% of the total variance. When the individuals were assigned to groups, they separated into three geographically disjunct genetic clusters. Of the seven sampled sites, six were positive for the virus by RT-PCR, and five by double-antibody sandwich ELISA. The incidence of the virus ranged from 0–100%. Presence of the virus corresponded with the presence of the aphid at all but two sites (Chillán and Vilches). The greatest incidence of Strawberry mild yellow edge virus was related to the abundance of aphids. On the other hand, sequences of the coat protein gene of the different virus samples did not show correspondence with either the genetic groups of the aphids or the sampling sites. The genetic structure of the aphids could suggest that dispersal occurs mainly through human activities, and that spread to natural areas has not yet occurred on a great scale.
Introduction
The strawberry aphid Chaetosiphon fragaefolii (Cockerell) (Homoptera: Aphididae) is an important pest of strawberry worldwide, presumed to originate from North America. Parthenogenetic forms occur all year round. Although male and oviparous female forms occur in laboratory cultures and greenhouses, they are rarely found in the field (Blackman and Eastop 2000).
In Chile, the presence of this aphid is recent (Zuñiga 1967), and is associated especially with the cultivated strawberry, Fragaria x ananassa Duchesne (Rosales: Rosaceae) (Gonzalez 1989), although it has dispersed throughout the whole strawberry production area, including Fragaria chiloensis (L.) Duchesne (Rosales: Rosaceae) (Klein-Koch and Waterhouse 2000). F. chiloensis is a clonal herbaceous perennial native to grasslands, sand dunes and forests along portions of the Pacific Coast of North and South America. In Chile, its native location, it is distributed from 34º 55' S to 47º 33' S (Carrasco et al. 2007), and was traditionally cultivated by the native people before the Spaniards arrived to central Chile in 1542 (Darrow 1966;Wilhelm and Sagen 1972). However, to date the extent to which the aphid C. fragaefolii has spread on F. chiloensis in Chile has not been studied.
As C. fragaefolii persistently transmits several viruses, such as the Strawberry crinkle virus, Strawberry mottle virus, Strawberry mild yellow edge virus (SMYEV), and Strawberry vein banding virus (Krczal 1979, 1982; Blackman and Eastop 2000; Converse 2002; Posthuma et al. 2002), its presence in natural populations might have a negative impact on commercial strawberry production.
Of all of the viruses affecting strawberry, the most common and economically important is SMYEV, occurring in both F. x ananassa and F. chiloensis (Khan 1989). SMYEV is distributed worldwide in cultivated strawberries, and is among the 50 most frequently cited plant diseases in the quarantine regulations of 124 countries (Khan 1989). The virus was described in detail by Jelkmann et al. (1990), and its full nucleotide sequence was obtained by Jelkmann et al. (1991). Nymphs, apterae, and alatae of C. fragaefolii all transmit the virus equally well, with 100% transmission occurring with an acquisition feeding period of two days and a transmission feeding period of eight days (Krczal 1979). Although SMYEV has been previously described on F. chiloensis (Hepp and Martin 1991), only a small sample was studied, and there is no information on the extent of its spread and incidence, particularly in the natural populations of F. chiloensis, and on the association with the presence of its vector.
In this study, we use heterologous microsatellite markers in order to determine the population structure, diversity, and gene flow of C. fragaefolii on wild and cultivated F. chiloensis in Chile. The incidence and genetic similarity of SMYEV on different C. fragaefolii populations was also assessed.
Materials and Methods
During two consecutive years, seven areas in central-southern Chile were sampled for C. fragaefolii and SMYEV on wild and cultivated F. chiloensis. The presence and abundance of aphids was recorded at each site. The sampling sites were Curepto (35º 5' S, 72º 3' W), Vilches (35º 36' S, 71º 12' W), Chovellen (35º 54' S, 72º 41' W), Chillán (36º 35' S, 72º 4' W), Contulmo (38º 4' S, 73º 14' W), Petrohue (41º 8' S, 72º 24' W), and Cucao (42º 35' S, 71º 7' W) (Figure 1). When present, 30 to 40 C. fragaefolii individuals were collected and kept in 95% alcohol for subsequent analysis. To minimize the risk of collecting the same clone, all individuals were collected from different plants separated by at least 10 m. Individuals collected were wingless adult females. All individuals were examined under the microscope to confirm species identity. At the same time, 20 to 30 plants were taken per site to assess SMYEV presence and incidence. Plants were kept in aphid-proof cages in a greenhouse, and regular insecticide sprays (imidacloprid) were applied to avoid aphid cross-transmission between plants.
DNA extraction and polymerase chain reaction amplification for C. fragaefolii Genomic DNA was obtained following the 'salting out' procedure from . The tissue was homogenized with a pestle inside plastic tubes provided with TNES buffer (Tris-HCl 50 mM, pH 7.5, NaCl 400 mM, EDTA 20 mM, SDS 0.5%). The extract was incubated overnight at 37°C with Proteinase K (10 mg mL-1). Proteins were precipitated with NaCl 5M, followed by centrifugation at 10,000 rpm. The supernatant was further washed twice with ethanol under cold conditions, and subjected to centrifugation. The DNA template was suspended in 20 µl of distilled sterile water. Concentration and contamination were assessed with a spectrophotometer.
A total of ten heterologous microsatellite loci were tested, but only five amplified successfully. These were Sm10, Sm11, and Sm17, as well as M62 and M37 (Sloane et al. 2001). These loci have been shown to amplify successfully in several aphid species (Wilson et al. 2004). Polymerase chain reactions (PCR) were carried out in a Mastercycler® gradient Eppendorf thermocycler (http://www.eppendorf.com), and performed in a 10 μl reaction mixture containing: 1 ng/µl DNA template, 2.5 mM MgCl2, 0.2 mM dNTP, 0.5 U Taq DNA polymerase (Invitrogen, http://www.invitrogen.com), 0.5 µM of each primer, 20 mM Tris-HCl, pH 8.4, 50 mM KCl. PCR followed a program of 3 min of initial denaturation at 94°C and then 40 cycles of a 1 min denaturation step at 94°C, 1 min of annealing (Sm10 = 52ºC, Sm11 and M62 = 55ºC, Sm17 and M37 = 56.5ºC), a 45 sec extension at 72°C, and a final extension at 72°C for 4 min. Amplicons were separated in 6% polyacrylamide denaturing gels using a BIO-RAD Sequi-Gen GT Electrophoresis Cell. After electrophoresis, gels were silver-stained to visualize the PCR products using the procedure described by Promega (1996). Variation at each locus was recorded by comparing the size of the amplicon in the gel (allele) in base pairs (bp) with the sequence of the PGEM 3ZF(+) vector (Promega Biosciences, http://www.promega.com) loaded in the same gel.
Anti-viral immunoglobulins were used in polystyrene microtitre plates according to the manufacturer's instructions.
F. chiloensis leaf tissue was triturated in 3 ml extraction buffer (20 mM TRIS pH 7.4, 137 mM NaCl, 3 mM KCl, 2% PVP, 0.05% Tween 20, 0.02% NaN3) and centrifuged. Aliquots of 100 µl of prepared samples were added to duplicate wells, and negative and positive commercial controls were added to each plate. A total of 20 randomly chosen plants were tested for each site. Samples were read 30 and 60 min after ELISA at 405 nm in a VICTOR X3 microtitre plate reader (PerkinElmer, http://www.perkinelmer.com). ELISA readings were considered positive when the absorbance of sample wells was at least two times greater than the mean absorbance of the negative controls.
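The positivity rule just described (sample absorbance at least twice the mean of the negative controls, with samples run in duplicate) is simple enough to sketch; the function and variable names here are illustrative, not taken from any analysis software used in the study.

```python
def elisa_positive(duplicate_ods, negative_control_ods, factor=2.0):
    """Return True when the mean A405 of a sample's duplicate wells is
    at least `factor` times the mean absorbance of the negative
    controls."""
    neg_mean = sum(negative_control_ods) / len(negative_control_ods)
    sample_mean = sum(duplicate_ods) / len(duplicate_ods)
    return sample_mean >= factor * neg_mean
```

With negative controls reading 0.10 and 0.12, a sample averaging 0.55 is called positive, while one averaging 0.15 is not.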
As recombinant strains of the virus could remain undetected using strain-specific monoclonal antibodies (Singh et al. 2003), the presence of SMYEV in F. chiloensis plants was also confirmed by RT-PCR analysis with coat protein specific primers, following the protocol of Thompson et al. (2003). Total RNA of each sample was extracted from 200 mg of leaf tissue, which was homogenized in 2 ml SEB buffer (0.14 M NaCl, 2 mM KCl, 2 mM KH2PO4, 8 mM Na2HPO4·2H2O (pH 7.4), 0.05% v/v Tween-20, 2% w/v PVP-40, 0.2% w/v ovalbumin, 0.5% w/v bovine serum albumin, 0.05% w/v NaN3) and transferred to a 1.5 mL plastic tube. Then, 100 µl 10% N-lauryl sarcosyl and 5 µl 2-mercaptoethanol were added to the tube. From this mixture, a total of 200 µl was taken, and 400 µl of grinding buffer was added and incubated at 70ºC with intermittent shaking for 10 min. Then, tubes were placed on ice for 5 min, and centrifuged at 13,000 rpm for 10 min. Next, 150 µl of EtOH, 300 µl 6 M NaI solution, and 25 µl of re-suspended silica were added to the supernatant. Subsequently, the pellet was re-suspended in 500 µl of wash buffer and dried. After drying, the pellet was re-suspended in 100 µl of sterilized distilled water, incubated at 70ºC for 5 min, and centrifuged at 13,000 rpm for 3 min. RNA integrity was checked by electrophoresis on 1% agarose gels, and by the A260/A280 ratio using a spectrophotometer (Thermo Scientific NanoDrop, http://www.nanodrop.com).
Complementary DNAs were prepared using SuperScript III Reverse Transcriptase (RT) (Invitrogen) as reported previously by Chang et al. (2007). The RT reaction was carried out with 300 ng of total RNA, 300 ng of random primers, 1x first-strand buffer, 0.5 mM dNTPs, 10 mM dithiothreitol (DTT), 16 U of RNaseOUT (Invitrogen), and 60 U of SuperScript III RT in a final volume of 50 µl. The reaction was incubated for 2 h at 50°C, and stopped for 10 min at 70°C. For dsRNA templates, a denaturation step using 0.2 mmol of CH3HgOH for 15 min at room temperature was performed prior to the RT reaction. PCR amplifications for virus detection were performed using previously described SMYEV and internal control AtropaNad2 specific primers (Thompson et al. 2003). PCR products were separated on a 2% agarose gel and visualized under UV light after staining with ethidium bromide.
SMYEV sequencing
The resulting DNA fragments were cloned in the TOPO® TA vector (Invitrogen). Two µl of the ligation solution was used to transform One Shot Mach1-T1 chemically competent cells (Invitrogen). Plasmids were isolated using the QuickClean 5M miniprep kit (GenScript Corp, http://www.genscript.com) from 3 ml overnight cultures containing ampicillin. Recombinant plasmids were verified by EcoR1 digestion. Sequencing reactions were performed at the Macrogen Inc. facilities (http://www.macrogen.com) in an ABI3730 XL automatic DNA sequencer.

Aphid population structure

A total of five loci were considered (Table 1), but as locus Sm10 was invariant for most populations, only four loci were considered for the final analysis. Results were first analyzed using Micro-Checker 2.2.3 to check microsatellite data for null alleles and scoring errors. Fstat (Goudet 2002) was used to calculate observed and expected heterozygosity, as well as linkage disequilibrium between loci. An exact test was used to detect significant deviations from Hardy-Weinberg equilibrium (HWE) using Arlequin 3.11 (Excoffier et al. 2005). Sample sizes, number of alleles, effective number of alleles, information index, observed and expected heterozygosity, fixation index per locus per population, and fixation index per population (all loci) were calculated with Genalex 6 (Peakall and Smouse 2006). The use of different methods to study the spatial genetic structure of organisms in a sampled region has been strongly recommended (Frantz et al. 2006; Pearse and Crandall 2005; Storfer et al. 2007); therefore, structure was assessed using two methods. First, a molecular analysis of variance (AMOVA) was carried out, and pairwise values between collection sites were estimated using Genalex 6 (Peakall and Smouse 2006). The proportion of the variance among populations relative to the total variance was estimated considering genotypic information (PhiPT).
PhiPT is analogous to Fst when the data are haploid, or when assumptions of HWE are not met (Maguire et al. 2002). Isolation by distance was checked using a Mantel test between genotypic differentiation (PhiPT) and the geographical distance between sites, using zt version 1.0 with 10,000 permutations (Bonnet and Van de Peer 2002). The second method for assessing the populations structure was a Bayesian clustering method described by Corander et al. (2003), implemented in software BAPS version 4.14 (Corander et al. 2006). This was used to determine the genetic structure of C. fragaefolii. This software uses stochastic optimization to infer the genetic structure, and it can use a spatial model that takes into account individual geo-referenced multilocus genotypes to assign the biologically relevant structure, thereby increasing the power to detect correctly the underlying population structure (Corander et al. 2006). To run the program, a number K of genetic clusters characterized by the matrices of allele frequencies at each locus is first assumed. Then, for each individual, the proportion of its genome derived from each genetic cluster (proportion of ancestry) is estimated.
The posterior probability (probability of K given the data) is then calculated for each mean value of K using the mean estimated log-likelihood of K to choose the optimal K. Ten independent repetitions for each K from 1 to 4 were carried out following the recommendations of Corander et al. (2003).
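The isolation-by-distance check described above pairs the matrix of pairwise PhiPT values with the matrix of geographic distances through a Mantel permutation test. A minimal pure-NumPy sketch of that procedure (illustrative, not the zt program actually used) is:

```python
import numpy as np

def mantel(dist_a, dist_b, n_perm=10000, seed=1):
    """Permutation Mantel test between two square distance matrices
    (e.g. pairwise PhiPT vs. geographic distance). Returns (r, p)."""
    a = np.asarray(dist_a, float)
    b = np.asarray(dist_b, float)
    n = a.shape[0]
    iu = np.triu_indices(n, k=1)  # off-diagonal upper triangle

    def corr(m1, m2):
        x, y = m1[iu], m2[iu]
        x, y = x - x.mean(), y - y.mean()
        return float(x @ y / np.sqrt((x @ x) * (y @ y)))

    r_obs = corr(a, b)
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        # permute rows and columns of one matrix jointly
        if corr(a[np.ix_(perm, perm)], b) >= r_obs:
            count += 1
    p = (count + 1) / (n_perm + 1)
    return r_obs, p
```

Joint row-and-column permutation preserves the dependence structure within each matrix, which is why a simple Pearson correlation on the raw off-diagonal entries would otherwise give an invalid p-value.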
Genetic similarity and divergence between SMYEV sequences
In order to compare the genetic similarity and the divergence pattern between SMYEV sequences of sites where positive identification occurred, samples were aligned with ClustalX version 2 (Larkin et al. 2007), and their phylogenetic relationships were inferred using the neighbor-joining method implemented in MEGA4 (Tamura et al. 2007). The evolutionary distances were computed using the maximum composite likelihood method (Tamura et al. 2004) and bootstrap with 10,000 replicates.
Aphid genetic variability and structure
As derived from Table 1, mean observed heterozygosity across loci of C. fragaefolii populations varied from 0.21 to 0.38, being highest at Cucao (0.38) and Contulmo (0.31). The number of effective alleles varied from 3 to 9 (Table 1). All loci departed significantly from HWE, with the exception of Sm17 in Cucao and Chovellen. While no linkage disequilibrium was evident between loci, the frequency of null alleles estimated using the EM algorithm (Dempster et al. 1977) implemented in the software FreeNA (Chapuis and Estoup 2007) was high for locus M37 (0.19 across populations). Therefore, HWE was estimated excluding M37, with no significant difference from the estimations including M37. Genetic differentiation in the data was modest, with PhiPT values reaching 0.19, although PhiPT values between pairs of sites varied from 0.03 to 0.33 (Table 2). The greatest genetic difference occurred between Cucao and the other sites (Table 2). There was significant isolation by distance, as evidenced by the Mantel test (r = 0.94; p = 0.008). The optimal partition assignment with BAPS determined K = 3 clusters, with a log marginal likelihood of the optimal partition of -757.4084 and the posterior probability reaching its highest value (~1). Aphids from the sites Chovellen and Curepto formed one cluster, whereas aphids from Contulmo alone formed a second cluster. Aphids from Cucao formed a third cluster (Figure 2). This same grouping was observed when considering the site of individuals and their spatial coordinates in the model.
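The per-locus summary statistics reported in Table 1 follow directly from allele frequencies. As a sketch (illustrative only, computed here without the small-sample correction that Genalex applies), expected heterozygosity and the effective number of alleles at one locus are:

```python
def locus_diversity(allele_counts):
    """He = 1 - sum(p_i^2) and effective number of alleles
    n_e = 1 / sum(p_i^2), from raw allele counts at one locus."""
    total = sum(allele_counts)
    p2 = sum((c / total) ** 2 for c in allele_counts)
    return 1.0 - p2, 1.0 / p2
```

Four equally frequent alleles give He = 0.75 and n_e = 4; skewed frequencies lower both, which is why n_e is usually smaller than the raw allele count.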
Presence and abundance of C. fragaefolii and incidence of SMYEV

From the seven sites sampled during 2005 and 2006, only four had the aphid C. fragaefolii. SMYEV, however, was detected at six of these seven sites. Mean aphid abundance per leaf per site ranged from 0.8 to 14.5, with the highest numbers occurring at Contulmo (Table 3). SMYEV incidence revealed by ELISA was highest at Contulmo (Table 3). All strawberry plants of the seven sites tested by RT-PCR with the capsid primer of SMYEV revealed amplification, with the exception of Petrohue (Figure 3).

Table 3. Presence of Chaetosiphon fragaefolii, mean abundance ± SE, and presence and incidence of SMYEV using double-antibody sandwich ELISA and RT-PCR at the collection sites. SMYEV = Strawberry mild yellow edge virus.
The phylogenetic relationship among SMYEV sequences revealed a larger genetic distance of Cucao (south) when compared with the remaining samples ( Figure 4). In addition, among the northern populations, Contulmo formed a separated lineage from Chovellen and Curepto, while these two were found to belong to different branches in the unrooted tree ( Figure 4).
Discussion
Observed heterozygosity of C. fragaefolii populations was highest at Cucao and Contulmo. Such values are lower than those reported in studies from other aphid species (Hales et al. 1997;Figueroa et al. 2005;Lavandero et al. 2011), which may be explained by the absence of sexual reproduction of this species, and the recent introduction of only a few clones (Blackman and Eastop 2000). Similarly, significant departures from HWE were also expected because parthenogenesis is the only mode of reproduction reported for this species (Blackman and Eastop 2000). However, one locus (M37) presented high values of null alleles, but was consistent over all populations, and did not affect the overall results. Genetic variability was modest, showing geographic structure especially with the most southern population (Cucao). Bayesian analysis showed three clear clusters, which was in agreement with the AMOVA results, as sites in each cluster shared low PhiPT values, and the highest PhiPT values were between sites that were assigned to other clusters (Table 2, Figure 2), confirming that C. fragaefolii populations on F. chiloensis consist of these distinct clusters.
Clusters agreed with geographical distance between sampling sites, as areas that were close to each other formed a single cluster, and sites further away formed separated genetic clusters (Figure 2). In fact this was confirmed by the Mantel test, showing significant isolation by distance. This may be related to the fact that F. chiloensis has a fragmented distribution across the studied area, making aphid dispersal difficult between sites. Indeed, the low gene flow of C. fragaefolii between geographical areas, with exception of the nearby sites (Curepto and Chovellen), suggests little migration between sites. It is important to mention that Cucao is a population located on the Chiloé island, a rather big island in the south of Chile. This may explain the genetic distance of the southern C. fragaefolii from the northern populations.
The phylogenetic relationship of SMYEV showed Cucao as a genetically distant group from the northern samples, resembling the higher genetic differentiation exhibited by the aphid C. fragaefolii on F. chiloensis (cluster 3, Figure 2). It is worth noting that the remaining SMYEV sequences were not clearly different. Thus, with the exception of the Cucao sample, SMYEV sequence divergence does not fully agree with the genetic structure of its aphid vector, suggesting that the virus was present before the arrival of the aphids. As mentioned above for the case of C. fragaefolii, the fact that the Cucao population is located on an island may also explain the divergence from the northern samples of SMYEV.
The absence of SMYEV at one site (Petrohue, Table 3) coincided with the absence of its vector, C. fragaefolii. At the other two sites where C. fragaefolii was not found (Vilches and Chillán), the virus had a low incidence (0.05%).
At both sites, however, Neuquenaphis edwardsi (Laing) (Hemiptera: Aphididae), a native aphid species frequently found on southern beech Nothofagus sp. Blume (Fagales: Nothofagaceae), was found developing on wild F. chiloensis. It is common to find (as is the case for both sites) F. chiloensis associated with Nothofagus forests (San Martin et al. 1991). This aphid species could act as a vector between plants, perhaps in a non-persistent way. Although SMYEV has been shown to be mainly transmitted by C. fragaefolii, other species of aphids have also been shown to effectively infect strawberry (Converse 1987). Nevertheless, the capacity of these species to vector the virus may be very restricted, this restriction being reflected in the low incidence values.
The fact that the virus was found in a native aphid species may complicate the spread of the virus further, but it is still not clear whether it is the same strain, as preliminary results of the sequence of the virus present in N. edwardsi suggest variation in size of the virus capsid RT-PCR products (Salazar, unpublished data). Further studies are needed to better understand these findings and to confirm F. chiloensis as an alternative host for N. edwardsi, or for other native aphid species. It has been reported that introduced aphids can be very harmful to the native flora (Malmstrom et al. 2005), particularly in island ecosystems where aphid species with permanent parthenogenesis are more likely to develop successful colonies, and therefore be more efficient in virus transmission (Mondor et al. 2007). On the other hand, the highest SMYEV incidence values are associated with the highest abundance of aphids in the greatest F. chiloensis cultivation area (Contulmo) (Carrasco et al. 2007). Most orchards here also cultivate F. x ananassa, with little control measures of F. chiloensis.
As C. fragaefolii aphids have been introduced and spread recently, the low incidence of the virus in the native F. chiloensis could be explained. However, sequence data of the virus found on F. chiloensis suggest that the virus does not differ between cultivated and wild F. chiloensis. Thus, the presence of the aphid clearly increases the incidence of the virus, even though it is already present in natural and cultivated F. chiloensis with no clear grouping of the viral strains, with the exception of Cucao. Cucao is the most southern group, and the phylogenetic trees suggest isolation.
Figure caption: Unrooted tree describing the phylogenetic relationships of SMYEV inferred using the neighbor-joining method. The optimal tree with the sum of branch length = 0.27935868 is shown. The tree is drawn to scale, with branch lengths in the same units as those of the evolutionary distances used to infer the phylogenetic tree. SMYEV = Strawberry mild yellow edge virus. | 2018-04-03T01:34:58.303Z | 2012-10-03T00:00:00.000 | {
"year": 2012,
"sha1": "7a0f37f7223181b10ddfe6a14197c79c8909be35",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jinsectscience/article-pdf/12/1/110/18150768/jis12-0110.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b081039a05d0ceb1e1f8fe4d6c8fccdf8207f1c9",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
3733006 | pes2o/s2orc | v3-fos-license | Gender differences in mortality among ST elevation myocardial infarction patients in Malaysia from 2006 to 2013
BACKGROUND Coronary artery disease (CAD) is one of the leading causes of death in Malaysia. However, the prevalence of CAD in males is higher than in females and mortality rates are also different between the two genders. This suggests that risk factors associated with mortality differ between males and females, so we compared the clinical characteristics and outcome between male and female STEMI patients. OBJECTIVES To identify the risk factors associated with mortality for each gender and compare differences, if any, among ST-elevation myocardial infarction (STEMI) patients. DESIGN Retrospective analysis. SETTINGS Hospitals across Malaysia. PATIENTS AND METHODS We analyzed data on all STEMI patients in the National Cardiovascular Database-Acute coronary syndrome (NCVD-ACS) registry for the years 2006 to 2013 (8 years). We collected demographic and risk factor data (diabetes mellitus, hypertension, smoking status, dyslipidaemia and family history of CAD). Significant variables from the univariate analysis were further analysed by a multivariate logistic analysis to identify risk factors and compare by gender. MAIN OUTCOME MEASURES Differential risk factors for each gender. RESULTS For the 19 484 patients included in the analysis, the mortality rate over the 8 years was significantly higher in females (15.4%) than males (7.5%) (P<.001). The univariate analysis showed that the majority of male patients were <65 years while the majority of females were ≥65 years. The most prevalent risk factors for male patients were smoking (79.3%), followed by hypertension (54.9%) and diabetes mellitus (40.4%), while the most prevalent risk factors for female patients were hypertension (76.8%), followed by diabetes mellitus (60%) and dyslipidaemia (38.1%). The final model for male STEMI patients had seven significant variables: Killip class, age group, diabetes mellitus, hypertension, renal disease, percutaneous coronary intervention and family history of CVD. 
For female STEMI patients, the significant variables were renal disease, smoking status, Killip class and age group. CONCLUSION Gender differences existed in the baseline characteristics, associated risk factors, clinical presentation and outcomes among STEMI patients. For STEMI females, the rate of mortality was twice that of males. Once females reach menopausal age, there is less protection from the estrogen hormone; combined with other risk factors, this places menopausal females at increased risk for STEMI. LIMITATION Retrospective registry data with inter-hospital variation.
Coronary artery disease (CAD) is the number one cause of mortality and morbidity in Malaysia and globally for both males and females. 1,2 Even worse, CAD has remained the principal cause of death for the ten years from 2005 to 2014. 3 In CAD, which is also known as ischemic heart disease, a waxy substance called plaque builds up inside the coronary arteries. 4 CAD, traditionally considered a male disease, is also a major threat to females nowadays. In general, females with CAD have a worse outcome than their male counterparts when no adjustments are made for other characteristics and comorbidities. 5,6 Although females tend to present with CAD later in life, the outcome can be severe. 6 Even when they present young, they tend to receive less evidence-based treatment than their male counterparts. 7 An ongoing prospective registry known as the Malaysian National Cardiovascular Disease-Acute Coronary Syndrome (NCVD-ACS) registry was first established in 2006. Starting with only 8 hospitals in 2006, it now includes 18 hospitals across the country. The registry was introduced to collect clinical data including in-hospital management and clinical outcome. The Ministry of Health Malaysia has become the main sponsor of the NCVD-ACS Registry, with the National Heart Association of Malaysia as co-sponsor. 8 Technical support in the form of clinical epidemiology expertise, biostatistics and information and communication technology services is provided by the Clinical Research Centre of Malaysia. The database is a useful source of information, such as demographic data of patients as well as medical information, which is helpful in understanding the trends of CAD among the Malaysian population.
In one way or another, CAD affects all Malaysians. Most adults at increased risk of CAD have no symptoms or obvious signs, especially females, but they may be identified by assessment of risk factors. Therefore, the main aim of the study was to identify the risk factors associated with mortality for each gender and compare differences, if any, among acute coronary disease patients, particularly ST-elevation myocardial infarction (STEMI) patients.
PATIENTS AND METHODS
Anonymised patient data were obtained from the NCVD-ACS registry for the 8-year period from the years 2006 to 2013. The registry enrols patients presenting with STEMI, non-ST elevation myocardial infarction (NSTEMI), and unstable angina (UA). However, in this study, only the data of patients who were diagnosed with STEMI from 18 participating hospitals across Malaysia were selected from the NCVD-ACS registry. Among the different types of acute coronary syndrome, STEMI has the worst outcome. 1 In this setting, STEMI was defined as persistent ST segment elevation ≥1 mm in two contiguous electrocardiographic leads, or the presence of a new left bundle branch block in the setting of positive cardiac markers. 9 Data was collected from the time the patient with STEMI was admitted to the hospital until discharge. Each patient was assigned a unique national identification number to avoid duplication. Follow-up was done 30 days after hospital discharge via phone call or when the patient came to the clinic for a review. To verify the mortality status, a cross check was done with the national death registry. Patient characteristics and clinical presentation, in-hospital treatment and clinical outcome were recorded. After verification, data was then entered into the NCVD website. An extensive information and communications technology system is maintained to ensure functional efficacy and effectiveness in the NCVD operation.
In addition to gender, other demographic variables such as ethnicity and age group are available. The ethnicity was determined based on national identity cards and self-report. In this study, patients were categorised into two age groups based on local medical practice namely age <65 years and age ≥65 years. 9 The risk factors were diabetes mellitus, hypertension, smoking status, dyslipidaemia and family history of CAD. Comorbid variables included myocardial infarction (MI) history, chronic lung disease, cerebrovascular disease, peripheral vascular disease and renal disease. Clinical presentation known as Killip class was divided into four classes. The Killip classification predicts the odds of survival within 30 days in patients with an acute MI, with a higher class having a higher odds of dying whereby Killip IV is the highest class. 10 Results are presented in the form of descriptive statistics, followed by univariate analysis and a multivariate logistic regression model. Categorical variables are described as percentages. Chi-square tests were used to test the association between factors by gender.
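The gender-by-risk-factor associations described above come down to chi-square tests on contingency tables. A minimal sketch for a 2x2 table follows; the counts in the usage example are hypothetical, not registry data.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table.
    Rows = gender (male, female); columns = risk factor (present, absent)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, r, c_ in ((a, row1, col1), (b, row1, col2),
                       (c, row2, col1), (d, row2, col2)):
        exp = r * c_ / n          # expected count under independence
        stat += (obs - exp) ** 2 / exp
    return stat  # compare to 3.84 (chi-square, 1 df, alpha = .05)
```

For example, `chi_square_2x2(70, 30, 30, 70)` on a balanced hypothetical table gives a statistic well above 3.84, so the factor would be judged associated with gender at the .05 level.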
Stepwise logistic regression was used to explain the relationships of all independent variables to mortality. A P value of less than .05 was considered statistically significant. A Hosmer and Lemeshow test was used to determine the goodness of fit. The variance inflation factor test is a test of multicollinearity. The -2log-likelihood is used to measure how well the model fits the data, and the change in this value measures the change in fit when a variable is removed from or added to the model. All analyses were conducted using SPSS statistical software (version 22, IBM SPSS Statistics, Armonk, NY, USA).
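The adjusted odds ratios reported in the Results are obtained from logistic-regression coefficients as OR = exp(beta), and the model's predicted mortality is the logistic function of the linear predictor. The sketch below shows only this relationship; the coefficient values in the usage example are illustrative, not the fitted registry model.

```python
import math

def odds_ratio(beta):
    """Adjusted odds ratio for a logistic-regression coefficient."""
    return math.exp(beta)

def mortality_prob(intercept, betas, x):
    """Logistic model: P(death) = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = intercept + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-z))
```

So a coefficient of log(2.2) for renal disease corresponds to the OR of 2.2 quoted later, and a positive coefficient pushes the predicted mortality above the baseline probability.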
RESULTS
For females, the mortality rate (15.4%) was significantly higher than for males (7.5%) (P<.001) over the 8-year period. Nevertheless, the percentage of female patients affected with STEMI (14.4%) was much lower than that of males (85.6%). The patient population was mainly ethnic Malay (more than 50.0% for both male and female patients) (Table 1). The majority of male patients were <65 years while females were mostly ≥65 years. For females ≥65 years, the incidence of CAD was twice that of males. The most prevalent risk factors for male patients were smoking (79.3%), followed by hypertension (54.9%) and diabetes mellitus (40.4%), while the most prevalent risk factors for female patients were hypertension (76.8%), followed by diabetes mellitus (60%) and dyslipidaemia (38.1%). MI history was the most common comorbidity followed by renal disease and cerebrovascular disease for both male and female patients.
The majority of the STEMI patients were in Killip class I or II on presentation. As part of their continuing medical care, cardiac catheterization was the most frequent procedure followed by PCI for both male and female patients. All variables showed a statistically significant difference between males and females except for chronic lung disease (P=.472) and peripheral vascular disease (P=.444).
In the univariate analysis, all variables were significant for males (data not shown). Among the most significant for male patients were Killip class (odds ratio [OR]=15.9), age group (OR=3.2) and renal disease (OR=3.9). In the multivariate model for males (Table 2), seven variables were significant: diabetes mellitus, hypertension, family history of CAD, renal disease, PCI, Killip class and age group. In the univariate analysis on females, all variables were also significant. Among the highest were Killip class (OR=12.8), renal disease (OR=2.6) and age group (OR=2.5). The best-fitting multivariate model for females is given in Table 3. Of the 15 variables, only 4 were statistically significant in the multivariate model: smoking, renal disease, Killip class and age group. The adjusted odds ratio suggests that females who smoked were less likely to die (OR=0.49), while the mortality of female patients with renal disease was 2.2 times higher than that of those without renal disease. The effect of Killip class in the model was also significant, indicating that those with Killip class IV were 14.6 times more likely to die than those with Killip class I. Equally important was the age group: the risk of mortality was 3.4 times higher in female patients aged ≥65 years than in those aged <65 years.
Comparing each final model with the null model with no covariates showed a significant decrease in the -2log-likelihood. Likewise, the Hosmer and Lemeshow tests of goodness of fit found that both final models fit the data well, as the P values were greater than .05. The degrees of accuracy of the two models were 93.3% and 87.5%. Also, the variance inflation factor test indicated an absence of multicollinearity among the variables for both males and females.
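The Hosmer and Lemeshow statistic used above groups observations by predicted probability and compares observed versus expected deaths per group. A minimal sketch follows, using synthetic probabilities and outcomes rather than the registry models.

```python
def hosmer_lemeshow(probs, outcomes, n_groups=10):
    """Hosmer-Lemeshow statistic: sort by predicted probability, split into
    groups, and sum (observed - expected)^2 / (n_g * pi_g * (1 - pi_g))."""
    pairs = sorted(zip(probs, outcomes))
    size = len(pairs) / n_groups
    stat = 0.0
    for g in range(n_groups):
        chunk = pairs[int(g * size):int((g + 1) * size)]
        if not chunk:
            continue
        n_g = len(chunk)
        obs = sum(y for _, y in chunk)          # observed events in group
        exp = sum(p for p, _ in chunk)          # expected events in group
        pi = exp / n_g
        if 0 < pi < 1:
            stat += (obs - exp) ** 2 / (n_g * pi * (1 - pi))
    return stat  # compare to chi-square with n_groups - 2 df
```

A small statistic (P > .05 against the chi-square reference) indicates the model is well calibrated, which is the criterion the text applies to both final models.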
DISCUSSION
CAD is the leading cause of mortality among both males and females in Malaysia. Compared with cancer, tuberculosis, HIV/AIDS, and malaria combined, CAD is still the most common cause of mortality in females worldwide, killing more than 16 women per minute. 11 That the percentage of females affected with STEMI was much lower than that of males is supported by other studies in which STEMI was more prevalent among males as compared to females. 9,12,13 However, females had a significantly higher mortality rate as compared to male patients in our study, which is compatible with other findings indicating that females have had higher mortality rates than males annually since 1984, with the cause of death mostly myocardial infarction and sudden death. 14,15 Females are more resilient to developing CAD, but once they have CAD, they are more likely to experience worse consequences. 16 Contrary to popular belief, CAD, and not breast cancer, is the main cause of death in women, where there is a two to one ratio of CAD between males and females. 17 Also, females are twice as likely to die of a first MI and notably have poorer short-term survival compared with males. 18,19 In addition, since females have smaller coronary vessels than males, females are twice as likely to die as a result of coronary artery bypass surgery. 18,20 The motivation of this study was to assess whether gender differences exist in risk factors, clinical presentation and outcomes among STEMI patients in Malaysia. This study found that in females aged 65 years and older, the incidence of STEMI is twice that of males. This is similar to previous studies which found a larger risk of acute MI and a significantly higher mortality rate in female patients aged 65 years and older. 21,22 Female risk climbs with age, and once females reach menopausal age, there is less protection from the estrogen hormone; together with other risk factors like diabetes mellitus and obesity, menopausal women are at greater risk for CAD. 
21,23 Moreover, due to the misconception that acute coronary syndrome is a disease of men, most women lack awareness and consider themselves more at risk for breast cancer than for CAD. 18 In contrast to men, atypical symptoms such as numbness of the arms, fatigue, nausea, jaw pain, tightness or pressure, but no pain over the left chest, are often present among women with CAD. 18,20 Physicians often fail to recognize these symptoms in women. 24 Smoking was the most prevalent risk factor for males (79.3%), followed by hypertension (54.9%) and diabetes mellitus (40.4%). This is supported by the National Health and Morbidity Survey Malaysia, which stated that the Malaysian population has a high prevalence of smoking, with the prevalence of adult male smokers being 46.5%. 25,26 This is consistent with the NCVD database registry annual reports. 12 Moreover, a high prevalence of smoking (48.7%) and hypertension (31.7%) among male residents in rural Selangor, Malaysia was found in another study. 27 Even though most of the disease burden caused by active smoking occurs among males, females bear nearly 80% of the total burden from passive smoking. 28 In this context, passive smoking is the inhalation of smoke, called second-hand smoke or environmental tobacco smoke, by persons other than the intended active smoker. 29 The number of deaths among females caused by passive smoking is about two-thirds of that caused by active smoking for CAD and lung cancer. 28 Also, another study stated that passive-smoking wives of current or former cigarette smokers had a higher death rate from ischemic heart disease than women whose husbands never smoked. 30 In the multivariate logistic model, smoking was one of the significant variables in females. The odds ratio suggests that females who smoke are less likely to die (OR=0.49). 
This surprising outcome is similar to a previous study whereby active smokers tended to do well both in-hospital and 30 days post discharge, with a significantly lower overall mortality risk compared to those who never smoked. 25 Another study suggested that even though most of the females who die of CAD are past menopausal age, smoking increases the risk more in younger females than in older females. 31 In addition, women are less likely to quit smoking than men. 32 In this study, the most prevalent risk factors for females were hypertension (76.8%) followed by diabetes mellitus (60%) and dyslipidaemia (38.1%). A study of the NCVD-ACS registry patients between 2006 and 2008 reported that out of 9702 patients, 24.2% were females, with 22.3% being menopausal women, which was associated with diabetes mellitus and hypertension. 33 The findings of the present study for both males and females are supported by a preceding study whereby, on admission, more than 95% of patients had at least one common cardiovascular risk factor such as hypertension, smoking, dyslipidaemia or diabetes mellitus. 12 Diabetes mellitus has been well recognized as increasing the risk of CAD in both males and females. 34-37 A study of the numerous aspects of gender differences among 10 554 PCI patients in the NCVD-PCI registry between 2007 and 2009 found that at presentation, women typically were five years older than men and had a higher prevalence of risk factors. 5 Moreover, the in-hospital and six-month mortality were also higher in women. 5 Another study found that among 13 591 patients in the NCVD-ACS registry from 2006 to 2010, 24.2% were women; they had more risk factors, were less likely to undergo invasive treatment, and had a higher mortality. 38 In addition, a review of autopsy reports done at the University Malaya Medical Centre from 1996 to 2005 found that 83 of 936 female deaths were due to cardiac causes. 
39 Hypertension, diabetes mellitus and age were the most significant risk factors.
Apart from that, renal disease was one of the significant findings for both males and females in this study. The mortality of both male and female patients was twice as high with renal disease as without it. Another study found that in patients with acute decompensated heart failure, one-year worsening of renal function is a common comorbidity and a strong predictor of all-cause and cardiovascular mortality. 40 As for clinical presentation, male and female patients with Killip class IV were 16.5 and 14.6 times, respectively, more likely to die than patients with Killip class I. This is supported by a previous study where the mortality rates within Killip classes were in descending order from class IV to III, II and I. 16 Apart from that, a family history of CAD was one of the significant variables among males in this study. This is similar to another study where a family history of CAD was a significant risk factor among patients of ABO blood groups, with the majority of patients being males (57.4%). 41 PCI, also known as coronary angioplasty, is a common treatment for CAD. In this study, PCI was one of the significant variables in the multivariate logistic model for males. Patients who had undergone PCI had a lower mortality rate than those who had not (OR=0.7). Therefore, PCI is considered an effective treatment in reducing morbidity and mortality. Primary PCI is the preferred treatment due to its better results. 9,42,43 Moreover, there were mortality advantages gained from PCI treatment for elderly patients, even though the outcome of elderly patients after PCI is not as good as that of non-elderly patients. 9 In another study, the lack of PCI treatment in acute MI patients contributed to the high in-hospital mortality among female patients, at 12.3% as compared to 9.5% for males. 
44 To overcome this problem, the Malaysian Ministry of Health, together with the Ministry of Education and the National Heart Institute, initiated the Kuala Lumpur STEMI network in 2015. This network plays an important role in referring acute STEMI patients from government hospitals, university teaching hospitals and the National Heart Institute directly to PCI-capable centres. 13 In conclusion, gender differences existed in the baseline characteristics, associated risk factors, clinical presentation and outcomes among STEMI patients. Female patients were older and more likely to have hypertension and diabetes mellitus, yet less likely to smoke, than male patients. It is obvious that even though females share the same risk factors as males, there are risk factors that relate only to females, which may increase their tendency to develop CAD. To date, with the enhancement of health care in general and of specialist cardiac care, understanding possible gender-based differences in baseline characteristics, risk factors, treatments and outcomes will help in improving the current management of females with CAD, particularly STEMI.
Conflict of interest
All authors have no conflict of interest to declare. | 2018-04-03T02:00:49.195Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "e8ff1cd280138656f9fecb65e4742a0cd6c2b195",
"oa_license": "CCBYNCND",
"oa_url": "https://www.annsaudimed.net/doi/pdf/10.5144/0256-4947.2018.1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8ff1cd280138656f9fecb65e4742a0cd6c2b195",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267850386 | pes2o/s2orc | v3-fos-license | Monitoring the Velocity of Domain Wall Motion in Magnetic Microwires
An approach was proposed to control the displacement of domain walls in magnetic microwires, which are employed in magnetic sensors. The velocity of the domain wall can be altered by the interaction of two magnetic microwires of distinct types. Thorough investigations were conducted utilizing fluxmetric, Sixtus–Tonks, and magneto-optical techniques. The magneto-optical examinations revealed transformation in the surface structure of the domain wall and facilitated the determination of the mechanism of external influence on the movement of domain walls in magnetic microwires.
Introduction
Amorphous soft magnetic materials, such as glass-coated microwires and ribbons, play a fundamental role in numerous technological applications [1][2][3][4][5][6][7]. The most advanced application of these soft magnetic materials is their utilization in magnetometers and magnetic sensors [8][9][10][11][12][13]. This utilization is made possible by the exceptional magnetic properties and good mechanical properties of the materials, as well as the existence of a well-established and validated production and quality control system.
The perfectly cylindrical shape of magnetic wires presents the opportunity to observe magnetic properties that are quite unusual, such as spontaneous magnetic bistability and/or the giant magnetoimpedance (GMI) effect [1,[14][15][16][17][18][19]. These properties are intrinsically linked to the distinctive domain structure of magnetic wires, which consists of an inner axially magnetized core surrounded by an outer domain shell [14,20]. Consequently, the high GMI effect of Co-rich magnetic wires is a result of the high circumferential magnetic permeability of Co-rich amorphous wires [14,[20][21][22][23][24]. On the other hand, spontaneous magnetic bistability is attributed to the remagnetization process within the axially magnetized core brought about by the rapid propagation of domain walls (DWs) [1,20].
The observation of rapid single-domain wall (DW) propagation in magnetic wires has garnered significant attention from the perspective of fundamental physics. This includes investigating the origins of DW nucleation and propagation fields, as well as the remarkably high DW velocities (v) and DW mobility (S) [1,20,[25][26][27][28][29][30]. Conversely, various potential applications, such as racetrack memories, magnetic logic, and electronic surveillance, have been developed by leveraging the magnetic bistability of fast and controllable DW propagation [31][32][33].
In the majority of applications, the efficient regulation of individual domain walls (DWs) through injection, the management of controllable DW propagation, and the act of pinning are of utmost significance [31,32,34].
Generally, this study aligns with the overarching concept of interlayer interaction present in magnetic multilayer structures featuring non-magnetic separators surpassing exchange lengths [35,36]. Notably, oscillation periods of interlayer exchange were noted, contingent upon the non-magnetic material type and thickness. In our investigation, the glass coating of the two microwires functioned as the non-magnetic layer. Additionally, within this framework lies the notion of employing two magnetic microwires with distinct chemical compositions, hence differing magnetic characteristics.
One of the primary focal points in our previous research, within the context of the specified orientations of employing magnetic microwires, entailed a comprehensive examination of the dynamics of domain walls (DWs), both internally within the microwires and on their surfaces [34,37]. Particularly noteworthy to us was the matter of regulating DW propagation [31,34,[37][38][39]. In the course of developing diverse methodologies for such regulation, we pursued the concept of reciprocal influence between closely positioned microwires. This concept was explored not only through our own research endeavors [37], but also by other scientific communities [40][41][42], thereby enabling us to assess the potential level of this reciprocal influence in relation to the distance separating the microwires.
In the majority of prior publications, researchers have investigated the magnetostatic interaction among microwires that possess similar compositions and geometry [37,[40][41][42][43][44]. In terms of the dynamics of domain walls, our previous work from several years ago demonstrated that the relationship between the velocity of the DW and the applied magnetic field could be influenced by the magnetostatic interaction with another microwire with identical properties [37]. Furthermore, only a small number of publications have reported on the distinctive magnetic properties observed in a linear array consisting of microwires with varying chemical compositions and magnetic properties [45].
As an outcome, the notion of situating two microwires with distinct characteristics in close proximity to each other appeared innovative and promising to us. In order to achieve a localized influence, one of the microwires was deliberately chosen to be considerably shorter than the other. Consequently, one of the designated wires was a lengthy Fe-rich microwire. In this microwire, which possesses the magnetic bistability phenomenon, we examined the displacement of an individual DW, the presence of which we were already aware of. The second microwire, with which we aimed to exert a localized influence, was a short Co-rich microwire.
We opted for a specific set of research techniques that appeared most appropriate for this study, including the fluxmetric method [46], the Sixtus-Tonks method [37], and the magneto-optical Kerr effect technique [47].
Experimental Details
In our studies, we used two microwires: an as-prepared Fe-rich microwire and a Co-rich microwire. The Co-rich microwire was located in close proximity to the Fe-rich microwire (Figure 1).
The magnetic hysteresis loops were obtained utilizing the fluxmetric technique, which has been previously employed for the characterization of magnetically soft microwires. The investigation of the magnetization reversal process in the surface region of the microwire was conducted using a MOKE loop tracer. The polarized light emitted by a He-Ne laser was directed towards the detector after being reflected from the microwire (Figure 1).
The DW velocity was determined using a modified Sixtus-Tonks experiment, which has previously proven effective in the examination of DW dynamics in magnetic microwires [20,26]. The microwire was positioned within an extended solenoid, establishing a magnetic field.
Three pick-up coils were positioned coaxially within the solenoid (Figure 1), encircling the sample and kept at an equal distance apart, for the purpose of evaluating the velocity of the DW. The velocity of the DW was determined by observing the difference in time between the electromotive force (EMF) peaks caused in the pick-up coils by the displacement of the DW. The pick-up coils were separated by the distance L.
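The time-of-flight estimate v = L/Δt described above can be illustrated with a short numerical sketch. Note that the coil spacing, sampling rate, and peak times below are hypothetical values chosen for the illustration, not the parameters of this experiment:

```python
import numpy as np

# Hypothetical parameters, for illustration only
L = 0.05                             # distance between adjacent pick-up coils, m
fs = 1_000_000                       # sampling rate, Hz
t = np.arange(0, 1e-3, 1 / fs)       # 1 ms record

def emf_peak(t, t0, width=5e-6):
    """Synthetic EMF pulse induced as the DW passes a pick-up coil."""
    return np.exp(-((t - t0) ** 2) / (2 * width ** 2))

emf1 = emf_peak(t, 200e-6)           # DW passes coil 1
emf2 = emf_peak(t, 450e-6)           # DW passes coil 2

# DW velocity from the time difference between the two EMF maxima
dt = t[np.argmax(emf2)] - t[np.argmax(emf1)]
v = L / dt
print(f"v = {v:.0f} m/s")            # 0.05 m over 250 us gives 200 m/s
```

In a real measurement, the EMF records are noisy and the peak times would be located by thresholding or fitting rather than a bare argmax.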
In addition to the phenomenon of surface hysteresis, the MOKE method facilitates the detection of MOKE peaks that correspond to the displacement of a domain wall across the surface of the specimen. By acquiring knowledge about the characteristics of the MOKE peaks at various positions on the surface of the elongated microwire sample (designated as locations I, II, and III in Figure 1), we employed the MOKE technique to infer the structure of the domain wall at the different aforementioned locations within the investigated microwire.
Results and Discussion
As an initial investigation, magnetic and magneto-optical hysteresis loops were acquired from the two samples utilized in the experiments. Figure 2 shows the hysteresis loops for the magnetic (represented by black lines) and MOKE (represented by the red line) methods. The Fe-rich microwire, which displayed positive magnetostriction, exhibited a volume hysteresis loop with a perfectly rectangular shape (Figure 2a, black line). Conversely, the Co-rich microwire, with a vanishing value of magnetostriction, exhibited an inclined hysteresis loop with significantly lower coercivity (Figure 2b). The shape of the volume hysteresis loop observed for the Fe-rich microwire is associated with the so-called "magnetic bistability effect".
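The coercivity values compared here can be extracted from digitized loop data as the field at which a magnetization branch crosses zero. The following is a minimal sketch using synthetic data (the loop shape and the coercive field of 80 A/m are assumed for the illustration, not taken from the measured loops):

```python
import numpy as np

# Synthetic ascending branch of a rectangular (bistable) hysteresis loop;
# the magnetization switches sharply near the assumed coercive field.
Hc_true = 80.0                        # A/m, assumed value for this sketch
H = np.linspace(-200.0, 200.0, 2001)  # applied axial field, A/m
M = np.tanh((H - Hc_true) / 2.0)      # normalized magnetization, sharp switch

# Coercivity = field of the zero crossing, found by linear interpolation
idx = int(np.argmax(M >= 0))          # first sample with M >= 0
H0, H1, M0, M1 = H[idx - 1], H[idx], M[idx - 1], M[idx]
Hc = H0 - M0 * (H1 - H0) / (M1 - M0)
print(f"Hc = {Hc:.2f} A/m")
```

The same zero-crossing procedure applied to the descending branch gives the second coercive field; for a symmetric loop the two agree in magnitude.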
Figure 2 depicts diagrams illustrating the domain structures. The black arrows represent magnetization in a schematic manner. When the external magnetic field reached a sufficient magnitude, both types of microwires reached saturation (top and bottom insets). This is denoted by the black arrows that align with the microwire axis. In Fe-rich microwires, magnetization reversal occurs through the swift movement of a flat domain wall separating domains with opposite magnetization (as shown in the left inset). Generally, in Co-rich microwires, various domain structures, such as axial, helical, or circular structures, may exist. However, in this instance, magnetization reversal occurs through the rotation of magnetization without forming a stable domain structure (as shown in the right inset).
The process of magnetization reversal in Fe-rich microwires takes place when a single DW is detached from one of the closure end domains and is subsequently displaced along the microwire [20,26,48,49]. On the other hand, the magnetization reversal process in Co-rich microwires is characterized by the rotation of magnetization. As a result, these microwires typically exhibit low coercivity and high magnetic permeability [50][51][52][53].

It was equally important to obtain pertinent details regarding the process of magnetization reversal within the surface layer of a Fe-rich microwire. The confirmation of our assumptions regarding the magnetic bistability, referred to as the "surface bistability effect", was made evident by the rectangular shape observed in the MOKE hysteresis loop. This observation was made both in the sample's volume and on its surface. Therefore, we deduced that during the magnetization reversal, there was swift displacement of a solitary domain wall on the surface of the Fe-rich sample, as well as within the bulk. It is important to highlight the slight deviation of the signal from a linear path, as observed in the MOKE hysteresis loop. This occurred because the laser beam reflected not from a flat surface, but from a cylindrical surface of the sample. Nonetheless, this did not hinder our ability to detect magnetization jumps on the sample's surface that were linked to the rapid movement of the domain wall. The slight disparity in the coercivity magnitude observed in the magnetic and MOKE hysteresis loops arose from the fact that the process of magnetization reversal initiates from the surface of the Fe-rich sample.
After obtaining initial insights into the bulk and surface magnetization reversal process, we initiated an investigation into the propagation of the domain wall (DW) using the Sixtus-Tonks method. In Figure 3, the magnetic field dependencies of velocity in the Fe-rich sample are illustrated. The placement of the Co-rich short wire aligns with the diagram depicted in Figure 1.
The Co-rich microwire was situated in close proximity to the surface of the long Fe-rich microwire, in the space between the secondary coils 2 and 3. The black line (Figure 3) represents the velocity that was measured between coils 1 and 2, whereas the red line (Figure 3) was measured in the region of the sample between coils 2 and 3, where the shorter Co-rich microwire was situated. A substantial disparity exists between the values of the domain wall velocity measured in the various regions of the sample. In simpler terms, a controlled deceleration in the motion of the domain wall is apparent.
To examine the operation of regulated alterations in the speed of the domain wall, we investigated the conversion of the MOKE peaks acquired at different positions on the surface of the microwire.
The initial MOKE measurement (designated as point I in Figure 1) was carried out between coils 1 and 2. A comparison of the EMF peak registered by coil 1 and the MOKE peak registered at point I is shown in Figure 4. It is evident that the shapes of these two signals bear significant resemblance. This observation suggests the rapid, homogeneous, and compact motion of the domain wall within the volume of the microwire and on its surface. In this particular instance, the MOKE peak exhibited a somewhat narrower width compared to the peak acquired using the Sixtus-Tonks technique. Presumably, this discrepancy arose from the fact that the width of the domain wall on the sample's surface was slightly smaller than within its interior.
In the second phase of the MOKE investigation, our focus shifted towards the region between coils 2 and 3, which exhibited a lower speed of the domain wall. The MOKE peaks were obtained at the designated locations II and III, as illustrated in Figure 1. Point II can be identified on the surface of the Fe-rich microwire, positioned in close proximity to the Co-rich microwire. On the other hand, point III was situated on the surface of the Fe-rich microwire, slightly above the Co-rich microwire. Our belief is that this particular area of the Fe-rich microwire, encompassing points II and III, found itself within the zone of influence exerted by the Co-rich microwire, albeit to a varying degree at each point.
The findings of these MOKE experiments are depicted in Figure 5. It is evident that the morphology of the MOKE peaks exhibits variability contingent upon the precise location of the observation point. As previously mentioned, a consistent peak achieved at point I corresponds to the displacement of the compact and regular domain wall. Conversely, the signal received at point II (the blue line in Figure 5) exhibits a modified shape in relation to the peak obtained at point I. An additional series of smaller peaks and a reduction in the amplitude of the primary peak signified a metamorphosis of the domain wall on the surface of the Fe-rich microwire. The signal received at point III (the green line in Figure 5) underwent even more pronounced alterations. At this juncture, a series of peaks with varying heights became apparent. Simultaneously, the primary peak continued to decrease.
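The qualitative distinction drawn here, a single compact MOKE peak versus a main peak with smaller satellites, can be quantified by counting local maxima above an amplitude threshold. A minimal sketch with synthetic signals (the peak positions, amplitudes, and threshold are illustrative assumptions, not the measured traces):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)

def gauss(t, t0, a=1.0, w=0.02):
    return a * np.exp(-((t - t0) ** 2) / (2 * w ** 2))

# Synthetic MOKE responses: one compact peak for a regular DW (point I),
# and a reduced main peak with satellites for a transformed DW (point III).
signal_I = gauss(t, 0.5)
signal_III = gauss(t, 0.5, a=0.6) + gauss(t, 0.4, a=0.3) + gauss(t, 0.62, a=0.25)

def count_peaks(y, thresh=0.2):
    """Count strict local maxima whose amplitude exceeds the threshold."""
    inner = y[1:-1]
    is_max = (inner > y[:-2]) & (inner > y[2:]) & (inner > thresh)
    return int(np.count_nonzero(is_max))

print(count_peaks(signal_I), count_peaks(signal_III))
```

For noisy experimental traces, a more robust peak finder with minimum-distance and prominence criteria (e.g. `scipy.signal.find_peaks`) would replace the bare local-maximum test.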
Henceforth, the alterations observed in the MOKE signal imply a metamorphosis of the surface segment of the domain wall within the sphere of impact of the Co-rich microwire. Evidently, the impact of the Co-rich microwire is associated with its stray fields (Figure 6); however, certain peculiarities exist. We maintain the belief that the impact of stray fields is non-uniform, both in terms of the microwire's length and diameter. In the vicinity of the Co-rich microwire (designated as point II in our experimental set-up), the configuration of domains was primarily in-
Figure 1. Schematic picture of the experimental set-up: 1, 2, and 3 are pick-up coils; I, II, and III are the points of laser reflection during the MOKE experiment. The position of the Co-rich microwire relative to the pick-up coils and the Fe-rich microwire is demonstrated.
Figure 3. V(H) dependencies obtained from the Fe-rich microwire. V1-2 represents the velocity dependence obtained between secondary coils 1 and 2. V2-3 represents the velocity dependence obtained between secondary coils 2 and 3.
Figure 4. V(H) dependencies obtained in a linear array consisting of long Fe-rich and short Co-rich microwires. V1-2 represents the velocity dependence obtained between secondary coils 1 and 2. V2-3 represents the velocity dependence obtained between secondary coils 2 and 3.
Figure 5. V(H) dependencies obtained in a linear array consisting of long Fe-rich and short Co-rich microwires. V1-2 represents the velocity dependence obtained between secondary coils 1 and 2. V2-3 represents the velocity dependence obtained between secondary coils 2 and 3.
Figure 6. Schematic sketch of stray field distribution.
In a model with endogenous female labour supply and wages, we show that liquidity constraints that prevent households from buying child care generate an inefficiency and amplify gender gaps in the labour market. We evaluate the relative merits of paid maternity leave, child care subsidies, and government loans in mitigating liquidity constraints and promoting gender equality. While an extension in the duration of the leave has ambiguous effects, child care subsidies and loans in the form of child care vouchers remove the liquidity constraints and reduce gender gaps in participation and wages. We illustrate the mechanisms at play in a numerical example using Spanish data.
Introduction
Despite progress, gender gaps in the labour market are still wide. The average gender participation gap in EU countries is around 10 percentage points and the unadjusted gender wage gap is around 15%, with large variation across countries. According to Bertrand (2020) and Cortés and Pan (2020), having children remains a key source of gender inequality in the labour market. There is increasing evidence that, while parenthood is almost a non-event in fathers' labour market outcomes, mothers reduce labour force participation, the number of hours worked, and experience a reduction in hourly wages (Angelov et al. 2016;Kleven et al. 2019a;De Quinto et al. 2021;Casarico and Lattanzio 2021;Herrarte and Urcelay 2022). These negative effects persist throughout women's lives rather than being short-term, and are common to many countries irrespective of differences in family policies (Kleven et al. 2019b).
Affordability of care is often mentioned in surveys as one of the reasons why mothers do not work or quit their jobs. According to the Eurostat Database, between 0.6% (in Czechia) and 7.5% (in Romania) of mothers do not work because child care is too expensive. Indeed, across European countries, female labour force participation is negatively correlated with the share of mothers not working because child care services are too expensive. In Italy, 46% of mothers who quit their job in 2020 gave as reason the difficulties of combining work and care (Ispettorato Nazionale del Lavoro 2021), due to lacking care support. In Spain, the share of mothers who do not work because child care is too expensive is 6.4%.
Child care costs are certainly critical in mothers' labour supply decisions. If the household has enough resources to buy child care, both parents can work and accumulate experience, granting the household a higher lifetime income. In the impossibility to borrow from future earnings, some households with children may be unable to pay child care costs and face a liquidity constraint that forces one of the parents, typically the mother, to quit working. Quits impose adjustment costs on firms, that reduce the wage of young women. In addition, since women who quit accumulate less experience, also their future wages will be lower.
Developing a model that allows for endogenous labour force participation and wages, this paper studies the impact of liquidity constraints on gender gaps in the labour market and evaluates the relative merits of an extension in paid maternity leave duration, a child care subsidy, and a government loan in mitigating liquidity constraints and reducing gender inequality. We illustrate the mechanisms at work with a numerical example using data from Spain.
To the best of our knowledge, we are the first to isolate the role of liquidity constraints related to the purchase of child care on gender inequality. 1 Clearly, not all households face the same market child care costs and are equally likely to be liquidity constrained, even at similar income levels. In fact, market child care costs can show considerable heterogeneity. Many households rely on friends and relatives, and, therefore, face zero market child care costs. Others only need a few hours of babysitting. Among households needing to place their children in a nursery, some may live near a public facility, while others may have to use (more expensive) private institutions. Some may have neither. In some households with multiple children, the older one can help take care of the younger, or children may be all taken care of at once. Finally, some households may require special care for one or more of their children. We rely on this heterogeneity to illustrate that women in liquidity constrained households may be willing to, but unable to work.
We set up a simple unitary model, where households are composed of a man and a woman of given identical productivity. Some households will have children. Following Bjerk and Han (2007), the market cost of child care is randomly distributed across households, to account for the aforementioned heterogeneity in needs for child care in a setup with exogenous fertility. 2 Differently from Bjerk and Han (2007), we assume that mothers have the right to a paid maternity leave, as it is the case in most developed countries: both the leave and the decision to quit in order to take care of the child generate an adjustment cost for the firm, which is reflected in lower wages for women compared to men. In addition, we account for a second period of work, when all men and women work, wages depend on productivity and accumulated experience, and there is no cost related to child care.
Firms meet workers at the beginning of the first period, when a contract is signed. When offering a work contract, firms form beliefs on the probability that a household will have children, know that mothers will be on maternity leave, and form expectations on whether they will return to work after the leave period. Households are formed immediately after a work contract is signed and a share of them have children. Mothers are on paid leave for a fraction of the first period, at the end of which the household has to decide whether to buy child care in the market or take care of the child at home. In the latter case, one parent must quit working. As long as lifetime income when both parents work is higher than lifetime income when only one of them does, the household will prefer to buy child care in the market. However, child care costs need to be paid in the first period of work, and-in the impossibility to borrow from future earnings-there may be households that cannot afford to pay them out of first period income. Since firms penalise women ex ante for their period on leave and for the threat that they will quit, they earn less than men with the same productivity and end up being the ones to quit when the household cannot afford to pay for child care costs.
We show that the presence of liquidity constraints generates an inefficiency and increases gender wage and participation gaps in equilibrium, compared to a situation in which all households interested in buying child care can afford to do so. As a result, enabling women in constrained households to return to work when young reduces gender gaps in the labour market.
Regarding the effectiveness of different policy instruments in addressing the liquidity constraint and the ensuing gender inequality, we find that an extension in the duration of maternity leave has an ambiguous impact on gender gaps in participation and wages. On the one side, a longer leave reduces child care costs and may make it more likely that mothers return to work after the leave, increasing their participation, with a positive effect on their wages when young and on average female wages when old. On the other side, a longer leave is more costly to firms and this has a negative effect on young women's wages, reducing their participation. We do not know a priori which effect will prevail. In the case of Spain, our numerical exercise shows that when we increase the duration of maternity leave from 4 to 12 months per child, the gender gap in participation declines, whereas that in wages increases. The introduction of a child care subsidy reduces child care costs, allowing women in liquidity constrained households to return to work. This increases female labour force participation as well as wages, thanks to the lower adjustment costs firms face. Thus, gender inequality in the labour market is reduced. A loan in the form of a child care voucher has the same effect on gender inequality as the subsidy, but it entails no tax cost. The numerical exercise confirms that both policies can remove the liquidity constraint and increase labour force participation and wages of young women in Spain. Removing the liquidity constraint with a loan has slightly larger effects on gender equality than doing so with a proportional subsidy, given that the latter requires increasing the tax.
Our paper is related to contributions studying the role of statistical discrimination in generating gender gaps and how policy can address them (e.g. Bjerk and Han 2007;Dolado et al. 2013;Lommerud et al. 2015). 3 Unlike these previous works, we emphasise the contribution of liquidity constraints to amplifying gender gaps and explore the role of loans among other policies. Chapman and Higgins (2009) propose household loans to help women with children, but they do not look at gender inequality. Student loans have been used and discussed for a long time to address liquidity constraints preventing the payment of education costs, which deliver returns in the future. Findeisen and Sachs (2016) and Stantcheva (2018) have recently underlined the role of income contingent student loans as part of the optimal tax policy. Ho and Pavoni (2020) have studied the optimal design of child care subsidies in a setting where agents are heterogenous and have private information on their productivity. Bastani et al. (2020) have investigated the subsidisation of child care expenditure in an optimal taxation framework, where parents can choose the quality and quantity of care, and the latter affects children's human capital (see also Casarico et al. 2015). 3 A vast empirical literature investigates the effects of family policies on maternal labour supply and health, on fertility and time allocation decisions, on children's human capital and health, with a view on the overall impact in terms of reduction in gender gaps in the labour market and in household production. See Olivetti and Petrongolo (2017) and Rossin-Slater (2018) for exhaustive surveys. Österbacka and Räsänen (2022) provide evidence on the effects of child home care and private day care allowances on mothers' return to employment after childbirth in Finland. Bergemann and Riphahn (2023) study the employment effects of a change in parental leave benefits in Germany.
Parental leaves are a central element of family policies in most OECD countries, and a few papers study their effects in a theoretical setup. Barigozzi et al. (2018) focus on the endogenous formation of social norms and show that parental leave can reduce social welfare. Bastani et al. (2019) show that a mandatory parental leave can be part of the socially optimal policy when firms are not allowed to offer differentiated contracts due to anti-discrimination legislation. Del Rey et al. (2017) underline the role of the relative bargaining power of firms and workers in determining the effect of leave duration on unemployment and wages. Finally, Del Rey et al. (2021) explore the impact of maternity leave duration on female labour supply in a model with endogenous fertility. Their model allows for non-monotonic effects of leave duration on female labour supply.
The rest of the paper is organised as follows. Section 2 presents the model and Section 3 the equilibrium. Section 4 shows the effects of the policies. Section 5 presents the numerical example based on Spanish data. Section 6 offers a discussion and Section 7 concluding remarks.
The model
To analyse how household liquidity constraints influence gender gaps in participation and wages, and study the role of policy, we build on Bjerk and Han (2007). Starting from their basic framework, we add a paid maternity leave for women with children and a second period of work, when earnings depend on productivity and accumulated experience. In this setting, we allow for the presence of liquidity constraints for households with children. In the next sections we describe the building blocks of the model, starting from the behaviour of workers and firms, to then determine the equilibrium and its properties.
Workers
In period t, there is a continuum of young individuals of identical productivity x and gender g = {m, f}. The total measure of males [resp. females] is normalised to one. Young individuals coexist with an equal mass of children of type x and gender g, that make no economic decision, and an equal mass of old individuals of type x, gender g, and labour market experience ℓ. We neglect time indices because all periods are the same. The population growth rate is zero, as implied by the above.
Individuals live for three periods during which they are children, young and old, respectively. From the perspective of individual lifetime, we use first and second period to refer to the periods in which agents are active in the labour market. They are young in the first period and old in the second period. If individuals work during the whole first period, they accumulate high experience h. If they work only during part of the first period, they accumulate intermediate experience i. If they do not work during the first period they accumulate nil experience n. Therefore, experience is ℓ = {h, i, n}, with h > i > n > 0. At the beginning of the first period, young individuals sign a work contract involving a wage w_g(x). Immediately after signing the contract, households are formed by a woman and a man, with a proportion ρ of these households having children. 4 Mothers take a paid maternity leave of total length 0 < α ≤ 1, after which they may return to work or remain at home for the rest of the first period. The length of the paid leave is set by the government and cannot be chosen by the households. The government finances the leave with a lump-sum tax τ levied on all workers, young and old. The paid leave, instead, is exempt from taxation by assumption. 5 The interest rate is zero. 6 If mothers stay at home, they will take care of the children. If the mother returns to work, the household has to buy child care in the market at cost η > 0, where η is a random variable with an increasing and continuous distribution function F on (0, ∞), with density F′ = f > 0. As discussed previously, the cost of buying child care η can take different values depending on the availability of relatives, of child care facilities, the number of hours of care, or special needs.
Old individuals earn a wage w_ℓ(x) that depends only on observable productivity and experience, with ℓ = {h, i, n}. There is no unemployment. Finally, there are no capital markets where households can borrow. Figure 1 represents mothers' timeline. Men and women without children are assumed to work during the whole first and second periods.
Firms
There is a continuum of competitive, profit maximising firms, that offer wages w_g(x) to young workers of type x and gender g = {m, f}, and wages w_ℓ(x) to old workers of type x and experience ℓ = {h, i, n}. The female partner takes a maternity leave, which imposes on the firm a cost q(x) > 0 during her absence. This cost can be interpreted in terms of adjustment, reallocation of tasks to cover for the missing worker, or administrative costs. If a worker quits her job during the first period, the firm incurs a cost p(x) > 0, which also reflects the presence of adjustment costs related to turnover. To simplify, we assume that q(x) = p(x). When workers are young, they have no experience. When they are old, they no longer need to purchase child care on the market. Offered wages are those that set profits to zero. Profits made when hiring a young male worker of productivity x are

π_m(x) = x − w_m(x). (1)

When hiring young women, firms know that they will have children with probability ρ, take a leave of duration α and return to work with probability λ. Then, expected profits when hiring a young woman of productivity x are

π_f(x) = (1 − ρ)(x − w_f(x, α)) + ρλ(1 − α)(x − w_f(x, α)) − ρ[α + (1 − α)(1 − λ)]q(x), (2)

where q(x) is the cost imposed on the firm when a (female) worker of productivity x is absent, either because she is on maternity leave, or because she quits.
To better understand Eq. (2), Table 1 summarises the proportion of young female workers in different situations, and the associated surplus and costs for the firm. The (1 − ρ) women who do not have children produce x and are paid w_f(x, α). Since they will work for the entire first period after having signed a contract, they impose no cost on the firm. Women who have children (ρ) take a leave of duration α, which costs the firm αq(x). Among those ρ who have children, firms expect a proportion λ to return to work after the leave and generate a surplus (1 − α)(x − w_f(x, α)). Finally, the firm expects (1 − λ) female workers with children not to return to work; this implies an additional cost (1 − α)q(x). The last part of Eq. (2) captures the total expected cost that women impose on firms, given by the sum of ραq(x) during the maternity leave and ρ(1 − α)(1 − λ)q(x) for women who quit.
Using Eq. (2) and setting profits equal to zero, we obtain the wage offered to young women of productivity x, given firms' beliefs λ:

w_f(x, α) = x − ρ[α + (1 − α)(1 − λ)]q(x) / [(1 − ρ) + ρλ(1 − α)], (3)

which implies w_f(x, α) < x. Note that this wage is smallest if all mothers are expected to leave and never return, i.e. λ = 0. Women are willing to sign a work contract before entering the stage of household formation as long as w_f(x, α) > 0. We now impose a condition that guarantees that all young women are willing to sign a work contract before forming a household, even when offered the lowest possible wage. This ensures that the female participation rate is positive.
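As a sanity check on the zero-profit wage, the short sketch below (Python, with purely hypothetical parameter values) verifies that the offered wage lies below productivity x, rises with firms' beliefs λ, and collapses to x − ρq(x)/(1 − ρ) when no mother is expected to return (λ = 0):

```python
# Eq. (3): zero-profit wage offered to young women, given firms' beliefs lam.
def wage_young_woman(x, rho, alpha, lam, q):
    expected_cost = rho * (alpha + (1 - alpha) * (1 - lam)) * q
    expected_work = (1 - rho) + rho * lam * (1 - alpha)
    return x - expected_cost / expected_work

# hypothetical values, not taken from the paper's calibration
x, rho, alpha, q = 1.0, 0.7, 0.2, 0.15
w_none = wage_young_woman(x, rho, alpha, 0.0, q)  # no mother expected back
w_all = wage_young_woman(x, rho, alpha, 1.0, q)   # all mothers expected back
assert w_none < w_all < x                         # wage increases with lam
assert abs(w_none - (x - rho * q / (1 - rho))) < 1e-12  # lam = 0 benchmark
```

The monotonicity in λ is what drives the feedback between beliefs and participation discussed below.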
Assumption 1 (1 − ρ)x > ρq(x).

This condition guarantees that w_f(x, α) > 0 even when λ = 0. Since there is no compulsory leave for men, w_m(x) > w_f(x, α), and all men are willing to sign a work contract too.
Finally, when hiring an old worker of productivity x and experience ℓ, which are both observable characteristics, firms' profits are

π_ℓ(x) = x_ℓ − w_ℓ(x), (5)

where x_ℓ denotes the marginal productivity of an old worker of type x and experience ℓ, so that, at zero profits, w_ℓ(x) = x_ℓ.
Households' lifetime income
Members of households without children work both periods, and their net lifetime income is

w_m(x) − τ + w_f(x, α) − τ + 2[w_h(x) − τ],

where τ stands for the lump-sum tax paid by each worker in each period to finance the maternity leave.
In households with children, both adult members are active in the labour market at the beginning of the first period. Then, mothers take the paid leave, which is not subject to taxation, for a portion α of the period. If the mother goes back to work when the leave is over, the household has to buy market child care at price η during the remaining period 1 − α. In the second period children are grown up and they no longer impose a cost of care on the parents. Both members of the household continue working, since they have more experience and hence higher wages. Fathers work the whole time when young and have experience ℓ = h. Mothers also work in the first period but are on leave for a fraction α of it; hence, they accumulate less experience (ℓ = i). Thus, net lifetime income of households where young mothers work is

w_m(x) − τ + w_f(x, α) − (1 − α)τ − (1 − α)η + w_h(x) − τ + w_i(x) − τ. (7)

If the mother does not work when the leave is over, the household does not buy child care in the market and the woman accumulates no experience (ℓ = n). Assumption 2 states that all wages are larger than taxes and implies, in particular, that old women always work irrespective of experience. 7 Net lifetime income of households where young mothers do not work is:

w_m(x) − τ + αw_f(x, α) + w_h(x) − τ + w_n(x) − τ. (8)

Note that children affect women's wages in two distinct ways. First, the compulsory leave α and the fact that some women quit their jobs to take care of children increase firms' expected costs and, hence, reduce wages for all young women, whether they are mothers or not, because of statistical discrimination. Second, there is a penalty children impose only on mothers, through a reduction in second period wages due to lower experience (w_n < w_i < w_h).
Equilibrium
The choice to return to work after the leave by mothers and the simultaneous setting of wages by firms, together with a balanced government budget constraint determine the equilibrium.
Decision to return to work after maternity leave
Households with children decide on whether the mother goes back to work after the leave by comparing household lifetime income when she does (and the household buys child care) and when she does not (and the mother stays at home to provide care). The return to work after the leave period affects mothers' experience and their wage when old.
Comparing household lifetime income if young mothers go back to work after the leave period and if they do not, i.e. Eqs. (7) and (8), it will be optimal for the household that the mother goes back to work if:

(1 − α)[w_f(x, α) − τ − η] + w_i(x) − w_n(x) > 0. (9)

This condition allows to identify a threshold level of child care costs η*, below which households are better off if women return to work after the leave period, for given wages:

η*(x, α) = w_f(x, α) − τ + [w_i(x) − w_n(x)]/(1 − α). (10)

If η < η*(x, α), households want to buy child care on the market. Otherwise, it is better if the mother stays at home. By Assumption 2, η*(x, α) defined by Eq. (10) is positive. At equilibrium, the number of mothers who wish to return to work at the end of the paid leave, for given wages, is F(η*(x, α)).
Note that η* is also the threshold of child care costs below which it is efficient for women to return to work after the leave, given taxes and paid leave duration. If child care costs are larger than η*, the additional income earned by mothers by going back to work and accumulating more experience is smaller than the costs incurred.
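The comparison behind the threshold can be sketched numerically. The snippet below encodes the two lifetime incomes as described above (leave benefit αw_f untaxed, child care bought for the fraction 1 − α of the period) with illustrative, non-calibrated numbers; the income decomposition is our reading of the text, not a verbatim transcription of the paper's equations. It checks that the household prefers the mother to return exactly when η falls below the threshold:

```python
def income_work(wm, wf, wh, wi, tau, alpha, eta):
    # Eq. (7): mother returns after the leave, household buys child care
    return (wm - tau) + (wf - (1 - alpha) * tau) - (1 - alpha) * eta \
           + (wh - tau) + (wi - tau)

def income_home(wm, wf, wh, wn, tau, alpha):
    # Eq. (8): mother stays home; only the leave benefit alpha*wf accrues
    return (wm - tau) + alpha * wf + (wh - tau) + (wn - tau)

def eta_star(wf, wi, wn, tau, alpha):
    # Eq. (10): cost threshold below which returning to work is optimal
    return wf - tau + (wi - wn) / (1 - alpha)

# illustrative (not calibrated) values
wm, wf, wh, wi, wn = 1.0, 0.8, 1.0, 0.9, 0.6
tau, alpha = 0.01, 0.2
thr = eta_star(wf, wi, wn, tau, alpha)
for eta in (thr - 0.1, thr + 0.1):
    prefer_work = income_work(wm, wf, wh, wi, tau, alpha, eta) > \
                  income_home(wm, wf, wh, wn, tau, alpha)
    assert prefer_work == (eta < thr)
```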
So far, we have not considered whether households have enough income when young to pay for child care costs. All that mattered was lifetime income, as if there were perfect credit markets where households could borrow. When households cannot use their future earnings as collateral for a loan, the mother can go back to work at the end of the paid leave only if the two-earner household can afford to buy child care out of first period income.
To consider the question of affordability, let c denote unavoidable household consumption (food, housing, etc.). The following assumption guarantees that households can always pay for minimum consumption in the first period, even if mothers do not work:

w_m(x) − τ + αw_f(x, α) ≥ c.
This assumption also guarantees that households can pay for minimum consumption in the first period when the mother goes back to work at the end of the leave. 8 However, for them to be able to pay for child care costs we further need that:

w_m(x) − τ + w_f(x, α) − (1 − α)τ − c ≥ (1 − α)η. (11)

We can now identify a threshold η_c, above which households cannot afford the purchase of child care, i.e. Eq. (11) is not satisfied:

η_c(x, α) = [w_m(x) − τ + w_f(x, α) − (1 − α)τ − c]/(1 − α). (12)

If η_c(x, α) ≥ η*(x, α), all those households willing to buy child care are able to. If η_c(x, α) < η*(x, α), or, by Eqs. (10) and (12),

w_m(x) − τ + αw_f(x, α) − c < w_i(x) − w_n(x),

some households will be liquidity constrained, i.e. unable to buy child care and let the mother return to work, in spite of this choice generating more net lifetime income. That is, in spite of this being the efficient choice. In this case, the equilibrium number of mothers that return to work is F(η_c(x, α)). In households with η ∈ (η_c(x, α), η*(x, α)), women would like to go back to work after the leave but cannot afford to do so. Hence, the number of liquidity constrained households is F(η*(x, α)) − F(η_c(x, α)). Note that the number of liquidity constrained households depends both on how many households find it optimal to buy child care in the market and on how many of them are able to pay for it. Eliminating this liquidity constraint is efficient because it increases aggregate income. As we will see below, it also reduces gender inequality. Figure 2 represents the relevant thresholds and preferences over/affordability of market child care.
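Both thresholds and the share of constrained households can be sketched together. The affordability bound below follows our reading of the first-period budget in Eq. (12), and the cost distribution is an assumed exponential, chosen purely for illustration:

```python
import math

def eta_star(wf, wi, wn, tau, alpha):
    # Eq. (10): cost threshold below which returning to work is optimal
    return wf - tau + (wi - wn) / (1 - alpha)

def eta_c(wm, wf, tau, alpha, c):
    # Eq. (12): largest bill payable out of first-period income net of
    # taxes and unavoidable consumption c
    return (wm - tau + wf - (1 - alpha) * tau - c) / (1 - alpha)

# illustrative (not calibrated) values
wm, wf, wi, wn = 1.0, 0.8, 0.9, 0.6
tau, alpha, c = 0.01, 0.2, 1.5
F = lambda eta: 1 - math.exp(-eta / 0.5)   # assumed child care cost cdf

e_star = eta_star(wf, wi, wn, tau, alpha)
e_c = eta_c(wm, wf, tau, alpha, c)
constrained = F(e_star) - F(e_c)           # want to buy care but cannot
assert e_c < e_star and 0 < constrained < 1
```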
Participation and wages
With perfect competition (firms' zero profits at equilibrium), young male, old male and old female workers' wages coincide with their respective marginal productivities (Eqs. (1) and (5)). The wage paid to a young woman is given by Eq. (3). In equilibrium, firms' beliefs on how many mothers will return to work at the end of the paid leave coincide exactly with how many do, i.e.

λ = F(η̄(x, α)),

where

η̄(x, α) = min{η*(x, α), η_c(x, α)}.

Then, the equilibrium wage of a young woman is:

w_f(x, α) = x − ρ[α + (1 − α)(1 − F(η̄))]q(x) / [(1 − ρ) + ρF(η̄)(1 − α)]. (16)

Thus η̄ = η*(x, α) if all households willing to buy child care can afford to do so, and η̄ = η_c(x, α) if some households are constrained. Appendix 1 provides the formal proof of existence of the equilibrium. We conclude this section with a proposition characterising gender inequality at equilibrium. To this end, first, we compute labour force participation of young men and women. All men participate for the entire first period, that is, MLF = 1. Women participate the whole first period if they have no children or if they have children and return to work at the end of the leave, since women on maternity leave are also part of the labour force. Labour force participation of young women at equilibrium is, hence:

FLF = (1 − ρ) + ρ[F(η̄) + α(1 − F(η̄))]. (19)

Second, we calculate the gender gap in wages of old workers who, by assumption, always work. Note that wages of old workers depend on accumulated experience, and this differs on average between men and women. In particular, we denote the average wage of old workers of productivity x and gender g = {m, f} by ω̄_g(x). It holds that ω̄_m(x) = w_h(x), since all men have high experience. The average wage of old women writes:

ω̄_f(x) = (1 − ρ)w_h(x) + ρF(η̄)w_i(x) + ρ(1 − F(η̄))w_n(x). (20)

The first term on the right hand side captures wages of old women without children, who all have high experience. The term ρF(η̄) refers to women who have children, return to work after the leave, and earn w_i(x) in the second period of work; the ρ(1 − F(η̄)) women who have children and go back to work only when the children are grown up earn w_n(x) in the second period of work.
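The requirement that firms' beliefs be self-fulfilling can be illustrated with a fixed-point iteration. In the sketch below every parameter value and the cost distribution are illustrative assumptions, and η̄ is taken to be η_c, as in the constrained case; the damped map λ ↦ F(η_c(w_f(λ))) is iterated until beliefs reproduce behaviour:

```python
import math

x, rho, alpha, q = 1.0, 0.7, 0.2, 0.15     # hypothetical parameters
wm, tau, c = 1.0, 0.01, 1.5
F = lambda eta: 1 - math.exp(-eta / 0.5)   # assumed child care cost cdf

def wf(lam):
    # Eq. (16) given beliefs lam
    return x - rho * (alpha + (1 - alpha) * (1 - lam)) * q / \
           ((1 - rho) + rho * lam * (1 - alpha))

def eta_c(w):
    # Eq. (12): affordability threshold given the young woman's wage w
    return (wm - tau + w - (1 - alpha) * tau - c) / (1 - alpha)

lam = 0.5
for _ in range(200):                       # damped fixed-point iteration
    lam = 0.5 * lam + 0.5 * F(eta_c(wf(lam)))
assert 0 < lam < 1
assert abs(lam - F(eta_c(wf(lam)))) < 1e-8  # beliefs are self-fulfilling
```

The map is increasing in λ (a higher expected return rate raises the wage, which relaxes affordability), so the damped iteration settles on a consistent belief in this example.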
Proposition 1
The equilibrium exhibits gender gaps in labour force participation and wages. In particular,

1. The ratio of male to female labour force participation for young workers is:

MLF/FLF = 1/[(1 − ρ) + ρ(F(η̄) + α(1 − F(η̄)))] > 1. (21)

2. The ratio of male to female wages for young workers is:

w_m(x)/w_f(x, α) = x / (x − ρ[α + (1 − α)(1 − F(η̄))]q(x)/[(1 − ρ) + ρF(η̄)(1 − α)]) > 1. (22)

3. There is no gender participation gap among old workers by assumption.

4. The ratio of male to female average wages for old workers is:

ω̄_m(x)/ω̄_f(x) = w_h(x)/[(1 − ρ)w_h(x) + ρF(η̄)w_i(x) + ρ(1 − F(η̄))w_n(x)] > 1. (23)

Proof By construction, all men, women without children, and old women work. Participation of young mothers is given by Eq. (19). Wages are given by Eqs. (1), (5) and (16).
We now investigate the effects of enabling women in constrained households to return to work. This amounts to raising the equilibrium threshold from η_c(x) to η*(x). In the equilibrium in which some households cannot afford to pay child care costs, i.e. η̄ = η_c(x), gender gaps in participation and wages result from a combination of statistical discrimination and liquidity constraints in young age, and lower accumulated experience by mothers, with negative repercussions on female wages in old age. Lifting the liquidity constraint, the labour force participation of young mothers increases and the ratio in Eq. (21) goes down. In addition, the wage of young women increases. Indeed, differentiating Eq. (16) with respect to η_c we get:

∂w_f(x, α)/∂η_c = ρ(1 − α)q(x)f(η_c) / [(1 − ρ) + ρF(η_c)(1 − α)]² > 0.

Then, the gender wage gap in Eq. (22) goes down. Finally, more women accumulate labour market experience and the average wage of old women increases. Hence, the gender wage gap in Eq. (23) is reduced. This allows us to write the following
Corollary 1 Enabling women in constrained households to return to work when young increases efficiency and reduces gender gaps in participation and wages.
In Appendix 2, we study how gaps in participation and wages change with individual productivity x through a comparative statics exercise.
Balanced government budget constraint
The government funds the benefits accruing to mothers on leave by levying a lump-sum tax τ on all workers. Letting F(η̄) denote the number of households with child care costs smaller than η̄, where η̄ is the child care cost borne by the last household where the mother goes back to work at equilibrium, the government budget constraint reads:

τ[3 + (1 − ρ) + (1 − α)ρF(η̄)] = ραw_f(x, α),

where the term in square brackets is the per-period tax base: young men, old men and old women, who all pay τ in full, young women without children, and returning mothers, who pay τ only on the fraction 1 − α of the period they work.
Policy
In this section we explore the effect of alternative policies on gender inequality when some households are liquidity constrained. We first discuss the effects of increasing the duration of the maternity leave. Then, we explore the role of child care subsidies to dual earner households. Finally, we consider a government loan. To study this instrument, we assume that, unlike households, the government can borrow in international markets to obtain the funds required to cover child care expenses. We also assume that the government has the power to seize incomes directly, in case households do not repay the loan.
Extending the duration of paid maternity leave
We first consider the impact of changes in the duration of the paid maternity leave on gender gaps when some households are liquidity constrained. In principle, longer periods of paid maternity leave reduce the market cost of child care, because households in which mothers return to work will have to pay it for a shorter period of time. However, they also affect wages directly, because longer leave periods are more costly to firms. This has repercussions on participation, which may feed back into wages. We state the following proposition.
Proposition 2 If some households are liquidity constrained, increasing the duration of the paid maternity leave α has the following effects on labour market outcomes: a) More young women of productivity x return to work after maternity leave and their wages increase. Then, the participation of young women in the labour market increases. As a result, gender gaps in participation and wages for young workers decrease, and so does the gender wage gap of old workers. b) More young women of productivity x return to work after maternity leave, but their wages decrease. Then, the participation of young women in the labour market increases and the gender gap in participation for young workers decreases; the gender wage gap of young workers increases, while that of old workers decreases. c) Fewer young women of productivity x return to work after maternity leave and their wages decrease. Then, the effect on the participation of young women in the labour market is ambiguous and so is the effect on the gender gap in participation for young workers. The gender wage gap increases both for young and old workers.
Proof See Appendix 2.
The intuition of the proposition is as follows. Increasing the duration of the maternity leave has two different effects on the number of women returning to work. First, longer duration reduces child care costs and incentivises women to go back to work. Second, wages can increase or decrease, with a further impact on the number of women who return to work.
In fact, with a longer leave, mothers are less likely to quit, which reduces costs for firms, but they are also absent from work for a longer period, which increases costs for firms. Depending on which of the two effects dominates, wages can increase or decrease. If wages increase, the incentive to go back to work is stronger. This is case a in the Proposition. If wages decrease, this weakens the incentive to return to work. In case b in the Proposition, more women return to work in spite of the decrease in wages. In case c, the negative effect on wages dominates the reduction in child care costs and fewer women go back to work after the leave. This reinforces the negative effect on wages further.
With respect to female labour force participation, given by Eq. (19), a longer duration of the leave keeps women attached to the labour force for longer, and can increase or decrease the number of mothers going back to work after the leave. If more women return to work after the leave, participation increases (cases a and b). If fewer women return to work (case c), the effect of a longer leave on female labour force participation is ambiguous. Finally, the impact on the average wage of old women hinges on the proportion of women returning to work after childbirth. Hence, the average wage of old women increases in cases a and b, and decreases in case c when the duration of the leave is extended.
To conclude this analysis, note that funding a longer maternity leave will require adjusting the government budget constraint. We can show that increasing taxes has a negative effect on gender inequality (see Appendix 2). In particular, it holds that

∂η*(x, α)/∂τ = −1 < 0 and ∂η_c(x, α)/∂τ = −(2 − α)/(1 − α) < 0.

Hence, increasing taxes limits the positive effects of extending paid leave duration in cases a and b, and exacerbates the negative effects in cases b and c.
Child care subsidies to dual earner households
The government could subsidise households with child care costs η ∈ (η_c, η*), that is, households for which it is optimal that the mother returns to work, but which cannot afford it. However, since η is not observed, the government does not know the child care needs of one particular family and, thus, cannot subsidise constrained households only. Under these circumstances, we assume that the government subsidises a proportion s of all child care bought in the market. The first period income of a constrained household, net of the subsidised child care bill, would then become:

w_m(x) − τ + w_f(x, α) − (1 − α)τ − c − (1 − s)(1 − α)η.

Hence, the households that can now afford child care are those with

η ≤ η_s(x, s) = η_c(x, α)/(1 − s).

Clearly, η_s(x, s) > η_c(x, α): more households can afford for the woman to work after childbirth, given α. From Corollary 1, this reduces gender gaps in participation and wages. Subsidising child care, however, requires higher taxes. The government budget constraint becomes:

τ[3 + (1 − ρ) + (1 − α)ρF(η̄)] = ραw_f(x, α) + s(1 − α)ρ ∫_0^η̄ η f(η) dη.

As before, taxes limit the positive effects of the subsidy since, with subsidies, both η_s and η* fall as τ rises. The details of these calculations are available in Appendix 2. Summing up, subsidising child care costs in dual earner households can mitigate liquidity constraints and reduce gender inequality in the labour market, but the required taxes will hinder their effectiveness in doing so.
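Under a proportional subsidy, the affordability bound simply scales: if households pay only a share (1 − s) of the bill, the threshold rises by the factor 1/(1 − s). This scaling is our reading of the affordability condition, sketched below with the calibrated threshold from Section 5:

```python
def eta_s(eta_c, s):
    # paying (1 - s) of the child care bill raises the affordability
    # threshold from eta_c to eta_c / (1 - s)
    return eta_c / (1 - s)

ec = 0.357                       # calibrated eta_c from Section 5
assert eta_s(ec, 0.0) == ec      # no subsidy, no change
assert eta_s(ec, 0.5) == 2 * ec  # a 50% subsidy doubles the threshold
```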
We now assume that the government can borrow in the international capital market to lend constrained households what they need to buy child care. Since the government will not aim to make a profit on this loan, we assume that it lends at the same rate at which it borrows. This justifies our assumption that interest rates are zero for simplicity. 9
Loans
In this section we characterise a simple loan programme run by the government to mitigate liquidity constraints and, by Corollary 1, reduce gender inequality. We show that only constrained households have incentives to apply for a loan that can be used exclusively to pay for child care services. Our assumption is that the government can borrow in international markets, and that it can directly seize household income so that non-repayment is not an option.
Proof First, note that over-borrowing and default are not relevant options. On the one hand, no household has an interest in borrowing more than it needs, since borrowing can only be used to pay for child care services and has to be paid back. This prevents over-borrowing. On the other hand, the government can seize an amount of income that could even be larger than the amount owed in case of non-repayment. This eliminates incentives for default. Let us now look at each type of household in turn:

a) For households with η < η_c(x, α), first period income is larger than child care costs; hence they do not need to borrow, and borrowing would not lead to an increase in lifetime income.

b) For households with η ∈ (η_c(x, α), η*(x, α)), first period income w_m(x) − τ + w_f(x, α) − (1 − α)τ − c is lower than child care costs (1 − α)η. They need to borrow the difference, which we can write as (1 − α)(η − η_c(x, α)). If they borrow, women in these households will go back to work, and the additional income earned will be larger than their loan repayment, since η < η*(x, α).

c) In households with η > η*(x, α), since Eq. (9) is not satisfied, it holds that

(1 − α)[w_f(x, α) − τ − η] + w_i(x) − w_n(x) < 0,

i.e. households attain higher lifetime income if mothers do not go back to work after the leave. Hence, they are better off staying at home and providing care themselves, instead of buying child care to return to work.

9 In particular, let r denote the cost of borrowing for the government. This is also both the interest households would obtain from lending (opportunity cost of waiting) and the interest households would pay for a government loan (since the government will not intend to make a profit). Then, with R = 1 + r, the present value of lifetime income of a household where the mother goes back to work after borrowing B and repaying RB in the second period reduces to Eq. (7) when r = 0. Assuming r = 0 in this context is innocuous.
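The loan sizes arising in case b can be sketched directly. The threshold values below are the calibrated ones from Section 5, and the "gain" expression is our rearrangement of the return-to-work condition (the extra lifetime income from working is proportional to η* − η):

```python
def loan_needed(eta, eta_c, alpha):
    # shortfall between the first-period child care bill and spare income
    return max(0.0, (1 - alpha) * (eta - eta_c))

def gain_from_working(eta, eta_star, alpha):
    # extra lifetime income if the mother returns, net of the child care
    # bill: (1 - alpha) * (eta_star - eta), positive iff eta < eta_star
    return (1 - alpha) * (eta_star - eta)

alpha, ec, es = 0.2, 0.357, 0.782        # thresholds from the calibration
for eta in (0.4, 0.6, 0.75):             # constrained households (case b)
    assert loan_needed(eta, ec, alpha) > 0          # must borrow to buy care
    assert gain_from_working(eta, es, alpha) > 0    # and repaying pays off
```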
Clearly, more complex environments (e.g. the inclusion of uncertainty, different attitudes towards risk, or asymmetric information) provide additional challenges to the design of a loan programme. Chapman and Higgins (2009) were the first to propose household loans to help women with children to return to work. A very similar tool, that of student loans, has, however, been discussed for a long time. Like higher education investments, child care can be seen as an investment that improves women's future earning prospects. Hence, all the insights gained about the implementation of student loans can be applied to child care loans. Income contingent loans, in particular, have gained prominence as a way to deal with asymmetric information and uncertain future outcomes. 10 We now propose a numerical example to compare the effects on gender inequality of the three policies, when some households are liquidity constrained.
A numerical example
The theoretical model presented before shows that some households' inability to afford child care, besides generating an inefficiency, amplifies gender gaps in participation and wages. It also demonstrates how different policies affect the extent of gender inequality, by altering households' constraints. In particular, the model illustrates that a longer paid maternity leave has unclear effects on female labour force participation and wages, and that the effects of subsidies and loans differ due to the role played by taxes. In this section, we calibrate and simulate the model using Spanish data. Since there are many aspects of the real world that are currently not captured by our model, our goal is not to reach quantitative conclusions. Instead, we wish to provide an example of how the different policies affect gender inequality at equilibrium when some households are liquidity constrained.
Calibration
We calibrate the model in yearly terms for households with average earnings in the Spanish economy in 2018. Table 2 presents the calibrated parameters and variables. Next, we describe the calibration procedure.
Households and benchmark leave duration

Young individuals are between 30 and 49 years old. Old individuals are 50 and above. We set the proportion of households with children at ρ = 0.704, which reflects the percentage of women aged 30 to 49 who are mothers in 2018 according to the Spanish National Institute of Statistics. In the benchmark calibration we consider a scenario of young households with two adults (a man and a woman) and 2/ρ children. Mothers receive 4 months of fully paid maternity leave per child. Thus, we set α = (4 months × 2 children)/(19 years × 12 months × ρ) = 0.0496, implying that a woman aged between 30 and 49 years spends 5% of her available time on leave. Older households consist of two adult members.
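The leave-share arithmetic can be replicated directly; the small gap with the reported 0.0496 is due to rounding:

```python
rho = 0.704                        # share of mothers among women aged 30-49
# 4 months of paid leave per child, 2/rho children per household with
# children, over 19 years x 12 months of young age
alpha = (4 * 2) / (19 * 12 * rho)
assert abs(alpha - 0.0496) < 0.0005  # matches the reported value
assert abs(alpha - 0.05) < 0.001     # ~5% of young-age time on leave
```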
Wages

We use the 2018 Spanish Wages Structure Survey to calibrate the wage distribution of full time workers. Our model has young and old women and men. Old men have high job experience, while old women may have high, intermediate or no job experience. We consider that male and female workers with high job experience are those with more than 11 years of job seniority. In contrast, female workers with no job experience have less than one year of job seniority. For female workers with intermediate experience we want to focus on those who only stopped working during maternity leave. For this reason, for intermediate experience, we consider women with 10 to 11 years of job seniority. 11 Using a total sample of 28,500 establishments with around 220,000 employees, we compute the average annual wage of old experienced men, which is equal to 42,953 euros, and normalise it to w_h(x) = 1. In the model, women without children work for the entire youth period, which gives them high experience when old. Thus, we assume that they earn the same wage as old men. This assumption will only affect the computation of the average wage of old women, which will be higher than that observed in the data, without any other implication. 12 We express the other average wages as ratios of w_h(x). Thus, the wage of a young man is set to

Labour supply and proportion of mothers returning to work after leave

By assumption, all young men, old men, and old women work. Only young women can be inactive, if they have children and do not go back to work after the leave. Using data from the Spanish Labour Force Survey, we target the labour force participation rate of young women aged between 30 and 49 in Spain in 2018 at 81.0%. In the model, the share of women with children who return to work is F(η_c). Then, using Eq. (19) and the female labour force participation rate (81.0%), we obtain the proportion of mothers who go back to work after the leave is over: F(η_c) = 0.717.
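The share of returning mothers can be backed out by inverting the participation equation. The sketch below assumes the participation rate takes the form (1 − ρ) + ρ[F + (1 − F)α], which is our reading of Eq. (19) with women on leave counted as participating:

```python
rho, alpha, flfp = 0.704, 0.0496, 0.810   # calibrated inputs
# flfp = (1 - rho) + rho * (F + (1 - F) * alpha); solve for F
F_eta_c = ((flfp - (1 - rho)) / rho - alpha) / (1 - alpha)
assert abs(F_eta_c - 0.717) < 0.002       # share of mothers returning to work
```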
Plugging this value in Eq. (20), we also obtain the calibrated average wage of an old woman: ω̄_f(x) = 0.732.
Taxes
We calibrate the lump-sum tax by calculating the revenues necessary to cover the cost of paid maternity leave per taxpayer, expressed as a fraction of w_h(x) = 1. Using Eq. (28), the tax τ required to finance the maternity leave is equal to ραw_f(x)/[3 + (1 − ρ) + (1 − α)ρF(η_c)] = 0.0029. This implies an annual amount of 124 euros per taxpayer, which is not far from the average expenditure on parental leave per employee observed in Spain in 2018 (94 euros).
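The budget arithmetic for τ can be checked directly with the calibrated values from this section (benefits are the leave payments ραw_f, the tax base counts full-period and part-period taxpayers):

```python
rho, alpha, wf, F_ec = 0.704, 0.0496, 0.307, 0.717
wh_eur = 42953                                    # euros, old experienced men
benefits = rho * alpha * wf                       # paid leave outlays
taxbase = 3 + (1 - rho) + (1 - alpha) * rho * F_ec  # taxpayers per period
tau = benefits / taxbase
assert abs(tau - 0.0029) < 0.0002                 # reported lump-sum tax
assert 115 < tau * wh_eur < 130                   # roughly 124 euros a year
```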
Liquidity constrained households
The 2010 Spanish special module on reconciliation between work and family life from the Labour Force Survey shows that 6.4% of the mothers with children below 15 years of age do not work because child care services are too expensive. We assume that this percentage matches that of households who are liquidity constrained. Then, F (η * ) − F (η c ) = 0.064. Thus, we obtain F (η * ) = 0.781.
Child care costs

Each household needs to spend a different amount on child care for the mother to be able to return to work. These costs depend on a large variety of elements, for example: whether the household can get help from relatives, and how much; the availability of public or private child care facilities nearby; working schedules; commuting time; whether the child gets sick often (needing a different arrangement, like a babysitter who takes care of him/her at home); and the age distribution of children, as older children can take care of younger ones, or other special needs.
Calibrating the distribution of these costs is not an easy task. The distribution of actual expenditure on child care can be a good measure of the distribution of child care costs only for those households that buy child care on the market, but it is not informative of the costs faced by those households that decide to rely on household provision of child care. Since the costs of the latter type of households are not observed, we assume that the overall distribution of child care costs is of the Weibull type and calibrate the parameters of the distribution to match the values of F(η*) and F(η_c) that we have obtained before. 13 To calibrate this distribution we first need the thresholds η* and η_c, which are calibrated using Eqs. (10) and (12). We obtain η* = 0.782 and η_c = 0.357. Then, having two parameters to calibrate (the scale parameter and the shape parameter), and using the targets F(η*) and F(η_c) as well as the Weibull distribution function, we obtain η_shape = 0.229 and η_scale = 0.112. The median of this distribution is 0.029, approximately 10% of the young woman's wage in the benchmark scenario corresponding to a fully paid leave of 4 months per child (α = 0.050).
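The two calibration targets pin down the Weibull parameters in closed form. Solving the two quantile conditions exactly, as below, reproduces both targets and a median near the reported 0.029; the resulting shape and scale differ slightly from the rounded values quoted above:

```python
import math

e1, p1 = 0.357, 0.717     # F(eta_c) target at the affordability threshold
e2, p2 = 0.782, 0.781     # F(eta_star) target at the optimality threshold
# Weibull cdf: F(e) = 1 - exp(-(e / scale) ** shape); two conditions,
# two unknowns, solvable in closed form
a1, a2 = -math.log(1 - p1), -math.log(1 - p2)
shape = math.log(a2 / a1) / math.log(e2 / e1)
scale = e1 / a1 ** (1 / shape)
cdf = lambda e: 1 - math.exp(-((e / scale) ** shape))
assert abs(cdf(e1) - p1) < 1e-9 and abs(cdf(e2) - p2) < 1e-9
median = scale * math.log(2) ** (1 / shape)
assert abs(median - 0.029) < 0.002        # close to the reported median
```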
Firm's adjustment costs
The costs incurred by the firm when mothers are on leave or quit are obtained using the wage Eq. (16) with η̄ = η_c. We get q = 0.147.
Simulations
We first explore the effect on gender inequality of increasing the duration of paid maternity leave when some households are liquidity constrained. We then study the effects of a proportional subsidy and a loan. We know that both instruments reduce gender inequality, and that a loan can eliminate the liquidity constraint. Therefore, in the simulation, we calculate the subsidy rate that eliminates the liquidity constraint so that the subsidy and the loan can be compared on equal terms.
Modifying the length of fully paid maternity leave
We change the duration of the fully paid maternity leave α when households are liquidity constrained. We maintain the assumption that households have 2/ρ children. Besides the benchmark scenario (α = 0.050), we consider two additional scenarios. The first one assumes that there is no paid leave and mothers work for the entire first period. Thus, we set α = 0. In the second, the paid leave increases to 12 months per child (α = 0.15), which is near to the average paid leave duration in OECD countries in 2020 according to Table PF2.1.A in the OECD Family Database. Note that, according to our strategy of calibration, all these scenarios imply adjusting the lump-sum tax to finance the change in the leave duration. Table 3 shows the simulated scenarios.
If we start from the benchmark calibration with α = 0.05 (column 2) and eliminate maternity leave by setting α = 0 (column 1), the female labour force participation rate decreases from 81.0% to 79.9%, while w f (x) increases from 0.307 to 0.3127. As a result, the gender wage gap of young workers falls from 13.91% to 11.83%. In contrast, when maternity leave duration increases from four months to one year (α = 0.15, column 3), the female labour force participation rate increases from 81.0% to 83.22%, while w f (x) falls from 0.307 to 0.2943. Thus, the gender wage gap of young workers increases from 13.91% to 18.84%. According to our model, the reduction in the wage of young mothers takes place because the negative effect of a higher α on w f (x) dominates the positive one due to a higher labour force participation (case b in Proposition 2). Note that the share of constrained households does not fall in response to a higher α. In fact, while more households can afford to pay for child care (η c shifts to the right), it is also the case that more households find it optimal to do so (η * shifts to the right). For example, the percentage of constrained households increases from 6.40% to 6.41% when the paid leave parameter increases from α = 0.05 to α = 0.15. Finally, note that the increase in the maternity leave duration from four months (α = 0.05) to one year (α = 0.15) increases the lump-sum tax from 124 (τ = 0.0029) to 360 (τ = 0.0084) euros.
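As a sanity check on these numbers, if the gender wage gap is taken to be the conventional 1 − w_f/w_m (our assumption about the definition, which the model section makes precise), the reported female wages and gaps imply a roughly stable young male wage of about 0.36 across the three scenarios, consistent with the adjustment costs falling mostly on female wages:

```python
# Back out the implied young male wage from each scenario's reported
# female wage and gender wage gap, assuming gap = 1 - w_f/w_m.
# (The definition is our assumption; the numbers are from Table 3.)
scenarios = {
    "alpha=0.00": (0.3127, 0.1183),
    "alpha=0.05": (0.3070, 0.1391),
    "alpha=0.15": (0.2943, 0.1884),
}

implied_wm = {k: wf / (1.0 - gap) for k, (wf, gap) in scenarios.items()}
for k, wm in implied_wm.items():
    print(k, round(wm, 4))  # all close to 0.36
```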
Introducing child care subsidies
In Section 4.2 we saw that a proportional subsidy on child care costs can reduce the proportion of households that are liquidity constrained and, thus, gender inequality. We now compare two different scenarios of child care subsidies. The first one corresponds to our benchmark scenario where the proportion of the child care cost subsidised by the government is equal to s = 0. In the second scenario, we introduce a proportional subsidy s and set the rate so that the percentage of households that cannot pay child care costs but would be better off if they could is set to zero. This happens when s = 0.543. We adjust the lump-sum tax to finance the change in s, which implies increasing τ from 0.0029 (124 euros) to 0.0088 (378 euros).
As expected, female labour force participation increases and so do wages, with an ensuing reduction in gender wage gaps. This happens because more households can afford for women to participate in the labour market, thus reducing the firm's expected cost of quitting, with positive effects on female wages and labour force participation. Specifically, the participation rate of women increases from 81.0% to 85.27% and the wage of young women increases from 0.3070 to 0.3169. As a result, the average wage of old women goes up too, reducing the gender wage gap in old age (Table 4).
Introducing loans
We now explore the effect of removing the liquidity constraint through the provision of a loan. The loan (see (38)) covers the difference between the child care cost the household faces if the mother returns to work once the maternity leave is over, (1 − α)η, and the child care cost (1 − α)η c that the household can afford out of its first-period income. In our simulated scenario, the average loan provided by the government amounts to 0.173 as a fraction of w h (x) = 1 (7,430 euros, 2,653 per child). Table 5 shows the benchmark calibration with constrained households (F (η * ) − F (η c ) > 0, column 1) and the results of simulating the removal of household liquidity constraints (F (η * ) − F (η c ) = 0, column 2). Removing liquidity constraints increases female labour force participation and reduces gender wage gaps for both the young and the old. Specifically, the participation rate of women increases from 81.0% to 85.3%. This effect is slightly larger than the one obtained with the proportional subsidy because taxes remain unchanged in this case; the lump-sum tax stays at τ = 0.0029 in both scenarios. Young women's wages increase from 0.3070 to 0.3170 and old women's wages increase from 0.732 to 0.752.
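A minimal sketch of the loan rule just described: constrained households, those with child care cost η between η c and η *, receive the gap between the cost they face and the cost they can afford, and all other households receive nothing. The function names and the inverse-CDF sampling scheme are ours; parameter values follow the calibration section.

```python
import math, random

ALPHA, ETA_C, ETA_STAR = 0.05, 0.357, 0.782
SHAPE, SCALE = 0.229, 0.112   # calibrated Weibull parameters from the text

def loan(eta):
    """Loan to a household with child care cost eta (fraction of w_h = 1)."""
    if ETA_C < eta <= ETA_STAR:
        return (1.0 - ALPHA) * (eta - ETA_C)
    return 0.0

def sample_eta(rng):
    """Inverse-CDF draw from the Weibull distribution of child care costs."""
    u = rng.random()
    return SCALE * (-math.log(1.0 - u)) ** (1.0 / SHAPE)

# Monte Carlo average loan over all households (illustrative only)
rng = random.Random(0)
draws = [sample_eta(rng) for _ in range(100_000)]
avg_loan_all = sum(loan(e) for e in draws) / len(draws)
print(round(avg_loan_all, 4))
```

Note that households above η * get no loan: they prefer home production of child care even when unconstrained, so the policy leaves them unaffected.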
Discussion
Child care is expensive. Some households cannot afford it, and this forces them to have one parent staying at home until (pre-)school is free. Typically, it is the mother who stays home, and this amplifies gender gaps in labour market participation and wages. For these households the outcome is inefficient: their lifetime income net of child care costs would be larger if the mother went back to work right after maternity leave, because of the positive effect of accumulated experience on wages. We show that allowing mothers in these households to remain employed reduces gender gaps in the labour market. The inefficiency studied here is similar to that arising in (tertiary) education, where liquidity constraints can prevent the young from making investments that would yield positive net returns. To address these liquidity constraints many countries use student loans as part of their education policy. One of the advantages of government-led loan programmes is that repayments can be embedded in the income tax, like in the optimal student financial aid formulas developed in Colas et al. (2021). In the context of child care policy a similar idea has been advanced by Chapman and Higgins (2009), but to the best of our knowledge no country has implemented a policy of this kind to date. Our numerical example in Section 5 (see Table 5) shows that, in 2018, child care related liquidity constraints could have been removed with an average loan of 7,430 euros (2,653 per child) in Spain.
In contrast, child care subsidies and maternity leaves are very common instruments around the world to support maternal employment. Their effect on gender inequality when some households are liquidity constrained has, however, not been considered before. Child care subsidies reduce child care costs and allow women in liquidity constrained households to return to work. Firms then face lower adjustment costs related to hiring women, and female wages go up. In our numerical example, a subsidy of 54.3% of child care expenses eliminates the liquidity constraint and increases the young female labour force participation rate and wages almost to the same extent as loans (see Table 4).
Maternity leave policies are also a potentially good tool to address liquidity constraints, because longer maternity leaves extend the time the mother is at home, thus making child care less expensive. However, longer leave periods impose adjustment costs on firms, which may not only lower young women's wages but also offset the positive effect of the leave on participation. From Proposition 2, we can see that increasing the duration of paid maternity leave is more likely to reduce gender gaps when, for instance, the right-hand side of Eq. (29) is small. This happens if f (η c ), the density of women who return to work thanks to the extended duration, is large relative to F (η c ), the mass of women who do so before the change in the policy. Also, an extension in maternity leave duration is more likely to have positive effects on women's labour market outcomes when firm adjustment costs q(x) are lower, and therefore the wage of young women is higher (left-hand side of Eq. (29) larger through Eq. (16)). In our numerical example, extending the maternity leave from 4 to 12 months increases the female labour force participation rate, but not as much as the other policies do, and slightly reduces the wage of young women. Thus, it is not the best policy to address gender inequality in the labour market, at least in Spain.
Concluding comments
Maternity leaves and child care subsidies are widely used around the world to guarantee mothers a job-protected leave and promote work-life balance. They are also a sizeable fraction of overall family policy expenditure in OECD countries. The potential benefits of loans in the context of family policy, instead, have been put forward by Chapman and Higgins (2009), but their role in addressing gender inequality in the labour market has not been considered in the literature, as far as we know.
In this paper we show that mitigating the liquidity constraints that prevent households from buying child care reduces gender gaps in participation and wages. In this context, we evaluate the relative merits of an extension in paid maternity leave duration, a child care subsidy, and a government loan. We find that increasing the duration of paid maternity leave has ambiguous effects on gender inequality because, on the one hand, this policy reduces child care costs and liquidity constraints but, on the other hand, it imposes higher adjustment costs on firms, which then pay women lower wages. Subsidising child care costs mitigates liquidity constraints and unambiguously reduces gender inequality because these subsidies do not impose costs on firms. The same happens with a loan given out in the form of a child care voucher. The subsidy requires higher taxes, but our numerical example shows that the tax per worker required to fund it is relatively small.
Future work can assess the effectiveness of these policies in reducing gender inequality in more complex environments, where uncertainty about future earnings plays a role. Note also that we have studied the effects of these policies on gender gaps in participation and wages rather than on overall welfare, taking a positive rather than a normative approach. We leave the analysis of welfare effects for future research.
In vitro activities of crude extracts and triterpenoid constituents of Dichapetalum crassifolium Chodat against clinical isolates of Schistosoma haematobium
Dichapetalum crassifolium Chodat (Dichapetalaceae) is widely distributed in Africa, Tropical Asia and Latin America. As part of our quest for potential bioactive lead compounds for various neglected tropical diseases, we report the anti-schistosomal potential of the crude extracts and chemical constituents of the stems and roots of Dichapetalum crassifolium. Column chromatography of extracts of the stems and roots led to the isolation and identification of three oleanane-type triterpenoids, friedelan-3β-ol (1), friedelan-3-one (2), and maslinic acid (3); the ursane-type triterpenoid, pomolic acid (4); and the dammarane-type tetracyclic triterpenoids, dichapetalin A (5) and dichapetalin M (6). Dichapetalin A was isolated from only the roots. Isolated compounds were identified by comparison of their physico-chemical and spectral data with published data. The highest in vitro anti-schistosomal activity (IC50) of the crude extracts against clinical isolates of Schistosoma haematobium (Bilharz 1852) was 248.6 μg/ml for the ethyl acetate extract of the root, while dichapetalin A gave the highest activity among the compounds at 151.1 μg/ml, compared with 15.5 μg/ml for the standard drug, praziquantel. The rest of the compounds showed activities of 177.9, 191.0, and 378.1 μg/ml for the mixture of β-sitosterol/stigmasterol, dichapetalin M and friedelan-3-one, respectively. The least active extract was the methanol extract of the stem (893.7 μg/ml). The constituents of D. crassifolium showed activity against S. haematobium below that of praziquantel. It is envisaged that the presence of multiple layers and the minute size of the pores in the egg shells may preclude penetration of the eggs by the compounds.
Introduction
Naturally-occurring pentacyclic triterpenoids of the lupane, oleanane and ursane classes are known to possess a variety of biological activities. Quite a number of these triterpenoids have been isolated and identified from some plant species of the Dichapetalaceae family. One of the hitherto uninvestigated species is Dichapetalum crassifolium Chodat widely distributed in Africa, Tropical Asia and Latin America (Breteler, 1978). It is typically found in the rain or gallery forests, primitive woods, shady places, and among rocks (Breteler, 1978;Hiern et al., 1901). Even though there is no documented ethnobotanical use and phytochemical investigation for D. crassifolium, other species of the genus have indicated the presence of a wide range of secondary metabolites with diverse biological activities. These include the fluorinated carboxylic acids reputed to be responsible for the toxicity of some members of the genus (Meyer and O'Hagan, 1992) as well as various types of triterpenoids including the dichapetalins reputed to have cytotoxic and antiproliferative activities (Fang et al., 2016;Long et al., 2013;Osei-Safo et al., 2012). Other non-terpenoidal compounds recently reported from the genus are the bisbenzyl derivatives heudelotol A and B from D. heudelotii (Osei-Safo et al., 2017). Additional compound types obtained from the genus are the alkaloid trigonelline, the amino acids N-methylserine and N-methylalanine (Breteler, 1978;Eloff, 1980), sugars, various glycosides, esters of (E)-ferulic acid (Addae-Mensah et al., 2007;Adu-Kumi, 1997) and pyracrenic acid (Long et al., 2013).
Apart from the cytotoxic and antiproliferative activities exhibited by the dichapetalins (Achenbach et al., 1995;Addae-mensah et al., 1996;Jing et al., 2014;Osei-Safo et al., 2017), this unique class of triterpenoids has also shown anthelmintic (Chama et al., 2015;Jing et al., 2014), antifungal, feeding deterrent, inhibition of intracellular release of nitric oxide (NO) and acetylcholinesterase (AChE) activities (Jing et al., 2014). As part of our quest for potentially active constituents against parasitic and other causative agents of various neglected tropical diseases, we report the anti-schistosomal activity of the crude extracts and constituents of the stems and roots of the hitherto uninvestigated D. crassifolium against clinical isolates of Schistosoma haematobium Bilharz 1852, (Tan and Ahana, 2007).
Materials
The roots and stems of D. crassifolium were obtained from the Bobiri Forest Reserve in the Bosomtwe district of the Ashanti Region in July 2013. Identification was done by John Ntim-Gyakare formerly of the Forestry Commission, Kumasi. Voucher specimen (DCR001) has been deposited in the Ghana Herbarium, Department of Plant and Environmental Biology, University of Ghana.
TLC was performed on aluminium foil slides pre-coated with silica gel (thickness 0.2 mm, type Kieselgel 60 F254, Merck, Rogers, AR); detection: I2 vapour, vanillin stain and anisaldehyde spray reagent. Column chromatography was carried out on silica gel 60 (Fluka Analytical, Bellefonte, PA). Melting points (uncorrected) were determined on a Stuart Scientific melting point apparatus (Sigma Aldrich, St. Louis, MO). IR spectra were obtained on an FT-IR spectrometer at the Food and Drugs Authority in Ghana. Visualisation of spots under UV light was done with a UVGL-58 handheld UV lamp at 254-365 nm. Organic solvents were concentrated using a Buchi rotary vacuum evaporator.
NMR spectra were recorded on a 500 or 600 MHz Bruker Avance instrument at 90, 125 or 150 MHz for 13C NMR and 360, 500 or 600 MHz for 1H NMR. Depending on the solubility of a particular compound, the solvents used were CDCl3/CD3OD, DMSO-d6, acetone-d6 or CD3OD, with TMS as the internal standard. Anti-schistosomal activity testing was carried out at the parasitology laboratory of the Noguchi Memorial Institute for Medical Research, University of Ghana. Ethical clearance for the anti-schistosomal work was obtained from the Noguchi Memorial Institute for Medical Research (NMIMR-IRB CPN 059/13-14). Schistosome egg recovery and concentration from infested urine samples followed the modified Kotze et al. method (Kotze et al., 2005). Urine reagent strips were obtained from URIT Medical Electronic Co., Ltd, China. Sieves of different pore sizes for filtration of suspensions were obtained from Nonaka Rikaki Co. Ltd, Japan.
Sample collection of clinical isolates
S. haematobium eggs were obtained from urine samples collected from 120 school children in Tomefa in the Ga South District of Accra and stored in air-tight plastic containers. More than 80% of participants were between ages 8 and 14 years. Samples were first tested with urine reagent strips, URIT 10V to identify cases of haematuria and then kept in a Styrofoam box and transported to the parasitology laboratory at the Noguchi Memorial Institute for Medical Research.
Recovery, purification and identification of S. haematobium eggs from urine samples
Portions (10 ml) of collected urine samples were centrifuged and observed using a low power microscope to determine the presence of S. haematobium, whose eggs were identified by the presence of terminal spines (Figure 7). Urine samples that were positive for schistosome eggs were pooled into 50 ml falcon tubes and centrifuged at 500xg for 5 min.
The supernatant was discarded and the deposits suspended in 50 ml of normal saline (0.9% NaCl), and the centrifugation was repeated. The sediment was suspended in 40 ml of 0.015% Brij-35 and shaken vigorously. It was centrifuged again at 500×g for 5 min. The supernatant was discarded and the sediment re-suspended in normal saline in a 200 ml beaker; more saline was added to make up to the 150 ml mark. Because of the different particle sizes in the samples, which do not usually yield clean parasite eggs, sieves of different pore sizes were used.
For differential separation to obtain cleaner schistosome eggs, the suspension was filtered through a stack of three sieves of pore size 180 μm, 150 μm and 80 μm respectively.
The stack of sieves was thoroughly flushed with a jet of normal saline to wash the eggs. To prevent air lock, the 150 μm and 80 μm sieves were occasionally separated. At the end of washing, the 80 μm sieve was removed, inclined at approximately 45° to the horizontal, and a jet of saline applied to it to wash off the eggs into a beaker. The suspension was centrifuged at 200×g for 10 min and left for 30 min, after which the supernatant was aspirated down to 5 ml. Three 30 μl aliquots of the concentrated schistosome egg suspension were pipetted and observed under a light microscope; the number of eggs per 30 μl was noted and the average number of eggs per 30 μl determined. From this result, the approximate number of S. haematobium eggs in the 5 ml concentrated egg suspension was calculated.
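The scale-up from aliquot counts to the total egg number is a simple proportion; a sketch with made-up aliquot counts (the function name and the counts are illustrative, not data from the study):

```python
def eggs_in_suspension(aliquot_counts, aliquot_ul=30, total_ul=5000):
    """Average eggs per aliquot, scaled to the whole suspension volume."""
    avg_per_aliquot = sum(aliquot_counts) / len(aliquot_counts)
    return avg_per_aliquot * (total_ul / aliquot_ul)

# Hypothetical counts from three 30 uL aliquots of the 5 mL (5000 uL) suspension
print(round(eggs_in_suspension([12, 15, 14])))  # → 2278
```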
Extraction, isolation and structure elucidation of compounds
Soxhlet extraction of 5 kg each of the pulverized roots and stems of D. crassifolium, in batches of 500 g, was carried out exhaustively using 5 L of petroleum ether for 24 h to give 54 g of root and 34 g of stem extracts after concentration (Figure 1). Each plant residue was dried and further extracted with EtOAc and then MeOH. Concentration of the root extracts gave 77 g (EtOAc extract) and 108 g (MeOH extract) of crude material; the stem yielded 65 g (EtOAc extract) and 23 g (MeOH extract) of crude material (Figure 1).
2.4. In vitro screening of test samples against schistosome eggs using the 96-well plate egg hatch assay

For each of the test compounds, namely friedelan-3-one (2), dichapetalins A (5) and M (6) and the mixture of β-sitosterol/stigmasterol, stock solutions were prepared in DMSO; stock solutions of the crude extracts were also prepared. From the suspension of purified eggs, 50 μl was added to each well and the presence of eggs was ascertained under an inverted microscope. The final concentration of DMSO was kept below 0.1%. The choice of DMSO as a solvent and the selection of a suitable concentration that had no effect on the morphology of the eggs followed the method of Treger et al. (2014). Water and praziquantel (2 mg/ml stock solution) were used as negative and positive controls, respectively. Praziquantel, because of its partial solubility in cold water, was dissolved in warm water. The assays were incubated for 24 h at ambient temperature and the process of egg hatch was stopped by the addition of 100 μl of formalin to each well. The numbers of hatched eggs (larvae) and unhatched eggs were counted using inverted light microscopy. The percent egg hatch inhibition (%EHI) was calculated as:

%EHI = (number of unhatched eggs / (number of unhatched eggs + number of hatched eggs)) × 100

The %EHI was plotted against the log of the concentration. Extrapolation of the 50% EHI on the curve gave the half-maximal inhibitory concentration (IC50) of each test sample using GraphPad Prism v.7.
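The %EHI formula and the read-off of the IC50 can be sketched as below, with a simple log-linear interpolation standing in for the GraphPad extrapolation step (the interpolation scheme and the dose-response numbers are illustrative, not from the study):

```python
import math

def percent_ehi(unhatched, hatched):
    """Percent egg hatch inhibition as defined in the text."""
    return 100.0 * unhatched / (unhatched + hatched)

def ic50(concs, ehis):
    """Concentration giving 50% EHI, by linear interpolation of
    %EHI against log10(concentration) between bracketing points."""
    pts = sorted(zip(concs, ehis))
    for (c1, e1), (c2, e2) in zip(pts, pts[1:]):
        if e1 <= 50.0 <= e2:
            t = (50.0 - e1) / (e2 - e1)
            return 10 ** (math.log10(c1) + t * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% EHI not bracketed by the data")

# 30 unhatched and 10 hatched eggs in a well gives 75% inhibition
print(percent_ehi(30, 10))  # → 75.0
# Illustrative three-point dose-response curve
print(round(ic50([10, 100, 1000], [20.0, 40.0, 80.0]), 1))  # → 177.8
```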
Results and discussion
Compound 1 was identified as friedelan-3β-ol (Figure 3), mp 272-274 °C (Lit. 274-276 °C, Utami et al., 2013); it stained purple with anisaldehyde spray reagent. Compound 2 was identified as friedelan-3-one (Figure 3), mp 249-251 °C (Lit. 249-251 °C, Utami et al., 2013; Sousa et al., 2012); it stained yellow on TLC with anisaldehyde spray reagent. Other physico-chemical and spectroscopic properties were consistent with those reported in the literature (Chama, 2007; Chama et al., 2015; Osei-Safo et al., 2008). Compound 3 was similarly identified as maslinic acid (Figure 3, Table 1), a creamy powder which stained purple with anisaldehyde spray reagent, mp 249-251 °C (Lit. 248-250 °C; Hossain and Ismail, 2013; Lozano-Mena et al., 2014; Tanaka et al., 2003; Woo et al., 2014). This is the first report of maslinic acid from the genus Dichapetalum, aside from its isolation from Crataegus oxyacantha and Eriobotrya japonica. Compound 4 was characterised as pomolic acid (Figure 3, Table 1); its melting point and spectral data were consistent with literature values (Chama et al., 2015). Compounds 5 and 6 (Figure 6) were identified as dichapetalin A and dichapetalin M, respectively, upon comparison of their physico-chemical and spectral data with the literature (Table 2). The melting point of dichapetalin A (210-213 °C) and its spectral data (Table 2) were consistent with the literature (Lit. 212-213 °C; Achenbach et al., 1995; Addae-Mensah et al., 1996; Chama et al., 2015; Osei-Safo et al., 2008). The melting point of dichapetalin M (282-284 °C) and its spectral data were also consistent with known values (Lit. 280-282 °C; Osei-Safo et al., 2008). This is the first report of the presence of these two dichapetalins in D. crassifolium, which could be of chemotaxonomic importance. All the compounds were isolated from the EtOAc extracts; compounds 1 and 2 were additionally isolated from the petroleum ether extracts.
Phytochemical screening indicated the petroleum ether extracts contained only terpenoids in the stem and terpenoids, saponins and cardiac glycosides in the roots. Also, terpenoids and tannins were present in the EtOAc extracts of both the stem and root extracts in addition to saponins and cardiac glycosides which were present only in the roots. The methanol extracts contained terpenoids, tannins, and cardiac glycosides.
Results of in vitro anti-schistosomal activity against Schistosoma haematobium
An image of Schistosoma haematobium eggs identified under a low-power microscope is presented in Figure 7. The half-maximal inhibitory concentrations, IC50 (μg/ml ± SEM), of the reference standard, compounds and extracts were determined in duplicate experiments (n = 2), as presented in Table 3.
Generally, the isolated compounds showed higher ovicidal activity against S. haematobium eggs than the extracts, though the activities of both were low compared to the standard, praziquantel (Table 3). Among the compounds isolated from the stem, the mixture of β-sitosterol/stigmasterol showed the highest potency (IC50, 177.90 μg/ml), but this was about 11 times less potent than the standard praziquantel drug (15.47 ± 0.06 μg/ml) (Table 3, Figure 8). Dichapetalin A, isolated from the root, was the most potent compound overall (151.10 μg/ml), while friedelan-3-one showed the least potency, with an IC50 of 378.10 μg/ml (Table 3, Figures 8 and 9). Dichapetalin M from the root extract gave an IC50 of 191.00 μg/ml (Table 3, Figure 9).
All the triterpenoids were isolated from the relatively polar EtOAc extracts. This might explain why the most active compound, dichapetalin A, was isolated from the most active extract, the root EtOAc extract. The root crude extracts were generally more active than the stem crude extracts (Table 3), even though the root EtOAc extract contained saponins and cardiac glycosides in addition to the terpenoids and tannins also present in the stem EtOAc extract. This could be because the roots contain more of the terpenoids, as reflected in the greater yield of the root extracts (1.5%) compared with the stem extracts (1.3%).
A number of terpene compounds, including monoterpenes, sesquiterpenes, diterpenes and triterpenes, have shown remarkable in vitro and in vivo anti-schistosomal activity. The mechanism of their schistosomicidal effects is influenced by their lipophilicity, which allows easy crossing of the plasma membrane and interaction with intracellular molecules of the parasites to cause morphological changes (de Moraes, 2015). Monoterpenes with in vitro activity against S. mansoni adult worms include rotundifolone from Mentha x villosa at a concentration of 70 μg/ml (Matos-Rocha et al., 2013). (+)-Limonene epoxide in essential oils also showed activity at a concentration of 25 μg/ml, together with the detection of morphological alterations on the schistosome surface at concentrations of 25-75 μg/ml (de Moraes et al., 2013). Among the anti-schistosomal sesquiterpene compounds active against the immature stages of S. mansoni are the antimalarial compounds artemisinin, from the leaves of Artemisia annua L., artemether, artesunate and dihydroartemisinin (Liu et al., 2014; Utzinger et al., 2001). The essential oil constituent nerolidol was effective in vitro against adult worms of S. mansoni, with a reduction in worm motor activity and death at 31-62 μM (Silva et al., 2014). Budlein A from Viguiera spp. (Asteraceae) showed in vitro schistosomicidal activity against S. mansoni adult worms at 12.5 μM, and its derivatives 4α,5-dihydrobudlein A and 4α,5-11β,13-tetrahydrobudlein A gave 100% mortality at concentrations of 50 and 200 μM, respectively (Sass et al., 2014). The diterpene phytol, from chlorophyll, has also shown promising in vitro and in vivo activity against adult S. mansoni in infected mice. In addition, triterpenes such as betulin, isolated from Schefflera vinosa, were effective in vitro against S. mansoni adult worms at a concentration of 100-200 μM (Cunha et al., 2012).
The triphenylphosphonium derivatives of betulin and betulinic acid showed in vitro anti-schistosomal activity against newly transformed schistosomula and adult worms of S. mansoni at concentrations of 10 μM and 2 μM, respectively (Spivak et al., 2014). Moreover, balsaminol F and karavilagenin C are cucurbitane-type triterpenes from Momordica balsamina with effective in vitro anti-schistosomal activity against S. mansoni adult worms at LC50 values of 15 and 29 μM, respectively (Ramalhete et al., 2012). Crude extracts and fractions from plants of different families have also indicated anti-schistosomal activity at varying degrees of potency. The stem bark and roots of Rauwolfia vomitoria killed all cercariae of S. mansoni within 2 h of exposure at respective concentration ranges of 62.5-1000 μg/ml and 250-1000 μg/ml; the LC50 values after 1 and 2 h were 207.4 μg/ml (stem) and 61.18 μg/ml (root). In addition, both the stem and root of the plant showed 100% mortality of the adult worm within 120 h of incubation at a concentration range of 250-1000 μg/ml (Tekwu et al., 2017). The MeOH extracts of Curcuma longa L. (Zingiberaceae) and Nerium oleander L. (Apocynaceae) showed 100% S. mansoni worm mortality after a 24 h incubation period at concentrations up to 100 μg/ml (Abdel-Hameed et al., 2008). Also, MeOH extracts of five plants, Dryopteris filix-mas (Dryopteridaceae), Tanacetum vulgare (Asteraceae), Juglans nigra (Juglandaceae), Syzygium aromaticum (Myrtaceae) and Allium sativum (Liliaceae), exhibited strong potency at minimum effective concentrations of 50 μg/ml after 24 h against adult S. mansoni worms (Metwalley, 2015). In a study evaluating the effect of MeOH extracts on S. mansoni-infected Swiss albino mice, Malus domestica (Rosaceae) showed significant (P < 0.05) anti-schistosomal activity at concentrations of 300 mg/kg and 200 mg/kg, with respective worm reductions of 85.93% and 72.22%.
Allium cepa (Liliaceae), at 500 mg/kg and 300 mg/kg, indicated worm reductions of 72.59% and 58.52%, respectively, while Citrus limon (Rutaceae) showed the least worm reduction of 42.96% at 200 mg/kg and 26.63% at 100 mg/kg (Muema et al., 2015). In a similar study, different solvent extracts of Ocimum americanum (Lamiaceae) and Bridelia micrantha (Phyllanthaceae) gave significant anti-schistosomal activity against S. mansoni-infected Swiss albino mice (Waiganjo et al., 2016).
Conclusion
This study has established the presence of tetracyclic and pentacyclic triterpenoids in the stems and roots of D. crassifolium. The identification of dichapetalins from the plant is chemotaxonomically significant, since other species of the genus have been shown to contain this unique class of triterpenoids. So far, nine species of the genus have been shown to contain the dichapetalin class of compounds. However, only the roots of D. crassifolium contained both dichapetalins A and M; the stem yielded only dichapetalin M. For the first time, maslinic acid and pomolic acid have been isolated from both the stems and roots of the plant. The activity of both the extracts and the isolated triterpenoids against S. haematobium was too low to merit consideration as potential leads for development into an anti-schistosomal agent. The extracts and compounds might, however, have an effect on other stages of the disease, such as the schistosomula, miracidia and cercariae; a study of infections in experimental animals might reveal useful information on this. The generally low activity of the tested compounds and extracts may be attributed to the fact that schistosome eggs have multiple layers between the shell and the larva. The presence of these layers, together with the minute size of the pores in the egg shells, may preclude penetration of the eggs by the extracts used.
Author contribution statement
Mary Anti Chama: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Dorcas Osei-Safo: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.
Ivan Addae-Mensah: Conceived and designed the experiments.
Michael Wilson: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data.
"year": 2020,
"sha1": "1fea0614944ef033d368bef4969b6c9358bd92f8",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.heliyon.2020.e04460",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ec76e611c51ee4832d159db456fc45962f19bef",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
18F-THK5351 PET for visualizing predominant lesions of pathologically confirmed corticobasal degeneration presenting with frontal behavioral-spatial syndrome
Abbreviations: bvFTD, behavior variant frontotemporal dementia; 11C-PiB, 11C-Pittsburgh compound B; CBD, corticobasal degeneration; DaT, dopamine transporter; 18F-FDG, 18F-fluorodeoxyglucose; FBS, frontal behavioral-spatial syndrome; 123I-FP-CIT, 123I-N-ω-fluoropropyl-2β-carboxymethoxy-3β-(4-iodophenyl)nortropane; MAO-B, monoamine oxidase-B; MRI, magnetic resonance imaging; PET, positron emission tomography; PSP, progressive supranuclear palsy; SBR, specific binding ratio; SPECT, single-photon emission computed tomography; SUV, standardized uptake value
Dear Sirs,
Clinical phenotypes of corticobasal degeneration (CBD) vary and typically present as one of four phenotypes: corticobasal syndrome (CBS), frontal behavioral-spatial syndrome (FBS), nonfluent/agrammatic variant of primary progressive aphasia, and progressive supranuclear palsy syndrome [1]. FBS is the third most common phenotype of pathologically verified CBD, accounting for approximately 14% of CBD patients [1]. Conversely, the pathological features underlying the clinical phenotype of behavior variant frontotemporal dementia (bvFTD) also vary, and approximately 9% of these patients have been pathologically verified as CBD [2]. Here, we present an autopsy-confirmed case of CBD presenting with FBS who underwent positron emission tomography (PET) with 18F-THK5351, which visualized the predominant lesion of the frontal lobes associated with the clinical phenotype.
A 72-year-old right-handed male developed gait slowing. Three years later, he lost his way when climbing mountains and was found lying on the ground. Two months later, he lost his way again in his neighborhood, and eventually his wife started accompanying him when he went outside. He developed urinary incontinence, masked face, decreased speech output, visual hallucinations, and abnormal behaviors such as nocturnal wandering, pica, and apraxia. His condition was evaluated at an outpatient neurology clinic, and neurological examination revealed a masked face, bradykinesia, and rigidity of the neck and left-sided upper limb. His parkinsonism showed no therapeutic response to levodopa. His abnormal behavior increased with time, and he showed stereotyped behavior, such as repeatedly cleaning a room or washing his body. He visited our hospital for further evaluation of his neuropsychiatric symptoms at the age of 75.
Neurological examination revealed left-sided predominant parkinsonism, apraxia, and perseveration. He showed severe cognitive impairment, scoring 19/30 on the Mini-Mental State Examination and 3/18 on the Frontal Assessment Battery. Brain magnetic resonance imaging (MRI) at the age of 75 revealed right-sided and frontal lobe dominant atrophy (Fig. 1A, B), which was verified using the voxel-based specific regional analysis system for Alzheimer's disease [3] (Fig. 1C). Dopamine transporter single-photon emission computed tomography with 123I-N-ω-fluoropropyl-2β-carboxymethoxy-3β-(4-iodophenyl)nortropane, obtained 4 months before he visited us, showed diffusely reduced uptake in the bilateral striata with right-sided predominance (Fig. 1D). He underwent 18F-THK5351 PET (Fig. 1E-G), 18F-fluorodeoxyglucose (18F-FDG) PET (Fig. 1H), and 11C-Pittsburgh compound B (11C-PiB) PET at the age of 75. The Z score map of 18F-THK5351 was superimposed on a spatially normalized T1-weighted image (Fig. 1G). The Z scores were calculated as (mean voxel value of 30 cognitively unimpaired subjects − patient voxel value) / standard deviation of the 30 cognitively unimpaired subjects, with the cerebellar cortex as reference. 18F-THK5351 accumulated in the frontal lobes with right-sided predominance, as well as in the parietal lobes (Fig. 1E-G). Hypometabolism of both the frontal and parietal lobes with right-sided predominance was detected by 18F-FDG PET (Fig. 1H). Amyloid deposition was not identified by 11C-PiB PET (data not shown). He was clinically diagnosed with bvFTD with underlying tau pathology, that is, frontotemporal lobar degeneration-tau, based on the prominent frontal symptoms with spatial impairment, parkinsonism, and neuroimaging findings including 18F-THK5351 PET. He died of aspiration pneumonia at the age of 76. An autopsy was performed after consent was obtained from his family.
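The voxelwise Z score map described above is a simple calculation; the sketch below illustrates it with made-up arrays (the function name, control count, and values are illustrative, not taken from the authors' pipeline).

```python
import numpy as np

def z_score_map(patient, controls):
    """Voxelwise Z map: (control mean - patient value) / control SD.

    patient  : 1-D array of voxel values (e.g., uptake normalized to cerebellar cortex)
    controls : 2-D array, one row per cognitively unimpaired subject
    """
    mean_c = controls.mean(axis=0)
    sd_c = controls.std(axis=0, ddof=1)
    return (mean_c - patient) / sd_c

# toy example: 30 control subjects, 4 voxels
rng = np.random.default_rng(0)
controls = rng.normal(1.0, 0.1, size=(30, 4))
patient = np.array([1.0, 1.2, 0.8, 1.5])
z = z_score_map(patient, controls)
print(z)  # with this formula, voxels with higher uptake than controls get negative Z
```

Note that, as defined in the text, increased tracer uptake in the patient relative to controls yields a negative Z value.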
The brain weighed 1402 g and showed cerebral atrophy, especially of the frontal and temporal lobes (Fig. 1I). There was dilation of the Sylvian fissure and mild atrophy of the frontal operculum with right-sided predominance (Fig. 1J). Microscopic assessment showed neuronal loss and an increased number of astrocytes in the cerebral cortex, especially in the frontal lobes, accompanied by abnormal rarefaction of tissue (Fig. 1K, L). Immunohistochemistry of the frontal lobes using AT8 antibody revealed phosphorylated tau-positive astrocytic plaques, pretangles, coiled bodies, and threads (Fig. 1M-O), with right-sided predominance (Fig. 1P, Q). Staining for RD4 and RD3 revealed phosphorylated 4-repeat but not 3-repeat tau positivity (Fig. 1R, S). These tau-related pathological changes in the cortex were prominent in the anterior part of the frontal lobes with right-sided predominance. They were also found in the substantia nigra, subthalamic nucleus, thalamus, globus pallidus, putamen, nucleus basalis of Meynert, locus coeruleus, and inferior olivary nucleus. Western blot analysis of sarkosyl-insoluble tau from the brain showed a major doublet of 68 and 64 kDa with predominant ~37 kDa fragments (Fig. 1T) [4]. These pathological and biochemical findings were consistent with CBD. He was finally diagnosed with CBD-FBS [1].
Various tau PET tracers are under rapid development to visualize abnormal tau accumulation, both for diagnosing tauopathies and for evaluating the therapeutic effects of new drugs against them. 18F-THK5351, a first-generation tau tracer, was originally developed to detect abnormal tau pathology [5]. However, recent studies revealed that 18F-THK5351 also binds to monoamine oxidase-B (MAO-B), which is highly expressed in astrocytes, as off-target binding [6,7]. Through this binding affinity to MAO-B, 18F-THK5351 visualizes astrogliosis reflecting neurodegenerative changes in various neurological diseases other than tauopathies, such as amyotrophic lateral sclerosis [8].
The main clinical phenotypes of our case imply that the core lesions were in the frontal lobes. Conventional morphological and functional imaging with brain MRI and 18F-FDG PET showed atrophy and hypometabolism, respectively, with right-sided predominance. Consistent with these imaging results, 18F-THK5351 accumulated in similar lesions, which concurred with a previous study [9]. In addition, these abnormalities obtained from in vivo imaging studies were verified through pathological evaluation of tau-positive deposition and astrogliosis in the frontal lobes with right-sided predominance. Despite the neuropathological features of bvFTD being heterogeneous [2], previous studies on 18F-THK5351 PET in bvFTD patients lack pathological confirmation. To our knowledge, this is the first clinicopathological case report demonstrating 18F-THK5351 accumulation in the frontal lobes of a CBD-FBS patient in which the presence of tau-related neurodegenerative change was pathologically verified.

Fig. 1 Neuroradiological and histopathological findings and western blot analysis. A-C Brain magnetic resonance imaging demonstrates right-sided and frontal predominant atrophy (A, B). The voxel-based specific regional analysis system for Alzheimer's disease reveals the right-sided predominant atrophy both in white matter (C, left column) and in gray matter (C, right column), being 2 standard deviations lower than the average volume of cognitively unimpaired elderly. D Dopamine transporter SPECT with 123I-FP-CIT shows diffusely reduced uptake in the bilateral striata with right-sided predominance. E-G 18F-THK5351 PET demonstrates abnormal accumulation in the frontal lobes with right-sided predominance, as well as the parietal lobes. The 18F-THK5351 PET image was superimposed on brain computed tomography of the patient (F). The Z score of 18F-THK5351 compared with 30 cognitively unimpaired control subjects was superimposed on a spatially normalized T1-weighted image (G). H 18F-fluorodeoxyglucose PET shows hypometabolism of both the frontal and parietal lobes with right-sided predominance. I, J The macroscopic appearance of the whole brain and right brain. There is cerebral atrophy of the frontal and temporal lobes (I). There is dilation of the Sylvian fissure and mild atrophy of the frontal operculum in the right brain (J). K, L In the right frontal lobe, hematoxylin-eosin staining shows rarefaction of tissue (K), and immunohistochemistry using anti-vimentin antibody reveals vimentin-immunoreactive astrocytes along the corticomedullary junction, reflecting astrogliosis (L). M-O Immunohistochemistry of the frontal lobes using AT8 antibody reveals phosphorylated tau-positive astrocytic plaques (M), pretangles (N), and coiled bodies (O), accompanying tau-positive threads. P, Q Immunoreactivity for AT8 antibody in the frontal lobes is predominant on the right side (P) compared to the left side (Q). R, S Immunohistochemistry shows that tau-positive deposition is immunoreactive for anti-4R (RD4) (R), but not for anti-3R (RD3) (S). T Western blot analysis of sarkosyl-insoluble tau from the brain probed with T46 antibody shows a major doublet of 68 and 64 kDa, corresponding to hyperphosphorylated full-length 4-repeat tau isoforms. Note that the prominent C-terminal tau fragments of ~37 kDa in this case are similar to those of CBD, but not PSP. Bars, 5 cm (I, J), 100 µm (K-M, R, S), 50 µm (N, O), 1 mm (P, Q). CBD corticobasal degeneration, 123I-FP-CIT 123I-N-ω-fluoropropyl-2β-carboxymethoxy-3β-(4-iodophenyl)nortropane, PET positron emission tomography, PSP progressive supranuclear palsy, SBR specific binding ratio, SPECT single-photon emission computed tomography, SUV standardized uptake value
However, our study was unable to determine whether the 18F-THK5351 accumulation derived from tau accumulation, an increased number of MAO-B-positive astrocytes, or both, because of the limited ability of 18F-THK5351 to discriminate between tau and MAO-B. Thus, pathologically, radiologically, and biochemically validated MAO-B PET tracers, which can more sensitively visualize neurodegeneration involving astrogliosis, are required [10]. Our findings support the use of 18F-THK5351 PET as a marker closely associated with tau-related neurodegeneration.
In conclusion, 18F-THK5351 PET visualizes abnormal tau-related neurodegeneration reflecting clinicopathological severity in CBD-FBS. Our case highlights that 18F-THK5351 PET can be a potent technique for visualizing the tau-related predominant lesions of CBD-FBS and for discriminating the underlying pathologies of bvFTD.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2022,
"sha1": "43f1e3f7ddca44127f15dece5a52c957f6b23192",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00415-022-11121-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "6ecc8fd7b05cca71a01a7f05b1ffe9c12afcfb2b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Parental Vaccine Preferences for Their Children in China: A Discrete Choice Experiment
Abstract: Background: Vaccination is one of the most cost-effective health investments to prevent and control communicable diseases. Improving the vaccination rate of children is important for all nations, and for China in particular since the advent of the two-child policy. This study aims to elicit the stated preferences of parents for vaccination following recent vaccine-related incidents in China. Potential preference heterogeneity was also explored among respondents. Methods: A discrete choice experiment was developed to elicit parental preferences regarding the key features of vaccines in 2019. The study recruited a national sample of parents from 10 provinces who had at least one child aged between 6 months and 5 years old. A conditional logit model and a mixed logit model were used to estimate parental preferences. Results: A total of 598 parents completed the questionnaire; among them, 428 respondents who passed the rationality tests were analyzed. All attributes except for the severity of diseases prevented by vaccines were statistically significant. The risk of severe side effects and the protection rate were the two most important factors explaining parents' decisions about vaccination. The results of the mixed logit model with interactions indicate that fathers and rural parents were more likely to vaccinate their children, and children whose health was not good were also more likely to be vaccinated. In addition, parents who were not more than 30 years old had a stronger preference for efficacy, and well-educated parents preferred imported vaccines with the lowest risk of severe side effects. Conclusion: When deciding about vaccinations for their children, parents in China are mostly driven by vaccination safety and vaccine effectiveness, and were not affected by the severity of diseases. These findings will be useful for increasing the acceptability of vaccination in China.
Introduction
Vaccination is one of the most cost-effective ways to avoid disease. Currently, it can prevent 2-3 million deaths per year, and a further 1.5 million could be protected if the global coverage of vaccinations was improved [1]. Routine vaccination for children is one of the most successful strategies to ease the burden of infectious diseases [2]. Improving the vaccination rate of children is important for all nations, and for China in particular since the advent of the two-child policy.
There is still a large gap between actual vaccination coverage and the goal [3]. In China, several vaccines are mandatory for children and covered by China's National Immunization Program (NIP); e.g., the diphtheria-tetanus-acellular-pertussis (DTaP) vaccine, measles-mumps-rubella (MMR) vaccine and hepatitis B vaccine. There are also recommended but not mandatory vaccines, such as rotavirus vaccine, seasonal influenza vaccine and pneumococcal conjugate vaccine. Mandatory vaccines are free to the public and government-funded, whilst recommended vaccines are normally paid for by parents. The current uptake of recommended vaccines has been estimated to be low in China, at about 6% and 0.7% for the seasonal influenza vaccine and the 13-valent pneumococcal conjugate vaccine, respectively [4,5]. A report authored by the World Health Organization (WHO) has identified vaccine hesitancy as one of the 10 threats to global health in 2019, impeding the progress made in tackling vaccine-preventable diseases. Vaccine hesitancy, which is defined as the "delay in acceptance or refusal of vaccination despite the availability of vaccination services" [6], has a direct influence on the vaccination rate, and a quarter to a third of US parents were affected by this [7,8].
The reasons for vaccine hesitancy are complex. The literature suggests that the key factors that contribute to vaccine hesitancy include the unnaturalness of vaccination [7], heuristic thinking [9] and a loss of public confidence [10]. In China, several vaccine incidents-e.g., the Changchun Changsheng vaccine incident and Shandong illegal vaccine sales-have occurred in the past few years, which may have resulted in a loss of public confidence in vaccines. The Changchun Changsheng vaccine incident involved (i) manufacturing and selling substandard DTaP vaccines, and (ii) the illegal production of freeze-dried rabies vaccines [11], while in the Shandong illegal vaccine sales incident, questionable vaccines (i.e., produced by licensed manufacturers but not transported or stored properly) were sold to 24 provinces and cities without approval [12]. A study conducted in 2018 found that a majority of the respondents held negative attitudes towards vaccines after the Changchun Changsheng vaccine incident [13]. Another study evaluating the impact of Shandong illegal vaccine sales arrived at a similar conclusion [12].
In this context, it is important to understand parental attitudes and preference for vaccines and to explore key factors associated with parents' decisions to vaccinate their children. A discrete choice experiment (DCE) technique based on random utility theory has been widely applied to study vaccine preference globally, and substantial heterogeneities exist among the findings [14][15][16][17][18]. In mainland China, very limited DCE studies have been conducted regarding the preference for specific or general vaccines, and they have all been constrained to a single province [19,20]. This is the first DCE study to target a national sample with respondents recruited from 10 provinces in China.
The present study had two objectives: (i) to provide insights into the importance of determinants in parental vaccination choices and (ii) to explore preference heterogeneity among parents with different characteristics.
Discrete Choice Experiment
The discrete choice experiment has been increasingly used in health economics and health service research as a method to elicit participants' preferences. DCE can also be used to estimate participants' willingness to pay as well as to predict participation rates given a set of characteristics of goods or services [21,22]. This approach is derived from random utility theory, where participants would choose the option with the highest utility from the alternatives presented [23]. The DCE design and analysis were conducted following the checklist and reports of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Conjoint Analysis Task Forces [24][25][26].
Study Population and Sample Size
To ascertain a national parental preference, a multistage sampling design was used. Firstly, 10 provinces/municipalities were selected based on the Division of Central and Local Financial Governance and Expenditure Responsibilities in the Healthcare Sector released by the State Council in 2018, which divided the 31 provinces/municipalities in mainland China into five layers. According to their geographical location and level of economic development, 10 provinces/municipalities were randomly chosen to represent the eastern region (Shandong and Shanghai), western region (Gansu and Chongqing), southern region (Yunnan and Guangdong), northern region (Beijing and Jilin) and central region (Henan and Jiangxi) ( Figure 1). Next, except for three municipalities (Beijing, Shanghai and Chongqing), in each of the other seven provinces, one provincial capital and one non-provincial capital city were chosen to balance the regional disparity. Finally, parents with at least one child aged between 6 months and 5 years old were invited to participate in this survey at community healthcare centers or stations. Only one participant per household could take part in this study.
The guidelines proposed by Johnson and Orme suggested that the sample size can be calculated using the equation N > 500 c/(t × a), where c indicates the number of analysis cells, t refers to the number of choice tasks and a is the number of alternatives [27]. In the main-effects only design, c is equal to the largest number of levels among different attributes in the DCE. In our study, the corresponding values for c, t and a are 4, 10 and 2, respectively; therefore, N can be estimated as (500 × 4)/(10 × 2) = 100. Considering the potential regional heterogeneity, a minimum of 100 respondents would need to be recruited in each region [22,28]. In practice, we intended to survey 60 parents in each province and 120 parents in each region.
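The Johnson and Orme rule of thumb above can be computed directly; the numbers plugged in below are the study's own (c = 4, t = 10, a = 2), while the function name is ours.

```python
def dce_min_sample(c, t, a):
    """Johnson & Orme rule of thumb for DCE sample size: N > 500 * c / (t * a).

    c: largest number of levels among attributes (main-effects design)
    t: number of choice tasks per respondent
    a: number of alternatives per task
    """
    return 500 * c / (t * a)

n = dce_min_sample(c=4, t=10, a=2)
print(n)  # 100.0 -> at least 100 respondents per region
```

With the study's design (10 tasks of 2 alternatives and a maximum of 4 levels per attribute), the minimum works out to 100 respondents, matching the paper's per-region recruitment target.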
Survey Development
Based on previously published literature regarding DCE studies on vaccination [14,16,17,29,30], 11 attributes were initially identified. To assess the appropriateness of the attributes and levels to be included and to further reduce the number of attributes in our DCE, four experts with several years of vaccination experience were interviewed face-to-face in Jinan Maternity and Childcare Hospital. Two focus groups (n = 12) were also conducted. One focus group included four parents only, and the other contained a vaccinologist, three parents and four health economics/DCE experts. They were asked to review and rank a list of potential attributes. Finally, six attributes were selected for this study (Table 1); the out-of-pocket cost attribute, for example, had three levels: 0 Yuan, 150 Yuan and 300 Yuan. A D-efficient design was developed using Ngene software (www.choice-metrics.com), which yielded 60 choice sets that were further divided into six blocks to reduce respondents' cognitive burden. To check for internal consistency, one choice set in each block was duplicated and was not excluded in the analysis. Each participant received one block randomly and was asked to answer 11 choice sets.
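The blocking and internal-consistency setup described above (60 choice sets, 6 blocks, one duplicated set per block, 11 tasks per respondent) can be sketched as follows; the assignment of sets to blocks is illustrative only, since an actual D-efficient design from Ngene determines which sets belong together.

```python
import random

n_sets, n_blocks = 60, 6
choice_sets = list(range(1, n_sets + 1))  # stand-ins for the 60 designed choice sets

# split the 60 sets into 6 blocks of 10 (round-robin assignment as a placeholder)
blocks = [choice_sets[i::n_blocks] for i in range(n_blocks)]

# duplicate one randomly chosen set per block for the internal-consistency check,
# giving each respondent 11 tasks
rng = random.Random(42)
blocks = [b + [rng.choice(b)] for b in blocks]

print([len(b) for b in blocks])  # [11, 11, 11, 11, 11, 11]
```

A respondent's answers to the duplicated pair can then be compared; mismatches flag inconsistent responders, as in the paper's 74% pass rate.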
A pairwise two-stage response DCE design was used to maximize the information gained from the respondents [31]. In the first stage, the participants were forced to choose between two alternative vaccination profiles. Then, they were asked to confirm whether they would actually have their children vaccinated with the option they preferred in the first stage. An example of a final choice set is shown in Table 2.
In addition to DCE questions (which were presented in a hardcopy questionnaire), the participants' and their children's socio-demographic characteristics were also collected using an iPad. Before completing DCE questions, respondents were asked to rate the importance of six attributes. A pilot was conducted among 15 parents in Beijing and Jinan in July 2019 to examine the acceptability, comprehensibility and validity of the experiment. A few modifications were implemented based on feedback from the pilot.
Data Collection and Analysis
The survey was conducted between August and October 2019. Data were collected through one-on-one face-to-face interviews with parents waiting for a routine vaccination for their children or remaining for observation after vaccination. Parents are required to take their children to vaccination sites for mandatory vaccines, and high vaccination rates have been achieved for these vaccines; e.g., rates above 95% for the DTaP and hepatitis B vaccines [32]. Thus, the potential sample selection bias of this recruitment strategy was low. Before enrolment, the survey conditions were explained in detail by interviewers who had received specific training from the research team. All participants signed an electronic informed consent form ahead of enrolment, and all responses were anonymous. The study received ethics approval from the Peking University Ethics Committee (IRB00001052-19076).
Responses to the hardcopy DCE questionnaire were double-entered into EpiData 3.1 software and then matched with the socio-demographic characteristics obtained from the iPad for processing and analysis. Descriptive statistics were reported first. Student's t-test, the χ² test and the Wilcoxon rank-sum test were used to compare means and proportions between subgroups, depending on the nature of the data.
Regarding the DCE data, the personal out-of-pocket cost was coded for linearity, and the effect of the severity of diseases prevented by vaccines was likely to be non-linear. When the latter was coded as three separate parameters, the results were similar but the model performance worsened considerably (see Supplementary File Table S1). Thus, we decided to treat this attribute as a continuous variable, and the remaining attributes were coded as dummy variables. The goodness of model fit was guided by the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) [22,33]. An initial exploratory analysis was conducted using a conditional logit model (see Supplementary File Table S2), in which preferences were assumed to be homogeneous across respondents. Mixed logit or latent class models are commonly used to explore preference heterogeneity [23]. We employed the mixed logit model, where preferences were assumed to follow a normal distribution and the coefficient of each attribute level was composed of a mean as well as a standard deviation [26]. In addition, observed variables such as age, relationship with the child, education level and working status were used to estimate their influence on preferences by including a series of interaction terms with attribute levels. All statistical analyses were conducted using Stata 12.1 software.
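Under random utility theory, the conditional logit used in the exploratory analysis assigns each alternative a choice probability proportional to the exponential of its deterministic utility. A minimal sketch follows; the coefficients and attribute codings are made up for illustration and are not the estimates reported in Table 4.

```python
import numpy as np

def clogit_probs(X, beta):
    """Conditional logit choice probabilities for one choice set.

    X    : (n_alternatives, n_attributes) design matrix (dummy/continuous coded)
    beta : vector of attribute coefficients
    """
    v = X @ beta              # deterministic utilities
    ev = np.exp(v - v.max())  # subtract the max for numerical stability
    return ev / ev.sum()

# two hypothetical vaccine profiles coded as
# [highest protection rate (dummy), lowest side-effect risk (dummy), cost / 100 Yuan]
X = np.array([[1.0, 1.0, 3.0],    # high protection, low risk, 300 Yuan
              [0.0, 0.0, 0.0]])   # reference profile, free
beta = np.array([0.6, 1.7, -0.2])  # illustrative signs matching the paper's findings
p = clogit_probs(X, beta)
print(p)  # probabilities sum to 1; the safer, more protective profile dominates
```

The mixed logit extends this by drawing beta for each respondent from a normal distribution, which is what allows the standard deviations in Table 4 to capture preference heterogeneity.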
Study Population
In total, 598 parents from 10 provinces participated, and 18 parents were excluded from the analysis for failing to complete the majority of the questionnaire. Among the remaining 580 parents, the mean age was 31 years (standard deviation: 0.21 years), and the mean age of their children was 2 years. The majority of respondents were mothers (82%); more than half resided in an urban area (61%), which was close to the proportion (60%) of the urban population in China [34], had a Bachelor's degree or above (56%), and were in employment (67%).
Regarding the internal consistency check within the DCE section, 428 (74%) respondents passed the test. There were no significant differences in socio-demographic characteristics between respondents who passed and those who failed the test, except for the gender and health status of their children; for more details, see Table 3.
Importance Rating
The results of the importance rating are presented in Figure 2; the respondents were asked to rank the attributes from most to least important. Overall, parents attached the greatest importance to the severity of diseases prevented by vaccines, followed by the protection rate and the risk of severe side effects. The out-of-pocket cost and the location of the vaccine manufacturer were less important.
Results of DCE Analysis
Regarding the DCE analysis, those who passed the internal consistency test were included in the main analysis, and the mixed logit estimates are presented in Table 4 [30,35]. The full-sample results are comparable with the main analysis and are shown in Table S3. The coefficient of non-vaccination was included to account for the unconditional choice scenario allowing for opting out. This coefficient was significantly negative, suggesting that, on average, parents preferred to vaccinate their children. The estimated preferences for the attributes were consistent with our expectations, except for the severity of diseases prevented by vaccines, which was statistically insignificant.
Parents were more likely to choose vaccines with a higher protection rate, a longer duration of protection against the illness, and a lower risk of severe side effects. The negative coefficient for the location of the vaccine manufacturer suggested that domestic vaccines were preferred to imported vaccines. The negative coefficient of the out-of-pocket cost attribute indicated that cheaper vaccines were preferred. The largest utility change, 1.667, came from reducing the risk of severe side effects from highest to lowest, followed by the change to the highest protection rate. Reducing the risk of severe side effects from high to low yielded 2.6 (1.667/0.642) times as much utility as increasing the duration of protection from 1 to 10 years. Compared to vaccination safety, duration was thus less important.
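The relative-importance figure quoted above is simply a ratio of the utility changes associated with two attribute moves; a quick check using the coefficients cited in the text:

```python
# utility gains quoted in the text
risk_high_to_low = 1.667   # lowering severe side-effect risk from highest to lowest
duration_1_to_10 = 0.642   # extending protection duration from 1 to 10 years

ratio = risk_high_to_low / duration_1_to_10
print(round(ratio, 1))  # 2.6 -> safety change worth ~2.6x the duration change
```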
Some estimated standard deviations were significant, indicating the existence of preference heterogeneity. Socio-demographic characteristics were interacted with attribute levels to examine preference heterogeneity (Table 5). For the non-vaccination interaction terms, significantly negative coefficients indicated that the corresponding subgroup was less likely to choose non-vaccination (i.e., more likely to vaccinate their children). We found that fathers (β = −1.576) and rural parents (β = −1.283) preferred to vaccinate their children, and children whose health was not good were also more likely to be vaccinated. For the other interaction terms, significantly positive coefficients suggested that the attributes were more important; well-educated parents preferred imported vaccines (β = 0.468) and vaccines with the lowest risk of severe side effects (β = 0.445). The highest protection rate was valued more highly by parents who were not more than 30 years old, and fathers had a stronger preference for a longer protection duration. Other observed characteristics, including the working status of parents, whether the parents had a single child, and the gender of children, had no significant influence.

Note: 1. CI — confidence interval. * — the multiplicative relationship which represents the interaction effect of two variables. All attributes except for cost and severity of diseases prevented by vaccines were coded as dummy variables. 2. Interaction terms were treated as fixed-effect variables, and the others as random-effect variables.
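To illustrate how interaction terms shift the utility of the opt-out alternative, the sketch below combines a non-vaccination main effect with the two interaction coefficients reported in the text (−1.576 for fathers, −1.283 for rural parents). The main-effect value −0.5 is a hypothetical placeholder, since the text only states that it was significantly negative:

```python
def nonvax_utility(beta_nonvax, is_father, is_rural):
    """Utility contribution of the non-vaccination alternative for one parent.

    beta_nonvax is a hypothetical main effect; the interaction coefficients
    -1.576 (father) and -1.283 (rural) are taken from the reported estimates.
    """
    u = beta_nonvax
    if is_father:
        u += -1.576
    if is_rural:
        u += -1.283
    return u

# A rural father accumulates a much more negative non-vaccination utility
# (i.e., is more likely to vaccinate) than an urban mother with the same
# placeholder main effect.
print(round(nonvax_utility(-0.5, True, True), 3))   # -> -3.359
print(nonvax_utility(-0.5, False, False))           # -> -0.5
```

A more negative opt-out utility translates, in the logit choice probability, into a lower probability of choosing non-vaccination.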
Discussion
This study reported the results of a DCE study into parental vaccine preferences for their children. Some previous DCE studies in vaccines have been constrained [19,20,36,37] to one particular province or special administrative region in China. To the best of our knowledge, this is the first study to survey parents nationwide to explore their vaccine preferences and examine whether preference heterogeneity existed among participants with various characteristics using discrete choice experiments.
Our study found that a minority (12.1%) of parents chose not to vaccinate their children in the secondary tasks. The significantly negative coefficient of non-vaccination in the mixed logit model confirmed this finding. The preference for non-vaccination has mixed support in the literature. Although most studies found the same result [38][39][40], a study of parents in the Netherlands found that, on average, parents preferred not to vaccinate their children against human papillomavirus [41].
Among all attributes, the risk of severe side effects and the protection rate of the vaccine had the largest effect on vaccination choice. These findings were in line with other vaccine DCE studies. In a study of HPV vaccines in the US in 2010, greater efficacy was the most desired feature and was strongly valued by mothers [40]. A DCE study conducted in the Philippines found that efficacy was valued most as a factor when deciding to vaccinate with leptospirosis vaccines [42]. In a study of pediatric influenza vaccine, parents placed more importance on the risk of side effects [43]. In addition, other studies found that willingness to vaccinate was closely related to vaccination safety and efficacy [38,44]. In China, the first Vaccine Administration Act voted by the Standing Committee of the National People's Congress in 2019 stated that a compensation system should be implemented for abnormal responses to vaccination, and the bearer of compensation costs depended on whether the vaccine involved was covered by the government-funded Expanded Program on Immunization [45]. Meanwhile, a quality analysis report in the case of abnormal reactions to vaccines should be submitted to the Medical Products Administrations [45]. However, the database is not publicly available, which might be one reason why safety was the decisive factor. The findings suggest that the safety and efficacy of vaccines would be key characteristics influencing parental vaccination decision-making.
The out-of-pocket cost was found to be less important than other significant attributes. Even though several previously published studies indicated that cost was assigned great importance when deriving preferences [30,46,47], the results are incomparable with our study due to differences in the targeted vaccines and targeted population. Another DCE study conducted in China found that the cost was not associated with a stated preference for a vaccine [20]. In China, common vaccines are affordable, which could be supported by the comparison between the household income and the out-of-pocket cost of vaccines shown by the data in our study; e.g., the highest out-of-pocket cost accounts for about 2% of monthly income. This finding suggests that changing the price may not be an effective or optimal method to improve vaccination coverage.
However, the severity of diseases prevented by vaccines (in terms of mortality) was not a significant contributor to parental preference for a vaccine, which was inconsistent with the result of the importance rating. Some DCE studies in other countries obtained contrasting results, showing that disease severity played an important role when respondents chose a vaccination profile [15,48,49]. An explanation for the results in our study could be that parents lacked medical knowledge and were less sensitive to the differences in levels of disease severity as a result of the current status of the immunization service in China. Insensitivity to disease severity could also be caused by the larger number of attributes included, meaning that participants ignored this attribute. This finding also suggests that it may not be effective to stress the severity of targeted diseases when health workers recommend vaccines to parents.
Somewhat surprisingly, the results of our study show that, a year after the Changchun Changsheng vaccine incident, a domestic vaccine was preferred to an imported vaccine. A similar finding was obtained from a DCE study conducted in Shanghai a year before the Changchun Changsheng vaccine incident [20], even though both studies varied in terms of study populations and study settings. The reasons that parents preferred domestic vaccines could be that domestic vaccines were thought to be more effective [50] and more accessible. The other potential reason is that the regulatory environment is more stringent. Indeed, a public consultation was facilitated after the incident in 2018 [51], and the first Vaccine Administration Act in 2019 was adopted, which aimed to tighten vaccine regulation [45]. Further studies are warranted to better understand the influence of the manufacturer's location in vaccine preference.
Concerning preference heterogeneity, we found that vaccination preference differed significantly according to the type of dwelling place, the relationship with children, and the health state of children. In addition, other observed variables (age and education level) had a significant influence on the preference for attribute levels. A study from Poland and Hungary found that working mothers placed less weight on effectiveness and illness severity than non-working mothers [15]. Veldwijk et al. also found that respondents with a lower education level and lower health literacy attached more importance to a vaccine with higher effectiveness [52]. Exploring preference heterogeneity for vaccines would be meaningful and helpful for policy-makers to take pertinent measures in different groups.
The present study had several limitations. First, there may be some omitted factors concerning parental preference for vaccination. However, the process for identifying and selecting attributes has followed the recommended guidelines. Second, recent guidance recommends the use of natural frequencies to present risks. Nonetheless, we opted to use terms such as "low", "moderate" and "high" to describe this attribute, and participants may have interpreted the levels differently, which might have caused an estimation bias. Finally, the results of the DCE and importance rating are not entirely consistent. This suggests that it would be better to reduce respondents' difficulty in understanding the meaning of attributes by using figures.
Conclusions
This study used the well-established DCE technique to investigate vaccine preference among parents in China, and it also revealed preference heterogeneity among respondents. On average, parents were more likely to vaccinate their children. A vaccine with high effectiveness and a low risk of severe side effects would be more desirable and decisive, while the severity of targeted diseases had little effect on parents' decisions. Significant preference heterogeneity was identified among respondents. The findings from this study will help policymakers implement more effective policies to improve the vaccine uptake rate in China.
The Role of Music in The Shadow Play "Hacivat and Karagöz"
The shadow play Hacivat and Karagöz has been an important part of Traditional Turkish Theatre for centuries. The play, which still preserves its popularity, has long been a significant form of entertainment. In this art form, specially designed puppets are projected onto a screen from behind a lighted curtain and voiced by the performer; depending on the local features of the characters and the content of the story, it draws on various types of music and instruments, especially Turkish folk music and Turkish classical music. In this study, the role of music, which is thought to form an important part of the shadow play, was researched, and the music and instruments used were analysed. In particular, an attempt was made to reveal the effect and role of music within the play.
Introduction
Shadow play is based on the projection of the shadows of puppets, which are made of leather, onto a white curtain by light coming from behind them. The Karagöz curtain is generally made of cotton batiste and has flowers on the edges. Besides, behind the curtain and on the ground are shelves called destgah on which candles are placed. Puppets are moved by using 60 cm sticks. The action of playing these puppets is called el peşrevi. Some pictures, relevant or irrelevant to the show, are reflected on the curtain before the play, and these are called göstermelik. Karagöz figures are composed of figurative drawings based on abstraction. The dominating colors are red on the Karagöz figure and green on the Hacivat figure. There are no decorations on the Karagöz curtain to indicate the setting. Therefore, the existence of a setting is figured out only from the words of the puppets or sometimes from little descriptions symbolizing the setting. Karagöz is always on the right side of the curtain and Hacivat is on the left. If there are other characters to be used in the play, they enter and leave the curtain from the left side, where Hacivat stays.
In accordance with their appearances, places like tents, mountains, rocky or bare lands and fields are situated on Karagöz's side, and houses and places of more luxury are placed on Hacivat's side. Karagöz figures are called puppets. They are generally made of calf, cattle or buffalo leather. Puppets are produced by an old hand journeyman or by the Karagözcü, the puppeteer of the play, himself. Karagöz puppeteers generally form a crew of five people. Among them are the Hayali or Hayalbaz (meaning image creator), who is the master puppeteer; the Çırak, the helper of the master puppeteer; the Sandıkkar, the assistant of the Çırak; the Yardak, who sings songs; and the Dayrezen, who plays the tambourine (Tekerek, 2008).

Shadow play is a kind of art indigenous to eastern cultures, and different sources give different information about its origin. One source states that it first appeared in China before Christ; according to another, it originated in India, passed to Java in the 4th and 5th centuries, and spread from Java to the western world. There is no definite knowledge about when the shadow play technique was adopted and performed by Turkish society. According to one belief, it was transferred from the Chinese to the Mongolians, then to the Turks, and then, in parallel with the direction of Turkish military excursions, to the West. There are different rumors about when this technique came into existence in Turkish folk culture. The most common of these is that it took place during the construction of Ulu Cami (the Grand Mosque) during the reign of Sultan Orhan, between 1324 and 1362. The laborers who took part in the construction of the mosque gathered around the ironmaster Kambur Bali Çelebi (Karagöz) and the bricklayer Halil Hacı İvaz (Hacivat), both of whom worked on the construction, in order to listen to their cheerful conversation, causing the construction to slow down. Informed about the situation, the Sultan had both of them executed. Later on, however, the Sultan regretted having them killed and felt so sorry that, in order to cheer him up and comfort him, Şeyh Küşteri took off his headwear, called a sarık, which is like a curtain, lit a candle behind it to create shadows, took off his çarıks, a kind of sandal worn in those days, and animated the figures of Karagöz and Hacivat behind the curtain, repeating their cheerful conversation. From that day on, Karagöz and Hacivat plays started to be performed in different squares. Today, the Karagöz curtain is called Şeyh Küşteri square, and he is accepted as the father of the Karagöz shadow play. According to Metin And, shadow play entered Anatolia after Yavuz Sultan Selim, who conquered Egypt in 1517 and had Tumanbay, a Mameluke sultan, executed on Roda Island on the Nile River, watched an image creator depict the execution on a curtain and brought him to Istanbul, wishing his son Kanuni Sultan Süleyman to see the performance. Turks took the technique of shadow projection behind a curtain from Egypt at the beginning of the 16th century. At first, since Egyptian plays contained scenes irrelevant to each other, the same practice was applied in the first Turkish shadow plays. Moreover, there are no fixed characters in Egyptian shadow plays; therefore, the names of Hacivat and Karagöz are not mentioned much until the 16th century. Turkish creativity was added to this new play over time, giving it a very colorful and dynamic form, and after the play took its final shape it spread across the areas within the sphere of influence of the Ottoman Empire. This is how shadow play returned, in this new form, to Egypt, where the Turks had first encountered it. As a matter of fact, many travelers describing the shadow play in Egypt in the 19th century stated that it was the Karagöz shadow play, that it had been brought to Egypt by Turks, and that it was performed mainly in Turkish (And, 1985).
In the works of some Islamic Sufis, the image curtain is likened to the world, and human beings and other living creatures are likened to temporary images on the curtain. It is told that an invisible creator moves all the creatures in the universe just as the puppeteer behind the curtain moves the images in the play. There are many documents showing how prevalent shadow play was and that it was one of the most important Ottoman entertainment arts. According to information gathered from the works of local writers such as Evliya Çelebi and Naima, and from the journals and travel books of Europeans who were in Istanbul in that era, these plays, which were performed in cafes during Ramadan and at homes, palaces and residences during special occasions such as marriage, birth and circumcision ceremonies, were among the major entertainments of Ottoman society. Moreover, it is possible to see in local and foreign sources that shadow plays were among the favorite entertainments of the Ottoman palace and of public gatherings in the 19th century. According to these local sources, during Sultan Mahmut II's reign, Karagöz shadow plays were performed in eleven different places at night during the circumcision ceremonies of his sons. Also, some Karagöz puppeteers were admitted to the Mızıkayı Hümayun during the reigns of Sultan Abdülaziz and Abdülhamit II (Kudret, 1970). Thanks to its flexible structure, open to improvisation and to dealing with current events, the Karagöz and Hacivat shadow play became the most important means of satire of its time.
Although popular throughout history, Karagöz and Hacivat plays are no longer as popular as they once were, having lost much of their potency with the successive introduction of theatre, cinema and television as technology developed.
Method
In this study, a descriptive method has been used in order to reveal the role of music in the Karagöz and Hacivat shadow play.
According to Karasar (1982), the descriptive method is a type of research that describes an event or situation, whether in the past or the present, as it really is.
Findings
In this part of the study, sources and previous research on the subject have been examined, discussed in combination with expert opinions, and interpreted.
Our country has a rich folk theatre culture, and our own music has an important place in the Karagöz and Hacivat shadow play, the most notable example of this culture. Karagöz and Hacivat music was examined as a separate subject for the first time in the book Karagöz Musikisi (Karagöz Music) by Etem Ruhi Üngör, published by the Ministry of Culture Publications in 1989; it had not previously been the subject of research in any published book. However, the music used in Karagöz and Hacivat plays has gained a distinctive identity and created a typical type of humorous music. Üngör's study of Karagöz music in terms of musicology can be regarded as the first attempt on the subject. According to the results of this study, Turkish music is used with all of its features and diversity in the Karagöz and Hacivat shadow play.
With its compositions, Ağır (Heavy) Semai, Yürük (Turkish Nomads) Semai, Peşrev (Overtures), Saz (Instrument) Semai, Köçekler, folk songs and songs, Karagöz music includes all forms of Turkish music and hence becomes an inseparable part of the play. We can add our characteristic styles, the unsystematic beats, to this group. For instance, in one of the Karagöz plays, Kanlı Kavak (Bloody Poplar), the drum is played in 5-, 7- and 9-beat styles; and in another play, Tahmis (Extension), the Arab's playing the drum in a 7-beat style and Bebe Ruhi's playing it in a 9-beat style can be given as examples of unsystematic beats. Moreover, apart from the folk songs of Anatolia and Rumelia, Arabic and Jewish songs related to the play, tunes indigenous to Greek and Armenian culture, and Western musical forms such as waltzes, polkas and opera arias were used when needed. According to Üngör (1989), the texts, which gain a different character and generally depend on humor and philosophy, together with the compositions made parallel to these texts, create a special music style; this style can therefore be called Karagöz music. The accessible repertoire of Karagöz music relates mostly to 19th- and 20th-century Turkish music. When Karagöz music is studied, it is seen that it is built on a triple pattern composed of Semai, Gazel (Ode) and Hayal (Imaginary) songs.
Semai: It is one of the small forms of Turkish art music (Say, 1992). It carries three meanings in music, the first being that it is the name of a triple-time, triple-beat music style. Among the songs mentioned in Üngör's research, there are four songs called Semai, which are among the first songs performed at the beginning of the play and for which notes and records exist. The singer of these opening songs in the play is Hacivat. Unlike other imaginary songs, only the introduction and chorus parts of Semais are played and sung (Üngör, 1989).
Gazel: It is a form which is performed spontaneously, like Taksim (improvisation), in Turkish art music. Lyrics are generally chosen from among poems in the Gazel form. It has no fixed style; it is independent and without pattern. Exclamations such as 'ah, of, aman, eyyar, etc.' (exclamations of mourning in Turkish culture), which express sorrow, are common among the lyrics (Say, 1992). The music of the Gazelhan (gazel singer) gains value with his knowledge and talent. There is no indication of the Gazel form's existence among the Hacivat and Karagöz texts themselves. Gazels, which were once performed in a mode, are now sung like plain texts due to Karagöz puppeteers' lack of musical knowledge and Gazel-singing skills. Unlike the Semai sung by Hacivat, the Perde Gazeli is performed by Karagöz (Üngör, 1989).
Imaginary Songs: Unlike Semai and Gazel, they present a variety; generally, they are composed of songs and folk songs. Although they make up most of the plays, half of them have disappeared today. Though repeated in some plays, imaginary songs vary from play to play; the repeated ones are the special imaginary songs of particular Karagöz characters. According to the research conducted by Üngör, the most repeated imaginary song is 'Nice sevmeyeyim dostlar bir acayip dili var (she has such a good tone, how can't I fall in love with her?)', composed in the Şehnaz mode by Seyyit Nuh, a 17th-century composer. Moreover, according to the same research, in the repertoire of 211 songs, 61 are sung by Hacivat, the leading singer, 55 by Çelebi, 43 by Zenne, and 26 by Karagöz. In addition, when Karagöz music is studied in terms of mode, the most-used modes, from most to least, are: 21 songs in the Hicaz mode, 13 in the Uşşak mode, 12 in the Rast mode, 10 in the Hüseyni mode, and 10 in the Nihavend mode. Apart from these, examples of Muhayyerkürdi and other modes of Turkish music were used in the plays.
The instruments used in the Karagöz shadow play can be divided into two categories: 'instruments on the curtain' and 'instruments behind the curtain'. Instruments on the curtain are those used in classical Turkish music, such as the bağlama (an instrument with three double strings), the Karadeniz Kemençesi (a three-string instrument like a violin, indigenous to the Black Sea region of Turkey), the drum, the clarion, the kabak (a three-string instrument like the bağlama, but held vertically when played), the clarinet and the tambourine (http://turkgolge.sitemynet.com). Also, the cymbal, tongs with cymbals, and the nakkare (a small kettledrum used in mehter music) are used as curtain instruments. The most important instrument used behind the curtain is the tambourine; its use is a tradition for Hacivat and Karagöz puppeteers, and there are no plays in which it is not used. Because it has an important role, especially in the fights of Hacivat and Karagöz, in expressing the jokes, and in the entries and exits of characters, the tambourine is seen as an inseparable part of the play.
Besides, in order to contribute to the research, an interview on Karagöz music was carried out with Hayali Nevzat Çiftçi, who still performs shadow plays in Bursa, produces Hacivat and Karagöz puppets, and is also a master of shadow play figures and puppets. In the interview, Çiftçi's opinions about Karagöz music were the main topic. He stated that Karagöz music is a rich and special kind of music with its own characteristics, that a Hacivat and Karagöz play cannot be thought of without music, and that music plays a crucial role in the play. His remarks were found quite important for the research.
Conclusion and Discussion
As a result of this research, it was found that the number of written sources is very limited, that the performers of this play do it voluntarily, and that there are very few of them. According to the information gathered from the few sources that could be reached, and from the information and documents obtained from the Hacivat and Karagöz Museum in Bursa, music occupies an important place in the Hacivat and Karagöz shadow play and is an indispensable part of the plays. The plays also give place to many kinds of music, mainly Turkish music; in particular, music types that reflect the features of the characters and relate to the theme of the plays are used. All the instruments used in front of and behind the curtain are those of Turkish folk music and Turkish art music, yet in today's plays there is no instrument but the tambourine, and the music is now often played from CD players and computers. Even though the number of puppeteers and the size of the audience have decreased significantly, shadow plays have reached the present day through the efforts of volunteers. Technological advancement alone should not be regarded as the reason for the decline in the popularity of Karagöz plays, which run the risk of extinction and are among the most important pieces of Traditional Turkish Theatre. The westernization attempts starting from the 17th century showed their effects in the 20th century: the tradition of improvisation, the most important feature of traditional Turkish theatre, was given up, and written texts, as in western theatre, replaced it. Karagöz plays, dependent on written texts, could not keep up with the age and with cultural developments; as no new plays were written, the repeated presentation of the same plays was unable to attract the attention of the public. Karagöz plays can be as prestigious and widespread as they used to be only if the tradition of improvisation is revived. Otherwise, Karagöz plays, which are now performed by a handful of puppeteers, will end up in the pages of history books within the next decades. The most important responsibility for the preservation of Hacivat and Karagöz, an important value of our country's culture, lies with the Ministry of Culture and institutions of art education. The establishment of departments of Traditional Turkish Theatre within conservatories and faculties of fine arts would make these plays contemporary and increase the number of specialists in the field. In addition, the number of people who think that those who are keen on these plays should be supported by the state will increase. Especially, an increase in the number of museums like the Karagöz House, supported by the metropolitan municipality of Bursa, continuous performances in these places, and the opening of courses for interested individuals will carry this art form to future generations and stop its disappearance.
Impact of IQ on the diagnostic yield of chromosomal microarray in a community sample of adults with schizophrenia
Schizophrenia is a severe psychiatric disorder associated with IQ deficits. Rare copy number variations (CNVs) have been established to play an important role in the etiology of schizophrenia. Several of the large rare CNVs associated with schizophrenia have been shown to negatively affect IQ in population-based controls where no major neuropsychiatric disorder is reported. The aim of this study was to examine the diagnostic yield of microarray testing and the functional impact of genome-wide rare CNVs in a community ascertained cohort of adults with schizophrenia and low (< 85) or average (≥ 85) IQ. We recruited 546 adults of European ancestry with schizophrenia from six community psychiatric clinics in Canada. Each individual was assigned to the low or average IQ group based on standardized tests and/or educational attainment. We used rigorous methods to detect genome-wide rare CNVs from high-resolution microarray data. We compared the burden of rare CNVs classified as pathogenic or as a variant of unknown significance (VUS) between each of the IQ groups and the genome-wide burden and functional impact of rare CNVs after excluding individuals with a pathogenic CNV. There were 39/546 (7.1%; 95% confidence interval [CI] = 5.2–9.7%) schizophrenia participants with at least one pathogenic CNV detected, significantly more of whom were from the low IQ group (odds ratio [OR] = 5.01 [2.28–11.03], p = 0.0001). Secondary analyses revealed that individuals with schizophrenia and average IQ had the lowest yield of pathogenic CNVs (n = 9/325; 2.8%), followed by those with borderline intellectual functioning (n = 9/130; 6.9%), non-verbal learning disability (n = 6/29; 20.7%), and co-morbid intellectual disability (n = 15/62; 24.2%). There was no significant difference in the burden of rare CNVs classified as a VUS between any of the IQ subgroups. 
There was a significantly (p = 0.002) increased burden of rare genic duplications in individuals with schizophrenia and low IQ, which persisted after excluding individuals with a pathogenic CNV. Using high-resolution microarrays, we were able to demonstrate for the first time that the burden of pathogenic CNVs in schizophrenia differs significantly between IQ subgroups. The results of this study have implications for clinical practice and may help inform future rare variant studies of schizophrenia using next-generation sequencing technologies.
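The abstract's headline numbers can be approximately reproduced from the reported subgroup counts (average IQ: 9/325 pathogenic-CNV carriers; low IQ, i.e. borderline + NVLD + ID combined: 30/221). This is a sketch: the raw 2×2 odds ratio and a Wilson interval differ slightly from the published OR of 5.01 and CI of 5.2–9.7%, which were presumably model-based:

```python
import math

# Counts reconstructed from the reported yields.
a, b = 30, 221 - 30   # low-IQ group: carriers, non-carriers
c, d = 9, 325 - 9     # average-IQ group: carriers, non-carriers

# Raw 2x2 odds ratio, close to (but not identical with) the reported 5.01.
odds_ratio = (a * d) / (b * c)
print(round(odds_ratio, 2))

# Wilson 95% CI for the overall yield of 39/546 (the paper reports
# 5.2-9.7%, possibly via a different interval method).
k, n, z = 39, 546, 1.96
p = k / n
denom = 1 + z * z / n
centre = (p + z * z / (2 * n)) / denom
half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
print(round(p * 100, 1),
      round((centre - half) * 100, 1),
      round((centre + half) * 100, 1))  # -> 7.1 5.3 9.6
```

The closeness of the crude odds ratio to the reported estimate suggests the group comparison is driven almost entirely by the 2×2 table rather than by covariate adjustment.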
Background
Schizophrenia is a severe psychiatric disorder associated with significant impairments in cognitive functioning [1]. On average, full scale IQ (FSIQ) is 7-8 points lower in cohorts with schizophrenia compared to general population norms [2] and the risk for schizophrenia has been shown to increase by 3.8% per 1-point decrease in FSIQ [3,4]. However, this risk appears to be greatest for individuals with FSIQ < 85, and for those with a significantly lower performance IQ (PIQ) than verbal IQ (VIQ) (i.e., a ~7-point difference or greater in the two main components of FSIQ) [4][5][6]. More extreme VIQ > PIQ discrepancies (i.e. ≥ 15 points) are clinically relevant and represent a neuropsychological hallmark of non-verbal learning disability (NVLD), a condition characterized by deficits in visual-spatial perception, complex psychomotor skills, nonverbal problem solving, arithmetic, and social judgment [7,8]. The prevalence of schizophrenia in individuals with intellectual disability (ID; generally, IQ < 70) is threefold to fivefold higher than the general population prevalence of 1% [3,9]. Taken together, these data suggest that the underlying genetic mechanisms that predispose individuals to schizophrenia may be stronger in those with low FSIQ, particularly low PIQ, than in those with higher IQ. Given that the IQ deficits in schizophrenia are associated with functional outcome [1], further study of genetic risk variants for schizophrenia in the context of the intellectual profile appears warranted.
Rare copy number variations (CNVs) have been identified to play an important role in the etiology of schizophrenia and developmental disability and/or ID (DD/ID) [10,11]. Several large rare CNVs, including deletions at 2p16.3 overlapping NRXN1, 15q13.3 (BP4-BP5) deletions, and 16p11.2 deletions/duplications, have been identified in schizophrenia and DD/ID [12][13][14]. Additionally, CNVs associated with schizophrenia have been shown to negatively affect IQ in population-based controls without any major neuropsychiatric disorder [15]. The widespread use of clinical microarray testing in DD/ID has established the yield of pathogenic CNVs to be 15-20% [16]. In contrast, there have been significantly fewer diagnostic yield studies in schizophrenia [10,17], possibly due to the lack of guidelines endorsing routine clinical microarray testing in this complex adult-onset condition [18]. Since most rare CNV studies of schizophrenia do not report IQ and/or have excluded participants with co-morbid ID [13,19], the yield of pathogenic CNVs and the underlying genetic architecture of schizophrenia in the context of low IQ (schizophrenia-LIQ) remains unknown. Further, there have been no studies examining the genome-wide burden and/or functional impact of rare CNVs on schizophrenia while taking into account IQ, and after removing those CNVs that are deemed pathogenic.
Identifying sub-populations of individuals with schizophrenia who may be at an increased risk for a clinically reportable CNV, classified as pathogenic or a variant of unknown significance, would be useful for clinical practice. The primary aims of this study were twofold: (1) to compare the genome-wide burden of clinically reportable CNVs between individuals with schizophrenia-LIQ and schizophrenia-average IQ; and (2) to compare the genome-wide burden and functional impact of rare CNVs, beyond those that are currently deemed pathogenic, between individuals with schizophrenia-LIQ and average to higher IQ. Secondary analyses were aimed at identifying the yield of clinically reportable CNVs in schizophrenia across a wider range of IQ groups, including for those with a NVLD.
Schizophrenia sample collection and ascertainment
We recruited 688 adults who met the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, diagnostic criteria for schizophrenia or schizoaffective disorder. Our detailed ascertainment strategy is described elsewhere [10]; however, it should be noted that the majority of the individuals recruited were chronically ill and therefore unlikely to include individuals in the first onset of illness whose diagnosis may change over time. There were 644 participants ascertained from six community mental health clinics across Central and Eastern Canada. In order to increase the number of individuals with schizophrenia at the lower end of the IQ spectrum we recruited an additional 44 participants with schizophrenia and ID from two outpatient mental health clinics that specialize in treating adults with a dual diagnosis (ID and a psychiatric disorder). However, of these 44 individuals, only 19 (43.2%) were included in the final cohort of 546 unrelated participants of European ancestry with adequate IQ data. The CNV data for a subset of the individuals with schizophrenia (n = 459; 66.7%) were previously published [10], although without the associated IQ data. Consent was obtained from all participants and surrogate consent was provided by an individual with power of attorney or equivalent for health decisions for individuals deemed incapable of providing informed consent. This study was approved by local institutional research ethics boards at the Centre for Addiction and Mental Health, Saint John Horizon Health Network, Humber River Hospital, Queen Elizabeth Hospital, Hamilton Health Services, and Bethesda Services.
Clinical assessment of IQ level in individuals with schizophrenia
Similar to previous studies [20], we used a combination of previous IQ testing and educational attainment data to assign individuals with schizophrenia to an IQ subgroup.
We also performed a comprehensive screening interview with each individual and/or his or her relative(s) to obtain medical, developmental, educational, and psychiatric history in addition to detailed demographic information. We retrospectively reviewed the available lifetime medical and psychiatric records for all 688 participants, blind to CNV status, and recorded results from all previous IQ and clinical genetic testing. These previous genetic and IQ results were not known at the time of recruitment. There were 212 of 546 (38.8%) individuals in the final sample with IQ scores (n = 136; 19.8%) and/or descriptive IQ ranges (n = 76; 11.0%) available (collectively referred to as IQ scores in the remaining text), 202 (36.9%) of whom had age at testing and schizophrenia age at onset both available. The majority of these IQ scores (n = 164/202; 81.2%) were obtained during the five years preceding the first onset of psychotic illness or within the 15 years after onset. Eighteen (8.9%) individuals had IQ testing completed more than five years before the first onset of psychotic illness and 20 (9.9%) had testing completed more than 15 years after onset. Individuals with IQ data had to be stable enough (e.g. with respect to psychotic symptoms) to be able to complete standardized IQ testing. There were no data on antipsychotic treatment at testing, but such treatment is unlikely to have affected IQ results [21].
We assigned individuals to the schizophrenia-LIQ or the schizophrenia-average IQ group if they had an IQ score of < 85 or ≥ 85, or an estimated IQ of borderline/ID or average range, respectively. For secondary analyses, individuals in the schizophrenia-LIQ group were divided into borderline intellectual functioning (IQ 71-85) or ID (IQ ≤ 70) groups. Given that the risk for schizophrenia may be higher for individuals with a significant discrepancy between their PIQ and VIQ scores we assigned individuals meeting criteria for a NVLD (PIQ ≥ 15 points lower than VIQ; Additional file 1: Figure S1) to a separate schizophrenia-NVLD category [6,7]. We also used educational attainment to assign participants to intellectual functioning groups. However, IQ scores were deemed the more accurate measure of intellectual ability when years of education appeared out of keeping with expectations and functioning. Examples included individuals with IQ < 70 yet 12 years of education in a modified curriculum (assigned to schizophrenia-ID group) and individuals with IQ of 90 who left school to work after just eight years of education (assigned to schizophrenia-average IQ group).
In the absence of IQ scores we used educational attainment, which has a 0.6-0.7 correlation with FSIQ in the general population [22] and/or additional clinical data to assign individuals to each group as follows: the schizophrenia-LIQ group comprised individuals with a history of special education and/or had ID noted repeatedly throughout the medical records (estimated mild/moderate ID) and individuals who had 8-11 years of formal education with reported difficulties in school (e.g. repeated grades, enrolled in general courses in high school; estimated borderline intellectual functioning) [22,23]. Years of education are not informative for individuals with schizophrenia-ID given that the majority of individuals are enrolled in special education and/or had modified academic curriculums. Individuals who had completed ≥ 12 years of education (graduated high school), had no reported difficulties in school, and had not repeated any grades were assigned to the schizophrenia-average IQ group [22,23]. However, there were a number of scenarios in which our detailed clinical data led us to believe that an individual's formal educational attainment did not reflect their true cognitive abilities. For example, we assigned individuals to the schizophrenia-average IQ group if they left school early due to incarceration, vocational and/or family responsibilities, or early onset of psychotic symptoms if they were reported to have done well academically up until that point. All assessments of IQ and educational attainment were performed blind to CNV status.
CNV detection and annotation
High-quality genomic DNA was available for 540/546 (98.9%) participants and was submitted to the Centre for Applied Genomics in Toronto, Canada for genotyping using either the Affymetrix® Genome-Wide Human SNP array 6.0 or the CytoScan HD array. All samples met the Affymetrix quality control cut-offs. Similar to previous studies [10,24], we only included CNVs that were > 10 kb, identified by at least two CNV calling algorithms (two of ChAS, iPattern, or Genotyping Console for the CytoScan HD array and two of iPattern, Birdsuite or Genotyping Console for the Affymetrix 6.0 array), spanning ten consecutive array probes, and overlapping < 75% of segmental duplications. Over 90% of CNVs called using these criteria validate using a second laboratory method [24]. The CytoScan HD array has a higher resolution than the Affymetrix 6.0 array; however, 90.0% of deletions ≥ 25 kb and spanning 25 consecutive array probes and duplications ≥ 50 kb and spanning 50 consecutive array probes are concordant between the two microarrays [25]. There was no significant difference in the proportion of individuals from the schizophrenia-LIQ or the schizophrenia-average IQ group analyzed on the Affymetrix 6.0 and CytoScan HD array (χ² = 1.50, df = 1, p = 0.219). There were six (1.1% of 546) participants with 22q11.2 deletions included in the cohort who did not have Affymetrix 6.0 or CytoScan HD microarray data available and were therefore only included in the analyses comparing the burden of pathogenic CNVs.
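As a rough illustration, the inclusion criteria above can be expressed as a single predicate over CNV calls. This is a hypothetical sketch: the record fields and example calls are invented, not the output of the study's actual calling pipeline.

```python
# Hypothetical sketch of the CNV inclusion filter described above.
# Field names and example calls are invented for illustration; the real
# pipeline works on ChAS/iPattern/Birdsuite/Genotyping Console output.

def passes_qc(cnv):
    """Keep a call only if it is > 10 kb, made by at least two calling
    algorithms, spans >= 10 consecutive array probes, and overlaps
    segmental duplications over < 75% of its length."""
    size = cnv["end"] - cnv["start"]
    return (
        size > 10_000
        and len(cnv["algorithms"]) >= 2
        and cnv["n_probes"] >= 10
        and cnv["segdup_overlap_fraction"] < 0.75
    )

calls = [
    {"start": 100_000, "end": 160_000, "algorithms": {"iPattern", "ChAS"},
     "n_probes": 25, "segdup_overlap_fraction": 0.10},  # passes all criteria
    {"start": 100_000, "end": 108_000, "algorithms": {"iPattern", "ChAS"},
     "n_probes": 25, "segdup_overlap_fraction": 0.10},  # too small (8 kb)
    {"start": 100_000, "end": 160_000, "algorithms": {"iPattern"},
     "n_probes": 25, "segdup_overlap_fraction": 0.10},  # single algorithm
]
kept = [c for c in calls if passes_qc(c)]
print(len(kept))  # 1
```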
We used 10,113 population-based controls (Additional file 1: Table S1) to adjudicate CNV rarity in the schizophrenia-LIQ and schizophrenia-average IQ groups. As before [10,24,26], we used a conservative definition of "rare," defined as CNVs found in < 0.1% of these 10,113 independent controls using a 50% overlap criterion. Further quality control methods included removing locus-specific batch effects (i.e. CNVs with identical coordinates and copy number state that were present in > 1% of the sample) and manually joining large CNVs that appeared to be fragmented [13]. All CNV coordinates are given using the Genome Reference Consortium February 2009 build of the human genome (GRCh37/hg19).
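The "< 0.1% of 10,113 controls using a 50% overlap criterion" rule can be sketched as below. The overlap is implemented here as reciprocal overlap, a common convention; whether the study's 50% criterion was one-way or reciprocal is an assumption of this sketch, as are the example coordinates.

```python
# Illustrative rarity check: a case CNV counts as "rare" if fewer than
# 0.1% of control individuals carry a CNV overlapping it by >= 50%.
# Reciprocal overlap is assumed; coordinates below are invented.

def reciprocal_overlap(a_start, a_end, b_start, b_end):
    """Fraction of overlap relative to the larger of the two CNVs
    (i.e. the smaller of the two one-way overlap fractions)."""
    inter = min(a_end, b_end) - max(a_start, b_start)
    if inter <= 0:
        return 0.0
    return min(inter / (a_end - a_start), inter / (b_end - b_start))

def is_rare(cnv, controls, n_controls, freq_cutoff=0.001, ro_cutoff=0.5):
    """controls: (start, end) control CNVs of the same type at this locus."""
    n_matching = sum(
        1 for (s, e) in controls
        if reciprocal_overlap(cnv[0], cnv[1], s, e) >= ro_cutoff
    )
    return n_matching / n_controls < freq_cutoff

case_cnv = (1_000_000, 1_500_000)
control_cnvs = [(1_050_000, 1_450_000)] * 5  # 5 matching control carriers
# 5/10113 ~ 0.05% < 0.1%, so the CNV is adjudicated rare
print(is_rare(case_cnv, control_cnvs, n_controls=10_113))  # True
```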
Assessment of ancestry and relatedness
We genotyped the 549,374 SNPs that are common to both the Affymetrix 6.0 and CytoScan HD arrays for participants using Birdseed v2 or Chromosomal Analysis Suite 3.1, respectively. Genotype data from 293,511 unlinked SNPs were used to estimate ancestry for the individuals with schizophrenia using PLINK [27]. Genotype data from 778 HapMap participants were used as a known reference for ancestry. Of the 688 individuals with schizophrenia in the original sample, 617 (89.6%) were identified to be of European descent. Pair-wise identity by descent analyses for individuals with high-resolution microarray data revealed that none of these participants were related to one another (all PI_HAT values were < 0.1). The unrelated individuals of European descent with schizophrenia who had sufficient IQ/educational data available to be categorized by intellect (n=546; 88.5% of 617) comprised the sample for this study.
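The relatedness screen described above (all pair-wise PI_HAT values < 0.1) is typically read off a PLINK `--genome` output file. A minimal sketch, assuming PLINK's standard `.genome` column layout; the sample IDs and values below are invented:

```python
# Parse a PLINK --genome (.genome) file and flag pairs with PI_HAT above
# a relatedness cutoff. Column names follow PLINK's .genome format; the
# file contents here are fabricated for illustration.
import io

genome_txt = """\
FID1 IID1 FID2 IID2 RT EZ Z0 Z1 Z2 PI_HAT
F1 A F2 B UN NA 0.98 0.02 0.00 0.0100
F1 A F3 C UN NA 0.40 0.40 0.20 0.4000
"""

def related_pairs(handle, pi_hat_cutoff=0.1):
    """Return (IID1, IID2) for every pair at or above the PI_HAT cutoff."""
    header = handle.readline().split()
    idx = {name: i for i, name in enumerate(header)}
    flagged = []
    for line in handle:
        fields = line.split()
        if float(fields[idx["PI_HAT"]]) >= pi_hat_cutoff:
            flagged.append((fields[idx["IID1"]], fields[idx["IID2"]]))
    return flagged

print(related_pairs(io.StringIO(genome_txt)))  # [('A', 'C')]
```

In this study no pair was flagged, i.e. the list above would be empty for the actual cohort.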
Clinical adjudication of rare CNVs in schizophrenia participants
All rare (< 0.1%) exonic CNVs > 100 kb and all noncoding CNVs > 500 kb were assessed for clinical relevance by a trained cytogeneticist following the American College of Medical Genetics (ACMG) guidelines for CNV interpretation [28]. CNVs were classified according to the five standard ACMG categories: (1) pathogenic; (2) variant of unknown significance (VUS) likely pathogenic; (3) VUS; (4) VUS likely benign; and (5) benign. We considered CNVs classified as pathogenic or VUS-likely pathogenic to be pathogenic. CNVs defined as clinically reportable included those classified as pathogenic, VUS-likely pathogenic, and VUS. The yield of pathogenic CNVs, VUS, and clinically reportable CNVs (pathogenic and VUS combined) were calculated based on the proportion of individuals in the schizophrenia-LIQ vs the schizophrenia-average IQ group with at least one of these CNV types, regardless of size or chromosomal location.
Genome-wide CNV burden and statistical analyses
In our primary analyses, we tested the hypothesis that the genome-wide burden of clinically reportable CNVs was greater for participants with schizophrenia-LIQ than for those in the schizophrenia-average IQ group. In addition, after excluding individuals who were identified to have a pathogenic CNV (Table 1), which tend to be large and overlap many genes, we performed a logistic regression analysis to compare the total number, total length, and genic content of rare autosomal CNVs (all, deletions and duplications separately) > 10 kb between the schizophrenia-LIQ and schizophrenia-average IQ groups. Sex and genotyping platform were included as covariates. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated using R 3.3.1 software. All tests were two-sided with p < 0.05 defined for statistical significance, and uncorrected given limited multiple testing.
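For intuition, an unadjusted odds ratio with a Woolf (log-OR) 95% CI can be computed from a 2x2 carrier table as below. This is a simplified stand-in for the covariate-adjusted logistic regression run in R; the carrier counts are made up for illustration.

```python
# Unadjusted odds ratio with a Woolf 95% CI for a 2x2 burden table -- a
# simplified stand-in for the covariate-adjusted (sex, platform) logistic
# regression run in R. Counts below are invented for illustration.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = carriers/non-carriers in group 1; c/d = same in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# e.g. 20 of 192 LIQ carriers vs 12 of 325 average-IQ carriers (made-up)
or_, lo, hi = odds_ratio_ci(20, 172, 12, 313)
print(f"OR = {or_:.2f} [{lo:.2f}-{hi:.2f}]")  # OR = 3.03 [1.45-6.35]
```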
Gene-set enrichment analysis
We performed a gene-set enrichment analysis in order to determine if the functional impact of rare autosomal CNVs differed between the schizophrenia-LIQ and schizophrenia-average IQ groups. We tested 17 gene-sets that were postulated to play a role in the pathogenesis of schizophrenia and/or DD/ID. These included 15 sets that were significantly enriched for deletions (n = 15) or duplications (n = 1) in a recent large-scale CNV study of schizophrenia [13]. Briefly, these included two sets containing genes that are predicted to be targets of FMR1 [29,30], three sets containing genes coding for members of N-methyl-D-aspartate receptors (NMDAR), neuronal activity-regulated cytoskeleton-associated protein, and components of the postsynaptic density (PSD) [31], and ten sets associated with neuronal function, synaptic components, and/or neurological/neurodevelopmental phenotypes in humans (n = 7) or mice (n = 3) [13]. We also included two sets that comprised genes that were overlapped significantly more often by deletions (n = 1) or duplications (n = 1) in a clinically ascertained cohort with DD/ID compared to controls [12]. Detailed descriptions of how these 17 gene-sets were compiled are outlined in Additional file 2.
The gene-set enrichment analysis used a logistic regression deviance test [31] [R/Bioconductor package cnvGSA: Gene Set Analysis of (Rare) Copy Number Variants (version 1.18.0)] (https://www.bioconductor.org/packages/release/bioc/html/cnvGSA.html) to evaluate if the number of genes overlapped by rare exonic deletions or duplications in each individual for each of the gene-sets (i.e. gene-set specific exonic burden) was predictive of the participant being a member of the schizophrenia-LIQ or the schizophrenia-average IQ group. We included sex, genotyping platform, and the total number of genes overlapped by rare CNVs per individual as covariates. Multiple-testing correction was applied using the Benjamini-Hochberg false discovery rate (BH-FDR).
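The predictor in the cnvGSA deviance test, the per-individual gene-set specific exonic burden, is essentially a count of gene-set members hit by that individual's rare exonic CNVs. A minimal sketch, with invented gene names and CNV gene lists:

```python
# Sketch of the per-individual "gene-set specific exonic burden" used as
# the predictor in the cnvGSA deviance test. Gene names and per-CNV gene
# lists below are invented for illustration.

nervous_system_dev = {"CNTN4", "NRXN1", "MAPK3", "RCAN1"}

def geneset_burden(cnv_gene_lists, gene_set):
    """cnv_gene_lists: one set of overlapped genes per rare exonic CNV
    carried by the individual. Each gene is counted once per individual,
    even if hit by multiple CNVs."""
    hit = set().union(*cnv_gene_lists) if cnv_gene_lists else set()
    return len(hit & gene_set)

individual_cnvs = [{"CNTN4", "SOMEGENE"}, {"MAPK3"}]
print(geneset_burden(individual_cnvs, nervous_system_dev))  # 2
```

This count would then enter a logistic regression of group membership on burden plus covariates, with significance assessed by comparing model deviances (a likelihood-ratio test).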
Clinical features of the cohort
Of the 546 unrelated participants with schizophrenia of European descent, there were 325 (59.5%) assigned to the schizophrenia-average IQ group, 192 (35.2%) assigned to the schizophrenia-LIQ group, of whom 130 (67.7%) had borderline intellectual functioning and 62 (32.3%) had mild (n = 57) or moderate (n = 5) ID, and 29 (5.3%) assigned to the schizophrenia-NVLD group. Total years of education were significantly lower (Mann-Whitney U = 6453.5, p < 0.0001) in the schizophrenia-borderline intellectual functioning group (median = 10; range = 5-16 years) compared to the schizophrenia-average IQ group (median = 12; range = 5-19 years) and not significantly different between the schizophrenia-average IQ and the schizophrenia-NVLD (median = 12; range = 7-18 years) groups (p = 0.385). Before involvement in this study, only seven (1.3%) individuals from the entire cohort had previously received clinical genetic testing, all of whom had been recruited from a specialized dual diagnosis clinic. These included six (9.6%) individuals from the schizophrenia-ID group and one (0.8%) individual from the schizophrenia-borderline intellectual functioning group. Further demographic and clinical data for the cohort are provided in Additional file 1: Table S2.
Total burden of clinically reportable CNVs
There were 78 CNVs classified as VUS in 70 (12.8%; 95% CI 10.2-16.0%) schizophrenia participants (Additional file 3), five of whom also had a pathogenic CNV. In contrast to the pathogenic CNV results, there was no significant difference (p = 0.243) in the prevalence of individuals with one or more VUS between the schizophrenia-LIQ group (n = 26/192; 13.5%) and schizophrenia-average IQ group (n = 33/325; 10.2%). Secondary analyses revealed that there was also no significant difference in the prevalence of participants with a VUS between any of the IQ subgroups (Fig. 1). Of the 78 CNVs classified as VUS (median size 723 kb; range 115 kb to 4.3 Mb), there were slightly more duplications (n = 51; 65.3%) than deletions (n = 27; 34.7%), but this difference was non-significant (p = 0.057). Taken together, there were 99 (18.1%; 95% CI 15.0-21.7%) schizophrenia individuals with a clinically reportable CNV (pathogenic and/or VUS). In total, there were 14 (2.6%) participants with two or more clinically reportable CNVs, significantly more of whom were in the schizophrenia-ID (n = 5/62; 8.1%) group compared to the schizophrenia-average IQ (n = 4/325; 1.2%) group (OR = 8.06 [2.21-32.93], p = 0.0018). There was no significant difference (p = 0.135) between the schizophrenia-borderline intellectual functioning group (n = 5/130; 3.8%) and the schizophrenia-average IQ group.
Fig. 1 Yield of clinically reportable CNVs in schizophrenia by IQ group. The figure depicts the percentage of individuals with schizophrenia for each of the IQ groups with one or more pathogenic (defined as pathogenic or VUS-likely pathogenic) CNV (a) or one or more CNV classified as a VUS (b), determined using the ACMG guidelines for CNV interpretation [28]. Individuals with more than one clinically reportable CNV were only counted once. Schizophrenia participants were assigned to each of the IQ sub-groups using the methods described in the manuscript. Average average IQ, IQ intelligence quotient, BL borderline intellectual functioning, ID intellectual disability, NVLD non-verbal learning disability, VUS variant of unknown significance. Asterisks above horizontal brackets represent the significance of the between-group comparisons: *p < 0.05, **p < 0.01, ***p < 0.001, NS = p > 0.1. All other comparisons were non-significant.
Burden of genome-wide rare CNVs
We also attempted to determine if the genome-wide burden of rare CNVs was higher in an expanded schizophrenia-LIQ group compared to the schizophrenia-average IQ group, after excluding the 39 individuals with a pathogenic CNV. Given that the prevalence of pathogenic CNVs was similar for participants with ID and a NVLD, we added the 23 individuals with a NVLD (and no pathogenic CNV) to the original schizophrenia-LIQ group for the remaining analyses. After controlling for sex and genotyping platform there was no significant difference in the genome-wide burden, total genomic length, or total number of genes overlapped by rare autosomal CNVs between the two groups (Additional file 1: Table S3). However, there were significantly more genic CNVs in the expanded schizophrenia-LIQ group compared to the schizophrenia-average IQ group (OR = 1.19 [1.01-1.41], p = 0.042), primarily driven by an increased burden of genic duplications (OR = 1.42 [1.14-1.81], p = 0.002); findings for genic deletions did not reach significance (p = 0.129) (Additional file 1: Table S3).
Gene-set enrichment analysis
After multiple-test correction (BH-FDR < 10% and p value < 0.05), we detected no gene-sets that were significantly enriched for rare autosomal deletions in the expanded schizophrenia-LIQ group (comprising individuals with schizophrenia and ID, borderline intellectual functioning, or NVLD) compared to the schizophrenia-average IQ group (data not shown). There was one gene-set, GO nervous system development, that was significantly (p = 0.013) enriched for rare duplications in the schizophrenia-LIQ group that had a BH-FDR of 0.22 (Table 2). To see if we could improve the FDR, we reduced the 17 gene-sets to only those six reported to have a FDR < 30% for rare genic duplications in the recent Psychiatric Genomics Consortium schizophrenia case control CNV study [13]. This resulted in an improved FDR (from 0.22 to 0.07) for the GO nervous system development gene-set. The GO nervous system development gene-set became non-significant (p = 0.074, FDR = 0.37) after excluding the 39 participants with a pathogenic CNV (Table 2).
There were 44 rare duplications in 35 individuals in the expanded schizophrenia-LIQ group and 29 rare duplications in 28 individuals in the schizophrenia-average IQ group contributing to the GO nervous system development gene-set result (Additional file 3). The duplications not currently classified as pathogenic or a VUS in the schizophrenia-LIQ individuals overlapped several interesting neuropsychiatric candidate genes, such as CNTN4, NDUFV2, and RCAN1 [47][48][49]. There were also duplications in two participants from the expanded schizophrenia-LIQ group that overlapped two genes (ARSA and EIF2B1) associated with leukodystrophy, a progressive disease that causes abnormal development of and/or destruction of the myelin sheath and can present in adulthood with symptoms similar to that of schizophrenia [50,51].
Discussion
This is the first study to examine the burden of clinically reportable CNVs in schizophrenia by IQ group. Our results revealed that 7.1% of schizophrenia individuals ascertained from a community outpatient setting may have a pathogenic CNV detected by genome-wide microarray. However, this diagnostic yield was not uniformly distributed across the cohort, as there was a significant increase in the yield of pathogenic CNVs as IQ decreased (Fig. 1). We also demonstrated for the first time that the prevalence of pathogenic CNVs may be similar for individuals with schizophrenia-ID and schizophrenia-NVLD. Further, we identified an increased burden of rare genic autosomal duplications in the schizophrenia-LIQ compared to the schizophrenia-average IQ group, a finding that was not attributable to large rare pathogenic CNVs.
(Table 2 note: the contributing CNVs comprised four pathogenic 16p11.2 duplications in four individuals, for which the contributing gene for this gene-set result was MAPK3. Abbreviations: LIQ low IQ, GO gene ontology, CNV copy number variation, p statistical result when all rare autosomal CNVs are included, BH-FDR Benjamini-Hochberg false discovery rate, Inf infinity, NMDAR N-methyl-D-aspartate receptor components.)
The importance of clinical microarray testing in the dual diagnosis adult population
In the current study, the highest yield of pathogenic CNVs (24.1%) was identified in individuals with schizophrenia and co-morbid ID. This yield was higher than that reported for epilepsy (~5-10%) [52] or ASD (~10-15%) [53] alone, and comparable to that for DD/ID (~15-20%) [16]. There have been few studies that have examined the burden of pathogenic CNVs in adults with a dual diagnosis (ID plus one or more additional neurodevelopmental and/or neuropsychiatric conditions), and even fewer that have focused specifically on schizophrenia-ID. A recent study identified a pathogenic CNV in nine of 72 (12.5%) individuals with ID and psychosis [54], about half of the yield reported in the current study. Such discrepancies in the diagnostic yield may be due to differences in ID severity between these two cohorts (data unavailable for the Wolfe et al. cohort) [54] and/or differences in the application of guidelines for CNV interpretation that are vulnerable to subjective evaluations of the evidence supporting the classification of a variant [55]. Further studies examining the diagnostic yield of microarray in the schizophrenia-ID population will be needed to clarify this. With respect to other neuropsychiatric conditions, the rate of rare de novo CNVs has been shown to increase with decreasing IQ in individuals with ASD [56], but no studies have yet formally examined the diagnostic yield of clinical microarray in the ASD-ID population. Conversely, a recent study investigating adults with ID and pediatric-onset epilepsy revealed that 16.0% (n = 23/143) of the cohort had a pathogenic CNV [57], a higher prevalence than that reported for epilepsy alone [52]. Taken together, these data suggest that adults with a dual diagnosis should be prioritized for clinical microarray testing.
However, it is important to note that before inclusion in our study, fewer than 10% of the dual diagnosis schizophrenia-ID participants in our cohort had received any type of clinical genetic testing despite meeting criteria for routine chromosomal microarray (CMA) testing based on the Miller et al. 2010 clinical recommendations [16]. This suggests that additional efforts to increase widespread CMA testing in the adult DD/ID population, particularly for those with a dual diagnosis, are needed.
Increased burden of clinically relevant CNVs in individuals with schizophrenia-NVLD
To our knowledge, this is the first study to report on the burden of clinically reportable CNVs in individuals with schizophrenia-NVLD. Individuals with a NVLD demonstrate significant deficits in visual-spatial organization, motor coordination, and social perception and interaction, but retain relatively well-developed verbal skills [7]. Determining a diagnosis of NVLD relies heavily on formal IQ testing, allowing for the detection of significant VIQ > PIQ discrepancies that are difficult to ascertain clinically. Perhaps unsurprisingly, given the emphasis on verbal skills in formal education, there was no significant difference in the total years of education between the schizophrenia-NVLD and schizophrenia-average IQ groups (Additional file 1: Table S2). Yet, the schizophrenia-NVLD participants had a significantly higher burden of pathogenic CNVs compared to the schizophrenia-average IQ group. Indeed, the burden of pathogenic CNVs in the schizophrenia-NVLD group was more comparable to that of the schizophrenia-ID group (20.7% vs 24.1%). Interestingly, all of the individuals with schizophrenia-NVLD and a pathogenic CNV had a PIQ < 85, yet VIQ for all but one individual was in the average to above average range (Additional file 1: Figure S1). This finding has important clinical relevance because individuals with schizophrenia-NVLD would not generally be considered for clinical microarray testing [16].
NVLD has been previously described as being associated with several structural variants, including those underlying Turner syndrome [58], Williams syndrome [59], and the 22q11.2 deletion syndrome [60]. This study extends these findings to include other rare recurrent CNVs, including 15q11.2-q13.1 and 16p11.3 duplications. We also identified a novel 280-kb deletion at 3p26.1 overlapping genes SUMF1 and ITPR1 in a participant with schizophrenia-NVLD that has not been previously reported in the literature. ITPR1 encodes the inositol 1,4,5-trisphosphate receptor that plays an important role in releasing Ca2+ from the endoplasmic reticulum [61]. Interestingly, a recent whole-exome sequencing (WES) study identified 11 individuals with schizophrenia who had ultra-rare disrupting/damaging variants in ITPR1 [46]. There is also a 60-kb deletion overlapping the first four exons of ITPR1 in a case (nsv996226) with autistic behavior reported in the Clinical Genome Resource database (https://www.clinicalgenome.org). Heterozygous deletions and missense mutations in this gene have also been associated with adult onset spinocerebellar ataxia-15 (MIM 606658) and childhood onset spinocerebellar ataxia-29 (MIM 117360) [62,63]. Additionally, homozygous/compound heterozygous truncating mutations and heterozygous deletions and missense mutations in ITPR1 have been associated with Gillespie syndrome (MIM 206700), a disorder characterized by hypotonia, progressive hypoplasia, ataxia, and variable cognitive impairment with onset occurring within the first year of life [61]. Notably, many genes associated with spinocerebellar ataxia are also reported to play a possible role in schizophrenia, such as ATXN1, ATXN2, and ATXN10 [64][65][66].
Delineating the genetic architecture of schizophrenia
Data from this study demonstrate that the genetic architecture of schizophrenia-LIQ, and probably schizophrenia-NVLD, differs significantly from schizophrenia-average IQ, even after excluding large rare pathogenic CNVs that have a well-documented impact on cognition in the general population [15]. Despite having a relatively small sample size, we were able to identify an increased burden of rare exonic duplications in the schizophrenia-LIQ group that may overlap more genes involved in nervous system development (Table 2). Larger sample sizes could help provide improved statistical support for these findings as well as potentially identify additional pathways relevant to schizophrenia-LIQ. Data from a recent WES study in schizophrenia identified a significantly increased burden of rare damaging variants in loss-of-function (LoF) intolerant genes associated with developmental disorders in individuals with schizophrenia-ID compared to those with schizophrenia and average IQ [20]. Interestingly, the burden of these damaging variants in LoF intolerant genes was also increased in schizophrenia-average IQ compared to controls, suggesting that they contribute to risk for developing schizophrenia but to a lesser degree than that for schizophrenia-ID [20].
Advantages and limitations
Significant strengths of our study included the robust CNV detection methods used and systematic application of established guidelines for CNV interpretation [28]. Also, the community-based ascertainment strategy more closely reflects the general schizophrenia population than a strictly hospital-based recruitment strategy, thus allowing for more generalizable diagnostic yield estimates. The comprehensive phenotyping protocol facilitated our ability to stratify the schizophrenia cohort by IQ. Yet, despite substantial efforts, the numbers of individuals with schizophrenia-ID remained relatively small. We therefore targeted specialized dual diagnosis clinics in order to increase recruitment at the lower end of the IQ range. This appeared to increase the yield of pathogenic CNVs detected in the borderline intellectual functioning group, possibly reflecting an ascertainment bias in which more severely affected and/or psychiatrically complex individuals may be referred to such specialized clinics. Efforts to identify more individuals from community settings are needed.
Another limitation of our study was that although there were formal IQ data for a substantial number of individuals, educational data were used to group the remainder of the cohort. While the correlation between FSIQ and educational attainment is high (0.6-0.7) [22], it is possible that we have misclassified some participants, particularly those with a learning disability (such as NVLD) that cannot be delineated by educational attainment alone. Further, given that there are some data to suggest that there is an IQ decline associated with schizophrenia illness onset [67] and that the majority of the IQ scores used in this study were obtained just before or sometime after the onset of illness, it is possible that some individuals were incorrectly assigned to an IQ group lower than their "premorbid" IQ score would have placed them. However, we based IQ group placement on multiple pieces of evidence, including IQ scores, educational attainment, functioning, and personal circumstances. This process mirrors the multiple factors taken into consideration when making a clinical diagnosis of intellectual functioning. Also, each IQ group covers a fairly wide range of IQ scores. We therefore believe that IQ group misclassification is likely to be low for this study.
Conclusions
Using high-resolution microarrays, the results indicate that the burden of pathogenic CNVs is significantly greater for individuals with schizophrenia and low IQ compared to those with normal to superior IQ. These data have important clinical and research implications, including demonstrating that participants with schizophrenia and low IQ should be prioritized for clinical microarray testing and highlighting the importance of taking IQ into account for the interpretation of future rare variant studies of schizophrenia. The data also suggest that individuals with schizophrenia-NVLD, comprising 5.3% of the sample, may also have an increased burden of pathogenic CNVs. Data from next-generation sequencing will allow for the detection of sequence variants and smaller structural variants that may help shed light on more specific mechanisms related to schizophrenia-LIQ. While we identified several potential candidate genes for schizophrenia-LIQ, larger samples will be required to provide sufficient statistical support for any given loci.
Additional files
Additional file 1: A Word document containing one figure depicting the verbal and performance IQ scores for 29 individuals with schizophrenia and a NVLD (Figure S1), and three tables: (1) a list of 10,113 population-based controls used to adjudicate CNV rarity in schizophrenia participants (Table S1); (2) the demographic and clinical information for 546 probands with schizophrenia of European ancestry (Table S2); and (3) the genome-wide burden of all rare autosomal CNVs > 10 kb between the expanded schizophrenia-LIQ and schizophrenia-average IQ groups (Table S3).
Availability of data and materials
All relevant microarray data have been deposited into the Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/) database and can be obtained using the accession number GSE106818. The rare (< 0.1% in population controls) CNV calls from this cohort are also available in Additional file 3.
Authors' contributions
CL and ASB designed the study. CL and GC recruited participants and collected the clinical data and samples. JW and KB provided the 44 additional individuals with schizophrenia-ID. CL and ASB collected the cognitive and educational data and performed all statistical analyses. AL, JW, CRM, and SWS performed the genomic analyses. AN, DJS, and MS adjudicated the CNV data for clinical relevance. CL and DM designed and performed the gene-set enrichment analysis. CL wrote the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
This research project complies with the Declaration of Helsinki. Consent was obtained from all participants, and surrogate consent was provided by an individual with power of attorney or equivalent for health decisions for individuals deemed incapable of providing informed consent. This study was approved by local institutional research ethics boards at the Centre for Addiction and Mental Health, Saint John Horizon Health Network, Humber River Hospital, Queen Elizabeth Hospital, Hamilton Health Services, and Bethesda Services.
Consent for publication
Not applicable.
Competing interests
DM is an employee of Deep Genomics, Inc. The remaining authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 2023-01-20T14:12:56.745Z | 2017-11-30T00:00:00.000 | {
"year": 2017,
"sha1": "7c3b57d1fef1e31b151d96e12e4ccca2883fcbb7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13073-017-0488-z",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "7c3b57d1fef1e31b151d96e12e4ccca2883fcbb7",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": []
} |
145838860 | pes2o/s2orc | v3-fos-license | Evaluation of the effectiveness of different trap designs for the monitoring of Drosophila suzukii (Matsumura, 1931) (Diptera: Drosophilidae) in blackberry crop
Spotted Wing Drosophila (SWD), Drosophila suzukii, is one of the most important pests of berry crops around the world. In this study, different trap models were evaluated for monitoring adult SWD. The study was conducted in a commercial blackberry orchard of the cultivar Chester, in the municipality of Vacaria, RS, Brazil, in May 2016. The treatments consisted of three trap designs, namely the European model (Hemitrap®), the American model (plastic pot of 750 ml capacity), and the Brazilian model (red-dyed and colorless polyethylene terephthalate (PET) bottles of 250 ml capacity). A total of 1,572 SWD adults were captured, comprising 867 males and 705 females. The mean sex ratio was 0.56 ± 0.03, with no difference among trap models. The Hemitrap® showed the highest capture values for SWD adults as well as for other Drosophilidae. The American model did not perform well, being surpassed by the PET bottle trap. When considering the number of entrapped insects per milliliter of attractant, per entrance area, per evaporation surface, and per selectivity, the colorless PET trap (Brazilian model) is the most effective.
INTRODUCTION
Blackberry is cultivated mostly by small-hold farmers due to its low investment cost and high profitability (Pagot and Hoffmann, 2003; Poltronieri, 2003). Although blackberry has favorable characteristics for production in family agriculture, pest attack has been limiting the expansion and profitability of the crop, due to the difficulty of controlling the pests as well as the rejection of whole fruit lots by the industry upon detecting live insects in the fruits.
The recent record in Brazil of Drosophila suzukii (Matsumura, 1931) (Diptera, Drosophilidae), a pest known worldwide as Spotted Wing Drosophila (SWD), has caused concern among producers and technicians due to the significant damage it causes to crops, which may reach 100%. SWD is currently expanding worldwide, attacking several host crops: blackberry, strawberry, blueberry and raspberry (Lee et al., 2011; Walsh et al., 2011). In Brazil, the pest was reported in the South of the country in strawberry crops, causing damage of around 30% of the production (Santos, 2014). Due to its high dissemination potential, rapid population growth and high number of host plants, full attention should be given to SWD in host crops (Teixeira and Rego, 2011).
The establishment of SWD monitoring strategies is an initial step for detecting the pest in crops and for supporting decisions on strategies to control its dissemination. Santos (2014) suggested the use of attractant traps made with polyethylene terephthalate (PET) bottles (250 ml) to monitor SWD, although no data regarding the field efficiency of this trap model were presented. In the USA, it is common to use traps made with transparent plastic pots for monitoring D. suzukii (Lee et al., 2012). In Europe, the use of a commercially available trap (Hemitrap®) has been recommended for monitoring as well as mass control of SWD due to its high efficiency (Probodelt, 2015). It is agreed that different types of traps, that is, different colors, shapes and numbers of holes, will affect the rate of insects captured. For example, the external color of the trap has been pointed out as an important factor for SWD, with superior results for yellow, red or black (Basoalto et al., 2013; Lee et al., 2013). The trap shape is also a characteristic to be analyzed, since the volume of attractant inside the trap influences the amount of volatiles released to the field, and consequently the capture rate of the pest (Lee et al., 2013).
Because it is a recently introduced pest in Brazil, there are still few studies providing robust knowledge for the monitoring and management of D. suzukii in host crops. Thus, the present study aimed to evaluate SWD capture with different trap designs in a commercial blackberry orchard.
MATERIALS AND METHODS
The experiment was conducted in a commercial orchard of blackberry cv. Chester, located in the municipality of Vacaria, State of Rio Grande do Sul, in Southern Brazil (28° 28' 40.18" S and 50° 58' 7.40" W), during May 2016. The experimental design was of randomized complete blocks with four treatments (trap designs) and five replications. The traps were filled with an attractant based on biological yeast, sugar and water (Santos, 2016) in the recommended amount for each trap model.
The first treatment consisted of traps made with transparent plastic pots of 14 cm in height by 11 cm in diameter, named the "American model" (Lee et al., 2013) (Figure 1A). The trap had 11 holes of 4 mm diameter for the arrival of the insects (138.16 mm² of entrance area), located at the upper edge of the trap, near the lid, and was filled with 250 ml of attractant. The second treatment consisted of the commercially available yellow Hemitrap®, 15 cm in height and 12.5 cm in diameter, named the "European model". The trap had 21 holes of 7 mm diameter (807.8 mm² of entrance area), arranged in three groups of seven holes each, symmetrically distributed around the upper third of the trap, and was filled with 250 ml of attractant (Figure 1B). The third and fourth treatments consisted of traps made with 250 ml PET bottles of Coca-Cola® soda, referred to as the "Brazilian model". These traps had five holes of 4 mm diameter (62.8 mm² of entrance area), equidistant by 2 cm, in the lower third of the trap, which was 5 cm in diameter. In this trap, a volume of 40 ml of attractant was used. The difference between treatments three and four was the color of the trap: transparent and red, respectively (Figures 1C and D).
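The reported entrance areas follow directly from the hole counts and diameters, since each circular hole contributes π·(d/2)² (the paper's figures appear to use π rounded to 3.14). A quick check, with my own function naming:

```python
import math

def entrance_area(n_holes, hole_diameter_mm):
    """Total insect entrance area, in mm^2, of n circular holes of a given diameter."""
    return n_holes * math.pi * (hole_diameter_mm / 2) ** 2

print(round(entrance_area(11, 4), 1))  # American model, ~138.2 mm^2
print(round(entrance_area(21, 7), 1))  # European model, ~808.2 mm^2
print(round(entrance_area(5, 4), 1))   # Brazilian model, ~62.8 mm^2
```

The small differences from the reported 138.16 and 807.8 mm² disappear if π ≈ 3.14 is used, as the authors seem to have done.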
The traps were placed in the field in randomized complete blocks (plant rows), equidistant from each other by 6 m, at a height of 1.30 m from the ground. They were inspected every two days, and the captured insects were removed, packed in plastic pots, and taken to the Embrapa Grape and Wine laboratory in Vacaria, RS, for screening. SWD adults were separated by sex and counted under a stereomicroscope, along with the number of other drosophilid adults present in the samples. The analyzed variables were: total number of D. suzukii adults and sex ratio; total number of other Drosophilidae captured; mean SWD per attractant volume (ml); and mean SWD as a function of the total insect entrance area in each trap model. The total number of holes, total entrance area and evaporation surface of the attractant were considered for each trap.
The data were tabulated and tested for normality with the Shapiro-Wilk test and for homoscedasticity with the Hartley and Bartlett tests. Treatment means were compared by the Tukey test at 5% probability using Statistica 6.0 software.
RESULTS AND DISCUSSION
A total of 1,572 adults of D. suzukii were collected in the experiment, being 867 males and 705 females. The sex ratio was approximately 1:1 in all trap models, indicating that the trap designs did not capture the sexes differentially in the evaluated orchard (Table 1). Klesener et al. (2018), in southern Brazil, also found no significant differences in the sex ratio of SWD in berry crops.
Considering the total entrapment values, the European model (Hemitrap®) captured the largest number of SWD, followed by the transparent and red PET bottle traps, whereas the American model showed the lowest value (Table 1). Regarding color, several studies have shown that red is more attractive to SWD than transparent (Lee et al., 2012; Basoalto et al., 2013; Lee et al., 2013). However, Mazzetto et al. (2015) reported a lower entrapment rate for red traps in blueberry crops, similar to the results found in this experiment, where the red trap did not promote higher captures. Regarding the greater entrapment of the European model, some factors are important, for example its greater volumetric capacity and surface area for releasing the volatiles of the attractant to the field. Lee et al. (2013) discussed this and affirmed that capture increases with the amount of attractant in the trap, but not linearly: a 225% increase in the surface area releasing the attractant's volatiles produced an increase of only 12% in capture rate. For Lee et al. (2012) and Renkema et al. (2014), the greatest capture is related to the entrance area of the insects into the trap (area occupied by holes). This corroborates the findings of this trial, since it is precisely the European model (Hemitrap®) that presented the largest insect entrance area. Thus, when evaluating the number of SWD captured relative to attractant volume, number of holes, insect entrance area and evaporation surface of the attractant, the Brazilian model (transparent) was superior, with significantly more entrapped insects per ml, per entrance area and per evaporation surface (Table 2). In this new analysis, the European and American models were similar, without significant differences between them (Table 2).
In this analysis, the Brazilian model (transparent) was efficient for the population evaluation of SWD, since with a low amount of attractant (40 ml) it was already possible to measure the population of the pest in the field. The best performance was also observed when analyzing the evaporation surface area of the attractant, since the Brazilian model trap has a smaller area (62.8 mm²) than the others (Table 2).
Another important aspect of the traps is the location of the holes, which in the Brazilian model are in the lower third, near the surface of the attractant. In addition, the convex shape of the bottle in that region (Coca-Cola®) makes it difficult for the insects to escape from the trap, owing to the positive phototropism of SWD. This does not occur in the American model, where the holes are at the top of the trap, near the lid of the pot, precisely where insects accumulate, which allows them to escape. Comparing the European model (Hemitrap®) with the American model, which contained the same attractant volume (250 ml), the larger capture was determined by the position and number of holes, which in the Hemitrap® seemed to be more adequate.
Regarding other Drosophilidae, a total of 489 adults were collected in the experiment, representing 23.7% of the total drosophilids sampled in this study. This result reflects a certain selectivity of the attractant used and proposed by Santos (2016), since larger percentages of other Drosophilidae have been reported in studies with SWD, for example when using apple vinegar (Lee et al., 2012). Of the traps evaluated, the Brazilian model showed selectivity for SWD collection of 90 and 80% in red and transparent color, respectively. A similar result was obtained with the American model (84.2%), while the European model recorded the lowest value (68.2%) (Table 3).
The results obtained in this study showed the possibility of using traps made with PET bottles to monitor SWD in blackberry orchards. This type of trap did not yield the highest number of collected insects, an attribute of the European model, but when considering other aspects, such as the proportion of insects per ml of attractant, number of holes and entrance area, the PET trap (Brazilian model) presented statistically superior performance. Another important aspect is that it is made of reusable material, with low cost for trap construction. In addition, the Brazilian model presented low capture of other Drosophilidae, which simplifies sorting of the collected material. The reduced amount of attractant used in the Brazilian model is another favorable point, because it uses only 40 ml, against 250 ml in the other traps. It should be noted that the Brazilian model is proposed for use in SWD monitoring to support decision-making on control strategies, not for mass collection strategies, for which the European model (Hemitrap®) would be the most appropriate.
For Lee et al. (2012, 2013), the most efficient trap model for SWD monitoring should be the one with the highest insect capture, suitability to the needs of the producer, and ease of handling and acquisition. Thus, traps made with PET bottles proved to be an efficient alternative for SWD monitoring in blackberry orchards.
Conclusions
The evaluated traps do not interfere with capture according to sex in blackberry orchards. The European model (Hemitrap®) captures the largest number of SWD adults and other Drosophilidae.
By taking into consideration the number of insects collected per ml of attractant, per entrance area, per hole, per evaporation surface and selectivity, the Brazilian model transparent trap is the most efficient.
Table 1 .
Total number and sex ratio of adults of Drosophila suzukii collected in different trap designs in a commercial orchard of blackberry cv. Chester. Vacaria, RS, May 2016.
Table 2 .
Mean (± SE) of adults of Drosophila suzukii captured in different trap designs, as a function of the amount of attractant (ml), number of holes, insect entrance area (mm²) and evaporation surface. Means followed by the same letter in the column do not differ statistically by the Tukey test at 5% probability.
Table 3 .
Number and percentage of Drosophila suzukii and other Drosophilidae collected with different trap designs in a commercial blackberry orchard cv. Chester. Vacaria, RS, May 2016. | 2019-05-07T13:41:00.596Z | 2019-03-07T00:00:00.000 | {
"year": 2019,
"sha1": "9f2aa495813e4bdbebd6ce1f013f4e6745eb6348",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJAR/article-full-text-pdf/856FC0060462.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d69724dd381c5e1eaa75f268e6f30dd5d6a4eb98",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
18013968 | pes2o/s2orc | v3-fos-license | Isotemporal classes of n-gons
Here I present the first major result of a novel form of network analysis: a temporal interpretation. Treating numerical edge labels as the times at which interactions occur between the two vertices comprising each edge generates a number of intriguing questions. For example, given the structure of a graph, how many "fundamentally" different temporally non-isomorphic forms are there, across all possible edge labelings? Specifically, two networks, N and M, are considered to be in the same isotemporal class if there exists a function α: N → M that is a graph isomorphism and preserves all paths in N with strictly increasing edge labels. I present a closed formula for the number of isotemporal classes N(n) of n-gons. This result is strongly tied to number theoretic identities; in the case of n odd, N(n) = (1/n) Σ_{d|n} (2^{n/d−1} − 1)ϕ(d), where ϕ is the Euler totient function.
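The odd-case formula can be checked numerically. Below is a sketch (not part of the paper, function names are mine) that evaluates the closed formula and, independently, brute-forces the count by enumerating the cyclic ±1 sequences of comparisons between consecutive edge labels; by the line-graph correspondence developed later (Theorem 3.1), isotemporal classes of an n-gon correspond to orbits of these sequences under rotation and reversal-with-negation.

```python
from itertools import product
from math import gcd

def totient(d):
    """Euler's totient: count of 1 <= k <= d with gcd(k, d) == 1."""
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

def n_classes_odd(n):
    """Closed formula N(n) = (1/n) * sum_{d|n} (2^(n/d - 1) - 1) * phi(d), n odd."""
    total = sum((2 ** (n // d - 1) - 1) * totient(d)
                for d in range(1, n + 1) if n % d == 0)
    return total // n

def n_classes_brute(n):
    """Count orbits of non-constant sequences s_i = sgn(tau(e_{i+1}) - tau(e_i))
    under rotation and reversal-with-negation (the dihedral action)."""
    seen = set()
    for s in product((1, -1), repeat=n):
        if len(set(s)) == 1:
            continue  # edge labels cannot strictly increase all the way around
        orbit = []
        for r in range(n):
            rot = s[r:] + s[:r]
            orbit.append(rot)
            orbit.append(tuple(-x for x in reversed(rot)))
        seen.add(min(orbit))
    return len(seen)
```

For n = 3, 5, 7, 9, both computations agree, giving 1, 3, 9 and 29 isotemporal classes.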
Definitions and the Problem
As a field, the mathematical analysis of networks has both sophistication and remarkable diversity. We'll begin with an example. Let graph theorists A, B, C, D and E belong to a strange academic society that meets often, but only two members at a time. Their most recent meeting history is given in Figure 1A.
Forgetful Professor B returned from a January trip to Paris bearing a miniature Eiffel Tower key-chain. Since then, he has misplaced the souvenir but clearly remembers lending it to another member of the society, although he is uncertain which.
Precise conclusions based on the information provided in Figure 1A can be derived using a temporal interpretation of the network of interactions. For convenience, we can order the times of the meetings (the temporal labels) in T, and replace the date of the first event with the number 1, the second date with the number 2, and so forth. This network is shown in Figure 1C.
Desperate to recover his miniature, Prof. B begins analyzing the temporal network. He concludes that any of his colleagues D, E or C could be in possession of the key-chain. D could have it, had B passed it to him on July 8 (time 4). Likewise C could have it after the meeting at time 2; furthermore, C could have passed it on to E at their time 5 meeting. Only A could not possibly possess the trinket, since A met with D and E before either of those two could possibly have acquired it.
That E may possess the object reflects a temporal connectedness between B and E that motivates the following definition.
After emailing E, C and D regarding the key-chain, the society's secretary contacted all the members to inform them that the April 1 meeting between A and D had been entered into the records incorrectly. The actual date was indeed in April, but which day of that month is unknown.
If we assume that the ambiguous date was actually April 19, this changes the order of the interactions. The temporal network which corresponds to this alternative is shown in Figure 1D. However, inspection of these two networks reveals that every temporal path in network 1C is also a temporal path in network 1D. Indeed, the two networks are temporally isomorphic.
Let N = {V, E, T, τ} and M = {V′, E′, T′, τ′} be temporal networks. If the function φ: V → V′ preserves edges and preserves temporal paths, then φ is a temporal isomorphism and N and M are temporally isomorphic. Any two networks which are temporally isomorphic are said to belong to the same isotemporal class, and if a particular function φ satisfies at least the edge preservation condition for networks N and M, it is said to be a graphical isomorphism between the two networks. Temporal isomorphism and graphical isomorphism between M and N are denoted M ≅_T N and M ≅_G N, respectively. That two temporal networks (i.e. Figures 1C and 1D) can have fundamentally different temporal labelings, but belong to the same isotemporal class, is an important property. Under a temporal interpretation, the temporal paths through a network (the paths over which an object could progress) are, in a sense, more fundamental descriptors of the network than the particular order in which the interactions occurred.
An attempt to understand all the different temporal variants of a graph, such as the 5-gon shown in Figure 1, would be well served by determining the number of different 5-gon isotemporal classes. A more ambitious version of this question is: for a particular n, how many isotemporal classes N(n) of the n-gon are there?
In the line graph L(N) of a temporal network N, each edge e of N becomes a vertex w_e, with two such vertices adjacent when the underlying edges share an endpoint. The edge between w_{e_i} and w_{e_j} is directed toward w_{e_i} if τ(e_i) > τ(e_j), and toward w_{e_j} if τ(e_i) < τ(e_j). We write w_{e_i} → w_{e_j} if the edge between w_{e_i} and w_{e_j} is directed toward w_{e_j}. The line graph of our example temporal network is shown in Figure 2A.
Two line graphs are said to be directionally isomorphic (≅_D) if, in addition to edge preservation, there is preservation of the directedness of each edge. The line graph L(G) of a temporal n-gon provides a useful tool for counting the number of isotemporal classes because of the useful fact that, for a temporal n-gon N, N ≅_G L(N). This follows immediately from the definition of the line graph; within any n-gon, one can inscribe another n-gon by rotation of 180/n degrees.
This fact is required to show that every isotemporal class of an n-gon can be uniquely and entirely described by a single directed line graph.
, v_j is a temporal path (as it must be the case that either τ({v_j, v_{j+1}}) > τ({v_{j+1}, v_{j+2}}) or vice versa). Without loss of generality, we will assume that the former is a temporal path. By the definition of the line graph, this function clearly preserves edges, and since it preserves the directedness of the edges, it is a directional isomorphism.
To show the converse, that L(N) ≅_D L(M) implies N ≅_T M, we invoke the fact that N ≅_G L(N). Let the functions σ_1: N → L(N) and σ_2: M → L(M) be such graphical isomorphisms. Graphical isomorphism is an equivalence relation, and so φ: N → M will be a graphical isomorphism, since directional isomorphism implies graphical isomorphism. If φ preserves temporal paths, it will be a temporal isomorphism. Because N is an n-gon, any temporal path will be in one of two forms; without loss of generality, we will assume it is the former. By the definition of the line graph and the application of σ_1, and because the directedness of these edges is preserved under the application of σ_2, it follows that u_d, u_{d+1}, . . . , u_{d+m} is a temporal path in M, where τ is the temporal labeling in M. Therefore, the function φ(v_c) = u_d is a temporal isomorphism from N to M.
This theorem places isotemporal classes in one-to-one and onto correspondence with isodirectional classes of line graphs. So, in order to determine N (n), we need only count the number of line graphs up to directional isomorphism. Given the trickiness of the counting arguments to come, we are well served to even further simplify our representation of isotemporal classes.
Definition 3.2 The plus-minus form (±-form) of an n-gon N = {V, E, T, τ} is an n-gon labeled according to the following scheme. Noting the directedness of edges in L(N), edge e_a receives a "+" label if w_{e_{a−1}} → w_{e_a} and w_{e_a} ← w_{e_{a+1}}, and a "−" label if w_{e_{a−1}} ← w_{e_a} and w_{e_a} → w_{e_{a+1}}. If w_{e_{a−1}} → w_{e_a} and w_{e_a} → w_{e_{a+1}}, or w_{e_{a−1}} ← w_{e_a} and w_{e_a} ← w_{e_{a+1}}, then edge e_a receives a "0" label. The ±-form of our example temporal network is shown in Figure 2B.
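A small sketch of the labeling scheme (a hypothetical helper, not from the paper): since the line-graph edge between two adjacent edge-vertices points at the one with the later label, e_a gets "+" exactly when its label exceeds both cyclic neighbours, "−" when it is below both, and "0" otherwise.

```python
def pm_form(times):
    """Compute the plus-minus form of a temporal n-gon from its cyclic edge labels."""
    n = len(times)
    labels = []
    for a in range(n):
        left, right = times[a - 1], times[(a + 1) % n]
        if times[a] > left and times[a] > right:
            labels.append("+")   # both line-graph arrows point at e_a
        elif times[a] < left and times[a] < right:
            labels.append("-")   # both arrows point away from e_a
        else:
            labels.append("0")
    return tuple(labels)

# An illustrative 5-gon labeling (not the Figure 1C network):
print(pm_form((1, 4, 2, 5, 3)))  # ('-', '+', '-', '+', '0')
```

The + and − labels come out in equal numbers and alternate around the cycle, consistent with the properties listed next.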
There are several additional useful properties of the ±-form of n-gons that follow directly from the definition. • There must be at least one edge of A labeled with a +.
• Any path through a ±-form that starts and ends on edges labeled +, and contains no other + labels, will have within it precisely one edge labeled with a −.
Let the Counting Begin
Curiously, this implies that in examining the labels of a ±-form in turn, we will find the + and − labels alternating, interspersed with an arbitrary number of 0 labels. You can see this pattern in Figure 2B.
Here is our strategy for finding a formula for N(n), the number of isotemporal classes of an n-gon: 1) count the number of distinct ways edges can be selected on an n-gon to receive non-zero labels; 2) then consider, for each case, whether labeling an arbitrary first edge with a + or − label generates a different ±-form. For the first part of this argument, we will need to invoke the help of the choose function.
The number returned by the choose function C(n, k) can be interpreted as the number of order-non-specific ways to select k objects from a pool of n distinct objects. If we let the pool of n objects be the set X = {1, 2, . . . , n}, then C(n, k) returns the number of distinct subsets of X of order k. Each of these subsets can be used to identify a class of labelings of a ±-form of an n-gon by identifying those edges of the n-gon that are to receive non-zero labels. See Figure 3A. By no means does the choose function identify each distinct footprint uniquely, or even consistently. For example, all the footprints in Figure 3A are rotationally equivalent, and for this footprint, the choose function will identify 8 replicates. For the footprint shown in Figure 3B, only two replicates of the footprint will be identified. Additionally, the footprint in Figure 3C is a mirror reflection of the first footprint of Figure 3A; the two represent label-isomorphic ±-forms, but are identified by the choose function as distinct.
Four forms of symmetry will interfere with identifying distinct ±-forms: mirror symmetry, skewed mirror symmetry, rotational symmetry, and skewed rotational symmetry. Examples, of ±-forms and corresponding footprints of n-gons with eight of sixteen possible combinations of these types of symmetry are given in Figure 4A.
Definition 4.2 In an n-gon, an edge axis of symmetry running through e_a and e_{a+n/2} (A_{e_a}) is an axis of mirror symmetry if f(e_{a−k}) = f(e_{a+k}) for all k, and an axis of skewed mirror symmetry if f(e_{a−k}) = −f(e_{a+k}) for all k; vertex axes of symmetry are treated analogously.
With those definitions, we can now approach the first task of our strategy, determining the number of distinct footprints.
Theorem 4.1 The number of footprints (up to reflective isomorphism) of an n-gon with n odd is M(n) = (1/n) Σ_{d|n} (2^{n/d−1} − 1)ϕ(d), where ϕ(d) is the Euler totient function, which returns the number of integers between 1 and d that are coprime to d.
Proof - We will begin with the odd case, where n = 2k + 1. If each footprint indicates edges of N that receive non-zero labels, it must contain 2, 4, . . . , or 2k edges, since the number of + labels must equal the number of − labels. Thus, the term Σ_{i=1}^{k} C(n, 2i) counts all the footprints at least once. This can be simplified using basic binomial identities to 2^{n−1} − 1.
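The simplification in the last step is the even-index binomial identity Σ_{i≥0} C(n, 2i) = 2^{n−1}; subtracting the empty selection C(n, 0) = 1 leaves 2^{n−1} − 1. A quick numerical check (a sketch, with my own naming):

```python
from math import comb

def footprint_count(n):
    """Sum over all even, non-empty footprint sizes 2, 4, ..., n - 1 (n odd)."""
    k = (n - 1) // 2
    return sum(comb(n, 2 * i) for i in range(1, k + 1))

for n in (3, 5, 7, 9, 11, 13, 15):
    assert footprint_count(n) == 2 ** (n - 1) - 1
```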
However, as we see in Figure 3, if a footprint lacks rotational symmetry, it will be represented either 2n or n times by the choose function, according to whether it lacks or has reflective symmetry, respectively. And if the footprint has at most d-fold rotational symmetry (as in Figure 3B), this term will identify it n/d times.
Since this formula does not claim to equate left- and right-hand reflections of a footprint, we will only consider the mis-representation by the choose function of those footprints with rotational symmetry.
It is our goal to compensate for the under-representation of rotationally symmetrical footprints by the choose function, so that each footprint is counted either n or 2n times depending on whether it has reflective symmetry. We will identify those footprints with at least d-fold symmetry with each term of the following formula: Σ_{d|n, d≠1} ∆_d. Here ∆_d is a correction factor specific to each d-fold symmetrical footprint that increases the number of occurrences of the under-represented class from n/d to n. When ∆_{d_1} corrects for each d_1-fold symmetrical footprint, the term also corrects, to the same degree, all labelings with d_1·d_2-fold symmetry.
Let us consider p_1-fold symmetrical labelings where p_1 is prime. As a prime, p_1 has no sub-divisors. Since each application of the choose function will identify each p_1-fold symmetrical footprint n/p_1 times, and n/p_1 have already been identified by the initial 2^{n−1} − 1 term, ∆_{p_1} = p_1 − 1, since ∆_{p_1}(n/p_1) + (n/p_1) = n = (p_1 − 1)(n/p_1) + (n/p_1). It is not a coincidence that ∆_{p_1} = ϕ(p_1). We will prove that ∆_d = ϕ(d) by induction on the number of sub-divisors of d, and we have already shown that when d has no divisors, ∆_d = ϕ(d). So, assume that for all d_i | d, ∆_{d_i} = ϕ(d_i). Since any ∆_{d_i} will contribute to the number of accumulated representations of footprints with d-fold symmetry, we can calculate ∆_d as follows. In these equations, ϕ(1) is subtracted since 1-fold symmetry corresponds to the rotationally asymmetrical case, which is accounted for by the 2^{n−1} − 1 term, and ϕ(d) is subtracted since there is no previous term accounting for d-fold symmetry. Invoking the number theoretic fact that n = Σ_{d|n} ϕ(d) to substitute and simplify, we have ∆_d = ϕ(d). This completes the second half of the proof by induction, and allows us, therefore, to combine terms for when n is odd.
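The induction can be replayed numerically. Each d-fold symmetrical footprint must end up counted exactly n times: the initial term contributes n/d copies, and every correction ∆_{d_i} for a divisor d_i > 1 of d contributes n/d more, so 1 + Σ_{d_i|d, d_i>1} ∆_{d_i} = d. Solving this recurrence (a sketch with my own naming) reproduces ∆_d = ϕ(d), matching the identity n = Σ_{d|n} ϕ(d) used in the proof:

```python
from functools import lru_cache
from math import gcd

def totient(d):
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

@lru_cache(maxsize=None)
def delta(d):
    """Correction factor: copies of a d-fold symmetric footprint must total n,
    i.e. (n/d) * (1 + sum of delta over divisors > 1 of d) = n."""
    return d - 1 - sum(delta(di) for di in range(2, d) if d % di == 0)

assert all(delta(d) == totient(d) for d in range(2, 61))
```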
The proof of the even case of this formula is highly analogous; for n even, M(n) = 1 + (1/n) Σ_{d|n} (2^{n/d−1} − 1)ϕ(d), where the addition of 1 derives from a case that arises only for n even. What good is this formula, if it considers two footprints, isomorphic under reflection, to be distinct? As we will see, this result is sufficient to determine N(n) for n odd. Furthermore, it is related to the number of binary necklaces fixed in the plane, (1/n) Σ_{d|n} ϕ(d) 2^{n/d} [1]. Recall that our temporal networks are not "fixed": labelings isomorphic under reflection are considered identical. Assume, without loss of generality, that ±(N) has an edge axis of skewed mirror symmetry; f(e_{b−k}) = −f(e_{b+k}) for some b and any k. Let φ: ±(N) → −±(N) by φ(e_a) = e_{a+2(b−a)}. Since these edges are symmetrically far from e_b, their labels will be + and −, or 0 and 0. Thus, φ will preserve edge labels from ±(N) to −±(N).
This proposition tells us exactly when alternatively labeling an arbitrary "first edge" of a footprint with a + or a -yields different ±-forms: only when the ±-form has neither skewed mirror symmetry nor skewed rotational symmetry.
Further examination of the dihedral group and the choice of labeling the arbitrary first edge with a + or a − convinces us that the four cases of symmetry we have considered, skewed mirror, mirror, rotational and skewed rotational, are indeed the only possible cases of symmetry that lead to miscounting by the choose function. This lets us determine, for all combinations of symmetry, whether the choose function has mis-counted the number of isomorphically distinct footprints, and the number of distinct ±-labelings (up to isomorphism: one or two) that each footprint needs to represent in our final formula (see Figure 5: Column A).
Here, a "1" indicates a combination of symmetries such that the ±-form of such a network, P , is directionally isomorphic to −P (P and −P are identical); a "2" label indicates networks where P is not isomorphic to −P (and therefore, each footprint must represent 2 isotemporal classes). Figure 5: Column B gives the number of replicates of a particular footprint (again, up to isomorphism) identified by the formula of Theorem 2. Recall that left and right hand reflections were considered different footprints in that formula, so footprints without any reflective symmetry were counted twice. • for n = 2k + 1, N (n) = 1 n ( d|n (2 n/d−1 − 1)ϕ(d))) • for n = 4k + 2, N (n) = 1 n ( d|n 2 n/d−1 ϕ(d) − c| n 2 2 n/2c−1 ϕ(2c)) + 2 n−4 2 • for n = 4k, N (n) = 1 n ( d|n 2 n/d−1 ϕ(d) − c| n 2 2 n/2c−1 ϕ(2c)) + 2 Proof -Let us first examine the n odd case. By Lemma 3.1 and Proposition 3.1, we can eliminate any case of symmetry in which mirror or skewed rotational symmetry appear. Therefore only the first four rows of Figure 5 correspond to possible cases, and within these rows, the number of ±-forms that correspond to a particular footprint (Column A) is identical to the number of copies of each footprint identified by the formula given in Theorem 2 (Column B). Therefore, that formula satisfies the odd case of this theorem.
Needless to say, the even cases will be more complicated.
Since the odd-formula does not return the correct number (Column A) of ±-labelings for four different categories of footprint (these are indicated with asterisks in Figure 5), additional correction terms are required. This correction will be done by adding or subtracting one replicate of each footprint in batches corresponding to cases of symmetry, so that after all the correction terms are taken into account, the sum of the counting terms of Columns B through F, across each row, will equal that in A.
In Column C, for each ±-form with mirror symmetry, another replicate is added. Column D subtracts a ±-form replicate for each labeling with mirror and skewed mirror symmetries. Column E subtracts another ±-form replicate for each labeling with skewed rotational symmetry, and finally Column F adds a ±-form replicate for all labelings with skewed rotational and skewed mirror symmetries. The sum across each row of these correction terms and the initial value given by the odd-formula (Column B) is given in Column G.
As Columns G and A are identical, implementing this sequence of corrections to the odd formula will yield the correct formula in the even cases; this is our road map for the rest of the proof.
How are we going to count the number of footprints that have only skewed mirror symmetry and reflective skewed symmetries, or the number of footprints that have skewed rotational and rotational symmetries? In each case, a careful counting argument will give us the values we are interested in.
Column C - Adding a Replicate for Mirror Symmetry: Since an axis of mirror symmetry must pass through edges with non-zero labels, consider two polar edges of N "fixed." Each distinct half-footprint on one side of this axis of mirror symmetry will determine the footprint of the whole n-gon. Therefore, N has skewed mirror and mirror symmetries if and only if it has skewed mirror and skewed rotational symmetries. Thus, the cases to be identified in Columns D and F are one and the same, and ±-forms with only skewed mirror and mirror symmetries, and likewise ±-forms with only skewed rotational and skewed mirror symmetries, cannot exist, since they both imply the existence of the third type of symmetry. These cases are marked by double asterisks in Figure 5. Since in Column D we were to subtract the number of such cases, while adding them in Column F, the net contribution of the correction terms generated by these two columns is zero.
This property is remarkably convenient. All we need now is the number of ±-forms with skewed rotational symmetry. In order to count the number of skewed rotational footprints, we will need to use a similar argument as that used in Theorem 2. Summing over possible even c-folds, the number of ways to select an odd number of edges from $n/2c$ edges is $\sum_{c \mid \frac{n}{2}} \sum_{k=0}^{\lceil \frac{n}{4c}-1 \rceil} \binom{n/2c}{2k+1} = \sum_{c \mid \frac{n}{2}} 2^{n/2c-1}$. In order to count each occurrence $n/2$ times, we must introduce a correction factor similar to $\Delta_d$.
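The closed form used above rests on the standard identity that the number of odd-sized subsets of m items is $2^{m-1}$. A quick numerical check (our own illustration, not from the paper):

```python
from math import comb

def odd_subsets(m):
    # number of ways to choose an odd number of items out of m
    return sum(comb(m, k) for k in range(1, m + 1, 2))

# the identity sum_k C(m, 2k+1) = 2^(m-1) holds for every m >= 1
for m in range(1, 16):
    assert odd_subsets(m) == 2 ** (m - 1)
```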
An argument analogous to that given in the proof of Theorem 2 shows that $\frac{2}{n}\sum_{c \mid \frac{n}{2}} 2^{n/2c-1}\varphi(2c)$ counts each reflectively asymmetric ±-form with skewed rotational symmetry twice, and each reflectively symmetric one once. Column E demands that we subtract from our formula only one replicate of each ±-form that has skewed rotational symmetry. Therefore the term $\frac{1}{2}\left(\frac{2}{n}\sum_{c \mid \frac{n}{2}} 2^{n/2c-1}\varphi(2c) + \Lambda\right)$ will give the number of cases, taking into account reflective asymmetry. Here Λ is the number of ±-forms with skewed rotational symmetry and some kind of reflective symmetry (i.e., those that are only counted once by the summation term). As we saw above, if a ±-form has skewed rotational symmetry and some form of reflective symmetry, then it must have both mirror and skewed mirror symmetries. And with both kinds of reflective symmetry present, N must contain at least two perpendicular axes of symmetry.
In general, we must consider the possibility that the axis perpendicular to the mirror axis could be either another axis of mirror symmetry, or an axis of skew symmetry, and that these two cases need to be counted separately. Let the number of ±-forms with at least two axes of symmetry (our correction factor) be Λ = Λ skew + Λ mirror , the sum of the number of ±-forms where the perpendicular axis is a skewed mirror axis or a mirror axis, respectively.
If n = 4k + 2, the axis perpendicular to the axis of mirror symmetry must be a skewed mirror axis, as it passes through vertices. Here, $\Lambda_{mirror} = 0$, and since all other cases have a perpendicular skewed mirror axis, determining the number of quarter-footprints will determine the number of ±-gons with this form ($\Lambda_{skew}$). There are $\frac{n-2}{4}$ edges in this quadrant which can be independently included or not in the quarter-footprint. Therefore $\Lambda_{skew} = 2^{\frac{n-2}{4}} = \Lambda$ is the number of (4k+2)-gons with skewed rotational symmetry and reflective symmetry.
If n = 4k, the axis perpendicular to the mirror axis placed by assumption can be either an edge skewed mirror axis, or an edge mirror axis. Assuming the former, $\Lambda_{skew} = 2^{\frac{n-4}{4}}$, as there are $\frac{n-4}{4}$ edges in the quadrant between the mirror axis and the edge skewed mirror axis.
If the perpendicular axis is another mirror axis, then, because skewed mirror and mirror axes must alternate, there must be an odd number of axes between the perpendicular mirror axes. Therefore, one axis (either mirror or skewed mirror) must be a bisecting axis between the two perpendicular axes. Therefore, the number of ±-forms with skewed rotational symmetry is
Acknowledgments. This paper would not have been possible without the continued guidance and insightful comments of Tristan Tager.

A: Footprints without reflective symmetry are counted n times in the $\binom{n}{k}$ term that identifies them. B: Examples of footprints of the 8-gon with 4-fold rotational symmetry. Footprints with d-fold rotational symmetry are represented n/d times. C: Those footprints without reflective symmetry are additionally counted for their "left" and "right-hand" versions. The footprint in C is isomorphic to all those in A, but is identified by a distinct footprint.

Figure 5: All Possible Symmetries and a Strategy for Calculating the Even Formula - Column A gives the number of isotemporal classes each footprint must ultimately represent. The number of replicates of a particular footprint, as identified by the odd-formula, is given in B. C, D, E, and F represent corrections taken to revise the values in B to equal those in A: respectively, addition of mirror symmetric cases, subtraction of mirror and skewed mirror symmetric cases, subtraction of skewed rotational cases, and addition of skewed mirror and skewed rotational cases. G gives the sum of B through F, and, as the correction strategy is sound, has entries equal to the goal of A. *: rows where the odd-formula and the goal differ. **: combinations of symmetry that are impossible. | 2014-10-01T00:00:00.000Z | 2005-01-11T00:00:00.000 | {
"year": 2005,
"sha1": "f9dc956503e1ca4f7939a025d4811e9fa1b22801",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "828477cb3231db1828b0a501b0bf16af91bdfe23",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
70410241 | pes2o/s2orc | v3-fos-license | Phaeochromocytoma with Histopathologic Aspects
Phaeochromocytoma is a term used for catecholamine secreting tumors that arise from chromaffin cells of sympathetic paraganglia. The new World Health Organisation (WHO) classification of endocrine tumors has recommended to reserve the term phaeochromocytoma for intraadrenal tumors only and the others are defined as sympathetic or parasympathetic paragangliomas, further categorised by site. Although it was the first adrenal tumor to be recognised, the term phaeochromocytoma was introduced many years later by Pick in 1912. The name is based on the fact that the tumors get dark brown after exposure to potassium dichromate because of chromaffin reaction.
Introduction
Phaeochromocytoma is a term used for catecholamine secreting tumors that arise from chromaffin cells of sympathetic paraganglia. The new World Health Organisation (WHO) classification of endocrine tumors has recommended to reserve the term phaeochromocytoma for intraadrenal tumors only and the others are defined as sympathetic or parasympathetic paragangliomas, further categorised by site. Although it was the first adrenal tumor to be recognised, the term phaeochromocytoma was introduced many years later by Pick in 1912. The name is based on the fact that the tumors get dark brown after exposure to potassium dichromate because of chromaffin reaction.
Anatomy
The human adrenal glands are located in the retroperitoneum superomedial to the kidneys. They are composite endocrine organs made up of cortex and medulla, which have different embryonic origin, function and histology. On a fresh or formalin-fixated cut surface the two portions, a relatively thick outer yellow cortex and an inner, pearly gray medulla, are readily visible. The medulla is mainly situated in the head and partly the body of the organ. It may variably extend to the tail and focally to the alae. Its weight comprises about 8%-10% of the total. The medulla is of neuroectodermal origin and secretes and stores catecholamines, especially epinephrine.
Histology
On histological examination the cortex-medulla junction is sharp with no intervening connective tissue, but the border is irregular. The medulla is mainly composed of chromaffin cells (phaeochromocytes, medullary cells) that are arranged in tight clusters and trabeculae separated by a reticular fiber network. Embryologically, they are modified sympathetic postganglionic neurons which have lost their axons. They are all innervated by cholinergic endings of preganglionic sympathetic neurons. There are sustentacular cells at the periphery of the clusters which can only be demonstrated by immunostaining for S-100 protein.
The chromaffin cells are polygonal to columnar and larger than cortical cells. They have basophilic cytoplasm with fine secretory granules and/or vacuoles. These granules contain catecholamines and derivates of tyrosine which transform to colored polymers by oxidizing agents such as potassium dichromate and ferric chloride. This staining is called the chromaffin reaction, which has been replaced by formaldehyde methods for detection of catecholamines because of its relatively low sensitivity. Among chromaffin cells are randomly scattered individual or grouped parasympathetic ganglion cells that are often associated with a nerve. Small clusters of cortical cells are also a usual component of the medulla. Small groups of lymphocytes and plasma cells may be seen within the medulla but their significance is unknown.
Ultrastructure
Ultrastructural examinations have shown that epinephrine and norepinephrine are secreted by two different types of cells. Epinephrine secreting cells have smaller, moderately electron-dense granules that are closely applied to their limiting membranes. Norepinephrine secreting cells' granules are larger, more electron-dense and have an electron-lucent layer beneath the surrounding membrane forming a halo. The nuclei are usually larger than those of cortical cells and have finely or coarsely clumped chromatin. Most nuclei are spheroidal and show slight pleomorphism.
The paraganglia
Sympathetic paraganglia (SP) are distributed along paraaxial regions of the trunk along the prevertebral and paravertebral sympathetic chains and in connective tissue in the walls of pelvic organs. However, parasympathetic paraganglia (PSP) are found along cranial and thoracic branches of the glossopharyngeal and vagus nerves. Among SP the organ of Zuckerkandl, located at the origin of the inferior mesenteric artery, is characteristic, being the only macroscopic extraadrenal paraganglion. Similarly, PSP are highly variable in number and location and don't have specific names, except for the carotid bodies, which are located between the carotid arteries just above the carotid bifurcation. Apart from a different clinical standpoint, SP and PSP are similar at the cellular level.
Histopathology of phaeochromocytoma
Sporadic phaeochromocytomas make up about 50% of all phaeochromocytomas and are usually unilateral and unicentric, while more than 50% of familial forms are bilateral and coexist with extraadrenal sympathetic and parasympathetic paragangliomas. Patients with MEN type 2, VHL or NF type 1 are known to have an increased risk for phaeochromocytoma.
Macroscopic examination
Gross examination highlights a tumor 3-5 cm in diameter, which can be more than 10 cm. Tumor weight may range from a few grams to over 3500 g, with an average of 100 g in hypertensive patients. The cut surface is solid, gray-white, light tan or dusky red and darkens on exposure to air (Figure 1). Hemorrhage, central degeneration, necrosis, cystic change and calcification are not uncommon. The adrenal gland can usually be seen compressed or incorporated within the tumor. An adrenal gland containing phaeochromocytoma should be carefully dissected, since diffuse and nodular hyperplasia can be found, suggestive of a familial form.
Pheochromocytoma -A New View of the Old Problem 18
Microscopic examination
Microscopically, similar to the usual corticomedullary border, the cortex-tumor border is irregular and there is a pseudocapsule rather than a true capsule. The most common histologic pattern is alveolar (Zellballen) and trabecular, or a mixture of the two, bound by a delicate fibrovascular stroma (Figure 2). A diffuse or solid pattern can also be encountered. Tumor cells resemble usual chromaffin cells but are slightly larger. Sometimes nuclear and cellular pleomorphism is pronounced. Nuclear pseudoinclusions can be seen resulting from deep cytoplasmic invaginations. Occasional mitotic figures are present but they don't exceed 1/30 hpf. Intracytoplasmic hyaline globules are common. Their presence may aid in differentiating phaeochromocytoma from adrenal cortical neoplasms. Interstitial amyloid deposition and small amounts of melanin pigment representing neuromelanin may be present. Hemorrhage and hemosiderin deposits are common and scattered ganglion cells can be encountered. Sometimes tumor cells may undergo lipid degeneration and this may lead to confusion with cortical tumors. Exceptionally, the cells of phaeochromocytoma may contain a large number of mitochondria, which give the cells an oncocytic appearance. Spindle shaped sustentacular cells form a second cell component of phaeochromocytoma, forming a peripheral rim around the Zellballen, similar to the usual adrenal medulla. These cells have been encountered more frequently in phaeochromocytomas associated with MEN and in benign forms. Histopathologic diagnosis of phaeochromocytoma is based on morphology, but immunohistochemical techniques are usually used to confirm the diagnosis. Immunopositivity for neuron specific enolase, chromogranin-A and synaptophysin is characteristic. Extra-adrenal SP tumors are mostly solitary in adults and histologically resemble their adrenal counterpart. Dispersed along the paravertebral sympathetic chain, they are most common in the superior (45%) followed by the inferior (30%) paraaortic region. Urinary bladder, intrathoracic and cervical
paragangliomas can occasionally be seen. More than 25% of these tumors are functional and usually secrete norepinephrine. Approximately 50% of extraadrenal tumors are malignant, giving rise to metastases. PSP seldom produce catecholamine excess. Carotid body and jugulotympanic tumors are more common than aortic and vagal lesions. Carotid body tumors are more commonly bilateral in familial cases. Also, people living at high altitude are at a ten times higher risk for paraganglioma because of the hyperplastic response to hypoxic stimulus.
Malignant phaeochromocytoma
Malignant phaeochromocytomas comprise up to 10% of all phaeochromocytomas. The WHO 2004 classification of endocrine tumors defines malignant phaeochromocytoma only when there is metastasis to sites where paraganglial tissue is not otherwise found. As a matter of fact, there are no reliable histological criteria for classifying phaeochromocytoma as malignant at present; therefore no lesion can be definitely predicted as benign. There are new approaches to find significant histologic criteria for defining phaeochromocytoma as malignant. Large nests of tumor cells, necrosis, high cellularity, cellular monotony, nuclear hyperchromasia, macronucleoli, vascular or capsular invasion, increased mitotic figures and high Ki-67 proliferation index, extension of tumor into adjacent fat, catecholamine phenotype and absence of hyaline globules have all been shown to correlate with malignant behaviour in scoring studies in both phaeochromocytomas and extraadrenal sympathetic paragangliomas. Unfortunately, none of these criteria give exact discrimination; thus a histological gold standard is still lacking.
Composite phaeochromocytoma
Composite phaeochromocytoma or paraganglioma refers to a histological combination of phaeochromocytoma or paraganglioma with features of ganglioneuroma, ganglioneuroblastoma, neuroblastoma or peripheral nerve sheath tumour. There are fewer than 40 cases in the literature. The tumour was combined with ganglioneuroma in 80% and with ganglioneuroblastoma in 20% of all reported cases. They are usually seen in adults and symptoms are similar to typical phaeochromocytoma, as are the genetic abnormalities. About 90% occur in the adrenal gland and the remainder in the urinary bladder. Although ordinary phaeochromocytomas can contain scattered neuron-like or ganglion cells, the histopathological diagnosis of composite tumour requires both a different architecture and cell population. Present evidence shows that the origin of the neurons in these tumours is preexisting chromaffin or paraganglioma cells. Cell culture studies favor the view that both normal and neoplastic human phaeochromocytoma cells can undergo neuronal differentiation.
Adrenal medullary hyperplasia
Lastly, diffuse or nodular adrenal medullary hyperplasia may cause secretion of an excess amount of catecholamines and may lead to clinical phaeochromocytoma.
Conclusion
It is easy to define usual phaeochromocytoma histopathologically, but diagnosing malignant forms is problematic. Many studies should be done and molecular techniques should be designed to overcome this dilemma.
Fig. 1. Extra-adrenal paraganglioma with a nodular, tan cut surface. The adrenal gland can be seen in orange above the tumor (by courtesy of Prof. Dr. Filiz Ozyilmaz). | 2018-12-29T21:49:06.215Z | 2011-12-16T00:00:00.000 | {
"year": 2011,
"sha1": "3d2aab4e91b09b07774986823a812e732be16820",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/25179",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3d2aab4e91b09b07774986823a812e732be16820",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269141350 | pes2o/s2orc | v3-fos-license | Investigation of chemical, physical and morpho-mechanical properties of banana-plantain stalk fibers for ropes and woven fabrics used in composite and limited-lifespan geotextile
This study aimed to assess the potential of banana-plantain stalk fibers (BPSF) as a raw material for ropes and fabrics used in composites and geotextiles. Fibers were obtained by biological retting and the ropes used for geotextile weaving were obtained by three-strand twisting in order to optimize the mechanical properties of geostalk. The thermal, physical, chemical and mechanical characteristics of the fibers were studied in order to assess the impact of the extraction process on fiber performance. In addition, the microstructure of fibers and ropes was analyzed using Scanning Electron Microscopy (SEM) and the results highlighted the presence of cellulose microfibrils parallel to the fiber axis and hemicellulose linked by a lignin matrix. These constituents are organized in three concentric layers around the lumen. Elemental chemical analyses using X-ray energy dispersion (EDS), Fourier Transform Infrared (FTIR) spectroscopy and chemical deconstruction using the Jayme-Wise protocol were carried out to determine the chemical composition of BPSF, which consists of 51.5 % carbon, 47.07 % oxygen and mineral salts that can contribute substantially to soil fertilization after degradation. These chemical constituents represent 40 % cellulose, 21.5 % hemicellulose, 24 % lignin, 0.34 % pectin, 7.2 % liposoluble extractables and 7.36 % water-soluble sugars present in BPSF. The thermal properties of BPSF were investigated, showing initial degradation around 200 °C. Physical analysis and uniaxial tensile testing were performed to determine the multi-scale physical and mechanical properties of geostalk. Statistical evaluation using the Weibull distribution established an increasing rate of physical and mechanical properties from the finest scale to the macroscopic scale. Thus, from the BPSF to the ropes, titer increases from 42.5 ± 4.5 g/km to 7983.4 ± 132 g/km and elongation at break increases from 0.75 ± 0.29 mm for the fibers to 52.42 ± 18.91 mm for geostalk.
With a mass per unit area of 1869 g/m², a tensile stress of 1281.05 ± 273 MPa and a maximum strength of 15.4 ± 1.74 kN/m, geostalk is a sustainable woven fabric alternative to geosynthetics for soil reinforcement, like other limited-lifespan geotextiles (geojute, geocoir and geosisal). In addition, the thermal stability and high mechanical properties of the fibers and ropes suggest their potential application as reinforcing phases in composite materials.
Introduction
Soil reinforcement and coastal protection are still major concerns in construction and environmental engineering. Geosynthetic materials have been proposed as a solution for stabilizing soils and preventing landslides [1,2]. Hahladakisa et al. [3] estimate that only 0.1 % of the carbon contained in polyethylene terephthalate (PET) is transformed into CO2, while the rest of the constituents infiltrate the soil, defertilising it and destroying the flora. Marczak Daria et al. [4] analyze geosynthetic materials made from plastic polymers such as PET, butadiene styrene acid (ABS), polypropylene (PP) and polyurethane (PU), and conclude that their non-degradation poses a serious environmental problem because of soil pollution. Numerous research studies, such as [5,6], suggest replacing geosynthetics with biodegradable plant fiber geotextiles (LLGs) that comply more fully with environmental standards. In the same vein, to reduce environmental impact, several studies propose the use of plant fibers instead of synthetics. Thus, fibers from agro-waste such as Capsicum annuum stem or Bambusa flexuosa stem, which contribute to cleaner production [7,8], are used for bio-composites. According to Refs. [9,10], plant fiber geotextiles can be used to solve short-term geotechnical problems. They are useful in applications such as drainage [11,12], soil reinforcement [13], slope protection [12] and marine erosion control. Work by Kiffle et al. [11] and Sumi et al. [14] proposes the use of jute and coir, respectively, in the manufacture of geotextiles. Banana-plantain fibers are also attracting increasing scientific interest in the field of geotextiles. Several works, such as P. Pilien et al. [15][16][17], use banana fibers in geotextile-reinforced polymers and geotechnical engineering.
Although the results are quite promising, LLGs have a limited lifespan due to the rapid degradation of plant fibers before they have fulfilled their main function. There is an urgent need to examine the capacity of a plant fiber to be used as a raw material in the manufacture of a geotextile. The above-mentioned previous work on LLGs does not reveal the role of the chemical constituents of the fiber in the mechanical behaviour of the geotextile and the impact on its durability. The morphological, physical and mechanical characteristics at intermediate scales and the interactions between constituents at different scales have also not been investigated. A study [4] shows that variations in the chemical constituents and geometric profile of plant fibers have a direct influence on the mechanical properties of geotextiles, which makes their optimal use very complex. Another factor that directly influences the mechanical properties of LLGs once they are in use is the manufacturing technique [5]. Several stringing techniques exist, including knitting, twisting, fibrillating, warping and beaming [18]. Comparative studies of these techniques [18][19][20] show that twisting poses fewer problems of crimping, deformability, friction and the formation of short fibers, reducing areas of weakness in the woven fabric.
Global banana plantain production is estimated at nearly 30.5 million tons (Mt) per year [21]. In Cameroon, bananas are the third most important export product after fuel and wood. Cameroon's production increased from 1.16 Mt to 3.88 Mt between 2014 and 2019 [15]. The country generates a large annual quantity of unexploited banana lignocellulosic waste (pseudo-trunks, stalks) which is thrown away in nature, if not used as compost or fuel. This waste represents around 4.5 Mt of fresh biomass, and 4.0275 Mt of dry biomass containing 80.57 % organic matter [22]. The lack of appropriate technologies and the energy deficit in Cameroon create a serious problem for the exploitation of this biomass. There is no doubt about the availability of fibers from banana plantain, particularly from stalks. There is an urgent need to optimize the use of this potential in the manufacture of eco-materials such as geotextiles.
The aim of this study is to investigate the morphological, physicochemical, thermal and mechanical properties of BPSF and their application to a geotextile addressing the LLG problems mentioned above. To achieve this goal, we will first extract BPSF by biological retting, and examine their microstructure and elemental chemical composition using a Scanning Electron Microscope (SEM). FTIR analysis and chemical deconstruction will then be used to qualify and quantify the lignocellulosic constituents of BPSF. TGA/DTG will help to study the thermal stability of the fibers. Geostalk will be obtained by twisting fibers into ropes and weaving them into taffetas. Fibers, ropes and geostalk will be physically and mechanically characterized using diameter distribution, linear density, porosity, weight per unit area, water absorption and tensile tests. Finally, the elastic behavior in uniaxial tension at different scales will be analyzed using a statistical approach based on the Weibull distribution.
Banana plantain stalk fibers (BPSF) extraction
BPS (Fig. 1(a)) come from the waste disposal site of the NJOMBE PENJA 'P⋅H⋅P.' banana plantation in the Littoral region of Cameroon. Fibers were extracted from the stalks by biological retting, which preserves their chemical and mechanical properties as much as possible [5,10,[23][24][25]]. Retting was carried out by immersing the stalks in water for 12-14 days at room temperature, as shown in Fig. 1(b). After delignification, the fibers were detached from each other by hand and rinsed in cold water, as shown in Fig. 1(c), and then dried (Fig. 1(d)). Only fibers located at the periphery and in the intermediate zone (Fig. 1(e)) were retained, for their high mechanical strength [26]. Fig. 1(f) shows the extraction zones of BPSF.
The presence of knots on the periphery of the BPS explains the small size of peripheral BPSF, between 20 and 40 cm. Fibers in the intermediate section are denser and between 45 and 80 cm long.
Roping
Ropes were manufactured by twisting three strands (Fig. 2(a)) in order to limit the void ratio and obtain more rigid and tensile-resistant ropes, as reported in the literature [18,27,28]. Strands and ropes were twisted on the rope machine shown in Fig. 2(b). Strands were obtained from an average of 40 ± 5 fibers. To obtain high mechanical properties from the rope, the number of revolutions (trs) of the crank used was between 25 and 30 trs [19]. The final rope length, calculated from equation (1), depends on the initial fiber length and the twisting angles α_t and α_c of the strand and rope respectively [28].
$L_c = L_t \cos(\alpha_c) = L_f \cos(\alpha_t)\cos(\alpha_c) \quad (1)$

where $L_c$, $L_t$ and $L_f$ are the lengths of the rope, strand and fiber respectively. The twist angles are calculated from equation (2) [28]:
$\cos(\alpha_i) = \dfrac{P_i}{\sqrt{P_i^2 + (2\pi R_i)^2}} \quad (2)$

where $i$ denotes the strand or rope, $\alpha$ is the twist angle, $P$ is the pitch and $R$ is the radius of the strand or rope.
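The twist geometry can be sketched numerically. The helper below is a minimal illustration assuming the standard helix relations tan α_i = 2πR_i/P_i and L_c = L_f cos(α_t) cos(α_c); the function names and these exact forms are our assumptions, not necessarily the paper's printed equations.

```python
import math

def twist_angle(pitch, radius):
    # helix twist angle from pitch P and radius R: tan(alpha) = 2*pi*R / P
    return math.atan2(2 * math.pi * radius, pitch)

def rope_length(fiber_length, alpha_t, alpha_c):
    # length contraction through strand twisting, then rope twisting
    return fiber_length * math.cos(alpha_t) * math.cos(alpha_c)

# hypothetical geometry: strand pitch 30 mm / radius 1.5 mm,
# rope pitch 45 mm / radius 3 mm, from 500 mm of fiber
a_t = twist_angle(30.0, 1.5)
a_c = twist_angle(45.0, 3.0)
L_c = rope_length(500.0, a_t, a_c)
```

With no twist (zero angles) the rope length equals the fiber length; any twist shortens the rope, which is the contraction the weaving step has to account for.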
Weaving of geostalk
Geostalks are woven by hand. During weaving, the ropes in the warp direction were stretched and those in the weft direction were interlaced to form a plain weave (taffeta). Each geostalk rope was modeled as a beam assumed to be flexible and extensible. Corrugations have an impact on the tensile properties of the fabric [28][29][30][31].
Study of microstructure and elemental analysis (SEM/EDS)
Morphological and elemental analyses by Energy-dispersive X-ray spectroscopy (SEM/EDS) were carried out using a Model JSM-IT100 Scanning Electron Microscope (SEM) equipped with a silicon detector with guaranteed resolution ≤129 eV and magnification between ×5 and ×300. The device was set at a voltage of 15 kV and in low-pressure mode at 45 Pa; this method was recently used [32,33] to analyze the morphological structure of banana rachis and Neuropeltis acuminatas liana fibers. The longitudinal and transverse profiles and the elemental chemical composition of BPSF were determined.
Fourier Transform Infrared (FTIR) spectroscopy
FTIR analyses were carried out on a Vertex 70v spectrometer with a resolution of 4 cm−1 for 50 scans. Operating under vacuum, without atmospheric disturbance and in compliance with the ASTM E1252-2013 standard, this technique was used to supplement the results of the elemental chemical analysis by identifying the different groups of chemical components of BPSF [32,34,35].
Chemical deconstruction
The groups located in BPSF were quantified by deconstruction of the chemical constituents. Five steps are necessary to obtain cellulose according to the Jayme-Wise protocol recently used by Thomas Sango et al. [34]. This deconstruction consists of obtaining a mass of cellulose after the progressive destruction of all the other constituents. Klason lignin was obtained independently of cellulose according to the TAPPI T 222 om-88 standard used by Ndikontar [36].
Extraction was repeated twice for 8 h each in a Soxhlet at 95 °C using a 1/2 (v/v) ethanol/toluene solvent mixture and 30 g of BPSF powder. The first siphoning was obtained after 10 min. The second extraction was followed by filtration and oven drying at 105 °C for 2 h. The level of liposoluble extractables is obtained using equation (7).
$T_1 = \dfrac{m_0 - m_1}{m_0} \times 100 \quad (7)$

where $m_0$ is the initial mass of the first extraction and $m_1$ the final anhydrous mass.
The mass $m_1$ obtained after the elimination of liposolubles was boiled for 4 h at 90 °C in a reflux tube and then filtered under vacuum through a Whatman glass fiber filter. After drying in an oven at 105 °C, the anhydrous mass $m_2$ is obtained. The water-soluble sugar content $T_2$ was obtained by equation (8): $T_2 = \frac{m_1 - m_2}{m_0} \times 100$.
The mass $m_2$ obtained after the elimination of water-soluble sugars was mixed with 2 % hydrochloric acid and the mixture was stirred for 4 h at room temperature. The anhydrous mass $m_3$ is obtained after oven drying at 105 °C for 2 h. The pectin content $T_3$ was obtained by equation (9): $T_3 = \frac{m_2 - m_3}{m_0} \times 100$.
The mass $m_3$ obtained after the elimination of pectins is mixed with an acetic buffer solution (a stirred mixture of 27 g NaOH in 500 ml distilled water, 27 ml acetic acid and 2.7 g sodium chlorite). The mixture was boiled under reflux in a Soxhlet for 4 h at 90 °C, then filtered and dried in an oven at 70 °C until the anhydrous mass $m_4$ of holocellulose was obtained.
The mass $m_4$ is mixed with 150 ml of potassium hydroxide (KOH) under stirring at 90 °C for 15 h. The solution obtained is filtered and the residue rinsed with distilled water, then mixed with a 7.5 % w/v KOH solution, again under stirring for 2 h. Finally, the solution obtained is diluted in 3 % acetic acid (CH3COOH) and rinsed with ethanol (CH3CH2OH). The anhydrous white mass $m_5$ obtained after oven drying represents cellulose. The hemicellulose content is obtained from the difference between the holocellulose and cellulose contents.
Step 6: Lignin extraction (Klason) [36]. Mass mi (10 g) of BPSF powder was mixed with 100 ml of 72 % sulphuric acid and stirred for 2 h, then the mixture was brought to a boil under reflux for 4 h. The mixture was rinsed and filtered with distilled water and the process repeated four times. The residual mass was oven-dried at 105 °C for 24 h until the anhydrous mass m6 was obtained. Lignin content T4 is obtained by the formula in equation (10).
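The gravimetric bookkeeping behind equations (7)-(10) can be sketched as follows. The mass values below are hypothetical placeholders, back-calculated so that the resulting contents match the percentages reported later in this paper; they are not the measured laboratory data.

```python
# Sketch of the gravimetric content calculations in equations (7)-(10).
# All masses are hypothetical back-calculated values (grams), not measured data.
def content(mass_before, mass_after, initial_mass):
    """Mass fraction (%) of the constituent removed in one deconstruction step."""
    return 100.0 * (mass_before - mass_after) / initial_mass

m0 = 30.0     # initial BPSF powder mass
m1 = 27.84    # after liposoluble extraction
m2 = 25.632   # after removal of water-soluble sugars
m3 = 25.530   # after pectin removal
m4 = 18.45    # holocellulose mass
m5 = 12.0     # cellulose mass

T1 = content(m0, m1, m0)            # liposoluble extractives (%)
T2 = content(m1, m2, m0)            # water-soluble sugars (%)
T3 = content(m2, m3, m0)            # pectins (%)
cellulose = 100.0 * m5 / m0         # cellulose content (%)
holocellulose = 100.0 * m4 / m0     # holocellulose content (%)
hemicellulose = holocellulose - cellulose  # difference, as in step 5
```

With these placeholder masses the sketch reproduces the contents quoted in Section 3 (7.2 % liposolubles, 7.36 % sugars, 0.34 % pectin, 40 % cellulose, 21.5 % hemicellulose).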
These stages are summarized in the diagram of Fig. 3.
Thermogravimetric analysis TGA/DTG
The thermal stability of BPSF was assessed by thermogravimetric analysis (TGA) using a thermobalance with a UNIVERSAL V4.5A TA Instruments mass flow meter. The sample was prepared and tested according to ASTM E1131-2008. The heating rate was 10 °C/min over the range 25-600 °C in air.
Multi-scale physical characterization of geostalk
2.6.1. Geometric profile of BPSF
Fiber, strand and rope diameters were measured by ASTM 2130-90 from the campaign of 40, 12 and 11 samples respectively, using the Keyence VHX-6000 Digital 3D profilometer.Five different measurements were taken for each sample [33].
Weibull's statistical law was used to evaluate the diameter distribution of fibers [33]. Equation (11) represents the Weibull distribution function and equation (12) was used to determine the probability of failure of this function for n samples of rank i.
Equation (13) of the Weibull distribution was used to calculate the two Weibull parameters (shape: β and scale: γ). The shape parameter gives an idea of the dispersion of defect size in the material: the lower the shape (β < 10), the greater the dispersion [37]. In the ratio α = σ/γ, where σ is the tensile strength of the material, if α > 1 then the failure of the material is mainly due to an increase in the tensile load; otherwise, failure is mainly due to defects in the material.
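The linearized fit of equation (13) can be sketched as below. The diameters are synthetic values drawn from a Weibull law for illustration only (not the 40 measured fibers); the median-rank estimator for the failure probability is one common choice and an assumption here.

```python
import numpy as np

# Sketch of the linearized Weibull fit, equations (11)-(13):
# ln(-ln(1 - F)) = beta * ln(d) - beta * ln(gamma)
# Diameters are synthetic, drawn from a Weibull law, for illustration only.
rng = np.random.default_rng(0)
beta_true, gamma_true = 4.25, 316.0            # assumed shape and scale (μm)
d = gamma_true * rng.weibull(beta_true, 40)    # 40 "measured" diameters

d_sorted = np.sort(d)
n = len(d_sorted)
F = (np.arange(1, n + 1) - 0.5) / n            # median-rank failure probability

x = np.log(d_sorted)
y = np.log(-np.log(1.0 - F))
slope, intercept = np.polyfit(x, y, 1)         # linear regression
beta_hat = slope                               # shape parameter estimate
gamma_hat = np.exp(-intercept / slope)         # scale parameter estimate
```

The coefficient of determination of this regression is what the paper reports as R² = 0.93 for the measured diameter set.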
Mass densities
Linear mass density (g/m), or titer (tex), was determined by ISO 7211-5 (2020) and calculated from equation (14).Masses (m) of length (l) of 10 samples were taken on a Mettler Toledo Panda7 version balance accurate to 0.01 % at room temperature [25].
Mass density per unit area of geostalk was obtained by measuring 7 samples 10 × 10 cm 2 of each weighed using Mettler Toledo precision balance based on NF EN 965 standard described by Ref. [15].The surface area (s) of the samples was set on the balance, which directly displays the value of mass per unit area.The formula for setting parameters is given by equation (15).
Geostalk thickness
Thickness was determined according to the ISO 9863-1 (2016) standard used by Ref. [15].It consists of measuring the vertical distance between a reference plate on which geostalk sample of dimensions 10 × 10 cm 2 is placed and a presser foot parallel to this plate after applying pressure of 2 kPa for 30s.An electromagnetic sensor with a precision of 0.01 mm measures the distance between two plates and the actual thickness in millimeters is given on a digital indicator.Three samples were taken from each plate.
Water absorption of fibers and geostalk
An initial mass of fibers (mi = 1 ± 0.01 g) and geostalk samples of 10 × 10 cm2, previously dried at 105 °C for 24 h until anhydrous mass was reached, were used for the water absorption test following the NF G08-012 standard. Twenty-five fiber samples [33] and ten geostalk samples [15] were separately immersed in distilled water at room temperature. Masses were recorded at increasing time intervals over 48 h. Equation (16) is used to calculate the water absorption rate.
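The absorption rate of equation (16) is the relative mass gain over the dry mass. A minimal sketch, with hypothetical intermediate mass readings (only the final saturation value matches the 428 % reported later for BPSF):

```python
# Sketch of equation (16): W(t) = 100 * (m_t - m_i) / m_i.
# Mass readings are hypothetical; the last one reproduces the reported 428 %.
def absorption_rate(m_wet, m_dry):
    """Water absorption (%) relative to the anhydrous mass."""
    return 100.0 * (m_wet - m_dry) / m_dry

m_dry = 1.00                           # anhydrous fiber mass (g)
readings_g = [2.1, 3.4, 4.6, 5.28]     # masses sampled over 48 h (hypothetical)
uptake = [absorption_rate(m, m_dry) for m in readings_g]
```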
Fiber density and porosity
The porosity of BPSF, defined by equation (19), was determined from the bulk and real densities. Bulk density ρa was calculated from equation (17) as a function of the fiber's apparent physical characteristics (mass mf, average diameter df and gauge length lf). Equation (18) was used to calculate the real density ρr according to the ASTM D 2320-98 (2022) standard [38,39]. A mass mf of fiber was coated with paraffin of density ρp = 0.9 g/cm3. The formed mass men was then immersed in a graduated tube containing toluene (106.3 cm3/mol), and the displaced volume Ven was recorded.
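The three relations (17)-(19) can be sketched as follows. The functional forms are my reading of the procedure described above (cylindrical fiber volume for the bulk density; subtraction of the paraffin volume for the pycnometric density), so treat them as an assumption rather than the paper's exact formulas.

```python
import math

# Hedged sketch of equations (17)-(19); consistent units are assumed
# (g, cm, cm^3, g/cm^3).
def bulk_density(m_f, d_f, l_f):
    """Apparent density of a cylindrical fiber of mass m_f, diameter d_f, length l_f."""
    return m_f / (math.pi * (d_f / 2.0) ** 2 * l_f)

def real_density(m_f, m_en, v_en, rho_p=0.9):
    """Pycnometric density: coated mass m_en displaces volume v_en in toluene;
    the paraffin volume (m_en - m_f) / rho_p is subtracted from v_en."""
    v_paraffin = (m_en - m_f) / rho_p
    return m_f / (v_en - v_paraffin)

def porosity(rho_a, rho_r):
    """Porosity (%) from bulk density rho_a and real density rho_r."""
    return 100.0 * (1.0 - rho_a / rho_r)

phi = porosity(1.08, 1.28)   # porosity (%) for an example pair of densities
```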
Mechanical characterization
Uniaxial tensile tests at different scales were carried out at temperature T = 21 °C and relative humidity RH = 64 %. Standards NF T25-501-2, DIN EN ISO 2062 and NF EN ISO 10319 for tensile testing of fibers, ropes and geostalk recommend gauge lengths of 20, 70 and 70 mm respectively over a minimum campaign of 25, 10 and 10 samples. These tests were carried out on the tensile machines (with clamps, jaws and grips) shown in Fig. 4 at a crosshead speed of 2 mm/min for fiber, rope and geostalk testing. Load cells of 10 N for fibers, 1 kN for ropes and 3 kN for geostalk were used. A 40 mm fiber sample, glued straight onto a special paper, is held by the machine's jaws as shown in Fig. 4(a); the paper was cut before the test began. 70 mm-long rope samples were mounted directly on the jaws of the machine in Fig. 4(b). 200 mm × 100 mm geostalk samples were cut and mounted on the machine as shown in Fig. 4(c). Similar methods have been used in the literature to determine the mechanical properties of Neuropeltis acuminatas liana, Raphia vinifera and banana stem fibers [33,35,38].
Morphological and structural analysis
SEM analysis of the fiber yields the images shown in Fig. 5, describing the fiber's longitudinal profile (Fig. 5(a)) and transverse profile (Fig. 5(c)). Three layers can be observed: periphery S1, intermediate zone S2 and central lumen S3. Cellulose microfibrils are parallel in layer S1, then diagonally interwoven in layer S2, forming a microfibrillar angle of inclination. Microfibrils have small hollow cavities with a polygonal cross-section, as illustrated in Fig. 5(d). Some BPSF have defects along their length, as shown in Fig. 5(b). These defects could have an impact on the multi-scale physical and mechanical properties of the geostalk. A similar type of defect and the same three-layer structure have also been observed in the morphological structure of flax [40], cotton [41], sisal [42], jute [5] and Neuropeltis acuminatas liana [33]. The diameter of the BPSF varies from 146 to 452 μm.
Elemental analysis of the BPSF by energy-dispersive X-ray spectroscopy (SEM/EDS) revealed the presence of carbon, oxygen, aluminum, silicon, phosphorus, sulfur, potassium and calcium atoms, in addition to hydrogen. These elements were quantified using the mass spectrograph shown in Fig. 6. The elemental composition of BPSF is summarized in Table 1. These results show that carbon atoms, present at 51.50 %, and oxygen atoms, present at 47.07 %, are the majority in BPSF. The presence of P, K, S and Ca, which are very present in lignin, protects BPSF from mold, thereby increasing its shelf life in humid conditions. BPSF would therefore be of great interest in the manufacture of limited-life geotextiles used to prevent soil erosion while the flora consolidates and vegetation regains its function. Once BPSF has decomposed in the soil, the mineral salts will act as soil fertilizers. These remarks are in line with the analysis made by Prambauer et al. [6].
Chemical properties
3.2.1. Fourier Transform Infrared analyses
To identify the chemical functional groups present in BPSF, FTIR analysis was used to obtain the spectra shown in Fig. 7 and the results in Table 2. Between wave numbers 2950 cm−1 and 3550 cm−1, a broad absorbance peak of up to 0.05 % can be observed, reflecting the presence of the O-H bond, a characteristic function of the alcohol group, which is linked to the presence of cellulose and hemicellulose compounds. The same observation was made on Raphia [35] and Neuropeltis acuminatas liana [7,33] fibers. Two peaks between 2750 cm−1 and 2900 cm−1 mark the presence of the C-H2 bond, very present in lignin, and C-H, present in hemicelluloses. The same observation was made on unbleached fiber from Lemba leaves [43] and Raphia fiber [35]. A peak between 1450 cm−1 and 1650 cm−1 is then observed, marking the presence of C=O, CH3O and C-H bonds characteristic of the aromatic (benzene) groups present in lignins; finally, between 950 cm−1 and 1140 cm−1, a large C-O-C peak characteristic of cellulose is observed. This observation has been made on several other plant fibers [24,44,45]. These analyses identify cellulose, hemicellulose and lignin as the main constituents of BPSF. Cellulose has a positive influence on the material's tensile strength and crystallinity [5], which is of great interest as a mechanical property for geostalk strength and composite reinforcement, as revealed by Pilien et al. [16] and Vanapalli et al. [17] for the application of geotextiles in geotechnical engineering and geotextile-reinforced geopolymers. Hemicellulose has a positive influence on the rigidity (Young's modulus) of the fiber, preventing it from cutting during rope twisting and geostalk weaving. Lignin, regarded as the glue that binds cellulose and hemicellulose, is responsible for the fiber's flexibility [5]. It has a positive influence on fiber elongation, which facilitates twisting and weaving [46] and improves geostalk drapability.
Chemical deconstruction of BPSF components
Further analysis by chemical deconstruction enabled the chemical constituents to be quantified. BPSF contains 40 % cellulose, 21.5 % hemicellulose, 24 % lignin, 0.34 % pectin, 7.2 % liposoluble extractives and 7.36 % water-soluble sugars. These results are close to those obtained for banana rachis grown in Asia and the hybrid variety CRBP969 grown by the African Center of Banana-Plantain Research in Cameroon [32,47]. BPSF has a very low H/L (holocellulose/lignin) ratio of 2.46 and a crystallinity index of 55.59 [32]. These characteristics are very interesting for roping and weaving, and therefore suitable for the manufacture of geotextiles [5,46]. Results in Table 3 show that BPSF has a low cellulose content compared with cotton [41], flax [48], mesocarp [49] and pineapple fibers [48]. This is an advantage for the durability of geostalk, as, after hydrolysis, cellulose is transformed into glucose, which is highly consumed by soil micro-organisms. BPSF also has a higher lignin content than fibers such as flax, cotton and pineapple. Lignin is highly hydrophobic thanks to its structure, which contains fewer polar groups than holocellulose. It is made up of three monolignols, p-coumaryl alcohols, coniferyl alcohols and sinapyl alcohols [5], which act both as passive fiber protection (against
Thermal degradation properties
The curves in Fig. 8 show the general appearance of the thermal degradation of BPSF. The TG and DTG curves present two weight-loss steps, while the degradation occurs in one main stage. The initial weight loss of 7.2 % between 30 and 110 °C was due to evaporation of water in the fiber (intramolecular dehydration phase). Similar observations have been made on other plant fibers such as raffia [35] and palm nut mesocarp fiber [39]. The degradation of BPSF starts at about 200 °C; above this temperature, there is a gradual decrease of thermal stability and subsequent degradation of the fibers up to 390 °C. This decomposition between 200 and 390 °C was related to the thermal depolymerisation of the hemicellulose and cleavage of the glycosidic linkage of α-cellulose in BPSF (weight loss of 60.06 %) [51]. Due to the complex structure of lignin (aromatic rings with various branches), its degradation occurs slowly over the whole temperature range [52]. The DTG curve shows a maximum decomposition peak at 342 °C, which is related to the crystallinity degradation [37,53]. Furthermore, the fiber residue, mainly comprising carbon residues and undiluted fillers remaining after heating BPSF to 800 °C, is around 32.74 %. Similar observations have been made on pineapple, raffia, palm nut and Neuropeltis fibers [33,37,39,54]. The decomposition peak at 342 °C is close to that of sisal fiber (340 °C) [55] and flax (345 °C) [56]. With an initial degradation point around 200 °C, this fiber can be used as a composite reinforcement.
Physical properties
3.4.1. Fiber diameter
Diameters of 40 fibers measured with the profilometer give an arithmetic mean of 287 μm with a very high standard deviation of 80 μm. This shows a wide dispersion of diameters from one fiber to another. The Weibull distribution law was used to represent this diameter dispersion through its linear regression described in equation (13). Fig. 9 shows a good regression with a correlation coefficient R2 = 0.93. This gives a mean diameter of 0.316 mm (316 μm) for a low Weibull shape parameter of 4.252 < 8. This dispersion of diameters is due not only to the microstructure of BPSF, which features a lumen in the center and cavities in the center of cellulose microfibrils, but also to the fiber defects illustrated in Fig. 5(c) and (d). Similar results have been observed in studies of other plant fibers such as flax [57,58], sisal [42], Neuropeltis acuminatas liana [33], jute [5] and cotton [41]. Fiber lengths in the intermediate zone range from 45 to 80 mm (Fig. 1(e)), resulting in an L/d ratio of 157-279, sufficiently high for the fiber to be used in roping [5].
Rope diameter
For a twisting angle of 30° and a pitch of 5 ± 0.2 mm, the external diameter is 2.37 ± 0.04 mm for strands and 5.22 ± 0.47 mm for ropes. The void ratio of ropes was assessed using Mesurim2 software on scanning electron microscope images. Fig. 10 on the rope's porosity shows a full surface area of 20320 μm2 (Fig. 10(b)) out of a total surface area of 32890 μm2, giving a void ratio of approximately 38 %. This void ratio is linked to permeability and to the concentration factor, which directly influences the physical (twist angle, pitch, external diameter, linear density) and mechanical (stiffness) properties of the rope. The higher the void ratio, the less rigid the rope. Three-strand rope has less core void than two- and four-strand rope [31,59]. Increasing the number of turns/m of rope easily reduces this void ratio, resulting in a stiffer rope.
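The void-ratio arithmetic from the image analysis above is a one-line calculation, worked out here with the areas quoted in the text:

```python
# Void ratio of the rope cross-section from the Mesurim2 measurement:
# fraction of the total section not occupied by fiber material.
full_area = 20320.0    # solid (fiber) area in the SEM image, μm^2
total_area = 32890.0   # total cross-section area, μm^2
void_ratio = 100.0 * (1.0 - full_area / total_area)   # percent
```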
Multiscale densimetry
BPSF has a linear density of 42.5 ± 3.4 tex.This value is higher than those obtained by Nkemaja et al. (from 13.33 to 17.33 tex) [26] for the William variety of the same fiber.This difference can be explained by the extraction technique used, the age of the fiber and the harvesting season.Libog et al. [38] have shown that the linear density of banana-plantain fibers increases from the base to the top of the pseudo stem in the range 8.4-34 tex.This result places the linear density of BPSF above 34 tex.
Bulk and real fiber densities are 1.08 ± 0.4 g/cm3 and 1.28 g/cm3 respectively, corresponding to an average porosity of 14.3 %. This porosity is explained by the presence of cellulose microfibrils and of the hemicellulose responsible for the hydrophilicity of the fiber. This result is consistent with the SEM micrograph (Fig. 7(c)) of the fiber cross-section, which revealed the presence of the lumen. Densities range from 0.9 to 1.41 g/cm3 for plant fibers in general [26,38]. This will enable geostalk used for slope reinforcement to facilitate transfer between the external environment and the soil, allowing water to pass through to activate vegetation regrowth. In the protection of sea coasts against marine erosion, geostalk retains the detritus of running water and reduces hydraulic pressure. Porosity, the empty cavities in cellulose microfibrils, the lumen in the center of the fibers, and the voids due to fogging and shrinkage during weaving of the geostalk allow the soil to breathe [14].
Fig. 9. Weibull distribution of fiber diameters.
S. Fogue Matchum et al.
Geostalks have an average thickness of 6.85 mm and a mass per unit area of 1869 g/m2. These results compare favorably with those reported in the literature for geocoir (6.5 mm and 700 g/m2) [60,61], geojute (1.85 mm and 760 g/m2) [62] and, more recently, banana rachis (5.91 mm and 932 g/m2) [15]. The fiber extraction area and the stringing process have resulted in a very high mass per unit area of 1869 g/m2 and a favorable σ/e ratio (fiber strength/geostalk thickness). This thickness, coupled with geostalk's mechanical strength, enables it to withstand the shocks of gully formation and water run-off on deforested slopes.
Water absorption
BPSF has a water absorption percentage of 428 % and the geostalk 176.46 %. This significant reduction of 59 % is an advantage for the mechanical durability of geostalk under hydric exposure.
In addition, this significant reduction in the water sensitivity of the geostalk compared to the BPSF fiber can be attributed to pore closure (Fig. 10) during the physical twisting process (Fig. 2) of the fiber [63]. Three-strand twisting, which reduces the void ratio in the ropes and tightens the fibers, and taffeta weaving are the techniques that led to this reduction. The fiber is hydrophilic; this sorption rate is due to the presence of hydroxyl groups in celluloses and hemicelluloses [51] and to the porosity of BPSF. This result is in line with the values from 347.1 % to 467.4 % obtained by Libog et al. [38], and agrees with the range of results obtained on plant fibers in general, such as Raphia vinifera [64] and palm nut mesocarp [39]. The hydrophilicity of the fiber and the void ratio in the rope make geostalk permeable. This is advantageous for drainage, filtration and soil reinforcement. Geostalk acts as a filter separator, letting through only the water and fine particles needed to irrigate and enrich coasts.
Tensile properties of fibers, ropes and geostalk
Stress-strain curves in Fig. 11 reveal that fibers break abruptly whereas ropes and geostalks break gradually, fiber by fiber.The rheological behavior of fibers shows perfect elasticity until rupture, while that of ropes and geostalks shows a first phase where elasticity is not perfect, and a second phase in which stress drops and rises again over several cycles until total rupture.
Tensile tests on fibers (FBs), ropes (RPs) and geostalks (GTs: woven fabric) reveal the evolution of mechanical properties.Results in Table 4, illustrated by characteristic curves in Fig. 11 show that strength and elongation at break increase from the microscopic to the macroscopic scale, very interesting as composite reinforcement [5].
Fig. 12 illustrates the distribution of fiber mechanical properties as a function of diameter.It shows a dispersion of mechanical properties and a concentration of these around average diameter (287 μm).During the tensile test, the twisted fibers are reorganized, untwisting and stretching parallel to the tensile axis of ropes, then progressively breaking into bundles.Similarly, corrugated ropes in the taffetas structure of geostalk are stretched parallel to the tensile axis and gradually break.This explains the rheological behavior observed in Fig. 11 while fibers abruptly break up after the linear zone.
Statistical analysis of multi-scale elastic behavior
The evolution of fiber tensile properties as a function of diameter, illustrated in Fig. 12, shows non-linear behavior and wide dispersion. This behavior indicates that diameters do not follow a normal distribution law. Stiffness and stress decrease with increasing diameter, which is not the case for strain. It would therefore be difficult to predict elastic behavior without involving other laws, such as Weibull's probabilistic law, which accounts for this wide dispersion. Similar results have also been observed on other plant fibers such as mesocarp fiber [49], Raphia vinifera [35] and Neuropeltis acuminatas liana [33]. Statistical analysis using the Weibull distribution in Fig. 13 permitted obtaining the Weibull parameters shown in Table 5. At the mesoscopic scale of ropes, the Young's modulus predicted by Weibull is 1250.96 MPa while experimentally it is 1134 ± 268 MPa. This gives a ratio of α = 1.103 > 1, with a shape parameter of 4.09, indicating that the failure of ropes is due mainly to the increase in tensile load during the test rather than to defects in the ropes. At the macroscopic scale of geostalk, the predicted Young's modulus is 3268.67 MPa, close to that obtained experimentally, indicating that failure is due mainly to the increase of tensile load during the test rather than to defects in the geostalk. The same observation has been made on woven fabrics and composites [55,65].
Results in Table 5 show that the mechanical properties follow the Weibull distribution with regression coefficients R2 ≥ 0.91. The Weibull shape parameters β increase from the lower to the upper scale: they rise from 2.76 for fibers to 4.09 for ropes, then to 8.65 for geostalk, but remain very low, below 10. The finest scale, that of fibers, is the most sensitive, with a shape coefficient of 2.76; defects are not uniformly distributed in the fiber due to its microstructure. The evolution of these parameters shows a reduction in the dispersion of defects from the finest scale to the macrostructure. This reduction is due to twisting, which regulates defects along the ropes [31,59], and to plain (taffeta) weaving, which distributes rope defects uniformly throughout the geostalk structure (warp and weft directions) [19,20]. The mean fiber stiffness defined by the scaling parameter η predicted by Weibull is 2487 MPa, while the arithmetic mean obtained experimentally is 2635 MPa (Fig. 13). This difference indicates, for a ratio of α = 0.94 < 1, that failure in fibers is not only due to an increase in tensile load but mainly to internal defects in the fiber illustrated in Fig. 5(b). The same observations have been made on Neuropeltis acuminatas liana fiber [33].
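The α ratios quoted above can be reproduced as the ratio of the Weibull-predicted value to the experimental mean, which is how the figures 0.94 (fibers) and 1.103 (ropes) arise from the numbers in the text. Treat the exact form of the ratio as my reading of the analysis, not the paper's formula:

```python
# Sketch of the failure-mode indicator alpha discussed above, reproduced as
# Weibull-predicted value / experimental mean (values in MPa from the text).
def failure_ratio(predicted, experimental):
    """alpha > 1: failure driven by tensile load; alpha < 1: driven by defects."""
    return predicted / experimental

alpha_fiber = failure_ratio(2487.0, 2635.0)    # fibers: < 1, defects dominate
alpha_rope = failure_ratio(1250.96, 1134.0)    # ropes: > 1, load dominates
```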
Results in Table 6 enable comparison of the physical and mechanical properties of BPSF with those of fibers also used in geotextile manufacture. The diameter of BPSF is 287 ± 80 μm, the L/d ratio is between 157 and 279 and the density is 1.24 g/cm3, very close to coir (0.87-1.2 g/cm3) and jute (1.3-1.46 g/cm3). With an elongation of 3.98 ± 1.3 % and a Young's modulus of 2.64 ± 0.7 GPa, BPSF has good physical and mechanical properties for twisting and weaving [28]. Data in Table 7 compare the characteristics of the geostalk obtained with those of LLGs in the literature. The tensile strength per meter and elongation of geostalks are respectively 15.6 ± 0.2 kN/m and 165 ± 47 mm. These mechanical properties are comparable to those of geocoir, which has a tensile strength per meter between 9.3 and 27.8 kN/m and an elongation between 32 and 68 mm [60,61,67]. This means they can resist tearing under critical loads and prevent damage by preventing stones from sinking into soft soils. These high mechanical and physical properties of geostalk from the banana plantain stalk of Cameroon can contribute to ground improvement by modifying the soil of existing foundations to achieve better post-construction performance and to manage operational load constraints during construction. Like geocoir or geojute, it increases stability and bearing capacity and reduces foundation settlement [16,17,68].
Implications of this study
3.7.1. Environmental sustainability benefits
The amount of geosynthetics made from plastic polymers such as polyethylene terephthalate (PET), acrylonitrile butadiene styrene (ABS), polypropylene (PP) and polyurethane (PU) has doubled in less than 20 years, from 2000 to 2019, rising from 234 to 460 million tons and discharging 85 % of all marine and land-based waste. This is expected to rise to 845 million tons in 2032 [4]. In addition, only 0.1 % of the carbon contained in polyethylene terephthalate (PET) is transformed into CO2; the rest infiltrates the soil, defertilises it and destroys the flora [3]. Geotextiles made from banana stalk are biodegradable, eco-friendly and fertilise the soil after degradation. Thus, with their high mechanical properties, they are a sustainable alternative to geosynthetics for soil and coast reinforcement.
Practical implications
This study aimed to extract banana stalk fibers, study their physical, mechanical and chemical characteristics, then use them to make ropes for composite or geotextile applications. The results obtained are very satisfactory: they made it possible to optimize the mechanical properties not only through the choice of process and extraction areas for the banana fiber, but also through the multiscale mechanical process (roping and weaving) developed. The mechanical properties of banana stalk fiber (82.5 ± 24 MPa tensile strength, 2.64 ± 0.7 GPa Young's modulus, 3.98 ± 1.3 % deformation) and its elemental and chemical composition (40 % cellulose, 24 % lignin, 21.5 % hemicellulose) show that this fiber can be used in several other fields such as textiles, composites and bioethanol. The high lignin content (24 %) of BPSF compared to fibers like flax (2.2 %), cotton (0.7-1.6 %) and pineapple (12.7 %) is a great advantage for the durability of geostalk against water and microbiological conditions. Furthermore, the multi-scale approach by three-strand twisting and taffeta weaving helps to optimize the mechanical properties of ropes and geostalk. Although the mechanical properties of BPSF (E = 2.64 GPa, ε = 3.98 ± 1.3 %) are lower than those of coir (E = 4 GPa, ε = 4 %), considered the reference in the geotextile domain, the combination of the two processes (three-strand twisting and taffeta weaving) brings the properties of geostalk (T = 15.6 ± 0.17 kN/m, ε = 165 ± 47 %) close to geocoir's tensile strength (T ∈ [9.3-27.8] kN/m) and far above geocoir's deformation (ε ∈ [32 %-68 %]).
Conclusion
This work aimed to investigate the elastic behavior of fibers, ropes and woven fabric in order to optimize their exploitation in environmental engineering and soil reinforcement (geotextiles) or composite reinforcement. The potential of the banana plantain stalk fiber available in Cameroon has been assessed. Extraction of the peripheral and intermediate zones of stalk fibers by water retting, twisting of three-strand ropes with a twisting angle of 30° and plain (taffeta) weaving made this possible. This mechanical process helps to regulate and reduce the dispersion of defects from the finest to the macroscopic scale. Microstructural analysis using SEM, elemental analysis using SEM/EDS, Fourier Transform Infrared (FTIR) spectroscopy and chemical deconstruction using the Jayme-Wise protocol enabled us to determine the morphological and geometric structure and the chemical composition of BPSF. These include 51.5 % carbon and 47.07 % oxygen, making up 40 % cellulose and 21.5 % hemicelluloses, for a very low H/L (holocellulose/lignin) ratio of 2.46, and mineral salts (K, P, Ca, Si and S) present in the fiber's lignin. This chemical study classifies plantain stalk fiber as an interesting fiber for geotextiles in terms of durability, with a high lignin content of 24 %. The physical and mechanical properties of BPSF were determined. A shape ratio L/d of between 157 and 279 is obtained from an average diameter of 287 ± 80 μm. A density of 1.08 g/cm3, a stress of 82.56 ± 44 MPa, a strain at break of 3.98 ± 1.3 % and a Young's modulus of 2.64 ± 0.7 GPa are recorded. Ropes with an average diameter of 5.22 ± 0.47 mm, a titer of 7983.4 ± 271 g/km and a tenacity of 3.43 cN/tex are used to weave geostalk with an average thickness of 6.85 ± 1.34 mm, a mass per unit area of 1869 g/m2, a tensile strength of 15.6 ± 1.74 kN/m and a strain at break of 165.4 ± 47 %. The Weibull distribution was used to predict the mechanical properties and analyze the elastic behavior at different scales of the woven fabric. Thus it ranks
among geotextiles with limited service life, such as geojute, geocoir, geokenaf and suitable alternative such as composite reinforcement.
Fig. 3 .
Fig. 3. Summary diagram of various stages in chemical deconstruction of BPSF.
fungal and bacterial attack) and as active fiber protection (forming in areas of damage), as shown in Fig. 5. This would protect BPSF from attack by microorganisms and mold under use conditions. FTIR curves show the presence of a peak at 1745 cm−1 in Fig. 7(b), (d) and (e), but not in Fig. 7(a) and (c), indicating that these components disappear during the process stages and are no longer present in cellulose. Various chemical groups evolve from the removal of liposolubles, sugars and pectin to the production of cellulose, and the C-H peak of the hemicelluloses present in the raw fiber (Fig. 7(e)) is destroyed. The presence of O-H groups is high due to reactions with the alcohols, acids and solvents used (NaOH, CH3COOH, KOH, CH3CH2OH); the C-O-C group peak is high in cellulose (responsible for its crystallinity) and very low in lignin. Similar observations have been made on the deconstruction of banana pseudo stem fiber [34].
Fig. 12 .
Fig. 12. Distribution of tensile properties of BPSF as function of diameter (a) Young's modulus distribution, (b) Stress distribution (c) Elongation distribution.
Table 1
Basic chemical constituents of BPSF.
Table 2
Functional chemical groups of BPSF.
Table 3
Comparison of chemical properties of BPSF with other fibers in literature.
Table 4
Evolution of tensile properties from microscopic (fiber) to macroscopic (geostalk) scale.
Table 5
Weibull parameters of fibers, ropes and geostalk.
Table 6
Comparison of physical and tensile properties of BPSF with those of literature.
"year": 2024,
"sha1": "53598be05522a36aadf3ce9ce9fc73896a477017",
"oa_license": "CCBYNC",
"oa_url": "http://www.cell.com/article/S2405844024056871/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b2b40ba69ffc30538b03587eafa59302ef286b0",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
218571060 | pes2o/s2orc | v3-fos-license | Modelling arterial travel time distribution using copulas
The estimation of travel time distribution (TTD) is critical for reliable route guidance and provides theoretical bases and technical support for advanced traffic management and control. The state-of-the-art procedure for estimating arterial TTD commonly assumes that the path travel time follows a certain distribution without considering segment correlation. However, this approach is usually unrealistic as travel times on successive segments may be dependent. In this study, copula functions are used to model arterial TTD as copulas are able to account for segment correlation. First, segment correlation is empirically investigated using day-to-day GPS data provided by BMW Group for one major urban arterial in Munich, Germany. Segment TTDs are estimated using a finite Gaussian Mixture Model (GMM). Next, several copula models are introduced, namely Gaussian, Student-t, Clayton, and Gumbel, to model the dependence structure between segment TTDs. The parameters of each copula model are obtained by Maximum Log-Likelihood Estimation. Then, path TTDs comprised of consecutive segment TTDs are estimated based on the copula models. The scalability of the model is evaluated by investigating the performance for an increasing number of aggregated links. The best-fitting copula is determined in terms of a goodness-of-fit test. The results demonstrate the advantage of the proposed copula model for an increasing number of aggregated segments, compared to the convolution without incorporating segment correlations.
I. INTRODUCTION
Travel time reliability (TTR) estimates are of major importance for travelers and transport managers as they are very informative for decision making and planning schedules. TTR has been increasingly recognized as an important measure for estimating the operation efficiency of road facilities, assessing alternative management strategies [11], and providing travelers with route guidance [12]. TTR is defined as the consistency or dependability in travel times, as measured from day to day and/or across different times of the day [9]. In [2] it is suggested that the analysis of TTR is as important, if not more important than, the traditional analysis of average travel time. In order to fully assess TTR, travel time distribution (TTD) needs to be determined as a prior. This makes it possible to measure the risk for on-time arrival probability and find a path for risk-averse travellers [18].
A common approach is to aggregate segment TTDs to a joint distribution by assuming independence between individual segment TTDs [5], [14]. However, such an approach appears inaccurate, since travel times on successive segments are essentially dependent. For example, when one segment becomes congested, the neighbouring segment is also affected by this congestion.
This work addresses this problem, by adopting the copula model in econometrics [16] to assess path TTD given a segment set by accounting for correlation between segment travel times. Using copulas for estimating travel time was proposed by [3] and [4]. The proposed methodology was evaluated by [3] for the through movement of two arterials in Shanghai, China and Los Angeles, California, based on automatic vehicle identification (AVI) and Next Generation Simulation (NGSIM) data, respectively, while the copula model was evaluated by [4] by utilizing VISSIM simulation with calibration to generate travel time data on one arterial in Hangzhou, China. The comparison in both studies was made for path TTDs estimated by the copula model, the convolution and the empirical distribution fitting approach, indicating a superior performance of the copula model. Table I gives an overview of both studies. However, the data used by both studies does not represent day-to-day travel time observations. In addition, only two and three segments are aggregated, respectively. This leaves the performance of the proposed copula model for an increasing number of segments an open question, as a path is comprised of several segments when using route guidance systems.
This work shows that the copula model provides a better solution than the traditional convolution model even when using real day-to-day GPS data collected over the period of one year, and for an increasing number of segments. In addition several copulas are compared, and the best fitting copula is determined.
One distinct feature of the copula model when compared to multivariate distributions is that the dependence structure is unaffected by the types of marginal distributions, which enables greater flexibility in correlating individual segment TTD. For this study, path TTD for a major urban arterial in Munich, Germany, is investigated.
This article is structured as follows: In the first section, the copula theory as well as the estimation procedure for copula models is described. Then, a case study is conducted for the study site using historical travel time data provided by BMW Group. Path TTDs are estimated by copula models, first by aggregating two segment TTDs, then by aggregating ten segment TTDs. These results are compared with the estimates obtained without considering correlations between segment travel times and empirical distributions. The last section draws a conclusion and gives an outlook for future work.
II. METHODOLOGY
In this section a brief overview of the underlying theory of copulas adopted from [1] is given. In addition the proposed copula model is described, which was implemented based on [17], [8] and [7].
A. Mathematical preliminaries
A road network can be represented as a directed graph G = (V, E), which is an ordered pair of a (finite) set of vertices V and a set of edges E, representing geolocations and road links connecting these locations, respectively. An edge e ∈ E comprises a pair of two vertices v_1, v_2 ∈ V. Besides, we have o, d ∈ V, representing the origin and the destination, respectively. The travel time for each link is represented as a random variable x, which is derived from historical data. Thus, the empirical link TTD is discrete. For characterizing link TTD as a continuous probability density function f(x) with distribution function F(x), both parametric and nonparametric estimators can be used. Examples of parametric estimators are the Normal, Lognormal, Gamma, and Weibull distributions, while Kernel Density Estimation and the Gaussian Mixture Model are examples of nonparametric estimators.
A path from o to d is comprised of several successive links. As historical data for entire paths are not available, path TTD is obtained by aggregating link TTD, which is explained below.
B. Copulas
Copulas are functions that relate multivariate distribution functions of random variables to their one-dimensional marginal distribution functions. According to Sklar's theorem [15], for an n-variate distribution function F(x_1, ..., x_n) with marginal distribution functions F_1(x_1), ..., F_n(x_n), there exists a copula function C which satisfies the relationship

F(x_1, ..., x_n) = C(F_1(x_1), ..., F_n(x_n)).

If the marginal distributions are all continuous, C is unique. Based on Sklar's theorem, the concept of the copula provides an efficient way of modeling dependent variables. It follows that the joint density f(x_1, ..., x_n) can be obtained as

f(x_1, ..., x_n) = c(F_1(x_1), ..., F_n(x_n)) ∏_{i=1}^{n} f_i(x_i),

with copula density c(u_1, ..., u_n) = ∂^n C(u_1, ..., u_n) / (∂u_1 ... ∂u_n) and marginal density functions f_i(x_i).
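As a concrete illustration of Sklar's theorem, the following sketch (standard library only; all parameter values are illustrative) samples from a bivariate Gaussian copula and pushes the uniform marginals through exponential quantile functions. The dependence structure survives the marginal transform: the rank correlation of the transformed pairs still matches the copula, not the marginals.

```python
import math
import random
from statistics import NormalDist

random.seed(0)
rho = 0.7                 # copula correlation (illustrative)
std = NormalDist()

def sample_pair():
    # correlated standard normals -> uniforms via Phi (the Gaussian copula)
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    u1, u2 = std.cdf(z1), std.cdf(z2)
    # arbitrary continuous marginals: exponential quantile F^-1(u) = -ln(1-u)/lam
    lam = 0.5
    return -math.log(1 - u1) / lam, -math.log(1 - u2) / lam

pairs = [sample_pair() for _ in range(2000)]

def kendall_tau(data):
    # O(n^2) concordance count; fine for this sample size
    n, s = len(data), 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (data[i][0] - data[j][0]) * (data[i][1] - data[j][1])
            s += 1 if d > 0 else -1 if d < 0 else 0
    return 2 * s / (n * (n - 1))

tau = kendall_tau(pairs)
# for a Gaussian copula, Kendall's tau = (2/pi) * arcsin(rho), here about 0.49,
# regardless of which monotone marginal transform is applied
```

This separation of dependence from marginals is what allows the segment TTDs below to be fitted by a GMM while the copula handles the correlation.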
There exist two major families of copulas: Archimedean and elliptical copulas. Archimedean copulas are very popular because they are easily derived and capable of capturing wide ranges of dependence. The definition of an Archimedean copula is based on a generator function ϕ. Archimedean copulas take the form

C(u_1, ..., u_n) = ϕ^(−1)(ϕ(u_1) + ... + ϕ(u_n)),

where ϕ^(−1) is the pseudo-inverse of ϕ. The reason for the Archimedean copulas' popularity in empirical applications is that different choices of the generator function produce wide ranges of dependence properties. Two of the most frequently used Archimedean copulas are the Clayton copula with generator function

ϕ(u) = (u^(−α) − 1) / α

and the Gumbel copula with generator function

ϕ(u) = (−ln u)^α,

where u is the marginal distribution and α is the copula parameter, which describes the dependency between the random variables x_i. It can be determined from a rank correlation coefficient, e.g., the Kendall correlation coefficient [1]. A Clayton copula is able to capture lower tail dependence, and a Gumbel copula is able to capture upper tail dependence.

The elliptical copulas differ from the Archimedean classes in that only an implicit analytical expression is available; they are derived from the related elliptical distribution. The first example of an elliptical copula is the Gaussian copula, which belongs to the normal distribution, defined as

C(u_1, ..., u_n) = Φ_R(Φ^(−1)(u_1), ..., Φ^(−1)(u_n)),

where Φ_R is the joint normal distribution function with correlation matrix R and Φ^(−1) is the quantile function of the univariate standard normal distribution. For the purpose of this work, a uniform correlation structure was used for the correlation matrix, so that R = (1 − ρ)I + ρ11^T with correlation coefficient ρ. The second example is the Student-t copula, which belongs to the t-distribution, defined as

C(u_1, ..., u_n) = t_{ν,R}(t_ν^(−1)(u_1), ..., t_ν^(−1)(u_n)),

where t_{ν,R} is the joint Student distribution with correlation matrix R and ν degrees of freedom, and t_ν^(−1) is the quantile function of the univariate Student distribution. In this case, R was again chosen to be uniform.
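The tail-dependence difference can be checked empirically. The sketch below (standard library only; the parameter value is illustrative) samples from a Clayton copula via the standard Marshall-Olkin construction and compares lower- and upper-tail clustering:

```python
import random

random.seed(1)
alpha = 2.0  # Clayton parameter (illustrative)

def clayton_pair():
    # Marshall-Olkin sampling: V ~ Gamma(1/alpha, 1), U_i = (1 + E_i/V)^(-1/alpha)
    v = random.gammavariate(1.0 / alpha, 1.0)
    e1, e2 = random.expovariate(1.0), random.expovariate(1.0)
    return ((1.0 + e1 / v) ** (-1.0 / alpha),
            (1.0 + e2 / v) ** (-1.0 / alpha))

pairs = [clayton_pair() for _ in range(50000)]
q = 0.05
# conditional probabilities of a joint extreme in each tail
lower = (sum(1 for u1, u2 in pairs if u1 < q and u2 < q)
         / sum(1 for u1, _ in pairs if u1 < q))
upper = (sum(1 for u1, u2 in pairs if u1 > 1 - q and u2 > 1 - q)
         / sum(1 for u1, _ in pairs if u1 > 1 - q))
# lower-tail clustering dominates: the Clayton lower tail-dependence
# coefficient is 2^(-1/alpha), about 0.71 for alpha = 2, while the
# upper-tail coefficient is zero
```

For travel times on successive segments, lower tail dependence corresponds to jointly short (free-flow) travel times, which matches the empirical pattern reported for the study site later in the paper.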
Unlike the Gaussian copula the Student-t copula is able to capture tail dependence.
C. Estimation of copula models
A copula model is estimated in two steps. First, the segment TTDs are estimated from empirical GPS data, and then the copula parameters are fitted. In this study, a finite Gaussian Mixture Model (GMM) [13] was used, as it showed an accurate fit. The finite GMM with k components is represented as

f(x) = Σ_{j=1}^{k} π_j N(x | μ_j, s_j^(−1)),

where μ_j are the means, s_j are the inverse variances, π_j are the mixture weights, and N is a normalized Gaussian with the specified mean and variance. The parameters of the GMM are obtained by the Expectation-Maximization (EM) algorithm [10].
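A minimal EM fit of a univariate two-component GMM can be sketched in plain Python (the synthetic "free-flow vs. delayed" travel times and all starting values are made up for illustration; the study itself uses three components):

```python
import math
import random

random.seed(2)
# synthetic "free-flow vs. delayed" travel times in seconds (made-up values)
data = ([random.gauss(30.0, 3.0) for _ in range(300)]
        + [random.gauss(60.0, 8.0) for _ in range(200)])

def norm_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# EM for a k = 2 Gaussian mixture (arbitrary starting values)
mu, var, pi = [20.0, 70.0], [25.0, 25.0], [0.5, 0.5]
for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    resp = []
    for x in data:
        w = [pi[j] * norm_pdf(x, mu[j], var[j]) for j in range(2)]
        s = sum(w)
        resp.append([wj / s for wj in w])
    # M-step: re-estimate weights, means, and variances
    for j in range(2):
        nj = sum(r[j] for r in resp)
        pi[j] = nj / len(data)
        mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
        var[j] = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, data)) / nj
# the fitted means recover the two travel time regimes (near 30 s and 60 s)
```

In practice a library implementation (e.g. a mixture-model fit with k = 3) would replace this loop, but the alternating E/M structure is the same.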
For the second stage of the estimation process, the copula parameters are estimated by log-likelihood maximization [19]. For d-variate i.i.d. observations x := (x_1, ..., x_n)^t of size n with x_i := (x_{i1}, ..., x_{id})^t for i = 1, ..., n, the corresponding log-likelihood is given by

ℓ(β_1, ..., β_d, θ) = Σ_{i=1}^{n} [ ln c(F_1(x_{i1}; β_1), ..., F_d(x_{id}; β_d); θ) + Σ_{j=1}^{d} ln f_j(x_{ij}; β_j) ],

where β_1, ..., β_d are the corresponding marginal parameters and θ denotes the copula parameters.
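The second stage can be illustrated with a grid-search maximum-likelihood fit of the Clayton parameter on synthetic copula data (a sketch; a real implementation would use pseudo-observations from the fitted marginals and a proper numerical optimizer):

```python
import math
import random

random.seed(3)
true_alpha = 2.0

def clayton_pair(alpha):
    # Marshall-Olkin sampling of a Clayton copula
    v = random.gammavariate(1.0 / alpha, 1.0)
    e1, e2 = random.expovariate(1.0), random.expovariate(1.0)
    return ((1.0 + e1 / v) ** (-1.0 / alpha),
            (1.0 + e2 / v) ** (-1.0 / alpha))

data = [clayton_pair(true_alpha) for _ in range(4000)]

def clayton_loglik(alpha):
    # log copula density: c(u,v) = (1+a)(uv)^(-a-1)(u^-a + v^-a - 1)^(-2-1/a)
    ll = 0.0
    for u, v in data:
        t = u ** -alpha + v ** -alpha - 1.0
        ll += (math.log(1.0 + alpha)
               - (alpha + 1.0) * (math.log(u) + math.log(v))
               - (2.0 + 1.0 / alpha) * math.log(t))
    return ll

grid = [a / 10.0 for a in range(5, 51)]   # candidate alphas 0.5 .. 5.0
alpha_hat = max(grid, key=clayton_loglik)
# the maximizer lands near the true parameter alpha = 2
```

The same pattern applies to the other copulas: only the density inside the log-likelihood changes.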
For model testing and verification, the following goodness-of-fit tests are used. The Kolmogorov-Smirnov (KS) test statistic is defined as the largest difference between two CDFs. The Cramer-von-Mises (CVM) test sums the squared difference between the two CDFs over every observation [6]. It therefore achieves a higher power than the KS test by using the full joint sample.
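Two-sample versions of both statistics are straightforward to compute directly (a sketch on synthetic data; in practice a statistics library such as scipy.stats provides tested implementations):

```python
import bisect
import random

random.seed(4)
a = sorted(random.gauss(0.0, 1.0) for _ in range(800))
b = sorted(random.gauss(0.5, 1.0) for _ in range(800))   # shifted sample

def ecdf(sorted_sample, x):
    # empirical CDF: fraction of the sample <= x
    return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

grid = sorted(a + b)
# KS: largest absolute difference between the two empirical CDFs
ks = max(abs(ecdf(a, x) - ecdf(b, x)) for x in grid)
# CVM-type statistic: sum of squared CDF differences over all observations,
# using the full joint sample rather than a single extreme point
cvm = sum((ecdf(a, x) - ecdf(b, x)) ** 2 for x in grid)
```

Because the CVM statistic aggregates over the whole support, it reacts to distributed discrepancies that the single-point KS statistic can miss, which is why it is used as the main comparison criterion in Section III.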
III. CASE STUDY
Here we present the results for the case study. First the empirical travel time data as well as the study site are described. Then, empirical segment TTD is investigated. Correlation between successive segments is analyzed. Finally, the results for path TTD estimation are presented.
A. Data
The travel time data provided by BMW Group is collected from probe vehicles. The setup includes a fleet of probe vehicles, which have a module that reports GPS data, and a central server, which collects all data in a database. Each vehicle samples the current GPS positions in intervals of 10s to 30s, which are stored in the local memory of the vehicle together with the according timestamp. The recently sampled positions and according timestamps are transmitted to the central server. Each transmitted position is linked to an alias, which is randomly generated by the vehicle and changes over time to protect drivers' privacy. At the server, single transmitted positions of the same alias can be connected in order to reconstruct vehicle trajectories. However, since vehicles do not transmit continuously and hide their vehicle ID, it is not possible to reconstruct complete trips or infer a driver's identity. The collected raw data is then matched to the links of the road network. Velocities are derived from the differences of time and location, respectively, between two GPS points. Travel times are then obtained using the velocity and the length of the link. The study site consists of ten segments with a total length of 586 m on Leopoldstraße, a major urban arterial in Munich. A schematic illustration of the study site can be found in figure 1. The arterial comprises two signalized intersections. In addition, there is one bus lane, which stretches from Münchner Freiheit until Hohenzollernstraße, with signal control at the start and the end of the bus lane, respectively. The travel time data was collected over a period of one year, i.e. from 01 March 2013 until 01 March 2014. For the through movement of the arterial, 4495 trips were recorded. In order to obtain travel time data for the through movements of the ten segments of the arterial, the data with the same Drive-ID for each segment was chosen.
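The travel-time derivation described above reduces to a simple calculation per pair of GPS fixes (a sketch with made-up numbers; positions are simplified to a 1-D distance along the road):

```python
# two consecutive GPS fixes of one probe vehicle: timestamp [s], position [m]
p1 = {"t": 0.0, "s": 0.0}
p2 = {"t": 20.0, "s": 160.0}

# velocity from the difference of time and location
velocity = (p2["s"] - p1["s"]) / (p2["t"] - p1["t"])    # 8.0 m/s

# link travel time from velocity and link length
link_length = 60.0                                       # [m], illustrative
travel_time = link_length / velocity                     # 7.5 s
```

Note that when the two fixes straddle more than one link, every link in between is assigned the same velocity, so the resulting travel times are proportional to the link lengths for that trip.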
C. Investigation of segment TTD
In [3] it is suggested that segment travel times on urban arterials follow a multimodal distribution with three states, and a GMM with three components is chosen to estimate the marginal distribution. For this study as well, a GMM with three components showed an accurate fit. Figure 2 shows the TTD estimation for segment 2 together with the KS statistic, exemplarily. The other segments are estimated accordingly with similar accuracy. The parameters of the GMM for each segment are listed in table II. In order to illustrate the correlation between segments, a scatter diagram for the travel times of segment 2 and segment 3 is shown in figure 3, exemplarily. We can observe a complex correlation structure with a tendency toward lower tail dependence, rather than a linear correlation.
The line through the origin is caused by the estimation of the probe vehicles' velocities described above. If one GPS point is located in front of segment 2 and the next GPS point is located behind segment 3, the estimated velocity for both segments is equal. Therefore, the travel times of the two segments are proportional to their lengths for such trips, which appear as a line through the origin in the scatter plot.
D. Estimation of path TTD
For estimating path TTD we compare the performance of different copula models with the convolution, and the empirical distribution. For the copula model, the two-stage estimation procedure was used as described above. KS and CVM statistics are used as goodness of fit tests. A lower value for the KS and CVM statistic, respectively, indicates a better fit. In order to assess the scalability of the estimation models, we iteratively increase the number of aggregated segments. First, we estimate TTD for a path comprised of two segments and refer to the corresponding estimation models as "2D Models". Then, we estimate path TTD for the total path comprised of ten segments and refer to the corresponding models as "10D Models".
For evaluating the performance of the 2D models, we estimated TTD for a path comprised of Segment 2 and Segment 3. Figure 4 shows the corresponding PDF and CDF. Goodness of fit tests and the parameters of each copula model are listed in table IV. Each copula model performs better than the convolution due to their ability to incorporate segment correlation. The Clayton copula performs best. A possible reason may be its ability to capture lower tail dependence.
The PDF and CDF for TTD estimation by the 10D models for the total path comprised of ten segments are shown in figure 5. The corresponding goodness-of-fit tests and the parameters of each copula model are listed in table V. Compared to the results for the 2D models, the inaccurate estimation of the convolution as well as the superior estimation of the copula models is more distinct. Again, each copula model performs better than the convolution, while the Clayton copula shows the best fit. Figure 6 shows the CVM statistic for path TTD estimation by the convolution and the Clayton copula for the iterative aggregation of the ten segments. The accuracy of the convolution decreases with the number of aggregated segments, while the accuracy of the Clayton copula stays nearly constant. Therefore, using convolution for estimating path TTD will lead to severe inaccuracies, as successive segments are treated as independent. This makes convolution ineligible for assessing travel time reliability. Copulas are able to incorporate segment correlation and thus provide an accurate assessment of travel time reliability.
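The failure mode of the convolution can be reproduced with a toy example (standard library only; Gaussian marginals and all numbers are illustrative). With positively correlated segments, the independence assumption drops the covariance term from the path travel-time variance:

```python
import math
import random
from statistics import pvariance

random.seed(5)
rho, sd1, sd2 = 0.8, 5.0, 8.0   # segment correlation and spreads (illustrative)

path_times = []
for _ in range(20000):
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    t1 = 30.0 + sd1 * z1         # segment 1 travel time
    t2 = 40.0 + sd2 * z2         # segment 2 travel time
    path_times.append(t1 + t2)

var_indep = sd1 ** 2 + sd2 ** 2                         # convolution assumption: 89
var_true = sd1 ** 2 + sd2 ** 2 + 2 * rho * sd1 * sd2    # with covariance: 153
var_emp = pvariance(path_times)
# the empirical path variance matches var_true, not the convolution value,
# and the gap grows with every additional positively correlated segment
```

This is exactly why the convolution's goodness of fit deteriorates as more segments are aggregated, while a copula that carries the correlation does not.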
IV. CONCLUSION AND OUTLOOK
This paper presented a copula-based approach that aggregates individual segment TTDs to estimate path TTDs. The aim was to evaluate the scalability of different copula models in terms of the number of aggregated segments with real day-to-day data.
GPS travel time data was collected from probe vehicles over the period of one year for a major urban arterial in Munich, Germany. First segment TTDs were investigated. Marginal distributions were estimated using GMM with three components. Segment correlation was analyzed showing a lower tail dependence between successive segments. Path TTD estimation models were first assessed for a path comprised of two segments, then for a path comprised of ten segments. Gaussian, Student-t, Clayton, and Gumbel Copulas were compared to the empirical path TTD and the convolution. The main findings are the following: 1) Path TTD estimation by each copula model is more accurate than the estimation by the convolution.
2) The copula model has the potential to model path TTD for an increasing number of segments, whereas the accuracy of the convolution decreases with the number of aggregated segments.
3) The Clayton copula incorporates the segment correlation most accurately compared to the other copula models. A possible reason may be its ability to capture lower tail dependence.
Future work will focus on further investigating segment correlation by clustering day-to-day data depending on time of day and day of week. In addition, different study sites will be investigated to improve the applicability of the proposed methodology in field implementation.
Seasonal Imbalances in Natural Gas Imports in Major Northeast Asian Countries: Variations, Reasons, Outlooks and Countermeasures
The seasonal imbalances and price premiums of natural gas imports (NGIs) seriously affect the sustainability of these imports in major Northeast Asian countries, namely, China, Japan, and South Korea. Research on NGI seasonality might provide new insights that may help solve these issues. Unfortunately, little research has been conducted on this topic. Therefore, this paper examined the seasonalities of Chinese, Japanese, and South Korean NGIs using the X-12-ARIMA model to analyze monthly and quarterly data. The results suggest that Chinese NGIs lack identifiable monthly or quarterly seasonality, while South Korea and Japan exhibit clearly identifiable seasonality. In Japan, NGIs exceed their average levels in January, February, July, August, September, and December; that is, Japan imports more natural gas during the winter and summer. In South Korea, NGIs exceed their average levels in January, February, March, and December. In other words, South Korea typically imports more natural gas during the winter. The seasonal differences in NGIs among these countries might be explained by differences in natural gas consumption characteristics, domestic natural gas production capacity, NGI capacity, price sensitivity, and means of transportation. Based on the seasonal differences and their probable causes, some suggestions are provided to promote the sustainable development of NGIs.
Introduction
Demand for natural gas has increased significantly in Northeast Asia as a replacement for less environmentally friendly and less efficient fuels [1,2]. Due to the region's limited production capacity, however, increasing amounts of natural gas have been imported from countries such as Qatar, Indonesia, Malaysia, Russia, and Australia. The major natural gas importers in Northeast Asia are China, Japan, and South Korea. Over the period 2008 to 2013, the NGIs (natural gas imports) of these three countries increased by 66.9 percent, 5.5 percent, and 8.8 percent annually, respectively, reaching 51.9 billion cubic meters (bcm), 119 bcm, and 54.2 bcm, respectively. Together, these countries account for approximately 89 percent of total gas imports in Northeast Asia [3]. Japan and South Korea rely on LNG (liquefied natural gas) imports to meet their natural gas demand and are consequently the largest and second largest LNG importers in the world. China became the third largest LNG importer after surpassing Spain in 2013.
As NGIs have increased, these countries have encountered two problems. The first is seasonal imbalances in NGIs. There is a large gap in NGI demand depending on the season, leading to either a shortage (in which demand exceeds supply) or an excess supply of gas. The second is a high premium on LNG prices. Because the international LNG market is regionally fragmented, significant price differences exist among the major basins. As depicted in Figure 1, prices in Northeast Asia have been considerably higher than those in North America and Europe since 2010 [4]. Chinese, Japanese, South Korean, English, and German LNG prices rose after 2010 because Japan had to increase LNG imports after the Fukushima incident to substitute for the electricity supply lost after shutting down all of its nuclear power plants in 2011, which tightened natural gas supply in the international market. American and Canadian LNG prices, however, continued to decrease, driven by the US shale gas revolution.
Seasonality reflects the monthly or quarterly environmental differences in a variable due to factors such as climate variations, agricultural arrangements, or social traditions [7]. Seasonality is predictable due to its recurring one-year pattern; therefore, it is possible for importers to manage the effects of NGI seasonality. Based on seasonal characteristics, importers can respond flexibly, create reasonable policies, and cooperate to obtain optimal NGI regulations, which would lower costs and ensure continuity.
To date, little attention has been paid to energy seasonality. Sailor and Munoz [8] assessed the sensitivity of electricity and natural gas consumption to climate at regional scales in the USA. Mitchell et al. [9] analyzed the seasonalities of Australian petrol price behavior and the relationships between petrol prices and holidays, consumer mood, and weather conditions. Wang and Wu [10] used the X-12-ARIMA method to analyze the seasonal fluctuation of the Brent crude oil price and its movement discipline in order to provide decision support for China's oil imports. Wang et al. [11] conducted a similar study on the seasonal fluctuation of the Dubai crude oil price and its movement discipline. Both found that seasonal factors have a significant impact on crude oil prices. Filippín and Larsen [12] proved the existence of seasonality in residential natural gas consumption in Argentina, with a maximum value in the cold period of July to August. Zhou and Dong [7] examined the potential seasonality of China's monthly and quarterly crude oil imports based on X-12-ARIMA, finding that the seasonal factors tend to be positive in the spring and summer quarters and negative in the fall and winter quarters. Clearly, the literature on seasonality has concentrated on the oil sector, including the seasonality of oil prices and oil imports. The seasonality of NGIs has received less academic attention, and existing research lacks systematic analysis. To the best of our knowledge, no study has yet examined NGI seasonality in China, Japan, and South Korea. Thus, we analyze the seasonal characteristics of NGIs in China, Japan, and South Korea using the X-12-ARIMA model, which is among the most widely used methods of seasonal adjustment.
The seasonality of NGIs mainly stems from the seasonality of natural gas consumption, which results from climate factors and energy use policy, as reported elsewhere (e.g., Zhou and Dong [7], Moosa [13], Sailor and Munoz [8], Wang and Wu [10], Wang et al. [11], Filippín and Larsen [12]). The seasonality of NGIs is also affected by domestic natural gas production capacity, NGI capacity, price sensitivity, and means of transportation. As temperatures drop in winter, the demand for natural gas for heating increases in China, Japan, and South Korea. In China, natural gas is not used to supplement peak power demand in summer and winter; in Japan, natural gas power supplements the peak load during both summer and winter, and the government supports the installation of gas air conditioning systems, especially in summer; in South Korea, natural gas power is mainly used to supplement peak power demand in winter. In general, natural gas use in winter may be higher than in other seasons in China and South Korea, whereas in Japan, consumption is much higher in both winter and summer. However, unlike Japan and South Korea, China has much higher domestic natural gas production capacity, lower NGI capacity, lower price sensitivity, and more imports by pipeline, which makes NGI seasonality unclear in China. Therefore, the hypothesis of this paper is that seasonality is present in the Japanese and South Korean NGI markets but not in the Chinese NGI market.
The remainder of this paper proceeds as follows. Section 2 introduces the X-12-ARIMA model and describes and pre-processes the data. Section 3 analyzes the variations in and reasons for NGI seasonality. Section 4 presents a discussion, and Section 5 concludes the paper.
The X-12-ARIMA Method
In this paper, the X-12-ARIMA method was used to test for the existence of seasonality in the Japanese, South Korean, and Chinese NGI markets. Currently, the main seasonal adjustment methods used for estimating the seasonal component are X-12-ARIMA and TRAMO-SEATS. X-12-ARIMA belongs to the nonparametric seasonal adjustment methods. It allows the identification of the different components of the initial series (trend-cycle, seasonality, irregular) by applying linear filters, often called X-11-type filters, which cancel or preserve a well-defined component (trend-cycle or seasonal variation). The irregular component is thereafter represented by the residual of the decomposition. X-12-ARIMA contains the regARIMA module, which allows detecting and removing any undesirable effects in the series (outliers, calendar effects, etc.).
The TRAMO-SEATS program belongs to the parametric seasonal adjustment methods based on signal extraction. It is composed of two independent subroutines: the TRAMO program (Time series Regression with ARIMA noise, Missing observations and Outliers) and the SEATS program (Signal Extraction in ARIMA Time Series). The principle of the TRAMO program is to model the initial series using the univariate approach of Box and Jenkins via ARIMA or seasonal ARIMA models, while detecting, estimating, and correcting, as a preliminary step, the outliers, missing values, calendar effects, and structural changes likely to disturb the estimation of the model coefficients. The SEATS program uses signal extraction with filters derived from an ARIMA-type time series model that describes the behavior of the series.
When the data generating process (DGP) is not equivalent to the Airline model, X-12-ARIMA may produce better seasonal estimates for such series than TRAMO-SEATS, due to X-12-ARIMA's nonparametric nature and its tendency to match the seasonal filter of any DGP to the order of the moving averages of SI differences or ratios [14]. In addition, X-12-ARIMA provides procedures to examine a time series' trading day effects, holiday effects, and other calendar effects [15]. Therefore, we chose the X-12-ARIMA method. The X-12-ARIMA seasonal adjustment model is an enhanced version of the X-11 model. The enhancements include a clearer and more typical user interface and a variety of new diagnostic features to help users detect and correct deficiencies in seasonal and calendar effect adjustments [15]. The model also includes a variety of new tools to overcome adaptation problems in order to extend the time series to be adjusted [16]. As presented in Figure 2, a complete X-12-ARIMA programme process is divided into three stages.
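The core idea behind the X-11-type filters, estimating the trend-cycle with a centered moving average and averaging the resulting seasonal-irregular (SI) ratios by period, can be sketched as follows. This is a synthetic multiplicative monthly series, and only the first pass of what X-12-ARIMA iterates and refines:

```python
import random

random.seed(7)
# synthetic multiplicative monthly series: trend * seasonal factor * irregular
seasonal_true = [1.3, 1.2, 1.0, 0.9, 0.8, 0.8, 0.9, 0.9, 1.0, 1.0, 1.1, 1.2]
y = [(100.0 + 2.0 * t) * seasonal_true[t % 12] * random.uniform(0.97, 1.03)
     for t in range(96)]

def centered_ma12(series, t):
    # centered 12-term moving average: weights 1/24, 1/12 x 11, 1/24
    return (series[t - 6] / 2.0 + sum(series[t - 5:t + 6])
            + series[t + 6] / 2.0) / 12.0

trend = {t: centered_ma12(y, t) for t in range(6, len(y) - 6)}
si = {t: y[t] / trend[t] for t in trend}             # seasonal-irregular ratios
factors = []
for m in range(12):
    vals = [si[t] for t in si if t % 12 == m]
    factors.append(sum(vals) / len(vals))            # average SI ratio per month
# the averaged SI ratios approximately recover the seasonal pattern
```

X-12-ARIMA additionally extends the series with regARIMA forecasts, down-weights extreme SI values, and iterates the trend/seasonal estimation, but the decomposition into trend-cycle, seasonal factors, and irregulars reported in Section 3 rests on this mechanism.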
Series Stationarity
The raw monthly data on Chinese net natural gas imports from January 2003 to December 2013 were collected from the China Petroleum and Chemical Industry Association. Data for July and August 2003 were missing and were therefore replaced with the average for that year. Monthly Japanese and South Korean NGI data for the same period were collected from the International Energy Agency (IEA). All quarterly series were derived from the monthly data. As illustrated in Figures 3 and 4, NGIs in China, Japan, and South Korea display a trend of increasing volatility. South Korea, in particular, exhibits a marked seasonal pattern in which NGIs peak during the winter and bottom out during the summer. To statistically evaluate the stationarity of these series, the Augmented Dickey-Fuller (ADF) unit root test with intercept is used to test the null hypothesis that an observable time series is not stationary. The null hypotheses of non-stationarity are not rejected by the ADF test for the levels of the series (Table 1), indicating that the series are not stationary. The original series are non-stationary and are therefore transformed into stationary series by differencing prior to seasonal adjustment. Then, a (p, d, q) × (P, D, Q)s process for the original series is executed through X-12-ARIMA's regression function, where p and P signify the orders of the non-seasonal and seasonal autoregressive processes, respectively; q and Q signify the orders of the non-seasonal and seasonal moving averages, respectively; and d and D signify the orders of the non-seasonal and seasonal differences, respectively. The subscript "s" indicates the seasonal periodicity (i.e., for monthly data, s = 12, and for quarterly data, s = 4). The method for determining P (p) and Q (q) was drawn from Box and Jenkins' Time Series Analysis: Forecasting and Control (revised edition, Chapter 6, pp. 173-186) [17].
The null hypothesis of non-stationarity is not rejected by the ADF test for the first difference of China's quarterly series, but it is rejected for the second difference of the series (Table 1). However, the null hypotheses of non-stationarity are rejected for the first differences of the other series. Therefore, the differencing order is 2 for China's quarterly series and 1 for the other series.
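The effect of the differencing orders can be illustrated on a synthetic monthly series with a linear trend and a period-12 winter peak (a sketch; all values are made up):

```python
import random
from statistics import mean, pvariance

random.seed(6)
# synthetic monthly series: linear trend + winter (Jan/Feb/Dec) peak + noise
winter = {0, 1, 11}
y = [0.5 * t + (10.0 if t % 12 in winter else 0.0) + random.gauss(0.0, 1.0)
     for t in range(120)]

def diff(series, lag=1):
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

# D = 1 seasonal difference (s = 12) followed by d = 1 non-seasonal difference
stationary = diff(diff(y, lag=12), lag=1)
# trend and seasonality are removed: the result is roughly zero-mean noise
```

The seasonal difference removes the recurring pattern, and the subsequent first difference removes the remaining trend, which is exactly the role of D and d in the (p, d, q) × (P, D, Q)s specification above.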
At the end of a time series, seasonal adjustment leads to information loss, for which an out-of-sample forecast from the ARIMA model is usually employed [18]. To test the performance of the forecasts, the actual values were used as benchmarks and compared to the forecasted values. For China and Japan, the monthly forecasts outperform the quarterly forecasts in terms of the error ratio, whereas the opposite is true for South Korea (Table 2). The reason may be that the fluctuations of the irregular components in China's and Japan's monthly series are smaller than those in their quarterly series, while the opposite is true for South Korea.
The Variations of NGI Seasonality
In the X-11 procedure, an input series is decomposed into three components, namely, the trend-cycle, seasonal factors, and irregulars. The decompositions are plotted in Figures 5 and 6. These figures clearly indicate that all three countries' monthly and quarterly series exhibit a steep upward trend in their trend-cycle components and that their irregular factors fluctuate strongly; however, obvious differences also exist among the three sets.
The seasonal factors of Chinese NGIs are less regular than those of Japan and South Korea.When graphed by month, China's NGIs fluctuate especially irregularly; however, when graphed by quarter, the trend peaks during the summer.
Japan's monthly NGI series is characterized by a w-shape whose central peak occurs in August.When graphed by quarter, the Japanese NGI curve approximates a "~".
The Korean monthly and quarterly NGI series are both U-shaped, but the quarterly series exhibits a sharper trough.
For China, the seasonality tests (Table 3) indicate that stable seasonality is not present in the monthly NGI series, although moving seasonality is present. Moving seasonality, which represents a more obscure, fluctuating cycle, suggests that Chinese seasonality is less stable and identifiable. In addition, the adjustment quality test (Table 4) rejects monthly seasonality within China's monthly NGI series (Q value = 1.82, critical value = 1). For China's quarterly NGI series, the Q-statistic supports stable seasonality, but the combined seasonality test rejects it. Therefore, quarterly seasonality is rejected for Chinese NGIs.
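The stable-seasonality component of these combined tests is essentially a one-way ANOVA of the SI (seasonal-irregular) values across months. A minimal sketch on synthetic SI data (all values are made up for illustration):

```python
import random

random.seed(8)
months, years = 12, 8
# synthetic SI values with a genuine seasonal pattern plus noise
pattern = [1.2, 1.1, 1.0, 0.9, 0.9, 0.95, 1.0, 1.05, 1.0, 0.95, 0.95, 1.0]
si = {m: [pattern[m] + random.gauss(0.0, 0.05) for _ in range(years)]
      for m in range(months)}

grand = sum(v for vals in si.values() for v in vals) / (months * years)
means = {m: sum(si[m]) / years for m in range(months)}
# between-month vs. within-month variation
ss_between = years * sum((means[m] - grand) ** 2 for m in range(months))
ss_within = sum((v - means[m]) ** 2 for m in range(months) for v in si[m])
f_stat = ((ss_between / (months - 1))
          / (ss_within / (months * years - months)))
# a large F value rejects the null of "no stable seasonality"
```

When month-to-month differences in the SI values are small relative to the year-to-year noise, as in the Chinese series, the F statistic stays low and stable seasonality is rejected.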
For Japan, the combined seasonality tests indicate that seasonality is identifiable for both the monthly and quarterly series, and the adjustment quality is even better; that is, synthetically, both monthly and quarterly seasonality are accepted. The monthly and quarterly seasonality of the South Korean NGI series are also accepted by the seasonality and adjustment quality tests. NGI seasonality stems predominantly from seasonality in natural gas consumption. The customer base is divided into four categories: residential, commercial, industrial, and power generation [19]. Industrial customers tend to be less sensitive to temperature and display insignificant seasonal characteristics [19]. The proportion of commercial natural gas consumption is relatively low in China, Japan, and South Korea (Figure 7). Thus, in this article, the seasonal characteristics of natural gas consumption are attributed to residential consumption, power generation, and air conditioning.

Residential consumption: Customers use natural gas for space heating, which is known as the heating load. Consumers also use natural gas for water heating, drying, cooking, baking, etc., which is termed the base load. The heating load is weather dependent (especially on temperature), while the base load is not weather dependent and thus tends to be constant [19]. As temperatures drop during the winter, demand for natural gas for residential heating increases. In China, Japan, and South Korea, residential consumption is the main source of increased winter natural gas consumption.
Power generation: Peaks in electricity demand usually occur during the winter and summer [21-23]. China's electric power load is still largely reliant on coal and hydropower, while gas-fired electricity accounts for a minimal proportion, approximately 1.8 percent, of the total power generated [24]. In other words, natural gas is not used as a peak power resource in China, which implies that the seasonality of power generation is not a factor in Chinese NGIs. In Japan and South Korea, however, imported natural gas is predominantly utilized in power plants (Figure 7). In Japan, natural gas power supplements the peak load during both summer and winter [25]. In South Korea, natural gas power is mainly used to supplement the peak power demand in winter, as indicated in Figure 8 below.
Air conditioning: In Japan, gas air conditioning systems are increasingly favored due to benefits such as economic and energy efficiency, space conservation, and system operation, with the result that more natural gas is consumed during summer and winter.To mitigate the stress produced by peak electricity demand, the Japanese government supports the installation of gas air conditioning systems in office buildings, shopping centers, schools and hospitals, especially during the summer [14]; however, gas air conditioning is in its initial stages in China and South Korea.
Differences in Domestic Natural Gas Production Capacity
China's natural gas production has risen substantially over the past decade, more than tripling between 2002 and 2013 to 117.1 bcm [26]. China relies on domestic production to meet natural gas consumption, and Chinese foreign gas dependency is relatively low (Figure 9). China was a net gas exporter until 2007, and seasonal fluctuations in natural gas consumption are therefore met by adjusting domestic production. Japanese natural gas production, however, has been low and constant for over a decade due to declining reserves. In 2012, production was 3.3 bcm, a decline from an average of 5.2 bcm over the past 10 years [27]. Japan is dependent on foreign sources for over 95 percent of its natural gas consumption. Thus, given limited natural gas storage, seasonal fluctuations in consumption inevitably lead to significant seasonal fluctuations in NGIs. Like Japan, South Korea must import natural gas to meet its consumption, which has nearly doubled over the previous decade [28].
Differences in the NGI Capacity
China has a brief NGI history, and its import facilities are underdeveloped. In 2006, China built its first LNG receiving station, Dapeng LNG. With limited import capacity and rapid growth in demand, China's natural gas pipelines and LNG facilities operate at full capacity [30]. When demand increases, imports cannot adjust due to limited import capacity. Therefore, the seasonality of Chinese NGIs is unclear. Japan and South Korea have longer NGI histories and superior infrastructure. Japan began importing LNG from Alaska in 1969. Japan possesses 30 operating LNG import terminals with a total gas send-out capacity of 242.4 bcm/y, which exceeds demand [27]. South Korea began importing LNG in 1986 and, in 2012, possessed 197 LNG carriers, each with over 10,000 m³ of cargo capacity. As indicated in Table 5, Japanese and South Korean LNG import capacities exceed that of China, and these countries can therefore reflect seasonal changes in gas demand by adjusting their NGIs.
Differences in Natural Gas Price Sensitivity
Price sensitivity is mainly affected by the natural gas pricing mechanism. For a long time, China's natural gas prices were determined by the government, which left imported gas prices above domestic market prices. NGI companies have experienced considerable losses due to relatively low terminal NGI prices [33-35]. Therefore, companies lack incentives to adjust NGIs for seasonality. In contrast, the natural gas pricing mechanisms in Japan and South Korea are relatively reasonable. The Japanese government adheres to the principles of cost, corporate compensation, and fairness [36]. In addition, the specific pricing formula and method are proposed by the Tokyo Gas Company, which possesses some autonomy. In South Korea, wholesale and retail natural gas prices are adjusted with changes in the price of LNG. Wholesale natural gas prices are adjusted monthly to reflect fluctuations in the import price and exchange rate. Retail natural gas prices can be adjusted on a quarterly basis when exchange rate and LNG price changes exceed plus or minus three percent of the overall price [37].
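The South Korean retail rule described above (quarterly adjustment only when combined LNG-price and exchange-rate changes exceed plus or minus three percent) can be sketched as a threshold check. The full cost pass-through and the numbers below are illustrative assumptions, not details from the source:

```python
def adjust_retail_price(current_retail, base_cost, new_cost, threshold=0.03):
    """Quarterly retail adjustment: reprice only when the cost change
    (LNG price plus exchange-rate effects) leaves the +/-3% band.
    Full pass-through of the change is an illustrative assumption."""
    change = (new_cost - base_cost) / base_cost
    if abs(change) <= threshold:
        return current_retail              # inside the band: no adjustment
    return current_retail * (1 + change)   # outside the band: pass through

print(adjust_retail_price(100.0, 50.0, 51.0))            # → 100.0 (2% change, inside band)
print(round(adjust_retail_price(100.0, 50.0, 55.0), 2))  # → 110.0 (10% change, repriced)
```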
Differences in the Means of Transportation
Natural gas is transported primarily via pipelines and LNG tankers. NGI by pipeline is less flexible than NGI by LNG tanker, which can better accommodate seasonal variations in NGI. Over the past four years, China has ramped up imports of natural gas via pipelines, as shown in Figure 10 [38]. The first and second phases of China's first international natural gas pipeline connection, the Central Asian Gas Pipeline, began operations in 2010 and link to the second West-East pipeline at the Sino-Kazakh border. In September 2013, China began importing gas from Myanmar when the China-Myanmar gas pipeline became operational. However, Japan and South Korea do not have any international gas pipeline connections and must therefore import all gas via LNG tankers.
Outlook on NGI Seasonality
The results for NGI seasonality presented here cannot simply be extrapolated into the future.Although climate factors will remain stable, policies will change and produce changes in seasonality.
(1) China: clear NGI seasonality might emerge

As China's natural gas market matures, seasonal consumption characteristics will become clearer. Currently, natural gas consumption is concentrated in urban areas; however, with improving gas pipelines and other infrastructure and increasing rural incomes, the countryside might become one of China's main natural gas consumer markets. In addition, due to the energy shortage in the 1950s, the Chinese government drew a boundary at the Qinling Mountains-Huaihe River line to limit central heating, which was only permitted north of the boundary [39]. Today, the continued use of this so-called north-south heating line, a product of the planned economy era when energy was a scarce resource, to determine the use of central heating is unreasonable. In recent years, many southern cities have experienced between 90 and 100 days of winter temperatures averaging below 6 °C [39]. Therefore, demand for heating in southern cities has become more urgent. As gas heating in the countryside and South China gradually expands, wintertime natural gas consumption will increase further. To optimize the structure of natural gas consumption, in 2012 the Chinese government introduced a new natural gas use policy under which gas air conditioning was defined as a priority [40]. This policy could significantly increase both summer and winter demand for natural gas, but especially summer demand.
Since the 1990s, the Chinese government has implemented several natural gas price reforms. The pricing method for natural gas has evolved from government pricing, to a two-track system, to prices set with government guidance, and finally to the current market netback value method [35]. During the process of price reform, pricing mechanisms have gradually improved, and China's natural gas will realize market-oriented pricing in the near future. The rationalization of natural gas prices will gradually improve the enthusiasm of enterprises to import natural gas, which will promote the development and construction of import facilities.
In addition, China's dependence on foreign natural gas is rising (Figure 10) and, according to the BP Energy Outlook 2030, will reach over 40 percent by 2030 [41]. Based on these predictions, we boldly infer that Chinese NGIs will display significant seasonal trends in the future.
(2) Japan: relatively stable seasonal NGI characteristics

Natural gas represented over 27 percent of electric generation in 2010, before the Fukushima nuclear disaster. Post-Fukushima, the majority of lost nuclear generation has been replaced with natural gas power plants. The government currently plans to construct additional gas-powered generators, and three gas power plants with a capacity of 3.4 GW are scheduled to come online by 2016 [27]. Consequently, we anticipate that natural gas will continue to play a significant role in guaranteeing the supply of peak demand for electricity, especially during the summer. Additionally, because gas air conditioning has played an important role in mitigating peak electricity demand during the summer, the government will continue to support its development. A stable natural gas use policy will produce stable seasonal natural gas consumption characteristics, and accordingly, NGI seasonality will remain unchanged.
(3) South Korea: summer NGIs might increase

To narrow the gaps in seasonal gas demand attributable to higher winter demand, the government has increased efforts to control demand while ensuring a stable supply. On December 31, 2010, the Korean Ministry of Knowledge Economy released its tenth long-term plan to balance the supply and demand for gas over the period 2010-2024 [42]. The plan calls for expanding the seasonal gas tariff to decrease winter demand and increase summer demand. The plan also proposes the introduction of gas air conditioning systems to increase summertime demand for gas. Summer NGIs will increase as summer demand increases.
Based on existing policies and climate factors, we anticipate that the seasonal characteristics of NGIs in China, Japan, and South Korea might converge in the future such that all three countries' winter and summer NGIs would exceed spring and autumn imports.
(4) Effects of some important events on NGI seasonality

Over the long term, the international gas market is subject to considerable uncertainty. On one hand, after the Fukushima incident, Japan increased its demand for fossil fuels, primarily natural gas. However, the effects of the accident on energy security were not restricted to Japan; the accident itself resulted in the loss of public acceptability of nuclear power and led countries such as Germany and Italy to immediately shut down some of their nuclear reactors or abandon plans to build new ones [43]. On the other hand, because of the shale gas revolution, the US natural gas market will become more self-sufficient and independent of the major exporters. If America exports natural gas to more countries, it will alleviate the tight natural gas supply situation. For these reasons, it is difficult to judge the trend in gas prices. If gas prices increase, importers will face more severe energy security and economic risks and would likely take positive measures to balance seasonal imports, so NGI seasonality may become insignificant. Conversely, if gas prices decrease, importers will pay less attention to tackling seasonal imbalances, which may lead to more pronounced NGI seasonality.
Measures to Protect the Sustainability of NGI
Based on the seasonal differences among China, Japan, and South Korea and the reasons for those differences, some suggestions are provided to promote the sustainable development of NGI.
(1) Construction of import facilities

Due to its limited import capability, China's NGIs did not exhibit seasonal fluctuations consistent with consumption. During the peak demand season (winter), imports did not increase, which produced gas shortages. Thus, to meet demand fluctuations, China should accelerate the construction of import infrastructure, including natural gas pipelines, LNG tankers, and LNG receiving terminals. Compared to China, Japan and South Korea possess stronger import capabilities but remain constrained in terms of how much LNG each can receive based on berthing, ship size, and other infrastructure limitations [27]. Thus, their import capabilities should be adjusted accordingly.
(2) Expansion of inter-season swaps

Inter-season swaps are executed by multiple natural gas buyers facing different demand patterns to reduce seasonal supply and demand gaps [44]. Such swaps primarily benefit buyers and can manage seasonal imbalances in NGIs, reduce storage costs, and improve the utilization efficiency of existing take-or-pay cargoes. South Korea currently engages in frequent swap transactions due to its tighter natural gas demand-supply balance during the winter. China and Japan should also seek inter-season swaps with other countries to meet their own seasonal demand.
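The mechanics of such a swap can be sketched as matching one buyer's per-period surplus under a flat take-or-pay contract against the other buyer's deficit. The quarterly demand figures and contract levels below are hypothetical:

```python
def swap_volumes(demand_a, contracted_a, demand_b, contracted_b):
    """Per-period volume A can release to B (a_to_b) and B to A (b_to_a),
    each capped by the sender's surplus and the receiver's deficit."""
    a_to_b = [min(max(contracted_a - da, 0), max(db - contracted_b, 0))
              for da, db in zip(demand_a, demand_b)]
    b_to_a = [min(max(contracted_b - db, 0), max(da - contracted_a, 0))
              for da, db in zip(demand_a, demand_b)]
    return a_to_b, b_to_a

# hypothetical quarterly demand (bcm) against flat contracts of 10 bcm/quarter
buyer_a = [9, 10, 12, 9]    # summer-peaking buyer
buyer_b = [12, 8, 7, 11]    # winter-peaking buyer
a_to_b, b_to_a = swap_volumes(buyer_a, 10, buyer_b, 10)
print(a_to_b)  # → [1, 0, 0, 1]: A's winter surplus covers B's winter deficit
print(b_to_a)  # → [0, 0, 2, 0]: B's summer surplus covers A's summer peak
```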
(3) Construction of natural gas storage

Importing countries with significant seasonality should establish natural gas storage for utilization during peak demand. Gas storage capacity should be determined by both the extent of seasonal differences in import demand and the elasticity of import capabilities. Larger gaps in seasonal import demand and inelastic import capacities imply the need for more peak gas storage, and vice versa. Optimized natural gas storage would greatly reduce the seasonal variations in NGIs and improve stability.
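The sizing rule stated here (larger seasonal gaps and less elastic imports call for more storage) can be sketched by simulating the storage inventory implied by a flat supply against a seasonal demand profile and sizing working gas by the full swing of that trajectory. The monthly demand profile is hypothetical:

```python
def required_storage(monthly_demand, flat_supply):
    """Working-gas capacity needed so a completely inelastic (flat) supply
    can serve a seasonal demand cycle: the full swing of the inventory
    trajectory (injections in the low season, withdrawals at the peak)."""
    inventory, trajectory = 0.0, [0.0]
    for d in monthly_demand:
        inventory += flat_supply - d
        trajectory.append(inventory)
    return max(trajectory) - min(trajectory)

# hypothetical monthly demand (bcm), winter-peaking; supply = annual average
demand = [14, 13, 11, 8, 7, 7, 8, 8, 9, 11, 13, 15]
print(round(required_storage(demand, sum(demand) / 12), 2))  # → 15.0 bcm working gas
```

A more elastic import capability would shave the peaks directly and shrink this number, matching the "and vice versa" in the text.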
(4) Interoperation of remaining import facilities

Seasonal differences can cause import facilities to operate under conditions of shortage or oversupply. Importers should cooperate to obtain corresponding use rights. An importer facing a facility shortage could pay equipment fees to an importer with surplus capacity, thereby not only meeting its own NGI demand but also improving the utilization efficiency of facilities on the supply side.
(5) Development of short-term spot trading

International trade in natural gas has been dominated by long-term contracts, in which buyers are obliged to import a fixed quantity of natural gas over a 15-25 year period regardless of supply or demand fluctuations [4]. Most long-term natural gas contracts include destination clauses that restrict buyers from reselling cargo to third parties. These clauses pose barriers to buyer profit maximization through resale. However, spot purchases could increase importer choices, inject liquidity into markets, and allow buyers to hedge their financial and physical risks. Currently, the NGIs of China, Japan, and South Korea are dominated by long-term contracts [4], so it is advisable for them to exploit spot purchases to cope with the demand fluctuations that long-term contracts cannot handle.
(6) Establishment of a natural gas futures market

It would benefit countries with significant demand to establish a natural gas futures market, especially countries that experience seasonal fluctuations in demand for NGIs and for which price risk is an important factor. By establishing a pre-determined price, these countries avoid paying peak prices during peak import times. In addition, the first countries to establish a futures market could negotiate favorable futures contracts according to their import seasonality, form their own trade benchmark prices, improve their bargaining power, and enhance their discourse power in the international gas market.
Conclusions
The purpose of this study was to detect, illustrate, and compare NGI seasonality in three major Northeast Asian countries, namely, China, Japan, and South Korea. The X-12-ARIMA model is used to analyze monthly and quarterly data. The following results are observed: (1) NGIs in China do not exhibit identifiable monthly or quarterly seasonality, while NGIs in South Korea and Japan are clearly seasonal. (2) In Japan, NGIs exceed average levels in January, February, July, August, September, and December; that is, Japan usually imports more natural gas during the winter and summer. (3) In South Korea, NGIs exceed average levels in January, February, March, and December. In other words, South Korea typically imports more natural gas during the winter.
From the above results, we find that (i) the seasonality of NGI stems mainly from the seasonality of natural gas consumption; (ii) domestic natural gas production capacity, NGI capacity, price sensitivity, and means of transportation can also affect NGI seasonality (for example, these factors together obscure NGI seasonality in China); and (iii) the seasonality of NGI might change due to policy factors (for example, Japanese summer NGIs are much higher due to the policy promoting gas air conditioning systems).
The above results are also useful for designing energy policies to promote healthy and sustainable NGI development.In brief, a deeper understanding of the seasonal variations of NGI, gained by analyzing the seasonal patterns over the year, may help national governments to prevent natural gas shortages and decrease the cost of NGI by designing suitable NGI policies that take into account the seasonal patterns found in the countries.For example, China and Japan could seek inter-season swaps with other countries to meet their own seasonal demand.
We must emphasize that importers should note that the long-term seasonality of NGIs might change due to policy factors. For example, the South Korean government proposed the expansion of seasonal tariffs, which might increase summer NGIs. Besides, over the long term, global warming caused by greenhouse gas emissions is also likely to dramatically alter natural gas consumption habits. For example, "warmer winters" might significantly reduce natural gas consumption for heating during winter, and NGIs would decrease accordingly. Of course, this is beyond the scope of this paper, and we propose this conjecture to provoke thought.
Figure 5. Additive components of the monthly series by country.

Figure 6. Additive components of the quarterly series of China, Japan, and South Korea.

Figure 8. Monthly natural gas consumption for power generation in South Korea [5].

Figure 10. LNG imports and pipeline imports in China.

Figure A3. The correlogram of Japan's monthly NGI series.

Figure A5. The correlogram of South Korea's monthly NGI series.

Figure A6. The correlogram of South Korea's quarterly NGI series.
[Table: ADF unit root test statistics (critical values and ADF values) for the log level, first seasonal difference, and second seasonal difference of the NGI series. ** Denotes significance at the 5% level.]
Cyclic nucleotide-independent protein kinases from rabbit reticulocytes. Purification and characterization of protease-activated kinase II.
A cyclic nucleotide-independent protein kinase, protease-activated kinase II, which incorporates up to four phosphates into 40 S ribosomal protein S6, has been purified from the postribosomal supernatant of rabbit reticulocytes. Protease-activated kinase II was purified as an inactive proenzyme by chromatography on DEAE-cellulose, phosphocellulose, Sephadex G-150, and hydroxylapatite. The enzyme was activated in vitro by limited digestion with trypsin or chymotrypsin. No other mode of activation for protease-activated kinase II in vitro was identified. The proenzyme had a molecular weight of 80,000 as measured by gel filtration; following tryptic digestion, the molecular weight of the activated protein kinase was 45,000-55,000. Protease-activated kinase II required Mg2+ for activity but was inhibited by other divalent cations, monovalent cations, and fluoride ion. ATP was the phosphoryl donor in the phosphorylation reaction; GTP had no effect. In vitro, multiple phosphorylation of S6 was observed with some phosphate incorporated into S10. Phosphorylation of S6 by protease-activated kinase II has been shown to be stimulated in serum-starved 3T3-L1 cells by insulin (Perisic, O., and Traugh, J. A. (1983) J. Biol. Chem. 258, 9589-9592) and in reticulocytes by altering the pH of the incubation medium (Perisic, O., and Traugh, J. A. (1983) J. Biol. Chem. 258, 13998-14002).
Two cyclic AMP-independent protein kinases have been isolated from rabbit reticulocytes.
These enzymes have been resolved from the cyclic AMP-regulated activities by ion exchange chromatography on DEAE-cellulose and phosphocellulose and assayed using casein as substrate.
For simplicity, the casein kinases were numbered in order of elution from DEAE-cellulose. Casein kinase I (CK I) bound to phosphocellulose and to sulfopropyl-Sephadex at low ionic strength at pH 6.8. Casein kinase II (CK II) did not adhere to phosphocellulose in the absence of monovalent cations, but bound when the concentration of these ions was raised to 0.25 M. This differential chromatography of CK II on phosphocellulose was used in the purification of the enzyme. Both CK I and CK II activities were purified further by hydroxylapatite chromatography. In the phosphorylation of casein, CK I preferentially utilized ATP over GTP. The Km values for ATP and GTP were determined to be 13 μM and 900 μM, respectively. CK II utilized both ATP and GTP in the phosphotransferase reaction with a Km for ATP of 10 μM and 40 μM for GTP.
Analysis of the highly purified CK II by polyacrylamide gel electrophoresis in sodium dodecyl sulfate showed three major bands of molecular weight 42,000, 38,000, and 24,000. The 24,000 molecular weight band was self-phosphorylated when the enzyme was incubated with magnesium and either ATP or GTP. In a similar experiment, a single protein band of 37,000 daltons was observed with CK I which was self-phosphorylated by incubation with magnesium and ATP. Velocity sedimentation experiments yielded a sedimentation coefficient of 3.2 S for CK I and 7.5 S for CK II. Preincubation of CK II with [γ-32P]ATP followed by sucrose gradient centrifugation yielded a single, enzymatically active peak of 7.5 S which coincided with the radioactivity. A molecular weight of 144,000 ± 10% was estimated for CK II by sedimentation equilibrium, which in combination with gel electrophoresis data suggests a heterogeneous subunit structure.
A number of enzyme activities which catalyze the post-translational phosphorylation and dephosphorylation of proteins have been detected in diverse eukaryotic cells (1). These include cyclic nucleotide-regulated and cyclic nucleotide-independent protein kinases. The cyclic nucleotide-independent protein kinases are a class of enzymes whose function is not under direct control of either cAMP or cGMP and are not controlled by the same regulatory proteins as the cAMP-regulated enzymes.

* This research was supported by United States Public Health Service Grant GM 21424. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
Cyclic nucleotide-independent protein kinases which phosphorylate casein have been partially purified from a variety of tissues, including rat liver (2-9), human lymphocytes (10), calf brain (11), dogfish skeletal muscle (12), mouse plasmacytoma (13), and rabbit reticulocytes (14) and erythrocytes (14, 15). Highly purified casein kinase activities have been reported recently from yeast (16), Novikoff ascites tumor cells (17), and rat liver (18). The physiological function of these enzymes remains a subject for speculation. This paper deals with the purification and properties of two cytoplasmic cyclic nucleotide-independent protein kinases from rabbit reticulocytes.
Lysate Preparation-The preparation of the reticulocyte lysate has been described previously (19).

Protein Kinase Assay-The assay for protein kinase was carried out as previously described (20).

Determination of Km for ATP and GTP-Initial velocity data were obtained at 30°C under optimal assay conditions as described above and in Table II.

Gel Electrophoresis-Gel electrophoresis was carried out in the presence of sodium dodecyl sulfate (24) in a slab gel apparatus (25) as previously described (20). Phosphorylase a, bovine serum albumin, creatine kinase, carbonic anhydrase, and ribonuclease were included as standards.
Their molecular weights were taken as 94,000, 68,000, 40,000, 30,000, and 13,700, respectively (26). The gel was stained, destained, and autoradiographed as described previously (20). Protein and radioactivity were quantified by scanning the gel and autoradiogram with a densitometer (EC Apparatus Corp.). Radioactivity was also quantified by excision and counting as described previously (20).

and histone (Fig. 1). Two peaks of cyclic AMP-independent protein kinase activity were detected which phosphorylated casein and used ATP as the phosphate donor. matic activity was routinely detected. CK II has been previously reported in rabbit reticulocytes (formerly identified as IIIc) (14, 29), and an activity with a similar elution pattern has also been described in rabbit erythrocytes (14, 15) (Fig. 2A). A minor amount of casein kinase activity did not adhere to the phosphocellulose resin. This enzyme comprised only 10% of the total casein kinase activity applied to the phosphocellulose column. It had chromatographic properties which were similar to CK II and undoubtedly corresponded to a small contaminating amount of that enzyme. CK II behaved somewhat anomalously on phosphocellulose. In the absence of monovalent cations, about 90% of the protein kinase activity did not adhere to the resin. The fraction which did not bind to the initial phosphocellulose column was dialyzed against Buffer B containing 0.25 M NaCl and applied to a second phosphocellulose column. At this salt concentration the enzyme adhered to the resin and was eluted as a single peak of activity from 0.70 M to 0.85 M NaCl (Fig. 2B). This anomalous behavior on phosphocellulose was used to enhance the purification of the enzyme. At this point the cAMP-dependent activities were well resolved from the cyclic AMP-independent casein kinase, since both type I and type II cAMP-dependent kinases were not bound to phosphocellulose under these conditions.
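Molecular weights from SDS gels, as in the methods above, are typically read off a least-squares line of log10(MW) against relative mobility for the marker proteins. Only the standard molecular weights below come from the text; the Rf values are hypothetical:

```python
import math

def mw_from_mobility(standards, rf_unknown):
    """Estimate a band's MW from a least-squares fit of
    log10(MW) vs. relative mobility (Rf) for the marker proteins."""
    xs = [rf for rf, _ in standards]
    ys = [math.log10(mw) for _, mw in standards]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return 10 ** (slope * rf_unknown + intercept)

# standard MWs from the text; the Rf values are hypothetical
standards = [(0.10, 94000), (0.25, 68000), (0.48, 40000),
             (0.62, 30000), (0.95, 13700)]
print(round(mw_from_mobility(standards, 0.53)))  # a band near the CK I subunit
```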
Chromatography of CK I on Sulfopropyl-Sephadex-CK I adhered to the resin and appeared in fractions eluting between 0.3 M and 0.5 M NaCl (Fig. 3). Binding of the protein kinase to the column was very dependent on pH, as the enzyme did not bind to the resin when the pH was raised to 7.1. CK I was extremely unstable after this step in the purification, and it was pooled and concentrated immediately to help stabilize the activity. (Fig. 4). After a brief dialysis against Buffer B, the enzymes were concentrated by batch chromatography from 1-ml hydroxylapatite columns. They were stored at 4°C in Buffer B which contained 0.4 M potassium phosphate, pH 6.8. A typical purification of CK I and CK II is summarized in Table I.
Analysis of CK I and CK II by Polyacrylamide Gel Electrophoresis-CK I yielded a major band of molecular weight 37,000 when electrophoresed on polyacrylamide gels containing sodium dodecyl sulfate. Preincubation with [γ-32P]ATP followed by electrophoresis and autoradiography resulted in one phosphorylated band corresponding to the 37,000 molecular weight protein (Fig. 5A). CK II was analyzed by polyacrylamide gel electrophoresis in sodium dodecyl sulfate, and three major bands with molecular weights of 42,000, 38,000, and 24,000 were observed, as shown in Fig. 5B. When the enzyme was incubated with [γ-32P]ATP and analyzed by gel electrophoresis followed by autoradiography, radioactive phosphate was detected and associated exclusively with the 24,000-dalton protein. Only one-third as much phosphate was incorporated during the 30-min incubation period when equal concentrations of GTP were substituted for ATP. Variable amounts of CK II ranging from 6 μg to 30 μg were electrophoresed, stained with Coomassie blue R-250, and scanned with a densitometer (547 nm) to determine the relative amount of protein in each band. Integration of the traces and correction for dye binding on a weight basis (31) yielded an average molar ratio of 1.3:1.0:1.6 for the 42,000, 38,000, and 24,000 molecular weight subunits, respectively.
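The conversion behind the reported 1.3:1.0:1.6 molar ratio is to divide each band's mass-proportional stain intensity by its subunit molecular weight and normalize. The mass fractions below are hypothetical values chosen to land near the reported ratio:

```python
def molar_ratio(mass_fractions, subunit_mws, ref_index=1):
    """Dye binding is roughly mass-proportional, so moles = mass / MW;
    normalize to one subunit (here the 38,000 band) to get the ratio."""
    moles = [m / mw for m, mw in zip(mass_fractions, subunit_mws)]
    return [round(x / moles[ref_index], 1) for x in moles]

mass = [0.40, 0.28, 0.29]          # hypothetical densitometry, weight basis
mws = [42000, 38000, 24000]        # subunit MWs from the gels
print(molar_ratio(mass, mws))      # → [1.3, 1.0, 1.6]
```

The near-2:1 molar excess of the 24,000 band over the other two is what motivates the 1:1:2 stoichiometry discussed for CK II.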
Analytical Ultracentrifugation-CK II was centrifuged to experimental equilibrium (based on the identical distributions obtained after 16 and 21 h of centrifugation).
The long column technique of Chervenka (28) was employed.

Fig. 6. A solution containing 5 μg of CK II was layered at low speed on a buffered 0.5 M NaCl column as described under "Methods" and the rotor accelerated to 59,780 rpm. The direction of sedimentation was from left to right. Scans were made at 4-min intervals with the monochromator set at 236 nm. A, initial absorbance trace after layering; B, after 66 min at speed.

full 3 mm of the column, and the logarithmic plot was linear over 70% of the distribution. A value of 144,000 ± 14,000 was calculated for the apparent weight average molecular weight of CK II. The stated error arises largely from the uncertainty in the value chosen for the apparent isopotential specific volume. In the absence of density data, we have assumed a value of 0.74 ± 0.02 ml/g. Near the bottom of the column, a limiting value of about 220,000 was calculated from the slope of the logarithmic plot (32). Sedimentation velocity experiments with CK II in the optical centrifuge showed a single peak sedimenting with an s20,w value of 7.5 (Fig. 6). Insufficient quantities of CK I were available to perform experiments with the optical centrifuge and, therefore, velocity experiments had to be done with sucrose density gradients. Five velocity experiments were performed which yielded a value of 3.2 ± 0.15 S for the sedimentation coefficient of CK I (Fig. 7A). In these experiments, the s value of CK II which had been self-phosphorylated with ATP was also determined. A value of 7.5 ± 0.3 S was calculated relative to ovalbumin and catalase standards, and the enzymatic activity coincided with the radiolabel incorporated during the self-phosphorylation process (Fig. 7B). Therefore, we concluded that phosphorylation of CK II did not effect a change in the molecular weight of this enzyme.
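The equilibrium molecular weight comes from the standard relation M = 2RT (d ln c / d r²) / ((1 − v̄ρ)ω²). In this sketch only the partial specific volume (0.74 ml/g) is taken from the text; the rotor speed, solvent density, temperature, and slope are hypothetical stand-ins:

```python
import math

def mw_from_equilibrium(slope_lnc_vs_r2, rpm, vbar=0.74, rho=1.02, temp_k=293.15):
    """Weight-average MW from sedimentation equilibrium:
    M = 2*R*T*slope / ((1 - vbar*rho) * omega**2), in cgs units
    (R in erg/(mol*K), slope in cm^-2, vbar in ml/g, rho in g/ml)."""
    R = 8.314e7                       # gas constant, erg mol^-1 K^-1
    omega = 2 * math.pi * rpm / 60.0  # angular velocity, rad/s
    return 2 * R * temp_k * slope_lnc_vs_r2 / ((1 - vbar * rho) * omega ** 2)

# hypothetical low-speed equilibrium run: slope 0.79 cm^-2 at 10,000 rpm
print(round(mw_from_equilibrium(0.79, rpm=10000)))  # lands near the reported 144,000
```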
Taken together, the gel electrophoresis data and the centrifuge data suggest a structure for the CK II enzyme which would be composed of two 24,000 molecular weight subunits and one each of the 42,000 and 38,000 molecular weight subunits.
Determination of Km for ATP and GTP-Lineweaver-Burk plots were constructed for CK I and for CK II when ATP and GTP were used as phosphate donors (Fig. 8). A summary of the Km values for the enzymes is given in Table II.
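The Lineweaver-Burk construction reduces to fitting a line to double-reciprocal data: 1/v = (Km/Vmax)(1/[S]) + 1/Vmax. A sketch on noise-free synthetic velocities generated with the CK II value for ATP (Km = 10 μM; the Vmax and substrate points are hypothetical):

```python
def lineweaver_burk(substrate, velocity):
    """Least-squares fit of 1/v vs 1/[S]; slope = Km/Vmax, intercept = 1/Vmax."""
    xs = [1.0 / s for s in substrate]
    ys = [1.0 / v for v in velocity]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    vmax = 1.0 / intercept
    return slope * vmax, vmax  # (Km, Vmax)

km_true, vmax_true = 10.0, 100.0            # Km from the text; Vmax hypothetical
S = [2.0, 5.0, 10.0, 20.0, 50.0]            # uM ATP
v = [vmax_true * s / (km_true + s) for s in S]
km, vmax = lineweaver_burk(S, v)
print(round(km, 1), round(vmax, 1))  # → 10.0 100.0
```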
DISCUSSION
Previous studies on the cAMP-independent protein kinases from the postribosomal supernatant fraction from rabbit reticulocytes had shown the presence of a single peak of activity eluting from DEAE-cellulose (14). This peak was termed IIIc and was identical with CK II described here. CK I was not observed when an ammonium sulfate precipitation preceded the DEAE-cellulose chromatography step. Thus, this is the initial report on the second cytoplasmic casein kinase activity, although previous studies had shown at least two casein kinase activities were associated with the protein-synthesizing complex. Centrifugation through 0.5 M NaCl dissociated these latter activities from the complex (29). Kumar and Tao (15) have reported two cAMP-independent protein kinase activities from rabbit erythrocytes.
These enzymes were similar chromatographically to CK II and utilized both ATP and GTP in the phosphotransferase reaction. The Km values for ATP and GTP differed significantly from those reported here; however, this may be due to the fact that their studies were carried out at pH 9.0. We have observed that CK II aggregated when the monovalent salt concentration was less than 0.5 M. This may account for the very high molecular weight values observed by Kumar and Tao (15) and suggests that the two peaks may be different aggregation states of CK II.
The anomalous behavior of CK II on phosphocellulose has been noted. When the small amount of activity (usually less than 10%) which binds under conditions of low salt was carried through the sulfopropyl-Sephadex and hydroxylapatite steps, a high molecular weight contaminant was found to co-chromatograph.
This contaminant was not present in the phosphocellulose flow-through fraction which contained the majority of the CK II activity. Therefore, we have routinely included phosphocellulose chromatography at low salt in our procedure, even though only a small overall purification was realized.
Both CK I and CK II lose activity rapidly in the latter stages of purification which we attribute to the general decline in protein concentration.
It is important, therefore, to maintain stock solutions of these enzymes at the highest practical concentrations of protein. We have found that this was accomplished most satisfactorily by batch elution from small (1 to 2 ml) hydroxylapatite columns (95 to 100% yield). A subunit molecular weight of about 37,000 was determined for CK I by gel electrophoresis.
Assuming a globular shape for CK I, the s value of 3.2 obtained via centrifugation in sucrose translates to about 37,000 (33). This suggests that CK I is a single-subunit enzyme. The molecular weight data from gel electrophoresis and the ultracentrifuge are consistent with a heterogeneous subunit structure for CK II. An enzyme with subunit molecular weights of 42,000, 38,000, and 24,000 in a ratio of 1:1:2, as suggested by gel electrophoresis, would yield a native molecular weight of 128,000 ± 6,500. This would be consistent with both the molecular weight of 144,000 ± 14,000 obtained by equilibrium centrifugation and the 7.5 S velocity coefficient. A similar structure has been proposed for a casein kinase activity purified from Novikoff ascites tumor cells (17) and rat liver (18). Our finding that only the smaller subunit (24,000) is self-phosphorylated is also similar to that found by others (17). The radiolabeling of the native 7.5 S complex confirmed the gel electrophoresis data, which suggested that it is indeed a subunit of CK II. Attempts to further purify CK II by binding the enzyme to adenosine-agarose and ATP-Sepharose or by chromatography on Sephadex G-100 showed no alteration in the subunit pattern. When the time course for the self-phosphorylation of CK II was examined, 1.7 mol | 2018-04-03T04:10:10.922Z | 1983-11-25T00:00:00.000 | {
"year": 1983,
"sha1": "026734d2a7dfb3537647d77cb653c4360f75661d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/s0021-9258(17)44014-2",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "e95816c16445e22d3c3cb66b5cbc605fbda37426",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
27582129 | pes2o/s2orc | v3-fos-license | EFFECT OF BUTANOLIC FRACTION OF Desmodium adscendens ON THE ANOCOCCYGEUS OF THE RAT
The chemical composition of plants can vary according to factors such as soil and time of collection. Desmodium adscendens (Sw.) D.C. var. adscendens (Papilionaceae) is a plant employed in the treatment of asthma in Ghana, Africa. Studies have shown that the butanolic extract inhibits contraction of the ileum and trachea in guinea pigs. In Mato Grosso, this plant is used only in the treatment of ovarian inflammation. The objective of this work was to verify whether the plant found in Mato Grosso also relaxes smooth muscle and to understand its action better. Cumulative application of the butanolic fraction relaxed the maintained contraction of the isolated rat anococcygeus induced by high potassium, but not that induced by phenylephrine. Relaxation was not altered by methylene blue. The butanolic fraction reduced in a concentration-dependent way the maximum response of the concentration-response curve to calcium in the anococcygeus muscle. The results suggest that the butanolic fraction acts, at least partly, through blockade of voltage-sensitive Ca2+ channels.
INTRODUCTION
The degree of contraction of cells of smooth muscle determines the lumen in blood vessels and airways as well as the propulsive function of the gastrointestinal and genitourinary tract. Abnormalities in the contraction are related to a variety of clinical conditions including hypertension and asthma. Substances that act in contractility control of smooth muscle are useful in the treatment of disorders due to contraction abnormalities in such muscle.
In Brazil, Desmodium adscendens is easily found in the Northeast, Center-West, and Southeast regions (Pio Corrêa, 1984).
In Mato Grosso, the plant is used only in the treatment of ovary inflammation (Guarin Neto, 1996). There it is known as "amores do campo" or "carrapichinho", and in São Paulo and Rio Grande do Sul as "pega-pega".
It has been suggested that the plant's mechanism of action involves depletion of histamine stores (Addy & Awumey, 1984); inhibition of the cyclooxygenase and lipoxygenase enzymes (Addy & Burka, 1988); an increase in the synthesis of the prostaglandins PGE2 and PGF2α; opening of BKCa channels (MacManus et al., 1993); or inhibition of cytochrome P450 NADPH-dependent arachidonic acid metabolism (Addy & Schartzman, 1992).
As the chemical composition of the plant varies according to factors such as collection time and soil, and as the action of the plant is not yet very clear, the objective of this study was to determine whether the butanolic fraction of the aqueous extract of Desmodium adscendens found in Cuiabá, Mato Grosso, relaxes smooth muscle, and to understand its action better by comparing its effects with those of an activator of soluble guanylyl cyclase (sodium nitroprusside), an activator of adenylate cyclase (forskolin), a blocker of voltage-dependent Ca2+ channels (nifedipine), and an opener of K+ channels (cromakalim).
Collection and identification of the botanical material
The Desmodium adscendens was collected at the campus of the Federal University of Mato Grosso (UFMT), Cuiabá, Mato Grosso.
Taxonomic confirmation was accomplished by Prof. Dr. Germano Guarin Neto of the Botany and Ecology Dept., Biology Institute/UFMT. A voucher specimen (n. 10930) was deposited in the Central Herbarium of UFMT.
Preparation of the crude aqueous extract and butanolic fraction of the aqueous extract of Desmodium adscendens
Dried and ground leaves of Desmodium adscendens were extracted for 48 hours with water in a Soxhlet apparatus; the extract was concentrated in a rotary evaporator and freeze-dried.
The aqueous solutions of the freeze-dried material were extracted three times, each time with 3 volumes of water-saturated n-butanol; the n-butanol extracts were pooled and concentrated in a rotary evaporator to a brownish syrup, which was freeze-dried (MacManus et al., 1993).
For use, the freeze-dried n-butanol fraction (EBDA) was dissolved in distilled water.
Preparation of the anococcygeus muscle
Male Wistar rats (200-220 g) were sacrificed, and the anococcygeus muscle was removed through an incision in the anus, exposing the distal portion of the rectum.
Isotonic contractions were recorded with a kymograph under a resting load of 1 g.
The preparations were allowed to equilibrate for 60 minutes at 37 °C, under continuous aeration, in Tyrode solution changed every 15 minutes. Diphenhydramine (1 mM) was also added to the Tyrode solution.
Effects of EBDA, cromacalin, forskolin, nifedipine, and sodium nitroprusside on the maintained contraction evoked by phenylephrine and 80 mM K +
After equilibration, the preparations were contracted with phenylephrine (10^-5 M) or 80 mM potassium. After the contraction had stabilized, tissues were exposed to cumulative concentrations of EBDA (0.01-3 mg/ml), cromakalim, forskolin, nifedipine, or sodium nitroprusside. The relaxant response to each concentration was allowed to reach a stable level before the next addition was made.
Effect of methylene blue on the relaxation produced by EBDA and sodium nitroprusside on the maintained contraction evoked by phenylephrine and 80 mM K+
Cumulative concentrations of EBDA (0.1-3 mg/ml) and sodium nitroprusside (3 × 10^-8-10^-4 M) were added in the absence or presence of methylene blue (10^-3 M), pre-incubated for 20-30 minutes.
Effect of EBDA on the concentration-response curve to calcium evoked by 80 mM K+
Concentration-response curves for Ca2+ were obtained before and after pre-incubation of the preparation with EBDA (0.1-1 mg/ml) for 40 minutes. Normal Tyrode solution was replaced by Tyrode solution containing 80 mM potassium without calcium. At this point, calcium concentrations varying from low to high (10^-4 M-1 M) were cumulatively applied.
Solutions and data analysis
The drugs used were methylene blue, diphenhydramine, phenylephrine, sodium nitroprusside, forskolin, nifedipine, and cromakalim (Sigma Chemical Company, St. Louis, Missouri, USA).
To produce the stock solutions, nifedipine was dissolved in absolute ethanol and forskolin in dimethyl sulfoxide (DMSO). These, like all the other drugs, were then dissolved and diluted in Milli-Q water.
Potassium-rich solutions were prepared by substituting the appropriate amount of KCl for NaCl; Ca2+-free solutions were prepared by omitting CaCl2.
Results were expressed as percentage relaxation. The relaxation response was defined as the percentage reduction of the maximum contraction induced by phenylephrine or 80 mM K+ after the application of EBDA. The greatest relaxation obtained at the largest concentration, or the value closest to it, was taken as the maximum relaxation.
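The definition above can be made concrete with a short numeric sketch; the tension values and concentrations are hypothetical, in arbitrary units, not data from this study. The interpolation mirrors the 50%-relaxation summary reported in Table 1:

```python
def relaxation_percent(baseline_contraction, contraction_after):
    """Relaxation expressed as % reduction of the maintained contraction."""
    if baseline_contraction <= 0:
        raise ValueError("baseline contraction must be positive")
    return 100.0 * (baseline_contraction - contraction_after) / baseline_contraction

def conc_for_half_relaxation(concs, relax):
    """Linearly interpolate the concentration producing 50% relaxation."""
    points = list(zip(concs, relax))
    for (c0, r0), (c1, r1) in zip(points, points[1:]):
        if r0 < 50.0 <= r1:
            return c0 + (c1 - c0) * (50.0 - r0) / (r1 - r0)
    return None  # 50% relaxation never reached

# Hypothetical cumulative-addition trace (tension in arbitrary units):
baseline = 1.0
responses = [0.95, 0.80, 0.45, 0.10]   # tension after each EBDA dose
concs = [0.1, 0.3, 1.0, 3.0]           # mg/ml, cumulative
curve = [relaxation_percent(baseline, r) for r in responses]
```

For this made-up trace the curve rises monotonically and 50% relaxation falls between the second and third additions.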
The concentration-response curve to Ca2+ was constructed as a percentage of the maximum contraction before and after the addition of different EBDA concentrations.
The results represent mean ± standard error values. Differences between the calculated parameters were evaluated by Student's t test (unpaired), with p < 0.05 taken as indicating a significant difference.
RESULTS

Table 1 lists the concentrations of EBDA, cromakalim, forskolin, nifedipine, and sodium nitroprusside required to produce 50% relaxation of the contraction produced by phenylephrine and high potassium.
Almost complete relaxation of the maintained contraction evoked by high potassium was observed with forskolin, sodium nitroprusside (Fig. 2A), and EBDA (Fig. 2B).
Effect of methylene blue on the relaxation produced by EBDA and sodium nitroprusside on the maintained contraction evoked by phenylephrine and 80 mM K+
Methylene blue did not alter the effect of EBDA (Fig. 3B) on the contraction induced by 80 mM potassium, but it shifted to the right the relaxation induced by sodium nitroprusside in the maintained contraction induced by phenylephrine and high potassium (Fig. 3A).
DISCUSSION
The butanolic fraction of the extract of Desmodium adscendens found in Cuiabá (EBDA) is also capable of relaxing smooth muscle, since it completely relaxed, in a concentration-dependent manner, the maintained contraction of the anococcygeus induced by 80 mM K+ (Fig. 2B). Furthermore, in studies of acute toxicity in mice carried out by N'Gouemo et al. (1996), no toxic effect of this extract was observed.
This raises the possibility of offering the local population, particularly those with low incomes, a therapeutic alternative, mainly for the treatment of very common disturbances of the respiratory tract, a frequent regional problem during the dry season.
Smooth muscle relaxation can occur through: (a) an increase in membrane permeability to potassium; (b) mobilization of calcium ions; (c) an increase in cyclic nucleotides (cGMP and cAMP); (d) direct action on the contractile proteins; and (e) reduction of sensitivity to calcium (Cox, 1990).
Regarding increased membrane permeability to potassium, MacManus et al. (1993) characterized three active principles present in the butanolic fraction of the plant that are potent openers of large-conductance calcium-activated potassium channels (BKCa). Salsoline, a 6-hydroxy tetrahydroisoquinoline derivative, also seems to be present in the butanolic fraction of the plant (Asante-Poku et al., 1988). The tetrahydroisoquinoline structure of salsoline has been associated with blockade of voltage-dependent calcium channels (Frank King et al., 1988).
One characteristic that distinguishes substances acting by opening potassium channels is little or no effect on contractions induced by potassium in the 40-80 mM range (Hollingsworth et al., 1987).
To verify the possibility that the extract acts through mechanisms other than the opening of potassium channels, we adopted a simple experimental model: analysis of the effect of EBDA on the maintained contraction evoked by a solution containing 80 mM K+. In this experiment, we observed that EBDA completely relaxed, in a concentration-dependent manner, the maintained contractions of the anococcygeus induced by potassium (Fig. 2B).
According to Gibson et al. (1994b), contractions induced by high potassium concentrations are due to Ca2+ entry through voltage-dependent Ca2+ channels, because the responses are abolished in Ca2+-free media or in the presence of the Ca2+ channel blocker nifedipine.
EBDA can therefore probably relax the maintained contraction induced by potassium through modulation of voltage-dependent Ca2+ channels.
To test this hypothesis, another experimental procedure analyzed the relationship between Ca2+ influx and the contraction it produced. The protocol consisted of pre-incubating the muscle in a Ca2+-free solution, stimulating it with a Ca2+-free high-potassium solution, and gradually increasing the Ca2+ concentration in the bath.
In this study, EBDA reduced in a concentration-dependent way the maximum response of the Ca2+ concentration-effect curve in the anococcygeus (Fig. 4), indicating that one of the actions probably involved in the relaxation induced by EBDA is blockade of voltage-dependent Ca2+ channels.
Voltage-dependent Ca2+ channel blockers, such as D600, verapamil, and nifedipine, do not act on the contraction of the anococcygeus induced by noradrenaline and phenylephrine, as observed by Villa et al. (1985), Iravani & Aboo Zar (1993), and Silva et al. (1993).
In the anococcygeus, phenylephrine initiates a powerful contraction, which is abolished in a Ca2+-free environment but is insensitive to nifedipine, suggesting that it is mediated by Ca2+ entry into the cell through a nonselective cation current dependent on the increase of intracellular Ca2+ promoted by IP3 (Gibson et al., 1994b).
The results obtained in this study are in accordance with those described in the literature, since nifedipine, like EBDA, was not able to relax the maintained contraction of the anococcygeus induced by phenylephrine (Figs. 1A and 1B, respectively). However, both were effective in relaxing, in a concentration-dependent way, the maintained contraction of the anococcygeus induced by 80 mM K+ (Figs. 2A and 2B). The present data suggest that the relaxing effect of EBDA on this muscle involves another mechanism, probably blockade of voltage-dependent Ca2+ channels.
The debate on the mechanism involved here remains open. It is possible that potassium channels are involved in the relaxation induced by EBDA, since substances that open BKCa channels have been isolated from this plant (MacManus et al., 1993). Further studies are necessary to test this possibility. The exact mechanism by which the extract of Desmodium adscendens produces relaxation of smooth muscle remains to be identified.
Regarding the involvement of EBDA in other stages of muscular contraction, the results obtained so far do not allow us to discard the possibility of its acting on smooth muscle through reduction of elevated Ca2+ levels via activation of Ca2+-ATPase, through direct action on the contractile proteins, or through reduction of sensitivity to Ca2+.
Regarding cAMP, it has been demonstrated that elevation of its levels is related to relaxation of the anococcygeus (Mirzazadeh et al., 1991; Wendt & Raymond, 1996). In this study, we can discard the possibility that EBDA, at the concentrations used, acts through an increase in cAMP, since forskolin relaxed the maintained contractions induced by both potassium and phenylephrine (Figs. 1A and 2A), whereas EBDA relaxed only the former.
One of the tissues in which nitrergic relaxation has been broadly studied is the anococcygeus muscle, a nonvascular smooth muscle. In this tissue, the relaxation induced by nerve stimulation is blocked by inhibitors of nitric oxide synthesis (Gibson et al., 1994a, b).
Stimulation of the non-adrenergic non-cholinergic (NANC) nerves, or the application of exogenous nitrovasodilators such as sodium nitroprusside, produces a strong relaxation in this tissue. It is commonly accepted that sodium nitroprusside produces relaxation of vascular smooth muscle through the liberation of nitric oxide, which activates cytosolic guanylyl cyclase, increasing cGMP levels. In this line, Zhang (1993) demonstrated that sodium nitroprusside was capable of relaxing the trachea of guinea pigs; this effect was reduced by methylene blue, an inhibitor of guanylate cyclase, and potentiated by zaprinast, an inhibitor of phosphodiesterase.
The results show that the relaxation induced by sodium nitroprusside in the contractions of the anococcygeus induced by phenylephrine and by potassium was shifted to the right by methylene blue, suggesting participation of cGMP in the relaxation induced by sodium nitroprusside (Fig. 3A).
This result further suggests that the relaxation produced by the butanolic fraction is probably not related to an increased intracellular concentration of cGMP, considering its inability to relax the contraction of the anococcygeus induced by phenylephrine (Fig. 3B).
In summary, the results obtained demonstrate that the EBDA of Desmodium adscendens leaves found in Mato Grosso is also capable of relaxing nonvascular smooth muscle in the rat anococcygeus.
They also provide indirect evidence that, at least in part, the effect of the extract is probably related to blockade of voltage-dependent Ca2+ channels. | 2017-08-15T10:12:41.027Z | 2002-05-01T00:00:00.000 | {
"year": 2002,
"sha1": "024b95b91c58c8683155bb3c2e0e0bddfe2bd59a",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/bjb/v62n2/10871.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "024b95b91c58c8683155bb3c2e0e0bddfe2bd59a",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
16186694 | pes2o/s2orc | v3-fos-license | Is Angiography Still the Best Method to Stratify Stroke Risk in Symptomatic Atherosclerotic Carotid Plaque?
The degree of vessel lumen narrowing is an independent predictor of ischemic stroke. New developments in carotid plaque morphology imaging (MR, CT) may bring new insights into the relationship between carotid atherosclerotic disease and stroke risk. Our aim is to review the stroke risk in a symptomatic patient with moderate carotid stenosis using CT imaging and histopathology. A 72-year-old patient with a low-ABCD2-score TIA and moderate left internal carotid stenosis (50% by carotid ultrasound) was discharged with optimized medical therapy. Four months later, he presented an ischemic stroke in the left frontal area. Carotid angiography showed a 60% stenosis in the left internal carotid artery with a regular surface. CT plaque imaging detected a thin fibrous cap with calcification and an intraplaque hemorrhage (high-risk plaque). These findings were confirmed in the histopathological study of the atherosclerotic plaque performed after endarterectomy. After 1 year of follow-up, the patient had returned independently to his daily activities. We propose, in this study, the inclusion of noninvasive plaque imaging in the evaluation of acute TIA with moderate carotid stenosis to better select patients with a higher risk of stroke recurrence.
INTRODUCTION
Carotid atherosclerotic plaque has been identified since our ancient ancestors, and in the recent modern era it has been evaluated for the prevention of catastrophic stroke [1]. The report from the North American Symptomatic Carotid Endarterectomy Trial (NASCET) in 1991 stimulated new interest in carotid stenosis and confirmed angiography as the gold-standard method to stratify symptomatic patients for endarterectomy [2].
The degree of vessel lumen narrowing is an independent predictor of ischemic stroke, particularly in symptomatic patients with severe carotid stenosis (≥70%). Carotid endarterectomy is also associated with a moderate stroke-risk reduction in patients with symptomatic moderate carotid stenosis (50%-69%). However, treatment decisions in these cases should also weigh other important issues, including exceptional surgical skill [3]. In addition, optimal medical treatment has improved and has become a topic of equal importance for managing carotid disease, especially in patients with asymptomatic atherosclerotic plaque [4].
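The percent-stenosis figures discussed here follow the NASCET convention, which compares the minimal residual lumen with the normal distal internal carotid artery (ICA) diameter. A small sketch; the diameters used in the example are illustrative, not measurements from this case:

```python
def nascet_percent_stenosis(residual_lumen_mm, distal_ica_mm):
    """Degree of stenosis by the NASCET convention:
    (1 - minimal residual lumen / normal distal ICA diameter) * 100.
    """
    if distal_ica_mm <= 0:
        raise ValueError("distal ICA diameter must be positive")
    return 100.0 * (1.0 - residual_lumen_mm / distal_ica_mm)

def nascet_category(percent):
    """Symptomatic-stenosis bands discussed in the text."""
    if percent >= 70.0:
        return "severe (endarterectomy clearly beneficial)"
    if percent >= 50.0:
        return "moderate (moderate risk reduction)"
    return "below surgical threshold"
```

For example, a 2 mm residual lumen against a 5 mm distal ICA gives 60% stenosis, which falls in the moderate (50%-69%) band, as in this case.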
New developments in carotid plaque morphology imaging, particularly MR or CT, may bring new insights to the relationship between carotid atherosclerotic disease and stroke risk [5,6].
Our aim is to review the stroke risk in a symptomatic patient with moderate carotid stenosis according to the plaque surface morphology and the degree of stenosis on carotid angiography, and to compare the carotid plaque morphology classification obtained by CT imaging with that obtained by histopathology.
CASE
A 72-year-old previously hypertensive patient arrived at the emergency department with a sudden onset of right-sided weakness lasting 10 minutes. He was evaluated using the TIA assessment protocol and obtained a low ABCD2 score. The only remarkable finding was a proximal moderate left internal carotid stenosis (50%) detected by carotid ultrasound examination. He was discharged and referred to the neurological outpatient clinic with optimized medical therapy.
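The ABCD2 score mentioned here sums points for Age, Blood pressure, Clinical features, Duration, and Diabetes (range 0-7). A sketch of the standard scoring; the example inputs in the test are hypothetical patients, not this case's clinical data:

```python
def abcd2(age, sbp, dbp, unilateral_weakness, speech_disturbance,
          duration_min, diabetes):
    """ABCD2 score for early stroke risk after TIA (0-7).

    Age >= 60: 1; BP >= 140/90 at assessment: 1;
    unilateral weakness: 2, speech disturbance without weakness: 1;
    duration >= 60 min: 2, 10-59 min: 1; diabetes: 1.
    """
    score = 0
    if age >= 60:
        score += 1
    if sbp >= 140 or dbp >= 90:
        score += 1
    if unilateral_weakness:
        score += 2
    elif speech_disturbance:
        score += 1
    if duration_min >= 60:
        score += 2
    elif duration_min >= 10:
        score += 1
    if diabetes:
        score += 1
    return score
```

Scores of 0-3 are conventionally treated as low risk, which is the band the case report assigns to this patient.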
Four months later, he presented a recurrence of similar symptoms without complete recovery (NIHSS = 2) and arrived at the hospital outside of the therapeutic window for reperfusion. Brain MRI demonstrated an ischemic stroke lesion in the corona radiata and frontal cortex, visible in FLAIR and T2-weighted imaging. Carotid angiography showed a 60% stenosis in the left proximal internal carotid artery with a regular surface (Figure 1(A)). CT plaque imaging (Figure 1(B)) detected a thin fibrous cap with calcification and an intraplaque hemorrhage (Figure 1(C)), classified as a high-risk plaque according to the American Heart Association plaque classification [7]. These findings were confirmed in the histopathological study of the atherosclerotic plaque (Figure 1(D)) performed after endarterectomy. After 1 year of follow-up, the patient had returned independently to his daily activities (modified Rankin score = 1).
DISCUSSION
Carotid ultrasound is usually the first-line examination to evaluate carotid disease in patients with TIA; in our patient it detected a degree of stenosis at the lower limit of the range (50%) for a clinical decision in favor of carotid endarterectomy.
Carotid angiography performed after the recurrent ischemic event did not add significant new information. We hypothesized that the stenosis grade obtained (60%) might have changed during the interval between the two ischemic events owing to dynamic modification of the plaque structure, turning it into a high-risk plaque. CT plaque imaging identified features beyond luminal stenosis or plaque surface and represents a new noninvasive imaging technique that might reliably assess plaque vulnerability in patients with symptomatic carotid disease presenting with an acute ischemic event. Based on the histological American Heart Association criteria, the classification allows noninvasive categorization of carotid plaques into distinct lesion types (I-VIII). Atherosclerotic plaques that are prone to rupture owing to their intrinsic composition, such as a large lipid core, thin fibrous cap, and intraplaque hemorrhage, are associated with subsequent thromboembolic ischemic events, as occurred in our patient.
CT plaque imaging classification worked less well for classifying lipid-rich necrotic cores and hemorrhage, probably because the range of densities associated with these components overlaps with the densities associated with connective tissue; however, it showed a good correlation with the histological classification when only large lipid cores and large hemorrhages were considered [5]. On the other hand, MRI also has some limitations in acute stroke evaluation and requires a specific phased-array surface coil for plaque examination [6].
Timing of carotid endarterectomy after an ischemic event may largely influence outcome. Therefore, we propose the inclusion of noninvasive CT plaque imaging in the evaluation of acute TIA with moderate carotid stenosis to better select patients with a higher risk of stroke recurrence. | 2016-09-14T22:35:13.896Z | 2013-11-05T00:00:00.000 | {
"year": 2013,
"sha1": "4a53d7a9b5ae659f0f8bc66ceb51524583d3611d",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=39579",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "4a53d7a9b5ae659f0f8bc66ceb51524583d3611d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253835041 | pes2o/s2orc | v3-fos-license | Review on Virtual Inertia Control Topologies for Improving Frequency Stability of Microgrid
HIGHLIGHTS

RESs connected through inverters have no physical inertia compared with synchronous generators. VSG provides the required inertia under sudden disturbances in a microgrid. The VSG controller emulates a droop controller to decrease the frequency deviation.

ABSTRACT
Renewable energy sources (RESs), such as solar and wind power, offer new technologies for meeting the world's energy requirements. Distributed generators (DGs) based on RESs have no rotational mass and damping effects, in contrast to the traditional power system with synchronous generators (SGs). The increasing penetration level of DGs based on RESs therefore reduces the inertia and damping of the grid, degrading its dynamic performance and stability. A solution to improve the frequency stability of such a system is to provide virtual inertia by using virtual synchronous generators (VSGs), which can be implemented with short-term energy storage, a power inverter, and a suitable control mechanism. The VSG control mimics the dynamics of a rotating SG and enhances the power system's stability. This paper presents an overview of various virtual inertia topologies, VSG concepts, control techniques, and VSG applications. Finally, the VSG challenges and future research directions are discussed.
Introduction
The supplies of conventional fossil energy have been considerably depleted and are inadequate to satisfy the needs of sustainable human-society growth. The benefits of distributed RESs are flexibility, freedom from pollution, and wide distribution, making them an appropriate form of energy that complies with the principle of sustainable development [1,2]. Another advantage is their major role in supporting the electricity network in remote and rural areas [3]. In recent years, many RESs, such as wind farms and photovoltaics (PV), have been integrated into the conventional power system. RESs based on power electronic converters (PECs) are expected to have a significant impact on large power grids in the near future [4,5], as PEC-based generation will substitute for substantial portions of the traditional SGs in power systems [4]. RES-based PECs have no rotating mass or damping effects, in contrast with traditional power systems based on SGs. The SG, on the other hand, plays an important role in grid stability owing to the inherent kinetic energy of its rotating mass and its damping properties (due to mechanical friction and losses) [6].
Most RESs connect to the grid via a power electronic device (inverter) [7]. Although the conventional grid-connected inverter responds quickly, it has almost no moment of inertia and is unable to provide the requisite voltage and frequency support [8,9], making it hard to provide the necessary inertia and damping to the grid [10,11]. The increase in RESs will therefore lead to serious stability problems for the power system. As a result, new technology is urgently needed to allow new energy sources to participate in regulating and modulating the power grid frequency [12,13].
DGs containing RES systems can contribute to frequency support by introducing virtual inertia to the grid, much as the SG in a conventional power system provides frequency support and decreases the rate of change of frequency (ROCOF) during a disturbance through its rotating mass [14]. The way to stabilize such a grid is to provide virtual inertia. DGs/RESs can generate virtual inertia by combining an energy storage system (ESS) with a power inverter and a suitable control mechanism. This arrangement is called a virtual synchronous generator (VSG) [6,15] or virtual synchronous machine (VSM) [16]. The units then behave as SGs.
The IEEE Working Group initially suggested the VSG idea. Researchers then continued investigating the exterior characteristics of analogous synchronous machines [17] and virtual inertia and frequency-control strategies [18]. However, the VSG is still at the development stage, and its theoretical and practical foundations must still mature. The existing literature mainly presents various control technologies and methods of VSG implementation [19,20]. In [21][22][23][24][25], the basic idea of VSG is introduced, enabling the grid-connected inverter to mimic the operating properties of an SG. However, the current-controlled VSG is equivalent to a current source, so it is difficult for it to provide the system with voltage and frequency support. Therefore, a voltage-controlled VSG technique is suggested in [26] to overcome the deficiencies of current-controlled VSG. The essence of voltage-controlled VSG strategies is to mimic the rotor inertia and frequency-modulation features of SGs so as to increase the system's frequency stability, while reactive power and voltage control are primarily used to maintain a stable output voltage [27].
Moreover, several techniques demonstrated in [23,28,29] develop models and controllers that simulate the different dynamics of an SG. These models and control strategies accomplish the operation and self-synchronization of an SG without a phase-locked loop (PLL). VSG control therefore still needs to mature, both theoretically and practically, compared with the traditional SG.
The rest of the paper is structured as follows: Section 2 introduces different virtual inertia topologies. The concept of VSG is discussed in section 3. In section 4, the VSG control is explained. Section 5 describes the VSG application. The problems and future studies are explained in section 6. Finally, section 7 presents the conclusion of this paper.
Virtual Inertia Topologies
A VSG is a combination of RESs, control algorithms, an ESS, and power electronics that emulates the inertia of an SG in a conventional power system [30]. The VSG algorithm is the main component of the system, interfacing between the storage units, the generation units, and the utility grid.
The VSG concept was presented as a method to tackle the stability problems of a power grid that includes RESs [30,31]. PV systems interface to the grid through DC-to-AC converters (inverters); such a system does not contribute inertia. Much research has therefore developed concepts and control methods to mimic the damping and inertia characteristics of an SG. According to the literature, the main model ideas of the different topologies are similar, but each implementation differs from the others. A few topologies use mathematical equations to simulate the SG's behavior (synchronous-generator-model based); a few reproduce the inertial response of an SG using the swing equation (swing-equation based); and in others, the DG units respond to changes in the grid frequency (frequency-power-response based) [32]. In this section, several essential VSG topologies are discussed.

VSYNC Topology

Figure 1 shows the VSG control system developed by the VSYNC group (a project within the 6th European Research Framework Program) [33][34][35], where energy sources are connected to the grid through an inverter with an LCL filter. The PLL is used to measure the grid frequency and its rate of change. In addition to inertia emulation, the PLL also generates the dq rotating reference-frame phase angle and the reference frequency for the inverter control. The reference current is determined by the control block from the state of charge (SOC) of the energy storage, the grid voltage, the frequency deviation, and the reference voltage [34][35][36].
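The frequency-power-response idea can be sketched as a generic control law combining a droop term on the frequency deviation with a derivative (ROCOF) term that emulates inertia. The form and gains below are illustrative assumptions, not the VSYNC project's published equation:

```python
def inertia_emulation_power(delta_f, rocof, k_droop, k_inertia):
    """Extra active-power command for a frequency-power-response VSG.

    delta_f: frequency deviation from nominal (Hz)
    rocof:   rate of change of frequency (Hz/s)
    k_droop, k_inertia: illustrative placeholder gains (per-unit power
    per Hz and per Hz/s), not published VSYNC constants.
    A falling frequency (negative delta_f, negative rocof) yields a
    positive power injection, opposing the frequency drop.
    """
    return -k_droop * delta_f - k_inertia * rocof
```

The droop term mimics primary frequency control, while the ROCOF term mimics the kinetic-energy release of a rotating mass; the sign convention makes the unit inject power when the frequency falls.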
Synchronverters
A synchronverter is equivalent to an SG with a small capacitor bank connected in parallel with the stator [37]. The frequency droop algorithm regulates the output power of the inverter in the same way as it regulates the output power of an SG [38]. The structure of a synchronverter is shown in Figure 2. The output voltage and current signals from the inverter are used to solve the differential equations of the controlling unit. In addition, the damping factor and moment of inertia can be configured to meet specific requirements; these parameters, however, are extremely important in terms of system stability [39]. The mechanical equation is given by

J dω/dt = T_m − T_e − D_p Δω

where J is the moment of inertia, T_m and T_e are the mechanical and electromagnetic torque, ω is the virtual angular speed, and D_p is a damping factor. The electromagnetic torque can be found by [39]

T_e = M_f i_f ⟨i, sin θ⟩

where M_f is the mutual inductance, i_f is the DC (field) current, i is the vector of inductor currents, and θ is the virtual angle. The reference generated voltage of the controller is given by

e = ω M_f i_f sin θ

Ise Lab VSG Topology
Figure 3 illustrates the VSG model produced by Osaka University's ISE lab. This model is based on the swing equation [40,41]:

P_in − P_out = J ω dω/dt + D Δω

where P_in is the input power (prime mover), P_out is the output power of the VSG, J is the rotor moment of inertia, ω is the angular speed of the rotor, and D is the damping factor. The frequency and power measurement unit receives the voltage at the common connection point (CCP) and measures the inverter's output current. From these values it then calculates the inverter's active output power and the utility grid frequency [42,43].
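As an informal illustration of how virtual inertia shapes the frequency response, the sketch below applies forward-Euler integration to the ISE-lab swing equation P_in − P_out = J ω dω/dt + D Δω. All parameter values are illustrative assumptions, not values from the paper; the point is only that a larger virtual inertia J reduces the initial rate of change of frequency (RoCoF) after a load step.

```python
# Minimal numerical sketch (illustrative, not from the paper): forward-Euler
# integration of the ISE-lab swing equation
#     P_in - P_out = J * w * dw/dt + D * (w - w_ref),
# showing that a larger virtual inertia J reduces the initial rate of change
# of frequency (RoCoF) after a load step. All parameter values are assumed.
import math

W_REF = 2 * math.pi * 50.0  # 50 Hz grid, rad/s

def simulate_swing(J, D, p_in, p_out, t_end=5.0, dt=1e-3):
    """Return the trajectory of the virtual rotor speed w(t) in rad/s."""
    w, traj = W_REF, []
    for _ in range(int(t_end / dt)):
        dw_dt = (p_in - p_out - D * (w - W_REF)) / (J * w)
        w += dw_dt * dt
        traj.append(w)
    return traj

# Sudden 100 W shortfall: the VSG covers it from stored (virtual) kinetic energy.
low_j = simulate_swing(J=0.5, D=20.0, p_in=1000.0, p_out=1100.0)
high_j = simulate_swing(J=5.0, D=20.0, p_in=1000.0, p_out=1100.0)
# The speed dip shortly after the step is roughly ten times smaller with the larger J.
```

The damping term D also fixes the quasi-steady-state deviation at (P_out − P_in)/D, which is why both trajectories approach the same final offset even though their initial RoCoF differs.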
Kawasaki Heavy Industries (KHI) Topology
The KHI group developed the inverter controller using an algebraic SG model [44], using the phasor diagram of an SG to produce the reference current in this VSG model, as shown in Figure 4. This ensures the desired operation under any load, especially when the load is nonlinear or unbalanced. The reference phase and voltage of the virtual machine are generated by an automatic voltage regulator and a governor unit implemented in a digital controlling unit [45]. These references are then used to generate the reference currents through an algebraic phasor representation. Figure 4 depicts a straightforward model of the KHI lab topology.
The Institute of Electrical Power Eng. (IEPE) Topologies
The IEPE group developed a VSG design named Virtual Synchronous Machine (VISMA) [46][47][48]. The VISMA-1 design is based on the principle that a simplified synchronous machine model provides the reference current from the measured grid voltage. The reference current is obtained from the stator equation:

i = (1/L_s) ∫ (e − u − R_s i) dt

where e is the voltage generated in the stator winding, u is the grid voltage, and R_s and L_s are the resistance and inductance of the stator winding. The mechanical equation of the rotor is given by

J dω/dt = T_m − T_e − Dω

where T_e and T_m are the electrical and mechanical torque, respectively, J is the moment of inertia, D is the damping factor, and ω is the angular velocity. For example, in [45] the VISMA technique used a d-q-based architecture to mimic the synchronous generator; this configuration is implemented in the digital control unit of a power inverter, which copies the dynamics of the SG as shown in Figure 5 [36]. In addition, the IEPE group developed VISMA-2 [31]. Instead of using the grid voltage to feed the SM algorithm, this method produces the reference voltage as an output using the grid current. In addition, the hysteresis controller was replaced by a PWM controller with a constant switching frequency, making it easy to choose the filter circuit. This new technology is highly effective for asymmetric loads and sharp grid changes.
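The VISMA-1 current-reference idea can be sketched numerically. The toy below (all parameter values are assumptions for illustration) integrates di/dt = (e − u − R_s i)/L_s with forward Euler, so that a small amplitude difference between the virtual EMF e and the measured grid voltage u produces a power-exchange reference current.

```python
# Illustrative sketch (assumed parameter values): the VISMA-1 idea of deriving
# a reference current from the measured grid voltage through the stator
# equation  e - u = R_s*i + L_s*di/dt, integrated here with forward Euler.
import math

R_S, L_S = 0.1, 5e-3     # assumed stator resistance (ohm) and inductance (H)
F = 50.0                 # grid frequency, Hz

def reference_current(e_amp, u_amp, t_end=0.2, dt=1e-5):
    """Integrate di/dt = (e - u - R_s*i)/L_s and return current samples."""
    i, samples = 0.0, []
    for k in range(int(t_end / dt)):
        t = k * dt
        e = e_amp * math.sin(2 * math.pi * F * t)   # virtual internal EMF
        u = u_amp * math.sin(2 * math.pi * F * t)   # measured grid voltage
        i += (e - u - R_S * i) / L_S * dt
        samples.append(i)
    return samples

# A 5 V amplitude difference drives a steady-state current of about
# 5 / sqrt(R_s^2 + (2*pi*F*L_s)^2) ~ 3.2 A in this toy setup.
ref_i = reference_current(e_amp=325.0, u_amp=320.0)
```

In the real VISMA-1 scheme this reference current is then tracked by the inverter (originally via a hysteresis controller, later replaced by PWM in VISMA-2).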
In Table 1, a comparison between all topologies of virtual inertia is presented. Finally, the advantages and disadvantages of each inertia emulation topology are summarized in Table 2.
VSG Concept
The VSG concept is based on combining the advantages of dynamic converter technologies with the static and dynamic operating properties of electromechanical SGs [49]. Figure 6 gives the basic description of the VSG concept. The three distinct elements of a VSG are the power electronic converters (PECs), the ESS (battery, supercapacitor, etc.), and the control system that governs the amount of power injected into, or absorbed from, the energy storage. This power enables the power system to avoid frequency variations, as an SG does [50,51].
If the DG and ESS are considered the prime mover (input torque), the DC/AC converter corresponds to the electromechanical power transformation between stator and rotor, the fundamental component of the converter midpoint voltage represents the VSG's electromotive force, and the resistance and inductance of the filter unit represent the stator winding impedance. A VSG is usually placed between a DC source and the grid [50]. The DC source together with the VSG algorithm operates as an SG by providing inertia and damping support to the grid system; this is achieved by controlling the inverter power in inverse proportion to the rotor speed. The VSG can absorb or inject (charge or discharge) power due to the presence of an ESS. VSG control strategies of higher or lower order can be developed through small changes to the voltage-source converter control system. While all VSG strategies control active and reactive power, each has its own control frame. Furthermore, different VSG control strategies have been introduced to enable the inverter to emulate the properties of an SG [41,52]. As a result, every VSG implementation entails an approximately direct mathematical model of an SG [4]. Many solutions presented in the literature show that the selection of a particular SG model and its parameters is largely a design choice. However, every VSG implementation mimics the inertial properties and damping characteristics of electromechanical oscillations. The transient and sub-transient dynamics of an SG model can be included or ignored, depending on the required level of fidelity in replicating the SG dynamics [42].
The basic SG swing equation is a key component of the VSG, and it is stated as [52]:

J dω/dt = T_m − T_e − D(ω − ω_ref)
dδ/dt = ω − ω_ref

where J is the rotor inertia, D is the damping factor, T_m is the mechanical torque, T_e is the electromagnetic torque, ω and ω_ref are the virtual and reference angular frequency, respectively, and δ is the power angle.
Furthermore, to approximate the electromagnetic properties of an SG for the VSG, the stator equation of the SG is usually simulated without considering the electromagnetic coupling between rotor and stator, which can be expressed as [52]:

e = u + R i + L di/dt

where L and R are the inductance and resistance of the LC filter, and u and i are the voltage and current of the inverter, respectively. The VSG active power loop emulates the primary frequency modulation, damping, and inertia of the SG to determine the reference phase and the frequency of the modulated signal, while the reactive power loop simulates the SG's voltage regulation and determines the amplitude of the modulated signal. The VSG active power equation, comprising a simple VSG governor, is as follows [52]:

P_in − P_out = J ω dω/dt + D(ω − ω_ref)

where P_in and P_out are the input power and output electrical power of the inverter, respectively. Moreover, the reactive power control loop of the VSG is described by [53]:

K dE/dt = Q_ref − Q_out + D_q(V_ref − V_out)

where K is the inertia coefficient of the reactive power loop, E is the virtual electromotive force, Q_ref and Q_out are the reference and output reactive power, respectively, V_out and V_ref are the output and rated voltage amplitude, respectively, and D_q is the Q-V droop coefficient.
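The reactive power loop equation K dE/dt = Q_ref − Q_out + D_q(V_ref − V_out) can be illustrated with a single discrete update step. The coefficient values below are assumptions chosen only to show the sign behavior: a reactive power deficit or a terminal voltage sag both push the virtual electromotive force E upward.

```python
# Minimal sketch of one update step of the reactive power loop
#     K * dE/dt = Q_ref - Q_out + D_q * (V_ref - V_out),
# with assumed coefficients K and D_q. The virtual EMF E rises while the VSG
# delivers less reactive power than requested or the terminal voltage sags.
def reactive_loop_step(E, q_ref, q_out, v_ref, v_out, K=1000.0, Dq=50.0, dt=1e-3):
    dE_dt = (q_ref - q_out + Dq * (v_ref - v_out)) / K
    return E + dE_dt * dt

E = 311.0
E_next = reactive_loop_step(E, q_ref=500.0, q_out=300.0, v_ref=311.0, v_out=305.0)
# Reactive deficit (200 var) and voltage sag (6 V) both push E upward:
# dE/dt = (200 + 50*6)/1000 = 0.5, so E grows by 0.0005 per millisecond step.
```

The first-order coefficient K plays the same role for the voltage loop that the inertia J plays for the frequency loop: it slows the response of E to reactive power and voltage errors.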
VSG Control
In the power grid, converter control is a very important factor affecting grid stability. Droop control is the most common control scheme in use today [14]. Droop control is divided into active power-frequency control (P-f) [54,55] and reactive power-voltage control (Q-V) [56,57]. P-f control adjusts the phase angle (δ), and Q-V control adjusts the voltage amplitude (E) of the reference potential, where changes in E adjust the reactive power and changes in δ adjust the active power. VSG control algorithms can accordingly be classified into the two groups below.
P-F Control
The active power control of a VSG is a copy of the governor unit of an SG; Figure 7 shows the control diagram. Grid frequency stability is maintained through active power: under normal conditions the generated power is balanced with the load power, and when the system is disturbed, this active power balance is destroyed and the grid frequency fluctuates [13]. The general equation for P-f droop is

P_m = P_ref + D_p(f_ref − f)

where P_m and P_ref are the mechanical and active power references of the VSG, respectively, f_ref and f are the reference and actual frequency of the system, and D_p is the P-f droop coefficient. In the active power adjustment control of the VSG, the output torque is controlled by the change of power. The active loop thus mimics the primary frequency modulation characteristic of an SG, and its output serves as the reference phase angle of the inverter voltage [58].
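The P-f droop law P_m = P_ref + D_p(f_ref − f) is a one-line computation. The sketch below uses an assumed droop gain (sized so a 1 Hz sag adds 20 kW); both the gain and the power levels are illustrative, not values from the paper.

```python
# Sketch of the P-f droop law P_m = P_ref + D_p*(f_ref - f), with an assumed
# droop gain sized so that a 1 Hz frequency sag adds 20 kW (illustrative values).
def pf_droop(p_ref, f_meas, f_ref=50.0, d_p=20_000.0):
    """Return the mechanical power reference in watts."""
    return p_ref + d_p * (f_ref - f_meas)

nominal = pf_droop(p_ref=100_000.0, f_meas=50.0)   # no deviation -> 100 kW
sagging = pf_droop(p_ref=100_000.0, f_meas=49.8)   # 0.2 Hz sag -> +4 kW
```

In a full VSG this droop output feeds the swing equation as P_in, so the inertia term shapes how quickly the extra power actually appears.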
Q-V Control
As illustrated in Figure 8, a classic droop control method is used for voltage control. The grid voltage is kept stable by the reactive power [33]; if this balance is disrupted, the grid voltage fluctuates. The droop characteristic equation for Q-V is [57]

Q = Q_ref − D_q(V − V_ref)

where Q and Q_ref are the output and reference reactive power of the VSG, respectively, V_ref and V are the reference and measured voltage of the system, and D_q is the Q-V droop coefficient. If the inverter does not provide reactive power support in steady state, D_q is set to zero. The droop coefficient is chosen based on the maximum voltage changes and the voltage control features that affect the stability of the power system [59].
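The Q-V droop law mirrors the P-f one. The sketch below uses an assumed coefficient D_q (illustrative, not from the paper): a voltage sag boosts the reactive power command, an overvoltage reduces it, and setting D_q to zero disables voltage support as described above.

```python
# Sketch of the Q-V droop law Q = Q_ref - D_q*(V - V_ref), with an assumed
# droop coefficient; a voltage sag boosts the reactive power command.
def qv_droop(q_ref, v_meas, v_ref=311.0, d_q=500.0):
    """Return the reactive power command in var."""
    return q_ref - d_q * (v_meas - v_ref)

on_voltage = qv_droop(q_ref=1000.0, v_meas=311.0)  # on-voltage -> 1000 var
boosted = qv_droop(q_ref=1000.0, v_meas=308.0)     # 3 V sag -> +1500 var
no_support = qv_droop(q_ref=1000.0, v_meas=308.0, d_q=0.0)  # droop disabled
```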
Application of VSG
Because of the inherent features that enable it to participate in frequency stability support, the VSG control method may be used for all types of generation units, such as PV farms, wind farms, electric vehicles, and AC and DC transmission lines.
In the case of PV generation, the literature [60,61] presents a PV-VSG technology that considers the dynamic features of PV power, allowing many PV units to connect to the grid via VSG. It enables flexible and reliable grid-connected and off-grid operation, and is critical for enabling grid-friendly access for distributed PV power generation.
In the case of wind power [62][63][64], VSG is used to increase the dynamic performance of the wind turbine on both the rotor-side and grid-side converters.
In the application of independent energy storage units, the literature [65][66][67] proposed a VSG-based battery control technique for electric vehicles. Active and reactive power are calculated by a virtual two-phase system, preventing the effect of power oscillation on virtual inertia and contributing to primary frequency modulation for the power grid. In addition, when the grid is disrupted, the islanded grid can be restored and the local load can be supplied by the electric vehicle battery.
On the AC and DC transmission side, the literature [68][69][70] offered a VSG-based active voltage feedback control technique for frequency control that is independent of the communication technology. In addition, the typical synchronizer has been improved, so that the VSG can adjust the secondary frequency independently of the PLL current under fault conditions.
Challenges and Further Research
The use of many VSG units in power systems presents additional technical problems. As the electrical sector works to integrate substantial amounts of DG-based RESs into the power grid, considerable work is needed to accommodate and effectively manage the VSG units already in use. A key aspect is handling the topological changes generated by using multiple VSGs as additional network control devices, and stabilizing the power grid so as to exploit the potential flexibility of the dispersed VSGs. However, extensive expertise and a thorough analysis of the literature reveal numerous challenges surrounding VSG integration that must be thoroughly investigated.
Centralized Control for VSG
Because the current power system is developing and integrating a large number of VSG-based DGs, it is necessary to develop a centralized control method that improves the various VSG control methods, including grid connection, voltage and frequency control, active or reactive power control, and parallel circulation control. In addition, it is required to build centralized control on the distributed control properties of VSG to produce more stable centralized control for VSG [71,72].
Develop VSG Control Algorithms
Various VSGs may need to be more flexible to balance supply and demand in modern power grids. VSG frequency regulation refers to the ability of these units to regulate their output through effective control methods. To achieve this effectively, more practical algorithms and control methods are required. Further investigations must be performed against traditional SG properties to coordinate the timing and magnitude of kinetic energy release.
Efficient VSG Modeling
By further study of the mathematical derivation of equivalence between the VSG concept and the SG, an effective and robust control system can still be achieved by improving the existing models, where only the preferable parts are used.
VSG Energy Storage System
Batteries and capacitors are generally employed as energy storage systems in VSG-based PV systems. A combination of batteries and ultracapacitors is suggested in [73] to suppress high-frequency effects, as ultracapacitors quickly release stored energy while batteries manage low-frequency effects. This is a better solution for energy storage, but it is not economical because of the high cost. Therefore, a new and economical ESS must be developed that combines the characteristics of conventional batteries and ultracapacitors in a small size.
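One common way to realize the battery/ultracapacitor split described above is a frequency-domain separation of the power demand. The sketch below is a minimal assumed implementation (filter time constant and power levels are illustrative): a first-order low-pass filter routes the slow component to the battery while the ultracapacitor absorbs the fast residual.

```python
# Illustrative sketch (assumed values) of the hybrid-storage idea: a discrete
# first-order low-pass filter routes the slow component of the power demand
# to the battery, while the ultracapacitor absorbs the fast residual.
def split_power(demand, tau=2.0, dt=0.01):
    """Split a power-demand sequence into (battery, ultracapacitor) shares."""
    alpha = dt / (tau + dt)          # discrete first-order low-pass gain
    batt, battery, ultracap = 0.0, [], []
    for p in demand:
        batt += alpha * (p - batt)   # slow, low-frequency component
        battery.append(batt)
        ultracap.append(p - batt)    # fast, high-frequency remainder
    return battery, ultracap

# Step demand: the ultracapacitor covers the transient, the battery ramps up.
demand = [0.0] * 100 + [1000.0] * 900
battery, ultracap = split_power(demand)
```

The two shares always sum to the demand, so the split changes only which device supplies the power, not the total delivered.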
A summary of the advancements in the virtual inertia topologies discussed in the above sections is shown in Table 3.
Conclusion
Continuous development in integrating DGs based on RESs into the power system network has contributed to the imbalance in the traditional power system structure. DGs have little or no inertia and damping compared to conventional SGs, which means that the total inertia of the whole system is decreased. The VSG addresses the problem of low inertia and damping by providing virtual inertia, injecting active power from the VSG for a short time after any disturbance occurs. VSG development is a convenient and economical solution for the utilization and expansion of RESs. An important measure for allocating RESs optimally is the effective interaction between VSG and SG. In addition, the VSG efficiently integrates the flexibility of power electronic equipment with the operating mechanism of the SG. This paper gives an overview of several virtual inertia topologies and a detailed description of the VSG structure as the most important topics of the VSG concept. Moreover, VSG control, comprising P-f and Q-V control, is explained in detail. The VSG applications are then described. Finally, VSG challenges and future research are discussed.
"year": 2022,
"sha1": "173be3e935e0390122f3a2ab8d7104da7d271c43",
"oa_license": "CCBY",
"oa_url": "https://etj.uotechnology.edu.iq/article_176020_06f2bd743ac3173dddfd822efe6b8db4.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "20a5ecf082fae5ed9919fca6f5b76fed61cd5bc9",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": []
} |
Development of new poly(ADP-ribose) polymerase (PARP) inhibitors in ovarian cancer: Quo Vadis?
Epithelial ovarian cancer (EOC) is the fifth leading cause of cancer mortality among women, potentially due to ineffectiveness of screening tests for early detection. Patients typically present with advanced disease at diagnosis, up to 80% relapse, and the estimated median progression-free survival (PFS) is approximately 12-18 months. Increased knowledge of the molecular biology of EOC has resulted in the development of several targeted therapies, including poly(ADP-ribose) polymerase (PARP) inhibitors. These agents have changed the therapeutic approach to EOC and exploit homologous recombination (HR) deficiency through synthetic lethality, especially in breast cancer genes 1 and 2 (BRCA1/2) mutation carriers. Furthermore, BRCA wild-type patients with other defects in the HR repair pathway, or those with platinum-resistant tumors, may obtain benefit from this treatment. While PARP inhibitors as a class display many similarities, several differences in structure can translate into differences in tolerability and antitumor activity. Currently, olaparib, rucaparib, and niraparib have been approved by the Food and Drug Administration (FDA) and/or the European Medicines Agency (EMA) for the treatment of EOC, while veliparib is in the late stage of clinical development. Finally, since October 2018 talazoparib is FDA and EMA approved for BRCA carriers with metastatic breast cancer. In this article, we explore the mechanisms of DNA repair, synthetic lethality, and efficiency of PARP inhibition, and provide an overview of early and ongoing clinical investigations of the novel PARP inhibitors veliparib and talazoparib.
Introduction
Approximately 22,440 newly diagnosed cases of ovarian cancer and 14,080 deaths occurred in the United States in 2017 (1). Two thirds of patients present at advanced stages, whilst the estimated 5-year survival rate is 20-40%. The vast majority of ovarian cancers are epithelial in origin (90%), whereas 10% are non-epithelial; germ cell and sex cord stromal cell (5% each). They differ in epidemiology, etiology, and treatment. Epithelial ovarian cancer (EOC) is the most frequent cause of death from gynecologic cancer among women due to the lack of an effective screening test. Histologically, it is predominantly divided into five main subtypes; high- and low-grade serous (75-80%), endometrioid and clear cell (10% each), and mucinous (3%) (2). Patients with EOC usually respond well to the initial standard treatment, which includes cytoreductive surgery with either preoperative or adjuvant platinum-based chemotherapy; nevertheless, the estimated median progression-free survival (PFS) ranges from 12 to 18 months (3). Therefore, the development and validation of functional biomarkers and novel therapeutic agents are of major importance for the improvement of patients' outcome.
Breast cancer genes 1 and 2 (BRCA1/2) mutations are the most significant molecular aberrations in ovarian cancer, with established prognostic and predictive value following chemotherapy. Based on that, increased research focused on germline variant testing, risk stratification, early detection, and cancer prevention for BRCA1/2 mutation carriers has been conducted (4). Cells with mutations in BRCA1/2 genes have impaired repair of double-strand DNA breaks (DSBs). In any case of impairment of homologous recombination (HR), synthetic lethality induced by poly(ADP-ribose) polymerase (PARP) inhibition occurs and may target tumor tissue selectively (5,6). Furthermore, several somatic mutations beyond the BRCA1/2 genes have been recognized, including RAD51 and ataxia telangiectasia-mutated (ATM), which are also involved in HR repair (7). Tumors with these abnormalities are often sensitive to similar therapies (8).
Over the last decade, clinical trials have led to the approval of several PARP inhibitors in ovarian cancer. Olaparib, rucaparib, and niraparib have all obtained US Food and Drug Administration (FDA) and/or European Medicines Agency (EMA) approval in EOC in different settings. Veliparib and talazoparib are in earlier clinical development. Veliparib was evaluated mainly in combination with chemotherapy or targeted agents (9), whilst at least in vitro talazoparib demonstrates more potent antitumor activity, based on its enhanced PARP-DNA trapping ability (10).
The purpose of this article is to review the mechanisms of HR, and provide current evidence and future challenges in the development of the investigational PARP inhibitors veliparib and talazoparib.
Mechanisms of DNA repair
DNA damage often arises within the context of normal cellular processes. It can be spontaneous or caused by cell metabolism or by environmental agents (11). Base excision repair (BER) is the major DNA repair pathway responsible for the removal of DNA base damage and formation of single-strand DNA breaks (SSBs) and DSBs. The primary activity of PARP1/2 proteins is post-translational poly-ADP ribosylation (PARylation) of substrate proteins involved in biological processes such as transcription and DNA damage repair. The idea of PARylation asserts that during DNA damage PARP1 is activated on both SSBs and DSBs. In addition, several post-translational modifications also alter the activity of PARP1, which is implicated in multiple signaling pathways (12). Once PARP is activated, downstream events of PARP signaling take place, involving covalent PARylation of substrates, non-covalent binding of PAR polymer to proteins bearing a PAR-binding motif, liberation of free PAR into the cell, or lowering of cellular NAD+/ATP levels. This can lead to genomic instability, cell death, and even carcinogenesis if damage is not correctly repaired (13). HR and non-homologous end-joining (NHEJ) likely play the largest roles in DSB repair. Whether the cell uses HR or NHEJ to repair a break depends on the phase of the cell cycle: HR predominates as a mechanism of repair during mid-S and G2 phases (14). If an undamaged template DNA is unavailable, then the faster but error-prone NHEJ repair pathway is the primary method of DNA DSB repair in the cell (15). Additional operational DNA damage repair mechanisms include nucleotide excision repair (NER), mismatch repair (MMR), and translesional synthesis (16). In the presence of functional defects of both HR and classical NHEJ, inhibition of PARP1 inhibits alternative NHEJ, resulting in cell apoptosis (17).
Seventeen members of the PARP protein family have been described so far. PARP1 is responsible for approximately 90% of the PARylation activity, whereas PARP2 and, to a lesser extent, PARP3 function in fewer, but overlapping, DNA repair processes (18). The binding of PARP1 to DNA damage sites, its catalytic activity, and its eventual release from DNA are key elements in the potential response of a cancer cell to DNA breaks introduced by certain chemotherapeutic agents and radiation (19). Once activated, PARP recruits other DNA repair proteins (20).
PARP inhibition and synthetic lethality
Two preclinical studies published in 2005 advanced the knowledge and clinical development of PARP inhibitors (5,6). To assess the effects of PARP1 depletion, a plasmid expressing a short interfering RNA targeting mouse PARP1 was transfected into embryonic stem cells lacking wild-type BRCA1/2. These cells carried specific genomic mutations of BRCA1/2, lacked the wild-type allele, and were directly compared to their isogenic wild-type counterparts. The investigators concluded that cell lines with a BRCA1/2 mutation were more sensitive to PARP inhibition than heterozygous mutants. This was based on synthetic lethality, characterized by a bimodal dependency through which the loss of function of one gene in a cell does not impact viability, whilst the combined loss of both components results in cell death (21). The synthetic lethality between PARP inhibition and HR deficiency is ultimately produced by failure of SSB repair. If the inhibitor stays bound within the PARP active site and the PARP protein is trapped on the DNA long enough to be encountered by the replication machinery, this can lead to stalling of the replication fork, its collapse, and the generation of a DNA DSB (21). Among evaluated PARP inhibitors, olaparib, niraparib, and rucaparib are approximately 100-fold more potent than veliparib, while talazoparib has the greatest trapping potency (10). A correlation between increased PARP trapping and greater myelosuppression has been suggested, which results in variation in dosing among PARP inhibitors. Apart from mutations in BRCA1/2, genomic alterations involving other genes in the HR deficiency pattern have been recognized (22). The term "BRCAness" describes the phenotype shared between BRCA1/2-mutated and non-BRCA1/2-mutated ovarian cancers, resulting in severe chromosomal instability due to deregulated HR (23).
Indeed, BRCAness phenotype may be attributed in part to defective HR secondary to several mechanisms, including hypermethylation of the BRCA1 promoter, somatic mutations of BRCA1/2, or EMSY amplification. Furthermore, several somatic mutations in genes beyond BRCA have been recognized in a wide variety of tumors.
For example, aberration of ATM, BRIP1, RAD50, RAD51C, RAD51D, RAD52, and DNA-dependent protein kinase (DNA-PK) is therapeutically important, as it expands the sensitivity to PARP inhibition beyond germline BRCA1/2 mutations (24). Ongoing efforts are directed towards the clinical application of synthetic lethality and the interaction between PARP inhibition and HR deficiency. To this end, a precise comprehension of the implications of the different PARP inhibitors is challenging.
Clinical applications of PARP inhibitors in ovarian cancer
PARP inhibitors were originally developed as radio- and chemo-sensitising drugs, and are being investigated to different extents and in different settings in EOC and other solid tumors (25). Table 1 depicts the PARP inhibitors that have obtained approval by FDA and/or EMA for the treatment of EOC. Currently, novel agents are in clinical development. Veliparib was initially demonstrated in 2007 to potentiate the preclinical activity of temozolomide, platinum agents, and radiotherapy in a variety of tumors (9). Talazoparib, specifically in the treatment of EOC, is still at an early stage of clinical development. However, there are studies actively recruiting patients for the evaluation of talazoparib in several solid tumors. Talazoparib currently has EMA (and FDA) approval for metastatic breast cancer.
Historically, EMA approved in 2014 a capsule formulation of olaparib in the maintenance setting for BRCA carriers with recurrent high-grade serous EOC or primary peritoneal cancer (study 19) (24). In the same year, FDA approved olaparib as the first-in-class PARP inhibitor for germline BRCA-mutated patients previously treated with at least three lines of chemotherapy (study 42) (26). The tablet formulation of olaparib has been approved by both agencies as maintenance therapy for patients with platinum-sensitive relapsed EOC regardless of BRCA status (SOLO 2) (24,27). FDA approved olaparib maintenance treatment on December 19, 2018, based on the results of the SOLO 1 trial (NCT01844986), which examined the efficacy of olaparib versus placebo in subjects with BRCA-mutated advanced EOC who were in complete response (CR) or partial response (PR) to first-line platinum-based chemotherapy (28).
Rucaparib has been approved by FDA and EMA in December 2016 and May 2018, respectively, for patients who have been treated with two or more prior lines of platinum-based chemotherapy and cannot tolerate further platinum-based chemotherapy. The efficacy was based on integrated analyses of data from study 10 and ARIEL2 (29)(30)(31).
FDA and EMA approved niraparib (March and November 2017, respectively) for the maintenance treatment of patients responding to platinum-based chemotherapy (NOVA trial) (32). In June 2018, the results of a phase II study of niraparib in heavily pretreated patients with recurrent ovarian cancer (QUADRA trial) were presented (33). The registration studies that led to the approvals of PARP inhibitors for the treatment of EOC are summarized in Table 2.
Based on the distinct chemical structures of PARP inhibitors and their various off-target effects, the therapeutic strategy of re-challenge with a PARP inhibitor following disease progression needs to be developed further. In June 2019, the largest clinical trial prospectively evaluating PARP inhibitor failure, correlating tissue genomic mechanisms of resistance, was presented at the American Society of Clinical Oncology (ASCO) meeting (36).
Preclinical pharmacokinetics and pharmacodynamics of veliparib
The pharmacokinetic profile of veliparib is characterized by high oral bioavailability and rapid absorption. Administration of the immediate-release formulation BID (bis in die) resulted in a peak-to-trough concentration ratio of 0.45 μM. Veliparib passes through the blood-brain barrier; its combination with temozolomide is highly effective in the treatment of intracranial tumors (9). The activity of veliparib combined with temozolomide has been demonstrated across a broad histologic spectrum of models in B-cell lymphoma, lung, pancreatic, ovarian, breast, and prostate cancer xenografts (37). Veliparib is primarily excreted from tubular cells into urine via OCT2. In this regard, drug dosage should be adjusted based on creatinine clearance, whereas concurrent treatment with OCT2 inhibitors such as cimetidine results in higher exposure to veliparib (9). In terms of mechanisms of action, apart from "PARP trapping", the sensitizing effect of veliparib to DNA-damaging drugs, including oxaliplatin, irinotecan, cisplatin, carboplatin, and cyclophosphamide, and equally to radiotherapy, is fundamental (38).
Veliparib in clinical practice
The analysis of ongoing studies assessing veliparib, as a single agent or in combination with cytotoxics, revealed an overall objective response rate (ORR) ranging from 14.3% to 79%.
Phase I/II studies of veliparib monotherapy
A phase I trial, presented in 2014, assessed the pharmacokinetics, pharmacodynamics, and clinical efficacy of veliparib (39). Among 88 enrolled patients with platinum-refractory EOC or basal-like breast cancer, 60 were BRCA mutants. The recommended phase II dose (RP2D) was 400 mg BID, and the half-life 5.2 hours. ORR was higher in BRCA-mutated, as compared to BRCA wild-type, patients (23% and 4%, respectively). The most common toxicities included nausea, fatigue, and lymphopenia.
More recently, a phase I/II trial evaluated veliparib monotherapy in 48 subjects with germline BRCA-mutated EOC (40). Veliparib was given BID in a 4-weekly treatment cycle, and the maximum tolerated dose (MTD) was 300 mg BID. The platinum-sensitive subset of patients attained longer PFS (P=0.037) and OS (P=0.02) than the platinum-resistant subset. The high ORR of 65% (6% CR, 59% PR) in patients with relapsed, platinum-resistant ovarian cancer should be highlighted. Overall, the tolerance was acceptable, and the most common treatment-related adverse events included grade 2 fatigue and nausea (22% each), followed by vomiting (9%).
In a small phase I dose-escalation study, 14 out of 16 enrolled Japanese patients had high-grade serous EOC and were treated with veliparib BID (41). The RP2D was 400 mg BID, whilst two patients experienced PR as best achieved response. The most prevalent grade 3 or 4 toxicities included fatigue and manifestations of the gastrointestinal system.
Based on the promising results of early-phase studies, the single-arm, phase II Gynecologic Oncology Group (GOG) 280 trial (NCT01540565) was published in 2015 (42). Veliparib was administered at a dose of 400 mg BID to a cohort of 50 BRCA-mutated EOC patients pretreated with a maximum of three lines of chemotherapy. The ORR of veliparib was 26% [90% confidence interval (CI), 16-38%], and the study met its primary endpoint. Furthermore, subgroup analysis revealed responses of 35% and 20% in the platinum-sensitive and resistant settings, respectively, which was not significantly different (P=0.33). However, 31 patients (62%) experienced progressive disease while on treatment. Treatment-associated adverse events were not prominently featured; anemia and leukopenia were of grade 1/2, whilst nausea and vomiting occurred mostly during the first treatment cycle. Table 3 lists the available phase I and II studies of single-agent veliparib.

In a phase I combination study, cyclophosphamide was given 50 mg daily throughout a 3-weekly schedule. The MTD was obtained at veliparib 60 mg with cyclophosphamide 50 mg daily. In terms of treatment efficacy, 7 participants (20%) experienced PRs.
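As an arithmetic aside, the GOG 280 interval can be roughly reproduced from the reported numbers: an ORR of 26% among 50 patients implies 13 responders. The sketch below (an assumption for illustration, not the trial's actual statistical method) computes a Wald normal-approximation 90% CI; the trial likely used an exact binomial method, which is asymmetric and slightly wider, consistent with the reported 16-38%.

```python
# Hedged sketch (not from the paper): reconstructing the scale of the GOG 280
# confidence interval. With 50 patients and an ORR of 26%, 13 responses are
# implied. A Wald (normal-approximation) two-sided 90% CI is computed below.
import math

n, responders = 50, 13
p_hat = responders / n                      # 0.26
z90 = 1.645                                 # two-sided 90% normal quantile
half_width = z90 * math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - half_width, p_hat + half_width)
# ci is roughly (0.158, 0.362), i.e. about 16%-36% by this approximation.
```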
In 2015, the report of a phase I study of veliparib in combination with bevacizumab, paclitaxel and carboplatin in newly diagnosed patients with stage II-IV EOC or carcinosarcoma was presented (GOG 9923; NCT00989651) (44). The veliparib starting dose was 30 mg BID given on 3-weekly cycles for the initial 6 treatment cycles. Bevacizumab was administered at 15 mg/kg intravenously on day 1 of each cycle, from cycle 2 to 22. The RP2D for veliparib was 150 mg BID in combination with the remaining regimens. Based on NCT00989651, the 3-arm phase III trial GOG 3005 (NCT02470585) is currently active (45).
In the same setting of combining veliparib with chemotherapy and bevacizumab, the phase I GOG 9927 trial enrolled 39 patients with relapsed platinum-sensitive EOC (NCT01459380) (46). The recommended MTD of veliparib was 80 mg BID when combined with pegylated liposomal doxorubicin 30 mg/m² and carboplatin area under the curve (AUC) 5 on a 4-weekly cycle. At the MTD, 12 additional patients were enrolled and treated with bevacizumab. Among them, 9 exhibited dose-limiting toxicities, such as thrombocytopenia, neutropenia, hypertension, and sepsis.
It has been suggested that mitomycin C (MMC) is involved in generating DNA DSBs, activation of the Fanconi anemia (FA) pathway and veliparib-induced sensitization (47). Based on this concept, a 3+3 dose-escalation trial of veliparib as monotherapy, or combined with MMC, was conducted. Sixty-one patients with HR-deficient solid tumors were enrolled and randomized to each arm through 14 dose levels (NCT01017640) (48). The MTD for single-agent veliparib was 300 mg BID in FA-deficient patients. In the combination strategy, MMC was recommended at a dose of 10 mg/m², followed by veliparib 200 mg BID in a 4-weekly cycle with 21 days on and 7 days off. Veliparib as monotherapy did not produce a substantial number of tumor regressions. This modest clinical benefit is associated with veliparib's spectrum of doses below the MTD, and the additional antiapoptotic stimulus to which the repair-deficient cell has become addicted.
In 2017, a small Japanese phase I dose-escalation trial (NCT02483104) was published, evaluating veliparib in newly diagnosed advanced EOC in combination with 3-weekly cycles of carboplatin AUC 6 and paclitaxel 80 mg/m² (days 1, 8, 15) (49). Patients were treated with the platinum-doublet chemotherapy for six cycles in total; veliparib was incorporated throughout the course of treatment, and the RP2D was 150 mg BID. Among 5 patients assessed for response, 4 experienced PR and 1 CR, respectively. However, these findings should be interpreted cautiously, taking into account the small sample size and the lack of random assignment and a control group.
Another small phase I study (NCT01154426) evaluated veliparib combined with single-agent gemcitabine in advanced solid tumors (50). Gemcitabine was given at a dose of 500-750 mg/m², administered either thrice on a 4-weekly, or twice on a 3-weekly schedule. Veliparib was escalated from 10 to 40 mg BID during gemcitabine weeks. Among 31 enrolled patients, 23 developed grade 3/4 side effects, primarily myelosuppression. The recommended MTDs were 750 mg/m² for gemcitabine and 20 mg BID for veliparib on the 3-weekly regimen. Among 27 patients, 3 achieved PR and 15 stable disease (SD), respectively. However, a correlation between response and BRCA status is difficult to establish, and the combination should be further explored.
Veliparib combined with the doublet of carboplatin/gemcitabine has been investigated in a phase I dose-escalation study of 75 patients with advanced EOC and breast cancers (NCT01063816) (51). The most prevalent adverse event was myelosuppression, resulting in discontinuation in 11% of patients; dose reductions for veliparib and gemcitabine were required in 20 (27%) and 27 patients (36%), respectively. Median PFS for the entire study population was 7.0 months (95% CI, 5.3-8.4 months). This PFS benefit was more prominent in BRCA carriers [8.6 months (95% CI, 7.1-11.7 months)] than in the BRCA wild-type/unknown subgroup [5.9 months (95% CI, 4.1-9.9 months)]. Equally, BRCA mutants achieved a higher ORR of 68.9% as compared to the 42.8% of the wild-type/unknown BRCA patients.
Finally, a phase I study (NCT01012817) evaluated the combination of veliparib and weekly topotecan in several solid tumors, including EOC (52). The treatment was well tolerated and, in line with the previous studies, resulted in prolonged responses in BRCA1/2 or RAD51D mutants. Table 4 details phase I studies of veliparib combined with chemotherapy.
Phase II/III studies of veliparib combined with chemotherapy
A phase II trial (NCT01306032) randomized 72 pretreated, BRCA-mutant EOC patients to the combination of veliparib with low-dose cyclophosphamide versus cyclophosphamide monotherapy (55). DNA repair defects were not predictive biomarkers for either single-agent cyclophosphamide or the veliparib combination. Finally, neither ORR (11.8% versus 19.4%, respectively) nor median PFS (2.1 versus 2.3 months, respectively; P=0.68) was improved with the combination. Based on that, the trial was terminated early.
Given the in vitro synergy of topotecan with veliparib, a phase I/II dose-escalation clinical trial was conducted to investigate the combination in the setting of recurrent, BRCA1/2 wild-type or unknown EOC (NCT01690598) (56). Twenty-seven enrolled patients were treated with an initial dose of veliparib, 30 mg BID, and topotecan, 3 mg/m², in 4-week treatment cycles. The reported efficacy was modest, with median PFS of 2.8 months (95% CI, 2.6-3.6 months) and OS of 7.1 months (95% CI, 4.8-10.8 months). However, these findings should be interpreted in light of the negative prognostic factors of the study population. Haematological toxicities of grade 1 and 2 included mostly anemia (81.5%), followed by thrombocytopenia (29.6%) and neutropenia (22.2%).
Furthermore, the results of a randomized phase II study in recurrent high-grade serous EOC, evaluating veliparib combined with temozolomide versus pegylated liposomal doxorubicin, are pending (NCT01113957) (57). Finally, the phase III GOG 3005 is an ongoing, randomized, double-blind trial aiming to investigate the efficacy of veliparib in combination with carboplatin and paclitaxel in high-grade serous EOC or primary peritoneal cancer patients (NCT02470585) (45). The recruitment target size is 1,140 patients, and this is the only phase III trial of veliparib in first-line treatment.
Phase II/III clinical trials of veliparib in combination with chemotherapy for the treatment of EOC are summarized in Table 5.
Veliparib in combination with radiotherapy
Preclinical evidence suggests that low-dose fractionated whole abdominal radiation (LDFWAR) combined with veliparib is an effective therapeutic option. A phase I dose escalation trial enrolled 22 patients with advanced solid tumors and peritoneal carcinomatosis, including 8 subjects with EOC (58). SD was maintained for 24 weeks or longer in 33% of participants. PFS was 7.92 months in the platinum-sensitive setting versus 3.58 months in the platinum-resistant subset.
In the final publication, 32 patients were enrolled, including 18 with EOC (56%) (59). The established MTD and RP2D for veliparib was 250 mg BID. Patients with platinum-resistant and those with platinum-sensitive recurrence achieved a median OS of 5.8 and 10.9 months, respectively. The most common haematological adverse event of grade 3/4 was lymphopenia (59%), followed by thrombocytopenia (12%), anemia (9%), and neutropenia (6%). However, due to the lack of specific biomarkers, incorporation of somatic genomic testing and an HR deficiency score should be planned, in view of optimizing the efficacy of this therapeutic strategy.
Early randomized studies of veliparib in combination with radiotherapy are depicted in Table 6.
Talazoparib
Talazoparib in the treatment of EOC is still at an early stage of clinical development. However, preclinical studies have demonstrated activity in several solid tumors (60)(61)(62)(63). Following olaparib, talazoparib was the second FDA- and EMA-approved drug for BRCA-mutated, HER2-negative breast cancer. The superior radiosensitizing capacity of talazoparib as compared to veliparib is probably based on its enhanced PARP trapping ability (64). Talazoparib has been shown to be the more potent PARP inhibitor (10), but equally has the highest rates of myelosuppression, particularly anemia and neutropenia, in clinical trials (65).
Phase I studies of talazoparib monotherapy
Talazoparib was initially evaluated in 2017, with the first-in-human, 2-stage, dose-escalation, phase I study in over 100 patients with germline BRCA1/2-mutated advanced or recurrent solid tumors, previously treated with platinum-based chemotherapy (NCT01286987) (66). In a further phase I study (67), twenty-four patients were enrolled in four cohorts. Frequent grade 3/4 side effects were neutropenia (63%), which was more prominent in germline BRCA mutants, followed by anemia (38%), thrombocytopenia (29%), and fatigue (13%). One complete and two PRs (14%) were achieved by patients with germline BRCA1/2 mutations. Finally, POSITION is an ongoing phase I study assessing the influence of talazoparib on DNA copy number and RNA expression in patients with advanced stage EOC (NCT02316834) (68). Table 7 provides summary results of phase I studies of talazoparib for treatment of ovarian cancer.
Phase II/III studies of talazoparib monotherapy beyond ovarian cancer

Currently, no phase II or III clinical trials of talazoparib monotherapy in EOC are available. However, such data could be extrapolated from ongoing studies in metastatic breast cancer (65,69). Indeed, the benefit of talazoparib, specifically in BRCA mutants, has been reported in the phase II ABRAZO study (NCT02034916) (69). Eighty-four patients, pre-treated with platinum or other cytotoxic regimens, were enrolled in the study. The reported ORRs for those with BRCA1 and BRCA2 mutations were 23% and 33%, respectively. Similarly, triple-negative breast cancer patients, and those with expressed estrogen and progesterone receptors, achieved an ORR of 26% and 29%, respectively.
The FDA and EMA granted standard approval of talazoparib in HER2-negative advanced or metastatic breast cancer with germline BRCA1/2 mutations, based on data gathered from the EMBRACA study (NCT01945775) (65). This is a phase III, open-label study, which compared talazoparib with standard single-agent treatment. The primary endpoint of median PFS was 8.6 months in the talazoparib arm, significantly higher than 5.6 months in the chemotherapy arm [HR: 0.54 (95% CI, 0.41-0.71), P<0.0001]. Furthermore, response rates in the talazoparib and chemotherapy groups were 63% and 27%, respectively. Similarly, quality of life was markedly improved in favour of talazoparib. The efficacy of the agent in triple-negative, BRCA wild-type breast cancer will be evaluated by the ongoing phase II trial NCT02401347.
Several studies are in progress in prostate cancer. NCT03148795 is a phase II study aiming to assess talazoparib in patients with metastatic, castration-resistant disease with defects in DNA repair mechanisms (70), whilst the phase III study TALAPRO-2 (NCT03395197) is evaluating the addition of talazoparib to enzalutamide in the same setting (71).
Additionally, the single-arm phase 2 study NCT01989546 is still recruiting patients with several solid tumors and BRCA1/2 mutations, for the evaluation of talazoparib in the platinum-sensitive setting (72).
As far as ovarian cancer is concerned, two phase II trials have already been withdrawn (Table 8). NCT02326844 enrolled patients with BRCA1/2 mutations, following primary progression on prior PARP inhibitor therapy (73). This study addresses the important issue of whether rechallenging with an alternative PARP inhibitor may be associated with therapeutic benefit. Similarly, the withdrawn phase II randomized study NCT02836028 had been planned to assess talazoparib, combined or not with temozolomide, in patients with relapsed ovarian cancer and defects in the DNA repair pathway (74).
Conclusions and future directions
PARP inhibitors have attracted great attention and illustrate a paradigm of bench-to-bedside medicine. HR deficiency remains a strong predictor of clinical benefit from these agents. Besides ovarian cancer, PARP inhibitors may be effective in subsets of patients with breast, prostate, and even pancreatic tumors. On December 27, 2019 the FDA approved olaparib for the maintenance treatment of patients with metastatic pancreatic cancer who were carriers of germline BRCA1/2 mutations, based on the results of the POLO trial (NCT02184195). The ORR was 23.1% in the olaparib arm versus 11.5% in the placebo arm, whereas the median duration of response was 24.9 months as compared to 3.7 months, respectively. Mutations in DNA repair related genes are frequent in these tumors, which further highlights that evaluation of molecular alterations should be incorporated into clinical practice. Apparently, combination treatment strategies can induce HR pathway deficiency in cancers with de novo or acquired HR proficiency, restoring sensitivity to PARP inhibitors. Moreover, PARP inhibitors may be effective in patients with somatic BRCA1/2 mutations to the same extent as in those with germline BRCA1/2 mutations. As such, somatic genomic analysis and clinical qualification of biomarkers, enabling patient stratification, promote the delivery of precision medicine. Adverse events associated with PARP inhibitors should be carefully evaluated; myelosuppression may require dose reduction. Optimization of toxicities could be achieved by modifying treatment modalities (continuous versus intermittent, concurrent with chemotherapy versus maintenance). Several clinical trials are ongoing, in different settings. Even though newer PARP inhibitors demonstrate increased potency, it has not yet been fully clarified whether this translates into greater efficacy.
Calibrated sky imager for aerosol optical properties determination

A. Cazorla, J. E. Shields, M. E. Karr, A. Burden, F. J. Olmo, and L. Alados-Arboledas

Departamento de Física Aplicada, Facultad de Ciencias, Universidad de Granada, Fuentenueva s/n, 18071, Granada, Spain
Centro Andaluz de Medio Ambiente (CEAMA), Junta de Andalucía-Universidad de Granada, Avda. del Mediterraneo s/n, 18071, Granada, Spain
Marine Physical Lab, Scripps Institution of Oceanography, University of California San Diego, 9500 Gilman Dr., La Jolla, CA 92093-0701, USA

Received: 27 August 2008 – Accepted: 7 October 2008 – Published: 28 November 2008

Correspondence to: L. Alados-Arboledas (alados@ugr.es)

Published by Copernicus Publications on behalf of the European Geosciences Union.
In this sense, long-range transport events like Saharan dust outbreaks (Lyamani et al., 2005, 2006a, b) or global-scale events like stratospheric aerosols following major volcanic eruptions such as El Chichón and Mount Pinatubo (Olmo and Alados-Arboledas, 1995) represent extreme cases of this variability. Remote sensing appears to be a valuable tool for characterizing the physical and optical properties of the aerosol. Sun photometry is the most common way to characterize aerosol in daytime from the ground. An instrument such as a CIMEL CE318 photometer gives useful information about column-integrated physical and optical properties of the atmospheric aerosol (Holben et al., 1998). Sun-photometer networks such as the Aerosol Robotic Network (AERONET) (Holben et al., 1998) address the spatial problem in aerosol characterization, but temporal resolution remains a problem. Another problem is the assessment of valid measurements; i.e., the CIMEL CE318 requires that the sun not be obscured by clouds, and sometimes it is difficult to perform the cloud rejection test (Smirnov et al., 2000).
Ground-based sky imagery has been used for years for cloud cover assessment. By retrieving aerosol characteristics from sky imagers, we can supplement the existing aerosol data bases. In this paper, the Whole Sky Imager (WSI), a calibrated ground-based sky imager developed by the Atmospheric Optics Group (AOG) at the Marine Physical Laboratory, Scripps Institution of Oceanography, University of California San Diego, has been tested to determine optical properties of the atmospheric aerosol. Different neural network-based models estimate the aerosol optical depth (AOD) for three wavelengths using as input parameters the radiance extracted from the principal plane of sky images from the WSI (i.e. at constant azimuth angle equal to the solar azimuthal angle, with varied zenith angles). The Ångström coefficients α and β (Ångström, 1964) are also derived from the aerosol optical depth estimated with the models, and a neural network-based model also estimates the Ångström exponent α. The models use data from a CIMEL CE318 photometer (Holben et al., 1998) for training and validation.
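The Ångström law (Ångström, 1964) relates AOD to wavelength as τ_a(λ) = β λ^(−α), so once the AOD has been estimated at two or more wavelengths, α and β follow from a log-log fit. A minimal sketch in Python (the power-law AOD values below are synthetic, for illustration only, not data from this work):

```python
import math

def angstrom_fit(wavelengths_um, aods):
    """Least-squares fit of ln(tau) = ln(beta) - alpha * ln(lambda).

    wavelengths_um: wavelengths in micrometres (the Angstrom convention)
    aods: aerosol optical depths at those wavelengths
    Returns (alpha, beta).
    """
    xs = [math.log(w) for w in wavelengths_um]
    ys = [math.log(t) for t in aods]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    alpha = -slope                     # AOD decreases with wavelength
    beta = math.exp(my - slope * mx)   # turbidity: AOD at 1 um
    return alpha, beta

# Synthetic AODs following tau = 0.1 * lambda^-1.3 at the three channels
wl = [0.440, 0.675, 0.870]
tau = [0.1 * w ** -1.3 for w in wl]
a, b = angstrom_fit(wl, tau)   # recovers alpha = 1.3, beta = 0.1
```

Because the synthetic AODs follow the power law exactly, the fit recovers the parameters; with real retrievals the regression residuals indicate how well the Ångström law holds.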
Experimental site
This work is the result of cooperation between the Atmospheric Physics Group (Dept. Applied Physics, University of Granada) and the Atmospheric Optics Group (Marine Physical Laboratory, Scripps Institution of Oceanography, University of California San Diego). This group has been researching and developing sky imagers for decades (Johnson et al., 1989; Shields et al., 1993, 1998b). They have different sky imagers in different locations. The WSI used in this work was located at the Southern Great Plains (SGP) site, whose measurements support the general circulation models used for climate research. The SGP site consists of in situ and remote-sensing instrument clusters arrayed across approximately 143 000 square kilometers in north-central Oklahoma. Figure 1 shows a map of the facility. The central facility is a heavily instrumented location on 0.65 square kilometers of cattle pasture and wheat fields southeast of Lamont, Oklahoma (36.61° N, 97.5° W, 320 m a.s.l.).
More than 30 instrument clusters have been placed around the SGP site, at the Central Facility and at Boundary, Extended, and Intermediate Facilities. The locations for the instruments were chosen so that the measurements reflect conditions over the typical distribution of land uses within the site.
Both instruments, the CIMEL CE318 and the WSI, are located in the central facility (Shields, 1998a).
The sun-photometer CIMEL CE-318
Sun photometry is the most widely used technique for atmospheric aerosol characterization in daytime. The CIMEL CE318 automatic sun-tracking photometer (Holben et al., 1998) has been designed to measure sun and sky radiance in order to derive total column water vapor, ozone and aerosol properties using a combination of spectral filters and azimuth/zenith viewing controlled by a microprocessor. The CIMEL CE318 is the standard instrument in the AERONET network (Holben et al., 1998). The AERONET collaboration provides globally distributed observations of spectral AOD, inversion products (Dubovik et al., 2002, 2006) and precipitable water in diverse aerosol regimes. AOD data are computed for three data quality levels: Level 1.0 (unscreened), Level 1.5 (cloud-screened), and Level 2.0 (cloud-screened and quality-assured).
The CIMEL CE318 used in this work has operated in the AERONET program since 1994. We use the Level 2.0 (quality-assured) data, and the parameters extracted are the AOD values at 440, 675 and 870 nm.

The Whole Sky Imager

The Atmospheric Optics Group has been very active in the development of digital sky imagers for over two decades. The original concept for the WSI evolved from a measurement and modeling program using multiple sensors for monitoring sky radiance, atmospheric scattering coefficient profiles and other parameters related to vision through the atmosphere (Johnson et al., 1980, 1989). With the use of very low noise 16-bit CCD cameras and an occultor designed to handle both sun and moon, these systems were further developed into the Day/Night WSI (Shields et al., 1993, 1998a, 1998b, 2003). The Day/Night WSI is a 16-bit digital imaging system that acquires images of the full sky (2π hemisphere) under both day and night conditions in order to assess cloud fraction, cloud morphology, and radiance distribution. The WSI measures the sky radiance in approximately 185 000 directions simultaneously by using a 512×512 CCD sensor. The result is a 34 µsteradian field of view (FOV) in each direction, to cover the full 2π steradian dome. Images are acquired in visible and near infrared (NIR) wavebands with filters at 450 nm (blue), 650 nm (red), and 800 nm (NIR) under sunlight or moonlight. Open hole is used for starlight and most moonlight conditions. The FWHM of the filters is 70 nm. A picture of the instrument fielded at the Oklahoma Cloud and Radiation Testbed (CART) site at SGP is shown in Fig. 2. The primary features seen in this figure are the environmental housing that protects the sensor and electronics, the optical dome, and the solar occultor that shades the optics.
Even though the camera would not be damaged by direct sun radiation, this shading is desirable because it minimizes stray light, especially near the sun. Figure 3 shows a daytime image. The center of the image is the zenith and the edges are the horizon. One of the capabilities of the WSI is the determination of the absolute sky radiance distribution. The fisheye lens directs the light from different directions onto different pixels in the image plane, and the signal of each pixel may be calibrated to yield a determination of the absolute sky radiance, in W/m² µm sr, in that direction. The FOV for each pixel is approximately 34 µsteradians. Thus, this radiance product is equivalent conceptually to a radiance distribution determined by a scanning radiometer, except that all radiances are acquired simultaneously (at 185 000 points) and at a very high spatial resolution (Shields et al., 1998b). The instrument undergoes both a radiance and a geometric calibration (Shields et al., 1998b).
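As a quick consistency check on the figures above, roughly 185 000 directions of about 34 µsteradians each should tile the full 2π steradian sky dome:

```python
import math

pixels = 185_000          # sky directions resolved on the 512x512 CCD
fov_per_pixel = 34e-6     # steradians per pixel (34 microsteradians)

total_solid_angle = pixels * fov_per_pixel   # about 6.29 sr
hemisphere = 2 * math.pi                     # about 6.28 sr

# The two agree to within ~0.1%, consistent with the quoted per-pixel FOV.
```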
Methodology
The sky radiance depends directly on the aerosol load through several parameters connected with extensive and intensive aerosol properties. While previous investigations have related AOD to radiances measured in a restricted range of scattering angles to simulate the spaceborne point of view (e.g. Kaufman, 1993; Sánchez et al., 1998), here the development is focused on surface measurements. There is a dependency between radiance along the principal plane and the aerosol optical parameters (Olmo et al., 2008). Radiance along the principal plane is affected by the particle size, which in turn has impacts on the α parameter and the turbidity factor (β parameter). The impact of the α and β parameters on the radiance is shown in Fig. 4a and b. Both parameters are directly related with the aerosol optical depth (Ångström, 1964).
These previous works, along with the results obtained with the All-Sky Imager characterizing the atmospheric aerosol (Cazorla et al., 2008b), are the basis of this work. We have also considered the use of sky radiance in the principal plane, since the obstruction of the image due to the shadow system is smaller there.
The data set selected in this work comprises the period from 1 October 2001 to 29 September 2002. This data set spans a whole year, so we can model the seasonal variability of the atmospheric aerosol. Using the cloud decision images processed by the AOG we sorted out all the cases with clouds, to work with the clear-sky results. A total of 1047 clear-sky image sets (i.e. 3 spectral images acquired in one set) were associated with a synchronous CIMEL measurement, applying a ±5 min margin. This image set has been used to create and validate the model.
Retrieving the radiance over the principal plane of the sky images
Knowing the Sun position, we can locate the principal plane by making the azimuthal angle equal to the Sun azimuthal angle, or that angle plus 180 degrees, and varying the zenith angle. The radiance over the principal plane has been extracted for the whole data set from scattering angle 1° to 100°. The cloud decision image provided by the WSI has been used to apply a mask over the non-valid values (horizon obstruction and shadow system obstruction). The stored values from the principal plane are calibrated values, i.e. sky radiance in W/m² µm sr, for every scattering angle in the principal plane. Thus, we have as inputs the sky radiance over the principal plane for scattering angles from 1° to 100° and the solar zenith angle (SZA) in degrees. The AOD at 440, 675 and 870 nm from the CIMEL data is used as the target data in the neural network model that we will describe in the next section.
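Along the principal plane, the scattering angle follows directly from the solar and view zenith angles. A geometry-only sketch (the WSI fisheye pixel mapping and its calibration are not reproduced here):

```python
import math

def scattering_angle(sza_deg, view_zenith_deg, delta_azimuth_deg):
    """Angle between the solar beam and the viewing direction.

    cos(Theta) = cos(sza) * cos(vza) + sin(sza) * sin(vza) * cos(dphi)
    In the principal plane dphi is 0 (sun side) or 180 (antisolar side),
    so Theta reduces to |vza - sza| or vza + sza, respectively.
    """
    sza = math.radians(sza_deg)
    vza = math.radians(view_zenith_deg)
    dphi = math.radians(delta_azimuth_deg)
    c = (math.cos(sza) * math.cos(vza)
         + math.sin(sza) * math.sin(vza) * math.cos(dphi))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Sun side of the principal plane: Theta = |vza - sza| (here 20 degrees)
theta_sun_side = scattering_angle(30, 50, 0)
# Antisolar side: Theta = vza + sza (here 80 degrees)
theta_antisolar = scattering_angle(30, 50, 180)
```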
Neural networks and radial basis function networks

According to Haykin (1994) a neural network resembles the human brain in two aspects: the knowledge is acquired by the network through a learning process, and interneuron connection strengths known as synaptic weights are used to store the knowledge. Once a neural network is set up, it can learn to emulate behaviors such as classification, pattern recognition, function approximation, control systems, etc.
Neural networks have been widely used in atmospheric science recently (e.g. Gutiérrez et al., 2004) and we have experimented with them in several research applications (Alados et al., 2004, 2007; Cazorla et al., 2005, 2008b; Gil et al., 2005). Radial basis function (RBF) networks (Gutiérrez et al., 2004; Yee and Haykin, 2001) are especially suitable for function approximation. The inputs of a radial basis network are the variables of the function, and the output is the function approximation. RBFs emerged as a variant of artificial neural networks in the late 1980s. However, their roots are entrenched in much older pattern recognition techniques such as potential functions, clustering, functional approximation, spline interpolation and mixture models (Tou and Gonzalez, 1974). RBFs are embedded in a two-layer neural network, where each hidden unit implements a radially activated function. The output units implement a weighted sum of hidden unit outputs. The input into an RBF network is nonlinear while the output is linear. Their excellent approximation capabilities have been studied by Poggio and Girosi (1990) and Park and Sandberg (1991). Due to their nonlinear approximation properties, RBF networks are able to model complex mappings, which perceptron neural networks can only model by means of multiple intermediary layers (Haykin, 1994).
In order to use an RBF network we need to specify the hidden unit activation function, the number of processing units, a criterion for modeling a given task and a training algorithm for finding the parameters of the network. Finding the RBF weights is called network training. If we have at hand a set of input-output pairs, called a training set, we optimize the network parameters in order to fit the network outputs to the given inputs. The fit is evaluated by means of a cost function, usually the mean square error. After training, the RBF network can be used with data whose underlying statistics are similar to those of the training set.
The topology of the RBF networks consists of a two-layer feed-forward neural network. Such a network is characterized by a set of inputs and a set of outputs. In between the inputs and outputs there is a layer of processing units called hidden units. Each of them implements a radial basis function. Figure 5 shows the topology of the RBF networks.

RBF networks are characterized by their transfer function, which is the RBF. In this case the input to the transfer function is the vector distance between its weight vector w and the input vector p, multiplied by the bias b:

n = ||w - p|| b    (1)

The transfer function for a radial basis neuron is a Gaussian function:

a = exp(-n^2)    (2)

The RBF has a maximum of 1 when its input is 0. As the distance between w and p decreases, the output increases. Thus, a radial basis neuron acts as a detector that produces 1 whenever the input p is identical to its weight vector w. The bias b allows the sensitivity of the neuron to be adjusted.
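The radial basis neuron described above, with its distance-times-bias net input and Gaussian transfer function, can be sketched in a few lines:

```python
import math

def radbas_neuron(w, p, b):
    """Radial basis neuron: n = ||w - p|| * b, a = exp(-n**2)."""
    dist = math.sqrt(sum((wi - pi) ** 2 for wi, pi in zip(w, p)))
    n = dist * b
    return math.exp(-n ** 2)

# Output is 1 exactly when the input equals the weight vector...
a_max = radbas_neuron([0.2, 0.7], [0.2, 0.7], b=1.0)   # -> 1.0
# ...and falls off as the input moves away from it.
a_near = radbas_neuron([0.2, 0.7], [0.3, 0.7], b=1.0)
# A smaller bias widens the response, raising the output at the same distance.
a_wide = radbas_neuron([0.2, 0.7], [0.3, 0.7], b=0.5)
```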
This topology allows a very simple training scheme for function approximation or interpolation. Every sample in the training set creates a new neuron in the hidden layer. The weight of the input-neuron connection is set to the input value. Therefore n in Eq. (1) is 0 and a in Eq. (2) is 1. The last layer gathers the hidden layer's outputs and readjusts the output to provide the correct function value. A typical input activates several neurons (the weight is not exactly the same as the input, since the input is not in the training set), i.e. 0 < a < 1 for several neuron outputs, and the final output of the network is a combination of the different neuron outputs. This feature allows the network to interpolate the function values and, therefore, learn the shape of the function. Assuming that the training set is well spread along the input range, the RBF network learns the shape of the function with the training set. An independent set, the test set, is used to evaluate the function approximation. It works like a spline interpolation, with the advantage that N-dimensional functions can be approximated easily but the disadvantage that the functional form remains unknown.
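The exact-interpolation scheme described above (one hidden neuron per training sample, plus a linear output layer) can be sketched with NumPy; the Gaussian width b and the toy one-dimensional target are illustrative choices, not settings from this work:

```python
import numpy as np

def rbf_exact_fit(X, y, b=1.0):
    """Exact-design RBF network: one hidden neuron per training sample.

    Hidden activations: exp(-(b * ||x - c||)^2), with centres c equal to
    the training inputs. The linear output layer is obtained by solving
    the (square) interpolation system.
    """
    X = np.atleast_2d(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    phi = np.exp(-(b * d) ** 2)
    w_out = np.linalg.solve(phi, y)
    return X, w_out

def rbf_predict(centres, w_out, Xq, b=1.0):
    """Evaluate the trained network at query points Xq."""
    Xq = np.atleast_2d(Xq)
    d = np.linalg.norm(Xq[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-(b * d) ** 2) @ w_out

# Learn a smooth 1-D function from sparse samples
x_train = np.linspace(0, 2 * np.pi, 15).reshape(-1, 1)
y_train = np.sin(x_train).ravel()
centres, w_out = rbf_exact_fit(x_train, y_train, b=1.0)

# Training points are reproduced (n = 0, a = 1 at each centre);
# between the samples the network interpolates the function shape.
x_test = np.linspace(0.3, 5.9, 40).reshape(-1, 1)
err = np.abs(rbf_predict(centres, w_out, x_test, b=1.0) - np.sin(x_test).ravel())
```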
Development of a neural network-based model for the AOD estimation
We need to relate the wavelengths of the WSI filters to the wavelengths used to measure the direct irradiance with the CIMEL CE318. We used the nearest wavelength; therefore the 450 nm filter is associated with the 440 nm CIMEL channel, the 650 nm filter with the 675 nm channel, and the 800 nm filter with the 870 nm channel. Thus, we developed an RBF network model for each WSI wavelength, obtaining an estimation of the AOD at 440, 675 and 870 nm. Inputs to the RBF network are the SZA corresponding to the measurement and the radiance at several scattering angles over the principal plane.
Neural network-based model
The whole data set consists of the radiances over the principal plane of the 1047 images, the SZA at the time of the measurement and the synchronous CIMEL measurement. This set is divided randomly into two subsets: a training set using 2/3 of the whole data set and a test set using the remaining 1/3. All values are normalized to the range [0,1], i.e. the values are rescaled to that range where the minimum and maximum values correspond to 0 and 1, respectively. Inputs (radiances over the principal plane and SZA) and outputs (AODs) are normalized. During the training, every measurement is used to adjust the internal values of the neural network (the weights) so that the output is the desired variable amount, i.e. the AOD corresponding to the wavelength we are trying to estimate. Once the network is trained, we use the test set to evaluate the performance of the network. The coefficient of determination (R²) calculated between the AOD measured with the CIMEL and the values estimated with the network using the test set is our estimator of the performance. The performance depends on the selection of the training and test sets. These sets are created randomly out of the whole data set; hence we repeat the process nine times and select the best partition, i.e. the one that yields the best performance. A greedy algorithm selects the network inputs among the radiances over the principal plane, but not all the scattering angles are necessary. The first iteration of the algorithm creates 100 networks, each using the radiance at one of the scattering angles. All of them are evaluated and the best one is added to the solution. The second iteration of the algorithm creates 99 networks using the radiance at the best scattering angle of the previous iteration plus the radiance at a different scattering angle. Once again, the best one is added to the solution. This process is repeated until the performance decreases, i.e. no more scattering angles are needed.
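The normalization, the random 2/3-1/3 split and the nine-times repetition can be sketched as follows; `train_and_score` is a hypothetical stand-in for training the RBF network and returning its test-set R²:

```python
import random

def normalise(values):
    """Rescale a sequence to [0, 1]: the minimum maps to 0, the maximum to 1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def best_partition(samples, train_and_score, n_trials=9, train_frac=2 / 3, seed=0):
    """Split randomly into training (2/3) and test (1/3) sets n_trials times,
    train on each split, and keep the partition with the highest test score.

    `train_and_score(train, test)` stands in for fitting the RBF network and
    returning its coefficient of determination (R^2) on the test set.
    """
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        shuffled = list(samples)
        rng.shuffle(shuffled)
        cut = round(len(shuffled) * train_frac)
        train, test = shuffled[:cut], shuffled[cut:]
        score = train_and_score(train, test)
        if best is None or score > best[0]:
            best = (score, train, test)
    return best
```

In this work the scoring step is the RBF training described earlier; here it is left abstract so the selection loop stands on its own.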
During the process we applied a mask to eliminate the measurements with invalid values (obstruction by the horizon or by the shadow system). As a result, the final data set may vary depending on the scattering angles used.
Results
Every wavelength has an independent model. The inputs, and therefore the training and test sets, are different for each AOD estimation.
The greedy algorithm selected only one scattering angle for each AOD model. For the blue WSI channel (450 nm, associated with the 440 nm AOD), it selected the 37° scattering angle. Therefore the model has two inputs: the radiance of the sky at that scattering angle over the principal plane and the SZA. 117 measurements had to be eliminated from the original 1047 measurements due to shadow system obstruction, so the model was created from the remaining 930 measurements. The greedy algorithm for the model with the red WSI channel (650 nm, associated with the 675 nm AOD) selected the 71° scattering angle. The number of valid measurements after applying the mask is 968. The greedy algorithm for the model with the NIR WSI channel (800 nm, associated with the 870 nm AOD) selected the 83° scattering angle. The number of valid measurements is 973 in this case. The slope of the linear fit through zero provides information about the over- or underestimation associated with the model. The coefficient of determination provides an evaluation of the experimental variance explained by the model. The root mean square deviation (RMSD) and the mean bias deviation (MBD) have also been evaluated as quality estimators. These quality estimators allow us to evaluate the differences between the experimental data and the model and the presence of a systematic over- or underestimation. Table 1 shows the statistics for the three models. Figure 7 shows histograms of the differences between measured and estimated AOD at 440, 675 and 870 nm.
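The quality estimators reported in Table 1 can be computed as in this sketch. The function name and the exact R² convention are assumptions (here R² is taken as the squared correlation between measured and estimated AOD); `within_tol` additionally reports the fraction of estimates inside the AERONET uncertainty of ±0.01 used in the validation.

```python
import numpy as np

def validation_stats(aod_measured, aod_estimated, tol=0.01):
    """Validation statistics for one AOD model (illustrative sketch):
    B    - slope of the linear fit forced through zero,
    R2   - coefficient of determination (squared correlation here),
    RMSD - root mean square deviation,
    MBD  - mean bias deviation (estimated minus measured),
    within_tol - fraction of estimates within +/-tol of the reference."""
    x = np.asarray(aod_measured, dtype=float)
    y = np.asarray(aod_estimated, dtype=float)
    return {
        "B": np.sum(x * y) / np.sum(x * x),   # least-squares fit y = B*x
        "R2": np.corrcoef(x, y)[0, 1] ** 2,
        "RMSD": np.sqrt(np.mean((y - x) ** 2)),
        "MBD": np.mean(y - x),
        "within_tol": np.mean(np.abs(y - x) < tol),
    }
```

A positive MBD together with B below 1 reproduces the mixed over/underestimation signature discussed for the 870 nm model.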
As we can see in Fig. 6a and Table 1, 96% of the data variance is explained by the model that estimates the AOD at 440 nm. MBD and the slope of the linear fit reveal a slight systematic underestimation. Figure 7 shows a histogram of the differences between the calculated and estimated values. It reveals that 81% of the estimated AOD values at 440 nm have a deviation of less than 0.01 with respect to the CIMEL result, which is the AERONET AOD estimated uncertainty (Holben et al., 1998). Figure 6b and Table 1 reveal that 94% of the data variance is explained by the model that estimates the AOD at 675 nm. MBD and the slope of the linear fit also indicate a slight systematic underestimation. Figure 7 shows that almost 80% of the estimated AOD at 675 nm has a deviation of less than 0.01. Finally, Fig. 6c and Table 1 reveal that 92% of the data variance is explained by the model that estimates the AOD at 870 nm. While MBD suggests a slight overestimation, the slope of the linear fit indicates an underestimation. Figure 7 shows that 90% of the estimated AOD at 870 nm has a deviation of less than 0.01. Figure 6a, b and c reveal that the data set is not homogeneously distributed along the whole range of values: there are many points with low AOD and very few with higher AOD. This can explain the slight systematic underestimation of the model. The linear fit is forced through zero and there are many points close to zero, but the few values far from zero introduce a variance that, in this case, makes the slope slightly below 1.
The coefficient of determination decreases when we estimate AOD at longer wavelengths. The α parameter has been estimated in two ways. Firstly, it has been calculated using the standard AERONET procedure from the AODs estimated with the neural networks, as seen in Sect. 3.3, and the β parameter is also estimated. Secondly, a new neural network has been trained using the α calculated by AERONET as the target. The AERONET algorithm calculates α using different intervals of wavelengths. The interval 440-870 nm includes the values we estimate, and therefore this is the interval used to compare the results of both approaches. For the first method, Fig. 8a shows estimated versus calculated values of α with the CIMEL in the interval 440-870 nm using the standard AERONET procedure. Figure 8c shows the histogram of the differences between calculated and estimated α. Figure 8a reveals that 63% of the data variance is explained by the model that estimates α. Figure 8c shows that 48% of the estimated α has a deviation of less than 0.1, which is the estimated uncertainty in the AERONET procedure for the α calculation (Holben et al., 1998).
The α estimation is affected by the errors introduced in the AOD estimations. In particular, the AOD at 870 nm introduces an error in the calculation of α by linear fitting of ln(AOD) vs. ln(wavelength). For this reason, we tried a new neural network-based model using RBFs to estimate the value of α directly from the radiance over the principal plane of the sky images at the three wavelengths. The inputs for this model are the same as for the estimation of the AODs taken together, i.e. we combined the radiances at the different scattering angles over the principal plane. Figure 8c shows that 84% of the estimated α has a deviation of less than 0.1, which is the estimated uncertainty in the AERONET procedure for α calculation. This represents a clear improvement in the estimation of α from WSI images. Even though the uncertainty in the estimation of α is large with the standard method, the estimation is still useful for the interpolation of the AOD at different wavelengths.
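The standard-procedure α estimation amounts to a linear fit in log-log space; a minimal sketch follows, assuming the usual log-log form of Ångström's law with wavelengths expressed in micrometres (a common convention; the paper's exact implementation is not shown, and the function name is hypothetical).

```python
import numpy as np

def angstrom_alpha_beta(wavelengths_nm, aods):
    """Estimate the Angstrom parameters from AODs at several wavelengths.
    Angstrom's law AOD = beta * lam**(-alpha), lam in micrometres, becomes
    ln(AOD) = ln(beta) - alpha * ln(lam), so a linear fit in log-log space
    yields both parameters. An error in any single AOD (e.g. at 870 nm)
    propagates directly into the fitted alpha."""
    lam_um = np.asarray(wavelengths_nm, dtype=float) / 1000.0
    slope, intercept = np.polyfit(np.log(lam_um), np.log(aods), 1)
    return -slope, float(np.exp(intercept))
```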
We have tested this by calculating the AOD at 500 nm from the α and β estimated with the first method using Ångström's law (Ångström, 1964) and comparing it with the AOD at 500 nm obtained from CIMEL CE318 measurements. Figure 9 shows the estimated values of AOD at 500 nm versus the values calculated with the CIMEL. 96% of the data variance is explained by the model that estimates the AOD at 500 nm. As we can see, this approach yields a very good estimation of the AOD at different wavelengths (one of the main uses of α). However, if we need a more precise estimation of α, for example as input to an inversion model, we also have the more accurate estimation from the neural network-based model.
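Once α and β are known, the interpolation to 500 nm is a direct application of Ångström's law. A hedged sketch (the micrometre convention for the wavelength is assumed, as is the function name):

```python
def aod_angstrom(wavelength_nm, alpha, beta):
    """Interpolate the AOD at an arbitrary wavelength from the Angstrom
    parameters: AOD(lam) = beta * lam**(-alpha), lam in micrometres."""
    return beta * (wavelength_nm / 1000.0) ** (-alpha)
```

With this convention β is simply the AOD at 1 micrometre, and for a typical positive α the AOD decreases monotonically with wavelength.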
Conclusions
Three neural network-based models using RBFs have been created to estimate the value of the AOD at three different wavelengths using the radiance over the principal plane of the sky images from the calibrated sky imager WSI. The correlation constants are close to unity and the number of cases within the measurement error is very large.
The estimation of the α parameter has been performed in two ways. First, it has been calculated using the standard AERONET procedure from the AODs estimated with the neural networks (the β parameter is also calculated); secondly, it has been estimated with a new neural network-based model using RBFs. Inputs to this RBF are the radiances at the same scattering angles used for the AOD models.
The three AOD models provide an estimation that, according to the validation, is inside the nominal error of AERONET (±0.01) for approximately 80% of the cases for the blue and red channels and 90% for the NIR channel. In all cases the models explain at least 92% of the variance of the experimental data. The coefficient of determination decreases when we estimate AOD at longer wavelengths. This can be caused by the difference between the central wavelength of the filter in the sky imager and that of the CIMEL. This difference is larger at longer wavelengths and, therefore, the overlap of the wavelengths decreases and so does the performance. Nevertheless, all estimations have a coefficient of determination over 0.92.
Concerning the scattering angles selected by the greedy algorithm, these reveal that to estimate the AOD at 450 nm (blue) it is necessary to use a point close to the sun, while the estimation of the AOD at 870 nm (NIR) requires a point farther from the sun. At 675 nm the behavior is intermediate between those of the other two wavelengths. The α estimation using the AERONET method is affected by the cumulative error of all the AOD estimations. Nevertheless, almost 50% of the data are inside the nominal error of the AERONET program for α calculation using the standard procedure. The neural network-based model for α estimation increases the explained fraction of the data to 63%, and the fraction inside the nominal error increases to 77%. The neural network process is more complex but improves the estimation substantially. The neural network model forces the result to match the AERONET estimation of α for the interval 440-870 nm.
The results are promising in the sense that it seems feasible that a sky imager can estimate the AOD and that the algorithm could be applied in the field.

Acknowledgements. ... Government, and his research stay at the University of California at San Diego has also been funded by the Andalusian Regional Government. We are also especially thankful to the principal investigator of the AERONET site at SGP, Rick Wagener, and to the US Department of Energy for its collaboration as part of the Atmospheric Radiation Measurement Program Climate Research Facility.
Table 1. Statistical results of the validation of the radial basis networks for the estimation of AOD at 440 nm, AOD at 675 nm and AOD at 870 nm. The column B represents the slope of the linear fit through zero of the data, R² is the coefficient of determination, MBD is the mean bias deviation and RMSD is the root mean squared deviation.
A novel homeobox protein which recognizes a TGT core and functionally interferes with a retinoid-responsive motif.
We describe here a novel homeobox gene, denoted TGIF (5′TG3′ interacting factor), which belongs to an expanding TALE (three amino acid loop extension) superclass of atypical homeodomains. The TGIF homeodomain binds to a previously characterized retinoid X receptor (RXR) responsive element from the cellular retinol-binding protein II promoter (CRBPII-RXRE), which contains an unusual DNA target for a homeobox. The interactions of both the homeoprotein TGIF and the receptor RXRα with the CRBPII-RXRE DNA motif occur on overlapping areas and generate a mutually exclusive binding in vitro. Transient cellular transfections demonstrate that TGIF inhibits the 9-cis-retinoic acid-dependent RXRα transcription activation of the retinoic acid responsive element. TGIF transcripts were detected in a restricted number of tissues. The canonical binding site of TGIF is conserved and is an integral part of several responsive elements which are organized like the CRBPII-RXRE. Hence, a novel auxiliary factor to the steroid receptor superfamily may participate in the transmission of nuclear signals during development and in the adult, as illustrated by the down-modulation of the RXRα activities.
Homeobox genes play a fundamental role in directing cellular differentiation processes and in determining cell fate. Over the past 10 years, the term homeodomain has evolved to define a superfamily of protein domains of ~60 amino acids with homology to the Drosophila homeotic proteins (15). Homeoproteins confer the specificity of action to a wide variety of transcription factors. They exert their action both by their DNA binding surfaces and by domains that are targets for protein:protein interactions with other transcription factors (16-18).
Regulated, tissue-specific, and developmental expression of eukaryotic genes results from the interplay of a variety of transcription factors, like the homeoproteins. They exert their effects on target genes by both activating and repressing transcriptional activities.
Vitamin A (retinol) and other retinoids, like retinoic acid (RA), were demonstrated to exert striking effects on cell proliferation and differentiation. Excessive intake as well as deficiency of vitamin A generates characteristic toxicity and malformation patterns in a number of organ systems. Retinoic acid, as well as a number of small lipophilic hormones, mediates its signals through ligand-activated transcription factors belonging to the large steroid/retinoid nuclear receptor superfamily (19). Two classes of retinoid receptors have been identified: the retinoic acid receptors (RARs) and the retinoid X receptors (RXRs) (20, 21). Homo- as well as heterodimers of these receptors act in response to retinoids by binding to specific cis-acting retinoid-responsive promoter elements (22, 23), thereby generating a large diversity of transcriptional controls in the retinoid signaling pathways (24). The expression of several homeogenes was demonstrated to be differentially regulated by RA, and this suggests that homeogenes are likely to control the temporal and spatial modulation of the levels of endogenous retinoids (25).
Recently, the diversity of nuclear receptor-mediated control was found to be further extended by the synergy of other transcription factors. The interaction of retinoid receptors and transcription factors of the c-Jun and c-Fos family (AP-1), for example, can either repress or potentiate the retinoid-dependent transcription activation (26,27). Therefore, there exist regulatory "cross-talk" pathways that allow modulation of the retinoid signal by the AP-1 signaling system (28).
There are two classes of cytoplasmic retinoid-binding proteins implicated in the transduction of the retinoid signal which also play an important role in retinoid homeostasis: the cellular retinoic acid-binding proteins, CRABPI and -II, and the cellular retinol-binding proteins CRBPI and -II (for review, see Ref. 29). CRBPII is expressed mostly in prenatal liver and in adult intestine (29) and is probably involved in the regulation of the vitamin A signaling pathway by controlling the intracellular transport and storage of retinol, a precursor of retinoic acid (30).
We show here the functional cloning of a new member of the homeobox gene superfamily, called TGIF, that belongs to a growing superclass of atypical homeodomains whose hallmark is an extension of three amino acids between α helices 1 and 2. The TGIF homeoprotein recognizes a previously characterized retinoid response motif (CRBPII-RXRE) which consists of an unusual DNA target for a homeobox. TGIF can prevent the retinoid X (RXR) receptor from functioning as a transcriptional activator through interference with the previously characterized CRBPII-RXRE responsive element (1, 2). The consensus binding site of TGIF was identified and is conserved in several CRBPII-RXRE-like responsive elements, suggesting a broader functional association of TGIF with this type of responsive element.
The nucleotide sequence(s) reported in this paper has been submitted to the GenBank™ database. The abbreviations used are: RA, retinoic acid; RAR, retinoic acid receptor; RARE, retinoic acid response element; RXR, retinoid X receptor; RXRE, retinoid X response element; PCR, polymerase chain reaction; COUP, chicken ovalbumin upstream promoter; CRBP, cellular retinol-binding protein; CRABP, cellular retinoic acid-binding protein.
Generation of Recombinant TGIF-Autographa californica Multiple Nuclear Polyhedrosis Virus for Baculovirus Expression of TGIF
A BamHI-SspI DNA fragment containing the complete coding region of TGIF was digested from the pGEX-2T-TGIF plasmid and cloned into the Klenow-blunted BamHI-NcoI sites of the pBlueBacHis vector, designed for generation of recombinant baculovirus (Invitrogen). Spodoptera frugiperda ovarian (Sf9) cells (2 × 10^6 Sf9 cells in TC-100 medium (Life Technologies, Inc.) without fetal calf serum) were co-transfected with 1 μg of wild type A. californica multiple nuclear polyhedrosis virus and 1 μg of pBlueBacHis-TGIF. After 96 h, screening for recombinant virus in infected Sf9 cells was performed by serial dilution of the transfection medium supplemented with 5-bromo-4-chloro-3-indolyl β-D-galactoside for a colorimetric test of positives. A culture of 10^8 Sf9 cells was infected with recombinant TGIF-A. californica multiple nuclear polyhedrosis virus (multiplicity of infection = 5-10). After a 1-h infection in 25 ml of TC-100 medium with 5% fetal calf serum, the total volume was expanded to 100 ml in a spinner flask. Cells were harvested 72 h after infection, washed once with phosphate-buffered saline buffer, and resuspended in 1 ml of high-salt extraction buffer (10 mM Tris-HCl (pH 8.0), 1 mM EDTA, 1 mM dithiothreitol, 600 mM KCl, and the protease inhibitors leupeptin, pepstatin, chymostatin, soybean trypsin inhibitor and aprotinin (all at 10 mg/ml), and phenylmethylsulfonyl fluoride (1 mM)). Whole cell protein extracts were obtained by lysing the cells with 30 strokes in a 2-ml glass Dounce homogenizer and by centrifuging twice at 10,000 × g for 30 min at 4°C in order to remove cell debris.
Dimethyl Sulfate Methylation Interference and Electrophoretic Mobility Shift Assay (EMSA)-The EMSAs were performed with the different recombinant proteins as described in Ref. 34 and the probes shown in Fig. 1. For the G-specific dimethyl sulfate methylation interference experiment, the CRBPII-RXRE, cloned in the EcoRI site of pBluescript (Stratagene), was removed with HindIII-PstI or BamHI-HincII. This enabled the 32P labeling of the coding and the noncoding strand with Klenow polymerase. DNA probe fragments were isolated by electroelution after polyacrylamide gel separation from the vector backbone and methylated with dimethyl sulfate (Fluka) for 2 min at room temperature (35). Methylated DNA probe (1.5-2 × 10^5 cpm/300 ng of DNA) was incubated with RXRα isolated from baculovirus or GST-TGIF isolated from bacterial cell extracts and analyzed in an EMSA. Complexed probe and free probe were separated in a 6% nondenaturing polyacrylamide gel (acrylamide/bisacrylamide ratio 19:1), and both were isolated by electroelution and then ethanol-precipitated. Pellets were dissolved in 10% piperidine (DuPont NEN), and the DNA was cleaved at 90°C for 30 min before lyophilization, followed by two rounds of washing with 20 μl of water. The hydrolysis products were resolved on 8% denaturing acrylamide/urea gels. The dried gels were exposed to x-ray films (Fuji, Inc.) with an intensifying screen at −70°C for 12-36 h.
Cells and Extracts-COS-1 cells and U87 cells (glioblastoma, astrocytoma, grade III) were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal calf serum, 200 mM glutamine, and penicillin and streptomycin. For the transfection experiments, 5 × 10^7 cells (5 ml of suspension) were plated in a 6-cm diameter Petri dish in order to reach 70% confluency at the transfection time. After transfection, cells were treated with 9-cis-RA (see Transfection Experiments and CAT Assay). Whole cell extracts were prepared by using a freeze-thaw protocol (0.4 M KCl buffer) (33).
Recombinant Plasmids for Transient Transfection Assays-The sequences of the DNA-responsive elements used in the reporter plasmids from this study are shown in Fig. 1. The CRBPII-RXRE promoter element (2, 12) (see also Fig. 1) or mutated/deleted responsive elements were inserted at the 5′ end of the TATA tk promoter sequence and the CAT (chloramphenicol acetyltransferase) reporter gene sequence from the pBLCAT5 vector (36), between the BamHI and HindIII sites. The pcDNA-TGIF recombinant plasmid consists of the TGIF cDNA sequence included between the BstUI site (21 base pairs upstream of the initiation codon) and the SspI site (position 1223 in Fig. 2A), which was cloned between the EcoRI and EcoRV sites of the pcDNA vector (Invitrogen). The pSG5-RXRα recombinant contains the RXRα ORF inserted in the pSG5 vector (37); the TGIF and RXRα cDNAs were introduced into expression vectors having two different promoters (from cytomegalovirus and SV40, respectively) in order to avoid competition between them and therefore to optimize the expression of both proteins. A pcDNA-βGal construct was included as an internal control to standardize the different transfection assays.
Transfection Experiments and CAT Assay-Cells were transfected by calcium phosphate co-precipitation (38) with 5 μg of one of the recombinant pBLCAT reporters, in the presence or absence of 3 μg of pcDNA-TGIF and/or various amounts of pSG5-RXRα depending on the cell type used (see Fig. 6). The endogenous TGIF levels were tested, and no significant amounts were detected either by Northern or by EMSA experiments. In order to detect the TGIF-mediated transactivation regulation, we titrated the TGIF and RXRα activities to establish the optimal ratio of TGIF to RXRα. Optimal transactivation was obtained by using 1 μg of the RXRα effector plasmid for the U87 cells and 0.25 μg for the COS-1 cells. 0.5 μg of the pcDNA-βGal internal control was included in each experiment to standardize the transfection efficiency. Finally, calf thymus DNA was added as double-stranded carrier DNA to equalize the DNA concentrations in each precipitate. In several cases, we also used the pcDNA and/or pSG5 vectors without insert in order to control for a plasmid-driven effect on the expression modulation of the reporter.
The precipitate was left on the cells for 16-20 h before the medium was changed. Cells were incubated for another 20-24 h in the presence of 10^−7 M 9-cis-RA. Cells were harvested, and extracts were prepared. The normalized CAT activity assay was run as described previously (39). Each transfection experiment was repeated at least three times with different plasmid preparations. The percentage of chloramphenicol acetylation was determined by thin layer chromatography followed by quantification in a PhosphorImager (Fuji). The values always agreed within 15% from one experiment to another.
PCR Binding Site Selection-Enrichment for binding sites from a random oligonucleotide pool (5′-GGCTGAGTCTGAACGGATCC(N15)CCTCGAGACTGAGCGTCG-3′) was performed as described in Ref. 40. Binding reactions were carried out with purified GST-HD (see above). EMSAs were carried out as described in Ref. 34. After three rounds of enrichment, the selected oligonucleotides were cloned with the CloneAMP® pAMP System for Rapid Cloning of Amplification Products (Life Technologies, Inc., Catalog No. 18381-012).
The EMBL Data Library accession number for the human TGIF cDNA clone is X89750.
Isolation and Molecular Characterization of a New Homeobox Protein
Binding to the Rat CRBPII-RXRE-The promoter region located between positions −639 and −605 of the rat cellular retinol-binding protein II (CRBPII) gene (12) was previously characterized as an optimal retinoid DNA-responsive element (CRBPII-RXRE) for RXRα, -β, and -γ (2, 41). This CRBPII-RXRE promoter element is composed of five almost perfectly conserved, directly repeated half-sites with the consensus hexanucleotide sequence 5′AGGTCA3′. These hexamer half-sites are spaced by one nucleotide (Figs. 1 and 4) and generate a series of direct repeats (DR1). We have numbered these half-sites from 1 to 5 in Figs. 1 and 4. Half-sites 1 and 5 diverge from the canonical hexamer sequence.
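The DR1 geometry described here (consensus half-sites separated by a single spacer nucleotide) can be located in a sequence with a simple overlapping scan. This snippet is purely illustrative and not a method from the paper; the function name and defaults are hypothetical.

```python
import re

def find_dr1_halfsites(seq, halfsite="AGGTCA", spacer=1):
    """Return the start positions of direct repeats of the consensus
    half-site separated by a fixed-length spacer (DR1 geometry).
    A zero-width lookahead is used so that overlapping repeats, as in
    the five tandem half-sites of the CRBPII-RXRE, are all reported."""
    pattern = re.compile(f"(?={halfsite}.{{{spacer}}}{halfsite})")
    return [m.start() for m in pattern.finditer(seq.upper())]
```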
To identify novel nuclear factors that specifically interact with the rat CRBPII-RXRE and that could interfere or synergize with RXRα or other retinoid receptor molecules on this promoter region, we screened a human liver cDNA expression library cloned into the λgt11 phage vector with a radiolabeled double-stranded DNA probe consisting of 3 copies of the CRBPII-RXRE (see Fig. 1). Upon screening of a total of 10^6 plaques, two clones were isolated and their sequences were combined to form a 1562-nucleotide-long cDNA. This cDNA contains an open reading frame (ORF) encoding a protein of 272 amino acids, hereafter called TGIF, for 5′TG3′ interacting factor (see Fig. 2A).
The initiation codon of TGIF occurred at the first in-frame ATG from the cDNA. The sequence context of this ATG conforms to that expected for an initiation codon (42). Furthermore, a nonsense codon TGA is found 100 base pairs upstream of this first ATG of the open reading frame. Another ATG codon with an optimal context for initiation of translation occurs at position 372 of the cDNA, in the reading frame (Fig. 2A). Nevertheless, the amino terminus of the protein has been assigned to the 5′-most ATG codon because in vitro translation of the full-length 272-amino acid-long cDNA has clearly demonstrated that the proximal ATG was exclusively utilized as an initiation codon. Furthermore, a comparison of the mouse homologue of this human TGIF cDNA showed that the assigned initiation codon and its nucleotide context are fully conserved in both the human and mouse cDNA sequences.

FIG. 1 (legend, truncated). ... (12)) and the wild type half-sites 1 and 2 for Up Δ3,4,5, and the mutated half-site 1 and the wild type half-site 2 for Up M1 Δ3,4,5. In vitro DNA affinities of RXRα and TGIF for the different probes are reported: ++++, very strong binding; +++, strong binding; ++, significant binding; +, weak binding; −, no binding; ND, not determined.

FIG. 2. TGIF cDNA sequence and alignment of atypical homeodomains from the TALE (three amino acid loop extension) superclass. A, the sequences of two independently isolated clones were combined to form a 1.562-kilobase-long cDNA fragment which contained the complete TGIF open reading frame. The deduced amino acid sequence is shown in single-letter code. The cDNA sequence (numbering on the right) encodes a 272-amino acid-long protein (numbering on the left). The atypical homeodomain sequence is underlined. The boxed prolines in the carboxyl-terminal region indicate a putative SH3 domain binding site (13). The polyadenylation consensus sequence is shown in reverse lettering. B, alignment of the TGIF homeodomain with several TALE homeodomains. The alignment of the amino acid sequences, collected from the GenBank and EMBL data bases, was performed according to the algorithm Pileup/Prettybox included in the GCG (University of Wisconsin) software package. Identical amino acids between the sequences are shown on a black background, whereas conserved amino acids are shown on a gray background. For maximizing identities, 15 amino acids have been deleted at the position indicated by the solid triangle.
Analysis of the predicted reading frame encoding 272 amino acids revealed homology, from amino acid positions 35-98, with homeodomains from different species. The carboxyl-terminal part of the new TGIF homeoprotein is rich in proline (Fig. 2A) and contains a putative SH3 binding domain (XPPPXPP, boxed in Fig. 2A) (13). Proline-rich sequences were also implicated in transcriptional regulation, suggesting a possible function for this homeoprotein (43, 44). A search of the sequences deposited in the latest release (88) of the GenBank and EMBL data bases revealed several entries with small DNA sequence stretches identical to the TGIF homeoprotein. These sequences, however, were generated by random DNA sequencing performed with human cDNA libraries. This implies that TGIF is a novel homeoprotein.
TGIF Belongs to a Large Superclass of TALE-atypical Homeodomains-The amino acid matches between the TGIF homeodomain and nine closely related homeodomains are shown in Fig. 2B. It shows a group of atypical homeodomains (14) whose hallmark is an extension or a deletion of several amino acids relative to the canonical Antennapedia consensus homeodomain. The TGIF homeodomain showed the highest degree of homology with the homeoboxes encoded by the bE2 alleles of Ustilago maydis (for maximizing identities, 15 amino acids have been deleted at the position indicated by the solid triangle in Fig. 2B), the Saccharomyces cerevisiae copper homeostasis CUP9 gene, and the yeast MAT α2 mating-type regulatory gene. Besides the conservation of the invariant amino acids in the DNA recognition α helix 3, the TGIF homeodomain sequence shares highly conserved residues in a 3-amino acid elongated loop between α helices 1 and 2 (consensus His23 and Leu24). Another stretch of residues conserved among these homeodomains is located between positions 27 and 29 (Pro27, Tyr28, Pro29). We suggest calling this growing group of atypical homeodomains the TALE homeodomain superclass, for three amino acid loop extension (see Fig. 2B and "Discussion"). The highly conserved 3-amino acid-long loop structure is present in homeoproteins from species ranging from yeast to human and must have an important role in the activities of these regulatory proteins.
TGIF Binds to a T/G-rich Region Located 5′ to Directly Repeated AGGTCA Half-sites-The DNA sequence-specific binding of TGIF was analyzed by testing a series of DNA probes by EMSA. To localize the domain of TGIF involved in DNA binding, a 106-amino acid-long amino-terminal fragment containing the full-length homeodomain was synthesized in vitro in a rabbit reticulocyte lysate (ΔTGIF in Fig. 3A). As shown in Fig. 3A, lane 1, ΔTGIF generated a complex with the trimeric DNA probe used in the Southwestern gene screening. This complex was specifically competed by increasing concentrations of unlabeled specific CRBPII-RXRE DNA (Fig. 3A, lanes 2-4) but not by identical concentrations of a nonspecific oligonucleotide DNA (Fig. 3A, lanes 5-7).
To further analyze the DNA sequence requirements of TGIF, we tested the binding of the ΔTGIF polypeptide to a CRBPII promoter fragment overlapping only the upstream region of the CRBPII-RXRE probe (CRBPII promoter positions −659 to −624; see also Up Δ3,4,5 in Fig. 1) (12). As shown in Fig. 3A, lane 8, the additional nucleotides upstream of the CRBPII-RXRE did not affect the DNA binding capability of ΔTGIF, indicating that half-sites 1 and 2 were sufficient. Specificity of the DNA binding was evaluated by competing this probe with three different unlabeled oligonucleotides. Strong competition was observed with the oligonucleotides Up Δ3,4,5 and CRBPII-RXRE (Fig. 3A, lanes 9 and 11). Identical amounts of the upstream CRBPII promoter fragment mutated in half-site 1 (Up M1 Δ3,4,5 in Fig. 1) could, however, compete only very weakly with the ΔTGIF homeodomain·DNA complex (Fig. 3A, lane 10). These results suggest that the ΔTGIF domain binds specifically to the 5′ region of the CRBPII-RXRE, which contains half-site 1. We concluded that ΔTGIF interacts strongly with a guanosine residue located at position −636 in the Up Δ3,4,5 probe and deleted in Up M1 Δ3,4,5. The requirement for a guanosine residue at this position was confirmed by the lack of binding of TGIF to the labeled Up M1 Δ3,4,5 probe lacking guanosine −636 (Fig. 3A, lane 12). The full-length TGIF homeoprotein synthesized in vitro and the ΔTGIF (see above) had identical DNA binding properties with the CRBPII-RXRE probe (Fig. 5A, lane 2).
FIG. 3. TGIF homeodomain interacts specifically with the retinoid-responsive element CRBPII-RXRE. A, binding of a 106-amino acid-long TGIF polypeptide containing the TGIF homeodomain (ΔTGIF) to different probes (see Fig. 1). [CRBPII-RXRE]3 stands for trimer of the CRBPII-RXRE. The protein·DNA complexes were specifically competed with an increasing molar excess (as indicated) of unlabeled competitor (comp.) oligonucleotides. B, binding of a bacterially expressed TGIF homeodomain-GST fusion protein (GST-HD) to different probes (see Fig. 1). * indicates a GST-HD oligomer. Boxes of different sizes represent two different concentrations of GST-HD. Total protein concentration in the binding reactions was equilibrated with nonprogrammed cell extracts.

The results obtained with the help of point-mutated or partially deleted responsive elements used as probes or competitors in EMSAs (Fig. 3, A and B) were confirmed by using a G-specific methylation interference protocol. To this end, we overexpressed a GST-TGIF fusion protein in Escherichia coli.
Two complexes with different electrophoretic properties were visible in an EMSA after co-incubation of the GST-TGIF fusion protein with the CRBPII-RXRE probe. The GST-TGIF was probably processed by proteolytic cleavage into two DNA-binding polypeptides. Both polypeptides, however, interfered with identical residues in dimethyl sulfate mapping (data not shown). The CRBPII-RXRE·TGIF interactions indicated that the methylated G residues at positions −639 and −636 on the transcribed strand (solid squares in Fig. 4A, lanes 1 and 2) and at position −634 on the nontranscribed strand (solid square in Fig. 4A, lanes 3 and 4) interfered strongly with protein·DNA complex formation. This result is in agreement with that described in Fig. 3, A and B, in which we demonstrated that deletion of residue G −636 in the Up M1 Δ3,4,5 DNA probe almost abolished TGIF binding. Interestingly, the residues located at positions −629, −622, −615, and −609 on the coding strand, and −632 on the nontranscribed strand, were slightly undermethylated, indicating a detectable but partial interference in complex formation on half-sites 2, 3, 4, and 5. These results led us to conclude that the strongest interferences are restricted to four residues within an eight-nucleotide-long region which reads 5′GCTGTCAC3′ (double arrow in Fig. 4C). Inspection of the CRBPII-RXRE indicated that this sequence is partially conserved in four direct repeats overlapping half-sites 2, 3, 4, and 5, as shown in Fig. 4C (double-dashed arrows). These repeats, which read 5′CTGTGAC3′, may well be weak binding sites for TGIF and could therefore explain the partial methylation interference in the TGIF·DNA complex detected on those repeats (Fig. 4A, lanes 1-4).
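The repeat logic described above, a strong site read off one strand plus direct repeats scanned on both strands, can be sketched computationally. The promoter string below is a hypothetical stand-in for illustration only, not the actual CRBPII-RXRE sequence.

```python
# Minimal sketch: locate direct repeats of a core motif on both strands of a
# promoter fragment, as one might do to enumerate candidate weak TGIF sites.
# The example sequence is invented; only the motif (5'CTGTGAC3') is from the text.

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_motif(seq: str, motif: str):
    """Return 0-based start positions of every motif occurrence on one strand."""
    hits, start = [], seq.find(motif)
    while start != -1:
        hits.append(start)
        start = seq.find(motif, start + 1)
    return hits

promoter = "AAGCTGTCACCTGTGACCTGTGACTTCTGTGACAA"  # hypothetical fragment
top_hits = find_motif(promoter, "CTGTGAC")         # direct repeats, top strand
bottom_hits = find_motif(revcomp(promoter), "CTGTGAC")  # bottom strand
print(top_hits, bottom_hits)
```

Scanning the reverse complement covers sites written on the opposite strand, which matters here because interference signals were mapped on both the transcribed and nontranscribed strands.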
To test the existence of these weak binding sites, a glutathione S-transferase-TGIF homeodomain (GST-HD) fusion protein overexpressed in E. coli was produced and used in an EMSA. In Fig. 3B, lanes 1 and 2, a second, slower migrating complex was detected upon addition of larger amounts of GST-HD to the CRBPII-RXRE probe, indicating that GST-HD bound to additional sites besides half-site 1. Probe M2,4,5, in which mutations were introduced in half-sites 2, 4, and 5, generated a weaker retarded TGIF·DNA complex (Fig. 3B, lanes 7 and 8). Further, larger amounts of GST-HD allowed the detection of weak binding to the probe M1 (Fig. 3B, lanes 3 and 4), and only residual DNA binding of GST-HD to the probes Δ1 M3/4 or Δ1 M2,4,5 was detectable (Fig. 3B, lanes 5, 6, 9, and 10), suggesting the existence of weak TGIF binding sites with the DNA sequence 5′CTGTGAC3′ (double-dashed arrows in Fig. 4C). As summarized in Figs. 1 and 4C, the results obtained from both the G-specific methylation interference and a series of DNA binding assays indicated that TGIF binds strongly to half-site 1 and weakly to sites located between the RXR half-sites 2, 3, 4, and 5 of the CRBPII-RXRE.
Since most homeodomains contact a 5′ATTAAT3′ recognition DNA sequence (15), we tested by EMSA whether or not TGIF had any affinity for a homeobox consensus DNA sequence. In fact, there is no detectable binding of the in vitro-expressed full-length TGIF protein to the 5′ATTAAT3′ consensus sequence motif (Fig. 5A, lane 3), in contrast to the Oct-2 POU domain (32), which recognized this probe specifically (Fig. 5A, lanes 4-7). Under identical conditions, TGIF recognized the CRBPII-RXRE, as shown in Fig. 5, lane 2. Taken together, the results raise the question whether or not the non-TAAT recognition site of TGIF (5′GCTGTCAC3′) in the CRBPII-RXRE is the cognate TGIF binding site. To address this question, a random binding site selection was performed. The TGIF homeobox (GST-HD), expressed in E. coli, was challenged to determine its nucleotide preferences from a pool of oligonucleotides containing 15 randomized nucleotides. High affinity binding sites were selected in three successive rounds of EMSAs by testing three different GST-HD protein concentrations. A set of 42 sequences was analyzed and revealed a consensus core sequence which reads 5′TGTCA3′, as illustrated in Fig. 5B. The consensogram, derived from the 42 selected sequences, demonstrates that the 5′TGTCA3′ core sequence is extremely well conserved at each position (Fig. 5C). Furthermore, the first two nucleotides upstream of the core sequence are preferentially G or C residues, whereas A or T nucleotides are over-represented downstream of the core sequence (Fig. 5C).
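The consensogram construction described above amounts to tallying base frequencies at each aligned position and taking the most frequent base. A minimal sketch, using five invented aligned sequences rather than the 42 actually selected:

```python
# Derive a consensus from aligned selected sites, in the spirit of the
# consensogram in Fig. 5C. The five toy sequences are invented for illustration.

from collections import Counter

selected = ["GCTGTCAAT", "GGTGTCATT", "CCTGTCAAA", "GCTGTCATA", "GGTGTCAAT"]

def consensus(seqs):
    """Most frequent base at each aligned position across the set."""
    cols = zip(*seqs)  # iterate column-wise over the alignment
    return "".join(Counter(col).most_common(1)[0][0] for col in cols)

print(consensus(selected))
```

With real selection data one would also keep the full per-position counts, since the flanking positions (G/C upstream, A/T downstream) are preferences rather than fixed bases.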
Interestingly, the TGIF binding site in the CRBPII-RXRE conforms well to the consensus binding site determined by the random binding site selection experiment. As shown in Table I, the TGIF core binding site is also conserved in several retinoid/steroid receptor cognate binding sites from human, rat, mouse, and/or chicken gene promoters. In all the cases described (Table I), the TGIF core binding site is flanked by the consensus half-site motif (5′AGGTCA3′) recognized by the zinc finger-containing nuclear receptors. The TGIF core binding site and the 5′AGGTCA3′ motif form the two half-sites contained in imperfect direct or inverted repeats generally spaced by one nucleotide (Table I), as in the CRBPII-RXRE.
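The arrangement summarized in Table I, the TGIF core adjacent to an AGGTCA half-site with a one-nucleotide spacer, can be expressed as a simple pattern search. The motifs and spacing come from the text; the example sequence below is hypothetical.

```python
# Scan for a TGIF core (TGTCA) followed by one spacer nucleotide and a
# nuclear-receptor half-site (AGGTCA), the direct-repeat arrangement of Table I.

import re

pattern = re.compile(r"TGTCA.AGGTCA")  # "." matches the one-nucleotide spacer
example = "AATGTCAGAGGTCATT"            # hypothetical element, not a real promoter
match = pattern.search(example)
print(match.start(), match.group())
```

A fuller scan would also try the inverted-repeat orientation and the two-nucleotide spacer noted later for the myosin heavy chain TRE.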
In Vitro Overlapping Binding of TGIF and RXRα on the CRBPII-RXRE-To evaluate the binding properties of RXRα on the CRBPII-RXRE, we generated extracts from S. frugiperda Sf9 cells which overexpressed RXRα upon infection with a recombinant A. californica baculovirus. The dose-response curves corresponding to the quantification of the different probes complexed with increasing concentrations of RXRα in the EMSA indicated that the RXRα homo-cooperativity was dependent on the number of conserved hexanucleotide half-sites. The divergent half-site 1, although poorly bound by RXRα, also contributed to the stabilization of RXRα binding to the CRBPII-RXRE DNA (see Fig. 1 for a summary of the DNA affinities of RXRα).2 To further investigate how many half-sites were occupied on the CRBPII-RXRE, we mapped the binding sites of RXRα by G-specific dimethyl sulfate interference. Fig. 4B, lanes 1-4, shows a typical experiment, in which a complex generated by protein extracts obtained from Sf9 cells infected with a recombinant baculovirus expressing RXRα was mapped onto the CRBPII-RXRE. On both strands, all G residues contained within half-sites 2, 3, 4, and 5 were undermethylated. Residues from half-sites 2, 3, and 4 were slightly more undermethylated than residues from half-site 5, indicating a slightly predominant occupancy of the proximal (relative to half-site 1) repeats (Fig. 4B, lanes 1 and 2). This was consistent with the weaker M2,3·RXRα complex (Fig. 1) and also correlates well with the recent description of the mouse CRBPII-RXRE, in which only half-sites 1, 4, and 5 from the rat CRBPII-RXRE were conserved, allowing weaker RXR binding (1). In the region corresponding to half-site 1, neither strand was undermethylated. This, however, contrasted with the EMSA experiments reported in Fig. 1, in which specific binding to this site was detected upon addition of a 10-fold larger amount of RXRα to the binding reactions.
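Homo-cooperativity of the kind inferred from these dose-response curves is commonly summarized with a Hill coefficient: n > 1 means fractional occupancy rises more steeply with protein concentration than independent binding would allow. This is a generic illustration with invented parameters, not a fit to the reported data.

```python
# Hill-equation sketch of cooperative vs. non-cooperative binding curves.
# kd and n are illustrative values, not fitted to the RXRalpha EMSA data.

def hill(conc, kd=1.0, n=2.0):
    """Fraction of probe bound at a given protein concentration."""
    return conc**n / (kd**n + conc**n)

# A cooperative curve (n = 2) lies below the non-cooperative one (n = 1)
# beneath Kd and above it past Kd, i.e. it is steeper around Kd:
print(hill(0.5, n=2.0), hill(0.5, n=1.0))
print(hill(2.0, n=2.0), hill(2.0, n=1.0))
```

Fitting n for probes with different numbers of intact half-sites would be one way to quantify the statement that cooperativity depends on the number of conserved hexanucleotide repeats.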
Repression of RXRα-mediated Transcription Activation-To investigate the function of TGIF in the control of the cellular retinol-binding protein promoter, a series of retinoid-responsive reporter plasmids (pBLCAT5-CRBPII-RXREs) were transfected into either human glioma U87 or COS-1 cell lines together with vectors expressing RXRα (pSG5-RXRα) and/or TGIF (pcDNA-TGIF). The choice of the cell lines was dictated by their low levels of endogenous RXR activities. Cells were treated with 10⁻⁷ M 9-cis-RA to selectively induce RXRα-dependent transcription activation (45, 46). Fig. 6 provides evidence that TGIF acts as a repressor of the RXRα-dependent transcriptional activation. In these experiments, a weak constitutive activity driven by endogenous RXRα was observed upon transfection of the various reporter plasmids (Fig. 6, A and B, lanes 1, 5, 9, and 13). As described earlier (2), transfection of RXRα generated a 9-cis-RA-dependent activation (4- to 5-fold in U87 cells and 10-fold in COS-1 cells) of the CRBPII-RXRE reporter plasmid (Fig. 6, A and B, lanes 2).
Co-transfection of TGIF and RXRα expression vectors resulted in a repression (3-fold in U87 cells and 2-fold in COS-1 cells) of the 9-cis-RA-inducible expression of the CRBPII-RXRE reporter plasmid (Fig. 6, A and B, compare lane 2 with lane 4). The difference in repression activities detected in these cell lines suggests that the TGIF to RXRα ratio is very important (see "Materials and Methods"). Further, transfection of the TGIF expression vector alone resulted in a decrease of the constitutive activity (3-fold in U87 and 2-fold in COS-1 cells), indicating that TGIF also repressed the endogenous RXRα-dependent activation (Fig. 6, A and B, compare lanes 3 with lanes 1), which may be linked to the cooperative binding of RXRα on the CRBPII-RXRE and the saturation of the RXR binding sites. Deletion of the TGIF binding site (half-site 1) in pBLCAT-Δ1 M3/4 left the RXRα transcriptional activation unaffected by TGIF (Fig. 6, A and B, lanes 5-8), suggesting that the repression was mediated by the first DNA repeat of the CRBPII-RXRE element. To rule out a possible competition between the promoters contained within the expression vectors pSG5-RXRα and pcDNA-TGIF, we co-transfected pSG5-RXRα and pcDNA devoid of TGIF sequences (Fig. 6C, lane 3). No repression was observed under these conditions, nor did the co-transfection of both expression vectors devoid of cDNA sequences result in a reduction of the constitutive activity (Fig. 6C, lane 2).
To further document this repressor activity, we transfected reporter plasmids which would reduce or no longer support the cooperative binding of RXRα on the CRBPII-RXRE (see Fig. 1). Mutations of two or more RXRα binding sites (M4,5, M2,4,5, and M2,3,4,5 in Fig. 1) reduced or abolished the RXRα-mediated activation as well as the constitutive activity. With the pBLCAT-M4,5 reporter (Fig. 6, A and B, lanes 9-12), the constitutive activity in both U87 and COS-1 cells was reduced about 2-fold below the activity measured with the wild-type CRBPII-RXRE. The RXRα-mediated transactivation was about 3-fold (Fig. 6, A and B, lanes 9 and 10). With a 3-fold reduced RXRα-dependent transactivation using the pBLCAT-M2,4,5 reporter, we observed an even stronger TGIF-dependent inhibitory effect (Fig. 6, A and B, lanes 14 and 16).
Transfection of the reporter plasmids pBLCAT-M4,5 and pBLCAT-M2,4,5 resulted in a proportionally stronger repression (5-fold in U87 cells and 3-fold in COS-1 cells) of the RXRα activity (Fig. 6, A and B, lanes 10 and 12). We also tested a reporter plasmid in which all RXRα binding sites were mutated (M2,3,4,5). As expected, no RXRα-mediated transactivation was detectable in U87 cells (Fig. 6A, lanes 17 and 18). Using the same reporter plasmid, TGIF did induce a slight decrease of the signal (Fig. 6A, lanes 19 and 20). This transcription repression could be directed toward the basal transcription activity or toward endogenous transcription-activating proteins that can interact with the TGIF response element (see Table I), suggesting that TGIF is a general transcription repressor.
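The fold-activation and fold-repression values quoted throughout this section reduce to simple ratios of percent chloramphenicol conversion. A sketch with invented counts (the real values come from the thin layer chromatography quantification):

```python
# Fold-change arithmetic behind CAT assay readouts: percent conversion of
# chloramphenicol to its acetylated form, then ratios against a reference.
# All counts below are invented for illustration.

def percent_conversion(acetylated, nonacetylated):
    """Percent of total chloramphenicol counts in the acetylated spots."""
    return 100.0 * acetylated / (acetylated + nonacetylated)

basal = percent_conversion(500, 9500)      # reporter alone (constitutive)
rxr = percent_conversion(2500, 7500)       # + RXRalpha + 9-cis-RA
rxr_tgif = percent_conversion(1250, 8750)  # + RXRalpha + TGIF + 9-cis-RA

fold_activation = rxr / basal       # RXRalpha-dependent activation
fold_repression = rxr / rxr_tgif    # TGIF-dependent repression
print(fold_activation, fold_repression)
```

Averaging such ratios over the three independent experiments gives the means and standard deviations reported in Fig. 6.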
FIG. 6. Repression of both basal and RXRα-dependent transactivation by TGIF in U87 cells and COS-1 cells. A, 5 μg of the pBLCAT5 reporter gene constructs containing several CRBPII-RXRE promoter sequences (see Fig. 1) were co-transfected in U87 cells with either 1 μg of pSG5-RXRα effector plasmid and/or 3 μg of pcDNA-TGIF effector plasmid, as indicated (+). B, COS-1 cells were co-transfected with the different reporter gene constructs and effector plasmids as in A, except that 0.25 μg of RXRα effector plasmid was transfected. C, control experiments were performed in U87 cells by co-transfecting the reporter plasmids together with the pSG5 and pcDNA plasmids with or without insert under the same conditions as described in A. All experiments were performed in the presence of 10⁻⁶ M 9-cis-retinoic acid for induction of RXRα-specific transcriptional activation. The top part of A, B, and C depicts the characteristic thin layer chromatography pattern of nonacetylated and acetylated 14C-labeled chloramphenicol (CAT) obtained in the different CAT assays. Relative expression of the CAT reporter gene was quantified, and the values with standard deviations represent the mean of three independent experiments.

Mutually Exclusive Binding of TGIF and RXRα on the CRBPII-RXRE-The inspection of the G-specific methylation interference patterns generated by both TGIF and RXRα (Fig. 4, A and B) indicated that their interactions occurred at contiguous, adjacent areas on the CRBPII-RXRE retinoid-responsive element. This raised the possibility that these two proteins recognize overlapping DNA binding sites, i.e. half-site 1 for TGIF and half-sites 2 and 3 for RXRα. To test whether or not TGIF and RXRα generated mutually exclusive DNA binding in vitro, co-incubation of these factors with the CRBPII-RXRE was performed in EMSA. A recombinant baculovirus was generated to overexpress full-length TGIF in Sf9 cells. TGIF complexed the probe specifically (Fig. 7A, lane 2), as already shown in Fig. 3.
Nonprogrammed Sf9 cell extract (Fig. 7A, lane 1) did not. Two complexes with different electrophoretic migration were visible upon incubation of RXRα from baculovirus-infected Sf9 cells with the rat CRBPII-RXRE probe (Fig. 7A, lane 6). The faster migrating complex (RXRα(D) in Fig. 7) represents an RXRα homodimer, because it co-migrated with an RXRα·M4,5 DNA complex (Fig. 7A, lane 7) in which 2 out of 4 RXRα sites are mutated, allowing only the formation of an RXRα dimer on half-sites 2 and 3. The slower migrating complex in the same lane (Fig. 7A, lane 4) most likely represents the binding of three or four RXRα molecules. This is in agreement with what has been suggested previously in Ref. 2.
As shown in Fig. 7A, co-incubation of a constant amount of TGIF and increasing amounts of RXRα with the CRBPII-RXRE probe led to the disruption of the TGIF·CRBPII-RXRE complex, suggesting either an incorporation of TGIF into the larger complex, called "TGIF/RXRα(T)," or exclusive binding between TGIF and RXRα. Under conditions where equal amounts of TGIF and RXRα were co-incubated with the CRBPII-RXRE probe, the TGIF and the dimeric RXRα(D) complexes were supershifted into the larger complex TGIF/RXRα(T) (Fig. 7A, compare lane 4 with lanes 2 and 6). This result (lane 4) was obtained under conditions where the probe concentration was limiting and the TGIF and RXRα protein concentrations were identical, in order to allow TGIF to bind to half-site 1 and to force the RXRα molecules onto the free distal half-sites (3, 4, and 5) of the CRBPII-RXRE probe.
The M4,5 probe was used in EMSA to evaluate possible steric hindrance between RXRα and TGIF on their high affinity half-sites 1, 2, and 3 (Fig. 7A, lanes 7-11). Using this probe, RXRα generated a homodimer complex on half-sites 2 and 3 which was as intense as the complex obtained with TGIF (Fig. 7A, compare lanes 7 and 8). Co-incubation of TGIF and RXRα with the M4,5 probe gave rise to two complexes (Fig. 7A, lane 9) migrating at the level of the respective protein·DNA complexes but with reduced intensity (Fig. 7A, lanes 7 and 8). This result suggests that RXRα and TGIF shared the M4,5 probe without binding simultaneously to the same probe molecules. There were neither intermediary complexes nor a supershift of the complexes. A stronger TGIF signal was recovered by diluting out RXRα in a co-incubation with the M4,5 probe (Fig. 7A, lane 10).
A similar experiment was performed in Fig. 7B with an excess of TGIF over RXRα. Both proteins could complex the M4,5 probe (Fig. 7B, lanes 1 and 3, respectively). Co-incubation of TGIF and RXRα with the M4,5 probe led to a disruption of the RXRα dimeric complex (Fig. 7B, lane 2). Specific antibodies raised against TGIF (Fig. 7B, lanes 4-6) neutralized the binding of TGIF to its recognition site (Fig. 7B, compare lanes 1 and 4) but did not affect RXRα binding (Fig. 7B, compare lanes 3 and 6). Complete RXRα binding could, however, be restored upon neutralization of the TGIF DNA binding (Fig. 7B, compare lanes 5 and 2). The slightly supershifted RXRα(D) complex observed in lanes 2 and 5, where TGIF and RXRα were co-incubated, was not due to the presence of TGIF in the large complex, because the band retardation was identical with or without neutralization of the DNA binding of TGIF (compare both lanes). Partially degraded TGIF protein could slow down complex migration. In this case, there would be neither steric hindrance by RXRα nor a DNA binding neutralization by the antibodies, which were not directly raised against the DNA binding domain.
Attempts to co-immunoprecipitate RXRα with specific antibodies to TGIF failed, suggesting that no direct protein-protein interaction between TGIF and RXRα occurred.2 These results demonstrated that the presence of TGIF prevents RXRα from binding to the DNA recognition half-sites 2 and 3 on the CRBPII-RXRE, leading to a disruption of the cooperative RXR binding. Further, they support the notion that TGIF prevents RXRα from functioning as a transcriptional activator by interacting with its cognate responsive element.
Tissue-specific Expression-As shown in Fig. 8, poly(A)+ RNAs from different adult human tissues were probed in Northern blots and revealed a single TGIF transcript of 2 kilobases. TGIF mRNA is highly expressed in the placenta, liver, kidney, testis, and ovary (Fig. 8, lanes 3, 5, 7, 12, and 13, respectively). It is weakly expressed in the small intestine and is almost not detectable on Northern blots in heart, brain, skeletal muscle, and peripheral blood leukocytes (Fig. 8, lanes 14, 1, 2, 6, and 16). However, inspection of different subregions of the human brain revealed subtle signal variations. The mRNA corresponding to TGIF is fairly well expressed in the spinal cord (Fig. 8, lane 21), but it is almost not detectable in the cerebral cortex and in the cerebellum (Fig. 8, lanes 23 and 24). Interestingly, TGIF mRNA co-localizes with RXRα mRNA in adult liver, placenta, and kidney (21).

FIG. 7. Mutually exclusive binding between TGIF and RXRα. A, TGIF and RXRα from baculovirus-infected cells were used in EMSAs with the CRBPII-RXRE and the M4,5 probes (see Fig. 1). The sizes of the boxes are a graphical representation of the protein concentrations. D and T stand for dimer and trimer/tetramer, respectively. B, TGIF was incorporated in excess over RXRα in binding reactions with the M4,5 probe. Antibodies (serum dilution 1/20) directed against GST-TGIF (see "Materials and Methods") were co-incubated as indicated. Total protein concentration of the binding reactions was equilibrated with nonprogrammed cell extracts and/or preserum.

FIG. 8. TGIF mRNA is expressed in a restricted number of human adult tissues. A 300-bp-long probe corresponding to the amino-terminal coding region of TGIF was used to detect a single TGIF transcript on Northern blots containing 2 μg of oligo(dT)-selected RNAs from different human tissues. TGIF mRNA is highly expressed in placenta, liver, kidney, testis, and ovary tissues. It is less expressed in lung, pancreas, thymus, prostate, small intestine, colon, blood leukocyte, and spinal cord tissues. It is almost not detectable by Northern blots in brain and muscle tissues. Migration of RNA size standards is indicated (kb). Hybridization with a human β-actin probe was used to score the amount of RNA loaded.

DISCUSSION

Our studies have revealed that a novel homeoprotein, TGIF, recognizes a DNA sequence that is unusual for homeoproteins and has been reported in only a restricted number of examples. As described for RXRα, binding of TGIF to a CRBPII gene promoter element results in functional interference.
TGIF is a member of a growing family of homeoproteins characterized by the requirement of insertions and/or deletions in their sequences for maximizing identities in amino acid alignments (14). A substantial number can be classified in a novel group of atypical homeodomains characterized by the presence of three additional amino acids between helices 1 and 2. This three-amino acid loop extension could be determined on the basis of structural comparisons between the atypical α2 and the typical Engrailed homeodomains (47). We suggest calling this group of atypical homeodomains the TALE (three-amino acid loop extension) homeodomain superclass. Four classes, Kn, PBC, HAC-ATYP, and M-ATYP (according to the general nomenclature described in Ref. 48), can be grouped in the TALE superclass. These four classes share a three-amino acid extended loop, and this structural conservation suggests that their members have common biological features. The high divergence of the TGIF homeodomain from the homeodomains of members of these four classes, the closest being the HAC-ATYP class, suggests that TGIF may constitute a new class.
The TGIF homeodomain shares with the other atypical TALE homeodomains highly conserved residues in the extended loop between helices 1 and 2 (amino acids 23 to 31 in Fig. 2B). The large number of these TALE homeodomains allows us to predict, on the basis of amino acid conservation between the typical and atypical homeodomains, at which position the three amino acids were inserted during evolution. We propose that the insertion of these three amino acids occurred carboxyl-terminally to amino acid 22, thereby affecting neither the spacing between the highly conserved residues Asn 23 and Tyr 25 in the classical homeodomains nor that between residues Asn 26 and Tyr 28 (in Fig. 2B) in the atypical homeodomains. Furthermore, as indicated in Fig. 2B, the first two amino acids of the three-amino acid insertion (His 23 and Leu 24) are well conserved in this TALE superclass of homeodomains, suggesting an important role for these residues in the function of this superclass of homeoproteins.
In contrast to most homeoproteins, which specifically interact with the target consensus sequence 5′AATTA3′, TGIF, together with a1/α2, Caudal (Cad), and the thyroid nuclear factor 1 (TTF1), displays high affinity for non-ATTA consensus sequence elements (15, 49-51), as demonstrated for TGIF by the binding site selection. The TGIF homeodomain contains 15 rare amino acids, located mainly in helices 1 and 3, which occur fewer than 5 times (1.5%) at each position among 346 homeodomains (14), e.g. Cys 49. However, 4 of these 15 rare amino acids (residues Pro 9, Trp 19, Asn 50, and Ile 53) are strongly represented in the group of the atypical homeodomains. Amino acids Asn 50 and Ile 53, located in the TGIF recognition helix 3 (see Fig. 2), have been described as critical residues for DNA sequence-specific binding (40, 52, 53). Interestingly, Asn 50 is conserved at the same position in the α2 homeodomain and Ile 53 in the typical a1 homeodomain. Furthermore, TGIF shares with a1/α2 not only these residues involved in specific DNA binding but also the affinity for the TGT core DNA binding site (47, 49). The "Asn 51 alignment rule" was defined on the basis of the alignment of the A residue (ACA/TGT and TTA/AAT) in the DNA recognition sites contacted by Asn 51, a highly conserved amino acid in all homeodomains (Asn 54 for TALE homeodomains). The alignment of the DNA recognition sequences from TGIF, α2, and Engrailed according to the above-mentioned rule showed that residue Arg 57 (in Fig. 2B) from the α2 homeodomain (47), which contacts the central G nucleotide in the TGT core (ACA/TGT), is also conserved in TGIF. In contrast to the Engrailed homeodomain, which recognizes an AAT core, this amino acid sequence conservation in TGIF and α2 might well reflect their unusual DNA binding behavior.
Comparison of the rat and mouse CRBPII-RXREs indicates that the TGIF DNA recognition site is present in both species and that in the mouse it is flanked in the 3′ direction by a direct repeat spaced by one nucleotide (DR1), composed of half-sites 4 and 5 (Table I). The DR1 hexamer half-sites 4 and 5 constitute a weaker binding site for RXRα, for the RXR:RAR heterodimer, and for the hepatocyte nuclear factor HNF-4, and a stronger binding site for the apolipoprotein A1 regulatory protein 1 (ARP-1) (1). The TGIF consensus binding site is moreover conserved in the promoters of the mouse lactoferrin gene, the chicken ovalbumin gene, the human complement factor H gene, and the human/rat myosin heavy chain genes (Table I). These TGIF target promoter sequences were shown to be adjacent to or overlapping with steroid/retinoid receptor recognition sites in these promoters. The sites are bound by the COUP-transcription factor (COUP-TF), the estrogen receptor (ER), RAR, ARP-1, and/or the thyroid receptor (TR) and are composed of half-sites which, together with the TGIF consensus binding site, are arranged as imperfect direct or inverted repeats spaced by one nucleotide, as in the CRBPII-RXRE. The spacing is two nucleotides in the human/rat myosin heavy chain TRE. Furthermore, the TGIF consensus recognition sequence is a natural COUP-TF half-site (3). The presence of a TGIF binding site in the CRBPII-RXRE could be seen as fortuitous. However, the conservation of the TGIF binding site contiguous with or overlapping several steroid/retinoid receptor binding sites argues strongly in favor of a functional relevance for this TGIF site. This observation prompted us to further study the functional interaction of TGIF with the CRBPII-RXRE in the context of RXR-dependent transcription activation. We describe in this paper that two regulatory factors, belonging to two different families of transcription factors, interfere functionally upon binding to their respective DNA targets.
The mutually exclusive binding between TGIF and RXRα on the rat CRBPII-RXRE leads to the repression of the RXRα-dependent transcription activation. A weak inhibitory effect is also directed toward the endogenous RXR transcription activity. As demonstrated herein for the rat CRBPII-RXRE, the exclusive binding of TGIF with the retinoid receptor could also influence the steroid/retinoid receptor homodimer- and heterodimer-mediated transactivation on the homologous mouse promoter region of the CRBPII gene. Possibly, TGIF could also modulate the activity of the mouse lactoferrin, chicken ovalbumin, human complement factor H, and human/rat myosin heavy chain gene promoters.
Several homeoproteins (a1/α2, Extradenticle, Engrailed, PBX, HOX) have been shown to interact cooperatively, thereby changing the DNA target site specificity by conferring strong binding on sites for which the single proteins show only weak affinity (16, 54-58). This could also hold true for TGIF when several molecules interact with the CRBPII-RXRE. Half-site 1 contains two superposed and inverted TGIF binding sites, one of high and one of low TGIF affinity, followed in the 3′ direction by directly repeated low affinity TGIF sites (see Figs. 3B and 4). Furthermore, in the mouse CRABPII promoter (positions −658 to −650), a putative TGIF low affinity binding site (5′GCTGTGAC3′) overlaps with a DR1 binding site for RXR (CRABPII/RARE2) (59). Sequences reading TGTGA were also found in additional RAREs and EREs, but the functional importance of these possible weak TGIF binding sites has still to be proven.
Although the mechanisms involved in transcriptional activation have been extensively investigated over the past years, much less is known of the processes governing transcriptional repression. Transcription repression can be achieved by different mechanisms. It can be brought about by the action of a repressor directly blocking a DNA-responsive site for a transcriptional activator. The repressor can also directly inhibit transcription by neutralizing the activation domain of a transcriptional activator or by titrating out activating factors (60). Recent reports have focused on the repression of eukaryotic transcription and in particular on the exclusive binding of transcription factors on contiguous or overlapping DNA sites (61-63 and references cited therein). For example, DAX-1, a novel orphan member of the nuclear hormone receptor superfamily, acts as a dominant negative regulator of RAR-mediated transcription by competing for the RAR DNA sites (64). Another experiment was carried out with transgenic mice overexpressing an isoform of RAR4 lacking the A domain, which is important for activation of the CRBPII promoter (65). These animals were clearly predisposed to hyperplasia and neoplasia (66). These in vivo disorders have been proposed to result from the competition of RAR4 with other RARs for retinoic acid (RA) response elements contained within the CRBPII promoter, thereby indirectly affecting intracellular RA signaling.
Some inhibitory factors need hormone induction to actively repress transcription and thereby interfere, in the presence of a ligand, with transcription activators by occupying adjacent or overlapping sites (28 and references cited therein). For example, the ligand-dependent effect of RAR-driven AP-1 (c-Fos/c-Jun heterodimer) transrepression can be dissociated from the ligand-mediated RAR transcription activation (27). Similar to the situation observed with TGIF/RXRα, the AP-1/RAR mutual repression occurs by exclusive binding on an identical site within the osteocalcin promoter (67). It would be of interest to test whether TGIF's inhibitory activity could be influenced by a ligand. Nevertheless, this hypothesis is quite unlikely, since no homeoprotein has been reported so far to act in a ligand-dependent fashion. It is interesting to visualize the convergence of two different regulatory pathways, a ligand-dependent and a ligand-independent one, to control the regulation of the CRBPII gene through its RXRE sites. The consequence of such an interference of two different classes of factors on an overlapping responsive site can be either enhanced repression or transactivation of the target promoters (68, 69).
A single factor can function either as a repressor or as an activator of transcription, as described in the AP-1/RAR example (see above). The switching of RAR from an activator to a repressor of retinoid-dependent transcription can be obtained merely by changing its relative positioning in the heterodimeric complex with RXR, depending on the spacing between the half-sites (23). RAR:RXR heterodimers activate transcription in a ligand-dependent manner by binding to directly repeated half-sites spaced by 5 nucleotides (DR5). RAR occupies the downstream half-site. In contrast, RAR:RXR heterodimers do not activate transcription when bound to a DR1. RAR inhibits RXR transactivation by binding to the upstream half-site and thereby blocks the binding of the ligand to RXR. Although the blocking of 9-cis-RA binding to RXR by TGIF was not studied, mutually exclusive binding seems to be the major mechanism leading to transcription inhibition by TGIF. However, the possibility that TGIF is itself an activator should not be excluded. The switching of TGIF to an activator would not depend on alternate positioning on the CRBPII-RXRE binding site but on other regulatory proteins present in the cell (for examples, see Ref. 60 and references cited therein).
While RXRα, RARs, HNF-1, and ARP-1 seem to be the major players in mouse and rat CRBPII gene regulation (1, 2), it is tempting to speculate that this regulation could also be modulated by TGIF. The possibility that TGIF can regulate the transcription of the CRBPII gene suggests that it may synergize with these factors, playing an important role in retinoid homeostasis. TGIF may interfere functionally in a similar manner with the members of the retinoid/steroid receptor superfamily which regulate transcription on responsive elements from the gene promoters shown in Table I containing the canonical TGIF binding site.
"year": 1995,
"sha1": "b01b51a706dbc1da5ca6b2425fef6d43289c45a8",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/270/52/31178.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "3e3ab7be0856aac080b9d2851a4248dd5caee49b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.