Dataset columns: id (string) | source (string) | version (string) | text (string) | added (date) | created (date) | metadata (dict)
id: 13181185 | source: pes2o/s2orc | version: v3-fos-license

Noncommutative Synchrotron
We study the departures from the classical synchrotron radiation due to noncommutativity of coordinates. We find that these departures are significant, but do not give tight bounds on the magnitude of the noncommutative parameter. On the other hand, these results could be used in future investigations in this direction. We also find an acausal behavior for the electromagnetic field due to the presence in the theory of two different speeds of light. This effect naturally arises even if only θ^{12} is different from zero.
Introduction
The idea of noncommutative space-time coordinates in physics dates back to the 1940's [1]. Recently, due to the discovery by Seiberg and Witten [2] of a map (SW map) that relates noncommutative to commutative gauge theories, there has been an increasing interest in studying the impact of noncommutativity on fundamental as well as phenomenological issues. Two directions seem to us very central: on the one hand, the clear understanding of the spatiotemporal symmetry and unitarity properties of these theories [3], [4], [5], [6], [7], [8], [9]; on the other hand, the hunt for experimental evidence (see e.g. [10]), possibly in simple theoretical set-ups [11].
The aim of the present paper is to study the effects of noncommuting space-time coordinates on synchrotron radiation in classical electrodynamics. The motivations are twofold: i) we want to see the fundamental effects of noncommutativity, such as acausality, and violation of Lorentz and scale invariance, practically at work in a simple case; ii) we want to explore the physics of synchrotron radiation in the hope that more stringent limits on the magnitude of the noncommutative parameter θ could be set in this case (for a nice account on the current bounds see for instance [4]).
The current views on the space-time properties of noncommutative field theories are essentially three: i) with the only exception of space-time translations, spatiotemporal symmetries are manifestly violated (see for instance [3]), and sometimes the artifact of the so-called "observer transformations" has to be introduced [4], [12]; ii) full Lorentz invariance (including parity and time-reversal) is imposed on (a dimensionless) θ^{μν}, leading to a quantum space-time with the same classical global symmetries [6]; iii) since noncommutative field theory is an effective theory of the more fundamental string theory, space-time symmetries are not a big issue. For a quite comprehensive review see for instance [13]. We believe that a lot more work is needed to fully understand this very fascinating matter.
What we intend to do here is to take a practical view and tackle the problem of finding corrections to the spectrum of synchrotron radiation induced by noncommutativity at first order in θ. We shall find that, in our approximations, these corrections act as a powerful "amplifier" of the effects of noncommutativity: independently of the actual value of θ, the synchrotron radiation amplifies the effects by a factor of O(10^13). On the other hand, given the current bounds on θ, this amounts to a correction of O(10^{-10}) to the commutative counterpart, hence we are still far from possible testable effects. We also see in this analysis that some surprising acausal behaviours naturally arise even if only the space-space component θ^{12} is taken to be different from zero. The effect is due to the presence of two different speeds of light in this theory. This result is in contrast with the general belief that acausality effects should arise in this context only when θ^{0i} ≠ 0. We take this last result as a confirmation that the issues of space-time properties are far from being clarified.
The theory we shall be dealing with in this paper is affected by serious problems in the quantum phase (see e.g. [14], [15]). For instance, the truncation of the theory at first order in θ leads to infrared instabilities at the quantum level [10], and in the limit θ → 0 commutative quantum electrodynamics is not recovered [16]. These obnoxious features seem to be related to an unusual correspondence between the ultraviolet and infrared perturbative regimes of the quantum theory. It is still unclear whether this correspondence is an artifact of the perturbative calculations or a more fundamental (hence more serious) problem. For instance, in [17] it is shown that there are scalar field theories where the connection is actually absent. These facts evidently mean that the quantum theory is still a "work in progress", and we shall not further address these important matters here. We shall instead focus on the classical theory, in the hope that a meaningful quantum theory might be discovered in the future, and that such a theory could have the classical model we are about to use as a limit. After all, classical general relativity is widely used and experimentally tested even if a sound quantum theory of gravity still does not exist (and may as well not exist at all).
In the next Section we shall recall the main ingredients of noncommutative electrodynamics [11], and set the notation. In Section 3 we shall exhibit the electromagnetic potentials for the noncommutative synchrotron, while in Section 4 we shall give the approximate expressions for the electric and magnetic fields to estimate the leading corrections to synchrotron radiation. Finally, in Section 5 we shall draw our conclusions.
Noncommutative Electrodynamics
For us the noncommutativity of space-time coordinates will be expressed in the simplest possible fashion, the canonical form [18], given by

$[x^\mu \stackrel{*}{,} x^\nu] \equiv x^\mu * x^\nu - x^\nu * x^\mu = i\,\theta^{\mu\nu}\,,$   (1)

where the Moyal-Weyl *-product of any two fields φ(x) and χ(x) is defined as

$(\phi * \chi)(x) = \phi(x)\, e^{\frac{i}{2}\,\overleftarrow{\partial}_\mu\, \theta^{\mu\nu}\, \overrightarrow{\partial}_\nu}\, \chi(x)\,,$   (2)

θ^{μν} is c-number valued, the Greek indices run from 0 to n − 1, and n is the dimension of the space-time. This approach, of course, does not contemplate all the possible ways noncommutativity of the coordinates could take place. For instance, two equally valid, if not more general, approaches are the Lie-algebraic and the coordinate-dependent (q-deformed) formulations [18], and many other approaches exist. Nonetheless, the canonical form is surely the simplest, and the basic features of noncommutativity are captured in this model.
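To make the O(θ) truncation used throughout this paper concrete, here is a minimal symbolic sketch (ours, not from the paper; it assumes sympy and keeps only θ^{12} ≡ θ nonzero, with hypothetical toy plane-wave fields):

```python
# O(theta) truncation of the Moyal-Weyl product in the (x1, x2) plane:
#   (phi * chi)(x) = phi*chi + (i/2) theta^{mu nu} d_mu phi d_nu chi + O(theta^2)
# Only theta^{12} = -theta^{21} = theta is taken nonzero.
import sympy as sp

x1, x2, theta = sp.symbols("x1 x2 theta", real=True)

def star_first_order(phi, chi):
    """First-order Moyal-Weyl star product for fields on the (x1, x2) plane."""
    poisson = (sp.diff(phi, x1) * sp.diff(chi, x2)
               - sp.diff(phi, x2) * sp.diff(chi, x1))
    return sp.expand(phi * chi + sp.I * theta / 2 * poisson)

phi = sp.exp(sp.I * x1)   # toy plane waves
chi = sp.exp(sp.I * x2)

# The star product is noncommutative already at O(theta):
commutator = sp.simplify(star_first_order(phi, chi) - star_first_order(chi, phi))
print(commutator)         # nonzero, proportional to I*theta*exp(I*(x1 + x2))
```

The antisymmetric O(θ) correction term is precisely what feeds the θ-dependent pieces of the expanded action below.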
The action for the noncommutative Maxwell theory for n = 4 is

$\hat S = -\frac{1}{4}\int d^4x\, \hat F_{\mu\nu} * \hat F^{\mu\nu}\,, \qquad \hat F_{\mu\nu} = \partial_\mu \hat A_\nu - \partial_\nu \hat A_\mu - i[\hat A_\mu \stackrel{*}{,} \hat A_\nu]\,.$

Let us now recall some useful results, valid at all orders in θ. The Noether currents for space-time transformations (full conformal group) for the noncommutative electrodynamics described by this action can be written in terms of $\hat I = \int d^4x\, \hat L$ and $\Pi^{\mu\nu} = \delta \hat L/\delta(\partial_\mu \hat A_\nu)$; for translations (the only symmetric case) f^μ ≡ a^μ, where the a^μ are the infinitesimal parameters. By making use of the gauge-covariant transformations [9] one finds the conserved energy-momentum tensor [8], whose symmetry is, of course, not guaranteed as in the commutative case (where Π^{μν} = −F^{μν}), and whose conservation holds when the equations of motion are satisfied. From the energy-momentum tensor one obtains, in full generality, the conserved Poynting vector. For our purpose it suffices to treat the simplest case of noncommutative electrodynamics at order O(θ), coupled to an external current, described by

$\hat L = -\frac{1}{4} F_{\mu\nu}F^{\mu\nu} + \frac{1}{8}\,\theta^{\alpha\beta} F_{\alpha\beta} F_{\mu\nu}F^{\mu\nu} - \frac{1}{2}\,\theta^{\alpha\beta} F_{\mu\alpha}F_{\nu\beta}F^{\mu\nu} - J^\mu A_\mu\,,$   (9)

where we made use of the O(θ) SW map and of the *-product defined in Eq. (2). From now on our considerations will be based on this O(θ) theory.
In Ref. [11] it was found that, in the presence of a background magnetic field b⃗, and in the absence of external sources (J^μ = 0), O(θ) plane-wave solutions exist. The waves propagating transversely to b⃗ travel at the modified speed

$c' = c\,(1 - \lambda)\,, \qquad \lambda \equiv \vec{\theta}_T \cdot \vec{b}_T$

(with θ^{ij} = ε^{ijk} θ_k and θ^{0i} = 0, the subscript T denoting the components transverse to the direction of propagation), while the ones propagating along the direction of b⃗ still travel at the usual speed of light c.
The plane-waves, unfortunately, do not give a stringent bound on θ. As a matter of fact, with the current bound of 10^{-2} (TeV)^{-2} [21], one would need a background magnetic field of the order of 1 Tesla over a distance of 1 parsec to appreciate the shift of the interference fringes due to the modified speed of noncommutative light. It is then of strong interest to find more stringent phenomenological bounds on the noncommutative parameters. In the next Sections we shall study the synchrotron radiation in the hope of improving those bounds.
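A back-of-the-envelope numeric sketch (ours, not from the paper; it assumes the standard conversion 1 Tesla ≈ 1.95 × 10^{-16} GeV² in natural units and visible light of 500 nm) reproduces this order of magnitude:

```python
# Fringe-shift estimate for noncommutative light: c' = c (1 - lambda),
# with lambda ~ theta * b in natural units (theta and b as quoted in the text).
theta = 1e-2 * 1e-6        # 1e-2 TeV^-2 expressed in GeV^-2
b = 1.0 * 1.95e-16         # 1 Tesla expressed in GeV^2 (assumed conversion)
lam = theta * b            # dimensionless speed shift, ~2e-24

parsec = 3.086e16          # metres
path_difference = lam * parsec          # extra optical path over 1 parsec
wavelength = 500e-9                     # visible light, metres (assumed)
print(f"lambda = {lam:.1e}")                                           # ~2.0e-24
print(f"fringe shift over 1 pc ~ {path_difference / wavelength:.2f}")  # ~0.12
```

A shift of roughly a tenth of a fringe over a parsec-scale baseline is indeed at the edge of what could be appreciated, consistent with the statement above.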
In order to do that, let us recall the linearized constitutive relations among the fields, expressing D⃗ and H⃗ in terms of E⃗ and B⃗, that follow from the modified Maxwell Lagrangian in Eq. (9) [11]; the Bianchi identities are not modified. On the other hand, the dynamical Maxwell equations (6), when a source is added, become modified accordingly, leading (for θ^{0i} = 0) to the wave equations (17) and (18) for the potentials, where the Latin indices run from 1 to 3.
Furthermore, we are interested in evaluating the largest noncommutative departures from the synchrotron spectrum, hence contributions of order higher than O(e/R), where e is the electric charge and R is the distance from the source, will be neglected. Let us write the general solutions of Eqs. (17) and (18) accordingly. Taking all of the above into account, the approximations made lead us to write Eqs. (17) and (18) as Eqs. (20)-(23), in terms of the rescaled sources J̃_i ≡ J_i/a, i = 1, 2, and ρ̃ ≡ ρ/a. We notice here that: i) Eqs. (20) and (21) enjoy a 1 ↔ 2 symmetry, due to the rotational symmetry still present in the plane in the noncommutative case; ii) Eqs. (20) and (21) couple the components A_1 and A_2, while Eqs. (22) and (23) couple the components A_3 and Φ. When one solves the equations for A_3 and Φ, one sees that A_3 ∼ O(λ). This gives a negligible contribution O(λ²) to Φ, but it is an effect entirely due to noncommutativity, absent in the standard theory. As a matter of fact, we have A_3 ≠ 0 even if the current J⃗ is taken to lie in the plane (1, 2).
Indeed, by writing Eqs. (20), (21), (22), and (23) in momentum space we obtain, to order O(λ), algebraic equations for the potentials, where J̃_0 ≡ cρ̃. We can now identify the Green's functions, with R⃗ ≡ x⃗ − x⃗′ and τ ≡ t − t′, where we made use of the fact that translation invariance is still present in the noncommutative theory.
The non-zero Green's functions then contain, besides the usual retarded term proportional to δ(τ − R/c), a term proportional to δ′(τ − R/c), where R ≡ |R⃗| and the prime on the delta function means derivative with respect to its argument. It is interesting to notice that the effect of the noncommutativity appears as a δ′ coming from the shifted poles in the ω integrals. This means that the difference in the propagation speeds, c and c′, will be converted into a pre-acceleration effect due to β̈ (see below) at one single speed of light c.
One can now compute the electric and magnetic fields from the general expression for the potentials. From the structure of the Green's functions in (30)-(32) one sees that the fields split into the commutative part plus an O(λ) correction, with A^{(0)}_3 = 0 and B⃗^{(0)} = n⃗ × E⃗^{(0)}. The electric and magnetic fields have quite involved expressions, and the part proportional to λ contains a term proportional to the derivative of the acceleration, β̈ [20], where n⃗ = R⃗/R and [ ]_ret denotes the usual retarded quantities. We see here the announced contributions proportional to the derivative of the acceleration. As discussed earlier, the β̈ contribution arises as an effect of the conversion of the two speeds of light, c and c′, in the poles of the Green's function into a single speed c with a derivative of the delta function δ′(τ − R/c). We have taken this view, rather than retaining both speeds, in order to better compare our results with the experiments.
Let us note here that these terms, which are introduced solely by noncommutativity, recall the familiar acausal scenario of the Abraham-Lorentz pre-acceleration effects for the classical self-energy of a point charge [19]. Even if not directly connected to the Abraham-Lorentz case, this feature is quite surprising in this context. As a matter of fact, we naturally obtain this effect by retaining only one space-space component of θ^{μν}, while it seems that one should expect such behaviors only for non-zero time components of θ^{μν}.
Corrections to Synchrotron Spectrum
In order to compare our results with the standard ones, we want now to compute the effects of noncommutativity on the synchrotron radiation in the experimentally relevant case defined by the following approximations:
• Ultra-relativistic motion, β = v/c → 1;
• Radiation observed in the plane (1,2), and far from the source, |x⃗| ∼ R ≫ |r⃗(t)|.
The power radiated in the direction n⃗ is obtained from the modified Poynting vector introduced in Section 2, where all the quantities (n⃗, β⃗, β̇⃗, R) are in the plane (1,2).
One can easily verify that in these approximations A^{(λ)}_3 does not contribute to the radiated power in Eq. (36), and for the evaluation of the order of magnitude of the leading noncommutative correction to the power one can use the approximate expressions (37) and (38) for the electric and magnetic fields, where ζ ≡ 1 − n⃗ · β⃗. The expressions (37) and (38), in the limit λ = 0, reproduce the standard results for the terms O(1/R) in the ultra-relativistic limit [19], which are the only relevant ones for the evaluation of the synchrotron radiation.
By using the expressions (37) and (38) for the electric and magnetic fields, and retaining only the leading contributions for large R and β → 1, one obtains the angular distribution of the radiated power dP/dΩ [20], and the energy radiated in the plane d²I/(dω dΩ) [19], where L(ω) is the Fourier transform of L(t). In the ultra-relativistic approximation there are two characteristic frequencies for the synchrotron: the cyclotron frequency ω_0 ∼ c/|r⃗|, and the critical frequency ω_c = 3ω_0 γ³. In order to consider only the radiation in the plane, we shall work in the range of frequencies ω ≫ ω_0, for which the latitude ϑ ∼ π/2 [19].
Thus, there is an impressive "amplification" of the effects induced by a nonzero noncommutativity parameter (better, our λ). As a matter of fact, one gains 13 orders of magnitude (from 10^{-23} to 10^{-10} for the case considered in this example), and this gain is independent of the actual input value for θ.
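For concreteness, the quoted orders of magnitude can be checked with a two-line estimate (ours; the 1 Tesla laboratory field and the unit conversion are assumptions, while the 10^13 gain is the one reported above):

```python
# Bare noncommutative effect for a laboratory field, and its synchrotron gain.
lam = 1e-8 * 1.95e-16        # theta = 1e-2 TeV^-2 times b = 1 Tesla, ~1e-23
amplification = 1e13         # gain reported for the synchrotron spectrum
print(f"bare effect ~ {lam:.0e}, corrected spectrum ~ {lam * amplification:.0e}")
# -> bare effect ~ 2e-24, corrected spectrum ~ 2e-11, i.e. the 10^-23 -> 10^-10
#    gain of thirteen orders of magnitude discussed in the text
```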
Conclusions
We investigated synchrotron radiation in noncommutative classical electrodynamics. The spectrum of this radiation is considerably modified by noncommutativity of the coordinates. These departures from the standard spectrum work as an impressive "amplifier" of the noncommutative effects. On the other hand we cannot obtain a tight bound on θ. This study indicates that the phenomenology of synchrotron radiation in noncommutative electrodynamics deserves further investigation.
We notice a partial analogy of the present theory, in the linearized approximation, with nonlinear optics. A comparison of the latter with the noncommutative case could give interesting results, because the properties of the "medium" (as described by ε_{ij} and μ_{ij}) depend in our framework on the external background magnetic field b⃗. It is then possible to separate the effects of noncommutativity, coupled to b⃗, from the other electrodynamical effects.
We also saw a peculiar acausal behaviour for the electric and magnetic fields, which contain time derivatives of the particle trajectory of order higher than two. This Abraham-Lorentz-like effect, which naturally arises even if only one space-space component of θ^{μν} is taken to be different from zero, is due to the two different speeds of light allowed in this theory. Moreover, it is in contrast with the general belief that such acausal effects should arise in this context only when θ^{0i} ≠ 0. We take this last result as a confirmation that the fascinating issues of the space-time properties in the presence of noncommuting coordinates are far from being fully clarified.
added: 2014-10-01T00:00:00.000Z | created: 2002-12-19T00:00:00.000
metadata:
{
"year": 2002,
"sha1": "7fce04306502f4321dcd2c59a2eed4f1c1e193c3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0212238",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "30d55ebebdc6756684d6a33e6abec8a46c35762e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
id: 55204672 | source: pes2o/s2orc | version: v3-fos-license

Effects of Lactobacillus Supernatant on Differentiation of K562 Cell Line
Background and objective: Considering the toxic side effects of chemotherapy in treatment of cancer, anticancer drugs of natural origin, including probiotic Lactobacillus strains, have recently attracted a lot of attention. Methods: After culturing the chronic myeloid leukemia cell line K562 in 96-well plates, effects of different concentrations of culture supernatant from Lactobacillus casei on differentiation of the cells were investigated after 48 and 72 hours under an inverted microscope. The number of live cells and percentage of viable cells were determined by the trypan blue exclusion test of cell viability. Cytotoxicity was assessed by MTT assay. Data analysis was performed by SPSS (version 22) using one-way analysis of variance and Tukey's test at a significance level of 0.05. Results: Secondary metabolites from the probiotic bacterium L. casei induced cellular differentiation, exerted anti-cancer effects and inhibited growth in K562 cells. Apoptotic cell death was confirmed by MTT and DNA fragmentation assays, in that increasing the dilution from 1:2 to 1:32 significantly increased the viability of cells (P=0.001). In addition, increasing the dilution significantly increased the number of live cells in the first 48 hours (P=0.001). Conclusion: Culture supernatant of L. casei reduces the number of live cells, and induces apoptosis and monocytic differentiation in K562 cells in a dose- and time-dependent manner. Therefore, combined chemotherapy and differentiation therapy using such supernatants could be useful for treatment of cancer.
INTRODUCTION
Chronic myeloid leukemia (CML) is one of the most recognized forms of leukemia, accounting for 15-20% of total incidents of this type of cancer. It is more common in people over the age of 50. The Philadelphia chromosome is a defect found in 95% of patients with the disease that results from a balanced translocation between the long arms of chromosomes 9 and 22 (1). This leads to production of the BCR-ABL fusion gene, which produces an abnormal tyrosine kinase called BCR-ABL that plays an important role in CML pathogenesis (1). Cells with sustained tyrosine kinase activity do not stop cell division, and become cancerous. The cause of the chromosomal breakage in these cells is unknown (1). Chemotherapy is the most common treatment option for CML. Drugs such as hydroxyurea and busulfan have been able to control the total number of leukocytes, but failed to increase survival in patients. Imatinib is another highly effective drug in treatment of CML that inhibits the abnormal kinase produced by these cells. However, the effect of this drug is reduced due to drug resistance in cancer cells (2). Therefore, many efforts are being made to find novel drugs and anticancer agents. Accordingly, development of new anticancer agents of natural origin has received more attention (3). Lactobacillus is a genus of probiotic bacteria producing lactic acid as the end-product of carbohydrate fermentation. Lactic acid bacteria are strong prokaryotes with antimicrobial properties. These bacteria are Gram-positive, non-spore-forming, fermentative, catalase-negative, and rod-shaped (bacillus) or spherical (coccus). Lactobacillus casei is of great importance in the probiotic food industry, and can interfere with the proliferative and damaging activities of cancer cells by production of various compounds. The main compounds produced by the bacterium are lactic acid, exopolysaccharides, biosurfactants and peptidases (and smaller amounts of hydrogen peroxide, acetic acid, formic acid, and diacetyl) that have anti-cancer effects and play important roles in maintenance and regulation of cell functions. These compounds often have antibacterial, antifungal, antiviral, antiprotozoal and antitumor properties. Some of them are released into the supernatant, while some are also structural components of the bacterium. These supernatants are not involved in the growth, proliferation and evolution of bacteria, and are usually produced in the stationary phase of growth (4). In this study, we evaluate the anticancer effects of culture supernatant of this bacterium on K562 cells. The K562 cell line is composed of undifferentiated blast cells that are rich in glycophorin, and derived from chronic myeloid leukemia (CML) patients. The cells were obtained from the Pasteur Institute of Iran. Given that the supernatant derived from Lactobacilli such as L. casei has strong antibacterial properties, and the compounds derived from these bacteria have anti-cancer properties (5), we aimed to evaluate the anticancer effects of culture supernatant from L. casei.
MATERIAL AND METHODS
For bacterial culture and supernatant extraction, lyophilized L. casei (PTCC 1608) was purchased from the industrial fungal and bacterial collection of Iran. To prepare the bacteria, the external surface of the ampules was first disinfected with 70% ethanol, scratched with a diamond scratch pen, and then broken by applying pressure on the scratched area. After addition of 400 μl of MRS broth to each ampule, a uniform suspension was achieved by thorough mixing. The suspension was then added to 20 ml of MRS broth. Remaining drops of the suspension were grown on MRS agar medium to ensure the purity of the bacteria. The media containing L. casei were incubated at 37 °C for 48 hours until the end of the stationary phase. Finally, bacteria-free supernatant was used for testing. Batch culture was performed in order to obtain the supernatant. In this technique, no compound is added or removed during fermentation; therefore, after increase in cell mass and reduction of nutrients, conditions are provided for the production of supernatant. After homogenizing the bacterial suspensions, 4 ml of the suspension was added to 400 ml (1% v:v) of freshly prepared MRS broth, and then incubated at 37 °C for 72 hours (6). To ensure the entry of bacteria into the stationary phase of growth and production of supernatant, the bacterial growth curve was plotted based on absorbance at 630 nm. After the incubation time, the number of bacterial colonies per volume unit was determined. The supernatant was separated from the culture medium by centrifugation. Due to the high production of lactic acid during the growth and metabolism of L. casei, which acidifies the extracted supernatant, the pH of the suspension was neutralized and adjusted to 7.2-7.4. For this purpose, 1N NaOH solution was used and the final pH was adjusted to 7.2 at a temperature around 0 °C. For sterilization and ensuring the absence of live bacteria or dead bacterial debris, 0.22 micron filters were used. RPMI 1640 medium enriched with 10% fetal bovine serum (heat inactivated, Gibco®, USA) was used for the culture of K562 cells. The trypan blue exclusion test of cell viability was used to calculate the percentage of viability, computed as the number of live (unstained) cells divided by the total number of cells counted, multiplied by 100 (7). In the next step, the cells were cultured in 96-well plates. Cell counting was performed using a Neubauer chamber. MTT assay was performed to determine the toxicity of the culture supernatant using 1:20 serial dilution. Cell death or apoptosis was evaluated by Acridine orange staining and fluorescence microscopy. Finally, agarose gel electrophoresis was performed following DNA extraction. Statistical analysis was performed with SPSS (version 22) using one-way analysis of variance and Tukey's test at a significance level of 0.05.
RESULTS
Total bacterial count
The plates cultured from the 10^-5 dilution were selected for counting (the selection was based on presence of 30-300 colonies per plate). Colonies were counted using a colony counter (Digital Colony Counter, Teifazma, Iran). Bacterial density was determined as 1.49 × 10^8 CFU/ml. Figure 1 shows the bacterial growth curve after 72 hours of incubation. Bacteria reached the stationary phase after 50 hours of incubation and produced the supernatant.
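As an illustration, the plate-count arithmetic behind this density can be sketched as follows (the colony count of 149 and the 0.1 ml plated volume are hypothetical; only the final density is from the text):

```python
# Plate-count arithmetic: CFU/ml = colonies / (dilution * plated volume in ml).
def cfu_per_ml(colonies: int, dilution: float, plated_volume_ml: float) -> float:
    return colonies / (dilution * plated_volume_ml)

# e.g. 149 colonies grown from 0.1 ml of the 10^-5 dilution:
print(f"{cfu_per_ml(149, 1e-5, 0.1):.2e} CFU/ml")   # 1.49e+08, as reported
```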
Trypan blue exclusion test results
This test was carried out in a 96-well plate. After 48 and 72 hours of incubation, the number of cells in each well was counted separately using a Neubauer chamber. Cell viability was 30% and 9.09% after 48 hours and 72 hours, respectively. The absorbance values read by the plate reader at 630 nm are shown in Table 3. Cell viability was determined by comparing the optical density (OD) values of the treatment group with those of the control group (100% viability). Since cell viability at the 1:8 supernatant concentration did not differ significantly from that of the control group, it was considered the highest nontoxic concentration.
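The two viability readouts used above reduce to simple ratios; here is a short sketch (ours, with hypothetical counts and OD values chosen to reproduce the reported 48-hour figure):

```python
# Trypan blue viability: live (unstained) cells over total counted cells.
def trypan_blue_viability(live_cells: int, dead_cells: int) -> float:
    return 100.0 * live_cells / (live_cells + dead_cells)

# MTT viability: OD of treated wells relative to untreated control (100%).
def mtt_viability(od_treated: float, od_control: float) -> float:
    return 100.0 * od_treated / od_control

print(trypan_blue_viability(live_cells=30, dead_cells=70))   # 30.0 (% at 48 h)
print(mtt_viability(od_treated=0.21, od_control=0.70))       # 30.0 (%)
```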
DISCUSSION
A previous study demonstrated that supernatant of Lactobacillus plantarum induces necrosis, which is significantly less than the apoptosis induced in the promyelocytic cell line (9). Lactic acid-producing probiotic bacteria have anti-cancer effects. In a study on the effects of L. casei and Lactobacillus acidophilus on colon cancer cells, the ability of the bacteria to increase apoptosis in the LS513 cell line was evaluated in the presence of 5-fluorouracil (5-FU). The cells were treated with the mentioned bacteria in the presence of 100 μg/ml 5-FU for 48 hours. In the presence of 10 CFU/ml of live bacteria, the effect of 5-FU increases up to 40% in a dose-dependent manner. In addition, rapid activity of caspase-3 protein was observed after treatment of cells with the combination of live bacteria and 5-FU. These results indicate that live L. casei and L. acidophilus cells can increase the apoptotic activity of 5-FU. The present study is, therefore, the first to investigate the cytotoxic effects of L. casei supernatant on the K562 cell line. The results of the MTT colorimetric assay show that the new derivatives induce the death of K562 cells in a time- and concentration-dependent manner (i.e. higher concentrations induce more cell death). In addition, 48 hours of treatment with the supernatant causes morphological change and cell destruction. After 72 hours of treatment, cells disintegrate completely, and cell debris becomes visible.
added: 2018-12-05T04:58:44.712Z | created: 2017-09-01T00:00:00.000
metadata:
{
"year": 2017,
"sha1": "e1613e8e223b71197aa313a44b02df998cccb8f6",
"oa_license": "CCBYNC",
"oa_url": "http://mlj.goums.ac.ir/files/site1/user_files_c5015c/admin-A-10-1-479-09c723e.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1764417c31d9074d5809d1740af728b0c05d3371",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
}
id: 221236613 | source: pes2o/s2orc | version: v3-fos-license

Clinicodemographic Profile of Kidney Diseases in a Tertiary Hospital of Central Nepal, Chitwan: A Descriptive Cross-sectional Study
ABSTRACT Introduction: The spectrum of kidney diseases differs significantly in developing and developed countries. However, there is no central registry regarding the nature of such diseases in Nepal, nor at our center. The study aims to know the clinicodemographic spectrum of kidney disease patients admitted to our hospital. Methods: This study was a descriptive cross-sectional study done in the Department of Nephrology, College of Medical Sciences Teaching Hospital, from May 2018 to April 2019. Convenient sampling was done and all the consecutive kidney disease patients, irrespective of their age, sex, and renal diagnosis, were included in the study. Ethical approval was taken from the Institutional Review Committee of the college (reference number 2016/COMSTH/IRC/049). The clinicodemographic profile of kidney diseases was studied using Statistical Package for the Social Sciences version 20, with data represented as mean, standard deviation, number, percentage and ratio. Results: Out of a total of 829 patients, the commonest clinical syndrome and histological pattern were end-stage renal disease, 248 (29.9%), and IgA nephropathy, 18 (20.7%), respectively. The mean age was 51.4±18.6 years. The commonest reason for hospitalization was sepsis, 372 (44.8%). Males were 486 (58.6%) and females were 343 (41.4%). Conclusions: The commonest clinical presentation and the reason for admission were end-stage renal disease and sepsis syndrome respectively.
INTRODUCTION
The spectrum of kidney diseases includes various aspects of renal disorders and differs significantly in developing and developed countries. If left untreated, they may lead to renal failure that may require renal replacement therapy, which is extremely expensive and places a severe burden on the healthcare system of the country.1 Kidney diseases have become a major public health problem globally and in Nepal too,2-5 and the estimated prevalence of chronic kidney disease (CKD) is around 10.6% but is expected to be higher.4,5 In a recent meta-analysis of AKI, the incidence of AKI was found to be 7.5% in Southern Asia and 31.0% in South-eastern Asia;6,7 however, there are no such data from Nepal, nor data regarding the nature of such patients in our center. We, therefore, thought of doing a study to know the clinicodemographic spectrum of kidney disease patients admitted to our hospital.
METHODS
This study was a descriptive cross-sectional study carried out in the Department of Nephrology over one year, from May 2018 to April 2019. The ethical clearance for conducting the study was taken from the Institutional Review Committee of the hospital with the Ref. no. 2016/COMSTH/IRC/049. Convenient sampling was done and all the consecutive kidney disease patients who were admitted in the Department of Nephrology, irrespective of their age, sex, and renal diagnosis, were included in the study. The clinical diagnosis of renal diseases was made by a nephrologist with an experience of >5 years in clinical nephrology, and all the diagnoses were supported by relevant biochemistry, radiology, and pathology reports. The prevalence of chronic kidney disease (CKD) is around p = 10.6%;4 taking a 95% CI and a 2.5% margin of error, the sample size was calculated using the formula

n = Z² × p × q / e² = (1.96)² × 0.106 × (1 − 0.106) / (0.025)² = 582.

By taking a 10% non-response error, the target sample size of this research was 642, but the research was in fact conducted among 829 patients.
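A quick sketch of this calculation (ours; it only re-derives the numbers already given in the text):

```python
# Single-proportion sample size: n = Z^2 * p * q / e^2.
def sample_size(p: float, z: float = 1.96, e: float = 0.025) -> int:
    q = 1.0 - p
    return round(z**2 * p * q / e**2)

n = sample_size(p=0.106)
print(n)                 # -> 582, as in the text
print(round(n * 1.10))   # ~10% non-response inflation, close to the 642 target
```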
The standard definitions were used to define the renal diagnosis as per the updated Kidney Disease: Improving Global Outcomes (KDIGO) criteria, wherever applicable. Written informed consent was taken from all the patients. The patient's demographic profile, clinical diagnosis, comorbidities, the reason for hospital admission, length of hospital stay, and the number of repeat admissions were noted in the proforma. The data were then entered in an MS Excel sheet and transferred to the Statistical Package for Social Sciences version 20 (Chicago, IL, USA) program for analysis. The data were analyzed using mean, standard deviation, number, percentage and ratio.
RESULTS
A total of 829 patients were admitted within the one-year period from May 2018 to April 2019. The commonest clinical syndrome and histological pattern were end-stage renal disease, 248 (29.9%), and IgA nephropathy, 18 (20.7%), respectively. The commonest reason for hospitalization was sepsis, 372 (44.8%). Out of them, males were 486 (58.6%) and females were 343 (41.4%). The mean age of the patients was 51.4±18.6 years. The minimum age was 9 years and the maximum age was 93 years (Table 1). Of the 829 patients, 246 (29.7%) were aged <40 years and 583 (70.3%) were aged ≥40 years. Further age distributions are shown in Table 2. Most of the histological patterns of the kidney biopsies were IgA nephropathy, 18 (20.7%), followed by lupus nephritis, 11 (12.6%), minimal change disease, 9 (10.3%), and membranous glomerulonephropathy, 7 (8.0%) (Table 3).
DISCUSSION
In Nepal, there are limited numbers of nephrologists and only a few dedicated nephrology centers to provide nephrology services to kidney patients. Our study was the first of its kind from Chitwan to know the nature of kidney diseases prevailing in this region. Of the 829 patients, the majority were males, 486 (58.6%), with a male to female ratio of 1.4. Similar observations of male preponderance were seen in other studies from India.8,9 This dominance of males over females may reflect the socio-dynamic influence of our society, where treatment privilege goes to males, or it may be because males are inherently predisposed to develop kidney diseases; this area of research needs multicentric genetic studies.10 Several reports project CKD/ESRD to be the commonest clinical presentation in different parts of the world, including ours. The high burden of ESRD could be explained by the silent and asymptomatic nature of the disease, lack of population awareness about the disease, a poorly equipped health care system, and the high cost of treatment.1,11 Sixty-six (7.9%) patients were diagnosed with ESRD for the first time in our study, making incident ESRD a significant problem in our center. Similar to our study, Sakhuja et al.12 also reported about two-thirds of their patients to be new ESRD cases at the time of the first consultation.
The overall CKD patients, including the patients from the ESRD group, acute-on-CKD group, and CKD3-5 group, numbered 527 (63.6%), which projects CKD as the dominant clinical syndrome. The observation of CKD being the dominant clinical syndrome in our study might be explained by the global increase in the prevalence of diabetes and chronic glomerulonephritis. Similar observations of CKD being the dominant presentation were also seen in Indian studies from North India (PGI,13 AIIMS,14 and SGPGI15). The commonest cause of ESRD in our study was T2DM, 94 (37.9%), followed by CGN, 75 (30.2%), and HTN, 63 (25.4%), highlighting the fact that T2DM is the number one cause of ESRD in our region and also globally.16,17 However, there seems to be considerable heterogeneity in the causes of CKD/ESRD within and across the country. A few studies from Nepal and India found CGN to be the commonest cause of CKD/ESRD.11
CONCLUSIONS
This study has helped us understand the nature and spectrum of kidney diseases prevailing in our region and can help formulate appropriate programmes and plans to tackle the common prevailing problems. However, we need a multicentric study to understand the true nature and prevalence of kidney diseases in the country.
added: 2020-08-23T13:06:05.727Z | created: 2020-07-01T00:00:00.000
metadata:
{
"year": 2020,
"sha1": "df9ab2682c44fdc19f44e3829e9611b14d91573b",
"oa_license": "CCBY",
"oa_url": "https://www.jnma.com.np/jnma/index.php/jnma/article/download/4972/3194",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f6e3641754f756b3c4336ebffb3cfb9cfe8c88f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 5723991 | source: pes2o/s2orc | version: v3-fos-license

T-Cell Receptors Binding Orientation over Peptide/MHC Class I Is Driven by Long-Range Interactions
Crystallographic data about T-Cell Receptor – peptide – major histocompatibility complex class I (TCRpMHC) interaction have revealed extremely diverse TCR binding modes triggering antigen recognition. Understanding the molecular basis that governs TCR orientation over pMHC is still a considerable challenge. We present a simplified rigid approach applied on all non-redundant TCRpMHC crystal structures available. The CHARMM force field in combination with the FACTS implicit solvation model is used to study the role of long-distance interactions between the TCR and pMHC. We demonstrate that the sum of the coulomb interactions and the electrostatic solvation energies is sufficient to identify two orientations corresponding to energetic minima at 0° and 180° from the native orientation. Interestingly, these results are shown to be robust upon small structural variations of the TCR such as changes induced by Molecular Dynamics simulations, suggesting that shape complementarity is not required to obtain a reliable signal. Accurate energy minima are also identified by confronting unbound TCR crystal structures to pMHC. Furthermore, we decompose the electrostatic energy into residue contributions to estimate their role in the overall orientation. Results show that most of the driving force leading to the formation of the complex is defined by CDR1,2/MHC interactions. This long-distance contribution appears to be independent from the binding process itself, since it is reliably identified without considering neither short-range energy terms nor CDR induced fit upon binding. Ultimately, we present an attempt to predict the TCR/pMHC binding mode for a TCR structure obtained by homology modeling. The simplicity of the approach and the absence of any fitted parameters make it also easily applicable to other types of macromolecular protein complexes.
Introduction
Recognition by the CD8+ T-cell receptor (TCR) of immunogenic peptide (p) presented by class I major histocompatibility complexes (MHC) is one key event in the specific immune response against virus-infected cells or tumor cells, leading to T-cell activation and killing of the target cell [1]. The first determination of the structure of a TCRpMHC complex in 1996 [2] revealed how the molecular recognition of the pMHC by the TCR is mediated by three complementarity determining regions (CDR) of each chain of the TCR at the interface with the pMHC complex. The CDR1 and CDR2 loops form the outside of the binding site, while the CDR3 loops are the central loops in the TCR binding site and mostly interact with the peptide. However, the commonly accepted paradigm of CDR1 and CDR2 binding to the MHC and CDR3 to the peptide does not fully account for the true structural complexity of TCRpMHC complexes, and all CDR loops have been shown to interact both with the peptide and the MHC [3][4]. Over the years, successive releases of TCRpMHC structures have revealed a variety of native TCR binding orientations, defined as the angle that is made between the TCR and the pMHC (Figure 1), depending altogether on the peptide, the MHC and the α/β pairing of the TCR [5]. Recent studies reported TCR/pMHC angles spanning more than 45° of variation on the current set of known crystal structures [6].
Understanding the molecular basis that governs TCR orientation over pMHC is still a considerable challenge, and also an important need in the field of TCRpMHC modeling [7] and, as a direct consequence, in the field of rational TCR design and adoptive cell transfer immunotherapy [8]. This question has been recurrently discussed, but only a few studies have focused on predicting the actual binding mode of given TCRpMHC structures: the study from Varani et al. made use of experimental data obtained from NMR chemical shift mapping to obtain lists of buried residues upon binding [9], while the recent study from Roomp and Domingues predicted the contacts between the pMHC and the TCR, using a training set of TCRpMHC crystal structures [4].
In this work, we use a first-principle based in silico approach to uncover the role played by long-range interactions in TCR docking to pMHC. We present a simplified rigid method, which allows scanning quickly the potential orientations of the TCR with respect to pMHC at long distances, and computing the effective energy at each position. This approach was applied to a set of crystal structures to test the agreement between the position of energetic minima and the native binding sites. In 92% of the cases, the 0° minimum, corresponding to the native orientation, was the energetically most favorable, demonstrating the predictive ability of the method. Our scoring scheme, based on the CHARMM force field with the FACTS implicit solvation model [10], allowed the decomposition of the effective energy into residue contributions, and the study of the importance of the different CDR, the MHC and the peptide towards defining the overall TCR orientation. Ultimately, we briefly present and discuss an attempt to predict the TCR/pMHC binding mode using a TCR 3D structure obtained by homology modeling, to assess the efficacy of the approach as a component of a TCRpMHC structural modeling pipeline [7].
Default Procedure
In this work, the default procedure can be summarized as follows (see Figure 1). For each crystal structure of a TCRpMHC complex, the TCR is translated 8Å away from the pMHC, then rotated using 5° steps. The effective energy of the whole system is computed at each step, and plotted against the TCR/pMHC angle. The plots present the energy variation upon rotations of 360° and are referred to as rotation profiles.
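A schematic sketch of this procedure (ours; numpy only, with a bare Coulomb score standing in for the CHARMM/FACTS effective energy actually used in the paper):

```python
# Rigid rotation scan: translate the TCR 8 A along its principal axis, then
# rotate in 5-degree steps and score each pose (Coulomb stand-in for FACTS).
import numpy as np

def rotation_profile(tcr_xyz, tcr_q, pmhc_xyz, pmhc_q, axis, step_deg=5):
    axis = axis / np.linalg.norm(axis)
    tcr_xyz = tcr_xyz + 8.0 * axis                 # 8 A rigid separation
    center = tcr_xyz.mean(axis=0)
    K = np.cross(np.eye(3), axis)                  # skew matrix: K @ v = axis x v
    profile = []
    for deg in range(0, 360, step_deg):
        t = np.radians(deg)
        R = np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)  # Rodrigues
        xyz = (tcr_xyz - center) @ R.T + center    # rotate TCR about its x axis
        d = np.linalg.norm(xyz[:, None, :] - pmhc_xyz[None, :, :], axis=-1)
        # Intermolecular Coulomb energy, 332 kcal*A/(mol*e^2); the paper adds
        # the FACTS electrostatic solvation term at this point.
        profile.append(332.0 * float(np.sum(np.outer(tcr_q, pmhc_q) / d)))
    return np.array(profile)                       # one energy per 5-deg step
```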
TCRpMHC complexes. 26 crystal structures of TCRpMHC class I were selected in the MPID-T2 database as of July 2011 (http://biolinfo.org/mpid-t2) [11] and were downloaded from the PDB [12]. They are listed in Table 1. Redundant structures, such as those bearing identical chains with point mutations, were not selected. Structures bearing non-natural peptides or peptides longer than 11 residues were excluded as well. All calculations were performed on systems consisting of the peptide bound to MHC, the β2-microglobulin, and the variable domains of the TCR α and β chains, with the exception of the 2e7l, 2esv, 2oi9, 3e2h and 3e3q systems, for which only the binding site of the MHC (residues 1 to 175), the peptide and the TCR were available in the crystal. In the following, the complex formed by the peptide, the MHC class I, and the β2-microglobulin (if available) is simply designated as peptide-MHC (pMHC).
Additionally, 8 TCRpMHC class II structures were similarly selected in the MPID-T2 database. The results obtained from this set of structures are described in the supplementary materials. In the following, MHC stands for MHC class I, unless specified otherwise.
Force field. All calculations were handled by the CHARMM program version c35b1r1, using the CHARMM22 all-atom force field. First, the His residue protonation states were determined and the systems were set up for use with CHARMM according to an in-house automated procedure (V. Zoete, private communication) that is also used behind the SwissDock small molecule docking web service [13]. Following this setup, the system was minimized by 500 steps of steepest descent, using the Fast Analytical Continuum Treatment of Solvation (FACTS implicit solvation model) [10]. Default parameters were used, including a dielectric constant of 1.0 for the protein and 80 for the solvent. A shifting function was applied to the electrostatics with a 12Å cutoff. Finally, the internal degrees of freedom of the TCR and the pMHC were frozen during the remaining calculations (e.g. rigid bonds, angles and dihedral angles), unless specified otherwise.
TCR space exploration. The Cartesian axes were oriented along the principal axes of the TCR molecule (Figure 1). The x axis was defined as the principal axis of the TCR, as calculated by CHARMM. This axis is perpendicular to the plane of interaction with pMHC. The TCR space exploration consisted in a rigid displacement (translation or rotation) of the TCR to successive positions, from which the long-range interaction score values were calculated. First, the TCR was separated from the pMHC by translating it along the x axis, allowing rotations around the x axis without steric clashes. We calculated the electrostatic effective energy after each translation and rotation, as described below.
Effective energy calculation. The effective energy of the TCRpMHC system in a given state is described as the sum of the intramolecular energy and the solvation free energy:

$E_{eff} = E_{intra} + \Delta G_{solv}\,,$

where the intramolecular energy of the system is the sum of the bonded energy $E^{bonded}_{intra}$, the van der Waals energy $E^{vdW}_{intra}$, and the electrostatic energy in vacuum $E^{elec}_{intra}$. The solvation free energy is the sum of a polar term $\Delta G^{elec}_{solv}$ and a non-polar term $\Delta G^{np}_{solv}$. The effective energy can be rewritten as follows:

$E_{eff} = E^{bonded}_{intra} + E^{vdW}_{intra} + E^{elec}_{intra} + \Delta G^{elec}_{solv} + \Delta G^{np}_{solv}\,.$

Since the TCR and the pMHC were kept rigid, $E^{bonded}_{intra}$ is constant and was neglected in the following. Also, unless specified otherwise, TCR and pMHC were always separated by at least 8Å, so that $E^{vdW}_{intra}$ and $\Delta G^{np}_{solv}$ were found constant. As a result, all effective energy variations could be calculated from:

$\Delta E_{eff} = \Delta\big(E^{elec}_{intra} + \Delta G^{elec}_{solv}\big)\,.$

The $E^{elec}_{intra}$ term was calculated as the sum of Coulomb interactions, while the electrostatic solvation energy $\Delta G^{elec}_{solv}$ was calculated using the FACTS implicit solvation model [10]. Default parameters were used, as specified earlier.
FACTS energy decomposition. The Coulomb and the electrostatic solvation energies were decomposed as described previously [14]. The contribution of the atom i to the total Coulomb energy of the system is given by

$E^{elec}_i = \frac{1}{2}\sum_{j \neq i} \frac{q_i q_j}{r_{ij}}\,,$

where j loops over all the atoms of the system, and $r_{ij}$ is the distance between atom i and atom j, bearing the charges $q_i$ and $q_j$, respectively. Since the FACTS approach makes use of the Generalized Born formula [15] to calculate the electrostatic part of the solvation energy, we calculated the contribution of the atom i using [14]

$\Delta G^{elec,solv}_i = -166\left(1 - \frac{1}{\varepsilon_w}\right)\sum_j \frac{q_i q_j}{\sqrt{r_{ij}^2 + R_i R_j\, e^{-r_{ij}^2/(4 R_i R_j)}}}\,,$

considering energies expressed in kcal/mol and distances expressed in Angstroms. $R_i$ and $R_j$ are the FACTS Born radii. The Born radius of atom i was calculated using CHARMM as follows. First, the charge of every atom of the system is set to zero, except for atom i of charge $q_i$. The FACTS electrostatic solvation energy of the system, thus corresponding to the electrostatic solvation energy of atom i in the context of the uncharged protein, $\Delta G^{elec,solv}_i$, was then calculated. The Born radius was finally obtained according to its definition [10]:

$R_i = -166\left(1 - \frac{1}{\varepsilon_w}\right)\frac{q_i^2}{\Delta G^{elec,solv}_i}\,.$

The approach is similar to the MM-GBSA decompositions of former studies [16], although the current FACTS implementation in CHARMM c35b1r1 requires the computation of the Born radii of each atom in the system. The FACTS model was preferred to other implicit solvation models in this study, since it is as efficient in reproducing Poisson-Boltzmann solvation energies as GB-MV2, while being 10 times faster. Also, contrary to GB-MV2, FACTS is not a grid-based method and therefore does not show any unphysical energy variations upon rigid rotation of a protein (see Figure S2).
Correlation coefficient. The rotation energy profile vector, $W_{syst}$, is defined as a collection of n values of the effective energy of the system, calculated for n angle values regularly distributed along a 360° revolution of the TCR around the x axis:

$W_{syst} = (w_{1,syst},\, w_{2,syst},\, w_{3,syst},\, \ldots,\, w_{n,syst})\,.$

In this work, considering 5° rotation steps, n is always equal to 72. For each of the n TCR positions, the energy of an atom selection was stored into a vector $W_{sel}$:

$W_{sel} = (w_{1,sel},\, w_{2,sel},\, w_{3,sel},\, \ldots,\, w_{n,sel})\,.$

Considering the average $\langle W_X \rangle = \frac{1}{n}\sum_{k=1}^{n} w_{k,X}$, where X stands for "syst" or "sel", we defined the normalized and centered vector as

$\hat W_X = \frac{W_X - \langle W_X \rangle}{\|W_X - \langle W_X \rangle\|}\,.$

Therefore, the contribution of an atom selection to the profile of the effective energy variation as a function of the TCR orientation was estimated by calculating the correlation coefficient

$c = \hat W_{syst} \cdot \hat W_{sel}\,.$

The correlation coefficient is dimensionless, and takes values between −1 for totally anti-correlated vectors and 1 for totally correlated vectors.

(Figure 4. Landscape representation of the evolution of TCR polar energy rotation profiles of 1ao7 as a function of the TCR/pMHC distance. The energetic preference for the native orientation (0°) is clearly visible. Rotation profiles were not computed at distances lower than 6Å due to the numerous steric clashes below that distance. doi:10.1371/journal.pone.0051943.g004)
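A direct transcription of this estimator (ours; numpy, with n = 72 energies per profile as above):

```python
# Correlation between a sub-group profile and the full-system profile:
# center both vectors, normalize, and take the dot product (c in [-1, 1]).
import numpy as np

def profile_correlation(w_syst: np.ndarray, w_sel: np.ndarray) -> float:
    a = w_syst - w_syst.mean()
    b = w_sel - w_sel.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```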
Molecular dynamics setup. Molecular Dynamics simulations were performed in Gromacs 4.5.1 with the CHARMM27 force field [17], on the A6 TCR extracted from the PDB structure 1ao7. The TCR was solvated in an orthorhombic periodic box of TIP3P water molecules [18], resulting in a system size of 74.2 × 63.4 × 48.5 Å³, for a total of around 70000 atoms. During the dynamics, the Lennard-Jones interactions were treated with a switch function reaching zero at 14Å, and the electrostatic interactions were calculated using the particle-mesh Ewald (PME) method. After energy minimization, the system was heated up to 300 K during 100 ps with positional restraints on all heavy atoms. The system was subsequently equilibrated at constant temperature and volume during 200 ps, then using a Berendsen temperature and pressure coupling for 200 ps [19], and ultimately using two separate Nosé-Hoover thermostats [20][21] for the solvent and the protein, as well as a Parrinello-Rahman barostat [22], for 400 ps. This last setup was used for the production simulation.
TCR homology modeling. 500 models of the A6 TCR [23] were built using the homology module of the TCRep 3D approach, as described elsewhere [7]. TCRep 3D makes use of state-of-the-art homology modeling using the Modeller 9v5 software [24], complemented by the use of additional dihedral restraints applied on CDR1 and CDR2 to drive the loops towards their canonical conformations [25]. The templates were selected in our set of TCR (see above), excluding those bearing similar α or β chains.
Results
We assessed the variations of the effective energy of the system upon rigid motions of the TCR relative to pMHC. The calculations were made on a set of 26 TCRpMHC complexes, corresponding to all non-redundant systems whose experimental structure has been determined (see Methods). The TCR was moved 8Å away from the pMHC molecule, along its principal axis, orthogonal to the TCR/pMHC interaction plane (Figure 1A). Subsequently, 5° rotation steps around this axis were successively applied to the TCR until a complete revolution was obtained (Figure 1B). At each position, the sum of Coulomb interactions and the electrostatic solvation energy, obtained with the FACTS implicit solvation model [10], was computed. Figure 2 shows the variations of the effective energies during the rotations of the TCR with respect to pMHC. At each position, the smallest distance between the two parts was identified and the van der Waals interaction between the two corresponding residues was computed to verify that they did not clash. Positions where the TCR clashes with the pMHC were ignored. 0° corresponds by definition to the native orientation seen in the X-ray structure. As can be seen, the rotation profiles are mostly characterized by local minima near the 0° and 180° positions and sharp maxima near the 90° positions. These rigid profiles suggest that, at an early stage of the TCR approach, long-distance interactions with pMHC already play a significant role in guiding TCR orientation. The local minima near the native and the opposite orientation were defined as the primary and secondary minimum, respectively, and were recorded in Table 1.
TCR Orientation and Effective Energy Minima
Considering the energy profiles, the structures 2ol3 and 3mv8 were considered as outliers and were analyzed separately (see below). Interestingly, the primary minimum is the global energy minimum in a large majority of profiles (for 22 of the 24 complexes). On average, it is distant by only 11.0° (SD = 11.2) from the native orientation, and ranges from −45° to 25° (Figure 3). Noticeably, 20 profiles showed an energetic minimum located at less than 20° from the native orientation. Following the primary translation of 8Å, the amplitude of the energy variation along the TCR revolution ranges from 3.0 kcal/mol for 2bnr to 37.9 kcal/mol for 2e7l, with an average of 19.3 kcal/mol (SD = 8.3) over the 24 profiles, after neglecting the TCR positions that make steric clashes. We also investigated the dependency of the rotation energy profiles on the distance between TCR and pMHC. Figure 4 illustrates how the amplitude of the signal increases when the TCR/pMHC distance decreases from 12Å to 6Å, and how the TCR/pMHC approach can be guided by an energetic funnel that will lead the two partners to their final native complex orientation.
The same study was performed on 8 TCRpMHC class II structures. The resulting rotation profiles showed similar shapes and well-defined primary minima at 6.3° (SD = 2.3) on average (see Figure S1).
Reliability of the Rigid Approximation
In the rigid body approximation, we deliberately neglected the dynamic properties of both the TCR and the pMHC, such as internal fluctuations and possible structural rearrangement of the system during the binding [26][27]. We used Molecular Dynamics (MD) simulations of the TCR to verify that the nature of the long-distance interactions between TCR and pMHC observed with a rigid model is not significantly changed by the structural fluctuations accessible at room temperature. Thus, we extracted 40 distinct conformations of the unbound A6 TCR (PDB ID: 1ao7), one every 100 ps along a 4 ns MD simulation trajectory. Rigid TCR rotations were performed using these conformers placed at 8Å from the crystal conformation of the Tax/HLA-A*0201 pMHC (PDB ID: 1ao7). Importantly, the average energy profile shows a shape similar to the one calculated using the X-ray conformer (Figure 5A). The averaged primary minimum was indeed situated at −5.39° (SD = 8.73) from the native orientation, while it was predicted at 0° using the crystal structure. Clearly, the shape of the TCR rotation profiles does not depend on the detailed atomic coordinates of crystal structures. This suggests that the role of long-distance interactions observed in the rigid body approximation also holds in the dynamical process of TCR approach towards pMHC. Additionally, rotation profiles were computed for the available unbound TCR structures. The 6 TCRs were placed at 8Å from the crystal conformation of the corresponding pMHC in the bound TCRpMHC structures (Table 1). Again, we identified well-defined primary minima at less than 15° from the native orientation.
Energy Decomposition
The contributions of structural sub-groups to the TCR rotation energy profiles were calculated by performing FACTS energy decomposition. The approach is similar to the MM-GBSA binding free energy decompositions of former studies [16]. We considered 4 distinct parts of the system: the MHC, the peptide, the CDR3, and the TCR without CDR3. The aim was to assess the importance of a selected region of the system in the definition of the energetic minimum, which defines in turn the path that leads the TCR towards its bound conformation. As illustrated in Figure 5 with the 1ao7 complex, a typical decomposition resulted in large contributions from the MHC helices and from the CDR1 and 2 of the TCR (see also Figure S3). Correlation coefficients between the sub-group and the entire-system rotation energy profiles (see Methods) confirmed this on 24 structures (Table 2).
The noticeable outliers are discussed below.
2bnr. This structure is formed by the 1G4 TCR bound to the NY-ESO-1 antigen. The peptide is characterized by a central Met-Trp pair pointing out towards the TCR [28][29] (Figure 6). Even at 8Å from the bound conformation, the side chains of the peptide remain close enough to the CDR3 for this interaction to play a preponderant role upon rotation of the TCR. The structure nevertheless remained in good agreement with the shape of most of the rotation energy profiles (Figure 2), with well-defined primary and secondary minima.
2ol3. The rotation profile of this structure failed to discriminate between two opposite minima. Two local minima were found close to 90° from the native orientation (Figure 2). In the bound conformation, the residue Arg98 of the CDR3β is deeply buried under the Tyr4 of the peptide [30]. In our rigid approximation, after the 8Å translation, the two side chains face each other at a distance lower than 3Å in an unfavorable conformation (Figure 6). By removing the contributions of these two residues, we obtained a rotation profile with a well-defined primary minimum located 10° away from the native orientation (see also Figure S3).
3mv8. The peptide in this structure is a particularly long EBV peptide (11 residues) in a bulged conformation, bearing two side chains (Asp7 and Tyr8) that are deeply buried inside the TCRβ chain [31] (Figure 6). These two residues play a disproportionate and unrealistic role in the TCR rotation profiles at 8Å. Clearly, the rigid body approximation is unsuited for this structure. Interestingly, by ignoring the contribution of these two residues, we dramatically improved the rotation profile quality (Figure S3).
On average, 92% of the rotation energy profile is carried by the periphery of the binding site (Figure 5C), suggesting that the interaction of the MHC with CDR1,2 is mostly responsible for guiding the TCR towards the native orientation. This observation is in fair agreement with current knowledge regarding the TCR germline bias for MHC [32]. Finally, by considering only the rotation profiles in a 60° range around the primary and secondary minima, the computation of correlation coefficients resulted in an increased role of the CDR3-peptide interaction, from 8% to 26% of the contribution (Figure 5C), showing that the interaction between CDR3 and the antigen helps discriminate between the native and the opposite orientation during TCR approach. At 8Å, except for the 2ol3 structure (see above), the native orientation was indeed always preferred by the CDR3-peptide interaction, by an energy difference of 2.38 kcal/mol (SD = 2.90) on average.
Rigid Pulling
Additionally, we performed a rigid body undocking of the TCR, starting from the complex structure. The full effective energy of the system, including the van der Waals energy E vdW intra and the non polar term of the solvation energy DG np solv (see methods), was calculated every 0.1Å along the principal axis. As shown on Figure 7, the energy profiles clearly show the typical energy barrier of the TCR binding to pMHC [26]. The estimated binding free energy values in that approximation are comprised between 2133 kcal/mol and 215.5 kcal/mol, which is in reasonable agreement with TCRpMHC binding free energies estimated in silico using the MM-GBSA method [14], considering our rigid approximation, and neglecting the entropy terms and DE intra the variation of the internal energy upon reorganization. Interestingly, for all complexes, the energy barrier was identified near 5Å from the bound position. Our results showed that long-range interactions do have an impact on TCR orientation towards MHC at longer distances. This suggests that the orientation driving force is indeed distinct from the final approach and induced fit mechanisms.
Negative Controls
We investigated the shape of rotation profiles of non TCR proteins. We selected the crystal structures of a CD8 homodimer (PDB ID : 1akj) and a JAK2 tyrosine-protein kinase (PDB ID : 3ugc). The former was selected to compare the TCR rotation profiles with another type of Ig-folded dimer and the latter to test a monomeric structure whose fold is unrelated to that of the TCR. The two systems were protonated with CHARMM, minimized (see methods), and aligned in front of the pMHC (PDB ID: 1ao7) in order to share the principal axes of the TCR (Figure 8). First, the rotation profile of the CD8 showed the same two effective energy maxima close to 90u and 290u. However, the 0u and 180u positions are local maxima in this case, surrounded by local minima situated at 2160u, 220u, 30u and 165u. The very symmetrical shape of the profile can be explained by the homodimeric nature of the CD8. Second, the rotation profile of JAK2 showed no symmetric tendency, with two well-defined maxima distant from only 125u. This illustrates that the rotation profiles reported in Figure 2 are specific to TCR with respect to the pMHC.
Using TCR Structures Obtained by Homology Modeling
500 structures of the A6 TCR were obtained using homology modeling. We calculated an average heavy atoms RMSD of 2.01Å (SD = 0.05) between the models and the A6 crystal structure, after optimal least square structural alignment. Each model was minimized by 500 steps of steepest descent. The TCR of the crystal structure (PDB ID: 1ao7) was successively replaced by the different homology models at 8Å from the crystal conformation, and rigid rotations were performed in front of the MHC. The obtained average rotation profile is shown in Figure 9. As can be seen, the two minima were again visible, and the primary minimum was situated at 12.2u (SD = 16.0) on average. This illustrates the potential of our rigid approach for docking predictions in combination with a TCRpMHC structural modeling pipeline [7].
Discussion
The aim of this study was to explore, using a first-principle based approach, the long-distance driving force that guides TCR in the proper orientation with respect to the pMHC. The assumption regarding the existence of such driving force was based on general knowledge regarding the binding process of two distinct proteins [33][34]. The binding mode of the TCRpMHC is indeed essential to predict and understand the peptide recognition leading to T-cell activation. Furthermore, a large diversity of orientations has been seen in the experimental complexes, for the various TCR and pMHC.
TCR Binding Mechanism
The dynamical mechanism the association of the TCR with the peptide-MHC until the final binding mode has been discussed intensively in the literature [26] [35]. In the meantime, knowledge about the geometry of TCRpMHC interaction was recently extended [5] [32]. Current models for TCR/pMHC association describe a two steps approach, where the CDR1 and CDR2 loops first scan the MHC molecule to form, in turn, specific contacts and define the general orientation of the TCR over pMHC (association step). A second step (stability step) includes specific interactions and induced fit of the peptide with CDR3 [26]. The prevalent role of CDR1 and 2 in the definition of the binding mode was also mentioned by Garcia et al. [32] who stated that CDR1,2/MHC interaction defines the general footprint of the TCR on pMHC, while the CDR3 and the peptide are only responsible for subtle orientation variations. Finally, the study by Collins and Riddle [5] proposes a model where the binding of the CD8 to the MHC, while necessary for TCR signaling, is subsequent to the definition of the docking orientation itself. In this model, the binding of CD8 could be regarded as a mechanism that helps discriminating between the native and the opposite orientation of the TCR. However, it is not supposed to be determinant to define the precise binding orientation.
In our approach, TCR rotation energy profiles revealed that the native binding mode orientation is defined at an early stage of the TCR approach, before the emergence of a direct contact between CDR1/CDR2 and pMHC. Before the appearance of the binding energy barrier, the native orientation was already predicted with a deviation of 11u in average, using only a model of long-range interactions, which is quite satisfying in view of the 45u amplitude in the native binding mode orientations observed on crystal structures [6].
The energy decomposition was performed using a generalized Born model, using the method described in a previous study of TCR-p-MHC binding [14]. In the latter, it was shown that such a decomposition approach gives results that are closely related to those of a computational alanine scanning, when used to assess the contributions of single residues to the binding free energy. The decomposition confirmed the importance of the CDR1,2/MHC interaction for the TCR/pMHC orientation, since it defines 91% of the signal of rotation energy profiles, on average (see Results). This contribution successfully delineates a primary minimum energy orientation that leads to the final binding mode and is also capital for preventing orthogonal binding. This result supports strongly the observation from Khan and Ranganathan [6], who identified a ring of charged residues at the pMHC interface, which interacts with CDR1 and CDR2 with complementary charges. We reported that the role of CDR3 and peptide residues is of lesser importance at long distance ( Figure 5C). Interestingly, and contrary to the CDR1,2/MHC interaction, the energy profiles resulting from the center of the binding site (CDR3/peptide) efficiently discriminated the primary minimum from the secondary minimum, defining clearly which energy minimum leads to the native orientation.
Importantly, the typical shape of TCR rotation profiles carries information about the location of the native and the opposite minimum, as well as the location of orthogonal forbidden binding orientations. This seems to be exclusive to TCR rotation profile, as confirmed by CD8 and JAK2 rotation profiles that were calculated as negative controls (Figure 8). By rotating the CD8 protein in front of the pMHC, we observe a symmetrical signal, which can be explained by the homodimeric nature of the co-receptor. The JAK2 profile confirmed that a randomly selected monomeric structure does not reproduce similar energetic properties upon rotation.
Outliers and Limitations
Clearly, the simplified rigid approach was not suitable for a number of crystal structures, such as MHC bearing long peptides in bulged conformations. Indeed, without a relatively flat binding surface, residues that are deeply buried upon binding might still be in contact with TCR after a 8Å unbinding translation. Therefore we restricted our test set to structures with a peptide not longer than 11 residues. The outlier structures 3mv8 and 2ol3 were treated separately, and the energy decomposition allowed the identification of a few outlier residues. Rotation profiles were then re-calculated (similar to Figure S3), and the identification of primary minima was made possible. In the case of MHC class II molecules, the length of the peptides was not an issue since longer peptides do not adopt a bulged conformation as it is observed in MHC class I.
In general, at distances smaller than 8Å from the pMHC, a large amount of steric clashes occurs during the TCR revolution. Furthermore, the orientation and final binding mode of the TCR is then governed by the short range atomic details, the non-polar interactions and desolvation, and the induced fit of the binding sites. The computation of the effective energy variations at such small distances is out of the scope of this study.
Outlook
Crystal structures represent a considerable interest for the field of molecular modeling, as illustrated by the numerous studies of the TCRpMHC binding [36]. Molecular modeling studies on these structures are performed notably to understand the binding process of the TCR [37], the effect of mutations [38] and to perform in silico protein engineering [39]. Since 1996 and the first determination of the structure of the TCRpMHC complex [2], subsequent releases have revealed the variety of TCR binding orientations depending altogether on the antigen epitope, the MHC and the a/b pairing of the TCR [5].
Despite the increased number of available TCRpMHC crystal structures, modeling has quickly become an important complementary approach as experimental techniques allow very quick sequencing of entire TCR repertoires. Molecular modeling methods tried to address in silico the question of TCR binding mode prediction in various ways, including manual orientation based on experimental data [40], homology modeling [41] [7], or protein-protein docking (private communication). Recently, the study from Roomp et al. [4] presented an algorithm dedicated to TCR/pMHC interaction that quite reliably predicted the contacts between the pMHC and the CDR loops, using a training set of existing crystal structures, paving the road for in silico TCR binding mode prediction. Promising approaches are alternatively based on recent progress in identifying experimentally the buried surface footprint upon protein-protein complexation. The approach by Varani et al. [9] successfully identified the TCR footprint on pMHC by NMR chemical shift mapping.
The results of the present work showed a surprisingly good agreement between the primary minimum of rotation profiles at 8Å and the native orientation of the TCR bound to pMHC. We demonstrated a good robustness of the results upon TCR structural variations seen in Molecular Dynamics simulation. Furthermore, we also observed that rotation profiles of long-range interactions do show a relevant signal when unbound TCR crystal structures are put in front of the target pMHC, and that perfect shape complementarity is not required. As most important CDR loops shifts between the unbound and the bound structures were recorded on CDR3 [27], we found the result consistent with our energy decomposition showing that the long range signal is mostly carried by the outside of the binding site. These results investigated the possibility to predict the TCRpMHC binding mode orientation in a pure in silico approach, through computation of longdistance interactions.
To our knowledge, in a TCRpMHC modeling process [7], the precision that is required on TCR orientation over pMHC for fine interface and contact refinements is 610u (data not shown), close to the 11u average distance between the native and predicted orientations using the present approach. Although the precision that was provided by the rigid body simplification may not be satisfying for the prediction of exact binding modes, the quick execution of the protocol makes it well suited for a preliminary search of TCR orientation, which is indeed already defined prior to the binding process.
We demonstrated the potential of this approach as a potential component in a TCRpMHC structural modeling pipeline, by searching the binding mode of the A6 TCR as an average of the rotation profile minima obtained after modeling TCR by homology. As mentioned in the results section, we obtained an average deviation of 12.2u (SD = 16.0) from the native orientation. Typical approaches for TCRpMHC structure predictions make use of homology modeling of the complex, complemented by ab initio refinement of CDR loops at the interface with pMHC [7]. It has been stated that the prediction of the TCR binding orientation was indeed an issue in the process, preventing the correct contact predictions in case of false binding [7]. The present study provides data on how rotation profiles could be used to guide this critical step in homology modeling. The reliability of such approaches will be assessed elsewhere. The approach could also easily be extended to the generalized approach of protein-protein docking after the areas of binding sites have been correctly identified. Figure S1 TCR rotation profiles of the MHC class II test set. The polar contribution to the effective energy of the TCRpMHC complex is plotted against TCR rotation angle around the x axis, after an 8Å translation away from the pMHC. Positions that make steric clashes are ignored. (EPS) Figure S2 GB-MV2 electrostatic solvation energy variation of a single TCR, upon rigid rotation in Cartesian space, calculated with CHARMM. pMHC is not present. The amplitude of the unphysical energy variation, which comes from the mathematical grid-based description of the system, is larger than 15 kcal/mol and makes the approach unsuited for the computation of TCR rotation profiles. (EPS) Figure S3 Contribution of the MHC-helices and CDR1,2 to the TCR rotation profiles of the test set. The polar effective energy of the sub-system is plotted against TCR rotation angle around the x axis, after an 8Å translation away from the pMHC.
|
2016-05-18T15:47:15.689Z
|
2012-12-14T00:00:00.000
|
{
"year": 2012,
"sha1": "f0085eb725cbf8df7845467714c1ede028f9e84e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0051943&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f0085eb725cbf8df7845467714c1ede028f9e84e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
55267369
|
pes2o/s2orc
|
v3-fos-license
|
The Gelation Ability and Morphology Study of Organogel System Based on Calamitic Hydrazide Derivatives
The gelation property of a series of LMOG bearing hydrazide and azobenzene groups, namely, N-4-(alkoxyphenyl)-N-4-[(4methoxyphenyl)azophenyl] benzohydrazide (BNBC-n, n = 8,12,14), has been systematically studied in this work. The obtained results demonstrate that the gelling ability in organic solvents is significantly influenced by the length of terminal alkoxy chain. In different organic solvents, it is hard to observe the organogel formation for BNBC-8 molecule. On the contrary, the organogelators BNBC-12 and BNBC-14 bearing longer terminal chains have shown great ability to gel organic solvents to form stable organogels. The critical gelation concentration for BNBC-12 reaches as low as 5.3 × 10M, which can be considered as a supergelator. It has beenmanifested that the aggregationmorphology of organogel strongly depends on the nature of the gelling solvents and the length of the terminal alkoxy chain. The gelation of BNBC-n provides an easy method for the preparation of multidimensional structure and manipulation of morphology from ribbons, hollow tube fiber to 3D net-like structure in different solvents. The cooperation of hydrogen bonding, π-π interaction, and Van der Waals force is suggested to be the main contribution to this self-assembled structure.
Introduction
In the past decades, low molecular mass organogelators (LMOGs) have attracted broad attention because they can self-assemble into diverse nano/microstructures, for instance, particles, tubes, fibers, and helical ribbons, through specific noncovalent interactions such as hydrogen bonding, - interaction, hydrophobic interaction and Van der Waals force, and so forth [1][2][3].Supramolecular self-assembly is a spontaneous process of molecular aggregation into ordered nanostructures, which provides a bottom-up approach to obtain structural regularity of various morphologies [4].However, a complete mechanism for the self-assembled supramolecular structures is still beyond our understanding.Recently, there has been a growing interest in tuning morphology by changing molecular structure [5], altering the composition of a binary gel [6,7] and solvent [8,9], and using ultrasound [10,11], light [12,13], and so on.For example, Saha et al. and coworkers reported for the first time the hierarchical tuning of one-dimensional morphology from helical bunched fibers to rods and hollow tubes, by changing the composition of riboflavin-melamine in a hydrogel system [2].Zhu et al. and coworkers demonstrated the structural transition from organogels to flower-like microcrystals in dipeptide self-assembling system, which can be readily induced by using ethanol as a cosolvent [8].Our previous work reported a photoinduced fiber-vesicle morphological transition in a chloroform gel of azophenyl hydrazide derivative [12].However, full control over the morphologies of self-assembled structures for their implementation in special applications is still a great challenge.
Among the noncovalent interactions, hydrogen bonding is most commonly used to direct self-assembly process because of its strength, directionality, reversibility, and selectivity.Meanwhile, peptide, amino acid, amide, and urea groups have been widely employed as building blocks to afford supramolecular gels.represented the first successful application of hydrazide derivatives in the hydrogen-mediated supramolecular systems with well-established structures [15].In this work, we focus on the self-assembly structure of calamitic hydrazide derivatives, N-4-(alkoxyphenyl)-N -4-[(4-methoxyphenyl)azophenyl] benzohydrazide (BNBC-, = 8, 12, 14) (as shown in Scheme 1), which had been synthesized and reported in our previous work [16].The results indicate that the gelation property and the aggregation morphology of organogel strongly depend on the nature of gelling solvents and the length of terminal alkoxy chain.The possible mechanism for the formation of nanostructural aggregates is also discussed in this paper.
Experimental Section
Field emission scanning electron microscopy (FE-SEM) images were taken with a JSM-6700F apparatus.Samples for FE-SEM measurement were prepared by wiping a small amount of gel onto a silicon plate, followed by drying in a vacuum for 12 h at room temperature.FT-IR spectra were recorded with a Perkin-Elmer spectrometer (Spectrum One B), in which the xerogels were prepared by freezing and pumping the organogel of BNBC- for 12 h, and then pressed into a tablet with KBr for FT-IR measurement.UV-vis absorption spectra were obtained on a Shimadzu UV-2550 spectrometer.
Gelation Test.The weighted gelator was mixed in a cap sealed test tube (3.5 cm (height) × 0.5 cm (radius)) with an appropriate amount of solvent, and the mixture was heated until the solid dissolved.The sample vial was cooled to 4 ∘ C and then turned upside down.When a clear or slightly opaque gel formed, the solvent therein was immobilized at this stage.Melting temperature ( ) was determined by the "falling drop" method [17].An inverted gel was immersed in a water bath initially at or below room temperature and then was heated slowly up to the point at which the gel fell due to the force of gravity, that is, the .
Molecular Design and Gelation
Properties.The strategy of this work is to build up a system containing hydrogen bonding, - interaction, and Van der Waals force to drive the self-assembly gelation.The mentioned above interactions play mutual balance to modulate the packing arrangement of molecules and eventually construct a particular superstructure.For this purpose, we had designed and synthesized the calamitic oxadiazole derivative BNBC-, which contains hydrazide, azobenzene, and an alkyl chain with different length (as shown in Scheme 1).The numerous interactions in BNBC- should offer, at least to some extent, the possibility of controlling the aggregation morphology of organogel.A certain solubility of a gelator in solvents is a prerequisite for gelation, whereas microphase segregation from dissolved solvents will induce gelation, in which different types of intermolecular interactions underpin the self-assembly of gelators.The length of alkyl chain is expected to regulate the intermolecular interaction with solvent to achieve a suitable solubility in solvent and thus the self-assembled nanostructures of organogel.We have investigated the gelation ability of BNBC- in various organic solvents at room temperature, and the relevant minimum gel concentrations of BNBC- are summarized in Table 1.
The obtained results indicate that the BNBC-8 bearing shorter terminal chain is difficult to dissolve in organic solvents with weak polarity, such as 1,2-dichloroethane and cyclohexane, but dissolves in medium and strong polarity such as benzene, chloroform, tetrahydrofuran, ethanol, and dimethylsulfoxide (DMSO) by heating and precipitates with cooling.In contrast, compounds BNBC-12 and BNBC-14 bearing longer terminal chain can form a stable gel in polar solvents such as aromatic solvents chloroform, while both of them dissolve in methanol by heating but precipitate with cooling as well.A possible reason for the precipitation from protonic solvents might be that the potential supramolecular aggregation through intermolecular hydrogen bonding is prevented in this case.In other words, the above gelation property in different solvents manifests that intermolecular hydrogen bonding between the hydrazide groups is the driving force for the gelation and subsequent nanostructure.Among these three calamitic hydrazide derivatives, BNBC-12 bearing appropriate length of terminal chain shows the strongest gelation ability in toluene, and the critical gel concentration (CGC) can reach as low as 5.3 × 10 −3 mol/L, which can be considered as a supergelator.Moreover, the sol-gel transition of BNBC-12 and BNBC-14 is fully thermoreversible even after several cycles of heating and cooling.The organogels are remarkably stable and can be stored for months without showing any decomposition.
Figure 1 shows the gel-sol transition temperature ( ) of BNBC-12 gels in benzene, chloroform, and acetone as a function of concentration, respectively.The increases with the concentration until a plateau region is reached (denoted by a concentration-independent ), which is determined by the "falling drop" method [17].With changing the solvent, the value in the "plateau region" decreases from 89 ∘ C (in toluene) to 60 ∘ C (in chloroform) and then to 55 ∘ C (in acetone).The most dramatic feature is that the morphologies of xerogels formed by BNBC-12 display a strong dependence on the nature of gelling solvent.From the summary listed in Table 1, it can be seen that the gelation ability of BNBC- manifests an apparent dependence on the length of terminal chain of BNBC- and intermolecular interaction with solvents, which will modify their self-assembled morphology structures as well.
Morphologies of the Xerogels.
In order to investigate the aggregation morphology of organogel, the xerogels of BNBC- were prepared and subjected to scanning electron microscope (SEM).As shown in Figure 2(a), the organogel of BNBC-12 from acetone consists of flexible root-like fibers with the width of 100-500 nm and length of tens of micrometers.The more entangled and dense fiber morphology with the width of 30-50 nm is observed for xerogel BNBC-12 from toluene (Figure 2(b)), and these fibers further assemble into thick fiber bundles and then constitute a highly developed and entangled network, indicating that the interactions between individual fibers are stronger in toluene.Meanwhile, the gel of BNBC-12 in toluene exhibits better transparency.Interestingly, the morphologies of the BNBC-12 xerogel from chloroform show a quite different packing from that of other xerogels.As shown in Figure 2(c), the xerogel of BNBC-12 from toluene consists of coral-shaped aggregation morphology and reveals 3D cross-linking network structure.From the zoom-in of top right corner in Figure 2(c) (as shown in Figure 2(d)), it can be seen that the coralloid aggregation structure is composed by hillocks with the size of 50 nm in diameter and 50-200 nm in length.As to the xerogel of BNBC-14 from toluene (Figure 2(f)), the morphology image exhibits straight and dense fibrous aggregates with the diameter of 30-80 nm and tens of micrometers in length.The aggregate structure of BNBC-14 xerogel from acetone (Figure 2(e)) is composed of flat ribbons with the width ranging from 1 m to 5 m, and some ribbons bent to form hollow tube as shown in the inset of Figure 2(e).The observed morphology results indicate the gelation ability of BNBC-12 in different organic solvents is stronger than that of BNBC-14.Correspondingly, the assembled fiber or ribbon of BNBC-12 would capture more solvent molecules and would demonstrate a lower CGC value as well.
On the basis of the above results, it can be concluded that the morphology of the BNBC- xerogels strongly depends on the nature of gelling solvents.These observations of tunable organogel structure are consistent with the CGC in different solvents shown in Table 1 for BNBC-12 and BNBC-14.The formation of elongated fiber-like and coral-shaped aggregates indicates that the self-assembly of BNBC- is driven by strong intermolecular interactions.
The Interactions in the Gels.
To investigate hydrogen bonding and alkyl chain conformations in the gelation process, the Fourier transform infrared (FT-IR) measurements on the xerogels of BNBC-12 from toluene, chloroform, and acetone were carried out, respectively.FT-IR spectroscopy of BNBC-12 xerogel from toluene (Figure 3(a)) shows that the N-H stretching vibrations are at around 3220 cm −1 and amide I at around 1641 cm −1 and 1679 cm −1 , respectively, indicating that the N-H groups are associated with C=O groups via N-H⋅ ⋅ ⋅ O=C hydrogen bonding in the xerogel [18,19].For the acetone xerogel (Figure 3(c)), the N-H stretching vibrations locate at 3230 cm −1 , and the peak slightly shifts to higher wavenumbers along with a shift of amide I bands to 1683 cm −1 and 1645 cm −1 , respectively.Thus, it can be concluded that intermolecular hydrogen bonding exists in BNBC-12 xerogel from acetone, though it is somewhat weaker compared with that in the corresponding xerogel from toluene.On the other hand, the ] s (CH 2 ) and ] as (CH 2 ) in the xerogel of BNBC-12 appear at around 2855 cm −1 and 2925 cm −1 , respectively, implying that the alkyl chains are closely packed to form quasi-crystalline domain [20].
Electronic UV-vis absorption spectra of the gels were studied to obtain information about the aggregated state of azobenzene on the molecular scale.As shown in Figure 4, the absorption spectra of BNBC-12 manifest a slight but detectable dependence on concentration in acetone.With a dilute (1 × 10 −5 M) solution of BNBC-12 in acetone, the - * absorption maximum of the azobenzene group of BNBC-12 (Figure 4) locates at 351 nm.With the concentration increasing from 1 × 10 −5 M to 1 × 10 −3 M, the absorption maximum is slightly red-shifted from 351 nm to 357 nm, indicating that the azobenzene units are arranged into J-type aggregates through - interactions in gels [21].Similar results were observed for BNBC-14 in acetone.The self-assembled gelation is a complex process in which several noncovalent interactions are involved in the formation of aggregation structure.The above spectroscopic results with respect to the gelation ability and tunable morphology of BNBC-12 and BNBC-14 indicate that the cooperation of hydrogen bonds, - interaction, and Van der Waals force plays an important role in self-assembly.The gelation of BNBC- provides an easy method for preparation of multidimensional structure and manipulation of morphology ranging from ribbons, hollow tubes, fibers to even 3D net-like structure in different solvents.
Conclusions
We have studied the gelation ability and morphology of a series of LMOG (BNBC-, = 8, 12, 14) containing hydrazide, azobenzene, and alkyl chain with different length.BNBC-8 demonstrates a nonorganogel compound in any solvents or at different temperature.The organogelators BNBC-12 and BNBC-14 bearing longer terminal chains show strong gelation ability in organic solvents, such as toluene, chloroform and acetone, and so forth.The minimum gel concentration of BNBC-12 in toluene is as low as 5.3 × 10 −3 M, which can be considered as a supergelator.It also has been demonstrated that the morphology of the xerogels strongly depends on the nature of gelling solvents, and the selfassembled nanostructure is tunable by the intermolecular interactions between molecule unit with different length of terminal chain and solvents.Based on these observations, the cooperation of hydrogen bonds, - interaction, and Van der Waals force might be crucially involved in the process of self-assembly.The unique and tunable aggregation morphology could be applied to surface modification such as superhydrophobicity and distinguish the obtained organogels as a novel class of functional materials.
a G: stable gel formed at room temperature; I: insoluble; P: precipitated; S: soluble; CGC: critical gelation concentration (mol/L), the minimum solute concentration necessary for gelation.
|
2018-12-13T19:58:03.583Z
|
2015-01-01T00:00:00.000
|
{
"year": 2015,
"sha1": "b0c5b70bd4fe13d59f95b3e110314d3f2fb5e43c",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jnm/2015/357875.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b0c5b70bd4fe13d59f95b3e110314d3f2fb5e43c",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
253283223
|
pes2o/s2orc
|
v3-fos-license
|
Prototype-Design of Soil Movement Detector Using IoT Hands-on Application
: The landslide disaster that killed people occurred due to public ignorance of the type of soil prone to landslides. Several efforts have been made to create prototype tools for soil movement detection. However, researchers using the Internet of Things (IoT) technology are still limited. The IoT allows for the transmission of data over an internet connection, is always connected, offers remote control capabilities, and data sharing. All of this served as the prototype design of foundation for the soil movement detection. A light-based proximity sensor is used in system, and its output is represented as a movement of the soil on an inclined plane. F furthermore, the data used as input for the NodeMCU ESP8266 microcontroller is linked to the internet. The output is an HMI in the form of an LCD monitor that displays the soil movement measurement. The simulation of disturbances in an inclined plane is done differently depending on the frequency and duration. Moreover, monitoring is carried out by transferring processed data to the Blynk platform, which is subsequently shown in real time via the Blynk Android application. The test results of the tool used three distinct samples, as well as varied disturbance frequencies and durations. With the soil samples, the biggest movement data was 5cm achieved at a disturbance frequency of 5Hz and 40 seconds duration. The largest movement data for sand samples was 11cm at a disturbance frequency of 3Hz and 50 seconds duration, followed by largest movement data for sand soil mixture samples was 8cm at a disturbance frequency of 5Hz and 50 seconds duration. People should not reside on slopes, especially if the soil's primary component is sand.
Introduction
Landslide disasters are among the top three most common disasters in Indonesia, with quite a lot of losses. According to data from the National Disaster Management Agency (Kwon with BNPB), there were 1321 occurrences of landslides in Indonesia until the end of 2021 (Superadmin Data, 2022). Landslides are most common in mountainous terrain, hills, steep slopes, and cliffs. Landslides are caused by two basic factors: the driving factor and the trigger factor. The driving factor is the factor that affects a substance and causes it to move. Meanwhile, the trigger factor is the element that causes the material to shift, resulting in a landslide. The fundamental cause of landslides. However, gravity is pulling the soil down.
Other elements that contribute to landslides include rainfall, geological structure, distance from faults, vegetation levels, and the region's terrain (Wang et al., 2017) Soil erosion, steep cliff slopes, vibrations, sparse trees, and the existence of farms on the slopes are all factors to consider. Landslides occur when there is a mass movement of rocks and soil due to the force of gravity. The landslide occurs when there is an equilibrium disturbance in the restraining force and the launching force operating on the slope (Heru et al., 2019).
Landslide disasters often take the lives of people due to the presence of people in the vicinity and their inability to predict the onset of disasters. The advancement of technology makes information transmission faster, more flexible, and more efficient. IoT is one of the technologies that can be used in the process of exchanging information (Internet of Things). 2246 The Internet of Things (IoT) is a data collection and information transmission concept. The idea is to broaden the benefits of always-on internet access, which includes remote control and data sharing capabilities.
Energy intelligence in the home, manufacturing, logistics, and construction industries, monitoring the environment of healthcare systems and services, and drone-based services are some examples of IoT applications (Naser Hossein, 2020).
Vibration
Vibration (oscillations) occurs when there is a disturbance from its equilibrium position. A small amount of energy influences vibration as a reciprocating motion around the equilibrium point. The vibration frequency is one full back and forth movement (Navila, 2017). Examples of vibrations in everyday life include 1) The swing of the youngsters being played; 2) The pendulum of the swaying wall clock; 3) Plucked guitar strings. Vibration characteristics include deviation, amplitude ( ), period ( ), and frequency ( ).
The frequency value is influenced by the number of vibrations per second and the duration of such vibrations.
Waves
Waves are vibrations that carry propagation energy without moving the matter in between. Wave motion is a type of energy transfer in which momentum is transferred from one point to another without the displacement of matter. Wave types are classified according to their medium, amplitude, and propagation direction (Navila, 2017).
There are mechanical and electromagnetic waves based on the medium. Mechanical waves are waves that require an intermediate medium to propagate, such as sound waves, ropes, and slinky. Electromagnetic waves, unlike light waves, radar, radio, and television, do not require a medium to propagate. There are running waves and stationary waves based on amplitude. A running wave is one with the same amplitude at every point. While stationary waves have different amplitudes at different points.
There are several types of waves based on their propagation direction, including transverse and longitudinal waves. Transverse waves, such as rope waves, light, radio, television, and radar, have propagation directions that are perpendicular to the direction of their vibrations. Longitudinal waves are waves whose propagation direction is parallel to their vibration direction, such as sound waves (sound), springs / slinky wavelength ( ), period ( ), frequency ( ), and rapid propagation of waves ( ) are all wave characteristics (Navila, 2017).
As a result, the wavelength, frequency, and period of a wave all have a significant impact on its rapid propagation value. Another feature is that waves can be reflected if they pound the barrier wall, and they can be refracted if they pass through different density substances. If waves pass through a narrow gap, they can flex and be combined without interfering with their speed.
Firmansyah et al. previously conducted research on soil movement monitoring systems using the concept of an accelerometer as an early detection of soil movement (Nursuwars et al., 2019). The design uses an accelerometer sensor to detect soil on an inclined plane, which is then connected to a NodeMCU microcontroller that is linked to the Internet and a server, where the results of sensor detection data are stored and displayed via the website. Naldi and Wildian, (2018) conducted another study on the design of an Earthquake Alarm System Using the principles of Spring Force and Magnetic Field Sensing. The vibration measuring sensor was positioned in the ground and given a permanent magnet in their design. The sensor detection results are then processed by the Arduino Uno microcontroller and displayed on the LCD. Meisya et al, (2018) conducted a study with the THOR (Landslide Detection): Detection of landslide disasters using SMS gateway-based "prayer beads" sensors.. In their design a prayer bead sensor made of nails and placed on a slope prototype is used. After that, the sensor is connected to the Arduino Mega microcontroller. If a landslide is detected, the prayer bead sensor will send an alert via buzzer and SMS gateway as an early warning of the landslide.
Artha et al, (2018) created a Landslide Disaster Early Warning System Using Accelerometer Sensors and Android-Based Soil Moisture Sensors. The system measures soil vibration with an accelerometer and soil moisture with a soil moisture sensor. Sensor data are then processed by a NodeMCU microcontroller connected to the Internet network, and the processed data is sent from the NodeMCU to a webserver that can be accessed in real time via android.
Parwati et al. (2018) did research on Designing a Landslide Hazard Early Warning System with Hygrometer and Piezoelectric Sensors. A hygrometer sensor detects soil moisture and a piezoelectric sensor measures the load or pressure. The sensors are connected to the ATmega328 microcontroller, which serves as a processor for the measurement data. The processed results are then sent to the user via SMS gateway using the GSM SIM900 module.
Based on the research referenced above, a prototype of the landslide movement detection system will be designed using an IoT-based Sharp GP distance sensor, which can provide information about the inclined plane's soil movement, such as the distance of the soil movement from the starting point and notification of danger warnings due to the presence of a movement with the potential to landslide. The sensor is used as input to detect inclined soil movement and a buzzer is used as an output to generate alarm sounds warning of potential landslides. The Sharp GP sensor is connected to the NodeMCU ESP8266 microcontroller, which is connected to the Internet. The NodeMCU ESP8266 sends sensor detection data to the blynk platform, which stores the data in realtime and can be accessed via an Android app.
This prototype has two outputs: one uses an LCD as a ground shift information display and the other uses a DC motor to simulate disturbance on the inclined plane. The monitoring system built with NodeMCU ESP 8266 sends sensor detection data to the Blynk platform, which stores data in real time and can be accessed through an Android app.
The number of landslide cases has increased due to public ignorance of soil types that have the potential to cause larger landslides, but efforts to simulate soil movement detection devices using IoT technology are still rare. Based on these issues, a soil movement detection device prototype was created. Figure 1 shows a block diagram of a prototype landslide movement detection system that uses an internet of things-based Sharp GP distance sensor. Physical magnitude data is obtained from the sensor readings and then used as input in the system to determine the presence or absence of soil movement.
Method
As the sensor input data processor, the microcontroller employs the NodeMCU ESP8266. The Kit's development is based on the ESP8266 module, which includes GPIO, PWM (Pulse Width Modulation), I2C, 1-Wire, and an ADC (Analog to Digital Converter) on a single board (Artha et al., 2018). In addition to being a data processor, the microcontroller can communicate with the outside world via an internet connection. The data from the treated sensor is then transmitted via the Internet network to the Blynk web server and displayed on an LCD display. Based on simulated shocks applied to the inclined plane, the LCD display will display soil movement information. Disturbance with several hertz values representing the source of vibration and waves generated by the DC motor.
The DC motor controller or driver is an L298 module that controls the rotational direction of the motor (Adriansyah & Hidyatama, 2016). An Android smartphone with a Blynk application that can pull data from the Blynk Web server and display it in real time on the smartphone screen. The Blynk Apps application is programmed using drag and drop, making it easier to add input / output components without having to know Android or iOS programming (Hadi et al., 2019).
Figure 1. Block diagram of the system
Hardware design is divided into two categories: electronic device wiring design and 3D mechanical design. Electronic device wiring design includes input devices, microcontrollers, and outputs based on the ports and pins that will be used to run the tool's prototype. Figure 2 shows the wiring of electronic devices.
Using commercial CAD software, create a 3D mechanical design of the tool. An incline plane simulation medium and an electronic controller box section are included in the design.
Step by step, the design and shape of the tool will be perfected over time, particularly the placement of mechanical and electronic components, as shown in Figure 3. Table 1 is the technical specifications of the system. An actuator is used in the system as a source of vibration so that the soil can be simulated as if it is move. The actuator must have sufficient torque and power to ensure that the soil movement simulation does not deviate from the established parameters.
According to Figure 6, the NodeMCU microcontroller programming flowchart begins with the initialization of the ADC input port for the sensor, the DC motor output port, and the LCD I2C port, Wi-Fi IP address network, and Blynk token account. The program also scans the Wi-Fi to ensure that the microcontroller is connected to the internet. When the microcontroller is connected to the internet network and the blynk platform, sensor begins reading the distance to get data and sends it to the blynk web server to be displayed via the blynk application. When the dc motor's on button is pressed, the microcontroller begins to adjust the dc motor's speed to simulate shocks on the inclined plane. When the sensor detects soil movement, the microcontroller sends the data to the LCD display and the Blynk application, which displays it.
According to Figure 7, the Blynk platform programming flow chart begins with the initialization of the Blynk platform account to obtain the Blynk token account, sensor data variables and dc motor on button variable. Second, after connecting the device, the program receives sensor data from the microcontroller and displays them via the Blynk application. Third, when the DC motor on the button is pressed, the Blynk sends high data to the microcontroller, causing the dc motor to activate and simulate shocks in the inclined plane. Fourth, if the sensor detects soil movement, the Blynk app displays it. Figure 8 depicts the realization of the system prototype, while Figure 9 depicts the interface display on the Blynk application. The Blynk application displays a menu with frequency selection, shift indicators, vibration duration, and shift charts in distance units. The use of frequencies to simulate the state of soil movement in real life. The duration of the vibration is used to determine how long the vibration lasts when it causes a movement in the ground.
Sample and Characteristics
The stages described in the flow chart will be applied later using a variety of different variations in soil material. The use of various soil samples aims to determine which type of soil has the greatest potential for harm. Even though it has a low vibration frequency and a short duration, the potential hazard has a long movement distance. The following Table 3 shows the characteristics of the sample used. The type of soil sample used was the first sample of soil, the second sample of sand, the third sample was a mixture of soil (50%) and sand (50%). The mass of the sample used was uniformly 1kg for each sample at the time of the test. Soil Samples for Prototype System Testing Soil movement data ranging from 3 to 5cm were obtained from sensor as seen in Table 5. When the frequency is 5Hz and the duration is between 40 and 50 seconds, the largest movement occurs which is 5cm. 3 3 3 3 3 10 secs 3 3 3 3 3 15 secs 3 3 3 3 3 20 secs 3 3 3 3 3 25 secs 3 3 3 3 4 30 secs 3 3 3 3 4 35 secs 3 3 4 4 4 Sand Samples for Prototype System Testing Sand movement data ranging from 3cm to 11cm were obtained from sensor as seen in table 6. The biggest movement of 11cm occurs three times when the frequency of 3Hz duration is 50 seconds, frequency 4Hz duration is 50 seconds and frequency 5Hz duration is 50 seconds. Soil Sand mixture Samples for Prototype System Testing Soil Sand-mixed samples movement data ranging from 3cm to 8cm were obtained from sensor as seen in table 7. The biggest movement of 8cm occurs when the frequency of 5Hz duration is 50 seconds. 3 3 3 3 3 10 secs 3 3 3 3 3 15 secs 3 3 3 4 3 20 Test data analysis with three different samples and varying frequency and duration, as follows: Sample 1, Soil, the data obtained for soil movement of 3cm for a duration of 5 seconds to 50 seconds with the first disturbance frequency of 1Hz, the second frequency 2Hz has obtained the largest movement of 4cm for 45 seconds and 50 seconds. The third disturbance frequency of 3Hz obtained the largest movement of 4cm when the duration of 35 seconds, and 40 seconds and the fourth frequency of 4Hz obtained the largest movement shift is 4cm when the duration of 40 seconds.
Sample 2, Sand, the largest sand movement is 10 cm for a duration of 50 seconds with a frequency disturbance of 1Hz. Second, with disturbance frequency of 2Hz has the largest movement is 10cm for a duration of 50 seconds. Third, with the disturbance frequency of 3Hz has the largest movement being 11cm for a duration of 50 seconds. Then with the fourth frequency 4Hz has the largest movement is 11cm for a duration of 50 seconds.
Sample 3, Soil Sand Mixture with the disturbance frequency of 1Hz, the largest movement is 6 cm for a duration of 35 seconds. Second, with a disturbance frequency of 2Hz, the largest movement is 7 cm for 45 seconds. Third, with the frequency of 3Hz, the largest movement is 7cm when the duration is 45 seconds and 50 seconds. Fourth, with a frequency of 4Hz, the largest movement is 7cm when the duration is 50 seconds. Last, the frequency is 5Hz, the largest movement is 8cm when the duration is 50 seconds.
Sand has a greater potential for movement on an inclined plane when the disturbance frequency is 3Hz to 5Hz. When reaching the maximum duration of 50 seconds, there is a movement of 11cm.
Conclusion
This prototype system is designed to provide information on soil movement on a 44.3º incline and types of soil that have a greater potential for move with ranging from 1Hz -5Hz frequency and 5 -50 seconds duration. With a soil sample, the largest movement data are 5 cm with a disturbance frequency of 5Hz and a duration of 40 seconds. Therefore, sand sample, the largest movement data is 11cm at a disturbance frequency of 3Hz and a duration of 50 seconds. Finally, the soil sand mixture has largest movement data at 8 cm at a frequency of 5Hz and a duration of 50 s. 3. The sand has a greater potential for shifting on an inclined plane with a disturbance frequency of 3Hz in a duration of 50 seconds
|
2022-11-04T18:30:53.364Z
|
2022-10-31T00:00:00.000
|
{
"year": 2022,
"sha1": "e29d8d54b360a86e6ddab9e7ee56c2eea524b041",
"oa_license": "CCBY",
"oa_url": "https://jppipa.unram.ac.id/index.php/jppipa/article/download/1709/1572",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "908a74406798116319a3da467d26c457d4f6b952",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
}
|
120882093
|
pes2o/s2orc
|
v3-fos-license
|
Analysis of relationships between residual magnetic field and residual stress
The impact of stress on changes in magnetisation is one of the most complex issues of magnetism. Magnetic methods make use of the impact of stress on permeability, hysteresis and magnetic Barkhausen noise, which are examined with fields with a high strength and a small frequency. The paper presents an analysis of the impact of residual stress resulting from inhomogeneous plastic deformations in the notch area of the examined samples on the changes in the strength of the residual magnetic field (RMF). The RMF on the surface of the component is the superposition of the simultaneous effect of the shape, the anisotropic magnetic properties of the material, as well as of the values of the components of a weak external magnetic field (most commonly—the magnetic field of the Earth). Distributions of the RMF components were measured on the surface of samples with a various degree of plastic strain. The finite element method was used to model residual stress in the samples. The impact of residual stress on changes in the residual magnetic field was shown. A qualitative correlation was found between places with residual stress and areas with increased values of the gradients of the RMF components. Further research is now in progress in order to develop the quantitative relationships.
Introduction
Residual stress is the stress which occurs in construction components which are not subjected to external loads. Residual stresses of the first order can arise due to a gradient in plastic deformation caused either by mechanical deformation or by the thermal gradient during cooling [1][2][3][4].
The possibilities of evaluating applied stresses and residual stresses of the first order on the basis of the residual magnetic field (RMF) were indicated in [5][6][7][8][9][10][11]. [5][6][7][8] showed the possibility of the stress state evaluation on the basis of RMF measurements. In [6] the relationships between the gradient of the RMF normal component and stress measured with the Xray diffraction method were studied. In [5] the impact of the level and distribution of stress on the values of the RMF components was found for static tensile loads. For varying loads it was found that there was an influence of the stress amplitude and the number of cycles. L.H. Dong et al. [8] and C.L. Shi et al. [9] found a relationship between the gradient of the normal component and previously applied static tension load. S. Changliang et al. [10] report a considerable impact of the notch effect coefficient (the ratio of the max local stress at the notch under external load to the nominal stress without a notch) on the gradient of the normal component of the RMF. This paper presents an analysis of qualitative relationships between the RMF distributions on the surface of notched samples with plastic deformations and the distributions of calculated values of residual stress. The presented results are a continuation of the research conducted with a view to determining the stress state based on magnetic parameters [3]. They extend the analysis of the relationships between residual stress of the first order and the RMF by a two-dimensional problem.
Experimental details
The samples were of the form of 2-mm thick flat bars made of S235 steel whose chemical composition is given in Table 1. A notch was made in the samples in Fig. 1. The relationship between plastic strain and engineering stress for S235 steel is shown in Fig. 2.
The samples were loaded on a tensile testing machine Galdabini Sun 10P. After the desired loads were applied, the samples were unloaded and removed from the testing machine prior to being examined. The examination was always carried out at the same place and with the same position of the sample. 5 samples of each kind were analysed.
Residual magnetic field measurements
The magnetic field measurements were conducted in the "measuring area" marked in Fig. 1, with a scanning increment of 1 mm along vertical lines which were 4 mm apart from each other. The magnetometer TSC-1M-4 with the measuring sensor TSC-2M supplied by Energodiagnostika Co. Ltd. Moscow was used for the measurements. The instrument was calibrated in the magnetic field of the Earth, whose value was assumed at 40 A/m. • H t,x -tangential component measured in the direction perpendicular to the applied load, • H t,y -tangential component measured in the direction parallel to the applied load, • H n,z -normal component.
The samples were not initially demagnetised; their degree of magnetisation varied slightly at the initial stage. The differences got smaller and smaller after the yield point of the material had been exceeded. The gradients of the RMF components are used to analyse the state of stress and deformation. If the initial distribution of the RMF on the surface of the sample is reasonably uniform and features a low level of RMF gradients, this indicates that there is no or very little residual stress of the first order after the manufacturing process. Example initial distributions of the gradients of the RMF components measured on the surface of the samples are shown in Figs. 3a to 3c for samples with a notch in the centre of the sample-variant A. The areas with increased values of RMF gradients which can be seen in the figures are related to the shape of the sample and the magnetic flux leakage in the vicinity of the notch.
Residual stress calculations
The residual stress values were determined by means of the finite element method (FEM). The software package Ansys 12.1 was used.
The tensile curve for steel S235 was approximated using a multilinear model Fig. 2. The material proper-ties were assumed as isotropic. Due to the small thickness of the samples, the problem was modelled as a two-dimensional one, assuming that it was a plane stress state. The numerical model mesh, which fully corresponded to the geometry of the samples, was built on the basis of eight-node quadrangular elements.
The boundary conditions included the fixing of the model on one hand, and the application of tensile loads corresponding to the force set by the strength testing machine on the other.
The way in which the calculations were carried out made it possible to take account of the plastic strain accumulation in each subsequent cycle of the loading of the sample, i.e. the stress-strain state determined in each calculation step constituted the initial state used to determine the stress-strain state in the next step. For the needs of the performed analyses, the results were derived for the area covered by the RMF measurements- Fig. 1.
Results and discussion
Due to the magnetoelastic effect, mechanical stress has an influence on the energy anisotropy of magnetic domains, which most often results in changes in permeability. The direction of the anisotropy depends on magnetostriction. For materials with positive magnetostriction, the magnetic moments tend to align in parallel to the direction of tensile stress, and perpendicular to compressive stress. In materials with negative magnetostriction, opposite phenomena occurthe magnetic moments tend to align perpendicular to the direction of tensile stress, and in parallel to compressive stress [12,13]. The impact of uniaxial stress on the domain structure can be compared to the effect of a magnetic field with strength H σ which is equivalent to the stress where σ is stress, λ-magnetostriction, μ 0 -magnetic permeability of free space, M-magnetisation, φthe angle between the stress axis and the direction of magnetic field H σ , and ν-Poisson's ratio [12][13][14][15][16]. Dependence (1) results from the fact that, in a certain range of the magnetic field strength H and stress σ , Villary's effect is the inverse of Joule's effect and in this case these effects are interrelated by a thermodynamic dependence.
In order to describe the impact of the complex stress state, the notion of equivalent stress is introduced, i.e. of a fictitious uniaxial stress whose amplitude will lead to the same change in susceptibility as real multiaxial stress [17][18][19][20][21][22]. The problems related to the impact of a complex stress state on changes in magnetisation are issues whose description and modelling, due to potential application for stress measurements, are the subject of current research [17][18][19][20][21][22]. The residual magnetic field of a ferromagnetic element, also known as the Self Magnetic Flux Leakage, is the sum of the simultaneous effect of the geometry of the object and of the magnetic, electrical and mechanical properties of the material of which it was made in the magnetic field of the Earth. Due to magnetomechanical coupling, the stress which occurs in the object (both active and residual) has an impact on Representative results of the RMF measurements in the 'measuring area" marked in Fig. 1 of samples with plastic deformations are presented in this paper. The location and dimensions of the notch in variant A and B samples are presented in Fig. 1. For variant A samples, the measuring area includes the notch; for variant B samples-the notch is beyond the area. In the immediate vicinity of the notch, the distribution of the RMF gradients depends mainly on the magnetic flux leakage caused by the discontinuity of the material. In the remaining area, the RMF distribution is the result of the impact of a certain distribution of stress and of the properties of the material (such as local changes in structure, chemical composition) which affect magnetic properties. For the one-dimensional problem, the possibility of evaluating residual stress based on the gradients of the RMF components is presented in [3]. The presented results are a supplement and continuation of [3]. In further consideration, the concept of the gradient of the RMF component will be understood as its absolute value. In the initial state, the effect of the magnetic flux leakage in the notch area dominates in the distributions of the gradients of the RMF components (example for a variant A sample in Figs. 3a to 3c). For a certain geometry, their values depend mainly on the outer magnetic field and on the magnetic permeability of the material. Due to the impact of the notch being a stress concentrator, even for slight values of active stress σ , small areas of plastic deformations appear in the area of the notch. A rise in active stress causes an increase in the size of the deformations. This results in changes in the values of the RMF components and of their gradients, which can be traced for a variant A sample for the gradient of the tangential component H T ,Y from the initial state- Fig. 3b through subsequent states for the nominal active stress value σ = 50 MPa- Fig. 7a, and σ = 100 MPa- Fig. 7b. A rise in the values of the gradients, as well as an increase and a slight change in the shape of the area where their increased values appear, can be noticed. A rise in the values of active stress leads to the formation of a characteristic distri- (Figs. 9a, 9c, 9e) resemble a flattened X, and the residual stress distributions σ y (Figs. 9b, 9d, 9f) look like an elongated X (the centres of X are located on the centre of the notch). In the image of the gradients (especially those of the tangen-tial components H T ,Y - Figs. 
6b, 6e, 6h), shapes similar to two X's can be distinguished, which shows the impact of both σ x and σ y residual stress values. For a variant B sample, the distributions of residual stress σ x (Figs. 10a, 10c, 10e) and residual stress σ y (Figs. 9b, 9d, 9f) resemble rotated V 's, whose arms come out of the notch. In the image of the gradients, similarly to the sample with the notch in the centre, for gradients of Generally, it can be stated that there are qualitative relationships between plastic deformations and the RMF magnitude, as well as between the RMF gradients and residual stress.
Conclusions
It is shown that for the samples under analysis the distributions of residual stress components are reflected in the distributions of the RMF gradients. This is particularly visible for the RMF tangential component.
There are qualitative relationships between RMF gradients and residual stress of the first order. This provides a basis for further research aiming at the development of quantitative relationships, whose concept for one-dimensional problems is presented in [3] and [23]. The research will make it possible to evaluate the state of the material by means of the RMF measurements for two-dimensional problems.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
|
2019-04-19T13:09:25.430Z
|
2013-01-01T00:00:00.000
|
{
"year": 2013,
"sha1": "9de503dac92051436066cd501950bd29aaf00d4d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11012-012-9582-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "5ea8082aebe6d477c938697581c36707d464c1b9",
"s2fieldsofstudy": [
"Materials Science",
"Geology"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
7639825
|
pes2o/s2orc
|
v3-fos-license
|
Promoter Methylation Precedes Chromosomal Alterations in Colorectal Cancer Development
Background: Colorectal cancers are characterized by genetic and epigenetic alterations. This study aimed to explore the timing of promoter methylation and relationship with mutations and chromosomal alterations in colorectal carcinogenesis. Methods: In a series of 47 nonprogressed adenomas, 41 progressed adenomas (malignant polyps), 38 colorectal carcinomas and 18 paired normal tissues, we evaluated promoter methylation status of hMLH1, O6MGMT, APC, p14ARF, p16INK4A, RASSF1A, GATA-4, GATA-5, and CHFR using methylation-specific PCR. Mutation status of TP53, APC and KRAS were studied by p53 immunohistochemistry and sequencing of the APC and KRAS mutation cluster regions. Chromosomal alterations were evaluated by comparative genomic hybridization. Results: Our data demonstrate that nonprogressed adenomas, progressed adenomas and carcinomas show similar frequencies of promoter methylation for the majority of the genes. Normal tissues showed significantly lower frequencies of promoter methylation of APC, p16INK4A, GATA-4, and GATA-5 (P-values: 0.02, 0.02, 1.1×10−5 and 0.008 respectively). P53 immunopositivity and chromosomal abnormalities occur predominantly in carcinomas (P values: 1.1×10−5 and 4.1×10−10). Conclusions: Since promoter methylation was already present in nonprogressed adenomas without chromosomal alterations, we conclude that promoter methylation can be regarded as an early event preceding TP53 mutation and chromosomal abnormalities in colorectal cancer development.
Introduction
Colorectal cancer development is characterized by the growth of a benign precursor lesion of which eventually a small percentage will progress into a carcinoma [28]. The genetic alterations underlying the adenoma to carcinoma transition have been extensively studied over the past two decades. Pioneering research of Vogelstein and co-workers has proposed a progres-sion model in which genetic alterations as APC and TP53 mutations and allelic loss of 5q and 18q play an important role [2,16,17,24,43]. Previously, we introduced the concept that chromosomal instability does not merely constitute genetic noise but occurs in nonrandom patterns. Accumulation of losses in 8p21-pter, 15q11-q21, 17p12-13 and 18q12-21 and gains in 8q23qter, 13q14-31, and 20q13 are strongly associated with advanced lesions and can be used as indicator of progression towards malignancy [22,32]. Recently, it has become clear that initiation and progression of cancer also involves epigenetic alterations such as DNA methylation and that genetic and epigenetic alterations interact in driving the development of cancer [34].
Colorectal cancer development is associated with epigenetic silencing of the DNA repair genes hMLH1 [10] and O 6 MGMT [9], the WNT signal transduction regulator APC [13], the Ras signalling molecule RASSF1A [44], the transcription factors GATA-4 and GATA-5 [1] and the cell cycle regulators CHFR [8,40], p16 INK4A and p14 ARF [21,33]. Although extensive knowledge exists on epigenetic and genetic changes in colorectal cancer, little is known about the exact relationship between these two [7,19]. In this cross-sectional study we address epigenetic and genetic (at the level of the single gene as well as at the level of whole chromosomes) alterations in colorectal cancers and its precursor lesions. Using a multi-gene approach we investigate the timing of promoter methylation and define how these epigenetic events are related to genetic events in colorectal cancer development.
Patient material
This study was performed on a subset (n = 139) of colorectal adenoma and carcinoma tissues which has been analyzed for structural chromosomal abnormalities by comparative genomic hybridization (CGH) previously [22]. Part of this subset has also been analyzed for mutation status of APC (n = 96) and KRAS (n = 78) [18,22]. We extended this series by adding 20 colorectal adenoma and carcinoma cases, bringing the overall total to 159 tissues. This series consists of 47 colorectal adenomas without signs of malignancy at time of resection (nonprogressed adenomas (nA)), 41 malignant polyps (colorectal adenomas containing a focus of carcinoma) and 38 additional solitary colorectal carcinomas (Cs). Of the 41 malignant polyps, the adenoma part, referred to as progressed adenomas (pA) (n = 41), and the carcinoma part (Cmp) (n = 33) were microdissected and analyzed separately. If present we added morphologically normal mucosa within the resection specimen (n = 18) of patients with solitary carcinomas (Cs) to these series. For each tissue sample, DNA was extracted from fifteen 10-µm paraffin sections, dissecting the most tumor-rich areas, allowing a maximum of 20% nontumor cell contamination.
Overall, the tissues were obtained from 95 patients, 46 males and 49 females (mean age of 67 years: range 40-89). Twenty-four patients exhibited multiple tumors; 4 patients presented with multiple adenomas, 1 patient presented with multiple carcinomas and 19 patients exhibited 1 or more adenomas adjacent to a carcinoma. The histological characteristics are listed in Table 1.
Promoter methylation analysis
DNA methylation in the CpG islands of the hMLH1, O 6 MGMT, APC, p14 ARF , p16 INK4A , RASSF1A, GATA-4, GATA-5 and CHFR gene promoters was determined by chemical modification of genomic DNA with sodium bisulfite and subsequent methylation-specific PCR (MSP) as described in detail elsewhere [11,20,42]. Briefly, 1 µg of DNA was denatured by NaOH and modified by sodium bisulfite. DNA samples were then purified using Wizard DNA purification resin (Promega, Madison, USA) again treated with NaOH, precipitated with ethanol, and resuspended in H 2 O. To facilitate MSP analysis on DNA retrieved from formalin-fixed, paraffin embedded tissue, DNA was first amplified with flanking PCR primers that amplify bisulfite-modified DNA but do not preferentially amplify methylated or unmethylated DNA. The resulting fragment was used as a template for the MSP reaction. Primer sequences have been described before [1,5,42]. All PCRs were performed with controls for un-methylated alleles (DNA from normal lymphocytes), methylated alleles [normal lymphocyte DNA treated in vitro with SssI methyltransferase (New England Biolabs)], and a control without DNA. Ten µl of each MSP reaction were directly loaded onto nondenaturing 6% polyacrylamide gels, stained with ethidium bromide, and visualized under UV illumination. The methylation status was assessable in 96% of the total number of analyses. To asses reproducibility, 234 MSP reactions have been performed in duplicate starting from DNA amplification with flanking PCR primers, the reproducibility was 90%. To exclude false priming, sequencing of the methylated amplicon of APC was performed, revealing extensive methylation of all amplicons, including the primer binding region.
P53 immunohistochemistry
P53 immunohistochemistry (n = 146) was performed using the mouse monoclonal antibody DO7 (DAKO, Glostrup, Denmark). Immunoperoxidase staining for p53 in formalin-fixed, paraffin-embedded tissue sections was performed by a horseradish peroxidase labeled streptavidin-biotin method. Four µm sections were mounted on 0.1% poly-L-lysine coated glass slides, deparaffinized, and rehydrated through graded alcohols to water. Endogenous peroxidase activity was blocked by incubation with 0.3% H 2 O 2 in methanol. Sections were immersed in 10 mM sodium citrate buffer, pH 6.0, and subjected to heatinduced antigen retrieval with microwave. To block non-specific protein binding, sections were pre-incubated with normal rabbit serum (1:50, DAKO) for 10 min at room temperature. Mouse monoclonal antibody against p53 (1:500 DO7, DAKO) was applied, and tissue sections were incubated overnight at 4 • C. Then sections were rinsed with PBS, and treated with biotinylated rabbit anti-mouse IgG (1:500, DAKO) for 30 min at room temperature, rinsed with PBS, and then incubated with streptavidine-biotin-HRP complex (1:200, DAKO) for 1 hour at room temperature. After washing with PBS the complex was visualized with diaminobenzidine and H 2 O 2 for 3 min. Sections were then counter stained with hematoxylin, dehydrated in graded alcohols, cleared in xylene and cover slipped. The area percentage of positive nuclei was scored with a point counting approach using a video overlay measuring system (Qprodit Leica, Cambridge, UK). An area percentage of 20 was used as threshold for positivity [31].
KRAS mutation analysis
KRAS mutation status of 78 colorectal adenoma and carcinoma tissues have been analyzed previously [22]. Fifty-two additional tissues were analyzed by PCR using an oligonucleotide 20-mer panel of codons 12 and 13 (TIB Molbiol, Advanced Biotechnology Center, Genova, Italy) as previously described [18]. Extracted DNA from peripheral blood lymphocytes from healthy donors was used as wild type KRAS codon 12 GGT-gly and codon 13 GGC-gly controls, and extracted DNA from 6 different colon cancer cell lines was used as control for known KRAS mutations.
Array-CGH analysis
One hundred-thirty-nine colorectal adenoma and carcinoma tissues have been analyzed by conventional CGH previously [22]. The 20 additionally collected neoplasms were analyzed by array CGH analysis using 5 K BAC arrays [6,36]. In short, we used a fullgenome array printed in the house containing approximately 5000 clones with an average resolution of 1 Mb (http://www.vumc.nl/microarrays/index.html). After amplification of BAC clone DNA by ligation-mediated polymerase chain reaction (PCR) according to Snijders et al. (2001) all clones were printed in triplicate. Printing of clones was performed on codelinkTM slides (Amersham BioSciences, Roosendaal, NL) at a concentration of 1 µg/µl, in 150 mM sodium phosphate, pH 8.5, using a SpotArray72 printer (Perkin Elmer Life Sciences, Zaventum, BE). After printing slides were processed according to the manufacturer's protocol. Labeling and hybridization of tumor and reference DNAs were performed as described in detail by Snijders et al. (2001) with some modifications, namely, hybridizations took place in a hybridization station (HybArray12TM -Perkin Elmer Life Sciences, Zaventum, BE) and slides were scanned with Agilent DNA Microarray scanner (Agilent technologies, Palo Alto, USA), omitting the DAPI staining step. Segmentation and quantification of the spots was done using Imagene 5.6 software (Biodiscovery Ltd, Marina del Rey, California). Local background median intensity was subtracted from the signal median intensity for both test and reference channels and a ratio tumor/reference was calculated. The ratios were normalized against the mode of all ratios of the autosomes. As the clones were spotted in triplicate, the median value of the corresponding three intensities was taken into account for each clone in the array. Clones from which the intensities of the three spots had a standard deviation >0.2 were excluded. Furthermore, clones with more than 20% missing values in all carcinomas were excluded for further analysis. All the analyses were done excluding chromosome X, as in every hybridization a sex-mismatched reference DNA was used for quality control of the experiment. Clone positions were considered according to freeze May2004. After using a smoothing algorithm [25], DNA copy number ratios obtained by array CGH were recoded as gains and losses at the resolution of whole chromosome arms compatible with the data obtained by chromosome CGH.
Data analysis
Differences in frequencies of gene methylation between the different stages of disease progression and associations between promoter methylation and mutations were evaluated by the Pearson's χ 2 or Fisher exact test where appropriate. The total number of methylated genes, referred to as methylation index (MI), is defined as the number of genes methylated divided by the number of genes analyzed. The same accounts for the total number of gains and losses, referred to as chromosomal events, and the number of losses (8p21pter, 15q11-q21, 17p12-13 and 18q12-21) and gains (8q23-qter, 13q14-31, and 20q13) associated with advanced lesions, referred to as cancer associated events [22]. The Mann-Whitney U and Kruskal Wallis nonparametric test were used for comparing means of continuous variables. The McNemar test for paired cases was used to test the methylation differences between a subset of solitary carcinomas (Cs) and paired normal tissue.
Simple linear regression analysis was performed to investigate the correlation between the number of methylated genes and the number of chromosomal abnormalities. For all analyses SPSS software version 11.0 was used. All reported P values are two-sided, and a P value < 0.05 was considered statistically significant.
Promoter methylation in relation to adenoma-carcinoma progression
In order to investigate the timing of promoter methylation in colorectal cancer development, we studied the frequency of promoter methylation of genes which function in regulating diverse cell functions in the colorectal adenoma and carcinoma tissues. Our data demonstrate that nonprogressed adenomas (nA), progressed adenomas (pA), carcinoma parts of malignant polyps (Cmp) and solitary carcinoma (Cs) showed similar frequencies of promoter methylation for the majority of genes ( Table 2). No difference in mean methylation index (MI) (total number of methylated genes divided by the number of genes analyzed) between the different categories of neoplasm's was observed. However, p14 ARF methylation was found in 71.1% and 73.5% of the nonprogressed adenomas (nA) and progressed adenomas (pA) respectively and decreases to 53.3% and 37.1% of the carcinoma parts of malignant polyps (Cmp) and solitary carcinoma (Cs) respectively (P value: 0.006).
Since we observed that promoter methylation of the studied genes is already present in nonprogressed adenomas (nA), we were interested in the presence of promoter methylation in matching normal mucosa. For 18 solitary carcinomas (Cs) morphological normal tissue from the resection specimen was available (Table 3a). We performed a nonparametric test for matched pairs and compared the methylation profile of 18 solitary carcinomas to their corresponding normal mucosa. In 158 of the 162 (9 genes × 18 pairs) possible combinations the gene methylation status was assessable in the carcinoma as well as in the normal tissue. In 48.7% of the pairs (77/158) no difference in gene methylation was observed within pairs (N = C) (Table 3b). In 66.2% (51/77) of these pair both components were unmethylated while in 33.8% (26/77) the carcinomas as well as the corresponding normal tissue showed promoter methylation. In 43% (68/158) of all pairs gene methylation was present in the carcinomas while absent in the corresponding normal tissue (C > N), and only in 8.2% (13/158) of the pairs promoter methylation was only observed in the normal tissue (N > C).
The McNemar test for paired cases showed that promoter methylation of APC, p16 INK4A , GATA-4, and GATA-5 occurred significantly more frequent in the carcinomas when compared to corresponding normal mucosa (P -values 0.02, 0.02, 1.1 × 10 −5 and 0.008 respectively) (Table 3b). Although promoter methylation of hMLH1, O 6 MGMT and CHFR was present in the normal tissues, this was predominantly when the carcinomas was also methylated. For example, promoter methylation of hMLH1 was observed in 50.0% of the adenomas and 72.2% of the paired carcinomas (this high frequency could be explained by the fact Note: Results of epigenetic and genetic analyses of nonprogressed adenomas (nA), progressed adenomas (pA), carcinoma parts of a malignant polyps (Cmp) and solitary carcinomas (Cs). Listed are the frequencies of promoter methylation of 9 genes, methylation index (total number of methylated genes divided by the number of genes analyzed), p53 immunopositivity, APC and KRAS mutation, number of chromosomal abnormalities (chromosomal events) and number of cancer associated events (losses at 8p21-pter, 15q11-q21, 17p12-13, 18q12-21, and gain at 8q23-qter, 13q14-31 and 20q13) per group. Data on p53 immunopositivity, APC and KRAS mutations were available for a subset of cases; MSP and CGH have been done on all cases. that in 6 of the 18 pairs (33.3%) the carcinomas part showed microsatellite instability, data not shown. In 5 of these 6 pairs, the carcinoma as well as the paired normal tissue showed hMLH1 methylation). In 8 of the 9 pairs in which normal tissue showed hMLH1 promoter methylation, carcinoma tissue was methylated as well. In 6 cases a difference in methylation was present of which in 5 cases hMLH1 was methylated in the carcinomas while unmethylated in the paired normal mucosa. Comparable patterns were observed for promoter methylation of O 6 MGMT, RASSF1A and CHFR. More difference between paired carcinomas and normal tissues were observed for p14 ARF methylation. In 4 of the 9 cases in which a difference within pairs was observed, normal tissue displayed gene methylation while p14 ARF was not methylated in the corresponding carcinomas.
Promoter methylation in relation to genetic alterations
In order to study the relationship between promoter methylation and genetic alterations we analyzed mu-tations of three key genes involved in development of colorectal cancer, i.e. TP53, APC and KRAS.
Disruption of the p53 pathway, amongst others, can occur by loss of function of TP53 itself and by p14 ARF methylation [46]. The frequency of p14 ARF methylation significantly decreased in tumor progression (Table 2). In contrast, aberrant p53 status, indicated by p53 immunopositivity, increased from 14.3% in the nonprogressed adenomas (nA) through 34.2% in the progressed adenomas (pA) to 55.2% and 59.5% in the carcinoma parts of malignant polyps (Cmp) and solitary carcinomas (Cs) respectively (P value: 0.001). Case by case, p14 ARF methylation shows an inverse relation with p53 immunopositivity, approaching statistical significance (P value: 0.07).
For APC and KRAS, a similar pattern was found. APC was mutated in 50% and 63.2% of the nonprogressed and progressed adenomas (nA and pA), respectively, compared to 58.3% and 66.7% of the carcinomas parts of malignant polyps (Cmp) and solitary carcinomas (Cs) [22]. KRAS mutation was observed in 34.3% and 40.0% of the nonprogressed and progressed adenomas (nA and pA), respectively, and in 28.6% and Table 3a Promoter methylation frequencies in 18 paired carcinoma and normal tissues 28.1% of the carcinomas parts of malignant polyps (Cmp) and solitary carcinomas (Cs) ( Table 2). While neither the frequencies of APC mutation nor APC promoter methylation differ between the different stages of tumor development, case by case analysis indicated an inverse relation (P value: 0.06). A similar pattern and inverse relation was observed for KRAS mutation and promoter methylation of RASSF1A and hMLH1 (P values: 0.01 and 0.001 respectively).
Promoter methylation in relation to chromosomal alterations
The timing and interrelationship of promoter methylation and chromosomal alterations in tumor progression were analyzed by studying promoter methylation in cases without chromosomal abnormalities and by relating gene methylation status to the mean number of chromosomal and cancer associated events. Simple linear regression analyses revealed no correlation between the MI and the number of chromosomal abnormalities or number of cancer associated events. As described previously, the number of chromosomal abnormalities and especially the number of cancer associated events are associated with progressed lesions (P values: 4.1 × 10 −6 and 3.6 × 10 −10 respectively) (Table 2) [22]. Figure 1 shows that while the mean number of chromosomal events and cancer associated events increases during tumor progression the mean MI remains stable. In 13 cases (11 nonprogressed adenomas (nA) and 2 solitary carcinomas (Cs)) no chromosomal alterations were observed. The 12 adenomas without chromosomal abnormalities did not differ in MI from adenomas with chromosomal abnormalities. Interestingly, the 2 solitary carcinomas (Cs) without chromosomal abnormalities (both cases exhibit microsatellite instability; data not shown) were characterized by a MI of 0.96, while the MI of carcinomas harboring chromosomal alterations was 0.56 (P value: 0.006). No association between promoter methylation of a specific tested gene and the number of chromosomal abnormalities was observed.
One of the cancer associated chromosomal changes, deletion of 8p21, includes the GATA-4 gene, which was also a frequent target of epigenetic changes in these tumors. Promoter methylation as well as loss of heterozygosity could combine leading to loss of function of GATA-4. GATA-4 methylation was found in respectively 77.3% and 77.5% of the nonprogressed adenomas (nA) and progressed adenomas (pA) and in 75.8% and 86.5% of the carcinoma parts of malignant polyps (Cmp) and solitary carcinoma (Cs) respectively. In contrast, the frequency of loss of 8p21-pter increases in tumor development from 12.8% and 39.0% of the nonprogressed adenomas (nA) and progressed adenomas (pA) respectively to 45.5% and 42.1% of the carcinoma parts of malignant polyps (Cmp) and solitary carcinoma (Cs) respectively (P value: 0.005). The frequency in which loss of 8p21-pter is combined with GATA-4 methylation (GATA-4 M/8p-) increases during tumor development from 9% in nonprogressed adenomas (nA) to 25%, 30% and 32% in pro-gressed adenomas (pA), carcinoma parts of malignant polyps (Cmp) and solitary carcinoma (Cs) respectively. The frequency in which only GATA-4 is methylated (GATA-4/8p) is stable and occurs in 68.2% of the nonprogressed adenomas (nA) and 52.2%, 45% and 54% of the progressed adenomas (pA), carcinoma parts of malignant polyps (Cmp) and solitary carcinoma (Cs) respectively. Loss of 8p21-pter without concomitant methylation of GATA-4 (GATA-4 U/8p-) is infrequent occurring in 4.5% and 15.2% of the nonprogressed adenomas (nA) and progressed adenomas (pA) and in 15% and 8.1% of the carcinoma parts of malignant polyps (Cmp) and solitary carcinoma (Cs).
Discussion
In this study we attempt to elucidate the timing and interrelation of promoter methylation and genetic alterations in colorectal cancer development. Therefore we studied genetic and epigenetic events known to be associated with colorectal cancer development.
Considering the timing of epigenetic events in tumor progression, our results indicate that promoter methylation of the studied genes can be regarded as an early event in colorectal carcinogenesis. A high frequency of promoter methylation of multiple DNA repair-and tumor suppressor genes is already present in adenomas without any histological signs of progression, and malignant lesions showed similar frequencies of methylation. Even in morphologically normal mucosa from patients with solitary carcinomas (Cs) promoter methylation of hMLH1, MGMT, RASSF1A, p14 ARF and CHFR was observed, but, with exception of p14 ARF methylation, in lower frequencies compared to the carcinomas. P16 INK4A , APC, GATA-4, and GATA-5 methylation occurred predominantly in the carcinomas. Promoter methylation in normal tissues was in most cases consistent with the methylation profile of the paired carcinoma. However, additional studies involving normal colonic mucosa from individuals without cancers are required to determine the exact timing of promoter methylation of the studied genes.
Interestingly, hypermethylation of p14 ARF was more frequently present in nonprogressed adenomas (nA) and progressed adenomas (pA) when compared to carcinoma parts of malignant polyps (Cmp) and solitary carcinomas (Cs). This observation can possibly be explained by the concept that the transition from an adenoma to a carcinoma can be considered as a transition from a heterogeneous cellular population to one that is more homogeneous [17]. Even though promoter methylation is a dynamic process, this indicates that p14 ARF methylation is not necessarily associated with a definitive growth advantage.
Furthermore, since the tumor suppressor functions of p14 ARF is dependent upon the presence of functional p53 [14], p14 ARF methylation is possibly of greater importance in early stages of disease progression where TP53 mutations are not highly prevalent. This is supported by the observation that the frequency of p53 immunopositivity, as a marker of TP53 mutations, increases during colorectal cancer development.
The concept that epigenetic alterations occur most frequently during the early stages of tumor development as well as the presence of promoter methylation of hMLH1 and MGMT in normal colonic tissues of patients with colon cancers has also been shown by others [3,27,35]. Baylin and Ohm recently hypothesized that the early epigenetic alterations predispose cells to acquire the genetic abnormalities that proceed the neoplastic process [3]. In addition, Tlsty et al. showed that hypermethylation of the p16 INK4A promoter in mammary epithelial cells is associated with entrance into a state of unrestricted proliferation accompanied by chromosomal instability [37,38]. The actual mechanism involved is unknown but as epigenetical silencing of mismatch repair (MMR) genes causes a mutator phenotype [45] one would hypothesize that promoter methylation of "stability genes", such as p16 INK4A which retains proper cell cycle control, can initiate chromosomal instability. In this study however, promoter methylation of p16 INK4A shows no association with chromosomal instability. Also no association between promoter methylation of the mitotic checkpoint control gene CHFR and an increased number of chromosomal abnormalities was observed. These results are consistent with a study of Bertholon et al. [4] which showed that methylation of CHFR is not associated with chromosomal instability in cell lines. Apparently, in colorectal cancer, promoter methylation of other control genes needs to be evaluated to determine if epigenetic changes are indeed associated with the initiation of chromosomal instability.
Furthermore, we studied the relationship between epigenetic and genetic alterations in tumor development and inverse relations between promoter methylation and gene mutation within important regulatory pathways were observed. We confirmed previous reports that p14 ARF methylation shows an inverse correlation to TP53 mutation in colorectal cancer [12,14,34], which has also been observed in bladder can-cer [30] and non-small cell lung cancer [23]. In head and neck squamous cell carcinomas (HNSCC) an inverse correlation between TP53 mutation status and cyclin A1 methylation, another downstream target of TP53 has been described [39]. Similar relations have been shown for APC promoter methylation and mutation [13] and KRAS mutation and RASSF1A methylation [41] indicating that gene mutation and promoter methylation do not frequently occur simultaneously in the same pathway, but rather may act in a mutual exclusive or complimentary fashion.
A different approach to study the relationship between epigenetic en genetic silencing of a gene was to examine the relationship between promoter methylation of a gene, GATA-4, and deletion of the chromosomal location of the gene, loss of 8p21-pter. Both events occur frequently in tumor development, but no association was observed. Hypermethylaton of GATA-4 was not restricted to tissues with or without chromosomal loss of 8p21-pter. GATA-4 methylation occurs prior to loss of 8p21-pter and the number of cases in which both events were present increased during tumor progression.
In addition, no association between promoter methylation of a gene and the number of chromosomal alterations was observed. An observation which is in agreement with a recent study on hepatocellular carcinogenesis in which no correlation between the degree of chromosomal structural alterations and that of aberrant promoter methylation was present [26]. Furthermore, in nonprogressed adenomas without chromosomal abnormalities, high frequencies of promoter methylation were already present. Together, these observations suggest that promoter methylation of the selected DNA repair-and tumor suppressor genes precedes chromosomal abnormalities in colorectal cancer development.
In summary, the data indicate that promoter methylation of the selected genes can be considered as an early event which occurs prior to TP53 mutations and chromosomal instability. The association between gene methylation and pre-malignant lesions is highly relevant for methylation-marker based colorectal cancer screening. The observation that the presence of promoter methylation in normal tissues corresponds to the methylation profile of paired carcinomas suggests that methylation levels in normal colonic mucosa could serve as marker of risk of development of CRC. Given that aberrant DNA methylation can also be detected in stool DNA [15,29], studying methylation as common event in pre-malignant lesions is promising to provide novel specific biomarkers for risk assessment and secondary prevention [37].
|
2018-04-03T01:20:56.371Z
|
2006-12-12T00:00:00.000
|
{
"year": 2006,
"sha1": "b0f47ca1e0635848696264f0618dae30ebe1d836",
"oa_license": null,
"oa_url": "https://doi.org/10.1155/2006/846251",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b0f47ca1e0635848696264f0618dae30ebe1d836",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
264347111
|
pes2o/s2orc
|
v3-fos-license
|
Effect of age at menopause and menopause itself on high sensitivity C-reactive protein, pulse wave velocity, and carotid intima-media thickness in a Chinese population
Potential associations between menopause, age at menopause, and clinical indicators related to cardiovascular disease (CVD) have not been elucidated. To identify the risk of CVD early and contribute to its prevention and intervention, the present study used relevant biomarkers to evaluate the risk of CVD among pre- and postmenopausal women. An overall population of 816 women (aged 40–60 y) was evaluated as premenopause, natural early menopause, or natural late menopause (ages ≤ 48 and ≥52 y), with ages 49–51 years as reference (natural menopause). High-sensitivity C-reactive protein, carotid intima-media thickness, and brachial-ankle pulse wave velocity were measured. Triglycerides (TG), high-density lipoprotein cholesterol, and low-density lipoprotein cholesterol (LDL-C) of the postmenopausal group were each significantly higher than that of the premenopausal. However, the 3 menopausal groups were similar regarding hypertension, diabetes, triglycerides, high-density lipoprotein cholesterol, and low-density lipoprotein cholesterol. In the logistic regression model, the CRP, brachial-ankle pulse wave velocity, and carotid intima-media thickness levels were similar among the premenopause and early and late menopause groups. These results were unchanged after further adjustment for multiple confounders including age, smoking, drinking, salt intake habits, presence of hypertension, or diabetes mellitus. Menopause itself is a more important risk factor for CVD compared with menopause that begins at early or late age.
Introduction
Estrogen, an important female hormone, has anti-inflammatory and vasodilatation functions which before menopause may be protective against cardiovascular disease (CVD). [1]Natural menopause occurring early (i.e., before the age of 48 years) may lower the lifetime exposure to endogenous estrogen.The loss of ovarian function in early menopause may increase prolonged reninangiotensin-aldosterone system activity, resulting in inflammation, immune dysfunction, endothelial dysfunction, and subsequent vascular damage. [2]Changes in endogenous estrogens influence blood lipids, [3] and theoretically therefore, women experiencing menopause or early-onset menopause could be at higher risk of CVD.
Biomarkers of CVD include high-sensitivity C-reactive protein (hs-CRP), brachial-ankle pulse wave velocity (baPWV), and carotid intima-media thickness (CIMT).Specifically, hs-CRP is a biomarker of inflammation and associated with many health issues such as CVD, insulin resistance, type 2 diabetes mellitus, and all-cause mortality, [4] while baPWV is a marker of arterial stiffness, a risk predictor of CVD.The CIMT test is also a powerful predictor of CVD in women. [5]et, an association between menopause, age at menopause, and CVD is controversial.Associations between age at menopause and the relevant clinical indicators of CVD have not been established.The present study assessed the risk of CVD among pre-and postmenopausal women, as reflected by the biomarkers hs-CRP, baPWV, and CIMT.
Materials and methods
The ethics committees of Kailuan General Hospital and Beijing Tiantan Hospital approved this study.The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki.Written informed consent was obtained from each patient in the study.From June 2010 to June 2011, 2183 women in the APAC study completed a baseline survey.The data collected included cardiac risk factors: hypertension, diabetes, smoking, and drinking.Hypertension was defined as taking antihypertensive medication currently, or an average of 3 supine measures >140/90 mm Hg over 10 minutes through brachial blood pressure measurement.Diabetes was considered fasting glucose > 6.5 mmol/L, or a history of diabetes medication.Natural menopause was defined as the absence of menstruation for at least 12 months. [6]he participants were initially categorized as premenopause, or natural menopause according to age: younger than 40, 41-44, 45-48, 49-51, 52-54, or ≥55 years.
Measurements of CVD risk factors
An hs-CRP >2 mg/L was considered abnormal (quartile 1), by biochemical test.An automated apparatus (BP-203RPE IU, Omron, China) was used to measure baPWV, and the larger baPWV value on either the subject's left or right side was taken for analysis.A baPWV ≥ 1400 cm/s indicated peripheral atherosclerosis (quartile 1), in accordance with American Heart Association (1993).CIMIT was measured by high-resolution B-mode ultrasound with a 5-to-12 MHz linear array transducer (Philips iU-22 ultrasound system, Philips Medical Systems, Bothell, WA).On the longitudinal image of each carotid artery, CIMT was defined as the distance from the leading edge of the lumen-intima interface to the leading edge of the media-adventitia interface.A CIMT ≥ 1 mm was considered a sign of arterial intimal thickening (quartile 1).Lipids, including high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), triglycerides (TG), and total cholesterol were measured via laboratory examination.
Statistical analysis
All statistical analyses were conducted using SAS (version 9.4).Baseline characteristics are expressed as mean and standard deviation of continuous variables and percentages of categorical variables.Data was not normally distributed, and chi-squared or Kruskal-Wallis tests were applied to analyze group differences.After adjusting for age and other potential confounders, multivariable logistic regression models were constructed to examine hs-CRP levels, CIMT, and baPWV (in quartiles) of the menopause groups.A P-value < .05indicated statistical significance and all statistical tests were 2-tailed.
Results
A total of 1367 potential subjects were excluded, specifically 762, 193, 148, 80, 163, and 21, respectively, for lacking history of age at menopause; bilateral oophorectomy; hysterectomy; use of oral contraceptive; missing data (hs-CRP, baPWV, or CIMT); and use of hormonal therapy.Finally, 816 subjects were eligible and incorporated into the study (Fig. 1).
Clinical characteristics
The overall study population comprised 816 women aged 40 to 60 years (Table 1).The mean age of the subjects at menopause was 49.20 years.The age of the premenopausal group (48.78 y) was younger than that of the postmenopausal group (60.70 y), and the prevalence of hypertension and diabetes was significantly higher in the postmenopausal (P < .01).The age at menarche was strikingly different between the pre-and postmenopausal groups (P < .01).
The TG, HDL-C, and LDL-C of the postmenopausal group were each significantly higher than that of the premenopausal.However, the 3 menopausal groups were similar regarding hypertension, diabetes, TG, HDL-C, and LDL-C.
Hs-CRP, baPWV, and CIMT levels in study subjects
Each of the hs-CRP, baPWV, and CIMT levels in the premenopausal group were significantly lower than that of the 3 postmenopausal groups (P < .0001,.0001,=.0327, respectively), but there were no obvious differences among the postmenopausal groups (Table 2).
In the logistic regression, the subjects were classified according to hs-CRP, baPWV, and CIMT levels, with the following cutoff points: CRP ≥ 2 mg/L; baPWV ≥ 1400 cm/s; and CIMT ≥ 1.0 mm.After adjusting for age, and taking the menopausal women aged 49-51 years as reference, the hs-CRP, baPWV, and CIMT levels of the premenopause and early and late menopause groups were similar.These results were unchanged after further adjustment for multiple confounders, including smoking, drinking, salt intake habits, presence of hypertension, and diabetes mellitus (Fig. 2).
Discussion
This study found that biomarkers of CVD risk (hs-CRP, baPWV, and CIMT) were significantly higher in the postmenopausal women compared with the premenopausal, and TC, TG, and LDL-C levels were also significantly higher.This suggests that in postmenopausal women, menopause itself is a biologic marker of CVD.This is consistent with results from other studies.In a demographic study conducted in the Netherlands, Witteman et al [7] found that women who experienced natural menopause were at 3-times greater risk of atherosclerosis compared with premenopausal women.Menopause is a risk factor for age-related increase in arterial stiffness. [8]Another peer study also showed a higher rate of CVD events in postmenopausal women. [9]Age at menopause is not only a sign of reproductive aging, but also a high-risk indicator of CVD. [10]n the present study, there were no significant differences among the postmenopausal groups regarding hs-CRP, baPWV, or CIMT levels.After adjusting for age and other confounding factors (smoking, drinking, salt intake habits, hypertension, and diabetes mellitus), the hs-CRP, baPWV, and CIMT levels of women in the early and late menopause groups (≤48 and ≥52 y) did not differ significantly from that of the reference group (49-51 y).This suggests that there was no association between age at menopause and risk of CVD.This result is similar to that of other studies.The Japan Collaborative Cohort Study reported that early menopause had no significant association with stroke mortality. [11]Another study of 3304 Chinese women showed that women who experienced menopause at 46 years or younger were no more likely to have CVD than women aged 50 years at menopause. [12]y contrast, some studies have obtained opposite results.A previous peer study showed that early menopause (≤45 y) was associated with a higher rate of stroke (OR, 1.69; 95% CI, 1.25-2.30)compared with menopause at ages 45 to 52 years. [13]Women with premature or early-onset menopause (<40 and 40-44 y) had a significantly higher risk of a nonfatal cardiovascular event before the age of 60 years, compared with women with menopause at 50 to 51 years. [14]Muka et al [15] found that among women who experienced early-onset menopause (<45 y), the risk of CVD, cardiovascular mortality, and overall mortality were higher compared with women 45 years or older at menopause.Ley et al [16] reported that, after multivariable adjustment, women with a reproductive life span of <30 years experienced a 1.32-fold higher rate of CVD compared with women whose reproductive life was at least 42 years.The different outcomes among studies may be due to variations in population ethnicity, study design, or end point indicators.
In addition, an increase of CVD may occur only in the first few years after menopause.The early stage of menopause (i.e., <3 y after onset) is associated with a dynamic hormonal imbalance that may constitute a specific cardiovascular hazard. [17] study by Kryczka et al [18] showed that the transition period, rather than menopause itself, was associated with an increased cardiovascular risk.In the present study, the average age of the postmenopausal population was over 60 years, and most were in the late menopausal period.Thus, the risk factors had presumably stabilized.
Interestingly, a higher risk of CVD may influence the age at menopause rather than the reverse.In premenopausal women, a preemptive CVD event before the age of 35 years doubled the risk of early menopause, while those women with a first CVD event after the age of 35 years experienced a normal menopause around the age of 51 years. [19]
Limitations
This study comprised middle-and low-income women in the northern area of China.Therefore, the observed associations may not generalize to entire populations.Menopausal status was by self-report, which may increase the probability of bias.This was cross-sectional research, and as such it is difficult to draw a definitive conclusion about the true association between CVD and menopause or other risk factors.Furthermore, women who had undergone hysterectomy were not recruited in this research, being too few.
Conclusion
Menopause itself is a more important risk factor for CVD compared with menopause that begins at early or late age.
This is a cross-sectional study based on data collected from a subset of the Kailuan study, i.e., the subset termed the APAC or Asymptomatic Polyvascular Abnormalities Community study.The larger Kailuan study comprised 101,510 employees and retirees of the coal mining Kailuan Company, located in Tangshan, Hebei, China, in which participants' self-reports were followed up every 2 years from 2006 through 2014.The APAC was a prospective long-term follow-up observational study of asymptomatic polyvascular abnormalities in adults aged 40 years or more, with 5440 (40.1% women) subjects randomly selected from the Kailuan study in 2010.
Figure 1 .
Figure 1.Flow diagram of the participant selection.APAC indicates Asymptomatic Polyvascular Community.
Table 1
Baseline characteristics of the subjects.
Table 2
The hs-CRP, baPWV, and CIMT levels by age at natural menopause.
|
2023-10-21T06:18:18.652Z
|
2023-10-20T00:00:00.000
|
{
"year": 2023,
"sha1": "f3592cf920bb791289ceac9c6f46d54260f8a0a4",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "70f2cee0c38fd7500b9ebc4473d4b144d23bbf79",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
2243303
|
pes2o/s2orc
|
v3-fos-license
|
Total salinity stress on physico-chemical characterization of lecithin isolated from soya bean oil seeds grown in the coastal region of south, India
Article history: Received on: 25/05/2016 Revised on: 17/06/2016 Accepted on: 10/07/2016 Available online: 30/08/2016 Phospholipid is very essential in the balanced diet. The vegetarian people in the coastal area are habitant of using edible oil seeds as daily food grains. Salinity of water during cultivation decreases the accumulation of oil content (12-15%) in seeds. Present experiment was focused on total salinity and ionic stress on physiochemical characterization of extracted lecithin from soya bean oil under saline and non-saline cultivations. The experiment proves that the percentage of phospholipids in oil and lecithin is decreased by 1.02% and 8.08%, respectively under saline cultivation. The phospholipids of the lecithin were qualitatively identified by thin-layer chromatography (TLC) and high performance of liquid chromatography (HPLC). The Rf values for phosphatidyl-ethanolamine (PE), phosphatidyl-serine (PS), phosphatidyl-inositol (PI) and phosphatidyl-choline (PC) of samples were well related to the standard. HPLC spectrum is well resolved and the retention time (RT) is correlated the standard with high precision. Quantisation of phospholipids shows a variation in the average percentage of PC, PI, PS and PE as 17.925, 9.125, 5.9, 15.1 for saline cultivation and 22.25, 12.025, 8.525, 18.975 for non-saline cultivation. Average decrease in the percentage in saline cultivation is due to the total salinity and ionic (Na + Cl ) stress of water.
INTRODUCTION
Vegetable materials usually contain only small amounts of phospholipids, ranging from 0.3 to 2.5 wt. % (Wagner and Wolff, 1964).Phospholipids are complex lipids which contains one or more phosphate groups. Phospholipids are amphipathic in nature that is each molecule consists of a hydrophilic portion and a hydrophobic portion thus tending to form lipid bilayers (Dowhan and Bogdanov, 2002). In fact, they are the major structural constituents of all biological membranes, although they may be also involved in other functions such as signal transduction. The most abundant types of naturally occurring glycerol phospholipids are phosphatidyl-choline, phosphatidylethanolamine, phosphatidyl-serine, phosphatidyl-inositol, phosphatidyl-glycerol and cardiolipin. The structural diversity . * Corresponding Author Email:naikvinayak. 54@yahoo.com within each type of phosphoglyceride is due to the variability of the head group, variability of the chain length and degree of saturation of the fatty acid ester groups. Phosphatidyl-choline Providesfree choline in the blood for the manufacture of acetylcholine which regulates digestive, cardiovascular and liver functions (Alvarez et al., 1997;Spiers et al., 1996). is essentialfor the production of stable liposomes, anti-spattering agent in margarine. Phosphatidyl-serine is essential to the functioning of all body cell, supports brain functions that decline with age, memory enhancer (Kidd, 1996). In case of pure vegetarians, it is so essential to make up the phospholipid content having vegetables and edible oil grains in their balanced diet. The plant sources of phospholipids are soybean (Wagner and Wolff, 1964), rapeseed (Sosulki, 1981), sunflower (Litinova et al., 1971, cottonseed and peanut (Vijayalaxmi et al., 1969), ricebran (Adhikari andAdhikari, 1986), palm, coriander, carrot (Goh et al., 1982), papaya (Prasad et al., 1987), olive, barley, cucurbit (Schneider, 1989), corn, karanza, castor bean (Paulose et al., 1966, cocoa (Parsons et al., 1969), neem (Prasad et al., 1981), sesame, khakan (Prasad et al., 1979), pear, quince, tobacco (Zlatanov et al., 2000. Phospholipids are removed as by-product during the degumming process of vegetable oil refining. Soybean seed is a major source of high-quality protein and oil for human consumption (Katerji et al., 2001). The unique chemical composition of soybean has made it one of the most valuable agronomic crops worldwide (Thomas et al., 2003). Its protein has great potential as amajor source of dietary protein. The oil produced from soybean is highly digestible and contains no cholesterol (Essa and Al-ani, 2001). Growth, development and yield of soybean are the result of genetic potential interacting with environment. Protein and oil content is related with environmental factors like moisture, temperature etc.
The 2002 U.S. soybean crop had an average protein (35.5%) and oil (19.3%)content because of hot and dry weather conditions in those areas contributed to poor yields (Brumm and Hurburgh Jr, 2002). Soybean seed production may be limited by environmental stresses such as soil salinity (Ghassemi-Golezani et al., 2009). Minimizing environmental stress will optimize seed yield (McWilliams et al. 2004). Soil salinity, resulting from natural processes or from crop irrigation with saline water, occurs in many arid and semi-arid regions of the world (Meloni et al., 2004). Salinity is a worldwide problem, affecting about 95 million hectares worldwide (Kazemghassemi-Golezani et al., 2010). The UNEP (United Nations Environment Program) estimates that 20% of the agricultural land and 50% of the cropland in the world is salt-stressed (Yan, 2008).
Most of the salt stresses in nature are due to Na + salts, particularly NaCl (Demirel, 2005). High salinity lowers water potential and induces ionic stress, and results in secondary oxidative stress. It severely limits growth and development of plants by affecting different metabolic processes such as CO 2 assimilation, oil and protein synthesis (Nasir khan et al., 2007). Present work is mainly focused on the variation of phospholipids content due to salt tress of salinity of water during cultivation of soybeans.
Chemicals
2-Propanol, n-hexane, acetic acid and chloroform with highest purity (HPLC grade) were procured from Hi-Media. Methanol, acetic acid, sodium acetate, per chloric acid, ammonium molybdate, aminonaphtholsulphonic acid, potassium dihydrogen phosphate, petroleum ether and acetone with highest purity (AR grade) were procured from S.D. Fine and Merck.
Collection of samples
Four varieties of soy bean seeds were collected from coastal and non-coastal region of Karnataka, Tamil Nadu and Andhra Pradesh (India) available in the weekend market. The seeds were dried under sun light first and then kept at 90 o C (about three hours) in an oven. Then these are fine powdered and used for the extraction of oil. The samples were labelled as soya bean-a, soya bean-b, soya bean-c and soya bean-d for saline (coastal) cultivation and soya bean-a o , soya bean-b o , soya bean-c o , soya bean-d o for non-saline (non-coastal) cultivation.
Extraction of oil
50±0.5 g of seed powder is packed and stapled in a Whatman grade no. 42 filtered paper. The packet is inserted into the middle piece of soxhlet extractor. Oil was extracted using petroleum ether for three hours. Petroleum ether was recovered and the oil was dried 85 o C in a preheated oven for one hour. Oil obtained is cooled in a desiccator and weighed for constant weight.
Extraction of Lecithin
5 ± 0.2 g of fresh oil was dissolved in analytical grade acetone and stirred well. The insoluble lecithin was filtered and flushed with N 2 gas. A feathery material was obtained which on drying in oven at 75-80 o Cforms a reddish brown transparent solid. It was weighed for constant weight.
High Performance Liquid Chromatogram (HPLC) analysis of phospholipids
High Performance Liquid Chromatogram (HPLC) was recorded using Shimadzu LC-2010HT instrument series of wavelength range 190-600nm with bandwidth 8nm.Instrumental wavelength accuracy and wavelength reproducibility are ±1nm and ±0.1nm. Lichrosorb, Si-60, 10µm (C 18 ) column was saturated by 2-propanol and maintained at 30°C. 10µl of the sample in mobile phase was programmed for injection and the mobile phase n-Hexane: 2-propanol: acetate buffer (8:8:1, v/v) was pumped at rate of 2ml/min and chromatograms were recorded at 206 nm.
Quantization of lecithin
Sample was prepared by dissolving 0.1±0.005gm of extracted lecithin in chloroform according official methods and recommended practices of the American Oil Chemists Society (ACOS, 4 th Edn 1990). Sample was prepared by dissolving 0.1±0.005gm of extracted lecithin in chloroform. Aliquot equivalent to 0.010±0.005gm was pipette into 30ml graduated test tubes and digested by adding 0.9ml 70 % per chloric acid at 80-90 o C followed by 120°C and 150-180°C on sand bath. The colourless and clear solution in the test tube is cooled ad volume is made up to 2ml. To this 7.0ml of distilled water, 1.5ml of 2.5% ammonium molybdate and 0.2ml aminonaphtholsulphonic acid were added. The test tubes were placed in boiling water bath for exactly 7mins and cooled for 20mins. Then optical density was measured at 830nm using UV-Spectrophotometer against the blank.
A calibration curve was plotted using AR-grade potassium dihydrogen phosphate solutions containing 1 to 5 µg of phosphorus. Phospholipid content was then computed from the measured phosphorus, where A = phosphorus content in µg obtained from the calibration curve, W = weight in g of the sample in the aliquot, and 30.97 = the factor converting phosphorus into phospholipids.
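The formula line itself was lost in extraction, so the following Python sketch shows one conventional reading of the calculation, assuming phospholipid (%) = A × 30.97 × 100 / (W × 10⁶) with A and W as defined above; the standards and the sample optical density below are illustrative numbers, not data from this study.

```python
# Hedged sketch of the phosphorus-to-phospholipid calculation described
# above. Assumes a linear calibration of optical density against 1-5 ug
# of phosphorus and the conventional AOCS-style expression
#   phospholipid (%) = A * 30.97 * 100 / (W * 1e6)
# where A = ug phosphorus from the calibration curve and W = aliquot
# weight in g. All numbers are illustrative.
import numpy as np

def phosphorus_ug(od, od_std, ug_std):
    """Invert the linear calibration curve for a sample optical density."""
    slope, intercept = np.polyfit(ug_std, od_std, 1)
    return (od - intercept) / slope

def phospholipid_percent(A_ug, W_g, factor=30.97):
    return A_ug * factor * 100.0 / (W_g * 1e6)

ug_std = [1, 2, 3, 4, 5]                  # standards, ug phosphorus
od_std = [0.11, 0.21, 0.33, 0.44, 0.54]   # their optical densities at 830 nm
A = phosphorus_ug(0.42, od_std, ug_std)   # sample OD -> ug phosphorus
print(round(phospholipid_percent(A, 0.010), 2), "% phospholipid")
```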
TLC analysis of phospholipids
TLC plates (20×20 cm glass plates coated with 0.2 mm silica gel) were activated at 110°C for 1 hour. The developing solvent was chloroform:methanol:acetic acid:water (25:15:4:2, v/v). Spots were identified by iodine vapour and eluted with chloroform.
An aliquot containing 10 µg of isolated lecithin dissolved in chloroform was spotted on the TLC plates. The chromatogram was developed using the mobile phase chloroform:methanol:acetic acid:water (25:15:4:2, v/v), following exactly the conditions given by Skipski and others. The plates were dried at room temperature for 20 min after an average running time of 1½ hours. Spots were identified by iodine vapour and encircled with a sharp needle. When the iodine vapour had completely evaporated, the silica gel was scraped off with a razor, quantitatively transferred into centrifuge tubes, and 2 ml of chloroform was added. The contents were mixed well and centrifuged, and the centrifugates were collected in labelled 30 ml graduated test tubes. The extraction was repeated twice for each spot and the extracts were combined in the test tubes. Finally, the chloroform was evaporated and the residues obtained were used for the determination of the different phospholipids.
Determination of oil content
Oil was extracted from the powdered seeds of four different soya bean varieties by the Soxhlet extraction method. The seed powder contained 0.9-1.5% moisture. The average percentage of oil recovered from soya bean seeds grown under coastal and non-coastal cultivation by the Soxhlet method was reproducible. Oil content per seed decreases under coastal cultivation because of salinity stress. Table 1 correlates oil accumulation in soya bean seeds: coastal cultivation gives 18.04% oil and non-coastal cultivation 20.13%, which relates to the literature value of 21.38±0.6% reported by the American Soybean Association (Brumm and Hurburgh Jr, 2002). The recovery of oil shows a regression R² equal to 0.0667 for coastal and 0.5996 for non-coastal cultivation (Fig. 1). Seeds from non-coastal cultivation accumulate 2.09% more oil than seeds from coastal cultivation. This deterioration in oil content is caused by the salting-out effect of NaCl on soya proteins. The present work focused on saline and non-saline water conditions for the cultivation of soya beans; seashore farmers depend on saline water and face salinity stress.
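As a quick arithmetic check, the reported percentages correspond to the 50 g charge of seed powder used per extraction; the oil masses below are back-calculated for illustration only.

```python
# Worked example of the oil-recovery arithmetic above; 50 g of seed powder
# was used per Soxhlet extraction, and the oil masses are illustrative
# values back-calculated from the reported percentages.
def oil_percent(oil_g: float, powder_g: float = 50.0) -> float:
    return round(100.0 * oil_g / powder_g, 2)

print(oil_percent(9.02))    # 18.04 (coastal)
print(oil_percent(10.065))  # 20.13 (non-coastal)
```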
Analysis of oil and lecithin for phospholipid content
Phospholipid content in oil and lecithin was determined by estimating the amount of phosphorus by the perchloric acid digestion method, following the official methods and recommended practices of the American Oil Chemists' Society.
The amount of phosphorus in oil and lecithin was estimated against the sample aliquot weight. Table 2 lists the reproducible results for the phospholipid content of soya bean oil and lecithin under saline and non-saline cultivation. The phospholipid content of oil and lecithin from soya beans under saline cultivation is relatively lower compared with that of seed oil from soya beans grown under non-saline cultivation.
The amount of phosphorus was determined by recording the optical density against the standard calibration curve (Fig. 2), thereby determining the percentage phospholipid content of the oil and lecithin. The phospholipid content of the oil and lecithin under saline (coastal) cultivation is 2.91% and 59.64%, compared with 3.93% and 67.72% under non-saline (non-coastal) cultivation. These results reveal that the salinity of the water is the main cause of the deterioration of the oil content of soya bean seeds: salinity stress on growing soya beans decreases the oil and phospholipid content per seed. Fig. 3 shows the one-dimensional TLC of the isolated lecithin (AIM) and of standard commercial lecithin of 35% purity with respect to PC. The spots were identified as PE, PS, PI and PC in comparison with the standard on exposure to iodine vapour in an iodine chamber. The R_f value of each spot was determined as the ratio of the distance travelled by the spot to the distance travelled by the solvent (solvent front) and related to the standard. Table 3 correlates the R_f values of the samples and the standard. The standard R_f values of PE 0.912, PS 0.835, PI 0.794 and PC 0.670 correlate well with the average sample R_f values of 0.914, 0.834, 0.788 and 0.671. The sample R_f values are thus precisely correlated with the standard, with high accuracy and negligible deviation.
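A tiny helper makes the R_f arithmetic just described explicit; the distances used below are illustrative, not measurements from this study.

```python
# R_f = distance travelled by the spot / distance travelled by the solvent
# front, as described above; distances (cm) are illustrative.
def rf(spot_cm: float, front_cm: float) -> float:
    return round(spot_cm / front_cm, 3)

print(rf(6.70, 10.0))  # 0.67, matching the reported PC value
```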
High Performance Liquid Chromatography (HPLC)
The identification of the different phospholipids was further confirmed by HPLC. Fig. 4 shows closely comparable chromatograms. The retention time of each phospholipid was well resolved and correlated. Table 4 lists the retention time of each phospholipid and correlates the chromatograms of the standard and the samples.
Quantitation of Phospholipids in Lecithin
An aliquot containing 10 µg of each lecithin sample in chloroform was spotted quantitatively on TLC plates and the spots were identified by iodine vapour (Fig. 5). The corresponding spots were marked with a sharp needle and, after the iodine vapour adhering to the silica gel had completely evaporated, the silica gel was removed with a razor.
The phospholipids were quantitatively estimated by the experimental procedure described above. Table 5 summarises the relative percentages of PC, PI, PS and PE in the lecithin isolated from the different varieties of soya bean seeds obtained from saline and non-saline cultivation. Quantitative determination of the phospholipids by TLC gives average percentages of PC, PI, PS and PE of 17.925, 9.125, 5.9 and 15.1 for saline cultivation and 22.25, 12.025, 8.525 and 18.975 for non-saline cultivation. The relative percentage of each phospholipid under non-saline cultivation is higher than under saline cultivation.
The main cause of the decrease in phospholipid content is the salinity and ionic (Na+/Cl−) stress imposed by sodium chloride on the accumulation of oil and phospholipids. A high total salt concentration in the water decreases nitrogen fixation and salts out the accumulating protein. The overall results evidence that salinity stress adversely affects the accumulation of phospholipids in soya bean oil seeds.
CONCLUSION
The present work focused on salinity stress in the cultivation of soya beans under saline and non-saline conditions. Four varieties of soya bean seeds from coastal and non-coastal cultivation were analysed for oil and phospholipid content. The experimental results show that non-saline (non-coastal) cultivation gives higher oil and total phospholipid contents (20.13% and 67.72%) than saline (coastal) cultivation (18.04% and 59.64%). The individual phospholipids (PC, PI, PS and PE) were also identified qualitatively by one-dimensional TLC and HPLC. The results correlated well with the standard, and quantitation of the phospholipids by TLC showed that the individual phospholipid content under non-saline cultivation is higher than under saline cultivation. In summary, the overall results show that the combined ionic and salinity stress causes a decrease in the total and individual phospholipid contents of soya bean seeds.
Public Authority in Interim Replacement of Members of the House of Representatives
The House of Representatives is a people's representative institution, or legislative body. Members of the House of Representatives, at both the central and regional levels, can be replaced by other members through a mechanism known as Interim Replacement. The purpose of interim replacement is to keep legislators performing effectively and efficiently. A problem arises, however, when members of the legislature are replaced in the middle of their term of office. This study analyses the mechanism of interim replacement of members of the House of Representatives and the involvement of voters in that mechanism. The research is normative juridical research using secondary data, in the form of legislation, research results, and journals, as its primary source. The results show that the interim replacement mechanism does not involve the public, namely the voters. Disputes between members of the legislature and their supporting party cannot be avoided because of flaws in the way the mechanism is implemented. The mechanism for the Interim Replacement of members of the House of Representatives therefore needs to be improved, so that it is not only the authority of political parties but also a public authority, namely that of the constituent voters.
and deputy regional heads. Political parties in a democratic country are pillars of democracy, or of the implementation of popular sovereignty. Political parties have a central and essential status and role in every democratic system because they act as a liaison between the state government and its citizens. 2 Political parties are pillars of democracy, and if the people no longer trust this pillar, that is a severe threat to the sustainability of democracy in Indonesia. Returning political parties to the right track in the current democratization is a shared responsibility. Political parties are not just organizations where politicians gather; they must also carry out their functions for the benefit of the community. In a democratic system, parties play a significant role. The general election is one way to determine the representatives who will sit in the people's representative body. Being a political representative within the framework of a democratic system carries a relatively significant burden, responsibility, and political consequences. In addition to legal entanglements arising from violations of statutory regulations that can be materially proven in a general court, council members face the challenge of being sued politically, both by their parent political parties and by constituents and society in general. 3 In practice, Indonesia tends to adopt a diversified model in which members of the House of Representatives are representatives of political parties. This means that members of the House of Representatives must represent the interests of political parties and voice the positions of political parties. This is reinforced by the existence of a system or mechanism of Interim Replacement used by political parties against members of the People's Representative Council who come from the political party concerned. The Interim Replacement mechanism (recall right), which can replace elected Council Members with new Members, creates problems: members of the House of Representatives are torn between being representatives of the people and representatives of political parties, a conflict caused by the culture of the Indonesian political system. 4 Although Interim Replacement (the recall right) is common, it is not uncommon for it to be used for particular interests, such as the interests of political parties, the personal interests of their cadres, or the interests of the election organizers themselves, who may take advantage of circumstances to gain an advantage, especially when accompanied by juridical reasons to justify this. For example, Harun Masiku, a cadre of the winning party of the 2019 PDI-P Legislative Election, wanted to use the Interim Replacement mechanism to become a Member of the DPR, replacing Nazarudin Kiemas, who had died, in the South Sumatra One electoral district. Yet, in terms of votes, Harun Masiku was only in the bottom third with 5,870 votes, quite far behind the late Nazarudin, who got 145,752 votes; the second-largest number of votes in the same electoral district belonged to Rizki Aprilia, with 44,402 votes. To secure his appointment as an Interim Replacement Member of the House of Representatives (Dewan Perwakilan Rakyat, DPR), Harun Masiku bribed a KPU commissioner, with the payment presented as operational funds 5 .
In this case, the General Elections Commission and PDI-P had different interpretations regarding the Interim Replacement. PDI-P actively pursued judicial review of General Election Commission Regulation No. 3 of 2019 concerning voting in the General Election; the application was officially submitted on June 24, 2019. One of the contents of the petition was that, in the recapitulation of votes, the party has the authority to determine which candidates are entitled to receive the votes of candidates who have died, because control of the nomination lies in the hands of the political parties participating in the election. On July 19, 2019, the Supreme Court issued Decision No. 57 P/HUM/2019, which provides: determination of the votes of legislative candidates who have died is left to the leadership of the political party, to be given to the legislative candidates considered the best.
After getting the green light from the Supreme Court with this decision, the DPP PDI-P sent a letter to the KPU on August 5, 2019, signed by the Secretary-General of the PDI-P, asking the KPU to implement the decision and transfer Nazarudin Kiemas' votes to Harun Masiku. On September 13, 2019, the PDI-P sent a second letter with the same content, again asking the KPU to transfer Nazarudin Kiemas' votes to Harun Masiku. On December 5, 2019, it wrote for a third time in the same vein, this time in the form of an application for the Interim Replacement of Rizki Aprilia by Harun Masiku.
However, the General Election Commission rejected all of PDI-Perjuangan's applications and still designated the candidate with the second-largest number of votes, Rizki Aprilia, as the replacement for Nazarudin, based on Election Law No. 7 of 2017, namely: elected candidates for members of the DPR, Provincial DPRD, and Regency/City DPRD as referred to in paragraph (1) shall be replaced by the KPU, Provincial KPU, and Regency/Municipal KPU with candidates from the final candidate list of the same political party participating in the election in the same electoral district, based on the next-largest number of votes. According to the General Elections Commission, the Supreme Court's decision contradicts Law No. 7 of 2017 concerning General Elections, so the Commission chose to set the Supreme Court's decision aside even though that decision is final and binding. In its decision, the General Election Commission adhered to the Act because, based on Law No. 12 of 2011, the Act sits directly under the Basic Law in the hierarchy of legislation; according to the Commission, the legal standing of the Act is therefore higher than that of the Supreme Court's decision.
In addition, the replacement of representative members by their supporting parties with other cadres is a right whose exercise limits the House of Representatives in carrying out the people's mandate. 6 Based on the General Election Commission's data for the 2014-2019 legislature, 22% of members of the House of Representatives held Interim Replacement status, which means that many members of the House of Representatives were replaced without a direct election process. Disputes between political parties and the members of the House of Representatives they replace mid-term are also not uncommon, at both the central and regional levels of the DPR.
1. What is the mechanism for the Interim Replacement of Members of the House of Representatives in the middle of their term of office by political parties?
2. How should voters be involved in the mechanism of Interim Replacement of Members of the House of Representatives?
III. Research Methods
The method used in this research is normative juridical: it examines the rule of law as the foundation on which various systems considered problematic become legal events. The method aims to construct a juridical argument for determining how a legal event is to be assessed and how a juridical event should properly occur. The data used in this study are secondary data, obtained by means of a literature study of documents, literature, and scientific writings, through a search of legislation, concepts, views, and legal principles related to the subject, supported by primary, secondary, and tertiary legal materials. This is library research, carried out through a series of readings, note-taking, and citation of books, using data selected according to the research objectives. The methods are arranged systematically, logically, and rationally. Once the primary and secondary data in the form of documents have been obtained, they are analysed against the regulations related to the problem being studied. 7
IV. Research Results And Discussion
1. Interim Replacement of Members of the House of Representatives in the Middle of their Term of Office by Political Parties
Interim replacement of DPR members is associated with the recall. Etymologically, the word recall in English has several meanings; according to Peter Salim (in Contemporary English-Indonesian), these are to remember, to call back, or to cancel. Interim replacement is defined as the process of withdrawing or replacing members of the DPR by the parent organization, namely a political party 8 . Recall consists of the word "re", which means back, and "call", which means to call or summon; put together, recall means to call back. The right of interim replacement has been defined by several experts, among them Mh. Isnaeni, who said: the right of interim replacement is generally a "sword of Damocles" hanging over each member of the DPR. With the recall right, DPR members wait for instructions and guidelines from their faction leaders rather than acting on their own initiative, since acting without the approval of the faction leader is likely to be a fatal mistake that can result in recall. According to B.N. Marbun, recall is a right of the parent organization to replace members of the DPR. 9 The right of interim replacement, or recall, is regulated in Law No. 17 of 2014 concerning the MPR, DPR, DPD, and DPRD. Before an interim replacement is applied, there must be an official who resigns or is replaced. The decision to replace a DPR member mid-term depends entirely on the political party that nominated him; interim replacement functions as a control mechanism of the political party over its representatives sitting as members of Parliament. There is no problem with the recall right being in the hands of a political party as long as the replacement of DPR members follows the terms and conditions clearly regulated in Article 239, paragraphs (1) and (2), and is carried out objectively, on the basis of clear and concrete parameters that are not open to multiple interpretations. However, the facts of state dynamics make clear that recalls carried out by political parties are thick with political content. 10 For example, the National Awakening Party (Partai Kebangkitan Bangsa, PKB) decided to withdraw its members Lily Wahid and Efendy Choiri from the House of Representatives over their political stance, owing to differences of opinion during the decision on the proposed tax inquiry right. The decision to recall the two PKB cadres was thick with political nuance, namely securing PKB's place in the SBY-Boediono government coalition; this was borne out when President SBY immediately praised PKB's consistency and loyalty in supporting government policies. 11 Recalls by political parties harm the country's political life. The negative effects include, first, curbing and binding the reasoning of DPR members who are critical and want to voice their constituents' concerns; second, instilling in DPR members a fear of their parent organization (the political party), which can cause them to prioritize the interests of their political parties and no longer voice the aspirations of their constituents.
For these and other reasons, recall by political parties shifts the sovereignty of the people to the sovereignty of political parties and injures the rights of the constituents who elected their representatives to sit as members of Parliament in the expectation that their aspirations would be fought for. 12 In a genuinely democratic system, the "party recall" system should be abolished and replaced with a "constituent recall" system. A member of the People's Representative Council should not be dismissed from his position as a representative of the people unless the person concerned violates the law, violates the code of ethics, resigns, or dies in office. A member of the People's Legislative Assembly should not be dismissed from his position by being withdrawn or recalled by the leadership of his political party for holding an opinion different from that of the party leadership, or for other reasons contrary to the principle of popular sovereignty under which he was elected. Moreover, since the decision of the Constitutional Court, the appointment of DPR members has been carried out on the principle of majority vote. Therefore, the people's aspirations should not be suppressed just because their representatives hold opinions different from the party leadership's. 13
2. Involvement of Voters in the Mechanism of Interim Replacement of Members of the House of Representatives
To implement the principles of popular sovereignty and democracy, the people should be involved in the recall mechanism. If the people, as holders of sovereignty, have the right to choose their representatives, then voters should also have the right to dismiss, or to propose the dismissal of, a member of the DPR/DPRD. According to LIPI political observer Syamsuddin Haris, the mechanism for recall or interim replacement (Pergantian Antar Waktu, PAW) of DPR members needs to be changed. PAW should be both an authority of political parties and a public authority belonging to the constituents of DPR members. If it rests with a political party alone, PAW is vulnerable to misuse, and dismissal can be based on likes and dislikes. The public therefore needs to be involved in the PAW process. Constituents have the right to control their representatives; if the people's representatives do not work according to the people's mandate, the constituents or the public can take the initiative to have the member concerned recalled. The mechanism can vary: it could be a limited referendum, or a petition signed by a set number of constituents. In this way, the legitimacy of recall and PAW would come not only from political parties but also from the public. 14 Under the current Indonesian constitution, the recall mechanism should provide more space for constituents as the holders of sovereignty. There are several options for the use of recall rights against members of the DPR. The first is a form of impeachment, with a procedure for holding a re-election whose substance is whether or not to recall members of Parliament considered no longer capable of carrying out their duties; this can be carried out by collecting voters' signatures and photocopies of identity cards, adjusted to the electoral divisor. 15 The second, as in the United States, is for recall to be carried out by collecting signatures from Senators until agreement is reached to replace Senators considered incompetent to carry out their duties as members of Parliament, with the results of the signature collection then accounted for and brought before the Honorary Board. The first solution is recall by the constituents, or constituent recall, while the second uses a parliamentary organ, namely the Honorary Body. The two models above can be applied in Indonesia under the considerations
Anticancer Effect of Second-line Treatment for Castration-Resistant Prostate Cancer Following First-line Treatment with Androgen Receptor Pathway Inhibitors
Introduction: Studies on the effect of androgen receptor pathway inhibitors (ARPI), docetaxel (DTX), and radium-223 (Ra-223) after first-line treatment with ARPI in patients with castration-resistant prostate cancer (CRPC) are scarce. This study compared the efficacy of treatments after ARPI for CRPC. Methods: Patients with CRPC who received ARPI as first-line treatment and different second-line treatments were retrospectively reviewed. Clinicopathological backgrounds and treatment outcomes, including maximum prostate-specific antigen (PSA) decrease, progression-free survival (PFS), and overall survival (OS), were compared between second-line treatments. Results: In total, 88 patients were enrolled. Forty-one (46.6%), 37 (42.0%), and 10 (11.4%) patients were treated with ARPI, DTX, and Ra-223, respectively. Patients whose PSA levels were not adequately reduced by first-line treatment with ARPI tended to be enrolled in the DTX group (P = 0.030). PSA decrease was not significantly different between treatments. PFS in the DTX group was significantly better than in the other two groups (P = 0.023). In multivariate analysis, DTX was an independent prognostic factor for better PFS compared to ARPI (hazard ratio, 95% confidence interval; 0.44, 0.25-0.79, P = 0.006). Subgroup analysis showed a favorable impact of DTX on PFS in patients with Gleason score >8 (interaction P = 0.027) and a PSA decline >50% (interaction P = 0.019) during first-line treatment with ARPI. However, no significant difference in OS was observed between the different second-line treatment groups. Conclusions: This study suggests that in patients with CRPC, second-line treatment with DTX following progression on first-line ARPI is more beneficial than second-line treatment with ARPI or Ra-223.
In retrospective studies, it has been reported that alternating the two ARPIs (abiraterone and enzalutamide) offers limited efficacy (9), (10). In addition, Chi et al. conducted a phase II clinical trial using both ARPIs (abiraterone and enzalutamide), and the efficacy of alternating therapy with both ARPIs was not significant (11). In the PLATO study, patients received enzalutamide and, at PSA progression, were assigned to receive abiraterone plus placebo or abiraterone plus enzalutamide; the efficacy of second-line therapy was limited, with a PSA decline >50% in only 1%-2% of patients (12). Clinical trials comparing the efficacy of different second-line treatments after first-line ARPI for CRPC have not been conducted. Here we investigated the therapeutic outcomes of second-line treatments following first-line treatment with ARPI in patients with CRPC.
Patients
We included patients treated with a second-line agent after first-line treatment with ARPI for CRPC between 2014 and 2018. The eligibility criteria were (i) histopathologically diagnosed carcinoma of the prostate, (ii) confirmed failure of first-line treatment with ARPI, and (iii) age ≥20 years. Clinical staging was determined using the uniform TNM criteria based on the results of digital rectal examination, transrectal ultrasound, magnetic resonance imaging, computed tomography, and bone scan (13). All patients underwent needle biopsy regardless of radical prostatectomy, and the biopsy Gleason score was used in this study. The extent-of-disease score was divided into five grades according to the degree of bone metastasis shown on the bone scan, as follows (14): 0, normal; 1, fewer than six bone metastases, each ≤50% of the size of a vertebral body (one lesion the size of a vertebral body counted as two lesions); 2, 6-20 bone metastases; 3, >20 bone metastases but less than a "super scan"; and 4, "super scan" or bone metastases involving >75% of the ribs, vertebrae, and pelvic bones. Baseline clinical characteristics and serum data were obtained retrospectively from the patients' medical records. Written informed consent was obtained from all patients. This study (# 2019-230) was performed in accordance with the principles described in the Declaration of Helsinki and the Ethical Guidelines for Epidemiological Research enacted by the Japanese Government, and was approved by the institutional review boards of Kyushu University and Harasanshin Hospital.
Exposure
All patients had received abiraterone or enzalutamide as first-line treatment until disease progression, as defined by the Prostate Cancer Working Group criteria (15). After confirmed failure, patients received an ARPI, DTX, or Ra-223 as second-line treatment. An ARPI, either abiraterone (1,000 mg/day) with prednisolone (10 mg/day) or enzalutamide (160 mg/day), was administered as previously described (16), (17). DTX (70-75 mg/m²) was administered every 3 or 4 weeks as reported elsewhere (18), (19). Ra-223 was administered every 4 weeks according to the standard treatment regimen (8). Castration status, by surgical castration or continuous medical castration with a luteinizing hormone-releasing hormone agonist (goserelin acetate or leuprorelin acetate) or antagonist (degarelix acetate), was maintained during treatment. Doses and schedules were modified according to the severity of adverse events in each case. Treatment was discontinued at the physician's discretion, based on disease progression, adverse events, or the patient's refusal.
Endpoints
Disease progression was defined as (i) an increase in serum prostate-specific antigen (PSA) of >2 ng/ml, (ii) a 50% increase over the nadir, and/or (iii) the appearance of a new lesion or progression of one or more known lesions classified according to the Response Evaluation Criteria in Solid Tumors version 1.1 (15). The primary outcome of this analysis was progression-free survival (PFS) during second-line treatment with ARPI, DTX, or Ra-223. PFS and overall survival (OS) were calculated from the starting date of second-line treatment to the date of disease progression in the case of PFS and death from any cause in the case of OS. Surviving patients without disease progression or mortality were censored at the last follow-up visit.
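For concreteness, one reading of the biochemical part of this rule is sketched below. Criteria (i) and (ii) are joined by "and/or" in the definition, so the disjunctive interpretation used here is an assumption, and criterion (iii) is radiographic and assessed separately.

```python
# One possible reading of the PSA progression rule above: progression if
# PSA rises by >2 ng/ml over the nadir OR by 50% over the nadir. The
# "and/or" in the definition is interpreted disjunctively here, which is
# an assumption.
def psa_progression(psa_ng_ml: float, nadir_ng_ml: float) -> bool:
    rose_over_2ng = psa_ng_ml - nadir_ng_ml > 2.0
    rose_50_percent = psa_ng_ml >= 1.5 * nadir_ng_ml
    return rose_over_2ng or rose_50_percent

print(psa_progression(6.2, 3.0))  # True: +3.2 ng/ml and >50% over nadir
```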
Statistical analysis
All statistical analyses were performed using EZR version 1.50 software (Jichi Medical University Saitama Medical Center, Saitama, Japan) (20). Comparisons between the three groups were performed using the Kruskal-Wallis test. Survival was estimated using the Kaplan-Meier method, and the log-rank test was used to compare groups. Univariate and multivariate analyses were performed using the Cox proportional hazards regression model. We estimated the impact of DTX on survival in subgroup analyses according to age (<75 or ≥75 years), Gleason score (≤8 or >8), bone metastasis, visceral metastasis, time to CRPC (≥12 or <12 months), first-line agent, maximum PSA decrease during first-line treatment (≤50% or >50%), and median PSA at second-line treatment (<30 or ≥30 ng/ml). Differences in the prognostic impact between subgroups were investigated through interaction tests. The propensity score, that is, the probability of receiving DTX, was calculated using a logistic regression model with the following potential confounders: age, Gleason score, bone metastasis, visceral metastasis, time to CRPC, first-line agent, maximum PSA decrease during first-line treatment, and median PSA at second-line treatment. One-to-one propensity score-matched pairs were selected from the two groups by nearest-neighbour matching. All tests were two-sided, and P < 0.05 was considered statistically significant.
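Although the study used EZR (R-based), a minimal sketch of the matching step in Python is given below for illustration, assuming a pandas DataFrame df with one row per patient and a binary column dtx (1 = DTX, 0 = ARPI); the column names are illustrative, not taken from the study's dataset, and greedy nearest-neighbour matching is one common implementation choice.

```python
# Minimal sketch of logistic-regression propensity scores with greedy 1:1
# nearest-neighbour matching without replacement. Column names are
# illustrative; categorical confounders are assumed already encoded as
# numbers, and `df` is assumed to have no missing values.
import pandas as pd
from sklearn.linear_model import LogisticRegression

CONFOUNDERS = ["age", "gleason_score", "bone_met", "visceral_met",
               "time_to_crpc_months", "first_line_agent",
               "max_psa_decline_pct", "psa_at_second_line"]

def propensity_match(df: pd.DataFrame) -> pd.DataFrame:
    # 1. Propensity score = modelled probability of receiving DTX.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[CONFOUNDERS], df["dtx"])
    df = df.assign(ps=model.predict_proba(df[CONFOUNDERS])[:, 1])

    # 2. Greedy 1:1 nearest-neighbour matching on the score.
    treated = df[df["dtx"] == 1]
    controls = df[df["dtx"] == 0].copy()
    matched_idx = []
    for idx, row in treated.iterrows():
        if controls.empty:
            break
        nearest = (controls["ps"] - row["ps"]).abs().idxmin()
        matched_idx += [idx, nearest]
        controls = controls.drop(index=nearest)  # without replacement
    return df.loc[matched_idx]

# matched = propensity_match(df)
# ...then compare PFS/OS between arms within `matched` (e.g. log-rank test).
```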
Discussion
This study suggests that second-line treatment with DTX following progression on first-line ARPI is potentially beneficial compared with second-line ARPI in patients with CRPC. Of note, OS was comparable between the two groups.
Retrospective studies have investigated the efficacy of DTX therapy following ARPI for patients with CRPC. Miyake et al. reported that the PSA response, PFS, and OS during second-line therapy in the DTX group were significantly superior to those in the ARAT (androgen receptor axis-targeted, i.e., ARPI) group in patients with metastatic CRPC (21). Matsubara et al. evaluated the prognosis of 139 patients with CRPC treated with alternating ARPIs or switched to DTX following first-line ARPI and showed a significantly better PFS in the group that received DTX as second-line treatment compared with ARPI (22). Similarly, Oh et al. studied 345 patients with metastatic CRPC treated with chemotherapy (DTX/CBZ) or ARPI (23). PSA response, time to PSA progression, and the objective response were better in the chemotherapy group than in the ARPI group among patients with poor prognostic features (hemoglobin < 11 g/dl, LDH > upper limit of normal, albumin < lower limit of normal). Moreover, those receiving chemotherapy had significantly improved OS. A phase III randomized controlled trial (CARD trial) showed that the novel taxane cabazitaxel significantly improved clinical outcomes, including PFS and OS, compared with ARPI (abiraterone or enzalutamide), in patients with metastatic CRPC who had previously been treated with DTX and ARPI (24). Taken together, these results suggest that taxane chemotherapy is an appropriate therapeutic option as a subsequent treatment for metastatic CRPC after first-line ARPI.
Subgroup analysis in this study showed a favorable impact of DTX on PFS in patients with Gleason score >8 and a PSA decline >50% during first-line treatment with ARPI. Interestingly, this study showed the importance of the Gleason score at initial diagnosis even in second-line treatment for CRPC. Previous reports indicated that the presence of Gleason pattern 5, including tertiary (<5%), is a strong prognosticator in later-line settings (25), (26). This suggests that a cancer component with Gleason pattern 5 persists and regrows even after primary treatment, so that the Gleason score at initial diagnosis remains a clinically valuable parameter in this setting. Because a high Gleason score represents poor prognosis, the finding that DTX was more beneficial to patients with a high Gleason score seems consistent with the study by Oh et al., which indicated that anticancer efficacy was better with chemotherapy than with ARPI among patients with poor prognostic features (23). This study has several limitations. First, clinical data were collected retrospectively, and some data were missing. Second, the number of patients in each group was small, especially in the Ra-223 group. Third, the clinical background of patients may differ between the Ra-223 group and the ARPI and DTX groups, since radium-223 is indicated only for bone metastasis without visceral metastasis; in addition, Ra-223 is at a disadvantage with respect to PSA response. Finally, radiographic progression was not evaluated during second-line treatment owing to the retrospective nature of the study.
Conclusion
Our findings suggest that DTX may have superior anticancer efficacy as a second-line treatment for CRPC following first-line treatment with ARPI. Therefore, switching treatment from ARPI to chemotherapy may be an appropriate strategy.
Article Information
Conflicts of Interest: M.S. and A.Y. have received honoraria from Janssen Pharma, Astellas Pharma, AstraZeneca, Bayer, and Sanofi. E.M. received honoraria from Takeda and Janssen and scholarship donations from Sanofi, Astellas, Takeda, and Bayer.
A simple criterion for non-relative hyperbolicity and one-endedness of groups
We give a combinatorial criterion that implies both the non-strong relative hyperbolicity and the one-endedness of a finitely generated group. We use this to show that many important classes of groups do not admit a strong relatively hyperbolic group structure and have one end. Applications include surface mapping class groups, the Torelli group, (special) automorphism and outer automorphism groups of most free groups, and the three-dimensional Heisenberg group. Our final application is to Thompson's group F.
When a finitely generated group G is strongly hyperbolic relative to a finite collection L 1 , L 2 , . . . , L p of proper subgroups, it is often possible to deduce that G has a given property provided the subgroups L j have the same property. Examples of such properties include finite asymptotic dimension, see Osin [26], exactness, see Ozawa [28], and uniform embeddability in Hilbert space, see Dadarlat and Guentner [10]. In light of this, identifying a strong relatively hyperbolic group structure for a given group G, or indeed deciding whether or not one can exist, becomes an important objective.
One of the main results of this note, Theorem 2 in Section 3, asserts that such a structure cannot exist whenever the group G satisfies a simple combinatorial property, namely that its commutativity graph with respect to some generating set S is connected. We describe this graph in Section 3.
The other main result of this note, Theorem 9 in Section 4, asserts that whenever G has connected commutativity graph with respect to some set of generators, it has one end.
Using our main results, we deduce that many well-known groups do not admit a strong relatively hyperbolic group structure and have one end. These examples include all but finitely many surface mapping class groups, the Torelli group of a closed surface of genus at least 3, the (special) automorphism and outer automorphism groups of almost all free groups and the three-dimensional Heisenberg group. We remark that the one-endedness of the surface mapping class groups and (special) automorphism and outer automorphism groups of free groups considered herein was previously established by Culler and Vogtmann [9] using Bass-Serre theory. In Section 4.5, we prove that Thompson's group F has one end and is not strongly relatively hyperbolic using a minor variation of our main argument.
During the preparation of this work we were informed that Behrstock, Druţu, and Mosher [3] have found an alternative argument for the non-strong relative hyperbolicity of some of the groups treated here, using asymptotic cones and the description of relative hyperbolicity due to Druţu and Sapir [13].
Relatively hyperbolic groups
We assume that all groups appearing in this note are infinite, unless otherwise explicitly stated.
There are two related but inequivalent definitions of relative hyperbolicity that are commonly used, one due to Farb [15], and the other developed by Gromov [18], Szczepański [30], and Bowditch [5]. As we only go into as much detail as is required for us to state our results, we refer the interested reader to the cited papers for a more extensive treatment.
We first give the definition due to Farb [15] and refer to it as weak relative hyperbolicity. For a group G, a finite generating set S and a finite family of proper finitely generated subgroups $\{L_1, L_2, \ldots, L_m\}$, we form an augmentation $\mathrm{Cay}^*(G, S)$ of the Cayley graph $\mathrm{Cay}(G, S)$ as follows: Give $\mathrm{Cay}(G, S)$ the path-metric obtained by declaring each edge to have length one. Then, for each $1 \le j \le m$, adjoin to $\mathrm{Cay}(G, S)$ a new vertex $v_{gL_j}$ for each coset $gL_j$ of $L_j$ and declare the distance between each new vertex $v_{gL_j}$ and each vertex in the associated coset $gL_j$ to be one. We say that G is weakly hyperbolic relative to $L_1, L_2, \ldots, L_m$ if the resulting metric on $\mathrm{Cay}^*(G, S)$ is hyperbolic in the sense of Gromov. Farb [15] shows this definition does not depend on the choice of generating set.
In the same paper, Farb introduces the notion of bounded coset penetration (BCP), which is a weak local finiteness property satisfied by many important examples of weakly relatively hyperbolic groups. Roughly speaking, BCP imposes certain fellow-travelling conditions on pairs of quasigeodesics on Cay(G, S) with the same endpoints that enter cosets of the subgroups L j , 1 ≤ j ≤ m.
Bowditch [5] gives two equivalent dynamical notions of relative hyperbolicity, of which we recall the second. We will refer to this notion as strong relative hyperbolicity. We say that a group G is strongly hyperbolic relative to the family $L_1, L_2, \ldots, L_m$ of proper finitely generated subgroups if G admits an action on a connected, hyperbolic graph $\mathcal{G}$ such that $\mathcal{G}$ is fine (that is, for each $n \in \mathbb{N}$, each edge of $\mathcal{G}$ belongs to only finitely many circuits of length n), there are only finitely many G-orbits of edges, each edge stabiliser is finite, and the stabilisers of vertices of infinite valence are precisely the conjugates of the $L_j$.
We note that strong relative hyperbolicity (with respect to some finite collection of proper finitely generated subgroups) is equivalent to weak relative hyperbolicity plus BCP (with respect to the same collection of subgroups), see Szczepański [30] and Dahmani [11]. The BCP property is crucial: as noted in Szczepański [30], the group Z ⊕ Z is weakly, but not strongly, hyperbolic relative to the diagonal subgroup {(m, m) | m ∈ Z}.
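To make the $\mathbb{Z} \oplus \mathbb{Z}$ example concrete, the following hedged sketch builds a finite window of the coned-off (augmented) Cayley graph; it is purely illustrative and of course proves nothing about hyperbolicity.

```python
# Toy, finite-window illustration of Farb's coned-off construction for the
# example just mentioned: G = Z^2 with S = {(1,0), (0,1)} and the diagonal
# subgroup L = {(m, m)}. Cosets of L are indexed by y - x; a cone vertex
# is attached at distance one to every window point of its coset.
import networkx as nx

N = 4
window = {(x, y) for x in range(-N, N + 1) for y in range(-N, N + 1)}
G = nx.Graph()
for (x, y) in window:                        # Cayley edges
    for (dx, dy) in ((1, 0), (0, 1)):
        if (x + dx, y + dy) in window:
            G.add_edge((x, y), (x + dx, y + dy))
for (x, y) in window:                        # cone vertex for the coset of (x, y)
    G.add_edge(("cone", y - x), (x, y))

# Same coset: two steps through the cone vertex, however far apart.
print(nx.shortest_path_length(G, (-4, -4), (4, 4)))   # 2
# Different cosets: each Cayley edge changes y - x by exactly 1 and cone
# hops preserve it, so distance grows with the coset gap; coarsely, the
# coned graph looks like a line.
print(nx.shortest_path_length(G, (-4, 4), (4, -4)))
```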
We mention here that a characterisation of strong relative hyperbolicity in terms of relative presentations and isoperimetric inequalities is given in Osin [27].
The commutativity graph
We begin by defining a graph which attempts to capture the notion that a group is well generated by large abelian subgroups.
Definition 1 (Commutativity graph) Let G be a group and let S be a (possibly infinite) generating set for G, all of whose elements have infinite order. The commutativity graph K(G, S) for G with respect to S is the simplicial graph whose vertex set is S and in which distinct vertices s, s′ are connected by an edge if and only if there are non-zero integers $n_s$, $n_{s'}$ so that $\langle s^{n_s}, (s')^{n_{s'}} \rangle$ is abelian.
As long as there is no danger of confusion, we will use the same notation for elements of S and vertices of K(G, S).
Notice that for any $g \in G$, we have that K(G, S) is connected if and only if $K(G, gSg^{-1})$ is connected. Typically we shall only consider commutativity graphs in which adjacent vertices, rather than powers of adjacent vertices, commute.
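As an illustration of Definition 1 in the special case just mentioned (adjacency tested on the generators themselves), the sketch below computes the commutativity graph of the three-dimensional integer Heisenberg group, one of the applications named in the introduction. Deciding whether *some* powers of two elements commute is hard in general, so this is a toy verification only.

```python
# Commutativity graph of the discrete Heisenberg group, with adjacency
# tested on the generators themselves (n_s = n_{s'} = 1). The generator z
# is central, so the graph is the connected path x -- z -- y, and <y, z>
# is rank 2 free abelian: the hypotheses of Theorems 2 and 9 hold.
import numpy as np
from itertools import combinations
from collections import deque

def heis(a, b, c):
    """Upper-triangular integer matrix [[1,a,c],[0,1,b],[0,0,1]]."""
    return np.array([[1, a, c], [0, 1, b], [0, 0, 1]])

S = {"x": heis(1, 0, 0), "y": heis(0, 1, 0), "z": heis(0, 0, 1)}

def commutes(A, B):
    return np.array_equal(A @ B, B @ A)

edges = {frozenset((s, t)) for s, t in combinations(S, 2)
         if commutes(S[s], S[t])}

def is_connected(vertices, edges):
    verts = list(vertices)
    seen, queue = {verts[0]}, deque([verts[0]])
    while queue:
        v = queue.popleft()
        for e in edges:
            if v in e:
                (w,) = e - {v}
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return seen == set(verts)

print(sorted(tuple(sorted(e)) for e in edges))  # [('x','z'), ('y','z')]
print(is_connected(S, edges))                   # True
```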
Our main result about non-strong relative hyperbolicity may be stated as follows. Recall that the rank of a finitely generated abelian group A is the rank of some (and hence every) free abelian subgroup A 0 of finite index in A.
Theorem 2 Let G be a finitely generated group. Suppose there exists a (possibly infinite) generating set S of cardinality at least two such that every element of S has infinite order and K(G, S) is connected. Suppose further that there exist adjacent vertices s, s′ of K(G, S) and non-zero integers $n_s$, $n_{s'}$ so that $\langle s^{n_s}, (s')^{n_{s'}} \rangle$ is rank 2 abelian. Then, G is not strongly hyperbolic relative to any finite collection of proper finitely generated subgroups.
The remainder of this section is dedicated to the proof of Theorem 2. The main tool we use is the following theorem on virtual malnormality for strongly relatively hyperbolic groups, which is contained in the work of Farb [15] and Bowditch [5], and is explicitly stated in Osin [27] (Theorems 1.4 and 1.5).
Theorem 3 Let G be a finitely generated group that is strongly hyperbolic relative to the proper finitely generated subgroups $L_1, \ldots, L_p$. Then:

1. for any $g \in G$ and any $i \neq j$, the intersection $gL_ig^{-1} \cap L_j$ is finite;

2. for any $g \in G \setminus L_j$, the intersection $gL_jg^{-1} \cap L_j$ is finite.

Note that this immediately implies that if $g \in G$ has infinite order and if $g^k$ lies in a conjugate $hL_jh^{-1}$ of some $L_j$, then g lies in that same conjugate. We also need the following lemma, which follows directly from Theorems 4.16 and 4.19 of Osin [27]. We note that while this lemma is implicit in the literature, it has never (to the best of our knowledge) been stated in this form, and so we include it as it may be of independent interest.
Lemma 4 Let G be a finitely generated group that is strongly hyperbolic relative to the proper finitely generated subgroups L 1 , L 2 , . . . , L p . If A is an abelian subgroup of G of rank at least two, then A is contained in a conjugate of one of the L j .
We are now ready to prove Theorem 2.
Proof [of Theorem 2] Suppose for contradiction that G is strongly hyperbolic relative to the finite collection $L_1, L_2, \ldots, L_p$ of proper finitely generated subgroups. We first show that no conjugate of any $L_j$ can contain a non-zero power of an element of S. So, suppose there is some $g \in G$, some $s_0 \in S$, and some $k \neq 0$ so that $s_0^k \in gL_jg^{-1}$ for some $1 \leq j \leq p$. (Note that, by the comment immediately following Theorem 3, this implies that $s_0 \in gL_jg^{-1}$ as well.) Let $s_1$ be any vertex of K(G, S) adjacent to $s_0$. As there are non-zero integers $n_0$, $n_1$ so that $\langle s_0^{n_0}, s_1^{n_1} \rangle$ is abelian, we see that $\langle s_0^{n_0 k} \rangle \subseteq gL_jg^{-1} \cap s_1^{n_1} gL_jg^{-1} s_1^{-n_1}$. However, since the subgroup $\langle s_0^{n_0 k} \rangle$ of G is infinite, Theorem 3 implies that $s_1^{n_1} \in gL_jg^{-1}$. By the comment immediately following Theorem 3, we see that $s_1 \in gL_jg^{-1}$.
Now, let s be any element of S. By the connectivity of K(G, S), there is a sequence of elements $s_0, s_1, \ldots, s_n = s$ of S such that $s_{k-1}$ and $s_k$ are adjacent in K(G, S) for each $1 \leq k \leq n$. The argument we have given above implies that if $s_{k-1} \in gL_jg^{-1}$, then $s_k \in gL_jg^{-1}$. In particular, we have that $s \in gL_jg^{-1}$. Since G is generated by S, it follows that G and $gL_jg^{-1}$ are equal, contradicting the assumption that the subgroup $L_j$ is proper. We conclude that if G is strongly hyperbolic relative to $L_1, L_2, \ldots, L_p$, then no conjugate of any $L_j$ can contain a non-zero power of an element of S. Now, by assumption there exist adjacent vertices t and t′ of K(G, S) for which there exist non-zero integers $n_t$, $n_{t'}$ so that $A = \langle t^{n_t}, (t')^{n_{t'}} \rangle$ is rank 2 abelian. By Lemma 4, we see that A must lie in some conjugate of some $L_j$. In particular, this conjugate of $L_j$ contains a non-zero power of an element of S, and we have a final contradiction. QED
Ends of groups
In this section, we use the same argument as that used in the proof of Theorem 2 to show that groups with connected commutativity graph have one end. We refer to Lyndon and Schupp [23] for standard facts about amalgamated free products and HNN extensions.
We do not give here a complete and formal definition of ends of groups; for this, the interested reader is referred to Epstein [14], Stallings [29], and Dicks and Dunwoody [12]. Let G be a finitely generated group (which we have assumed to be infinite). Say that G has one end if for some (and hence for every) finite generating set S, the Cayley graph Cay(G, S) has the property that the complement Cay(G, S) \ B n (e) is connected, where B n (e) is the ball of radius n about the identity e. Say that G has two ends if G is virtually Z.
Say that a finitely generated group G has infinitely many ends if and only if G admits either a non-trivial amalgamated free product decomposition $G = A *_C B$ or a non-trivial HNN decomposition $G = A*_C$, where (in either case) C is finite. Here, by a trivial amalgamated free product decomposition, we mean a decomposition of the form $G = A *_C B$ where C is finite and $[A : C] = [B : C] = 2$, and by a trivial HNN extension we mean an HNN extension of the form $G = C*_C$ where C is finite. Note that, in these two cases, C is a finite normal subgroup of G and G/C is infinite cyclic, and it follows that G has two ends.
For an infinite group, these are the only possibilities for the number of ends and these possibilities are mutually exclusive. It is known that a finitely generated group G and a finite index subgroup H of G have the same number of ends, since the number of ends is a quasi-isometry invariant.
We now state standard facts about amalgamated free products and HNN extensions that correspond to Theorem 3 and Lemma 4. We begin with the virtual malnormality results, corresponding to Theorem 3. These results easily follow from the existence of normal forms for amalgamated free products and HNN extensions, see Chapter IV of Lyndon and Schupp [23].
Lemma 5 Let G be a finitely generated group that admits a non-trivial amalgamated free product decomposition $G = A *_C B$, where C is finite.

1. If $g, h \in G$, then $gAg^{-1} \cap hBh^{-1}$ is finite;

2. If $g \in G \setminus A$, then $A \cap gAg^{-1}$ is finite (and similarly for B).

As a consequence, if $g \in G$ has infinite order and if $g^n \in A$ for some $n \geq 2$, then $g \in A$ (and similarly for $h \in B$).
Lemma 6 Let G be a finitely generated group that admits a non-trivial HNN decomposition $G = A*_C$, where C is finite. If $g \in G \setminus A$, then $A \cap gAg^{-1}$ is finite.
As a consequence, if g ∈ G has infinite order and if g n ∈ A for some n ≥ 2, then g ∈ A.
We now state the lemmas corresponding to Lemma 4. The first follows from H. Neumann's generalization of the Kurosh theorem for amalgamated free products, see Lyndon and Schupp [23], Chapter I, Proposition 11.22. The second follows from Britton's Lemma given in Lyndon and Schupp [23], Chapter IV, Section 2.
Lemma 7 Let G be a finitely generated group that admits a non-trivial amalgamated free product decomposition G = A * C B, where C is finite. If K is an abelian subgroup of G of rank at least two, then K is contained in a conjugate of either A or B.
Lemma 8 Let G be a finitely generated group that admits a non-trivial HNN decomposition G = A * C , where C is finite. If K is an abelian subgroup of G of rank at least two, then K is contained in a conjugate of A.
We are now able to state and prove the analogue of Theorem 2 for the one-endedness of groups with connected commutativity graph. This proof follows very much the same line of argument as the proof of Theorem 2.
Theorem 9 Let G be a finitely generated group which is not virtually Z. Suppose there exists a (possibly infinite) generating set S of cardinality at least two such that every element of S has infinite order and K(G, S) is connected. Suppose further that there exist adjacent vertices s, s′ of K(G, S) and non-zero integers $n_s$, $n_{s'}$ so that $\langle s^{n_s}, (s')^{n_{s'}} \rangle$ is rank 2 abelian. Then, G has one end.
Proof [of Theorem 9] Since G is assumed not to be virtually Z, either G has one end or G has infinitely many ends. Suppose for contradiction that G has infinitely many ends, so that either G admits a non-trivial amalgamated free product decomposition G = A * C B with C finite, or G admits a non-trivial HNN extension G = A * C with C finite. We give full details for the amalgamated free product case; the details in the HNN extension case are analogous.
We first show that no conjugate of A (or of B; the details are the same) can contain a non-zero power of an element of S. So, suppose there is some $g \in G$, some $s_0 \in S$, and some $k \neq 0$ so that $s_0^k \in gAg^{-1}$. (Note that, by the comment immediately following Lemma 5, this implies that $s_0 \in gAg^{-1}$ as well.) Let $s_1$ be any vertex of K(G, S) adjacent to $s_0$. As there are non-zero integers $n_0$, $n_1$ so that $\langle s_0^{n_0}, s_1^{n_1} \rangle$ is abelian, we see that $\langle s_0^{n_0 k} \rangle \subseteq gAg^{-1} \cap s_1^{n_1} gAg^{-1} s_1^{-n_1} = g(A \cap g^{-1}s_1^{n_1}g \, A \, g^{-1}s_1^{-n_1}g)g^{-1}$. However, since $s_0$ has infinite order, the second part of Lemma 5 implies $g^{-1}s_1^{n_1}g \in A$. By the comment immediately following Lemma 5, we see that $s_1 \in gAg^{-1}$ as well.
Let s be any element of S. By the connectivity of K(G, S), there is a sequence of elements $s_0, s_1, \ldots, s_n = s$ of S such that $s_{k-1}$ and $s_k$ are adjacent in K(G, S) for each $1 \leq k \leq n$. The argument we have given above implies that if $s_{k-1} \in gAg^{-1}$, then $s_k \in gAg^{-1}$. In particular, we have that $s \in gAg^{-1}$. Since G is generated by S, it follows that G and $gAg^{-1}$ are equal, contradicting the fact that the subgroup A is proper. We conclude that if G admits a non-trivial amalgamated free product decomposition $G = A *_C B$ with C finite, then no conjugate of either A or B can contain a non-zero power of an element of S. Now, by assumption there exist adjacent vertices t and t′ of K(G, S) for which there exist non-zero integers $n_t$, $n_{t'}$ so that $D = \langle t^{n_t}, (t')^{n_{t'}} \rangle$ is rank 2 abelian. By Lemma 7, we see that D must lie in some conjugate of A (or of B). In particular, this conjugate of A contains a non-zero power of an element of S, and we have a final contradiction. QED
Applications
In this section, we apply Theorems 2 and 9 to a selection of finitely generated groups, and deduce that each is not strongly hyperbolic relative to any finite collection of proper finitely generated subgroups (we will just say that such a group is not strongly relatively hyperbolic) and has one end.
Mapping class groups
Let Σ be a connected, orientable surface without boundary, of finite topological type and negative Euler characteristic. As such, Σ is the complement in a closed, orientable surface of a (possibly empty) finite set of points. The mapping class group MCG(Σ) associated to Σ is the group of all homotopy classes of orientation preserving self-homeomorphisms of Σ. For a thorough account of these groups, we refer the reader to Ivanov [21]. It is known that every mapping class group MCG(Σ) is finitely presentable and can be generated by Dehn twists. Masur and Minsky [24] prove that MCG(Σ) is weakly hyperbolic relative to a finite collection of curve stabilisers.

Now let S be the collection of primitive Dehn twists about all elements of $\pi_1(\Sigma)$ that are represented by simple closed curves on Σ. (Here, an element of a group is primitive if it is not a proper power of another element of the group.) The associated commutativity graph is precisely the 1-skeleton of the curve complex, introduced in Harvey [20]: this follows from the observation that two Dehn twists commute if and only if their associated curves are disjoint. Moreover, the Dehn twists associated to any pair of adjacent vertices in the curve complex generate a rank 2 free abelian group. Such a graph is connected provided Σ is not a once-punctured torus or a four-times punctured sphere. Hence, we have the following:

Proposition 10 Let Σ be a connected, orientable surface without boundary, of finite topological type and negative Euler characteristic. If Σ is not a once-punctured torus or a four-times punctured sphere, then the mapping class group MCG(Σ) of Σ is not strongly relatively hyperbolic.
This answers Question 6.24 of Behrstock [2] in the negative. Note that, when Σ is a once-punctured torus or a four-times punctured sphere, its mapping class group is isomorphic to PSL(2, Z) which is a hyperbolic group. We remark that the result of Proposition 10 was previously obtained by Bowditch [4], using arguments based on convergence groups.
Since the mapping class group of a punctured sphere can be viewed as a braid group, the braid group B n on n strings is not strongly relatively hyperbolic whenever n ≥ 5. This also follows by considering the usual presentation for B n and its corresponding commutativity graph.
We also have a corresponding result about ends.
Proposition 11 Let Σ be a connected, orientable surface without boundary, of finite topological type and negative Euler characteristic. If Σ is not a once-punctured torus or a four-times punctured sphere, then the mapping class group MCG(Σ) of Σ has one end.
We note that Proposition 11 is implicit in the work of Harer [19], as it is proven there that MCG(Σ) is a virtual duality group with virtual cohomological dimension greater than one, and such groups are known to have one end by a standard yoga. It also is contained in Culler and Vogtmann [9].
As with Proposition 10, the cases of the once-punctured torus and the four-times punctured sphere are anomalous; as noted above, in both of these cases, the mapping class group is isomorphic to PSL(2, Z), which is a free product and as such has infinitely many ends.
The Torelli group
The Torelli group I(Σ) of a connected, orientable surface Σ is the kernel of the natural action of the mapping class group MCG(Σ) on the first homology group H 1 (Σ, Z). It is of continued interest, given its connections with homology 3-spheres and the number of basic open questions it carries. If Σ is compact and has genus at least 3, I(Σ) is generated by all Dehn twists around separating simple closed curves and all double twists around pairs of disjoint simple closed nonseparating curves (called bounding pairs) that together separate (see [22]).
Farb and Ivanov [16] introduce a graph they call the Torelli geometry. The vertices of this graph comprise all separating curves and bounding pairs in Σ, with two distinct vertices declared adjacent if their corresponding curves or bounding pairs are disjoint. Whenever Σ has genus at least three this graph is connected (this holds even when Σ has non-empty boundary, see [25]). For this reason, let us take S to be the collection of primitive Dehn twists about separating curves and double twists around bounding pairs. The corresponding commutativity graph K(I(Σ), S) is precisely the Torelli geometry. Also, as is the case with mapping class groups, adjacent vertices generate a rank 2 free abelian subgroup of I(Σ). Thus, we have:
Proposition 12
If Σ is a closed, orientable surface of genus at least three, then the Torelli group I(Σ) of Σ is not strongly relatively hyperbolic.
We conjecture this extends to all surfaces Σ of genus g and n punctures with 2g + n − 4 ≥ 1 (to exclude small surfaces). For this, one would need to establish the connectivity of the Torelli geometry.
We have also the following result on ends of the Torelli group. Since I(Σ) has infinite index in MCG(Σ), the fact that I(Σ) has one end is independent of Section 5.1.
Proposition 13
If Σ is a closed, orientable surface of genus at least three, then the Torelli group I(Σ) of Σ has one end.
The special automorphism group of a free group
In this subsection, we use the notation and basic results from Gersten [17] (without further reference). Let F n be the free group on n generators and consider the automorphism group Aut(F n ) of F n . Abelianisation gives a surjective homomorphism Aut(F n ) → Aut(Z n ) = GL(n, Z).
Composing with the sign of determinant map we obtain a surjective homomorphism which we call the determinant map.
The special automorphism group of F n is Aut + (F n ) = ker(ϕ), and has the following finite presentation in terms of Nielsen maps: Let X be a free basis for F n and let E = X ∪ X −1 . Given a, b ∈ E with a = b, b −1 , define the Nielsen map E ab for a, b by E ab : F n → F n , where E ab (a) = ab and E ab (c) = c for c = a, a −1 .
Gersten [17] shows that Aut + (F n ) is generated by the finite set and that the following relation holds: (We suppress the full set of relations).
Note that if E ab , E cd are distinct and commute, they generate a rank 2 abelian subgroup of Aut + (F n ), since both of them have infinite order and neither is a power of the other. We then have: Proposition 14 With S as above, K(Aut + (F n ), S) is connected for n ≥ 5. In particular, Aut + (F n ) is not strongly relatively hyperbolic for n ≥ 5.
Proof Let E ab and E cd be any two vertices of K(Aut + (F n ), S). We have that E ab and E cd are adjacent in K(Aut + (F n ), S) unless a ∈ {c, d, d −1 } or b ∈ {c, c −1 }. Let us consider a = c (the remaining cases are similar). We want to find a path from E ab to E ad in K(Aut + (F n ), S).
It then follows, from the commutativity relation in Aut + (F n ) mentioned above, that the sequence of generators E ab , E ed , E bf , E ad gives a path in K(Aut + (F n ), S) from E ab to E ad . QED Let Out + (F n ) = Aut + (F n )/Inn(F n ) be the special outer automorphism group of F n . It is immediate that the natural surjective homomorphism Aut + (F n ) to Out + (F n ) preserves the connectivity of our commutativity graph for Aut + (F n ) and the hypotheses of Theorem 2. We therefore deduce the following: Corollary 15 Out + (F n ) is not strongly relatively hyperbolic for n ≥ 5.
Restricting the surjective homomorphism Aut(F n ) → GL(n, Z) to Aut + (F n ), we obtain a homomorphism Aut + (F n ) → SL(n, Z). The generating set S = {E ab | a, b ∈ E with a = b, b −1 } projects onto a generating set S for SL(n, Z) whose elements have infinite order in SL(n, Z), as immediately follows from the definition of the Nielsen maps. Also K(SL(n, Z), S) is connected, since K(Aut + (F n ), S) is. Thus we have: Corollary 16 SL(n, Z) is not strongly relatively hyperbolic for n ≥ 5.
Similarly, we have the following result about the number of ends of these groups. We can extend the discussion to the automorphism and outer automorphism groups of F n in this case, since the number of ends of a group is invariant under passing to finite index subgroups. We note that these results were earlier obtained by Culler and Vogtmann [9].
The Heisenberg group
Recall that the 3-dimensional Heisenberg group H is given by the presentation Consider the generating set S = {a, b, c}. It is evident from this presentation that the commutativity graph K(H, S) is connected. Also, the group a, c is rank 2 abelian.
Proposition 18
The 3-dimensional Heisenberg group H is not strongly relatively hyperbolic.
We also have the following, which was previously known, see for instance Apurara [1].
Proposition 19
The 3-dimensional Heisenberg group H has one end.
Thompson's group F
Thompson's group F is a torsion-free group of orientation-preserving, piecewise-linear homeomorphisms of the unit interval of the real line; see Brown and Geoghegan [6] or Cannon, Floyd and Parry [8] for a complete definition. Even though F is a finitely presented group, it is sometimes convenient to work with the infinite presentation Let S = {x j | j ≥ 0}. Clearly, the commutativity graph K(F, S) is far from being connected. So, we tinker: If we consider the generating set S ′ = S ∪ {x 0 x −1 1 }, then the commutativity graph K(F, S ′ ) is still not connected, as x 0 and x 1 are isolated vertices. However, K(F, S ′ ) \ {x 0 , x 1 } is connected, since x 0 x −1 1 commutes with x i for all i ≥ 2 (see, for instance, Burillo [7]). This is enough for us to modify our main argument and deduce: Proposition 20 Thompson's group F is not strongly relatively hyperbolic.
Proof Suppose, for contradiction, that F were strongly hyperbolic relative to the proper finitely generated subgroups L 1 , . . . , L p . Since all abelian subgroups of rank at least 2 are conjugate into some L m by Lemma 4, and since x 0 x −1 1 , x 2 is abelian of rank 2, we find that x 0 x −1 1 , x 2 ⊂ gL m g −1 for some m = 1, . . . , p and some g ∈ F . For all j ≥ 2, x 0 x −1 1 and x j commute and so x 0 x −1 is infinite, we have x j ∈ gL m g −1 , for all j ≥ 2, by Theorem 3. Now, gL m g −1 cannot contain both x 0 and x 1 , since S = {x j | j ≥ 0} generates F and gL m g −1 is a proper subgroup of F by assumption. So suppose x 0 / ∈ gL m g −1 . (The case x 1 / ∈ gL m g −1 is similar.) From the presentation above, we see that x j+1 = x 0 x j x −1 0 for all j ≥ 2. Therefore x j+1 ∈ x 0 gL m g −1 x −1 0 , since we have shown that x j ∈ gL m g −1 , for all j ≥ 2. Therefore gL m g −1 ∩ x 0 gL m g −1 x −1 0 is infinite (as it contains x j for any j ≥ 2), contradicting Theorem 3. QED Using the same style of argument, we also have the following.
Proposition 21 Thompson's group F has one end.
Proof Suppose, for contradiction, that F has more than one end. Since F is not virtually Z and F is torsion free, we see that F admits a non-trivial free product splitting F = A * B. Since all abelian subgroups of rank at least 2 are conjugate into either A or B by Lemma 7, and since x 0 x −1 1 , x 2 is abelian of rank 2, we find that x 0 x −1 1 , x 2 ⊂ gAg −1 for some g ∈ F . (The details are similar if x 0 x −1 1 , x 2 ⊂ gBg −1 for some g ∈ F .) For all j ≥ 2, x 0 x −1 1 and x j commute and so is infinite, we have x j ∈ gAg −1 , for all j ≥ 2, by Lemma 5. Now, gAg −1 cannot contain both x 0 and x 1 , since S = {x j | j ≥ 0} generates F and gAg −1 is a proper subgroup of F by assumption. So suppose x 0 / ∈ gAg −1 . (The case x 1 / ∈ gAg −1 is similar.) From the presentation above, we see that x j+1 = x 0 x j x −1 0 for all j ≥ 2. Therefore x j+1 ∈ x 0 gAg −1 x −1 0 , since we have shown that x j ∈ gAg −1 , for all j ≥ 2. Therefore gAg −1 ∩ x 0 gAg −1 x −1 0 is infinite (as it contains x j for any j ≥ 3), contradicting Lemma 5. QED
|
2014-10-01T00:00:00.000Z
|
2005-04-13T00:00:00.000
|
{
"year": 2005,
"sha1": "99d6c1826b6711b585c5b56c3465c856ad0bf98e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "49e9e452b387a711747bc7bb3845d08763c6adc6",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
368523
|
pes2o/s2orc
|
v3-fos-license
|
Clinical manifestations and outcome in Staphylococcus aureus endocarditis among injection drug users and nonaddicts: a prospective study of 74 patients
Background Endocarditis is a common complication in Staphylococcus aureus bacteremia (SAB). We compared risk factors, clinical manifestations, and outcome in a large, prospective cohort of patients with S. aureus endocarditis in injection drug users (IDUs) and in nonaddicts. Methods Four hundred and thirty consecutive adult patients with SAB were prospectively followed up for 3 months. Definite or possible endocarditis by modified Duke criteria was found in 74 patients: 20 patients were IDUs and 54 nonaddicts. Results Endocarditis was more common in SAB among drug abusers (46%) than in nonaddicts (14%) (odds ratio [OR], 5.12; 95% confidence interval [CI], 2.65–9.91; P < 0.001). IDUs were significantly younger (27 ± 15 vs 65 ± 15 years, P < 0.001), had less ultimately or rapidly fatal underlying diseases (0% vs 37%, P < 0.001) or predisposing heart diseases (20% vs 50%, P = 0.03), and their SAB was more often community-acquired (95% vs 39%, P < 0.001). Right-sided endocarditis was observed in 60% of IDUs whereas 93% of nonaddicts had left-sided involvement (P < 0.001). An extracardiac deep infection was found in 85% of IDUs and in 89% of nonaddicts (P = 0.70). Arterial thromboembolic events and severe sepsis were also equally common in both groups. There was no difference in mortality between the groups at 7 days, but at 3 months it was lower among IDUs (10%) compared with nonaddicts (39%) (OR, 5.73; 95% CI, 1.20–27.25; P = 0.02). Conclusion S. aureus endocarditis in IDUs was associated with as high complication rates including extracardiac deep infections, thromboembolic events, or severe sepsis as in nonaddicts. Injection drug abuse in accordance with younger age and lack of underlying diseases were associated with lower mortality, but after adjusting by age and underlying diseases injection drug abuse was not significantly associated with mortality.
and complication rates in Staphylococcus aureus bacteraemia. Design. A prospective randomized multicentre trial from January 2000 to August 2002. Setting. Thirteen tertiary care or university hospitals in Finland. Subjects. Three hundred and eighty-one adult patients with S. aureus bacteraemia. Patients with meningitis, and those with fluoroquinolone-or methicillin-resistant S. aureus were excluded. Interventions. Standard treatment (mostly semisynthetic penicillin) (n ¼ 190) or that combined with levofloxacin (n ¼ 191). Supplementary rifampicin was recommended if deep infection was suspected. Main outcome measures. Primary end-points were mortality at 28 days and at 3 months. Clinical and laboratory parameters were analysed as secondary end-points. Results. Adding levofloxacin to the standard treatment offered no survival benefit. Case fatality rates were 14% in both groups at 28 days, and 21% in the standard treatment and 18% in the levofloxacin group at 3 months. Levofloxacin combination did not differ from the standard treatment in the number of complications, time to defervescence, decrease in serum C-reactive protein concentration or length of antibiotic treatment. Deep infection was found in 84% of patients within 1 week following randomization with no difference between the treatment groups. At 3 months, the case fatality rate for patients with deep infection was 17% amongst those who received rifampicin versus 38% for those without rifampicin Introduction Staphylococcus aureus is the second most common bloodstream isolate both in hospital and community-acquired bacteraemias in all age groups. Staphylococcus aureus bacteraemia (SAB) still confers remarkably high mortality, ranging up to 60% in some studies, although antistaphylococcal antibiotics have been available for more than 40 years [1][2][3][4]. During recent decades there has been no significant improvement in the outcome of staphylococcal infections, which cannot be entirely explained by the increased incidence of methicillinresistant S. aureus (MRSA) [5,6]. The clinical course of SAB is determined by its complications, particularly by the development of deep infections due to metastatic spread, and by the high recurrence rate of bacteraemia [6][7][8][9][10][11][12]. The reported frequency of metastatic complications varies greatly, from 10% to 50% [9,[13][14][15][16].
Levofloxacin is a fluoroquinolone with improved activity against Gram-positive bacteria including S. aureus in vitro [25]. In experimental studies fluoroquinolones have shown an additive effect in combination with standard antistaphylococcal therapy in severe S. aureus infections [26]. Furthermore, fluoroquinolones have been combined with rifampicin to provide an entirely oral regimen in staphylococcal right-sided endocarditis [27,28], chronic osteomyelitis or foreign body infections [29,30] and other deep-seated abscesses [31]. In SAB, most deep infections are observed within 2 weeks after the onset of bacteraemia [32]. The metastatic complications might be prevented by early treatment with bactericidal fluoroquinolone, which penetrates well into tissues.
We here report the results of the first prospective trial where the effect of levofloxacin in addition to standard antistaphylococcal treatment of SAB was studied in relation to patient outcome and development of complications.
Study design
This was a prospective, randomized, multicentre trial conducted in five university hospitals and seven tertiary care hospitals in Finland. Adult patients with at least one blood culture positive for S. aureus were included within 1-7 days of blood culture sampling from January 2000 through to August 2002. Randomization was done blindly and separately at each study location after the patient or his/ her representative had given written informed consent. After randomization, the treatments were open for the investigator and the patient. The trial was approved by the ethics committees of all study sites and by the Finnish National Agency for Medicines.
Exclusion criteria included age younger than 18 years, imprisonment, proven or suspected pregnancy, breastfeeding, epilepsy, another bacteraemia during the previous 28 days, polymicrobial bacteraemia ( ‡3 microbes), history of allergy to any quinolone antibiotic, previous tendinitis during fluoroquinolone therapy, prior fluoroquinolone use for more than 5 days before randomization, positive culture for S. aureus only from a central intravenous catheter, neutropenia (<0.5 · 10 9 L )1 ) or failure to supply an informed consent. Patients with bacteraemia due to MRSA and a S. aureus strain resistant to any fluoroquinolone, and those with meningitis at the time of randomization, were also excluded.
Study treatments
Patients with SAB were randomly assigned to receive either standard treatment or standard treatment combined with levofloxacin. The dose of levofloxacin was 500 mg once daily for patients under 60 kg and 500 mg b.i.d. for those over 60 kg in weight, both intravenously and orally. Primarily, the standard treatment consisted of a semisynthetic penicillin, cloxacillin or dicloxacillin (2 g q 4 h), intravenously. Alternatively, cefuroxime (1.5 g q 6 h), clindamycin (600 mg q 6-8 h), or vancomycin (1 g b.i.d.) were allowed if a contraindication against the use of penicillins was noted. When oral treatment was indicated, cloxacillin (500 mg q 6 h), cephalexin or cefadroxil (500 mg q 6 h), or clindamycin (300 mg q 6 h) were accepted as standard therapy. In cases of renal dysfunction, the antibiotic doses were adjusted as recommended by the manufacturers.
If endocarditis was clinically suspected or confirmed, aminoglycoside (either tobramycin or netilmicin at 1 mg per kilogram of body weight q 8 h) was added to the drug therapy described above. Rifampicin (450 mg once daily for patients under 50 and 600 mg once daily for patients over 50 kg in weight, orally or intravenously) was given if there was a suspicion or evidence of endocarditis or other deep infections such as pneumonia, deep-seated abscess, osteomyelitis, septic arthritis, mediastinitis or infection of a prosthetic device.
The duration of antibiotic treatment was determined by the treating doctor. However, all patients received at least 14 days of intravenous antibiotic treatment. In SAB associated with a central intravenous catheter the antibiotic treatment was discontinued after 14 days when the catheter was replaced [33]. Parenteral antibiotic therapy was switched to oral dosing after 14 days in patients with no signs of a deep infection, if the serum C-reactive protein (CRP) concentration was <10 mg L )1 and the patient was afebrile [13,16,34]. When a deep infection was verified or clinically suspected, intravenous antibiotic treatment and rifampicin were recommended to be continued for at least 28 days. In cases of endocarditis, aminoglycoside treatment was discontinued after 7 days [12,35].
Definitions
Staphylococcus aureus bacteraemia was hospital-acquired if the first positive blood culture was obtained ‡48 h after admission, or the patient was a resident in a long-term care facility or attended haemodialysis within the preceding 2 months. Prognosis or severity of underlying diseases were classified as healthy, nonfatal, ultimately or rapidly fatal according to the criteria of McCabe and Jackson [36].
The infection focus was defined as definite if it was documented by bacteriological, radiological or pathological investigations, but suspected if it was evident from clinical findings only. Infection of a central intravenous catheter was defined by the guidelines of the Infectious Diseases Society of America [33]. Endocarditis was classified as definite or possible using the modified Duke criteria [37]. Relapse of SAB was confirmed by the same resistance pattern and pulsed-field gel electrophoresis typing for two S. aureus strains. Other recurrences of S. aureus culture in the blood were classified as reinfections.
End-points
All patients were followed up by an infectious disease specialist during the hospital treatment and thereafter with control visits at 28 days and at 3 months. Primary end-points were case fatality rate at 28 days and at 3 months. Secondary outcome measures were the number of complications (e.g. deep infections) observed after the first week, decrease in serum CRP concentration, length of antibiotic treatment, need for surgical intervention, and time to defervescence (recorded in days until axillary temperature was <37.5°C). Laboratory tests were conducted on the day of positive blood culture for S. aureus, at randomization and every other day during the first week, twice a week thereafter during hospitalization, at 28 days, and at 3 months.
Sample size and statistical analysis
In the sample size calculation, when mortality was assumed to be 10% in the levofloxacin group and 20% in the standard treatment group, a power of 80% would be achieved with 198 patients in each study arm. A two-tailed significance level of 5% was used. Additionally, the length of antibiotic therapy was analysed from a population of which deceased patients were excluded (308 patients). Patients were ineligible for PP analysis if they had received levofloxacin for less than 2 weeks in the levofloxacin group, or any fluoroquinolone for more than 1 week within the first 28 days after randomization in the standard treatment group (Fig. 1).
Statistical analyses were performed with SAS Ò version 8.2. The primary variable and other categorical variables were analysed with chi-square tests. Odds ratios (OR) with 95% confidence intervals (CI) were calculated to estimate the significance of differences in the two treatment groups. The stratified Cochran-Mantel-Haenszel (CMH) test was used in order to adjust for levofloxacin as confounding factor when effect of rifampicin was analysed.
Continuous baseline variables were compared using t-test. Decrease in serum CRP concentration was analysed using analysis of variance for repeated measurements (rmanova). Mortality and time to defervescence survival estimates were calculated with the Kaplan-Meier method. The log-rank test was used to compare the survival estimates. Survival was calculated from the day of randomization until 3 months. All tests were two-tailed, and a P < 0.05 was considered significant. Data were analysed at 4Pharma Ltd (Turku, Finland).
Patient characteristics
During the study period, 1226 patients with SAB were identified (Fig. 1). In total, 381 patients were included in the ITT analysis, with 191 patients in the levofloxacin group and 190 patients in the standard treatment group. All collected data except Patients in the two groups were well matched with respect to demographic characteristics and predisposing conditions (Table 1). When the underlying diseases were grouped by the predicted prognoses (McCabe's classification) [36], 61% of patients had a nonfatal, 27% had an ultimately fatal and 3% had a rapidly fatal disease. Only 9% of the patients were previously healthy. In both groups the median time from sampling of the first positive blood culture to randomization was 3 days.
Antibiotic treatment
All patients were treated with an antibiotic that was effective against S. aureus from the time of the first positive blood culture. In ITT analysis, parenteral cloxacillin or dicloxacillin was given to 150 of 191 patients (79%) in the levofloxacin group and to 135 of 190 patients (71%) in the standard treatment group (P ¼ 0.09, OR ¼ 0.67, 95% CI ¼ 0.42-1.07). Only 31 patients (16%) in the levofloxacin group and 42 patients (22%) in the standard treatment group were initially treated with cefuroxime with no significant difference between the groups (P ¼ 0.15). The treatment groups differed neither in the use of clindamycin or vancomycin. Rifampicin was given significantly more often to patients in the standard treatment group [146 (77%) of 190 patients] than to patients in levofloxacin group [124 (65%) of 191 patients] (P ¼ 0.01, OR ¼ 1.79, 95% CI ¼ 1.14-2.81). Combination therapy with an aminoglycoside was also significantly more common in the standard treatment group [44 (23%) of 190 patients] than in the levofloxacin group [20 (11%) of 191 patients] (P < 0.001, OR ¼ 2.58, 95% CI ¼ 1. 45-4.57).
The median duration of parenteral antibiotic therapy from randomization was 29 days (interquartile range, 22-36 days) in both groups (P ¼ 0.76). Levofloxacin was given for a median of 42 days (interquartile range, 28-58 days). Total duration of antibiotic therapy, including intravenous and oral dosing, was for a median of 72 days (interquartile range, 45-85 days) in the levofloxacin group and 80 days (interquartile range,
Infection foci
At least one deep infection was detected in 331 patients (87%) during the 3 months follow-up (ITT analysis). Deep infections were definite in 252 patients (76%) and suspected in 79 patients (24%).
Most of these (84%) were diagnosed within 1 week of randomization (Table 2). A new deep infection after the first week was found in 33 patients (17%) in the levofloxacin group and in 31 patients (16%) in the standard treatment group (P ¼ 0.80). The infection focus was treated with drainage or surgery in 224 patients (59%) with no significant difference between the groups. Endocarditis was observed in 70 patients (18%) with no significant difference between the study groups ( Table 2). Endocarditis was classified as definite in 55 patients (79%) and possible in 15 patients (21%). During the follow-up, five patients (1%) had a new SAB more than 28 days after randomization with no significant difference between the groups. Recurrent SAB was due to a relapse in three patients and reinfection in two patients.
Outcome
No significant differences were observed between the treatment groups in ITT or in PP analyses ( Table 3). The case fatality rate at 28 days was 14% in both study arms, and at 3 months 21% in the standard treatment and 18% in the levofloxacin group (ITT analysis) ( Table 3, Fig. 2).
In patients with a deep infection, case fatality rate at 3 months was significantly higher amongst those who did not receive rifampicin [25 (38%) of 66 patients] than in patients treated with rifampicin [44 (17%) of 265 patients] (P < 0.001, OR ¼ 3.06, 95% CI ¼ 1.69-5.54) ( Table 4). However, patients who did not receive rifampicin were significantly older and significantly more often had chronic renal failure, a fatal underlying disease, hospital-acquired SAB, or levofloxacin treatment than did those given rifampicin. In contrast, patients not treated with rifampicin had fewer cases of endocarditis and fewer deep infections per patient. Mortality in patients who had a deep infection was analysed separately amongst those treated with or without rifampicin (stratified CMH test). This was done as the patients in the standard treatment group were treated with rifampicin significantly more often than the patients in the levofloxacin group (P ¼ 0.003) ( Table 4). The case fatality rate at 3 months amongst patients with deep infection and rifampicin treatment was 13% (15 of 119 patients) in the levofloxacin group and 20% (29 of 146 patients) in the standard treatment group. Case fatality rates in patients not treated with rifampicin were 37% (16 of 43 patients) and 39% (9 of 23 patients). However, the benefit of levofloxacin was not statistically significant in this stratified analysis either, in which the imbalance in the use of rifampicin was taken into account (P ¼ 0.16).
The mean duration of fever (>37.5°C) was 9 days in both groups (Fig. 2). Decrease rates in serum CRP concentration were similar in both groups (Fig. 2). No significant differences were observed between the treatment groups in the number of patients with leucocytosis, leucopenia, thrombocytopenia, acidosis or liver enzyme elevations (data not shown). There were no significant differences in antibiotic-associated diarrhoea caused by Clostridium difficile or allergic reactions between the groups.
Discussion
This is the first clinical trial to evaluate the efficacy of a new fluoroquinolone, with improved Grampositive activity, combined with standard treatment in SAB. In experimental studies, fluoroquinolone combined with standard therapy has shown improvement in treatment results [26] which could not be confirmed in this clinical trial. New treatment options for bacteraemia caused by MRSA strains would be needed. If fluoroquinolones could be useful in MRSA, bacteraemias cannot be answered by this trial because they were not included. However, resistance to fluoroquinolones has been increasing, especially amongst MRSA strains [38].
Overall, 14% of patients in our trial died within 1 month. This is comparable with the mortality of 17% at 28 days we observed in a nationwide, population-based survey of SAB during 1995-2001 in Finland [39], but clearly lower than the overall mortality of 23-39% generally related to (21) SAB [1,6]. Direct comparison of SAB mortality with previous studies is complicated by inconsistent definitions and variable analysis time-points. In some studies only mortality directly attributable to SAB has been calculated [1,9,40]. Recent nationwide surveys from low resistance areas in Finland and Denmark [39,41] suggest that mortality in SAB has decreased during the recent decade, which might be one explanation for the lower mortality in this trial when compared with previous studies. In the present trial, all patients were followed by an infectious disease specialist, which has been shown to improve the outcome and reduce the number of relapses [6,42]. However, the low mortality in this trial is in contrast to the high prevalence of deep infections which has earlier been related to higher mortality [6,9]. The reported frequency of deep infections has varied from 10% to 50%, but it was over 80% in the present trial [9,[13][14][15][16]. This difference might be partly explained by different definitions as well as by the high intensity search for deep infections in our trial. Furthermore, in most articles, the incidence of deep infections has not been separately reported or they have been classified into primary and metastatic foci. In another recent study [9], 74% of complications were already present at the time of hospitalization, in accordance with our findings of most deep infections being evident during the first week. These data suggest that infection foci cannot be reliably classed as primary or metastatic and that identification of deep infections might be essential for decreased mortality in SAB.
Intravenous antibiotic treatment is recommended for 4-6 weeks in SAB with a deep infection [17,43]. In this trial, the duration of parenteral and oral antibiotic therapy was much more prolonged and extended with an average of 77 days. Of all patients, 44% remained on antibiotic treatment at 3 months. This may have contributed to the low (1%) prevalence of SAB recurrences. Significantly higher recurrence rates from 9% to 23%, have been reported in studies with slightly longer follow-up times from 3 to 6 months [5, 7-10, 42, 44].
Data on the effect and recommendations of various antibiotic combinations are controversial, although they are widely used in complicated SAB. Rifampicin shows excellent antistaphylococcal activity, penetrating well into cells and killing phagocytosed bacteria [22,23]. The combination of oxacillin and rifampicin has shown a synergistic action in vitro when the concentration ratio of oxacillin to rifampicin was low, whereas antagonism occurred with higher ratios [19]. Some small randomized studies have suggested that adding rifampicin to standard treatment improves clinical cure and bacteriological eradication whereas no effect in mortality has been seen [20,21,45]. Furthermore, rifampicin is often used in combination with standard treatment in deep-seated abscesses [18], osteomyelitis [45], foreign body infections and endocarditis, or because of poor response to the standard treatment [12,[22][23][24].
In the current trial, rifampicin was included in the protocol for all patients with deep infection as the ultimate aim was to evaluate whether levofloxacin improved the treatment results when the best therapy was used. Interestingly, if rifampicin was not given, mortality was significantly higher. However, this result must be interpreted with caution, because the trial was not specifically designed to scrutinize the effect of rifampicin. In addition, patients not receiving rifampicin had factors generally associated with higher mortality [6,14,40,44]. These patients were older, more often had hospitalacquired SAB and ultimately or rapidly fatal diseases.
In the levofloxacin group there were significantly more patients with a deep infection not treated with rifampicin (27%) when compared with the standard treatment group (14%). The reasons for not using rifampicin were a concomitant liver disease in 15 cases (nine patients in the levofloxacin group versus six patients in the standard treatment group), a risk of a drug interaction or other decision of the treating doctor in 47 cases (31 patients versus 16 patients), and an early death of the patients in four cases (3 patients versus 1 patient). In patients with deep infection receiving rifampicin, a trend for lower mortality (13%) was observed in the levofloxacin group when compared with the standard treatment group (20%). This difference, however, was not statistically significant. As the present study design did not directly compare levofloxacin to rifampicin, the potential benefit of levofloxacin when rifampicin cannot be used remains to be shown in further prospective studies.
In summary, levofloxacin in combination with standard treatment in SAB did not decrease the mortality or the incidence of deep infections, nor did it speed up recovery. The data indicate that a fluoroquinolone could not be recommended to be combined with the standard treatment of SAB. However, patients with a deep infection appeared to benefit from combination treatment including rifampicin, as suggested also by experimental data.
|
2014-10-01T00:00:00.000Z
|
2006-09-11T00:00:00.000
|
{
"year": 2006,
"sha1": "ad3063f46d5a3476cded1aa490fa7a726d1a8411",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/1471-2334-6-137",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e337dfabe18f69bc5d3c79bbcc6aff6aa6ae8165",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
234347539
|
pes2o/s2orc
|
v3-fos-license
|
Shared Standards Versus Competitive Pressures in Journalism
Democratic societies need media that uphold journalistic standards of truthfulness and objectivity. But sensationalism has always been a temptation for journalists, and given the intense competition between news outlets, especially in the online world, there is pressure on them to ‘ chase the clicks ’ . The article analyzes the incentive structures for journalists – focusing on the harmfulness of sensationalist framing as an example – and the challenges of establishing shared standards in a highly competitive online environment. Drawing on concepts and arguments from business ethics, it argues that the structure of this problem points to the need for an ‘ ethics of sportsmanship ’ that upholds journalistic standards despite competitive pressures. But the speci fi c role and nature of the media imply that there can be no once-and-for-all solution. Instead, there is a need for re fl exivity, that is, for an ongoing dialogue about journalistic standards and the role of media in democratic societies.
Introduction
Sensationalist headlines have been part of journalism ever since it came into existence.And yet it seems that in today's online environment, the struggle for attention has intensified.Most business models for online news rely on attracting and retaining large audiences in order to sell advertisements.'Chasing the clicks' has become an imperative for journalists, which stands in obvious tension to the objectivity and measured tone that one would expect from high-quality reporting.How should journalists behave in this situation?How can their responsibility to uphold journalistic standards coexist with the fact that most news outlet operate in such a highly competitive environment? 1he 'ethics of journalism' has been captured in various codes of conduct, for example, the Code of Ethics of the Society of Professional Journalist. 2 But the competitive nature of journalism, and especially the competitive dynamics of online reporting, have received less attention.I will discuss this problem by focusing on the example of sensationalist headlines.In an online environment, the role of headlines and keywordse.g. the ones captured in hashtagsis, arguably, even more important than in offline environments.Headlines spread quickly, and they 'stick': they can continue to frame political issues even when they have long been shown to be one-sided, distorted, or plainly false.This is harmful for democratic public discourse, which requires a certain degree of precision and nuance.
My arguments build on a schematic account of how competition puts pressure on journalistic standards, for example with regard to the choice of sensationalist framings (Section 2).Building on this descriptive account, I ask in what ways such framings are harmful, arguing that they create specific forms of collective harm to democratic discourse (Section 3).How can journalists react to this situation?I argue that the challenge can be understood as structurally similar to many problems in business ethics: competitive dynamics that are, in principle, justified create negative externalities along other dimensions.But while many such problems in other industries can be addressed by regulation, this is not a straightforward option with regard to the media: political censorship would not only be unconstitutional in many countries, it would also create high risks of abuse and threaten the media's watchdog function.This does not mean that legal regulation has no role to playit can, for example, indirectly support journalistic standardsbut other factors, such as individual ethics and social norms in the professional community, are also needed.However, such social norms can create their own problems and need to be carefully balanced against the dangers of selfcensorship.Ongoing criticism and debates about journalistic standards are needed to deal with these intricate ethical questions (Section 4).
Before delving into the discussion, let me briefly state what I will not discuss.There are many other forms of questionable behavior in the online public sphere, e.g.trolling, vitriolic anonymous commenting, the use of bots, or microtargeted political advertisement.Traditional problems of journalism ethicse.g.how to report about suicidestake on specific forms in the online world.Last but not least, there are questions about the power of online platforms and their design decisions and business models.Many of these are related, directly or indirectly, to the problem of journalistic standards.Drawing on concepts and arguments from business ethicsa literature that does not seem to have been connected to that on journalism ethics so farhelps understand why the problem is so difficult to address.
Threats to Journalistic Standards in an Online Environment
Journalism should uphold certain standardsthis is a widely agreed-upon premise from which my reflections start.These standards concern various issue: fact-checking reports before publishing them, correcting false claims, not reporting about certain issues, etc.I here focus on one example: the framing of headlines, teasers, and the general tone of reporting.This aspect of journalistic standards is interesting for a number of reasons.The wrongness of sensationalist framing may not be as straightforward as that of, say, reporting fake news, but this is precisely what can make it more tempting, especially for journalists who think of themselves as having high standards.Moreover, the gradual dynamic of shifting standards, and the specific challenges it raises, can be illustrated very well by focusing on this example.Some of the lessons that can be learned from this example can easily be carried over to other aspects of journalistic standards, but for reasons of scope, I will not do so in this article.
What, then, is framing?The way in which political news are presented can vary massively: from sober one-liners to highly sensationalist statements that evoke various kinds of emotions.This phenomenon is called framing; a widely used definition describes it as selecting 'some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation'. 3The way in which different news items are framed 'will influence the schema called upon [by audience members] to process that information'. 4For example, a famous study showed that different frames of events as instances of 'terrorism' influenced the way in which the audience ascribed responsibility to different actors. 5he framing of topics happens, to a great extent, through the choice of headlines, together with teasers and other keywords.And these are, of course, also crucial for attracting the audience's attention.A headline that reads 'TV duel: A and B disagree on immigration policy' may attract readers who are already interested in reading the article; the headline 'TV duel: A rips B apart over selling out the nation' might attract far more readers.Such sensationalist headlines and other attention-seeking strategies target our 'automatic' rather than our 'controlled' attention. 6They appeal to our emotions and our sense of identity, and they satisfy our desire for outrage and scandal.Hence, while journalistic standards push towards the former version, 7 the pressure to attract a larger audience pushes towards the latter.
In an online environment, the importance of framing is, arguably, even greater than it was when news items were transmitted by other media.Journalists tend to copy one another's ways of framing news, in 'news waves'. 8This phenomenon predates the online era, but here, the waves can be much faster, and more difficult to control.The framing of news can trigger 'shit storms' in which large numbers of people (and often also bots) attack individuals or organizations, for good or bad reasons.Moreover, once certain formulations, catchphrases, or labels have been published online, it is very difficult, and maybe sometimes impossible, to take them back, so that they may continue to shape debates for a long time.
A second, related phenomenon of the online age is that many individuals read online headlines 'and not much more'. 9As a recent study from the US states: 'Overall, 4 in 10 Americans report that they delved deeper into a particular news subject beyond the headlines in the last week' 10 which means that 6 in 10 do not.Evidence also shows that many readers of online news share them on social media without having read the actual articles. 11oreover, in an online environment, the pressure on journalists is enormous.Most commercial news outlets rely on advertisement income, and while there are various ongoing experiments with subscription models and paywalls, they are all 'chasing the clicks'. 12The fact that search engines and online platforms siphon off a considerable share of advertisement income has increased the pressure on news outlets to retain 'the "right" readers, listeners, and viewers'. 13While headlines have always been important for media outlets, the online environment has brought about some changes.One concerns measurability: one can now see, in detailed figures, which headlines get most clicks.Many news outlets have started to use the method of 'headline testing': they use several headlines for the same content to see which one gets most clicks, and then they choose the 'winner' for the presentation of the topic.Click figures have become an important way of measuring journalistic success.By presenting the 'most read', 'more liked', or 'most shared' articles on their websites, news outlets further reinforce this quantitative logic, potentially adding a self-fulfilling dynamic.
Journalists are thus faced with the question of how far to go when framing their topics: where on the scale from 'boring but objective' to 'hair-raisingly sensationalist' should they place their pieces?They are not confronted with this question in a vacuum, but in a highly competitive space, in which different agents vie for the audience's attentionattention is, after all, 'a zero-sum game'. 14This leads to a collective-action problem with an interesting structure, which one needs to consider in both its synchronic and its diachronic dimensions.
One might be tempted to see the problem as one of collective harms: harms that do not arise because of single instanceseach of which, taken in isolation, may seem harmless and hardly worthy of ethical considerationbut because of their sum.Such collective harms raise complex questions about the individual responsibility for contributions. 15In areas such as the fights against climate change or global poverty, such 'new harms' are ubiquitous. 16It is certainly correct to say that one sensationalist headline alone is not particularly harmful, but if more and more headlines are sensationalist, then this can become a problem, because there is a summative effect (I say more about what exactly the normative problems consist in below).But there is an additional dimension here, which arises from the interconnectedness of the perception of media headlines.It adds complexity to the normative structure of this problem.
If an audience sees one sensationalist headline among many other nonsensationalist ones, this one headline is likely to attract more attentionand this creates incentives for other journalists to follow suit and to also frame their pieces in more sensationalist ways.Because they compete for audience attention, and because the degree to which they can distinguish themselves by presenting more attractive (i.e. more sensationalist) headlines depends on the overall level, it is difficult for news outlets to resist this pressure.But once all news outlets have gone one pitch down in their framing, as it were, they are again on a level playing field.This means that there are incentives to go even further and to appeal to even lower instincts among audience members.There is thus a self-reinforcing dynamic, with more and more pressure on those outlets that would like to retain a more objective framing.
Such downward spirals and pressures on standards are a typical feature of market competition. 17They imply that for agents in such markets, there is not only a question of how their own actions contribute, synchronically, to the overall quality at a given point in time, but also a question of how their own actions contribute, diachronically, to these dynamic developments.There is evidence that such self-reinforcing dynamics do indeed exist in media systems 18 and that they exert pressures even on news outlets committed to high journalistic standards. 19Thus, to consider sensationalist framing and related questions of journalistic standards from a normative perspective, one needs to consider this entire constellation of self-reinforcing dynamics in the media system.In the next section, I discuss what is so problematic about the endpoint of this spiral, i.e. a media system in which sensationalist framing prevails and makes other frames difficult or impossible to maintain.
The Harms of Sensationalist Framing
Why should one worry about sensationalist framing, and what is normatively at stake?Spitting out one-sided or distorting headlines does not cause the kind of immediate harm that one finds, for example, when a victim of violence has her privacy destroyed by the press.The latter kind of example has specific consequences for specific individuals.Codes of professional ethics, such as the Code of Ethics of the Society of Professional Journalist, offer specific rules and guidance for preventing such forms of harm, but they remain vague when it comes to less specific, collective issues.Precepts with regard to the latter take forms such as 'Take special care not to misrepresent or oversimplify in promoting, previewing or summarizing a story' or 'Avoid pandering to lurid curiosity, even if others do'. 20But why should journalists do so?In what follows, I spell out four ways in which sensationalist framing is harmful.
First, such headlines often use, and thereby continue and probably reinforce, tropes and stereotypes that contradict an egalitarian ethos by drawing on sexist, racist, or homophobic prejudices or by dehumanizing those who are perceived as opponents.Such tropes and stereotypes constitute a form of harm: they can lead to biases, e.g. in hiring decisions, or to lower confidence of targeted group because of stereotype threat. 21As Baker argues, such effects can be understood as negative externalities: they have implications for third parties beyond the transaction between buyers (the audience) and sellers (news outlets). 22Individuals who are bombarded by headlines that reinforce sexist, racist, or homophobic prejudices are more likely to continue to act on these prejudices in interactions with their fellow citizens.This stands in tension with the egalitarian principles of democracy that forbid discrimination along lines of gender, race, religion, etc. Insofar as they make such discrimination more likely, sensationalist framings are harmful.
Second, sensationalist headlines can overemphasize, and thereby reinforce, the 'competitive, horse-race aspects of politics'. 23Headlines about fights between political opponents are likely to generate more clicks than headlines that focus on political compromise and the search for consensual solutions.Seeing this as a problem does not amount to arguing, as some authors in the tradition of 'civic journalism' do, that journalists have a civic duty to support the common good. 24It is sufficient to endorse an imperative not to make political compromise more difficult.Salacious headlines can harm active politicians, but arguably even greater harm is done indirectly, in ways that are impossible to measure: how many competent, civic-minded individuals decide not to enter politics because of the risk that news outlets, and specifically their sensationalist headlines, might damage their reputation?How much democratic participation is hindered by the fear of being attacked by a sensationalist press (and maybe subsequently being exposed to online shitstorms)?
A third form of harm can be formulated in response to an objection.It might be said that we should not be critical of sensationalist framings because at least they manage to get public attention to important topics, e.g. the abuse of political office.Thus, sensationalism is part of the media's role of holding those in power accountable, this critic might say.One can and should indeed acknowledge that it may sometimes be justified to quickly draw attention to an issue and to use drastic rhetoric for that purpose.'Lame' headlines may not fulfil the media's watchdog function very well: while they may help preserve objectivity and avoid premature judgments, they may also create a risk that important issues are drowned in the steady stream of other news.Thus, one might imagine that too 'civilized' a press would fail to create incentives for politicians and other powerful individuals and organizations to fulfil their role well.
But the problem here is that if everything is a scandal, nothing is.Hypersensationalist media lose the ability to make clear when something truly out-of-the-ordinary happens.Sensationalist framing presents too many news items as worthy of the audience's sense of outrage.This creates the risk that it becomes impossible to distinguish between what are real scandals and what is just made to appear so in order to get more clicks from viewers.When the general standards for how to frame items have been pushed down in the self-reinforcing spiral I have described above, then all headlines scream at the top of their voice, metaphorically speaking.This leaves no room for special attention when really extraordinary events take place.The harm done is thus a decrease in the ability of democratic publics to differentiate between the importance of news items.
Fourth, the framing by media outlets is superimposed onto, and potentially distorts, another form of framing that is, arguably, unavoidable and even desirable for democratic politics.One might think that all framing is bad, but this would be too quick a conclusion.To a certain extent, framing is unavoidableafter all, political actors need to articulate issues and present them to voters, thereby inevitably also framing them.As Disch has argued, it is one of the roles of political parties to offer competing frames of political issues. 25Political parties present 'issues' in different ways, and the battle around frames is a constant feature of democratic politics.It allows citizens to think about alternative frames and to evaluate their strengths and weaknesses. 26To be sure, some such frames may also be problematic from a normative perspective, for example, if they demonize certain groups.But this is a problem of political ethics, which is beyond the scope of this article.
From the perspective of media ethics, what matters is that sensationalist framing by the media superimposes a second layer of framing onto the political framing done by parties.And this framing follows a different logic: market competition instead of party competition.Political parties have to compete for votes, on the basis of 'one person, one vote'. 27Media outlets, in contrast, follow a different competitive logic: while they also aim at maximizing their reach, what they want to maximize are profits.And because they make profits through advertisement, there are distortions when customers have differential purchasing power.Groups that have low purchasing power and are thus not interesting from an advertising perspective do not 'deserve' coverage, from that perspective.As Baker summarizes his analysis of the economic incentives of media outlets: 'the media system is biased toward content connected to marketable products and services and is tilted away from content valued by the poor'. 28here are, thus, various forms of harm that can arise from sensationalist framingnot to mention the additional harms, often connected with them, that come from the lowering of other dimensions of journalistic standards.It is therefore of great importance to prevent media systems from sliding into competitive spirals in which news outlets 'chase the clicks' with greater and greater sensationalism.In the next section, I discuss what could be done to respond to such a dynamic, drawing on literature from business ethics.
Upholding Standards in Competitive Contexts
Having sketched the competitive dynamics that push news outlets towards sensationalism, and the harmfulness of the latter, I now turn to the question of what, if anything, can be done to address such a situation.My account is normative, but also descriptive, in the sense that I take it that in those parts of the media system where journalists are able to resist these pressures, it is thanks to some of the elements I describe.I first briefly explain why professional ethics alone is not sufficient and then look at the limited role that legal regulation can play.I turn to business ethicsand in particular Heath's suggestion of an 'ethics of competition' 29 to analyze further options, and then I return to a proposal that is also an element of professional ethics, namely the role of social norms.Social norms can play an important role in stabilizing journalistic standards, but there are also drawbacks, such as the problem of self-censorship.In fact, analyzing the ambivalence of social norms can illuminate some current developments in media outlets that aim at upholding journalistic standards.
Let's first consider what insights one can gain from professional ethics.Ideals of professionalism and professional ethics have seen a cautious revival in recent years. 30hese new proposals suggest forms of professionalism that are more transparent and accountable to the public than past forms, and that acknowledge the importance of different forms of knowledge, including lay people's knowledge. 31Whether or not journalism should be seen as a profession is a notoriously contested issue 32 one that I will not try to resolve here.Instead, let me draw attention to some differences between typical cases of professional ethics, such as medicine, and journalism.
The paradigmatic constellation to which professional ethics responds is that of an asymmetry of knowledge (or know-how) between lay people and experts, which the latter must not use at the cost of the former.Professional experts have a duty to support the goals of their clients, whether these concern health, legal protection, or spiritual guidance, to name the three classic professions.Many aspects of journalism ethics, in contrast, concern not so much the vulnerability of single individuals, but the responsibility towards society as a whole. 33The point is not (only) to protect vulnerable lay people, but rather to maintain standards of quality and integrity, to make sure that journalism can fulfil its societal function.Professional ethics is not ideally positioned to respond to this constellation, nor does it provide specific guidance with regard to the competitive pressures I have described above.
A more suitable approach is business ethics, in which the tension between ethical standards and competitive pressures is at the center of many discussions.Many business ethicists agree that market competition as such can be justified under certain conditions, but that it can have harmful consequences if it pushes companies towards violations of ethical standards, e.g. by exploiting employees or polluting the environment.Such forms of behavior often create negative externalities.Both from a perspective of efficiency and from a perspective ethics, the first-best strategy for dealing with them is legal regulation: by making certain options illegal and sanctioning them, the competitive pressures on companies are channeled into other directions. 34For example, environmental standards are classic tools by help of which regulators can make sure that market competition does not undercut certain environmental norms and thereby harm society.
Would such legal regulation also be a possible strategy for reining in the competitive pressures that push journalists towards sensationalism (or, for that matter, towards other violations of journalistic standards)? Here, one has to tread with great care. In contrast to other industries, the media have to fulfil specific roles in society, especially their watchdog function towards governments, that make all legal regulation potentially suspicious, raising fears of sliding into illegitimate forms of censorship. Such censorship is unconstitutional in many countries, e.g. the United States, where it is a violation of the First Amendment. 'Freedom of the press' is an important principle in democratic countries, and all forms of legal regulation, even when issued with the best of intentions to rein in harmful competitive pressures, are therefore potentially problematic.
Does this mean that no legal regulation can be legitimate? Not quite. There are good arguments for banning certain forms of hate speech that hinder the equal participation of all citizens in public discourse.35 Adjustments might also be possible with regard to defamation laws and the definition of reputational damage. If individuals can sue media outlets for being portrayed in certain ways, this might reduce certain forms of sensationalism (without creating risks for the freedom of the press, as the experiences of countries with stricter defamation laws show36). Moreover, there can be indirect legal strategies, for example regulating the degree of competition in media markets. Theoretically, at least for some media, such as TV and radio, states could lower the intensity of competition by handing out fewer licenses. However, in today's world, the genie of unlimited numbers of channels has been let out of the bottle. And with the internet as a medium, there seems to be no way back to less competitive constellations, at least none that would not appear dangerously close to problematic forms of censorship. Thus, while some legal regulation can be part of an answer, it is unlikely to be sufficient to fully address the problem.
Given this constellation, we can next turn to the question of which individual ethical responsibilities lie with journalists. The question of how individuals should behave in competitive situations has been explored by Joseph Heath in his 'ethics of competition',37 which is based on the assumption that whenever externalities or other market failures cannot be prevented by legal tools, market participants need to prevent them themselves. This implies, for example, that they should not trick customers into buying products by withholding relevant information, or externalize costs by polluting the environment, even when this would go unnoticed. Heath's framework focusses on market efficiency; for the media, this is not the only relevant normative standard. But we can nonetheless draw on his point that individuals should follow the logic of an institution even when 'the referee is not looking'. Heath's metaphor of 'good sportsmanship'38 seems applicable to journalism as well: there should be an appropriate sense of what a foul consists in, and a commitment not to resort to it.
Such a motivational structure, a willingness to compete combined with a commitment not to undercut the standards of the practice in question, has long been defended by ethically-minded friends of markets. As Adam Smith wrote in a famous passage: 'In the race for wealth, and honours, and preferments, he may run as hard as he can, and strain every nerve and every muscle, in order to outstrip all his competitors. But if he should justle, or throw down any of them, the indulgence of the spectators is entirely at an end. It is a violation of fair play, which they cannot admit of.'39 Even in one of the most infamous texts of free-market thinking, Milton Friedman's 1970 piece on 'the social responsibility of business' being to 'increase its profits', there is a rarely noticed phrase that holds that businesses should do so 'while conforming to the basic rules of the society, both those embodied in law and those embodied in ethical custom'.40 Applying this logic to the problem of competitive pressure towards sensationalism (or other forms of lowering journalistic standards) results in a duty of individual journalists to uphold standards even when there is no legal regulation. Is this a realistic proposal? It seems that without positive reinforcement for ethical behavior, it is a rather shaky strategy. Depending on how strong the competition is, those who do not participate in unethical behavior may simply go out of business, or eventually lower their standards in order to survive. Either way, unethical players will survive, while others will be driven out of the market. It is a complex ethical question to what extent the risk of going out of business might justify forms of behavior that are otherwise not justified. I take it that it cannot be answered without looking at the specific features of particular cases and the concrete implications that different forms of behavior would have.
What can be said, with a sufficient level of generality, is that an 'ethics of sportsmanship' creates specific dilemmas and tensions for journalists. One is that there is a constant temptation to lower one's standards, which makes it difficult to habitualize ethical behavior. Another, related problem is that there is a great likelihood of psychological tensions, because one asks oneself, for every decision one makes, whether one might be rationalizing forms of behavior that actually fall below the standards one is committed to. It is all too easy to tell oneself that one is making 'just one exception', without considering that by doing so, one also sends a signal to other players, to which they might react by also lowering their standards. Thus, an 'ethics of sportsmanship', even when not directly driving ethical individuals out of business, is extremely demanding in terms of the psychological toll it takes on individuals in highly competitive systems who try to live up to higher ethical standards.
However, it is not necessarily the case that those who uphold ethical standards will be driven out of a system. Whether or not this happens depends on a number of contextual factors that determine towards which equilibrium a system gravitates. In what follows, I will focus in particular on one such contextual factor, namely the existence of social norms.
As sociologists and social philosophers have long emphasized, the desire to stick to social norms, to avoid being censured by one's fellow human beings, and instead to receive their approval, is a powerful motivation, for good or for evil.41 In recent years, after a long period of relative neglect, there has been increasing interest in social norms among philosophers. For example, Bicchieri has emphasized the importance of social norms for coordinating human behavior, often on an unconscious level, but also the possibility of being collectively trapped in a suboptimal state of affairs because of social norms.42 As such, social norms have a descriptive dimension (individuals expect that others will follow them) and a normative one (individuals expect that others should follow them).43 It is this role of social norms as a coordination device that is also of interest for the current topic: if all journalists follow a social norm not to use sensationalist framings, then there is no danger of a downward spiral. As Hlobil argues, we often do not even consider options that would violate social norms.44 If such norms are internalized, then it becomes a matter of one's identity as a journalist that there are certain things that one simply 'does not do'. This is a point that has long been emphasized in the literature on professional ethics45 and which can also be integrated into the overall conception of an 'ethics of sportsmanship': if one's peers, the other players, also stick to certain norms, then the game as a whole can take place in an equilibrium in which certain moves are simply 'off the table'.

However, it is not clear whether social norms among the players alone (to stick to the metaphor) would be sufficient to maintain a stable equilibrium. It also seems crucial that the audience follows the same set of norms: that it does not cheer, but rather boo, when there is a foul that the umpire does not notice. Translated to journalism, this means that readers, listeners, or viewers also need to be sensitive to violations of the norms and sanction them. As Wyatt notes, in an essay on the 'ethical obligations of news consumers', 'we', meaning citizens and readers, 'get the journalism we deserve'.46 As Benkler and his coauthors show, for those parts of the US media system in which journalistic standards are held up, there is indeed a self-reinforcing dynamic, of which the audience is also part, that stabilizes fact-oriented norms.47

Does this mean that once such a system of social norms is in place, everything is well? In parallel to the issues around legal regulation, there are some specific challenges around the role of social norms in the media. Whenever one speaks of 'social norms', friends of the freedom of expression are likely to be reminded of John Stuart Mill's powerful warnings against vigilantism and self-censorship.48 Negative sanctions, even informal ones such as those used to enforce social norms, can be problematic from that perspective. The Society of Professional Journalists in fact forgoes any 'quasi-judicial system' that would punish violations of its code of ethics.49 Instead, its strategy is to 'encourage the use of the Society's Code of Ethics' and to showcase 'case studies of jobs well done under trying circumstances'.50
Positive reinforcement of good practices might seem able to avoid problems of peer censorship, at least at first glance. Moreover, by having experienced colleagues provide evaluations, one can make sure that those who make the judgments have a sufficient sense of the complexities and tensions journalists find themselves in.
But even this strategy can have its pitfalls. For example, the withdrawal of positive approval might, under certain conditions, amount to a form of social sanctioning that individuals might try to avoid by self-censorship. This can happen if certain forms of approval become the rule rather than the exception, making cases in which approval is withheld clearly visible. Moreover, there may be a desire for harmony within the occupational group of journalists, in places where antagonism, and maybe sometimes even certain (nonviolent) forms of aggression, may be needed for journalism to fulfil its role. Each professional community probably has its stories about unpopular outsiders who were long ridiculed or even ousted, but who then turned out to be right; journalism seems to have many, and seems to need such people very much!
We thus face a dilemma. On the one hand, social norms are important in order to counteract the competitive pressures to slide into sensationalism (or to violate journalistic standards in other ways). On the other hand, social norms can foster conformism and self-censorship, which make it impossible to fulfil all the roles that the media have in a democracy. This constellation is one for which no stable, one-size-fits-all solution can be found. The adjustment of the right balance between social norms that are too weak and too strong, and the application of these norms to new phenomena, are challenges that require constant adaptation and ongoing debates: about the ethical standards for journalistic framing that democratic societies need, about the changes in incentive structures that technological developments bring, about the appropriateness of peer approval, about the psychological dilemmas that journalists experience, and about appropriate ways of responding to them. Socially established practices must remain open to criticism, which, however, also means that they remain vulnerable to those who want to undermine them with insincere intentions or who want to put pressure on others to self-censor.
I have postulated the existence of such metadebates as a logical consequence of the ambiguous role of social norms as an element of media ethics. But we can also easily find examples of such debates in reality, especially among those journalists who try to uphold high standards in the face of intense competition from outlets that do not. For example, in summer 2020, Bari Weiss, a columnist of The New York Times, stepped down from her post and accused her former colleagues of 'bullying', claiming that she had witnessed various cases of self-censorship.51 Her critics, however, held that Weiss had repeatedly violated norms that they considered important to uphold, beleaguered as they were (or felt) by an unscrupulous right-wing press.52 I cannot adjudicate this particular case here. My point is merely that these are exactly the kinds of controversies we should expect, given the tension between the need to uphold certain social norms and the risks of conformity and self-censorship. Such metadebates may seem onerous or even a detraction from the actual roles that journalists are supposed to fulfil. But given the complex ethical landscape within which they have to operate (and of which I have here discussed only one dimension) it seems unavoidable to have such debates. Understanding them as an unavoidable consequence of the fact that there is both a need for strong social norms and a risk that strong social norms lead to self-censorship might help to see them as a normal part of the process. This could maybe contribute to conducting them in a more sober, less vitriolic way.
Conclusion
In this article, I have discussed the challenges for upholding journalistic norms in a competitive environment, focusing on the framing of issues and the avoidance of sensationalism. I have explored why this issue has gained weight in the era of online communication and analyzed the incentive structures for journalists and the ways in which sensationalism causes harm. I have argued that it is a kind of collective harm that needs to be avoided by journalists, collectively, sticking to certain social norms, as part of an 'ethics of sportsmanship'. And yet, if such social norms become too strict, this in turn creates risks, making constant metareflection unavoidable from an ethical perspective.
In the introduction to his edited volume on journalism ethics, Meyers emphasizes, time and again, that journalistic ethics is 'hard work'.53 Part of what makes it so hard, according to the arguments I have presented in this article, is the challenge of maintaining standards under competitive pressures. In online environments, in which the competition for attention is perhaps fiercer than ever, and in which trolls and bots play their dirty games, finding the right tone for headlines and framing political issues in appropriate ways is a key task for journalists. While more could be done to support them through appropriate laws and regulations, such steps can only go so far, because the media also need to have the freedom to fulfil their watchdog function, and competition is needed to enable the expression of a pluralism of worldviews. Finding this delicate balance, again and again, is indeed 'hard work', but it is work that needs to be done by journalists in democratic societies.
Lisa Herzog, Faculty of Philosophy, University of Groningen, Groningen, The Netherlands. l.m.herzog@rug.nl
Spinning compact binary inspiral: Independent variables and dynamically preserved spin configurations
We establish the set of independent variables suitable to monitor the complicated evolution of the spinning compact binary during the inspiral. Our approach is valid up to the second post-Newtonian order, including leading order spin-orbit, spin-spin and mass quadrupole-mass monopole effects, for generic (noncircular, nonspherical) orbits. Then we analyze the conservative spin dynamics in terms of these variables. We prove that the only binary black hole configuration allowing for spin precessions with equal angular velocities about a common instantaneous axis roughly aligned to the normal of the osculating orbit, is the equal mass and parallel (aligned or antialigned) spin configuration. This analytic result puts limitations on what particular configurations can be selected in numerical investigations of compact binary evolutions, even in those including only the last orbits of the inspiral.
I. INTRODUCTION
Compact objects are characterized by their size and gravitational radius being comparable. They appear either as the end state of stellar evolution, as neutron stars or black holes with a few solar masses (M_⊙), or emerge from cosmological evolution by continued accretion and a sequence of mergers [1] as supermassive black holes of 3×10^6 to 3×10^9 M_⊙, residing in the centers of galaxies. Not much evidence has been gathered for the existence of intermediate mass black holes (IMBH), although a detection of a variable X-ray source of over 500 M_⊙ in the galaxy ESO 243-49 has been recently reported and interpreted as an IMBH [2]. It has been proposed that IMBHs ought to be searched for in globular clusters that can be fitted well by medium-concentration King models [3].
Compact objects are expected to frequently coexist in binary systems, formed either by the evolution of a stellar binary, by capture events, or accompanying the process of galaxy mergers. According to general relativity, compact binaries radiate away gravitational waves, a process leading eventually to their merger. Stellar mass binaries are among the most prominent sources for the Earth-based gravitational wave detectors LIGO and Virgo [4], while the gravitational waves produced during (low mass) galactic black hole mergers will be sought by the long-planned space mission LISA [5].
The merging process can be split into three distinct phases. By definition the inspiral is the regime of orbital evolution which can be described accurately in terms of a post-Newtonian (PN) expansion. Provided the orbits are not excessively eccentric, the same PN parameter, ε ≈ Gm/(c²r) ≈ v²/c², characterizes both weak gravity and non-relativistic motion. A manifestly convergent and finite procedure for calculating gravitational radiation to arbitrary orders in a PN expansion was proposed [6], based on solving a flat-spacetime wave equation (representing the Einstein equations with a harmonic gauge condition) as a retarded integral over the past null cone of the chosen field point. A study of the Cauchy convergence of PN templates shows an oscillatory behavior: increasing the PN order will not necessarily result in a better template [7] (2PN templates being closer to numerical results than their 2.5PN counterparts). The predictions of various PN approximants (adiabatic Taylor and Padé models, non-adiabatic effective-one-body models) show that their convergence to numerical results is comparable [8]. It is also known that alternative template families based on the shifted Chebyshev polynomials can exhibit faster Cauchy convergence than PN templates [9]. Comparisons with full general relativistic numerical runs confirmed that a third PN order approach can be considered accurate for all practical purposes. The inspiral is followed by the plunge, where a full general relativistic treatment is necessary and can be handled only numerically, and the ringdown, a process during which all physical characteristics of the newly formed compact object are radiated away, except mass, spin and possibly electric charge. In this paper we investigate the conservative dynamics during the inspiral of a spinning compact binary system. We include spin-orbit (SO), spin-spin (SS) and mass quadrupole - mass monopole (QM) couplings, each to leading order. The precession due to these interactions was first discussed in [10]-[11]. With the spins and mass quadrupole moments included, the number of variables in the configuration space increases considerably; therefore we propose to find a minimal and conveniently chosen set of independent variables.
We note discussions of various aspects of the dynamics and gravitational radiation related to the SO coupling in [12]-[14], the SS coupling in [14]-[16], and the QM coupling in [17]-[19]. PN corrections to the SO coupling were presented in [20] and the Hamiltonian approach including spins has also been worked out [21]. Most recently, the back-reaction on the dynamics due to asymmetric gravitational wave emission in the spinning case, possibly leading to strong recoil effects, has been widely investigated, both analytically [14], [22] and numerically for particular spin configurations [23]. Empirical formulae giving the "final spin" have been advanced in Refs. [24] and some of them compared in [25]. Zoom-whirl orbits (generic for particles orbiting Kerr black holes [26]) were also found in the framework of the PN formalism [27]. A larger spin increases the likelihood of the apparition of such orbits [28]. Gravitational wave emission is held responsible for the occurrence of the spin-flip phenomenon [29]-[30] in X-shaped radio galaxies [29], [31]. Recently it has been shown that for a typical merger with mass ratio of about 0.1 the combined effect of SO precession and gravitational radiation will result in the spin-flip occurring during the inspiral [32].
In Sec. II we introduce the set of dynamical and configurational variables characterizing the compact spinning binary. Both the configurational and a subset of the dynamical variables depend on the choice of the reference system. We use four such systems, to be defined in subsection II B, only one of them inertial, the other three being adapted to the binary configuration. In subsection II C we derive two relations among the time derivatives of the introduced angular variables. As a result the time evolution of the configurational variables is determined by the evolution of one single configurational angle α and the true anomaly χ_p. At the end of this section we express the position and velocity vectors in the chosen reference systems. As a by-product we recover the true anomaly parametrization of the radial evolution, valid for the chosen perturbed Keplerian setup.
Sec. III introduces the angles characterizing the angular momenta (total and orbital angular momenta and spins). The number of independent variables characterizing them is shown to be 6. We will choose them either as 5 angles and a scale, or equivalently as 3 angles and 3 scales.
In Sec. IV we analyze the conservative evolution of the spins, which is purely precessional, with the inclusion of the leading order spin-orbit, spin-spin and mass quadrupole - mass monopole couplings. We clarify the order (both PN and in the mass ratio) at which the various contributions occur. Then we investigate whether there are spin configurations conserved by the precessions, and we derive a no-go result.
The gravitational constant G and the speed of light c are kept in all expressions. For any vector V we denote its Euclidean magnitude by V and its direction by V̂.
A. Variables
We consider three distinct sets of variables. (a) The physical parameters of the binary: The two compact objects are characterized by masses m_i, spins S_i (i = 1, 2) and mass quadrupole moments.

Equivalently to m_i we can use the total mass m ≡ m_1 + m_2 and the reduced mass μ ≡ m_1 m_2 / m. We assume that m_1 ≥ m_2. We also introduce the mass ratio ν ≡ m_2/m_1 ≤ 1 and the symmetric mass ratio η ≡ μ/m ∈ [0, 0.25]. The two mass ratios are related as η = ν/(1+ν)², and for small ν we have η = ν − 2ν² + O(ν³). Further useful relations among the mass parameters follow directly from these definitions. Equivalently to S_i we can introduce their magnitudes and their polar and azimuthal angles. It is convenient to define dimensionless spin magnitudes χ_i ∈ [0, 1] by S_i = (G/c) m_i² χ_i. As for the spin angles, they depend on the chosen reference system. We will discuss various possibilities in detail in Section III.
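As a cross-check of the mass-ratio relations above (a short verification; only the standard definitions of ν and η are assumed):

\[
\eta \equiv \frac{\mu}{m} = \frac{m_1 m_2}{(m_1+m_2)^2} = \frac{\nu}{(1+\nu)^2},
\qquad
\eta = \nu\left(1 - 2\nu + 3\nu^2 - \dots\right) = \nu - 2\nu^2 + O(\nu^3),
\]

in agreement with the small-ν expansion quoted in the text.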
We consider axisymmetric compact objects. Therefore the mass quadrupole of the i-th axially symmetric binary component is characterized by a single quantity Q_i, its quadrupole-moment scalar [17]. Provided the quadrupole moment originates entirely in rotation (which we shall assume), the symmetry axis is Ŝ_i and Q_i = −w S_i²/(c² m_i), with the parameter w ∈ (4, 8) for neutron stars, depending on their equation of state, stiffer equations of state giving larger values of w [33], [17]. For rotating black holes w = 1 [34]. The negative sign arises because the rotating compact object is centrifugally flattened, becoming an oblate spheroid. (b) Dynamical variables: Up to 2PN accuracy the energy E and the total angular momentum vector J ≡ L + S_1 + S_2 are conserved [12]. The orbital angular momentum L and the spins S_i are not conserved separately, as the spins undergo a precessional motion. This will be discussed in detail in Section IV.
(c) Angular variables characterizing the orbit: The instantaneous orbital plane is perpendicular by definition to the Newtonian orbital angular momentum L_N ≡ μ r × v, and it evolves due to the spin precessions. We define (i) the inclination α of the orbital plane with respect to the plane perpendicular to J (thus α is the angle spanned by L̂_N and Ĵ); (ii) the angle φ_n between the intersection l̂ of these two planes and an (arbitrary) inertial x-axis x̂ taken in the plane perpendicular to J; finally (iii) the angle ψ_p measured from l̂ to the periastron (see Figs. 1 and 2; the indices p and n stand for the periastron and node line, respectively).
Figure 1: The polar and azimuthal angles of the total angular momentum J = J Ĵ, the Newtonian orbital angular momentum L_N = L_N L̂_N and the spins S_{1,2} = S_{1,2} Ŝ_{1,2}. Azimuthal angles are shown in the non-inertial system K_J ≡ (l̂, k̂, Ĵ); polar angles both in K_J and with respect to L̂_N. The relative angle of the spins is γ. The non-inertial character of the system K_J is encoded in the evolution of the angle φ_n, measuring the angular separation of an inertial axis x̂ from the axis l̂.
B. Reference systems
For a better bookkeeping we introduce the inertial system K_i, with x̂ and Ĵ standing as the x- and z-axes, and three non-inertial systems K_J, K_L and K_A.
Figure 2: The relative angles of the total angular momentum J = J Ĵ, the Newtonian orbital angular momentum L_N = L_N L̂_N and the spins S_{1,2} = S_{1,2} Ŝ_{1,2}, as in Fig. 1. The intersection of the planes perpendicular to L̂_N and Ĵ, respectively, is the node line l̂. The inertial axis x̂ is at angle φ_n, measured from l̂ in the plane perpendicular to Ĵ. The azimuthal angles (ψ_1, ψ_2, ψ_p) of the spins and of the Newtonian Laplace-Runge-Lenz vector A_N = A_N Â_N (pointing towards the periastron) are also measured from l̂, however in the plane perpendicular to L̂_N. The true anomaly χ_p is the angle between Â_N and the position vector r = r r̂. Two of the basis vectors of the inertial reference system K_i ≡ (x̂, ŷ, Ĵ) and of each of the three non-inertial reference systems K_J ≡ (l̂, k̂, Ĵ), K_L ≡ (l̂, m̂, L̂_N), K_A ≡ (Â_N, Q̂_N, L̂_N) are shown.
In the system K_J the z-axis is fixed along Ĵ, while in K_L along L̂_N. We choose l̂ = Ĵ × L̂_N / sin α as the x-axis of both systems. The system K_J is completed by k̂ = Ĵ × l̂ and K_L by m̂ = L̂_N × l̂.
The system K_A also has L̂_N as the z-axis; however, its x-axis is defined by the Laplace-Runge-Lenz vector A_N ≡ v × L_N − Gmμ r̂, which satisfies the constraints A_N² = G²m²μ² + 2E_N L_N²/μ and L_N · A_N = 0. Here r and v are the position vector and velocity of the reduced mass particle orbiting m. The y-axis is defined by Q̂_N = L̂_N × Â_N. The three angles (φ_n, α, ψ_p) will be referred to occasionally as Euler angles, as three consecutive rotations with −φ_n, α and ψ_p about the axes z, x and again z transform as K_i → K_J → K_L → K_A. The sequence of these rotations is encompassed in the transformation matrix

\[
R_z(\psi_p)\, R_x(\alpha)\, R_z(-\varphi_n) =
\begin{pmatrix}
\cos\psi_p \cos\varphi_n + \sin\psi_p \cos\alpha \sin\varphi_n &
-\cos\psi_p \sin\varphi_n + \sin\psi_p \cos\alpha \cos\varphi_n &
\sin\psi_p \sin\alpha \\
-\sin\psi_p \cos\varphi_n + \cos\psi_p \cos\alpha \sin\varphi_n &
\sin\psi_p \sin\varphi_n + \cos\psi_p \cos\alpha \cos\varphi_n &
\cos\psi_p \sin\alpha \\
-\sin\alpha \sin\varphi_n & -\sin\alpha \cos\varphi_n & \cos\alpha
\end{pmatrix},
\]

where R with one argument denotes the corresponding rotation matrices acting on the coordinates.
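As a quick numerical cross-check of the matrix above (a minimal sketch; the factor ordering R_z(ψ_p) R_x(α) R_z(−φ_n) is inferred from the stated rotation sequence, and the test angles are arbitrary), one can verify the product of the three elementary passive rotations against the explicit entries:

import numpy as np

def Rz(t):
    # passive rotation of coordinates about the z-axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(t):
    # passive rotation of coordinates about the x-axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

psi_p, alpha, phi_n = 0.7, 0.4, 1.1   # arbitrary test angles

R = Rz(psi_p) @ Rx(alpha) @ Rz(-phi_n)

# explicit matrix, entry by entry as printed in the text
cp, sp = np.cos(psi_p), np.sin(psi_p)
ca, sa = np.cos(alpha), np.sin(alpha)
cf, sf = np.cos(phi_n), np.sin(phi_n)
R_explicit = np.array([
    [cp*cf + sp*ca*sf, -cp*sf + sp*ca*cf, sp*sa],
    [-sp*cf + cp*ca*sf, sp*sf + cp*ca*cf, cp*sa],
    [-sa*sf, -sa*cf, ca],
])

assert np.allclose(R, R_explicit)  # the product reproduces the printed entries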
C. Constraints on the Euler angle evolutions
The coordinates of the reduced mass particle in the inertial system K_i can be obtained by applying the transformation R(−ψ, −α, φ_n) to the coordinates of the vector r = r(1, 0, 0). Here ψ = ψ_p + χ_p is the angle spanned by l̂ and r, with χ_p defined as the true anomaly, the angle spanned by Â_N and r̂. A tedious but straightforward computation of the components of L_N = μ r × v, carried out in the system K_i, yields Eqs. (11) and (12); dividing the third component (which by definition is (L_N)_z = L_N cos α) by cos α gives an expression for L_N/(μr²) in terms of ψ̇, φ̇_n, α̇ and the angles.

The resulting combination of these relations factorizes into two terms. The first factor cannot vanish, as to Newtonian order it gives 2 tan α cos ψ L_N/(μr²) ≠ 0; therefore the vanishing of the second factor (after reintroducing ψ = ψ_p + χ_p) yields Eq. (14), which determines φ̇_n. Reinserting this into either of Eqs. (11) or (12) gives Eq. (15) for ψ̇_p. We have thus derived two relations among the time derivatives of the Euler angles and of the true anomaly, which restrict the number of independent angular variables introduced up to now to α and χ_p.
D. The position and velocity vectors in the bases K_A and K_L
A simple computation starting from the definitions of A_N and Q_N gives their components, from which the expressions of the position and velocity vectors in K_A follow. In terms of the true anomaly χ_p (the azimuthal angle of r in the system K_A), the position vector is given by r = r(cos χ_p Â_N + sin χ_p Q̂_N), which, compared with Eq. (17), gives the true anomaly parametrization of the radial evolution. The velocity can likewise be expressed in terms of the true anomaly, and its square gives v² as a function of χ_p. (The same emerges from the definition of the Newtonian energy E_N ≡ μv²/2 − Gmμ/r, by applying Eqs. (7) and (20).) As the basis vectors of K_A are related to the basis vectors of K_L by a rotation with angle ψ_p, Â_N = cos ψ_p l̂ + sin ψ_p m̂, it is straightforward to rewrite r and v in the basis K_L as r = r(cos ψ l̂ + sin ψ m̂), together with the corresponding expression for v.
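The true anomaly parametrization mentioned above follows directly from the Laplace-Runge-Lenz construction (a brief reconstruction, using only the definitions of A_N and χ_p given in this section; the original equation numbering is not reproduced):

\[
A_N\, r \cos\chi_p = \mathbf{A}_N \cdot \mathbf{r}
= \mathbf{r} \cdot (\mathbf{v} \times \mathbf{L}_N) - G m \mu\, r
= \frac{L_N^2}{\mu} - G m \mu\, r
\quad\Longrightarrow\quad
r = \frac{L_N^2/\mu}{G m \mu + A_N \cos\chi_p}\,.
\]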
By comparing the two forms of the l̂ component of the vectors Ŝ_i we get sin κ_i cos ψ_i = sin β_i cos φ_i.
By computing Ŝ_i · L̂_N in both systems we find cos κ_i = cos α cos β_i − sin α sin β_i sin φ_i.
As the orientations of the spins are independent, we obtain Eq. (34); however, the direct computation of its left-hand side by employing Eqs. (31) and (35) results in its right-hand side, leading to an identity rather than an expression of α as a function of κ_i, ψ_i. Therefore Eq. (34) can be considered a consequence of the other equations. Similarly one can show that Eqs. (32) are consequences of the other equations. We conclude that there are 5 independent constraint equations for the 10 angles (α, β_i, φ_i, κ_i, ψ_i, γ), namely Eqs. (31), (35) and (33), and we can take (α, κ_i, ψ_i) as the independent angles. The network of all angles in the systems K_J and K_L is represented in Figs. 1 and 2, respectively.
B. Orbital angular momentum
The total orbital angular momentum L contains pure general relativistic (PN, 2PN) and spin-orbit (SO) contributions [14]. There are no spin-spin or quadrupole-monopole contributions to the orbital angular momentum [18]. Here the L_PN and L_2PN contributions are aligned with L_N (cf. Eq. (2.9) of Ref. [14]), while the SO contribution (Eq. (2.9c) of Ref. [14]) can be rewritten in terms of the spins. In order to evaluate the PN order of the L_SO contribution in J, we evaluate it on circular orbits; the estimates continue to approximately hold for eccentric orbits. This reasoning shows that the SO contribution is of 1.5 PN order and also indicates how to pick up the dominant terms when the mass ratio is small or when one would like to employ a less accurate but simpler description, dropping higher order terms. The total angular momentum is then given by Eq. (45).

C. One scaling degree of freedom

In this subsection we will employ the projections of Eq. (45) in order to derive relations between the angles and magnitudes of the angular momenta involved. In the K_L system the projections along the axes l̂, m̂ and L̂_N give Eqs. (46)-(48), respectively, where the coefficients ǫ_PN, ǫ_2PN, ǫ_ri and ǫ_vi are defined. In the derivation of Eqs. (46)-(48) we have employed Eqs. (25), (26), (29), (30), from which we also obtained v, given by Eq. (23). We have thus introduced the 14 quantities (J, L, χ_i, α, β_i, φ_i, κ_i, ψ_i, γ) describing the angular momenta, which are constrained by 8 independent relations. This leaves us with 6 independent variables; 5 of these can be thought of as the angles defining the directions of the spins and orbital angular momentum in the K_L system (α, κ_i, ψ_i), the sixth one being a linear scale, most conveniently chosen as J.
Note that in Eqs. (46)-(48) the coefficients ǫ_PN, ǫ_2PN, ǫ_ri, ǫ_vi depend only on the masses and χ_p. Therefore all dependences on ψ_i are explicit. In principle Eqs. (46)-(47) can be used to express ψ_i as functions of κ_i, α, ψ_p, the masses and the spins χ_i. In practice, however, this may be cumbersome. The easiest way to do it is to rewrite both sin ψ_i and cos ψ_i in terms of the variables x_i = tan(ψ_i/2). Then Eqs. (46)-(47) become second-order coupled polynomial equations, possibly leading to two distinct values of ψ_i for each χ_i.
Finally, Eq. (48) can be employed to eliminate L_N in favor of the angular variables, the spins and J, by a series expansion in ε to 2PN order accuracy, where L_N,0 denotes the leading order contribution to the orbital angular momentum, arising when we approximate J as the sum of the Newtonian orbital angular momentum and the spins. For convenience we also give the explicit expression (54).
D. Summary: the independent variables
The considerations in this section leave us with the following alternative sets of independent variables, all characterizing the angular momenta: (α, κ_i, ψ_i, J) or (α, κ_i, χ_i, J). The second set represents the most advantageous choice of variables. Most notably, while ψ_i are constant over the orbital time-scale, they vary with the precessions. By contrast, χ_i are constant over the precession time-scale as well; moreover, they are unaffected by gravitational radiation reaction to quite high PN orders. Also, J stays constant up to 2PN accuracy (thus over the precession time-scale), as opposed to either of L, L_N, L_N,0; it changes only over the radiation time-scale.
Once the evolution of χ_p is known, the other two Euler angles (φ_n, ψ_p) become determined by the rest of the variables through Eqs. (14) and (15).
IV. SPIN EVOLUTION
The spins obey a purely precessional motion, Ṡ_i = Ω_i × S_i, as was derived for bodies with arbitrary, but constant, mass, spin and quadrupole moments (Eqs. (39) and (43) of Ref. [10]; see also Ref. [11]), with the angular velocities Ω_i consisting of SO, SS and QM contributions. The latter come from regarding each of the binary components as a mass monopole moving in the quadrupolar field of the other component. The precessional angular velocity is accordingly decomposed as Ω_i = Ω_i^SO + Ω_i^SS + Ω_i^QM, where the SS and QM contributions involve the companion j ≠ i. The sum of the SS and QM contributions can be brought to a compact form by employing Eqs. (3)-(5). In order to evaluate the PN order of the coefficients in Eqs. (56), we employ the estimate from a footnote of Ref. [16], with T the radial period, defined as twice the time elapsed between consecutive ṙ = 0 configurations. We obtain that on the orbital time-scale the SO precession is a 1PN effect, while the SS and QM contributions appear as 1.5 PN corrections. As both the SO and SS angular velocities contain terms with O(ν⁻¹) factors, the respective precessions amplify whenever the mass ratio is small.
As Ω_i^QM ∝ χ_i, the QM precession qualifies as a self-spin effect.
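For orientation, the leading-order SO angular velocity entering this decomposition has the well-known Barker-O'Connell form (quoted here as a reconstruction, since the display equations were lost; sign and factor conventions may differ from the original):

\[
\mathbf{\Omega}_i^{SO} = \frac{G}{c^2 r^3}\left(2 + \frac{3 m_j}{2 m_i}\right)\mathbf{L}_N, \qquad j \neq i\,,
\]

which indeed contains an O(ν⁻¹) factor for the lighter body, consistent with the amplification noted above.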
A. Spin configurations preserved by precessions
With only the leading order SO precession taken into account, both spin vectors undergo a precession about L̂_N. If m_2 = m_1 also holds, the instantaneous angular velocities of the precessions are identical, and the spin configuration is preserved with respect to the osculating plane of the orbit, rigidly rotating about its normal.
With the SS and QM contributions to the spin dynamics included, the above simple picture no longer holds. In the remaining part of this section we analyze whether there are spin configurations which are preserved by the precessions, in the sense that they rigidly precess about a common rotation axis.
We will carry out this analysis order by order, starting with the leading order SO precession. One possibility is that both spins are either aligned or antialigned with the orbital angular momentum, Ŝ_i = ±L̂_N; then there is no precession at SO order. Moreover, at the next order we immediately obtain Ω_i^QM = 0 and Ω_i^SS ∝ L̂_N, such that Ṡ_i = 0. Thus, when the spins are perpendicular to the osculating orbit at some initial instant, they stay so, even with the SS and QM parts of the dynamics included.
Another possibility to consider is that the two spins precess with the same angular velocity about a common axis. We could check whether the axis defined by Eq. (56) could be this one; however, we will allow for more generic possibilities. As the S_i undergo pure precessions, one can add arbitrary contributions (G/c²r³)(P_i − 1) S_i to Ω_i without changing the dynamics, and ask whether a common instantaneous axis of precession exists for both spin vectors, about which they precess with equal angular velocities, such that Ω′_1 = Ω′_2. The condition for this is given by Eq. (60). For P_i of order unity (meaning that this axis is not very far from the normal to the osculating orbit) the leading order contribution in Eq. (60) remains the term proportional to L_N, the vanishing of which implies m_2 = m_1. At the next order, for black holes (w = 1), we obtain P_2 S_2 = P_1 S_1; thus the spins should be parallel (aligned or antialigned), and the common axis of synchronous rotation is expressed in terms of S = S_1 + S_2. Neither the axis of rotation nor the angular velocity is unambiguous, as the Ω′_i depend on the P_i; however, the axis stays close to L̂_N (no choice of P_i would render the axis of rotation exactly L̂_N). In summary, only parallel black hole spins can rotate with the same angular velocity about a common axis, provided the axis is only slightly different from the normal to the osculating orbit.
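The SO-only part of this argument can be illustrated numerically. Below is a minimal sketch (Python, geometrized units G = c = 1, with hypothetical parameter values; the Barker-O'Connell form of Ω_i^SO quoted earlier is assumed): for m_1 = m_2 the relative spin angle is preserved, while for unequal masses it drifts.

import numpy as np

def spin_drift(m1, m2, steps=20000, dt=1.0):
    # Hypothetical circular-orbit setup, G = c = 1, for illustration only.
    m, mu = m1 + m2, m1 * m2 / (m1 + m2)
    r = 100.0 * m                        # fixed orbital separation
    LN = mu * np.sqrt(m * r)             # Newtonian orbital angular momentum magnitude
    LN_vec = LN * np.array([0.0, 0.0, 1.0])
    # leading-order SO angular velocities (Barker-O'Connell form)
    om1 = (2.0 + 1.5 * m2 / m1) / r**3 * LN_vec
    om2 = (2.0 + 1.5 * m1 / m2) / r**3 * LN_vec
    # spins S_i = chi_i m_i^2 with chi_i = 0.5, tilted away from LN
    S1 = 0.5 * m1**2 * np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
    S2 = 0.5 * m2**2 * np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
    cos0 = S1 @ S2 / (np.linalg.norm(S1) * np.linalg.norm(S2))
    for _ in range(steps):               # simple Euler steps of dS_i/dt = Omega_i x S_i
        S1 = S1 + dt * np.cross(om1, S1)
        S2 = S2 + dt * np.cross(om2, S2)
    cos1 = S1 @ S2 / (np.linalg.norm(S1) * np.linalg.norm(S2))
    return abs(cos1 - cos0)              # drift of the relative spin angle

print(spin_drift(1.0, 1.0))   # ~ 0: equal masses preserve the spin configuration
print(spin_drift(1.0, 0.5))   # nonzero: unequal masses do not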
V. CONCLUDING REMARKS
In this paper we have derived the set of independent variables suitable to monitor the evolution of a compact spinning binary during the inspiral. The number of independent variables characterizing the spins and orbital angular momentum was shown to be 6. We have chosen them either as 5 angles and a scale, or alternatively as 3 angles and 3 scales. For the first choice we found it advantageous to employ the magnitude J of the total angular momentum; the angles α and κ_i spanned by the Newtonian orbital angular momentum L_N with the total angular momentum J and with the spins, respectively; and finally the azimuthal angles ψ_i of the spins in the plane of motion (perpendicular to L_N), measured from the ascending part of the node line (the intersection of the planes perpendicular to J and L_N). For the second choice we propose J, α, κ_i and the normalized magnitudes of the spins χ_i. As both J and χ_i are unaffected by the precessions, and moreover χ_i vary extremely slowly under gravitational radiation reaction, the latter set seems more advantageous. Nevertheless, expressing ψ_i in terms of χ_i is not immediate (the respective equations are provided). These 6 variables have to be supplemented by the true anomaly χ_p. The non-inertial character of the reference systems introduced in Section II can be specified through one single angle φ_n, characterizing the node line, the evolution of which is governed by the spin-orbit coupling. The orbital evolution being quasi-Keplerian, the position of the periastron is specified by an evolving angle ψ_p. As shown in subsection II C, the evolution of these two angles follows from the evolution of α and χ_p.
In this paper we have also proven a no-go result, according to which, in a 2PN accurate dynamics with the leading order SO, SS and QM precessions included, the only binary black hole configuration allowing for spin precessions with equal angular velocities about a common instantaneous axis roughly aligned with the normal to the osculating orbit is the equal mass and parallel (aligned or antialigned) spin configuration. When including only the SO precessions, the equality of the masses is required, but there is no constraint on the spin orientations. Approaching the innermost stable orbit, the PN parameter increases (leading eventually to the breakdown of the PN expansion), such that the importance of higher order contributions is enhanced. Therefore the SS and QM precessions (of higher order than the SO precession), which lead to the above constraint on the spin directions, become increasingly important. The result thus will hold up to the very last orbits of the inspiral and, to the extent the PN results approximate the dynamics well there, during the plunge. This analytic result puts limitations on what particular precessing configurations can be selected in numerical investigations of compact binary evolutions, even in those including only the last orbits of the inspiral.
Qualitative Analysis and Componential Differences of Chemical Constituents in Lysimachiae Herba from Different Habitats (Sichuan Basin) by UFLC-Triple TOF-MS/MS
Lysimachiae Herba (LH), called Jinqiancao in Chinese, is an authentic medical herb in Sichuan Province often used in the prescription of traditional Chinese medicine (TCM). However, in recent years, there has been a lack of comprehensive research on its chemical components. In addition, the landform of Sichuan Province varies greatly from east to west and the terrain is complex and diverse, which has an important influence on the chemical constituents in LH. In this study, ultrafast liquid chromatography coupled with triple-quadrupole time-of-flight tandem mass spectrometry (UFLC-triple TOF-MS/MS) was used to analyze the samples of LH from eight different habitats in Sichuan Basin. The constituents were identified according to the precise molecular weight, the fragment ions of each chromatographic peak and the retention time of the compound obtained by high-resolution mass spectrometry, combined with software database searches, standard comparisons and the related literature. Differential chemical constituents were screened using partial least squares discriminant analysis (PLS-DA) and t-tests. The results showed that a total of 46 constituents were identified and inferred, including flavonoids, phenolic acids, amino acids, tannins, fatty acids and coumarins; the fragmentation pathways of the main constituents were preliminarily deduced. According to the variable importance in projection (VIP) and p-values, four common differential constituents were screened out, 2-O-galloylgalactaric acid, quercetin 3-O-xylosyl-rutinoside, nicotiflorin and kaempferol 3-rutinosyl 7-O-alpha-l-rhamnoside. This study provides basic information for the establishment of a comprehensive quality evaluation system for LH.
Introduction
Lysimachiae Herba (LH), which is the dried whole herb of Lysimachia christinae Hance., has the effect of promoting diuresis and removing jaundice, along with anti-inflammatory and analgesic properties. It is commonly used for symptoms such as jaundice, hypochondriac pain and urolithiasis [1], and it is well known as a key medicine for the treatment of lithiasis. Hitherto, many studies have shown that LH has a variety of pharmacological effects, such as the promotion of bile secretion, as well as anti-inflammatory, analgesic, bacteriostatic and anti-gout properties [2]. Chemical constituents are the basis for the pharmacological action of traditional Chinese medicine (TCM). Phytochemical analyses have revealed that LH contains multiple chemical constituents, such as flavonoids [3][4][5], phenolic acids [4] and fatty acids [5]. In fact, the quality standard of LH recorded in the Chinese Pharmacopoeia (2020 version) mainly involves the quantification of quercetin and kaempferol [1], and there is still a lack of research on the other constituents of LH. Therefore, it is necessary and important to study its chemical constituents systematically. It is commonly believed that genuine Chinese herbs are medicinal materials produced in specific areas under particular natural conditions, with relatively concentrated production, established cultivation techniques and harvesting and processing methods, and with excellent quality and effect, as recognized by the clinical practice of TCM. LH is distributed in the provinces south of the Yangtze River and in Taiwan, but the most well-known habitat is Sichuan Province. Sichuan Province is located inland in southwest China, in the upper reaches of the Yangtze River, and its terrain is characterized by being high in the west and low in the east. The climate of Sichuan Province is affected by the topography, which can be roughly divided into two climatic zones: Sichuan Plateau and Sichuan Basin [6]. In this study, samples from eight habitats in Sichuan Basin were analyzed to systematically study the chemical constituents of LH and distinguish their differences in different habitats. The research results can preliminarily clarify the material basis of changes in different ecological environments and provide information for rational clinical application.
In recent years, plant metabolomics and liquid chromatography/mass spectrometry (LC-MS) have been widely used in the analysis of complex systems of TCM. Plant metabolomics is a technology for the comprehensive analysis of metabolites in plants, which is especially suitable for the multicomponent system analysis of TCM [7]. Combining the high separation power of liquid chromatography with the high sensitivity of mass spectrometry, LC-MS can be used to separate and analyze complex samples and identify their structures, and it has been widely used in the quantitative and qualitative analysis of complex systems of TCM. Principal component analysis (PCA) is a statistical method that uses the concept of dimension reduction to recombine variables into new composite variables, from which a small number of variables are selected to reflect most of the original information [8]. Partial least squares discriminant analysis (PLS-DA) commonly uses the variable importance in projection (VIP) value to describe the degree of contribution of a variable, which can be considered significant when VIP > 1; for the model itself, Q² > 0.5 indicates a high predictive value, while R²X and R²Y values approaching 1 indicate a more stable model. In this study, a total of 46 chemical constituents in LH were screened out, most of which were flavonoids and phenolic acids. According to the VIP obtained by PLS-DA (VIP > 1) and t-tests (p < 0.05), characteristic peaks of the differentiated chemical constituents were screened out and four common differential chemical constituents were finally identified. Following the research ideas and methods of plant metabolomics, this experiment used UFLC-triple TOF-MS/MS to analyze the chemical constituents in LH, combined with multivariate statistical analysis, to explore the variations in chemical constituents in LH from different areas in Sichuan Basin. Therefore, this study lays a certain foundation for the basic research and quality control of LH medicinal substances.
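As an illustration of how VIP scores of the kind used in this study can be computed (a minimal sketch with scikit-learn; the feature matrix X of peak areas and the binary habitat labels y are hypothetical placeholders, and this is the standard VIP formula rather than the exact software pipeline used by the authors):

import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(X, y, n_components=2):
    # PLS-DA = PLS regression on class labels coded as 0/1.
    pls = PLSRegression(n_components=n_components)
    pls.fit(X, y)
    t = pls.x_scores_      # scores, shape (n_samples, A)
    w = pls.x_weights_     # weights, shape (p, A)
    q = pls.y_loadings_    # y-loadings, shape (1, A) for a single response
    p = X.shape[1]
    # Sum of squares of y explained by each latent component.
    ss = np.sum(t**2, axis=0) * np.sum(q**2, axis=0)
    wnorm2 = (w / np.linalg.norm(w, axis=0))**2
    # VIP_j = sqrt(p * sum_a ss_a * (w_ja/|w_a|)^2 / sum_a ss_a)
    return np.sqrt(p * (wnorm2 @ ss) / ss.sum())

# Hypothetical example: 16 samples x 46 peak areas, two habitat classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 46))
y = np.repeat([0.0, 1.0], 8)
vip = vip_scores(X, y)
print(np.where(vip > 1)[0])  # candidate differential constituents (VIP > 1)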
Optimization of Extraction Conditions

Methanol at room temperature, a 1:10 solid-liquid ratio and 60 min of ultrasonic extraction were selected as the optimal extraction conditions.

Optimization of UFLC-Triple TOF-MS/MS Conditions
The effects of methanol-0.1% (v/v) formic acid in water, methanol-water, acetonitrile-0.1% (v/v) formic acid in water and acetonitrile-water as mobile phases under gradient elution on the separation of the peaks in the samples were investigated, and the methanol-0.1% formic acid in water system was found to achieve a good separation effect. In addition, compared with the electrospray positive ion mode, the negative ion mode was found to have a higher LC/MS response; hence, the samples were measured in negative ion mode in the experiment.
Identification of the Constituents in LH
According to the corresponding chromatographic and mass spectrometric conditions, the chemical constituents of LH samples from eight habitats were identified. The results showed that the constituents identified in the samples from Sichuan Bazhong were more comprehensive. Figure 1 shows the base peak chromatogram (BPC) of the LH sample from Sichuan Bazhong in negative ion mode. A total of 46 constituents were identified, including 25 flavonoids, 11 phenolic acids and 10 other constituents. The detailed information of the identified compounds is shown in Table 1, with their corresponding structures in Figure 2.
Identification of Phenolic Acids

Compound 15 gave a characteristic quinic acid fragment ion at m/z 191.0596, indicating that the compound contains a quinic acid moiety (m/z 192) and produces a caffeoyl fragment (m/z 161) upon mass spectrometric cleavage. The cleavage pathway is shown in Figure 4D. On the basis of its mode of breakage and comparison with the reference standard, compound 15 was judged to be chlorogenic acid.
Identification of Amino Acids
Amino acids are a class of organic compounds containing amino and carboxyl groups. In negative ion mode, the secondary mass spectra show fragmentation peaks of [M − H − NH3]− and [M − H − CO2]−, which may be generated by the loss of NH3 and CO2 from amino acids. The three amino acids were identified mainly by comparison with the standards, according to the following process: firstly, we used Peakview to find the retention times of the corresponding amino acids in the mixed standard solution; secondly, we compared the m/z of substances with similar retention times (within 0.5 min) in the samples, and those with m/z errors of more than 5 ppm were removed; then, the MS/MS patterns of the eligible amino acids were checked for consistency with the references and relevant web queries. In the end, compounds 1, 2 and 9 were presumed to be threonine, glutamic acid and phenylalanine, respectively. The comparison of the chromatograms of the three amino-acid standards with those of the extract is shown in Figure 5.
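A sketch of the matching step described above (the 0.5 min retention-time window and 5 ppm mass tolerance are those stated in the text; the peak lists and values are illustrative placeholders):

def ppm_error(observed_mz, theoretical_mz):
    # Signed mass error in parts per million.
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

def match_peaks(sample_peaks, standard_peaks, rt_tol=0.5, ppm_tol=5.0):
    # Match sample peaks to standards by retention time and accurate mass.
    # Each peak is a (name, rt_min, mz) tuple; names of sample peaks are None.
    matches = []
    for _, rt_s, mz_s in sample_peaks:
        for name, rt_ref, mz_ref in standard_peaks:
            if abs(rt_s - rt_ref) <= rt_tol and abs(ppm_error(mz_s, mz_ref)) <= ppm_tol:
                matches.append((name, rt_s, mz_s))
    return matches

# Hypothetical values for illustration ([M - H]- of phenylalanine is 164.0717).
standards = [("phenylalanine", 3.2, 164.0717)]
sample = [(None, 3.4, 164.0712)]
print(match_peaks(sample, standards))  # within 0.5 min and 5 ppm -> matched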
Identification of Tannins
The basic structure of condensed tannins consists of flavan-3-ols such as (+)-catechin, (−)-epicatechin or flavan-3,4-diols condensed by C−C bonds at the 4,8- or 4,6-positions. The process of identification was as follows: first, the results of the analysis were imported into Peakview; then, compounds that had a mass error of less than 5 ppm, had the correct isotopic distribution and contained secondary fragments were identified as targets. Combining features of Peakview such as Formula Finder, matching the mass spectrometry data of each chromatographic peak against the databases (SciFinder and HMDB) and considering the cleavage law of each peak, compounds 6 and 8 were eventually identified as prodelphinidin B1 and procyanidin B1.
Identification of Fatty Acids
Fatty acids are a class of carboxylic acid compounds that consist of hydrocarbon groups attached to carboxyl groups. Fatty acids have a better response in negative ion mode. Saturated fatty acids break the C2−C3 bond through γ-hydrogen migration and McLafferty rearrangement to produce fragments with high intensity, while the other fragments are generally (CH2)n−COOH species differing by 14n in relative molecular mass. Reviewing the literature, compounds 3, 4, 5 and 41 were tentatively presumed to be D-mannuronic acid, 2-deoxypentanoic acid, malic acid and 3-methylnonanoic acid.
Identification of Coumarins
Coumarins have a lactone structure and can be structurally viewed as cis-o-hydroxycinnamates formed by intramolecular dehydration and ring closure. Oxygen-containing functional groups can be substituted at various positions on the benzene ring, commonly including hydroxyl, methoxy and sugar groups. Compound 18 had a fragment at m/z 177.0221 [M − H]− in negative mode, with a major secondary fragment at m/z 149.0267, reduced by 28 from m/z 177.0221, presumably caused by the loss of CO, which is characteristic of the cleavage of coumarins. Therefore, it could be presumed that compound 18 was 6,7-dihydroxycoumarin, and its MS2 spectrum and speculated fragmentation pathways are shown in Figure 8.
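The CO loss cited for compound 18 can be checked against exact monoisotopic masses (a small sketch; the 0.005 Da tolerance is an assumption, not a value from the study):

# Monoisotopic masses (Da) of common neutral losses.
NEUTRAL_LOSSES = {"CO": 27.9949, "CO2": 43.9898, "H2O": 18.0106}

def assign_neutral_loss(precursor_mz, fragment_mz, tol=0.005):
    # Return the neutral losses whose exact mass matches the observed difference.
    delta = precursor_mz - fragment_mz
    return [name for name, mass in NEUTRAL_LOSSES.items() if abs(delta - mass) <= tol]

# Compound 18 (6,7-dihydroxycoumarin): [M - H]- 177.0221 -> 149.0267
print(assign_neutral_loss(177.0221, 149.0267))  # ['CO']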
PCA of the Samples
PCA was used to analyze the differences between the samples of LH from different habitats (S1, Bazhong; S2, Guangyuan; S3, Yibin; S4, Zigong; S5, Deyang; S6, Suining; S7, Leshan; S8, Nanchong) and the correlation between samples. The PCA model parameters of R2X = 0.865 and Q2 = 0.785 showed that the model was effective and reliable. As shown in Figure 9, the samples of LH from the eight habitats were clustered into one category, and the distribution results were relatively ideal. Among them, the samples of S4 and S7 were highly aggregated and differed little. The relative dispersion between S3 and the other habitats indicated that the chemical constituents of S3 differed significantly from the samples of the other habitats.
Figure 9. PCA score plot of LH samples from different habitats.
PLS-DA of the Samples
In this experiment, the samples from the other seven habitats were compared with the samples from Bazhong and analyzed by PLS-DA. The results are shown in Figure 10a. As can be seen from Figure 10, the samples from each habitat were clearly separated from the S1 samples along the PC1 axis. The models were tested with 200 permutations, and the results are listed in Table 2. The results showed that the models did not overfit, indicating that they were effective and reliable. According to the VIP score chart (Figure 10b) and the t-tests corresponding to the model, differential chemical constituents (VIP > 1) of samples from different habitats in Sichuan Basin were screened out, and the number of characteristic peaks is shown in Table 2.
Identification of the Differential Chemical Constituents
A total of four common differential chemical constituents, 2-O-galloylgalactaric acid, quercetin 3-O-xylosyl-rutinoside, kaempferol 3-O-rutinoside, and kaempferol 3-rutinoside 7-rhamnoside, were identified from the eight samples of LH from different habitats in Sichuan Basin. The peak area of each common differential constituent was used as its relative content. The average value and standard deviation of the peak area of the same chemical constituent in different samples were calculated to obtain the relative content changes of the common differential constituents between samples (Figure 11). As shown in the figure, the content of all four differential constituents of LH from S1 was high.
Figure 11. Relative contents of the common differential chemical constituents.
Discussion
At present, there are few studies on the components of LH. In the current national standard, the quality control indicator of LH is the content of two glycosidic constituents, quercetin and kaempferol, which have low polarity. This study used UFLC-triple TOF-MS/MS to comprehensively analyze the chemical constituents of LH. In total, 46 chemical constituents were identified in LH (Table 1), including flavonoids, phenolic acids, and amino acids. Generally speaking, the accumulation of active ingredients in LH varies greatly with the ecological environment. Accordingly, the quality of the herbs also varies, which makes it difficult to standardize commercial herbs and ensure their effectiveness for clinical use. Therefore, it is of great importance to study the constituents of LH from different habitats in Sichuan Province. This experiment focused on the compositional analysis of herbs from eight different habitats (Bazhong, Guangyuan, Yibin, Zigong, Deyang, Suining, Leshan, and Nanchong) in Sichuan Basin. The PCA showed that the samples from the eight different habitats in Sichuan Basin were clustered into one category, and the distribution results were relatively ideal. According to the results of PLS-DA and VIP, four common differential chemical constituents were screened out from the samples in Sichuan Basin. The content of all four differential constituents in the samples of LH from Sichuan Bazhong was high, and the overall level was good; the content of all four differential constituents in the sample of LH from Sichuan Yibin was low. However, the influence of different habitats on the quality of LH is still unclear in many respects. Therefore, it is necessary to analyze the LH constituents of the Sichuan Plateau and to further develop a comparison of all samples from Sichuan. The results provide basic data for revealing the influence of the ecological environment on the synthesis and accumulation of metabolites of LH, as well as the quality formation mechanism of the herb.
Plant Materials
The samples of LH were collected in the field in Sichuan Province. The materials were identified by Professor Xunhong Liu (Department for Authentication of Chinese Medicines, School of Pharmacy, Nanjing University of Chinese Medicine, Nanjing, China) as the dried whole herb of Lysimachia christinae Hance. Voucher specimens were deposited in the laboratory of Chinese medicine identification, Nanjing University of Chinese Medicine. The source information of the LH samples is shown in Table 3.
Preparation of Standard and Sample Solutions
The 23 standards described above were weighed into 5 mL volumetric flasks using a 1/1,000,000 electronic analytical balance (ME36S, Sydos, Germany), dissolved in methanol, and prepared into standard solutions of the corresponding concentrations. Then, 50 µL of each of the above standard solutions was placed into a 10 mL volumetric flask and methanol was added to prepare a mixed standard solution. The concentration of the mixed standard solution was 5 µg/mL; the dilution arithmetic is checked in the sketch below. All solutions were stored at 4 °C for further analysis.
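As a quick check of the dilution arithmetic, the Python sketch below back-calculates the final concentration; the 1 mg/mL stock concentration is an assumption implied by the stated 5 µg/mL result, not a value given in the text.

stock_ug_per_ml = 1000.0         # assumed 1 mg/mL stock (not stated in the text)
aliquot_ml = 50.0 / 1000.0       # 50 uL aliquot
final_volume_ml = 10.0           # made up to 10 mL with methanol

final_conc = stock_ug_per_ml * aliquot_ml / final_volume_ml
print(f"{final_conc:.1f} ug/mL")  # 5.0 ug/mL, matching the stated mixed standard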
The samples were crushed and passed through a No. 3 sieve, and the powder was then dried to constant weight. Next, 0.5 g of dried powder was accurately weighed into a 50 mL centrifuge tube and ultrasonically extracted with 5 mL of 80% methanol for 60 min at room temperature. After extraction and cooling for a few minutes, the lost weight was made up with 80% methanol. The supernatant was taken and centrifuged at 13,000 rpm for 10 min before being filtered through a 0.22 µm membrane (Jinteng laboratory equipment Co., Ltd., Tianjin, China) prior to UFLC-triple TOF-MS/MS analysis.
Analysis of the Differential Constituents in LH from Different Habitats
Markerview 1.2.1 software (Sciex AB, Framingham, MA, USA) was used to perform peak matching, peak alignment, and noise filtering on the raw mass spectrometry data. The results were then imported into SIMCA-P 13.0 software (Umetrics AB, Umeå, Sweden). On the basis of the above qualitative results, PCA and PLS-DA were used to perform dimensionality-reduction analysis on the data to obtain information about the differences between groups. The characteristic peaks of the differential chemical components were screened according to the VIP (VIP > 1) and t-test (p < 0.05) results obtained from PLS-DA; a hedged sketch of this screening step follows.
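The screening rule can be sketched outside SIMCA-P as well. The following Python example is a hedged analogue using numpy, scipy, and scikit-learn on placeholder data (it is not the authors' actual workflow or peak table): it fits a two-component PLS model, computes VIP scores with the standard formula, and applies the VIP > 1 and p < 0.05 criteria.

import numpy as np
from scipy import stats
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable Importance in Projection for a fitted PLS model (standard formula)."""
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = w.shape[0]
    ss = (t ** 2).sum(axis=0) * (q ** 2).sum(axis=0)   # SS explained per component
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (w_norm ** 2) @ ss / ss.sum())

rng = np.random.default_rng(0)
X = rng.lognormal(size=(12, 200))      # placeholder peak-area matrix (samples x peaks)
y = np.array([0] * 6 + [1] * 6)        # 0 = S1 (Bazhong), 1 = comparison habitat

pls = PLSRegression(n_components=2).fit(X, y)
vip = vip_scores(pls)
pvals = np.array([stats.ttest_ind(X[y == 0, j], X[y == 1, j]).pvalue
                  for j in range(X.shape[1])])
differential = np.where((vip > 1) & (pvals < 0.05))[0]   # screening rule used above
print(f"{differential.size} candidate differential peaks")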
Conclusions
In this study, UFLC-triple TOF-MS/MS was used to analyze the components of LH from eight different habitats in Sichuan Basin. According to the relevant mass spectrometry data, reference materials, and the literature, 46 chemical constituents were identified. The fragmentation pathways of flavonoids, phenolic acids, tannins, amino acids, fatty acids, and coumarins were preliminarily deduced from the fragmentation behavior of the major components. PCA, PLS-DA, and t-tests were used to identify the common differential chemical components of LH from different habitats, and their contents were compared at the same time. Finally, we found that the LH from Sichuan Bazhong was the best among the eight habitats. In conclusion, these results can help us to better understand the chemical constituents and the componential differences of LH from different habitats, as well as provide data for further exploring the functional material basis and clinical application of LH.
Data Availability Statement:
The data presented in this study are available within the article.
Conflicts of Interest:
The authors declare no conflict of interest.
Sample Availability: Samples of the Lysimachiae Herba are available from the authors.
Wind and Solar Intermittency and the Associated Integration Challenges: A Comprehensive Review Including the Status in the Belgian Power System
Renewable Energy Sources (RES) have drawn significant attention in recent years in the transition towards low carbon emissions. On the one hand, the intermittent nature of RES, resulting in variable power generation, hinders their high-level penetration in the power system. On the other hand, RES can not only supply much more eco-friendly energy but also allow the power system to enhance its stability through ancillary service provision. This article reviews the challenges related to the most intermittent RES utilised in Belgium, that is, wind energy and solar energy. Additionally, wind speed and solar irradiance variations, which are the cause of wind and solar intermittency, are studied. Then, recent techniques to forecast their changes, and approaches to accommodate or mitigate their impacts on the power system, are discussed. Finally, the latest statistics and the future situation of RES in the Belgian power system are evaluated.
Introduction
Due to the depletion of fossil fuels, energy security awareness, increasing electricity demands, and ever-rising concerns about environmental issues, renewable energy sources attract more attention as time goes by. For example, the oil and gas reserves of the UK's North Sea are in decline, as the peak of extraction already occurred in 2003 [1]. In [2], the global peak of extraction of crude oil and natural gas liquids is estimated to happen between 2009 and 2021. In terms of coal, the peak timing of worldwide production is predicted by [3] to occur between 2011 and 2047 on an energy basis and between 2010 and 2048 on a tonnage basis. Besides, the gas peak is projected to occur between 2030 and 2035 by [4,5] and between 2024 and 2046 by [2].
Regarding environmental matters, when fossil fuels are combusted, their carbon combines with oxygen in the burning process and Carbon Dioxide (CO2) is released [6]. In [7,8], it is shown that climate change relates directly to CO2 in the atmosphere, as it traps heat radiated from the surface of the earth and raises the temperature. The authors of [9] reported an average temperature rise of 0.6 °C over the last century and forecast 2-4 °C in this century, while in the Paris Agreement [10] 1.5 °C is defined as the threshold for the temperature rise, and the Intergovernmental Panel on Climate Change (IPCC) has already warned about the consequences of passing this limit [11]. Not only that, but of the three mediums in which CO2 is stored, namely the atmosphere, biosphere, and oceans, almost 93% of the CO2 is held in the oceans.

In this article, a comprehensive review is conducted on the most intermittent RES used in Belgium, that is, solar energy and wind energy. It must be noted that hydro-power is used in Belgium as well; nevertheless, due to its small share compared to wind and solar output power, it is not discussed in this article. Additionally, there is little predicted room for improvement of hydro-power in Belgium because of the fairly flat topography of the country. Apart from that, the tendency towards utilising bioenergy is increasing in Belgium. However, as it is far more predictable than wind and solar energy, and the focus of this article is on intermittent RES, it is excluded from this study. Wind and solar energy are two of the proven technologies for clean energy production. Nevertheless, to achieve reliable operation of the power system and avoid the degradation of the frequency response, studying their negative as well as positive effects on the grid is of significant importance for power system operators. Furthermore, it is crucial to identify the roots of the volatile nature of wind speed and solar irradiance, as this leads to the accommodation or even mitigation of the intermittency of wind and solar output power. As this article focuses on the intermittent RES that are used in Belgium, the status of the Belgian power system is evaluated as well. In this article, the authors have used the most recent technical reports as well as the latest and most cited scientific sources. The rest of the article is organised as follows: Section 2 assesses the impacts of the volatile nature of wind and solar energy on the power system. Sections 3 and 4 are dedicated to a thorough review of the mentioned renewable sources, that is, wind and solar; each starts with a history and milestones of the source's harvesting, followed by a statistical investigation of the source in the world. What is more, in these sections, a short assessment of the COVID-19 impact on RES progress is provided to evaluate the resiliency of each source. Then, the variations of wind speed and solar irradiance are appraised. In Section 5, the state-of-the-art methods used to forecast wind speed and solar irradiance are investigated. Section 6 discusses the technologies to combat the volatile nature of wind and solar energy by assessing the existing approaches for accommodating or mitigating the intermittency. Subsequently, Section 7 investigates the current status of the Belgian power system concerning RES dominancy, as well as the goals of the country for decarbonisation and the strength of its power system.
Finally, in the last section, a comparison between wind and solar energy technologies is made and conclusions are drawn.
Intermittency and the Power System
The intermittent nature of some RES is a hindrance to their high penetration in the electrical grid. That is, the erratic wind and solar output power, which is the result of wind speed and solar irradiance variations, causes challenges for power system reliability and stability, such as the overvoltages and voltage unbalances that occur in the distribution network [30,31].
The primary goal of a system operator is to reliably match electric generation and demand at the lowest possible cost at all times. The intermittent nature of wind and solar power has some inevitable impacts on the power system, which depend on the size and inherent flexibility of the power system as well as the penetration level of volatile RES in the grid [32]. The impacts of the varying output power of wind and solar on the grid can be classified into two main groups: short-term impacts, which concern balancing the power system at the operational time scale (minutes to hours), and long-term impacts, which concern providing sufficient power during peak load situations [32].
Power System Reserves
The first short-term influence on the power system is the negative impact of wind speed and solar irradiance uncertainty on system reserves. Generation and demand need to be balanced at all instants. Utilities carry an operating electricity reserve within the power system to compensate for sudden generation failures and unexpected load variations. When determining the amount of operating reserve, any unpredictable demand or generation fluctuations must be taken into account. Inevitably, as a result of the changing nature of wind and solar energy, the amount of scheduled operating reserve must be increased to regulate the system adequately, leading to a higher cost of integration [33]. In Figure 2, the resulting change in the reserve of the power system is illustrated [34]. As shown, with the penetration of intermittent RES, in this case wind energy, the reserve capacity has to be increased to compensate for the RES uncertainty.
Figure 2. Impact of RES intermittency on the reserve amount of the power system [34], modified by [35], reproduced with permission from [34,35].
Several studies have investigated the aforementioned impact and possible solutions to avoid the undesired increase in reserve capacity as a result of intermittent RES penetration in the grid. The authors of [36] modelled this impact and compared the increase in the power system reserve as a percentage of the wind penetration level across six different models. They concluded that up to a 30% wind penetration level, the primary reserve capacity increases by 0.3-1.0% of the installed unit capacity, while the reserve size could surpass 1% at a penetration level of 40%. The difference in the calculated reserve sizes for the same penetration level is attributed to the dissimilar reserve sizing methodologies used in different studies. Moreover, it is concluded that, as a result of lower predictability, a larger reserve capacity is required for the same solar Photovoltaic (PV) penetration as for wind. In [37], the reserve associated with wind intermittency, referred to as the wind power reserve, is quantified. In their case study with a 100 MW load, a 10% wind power uncertainty, and a [−2,2] load uncertainty interval, it is deduced that the wind power reserve increases linearly with the wind penetration rate. Quantitatively, with the wind penetration rate (output power of wind/load) ranging from 0% to 50%, the normalised wind power reserve (the ratio of the wind power reserve to the load) varies from 0 to 0.1. However, with a higher wind penetration level in the grid, which results in less generation from conventional units, the operating cost of the power system can be reduced by up to more than 25% as the wind power penetration rate ranges from 0% to 50%.
CO2 Emission
The second short-term impact of wind and solar intermittency is on the expected reduction of CO2 emissions. Theoretically, it is expected that when a conventional plant generating a specific amount of power is replaced by, for instance, a wind power plant with the same generation capacity, the CO2 emissions of the conventional plant are replaced by the CO2 emissions associated with the wind plant. However, this is not the case in practice. The amount of the reduction in CO2 emissions relies on what sort of production or fuel is replaced by the wind power plant. What is more, conventional power plants are designed to operate at a specific output level at which their performance is optimised. Nevertheless, as a result of the penetration of intermittent RES in the power system, the ramping rate (the ability of a generator to change its output) of on-line generators has to be increased, or their number of start-ups and shut-downs increases, to follow the load variations as well as the output fluctuations of RES. For example, it may happen that while demand increases, the output power of intermittent RES decreases, or vice versa. In both situations, an apparent increased rate of change in the system load occurs, leading to non-optimal performance of conventional power plants [33,38]. Therefore, the efficiency of the power plants decreases and their CO2 emissions rise, as shown in Figure 3 [35]. The authors of [35,36] compared a few studies and showed that at intermittent RES penetration levels (percentage of annual load) of 11% (wind), 23% (wind-solar), and 33% (wind-solar), the average increases in the specific CO2 emissions of thermal plants are 7%, 2.3%, and <1%, respectively. However, it is deduced by [35] that, despite the reduced expected CO2 reduction, integrating RES in the power system still has a significant positive impact on CO2 emission reduction.
Power System Losses
The third short-term effect is that intermittent RES can affect transmission and distribution losses and, consequently, increase costs. Depending on where wind farms and solar plants are located, with respect to the voltage level of the integration point and the distance to the load, this can be either a positive or a negative effect. The authors of [32] claim that in the UK, with the concentration of wind power production in the north, the estimated extra transmission costs would double to 2/MWh and 3/MWh at penetration levels of 20% and 30%, respectively. In [39], it is reported that, based on eighteen months of experience monitoring daily forecasted transmission losses, an increase in losses during high wind speed hours and a decrease around noon, when PV generation was at its peak, were observed. However, in their observation, days with bad weather conditions, such as rainy or snowy days, were excluded because of additional corona losses of the transmission lines. Based on their model, the impact of intermittent wind output over one month is depicted in Figure 4 [39].
Power Curtailment
The fourth short-term impact stemming from the intermittent nature of wind and solar power occurs when their output power exceeds the amount that can be absorbed and utilised safely by the power system, which happens on days with high wind speeds and clear skies. To provide frequency, voltage, and, in general, electric grid stability, a certain number of conventional plants must remain on-line at any time instant. Inevitably, when the output power of solar and wind generation units goes above a certain threshold, curtailment of their power is required. The authors of [32] stated that approximately a 10% energy penetration level is the point where curtailing wind power may become necessary. However, this is a rough estimation and still depends greatly on transmission line capacity and infrastructure, demand, and transmission capacity to neighbouring countries. The authors of [40] identified the curtailment ratio of a few countries, that is, the ratio of curtailed energy to the sum of wind and PV generation. Over a six-year period, it is shown that at penetration ratios (the ratio of the sum of wind and PV generation to total generation) of 2.5-13.8%, the curtailment ratio varied from 9.7% to 0.3% in Italy. In contrast, during the same period but shifted by one year, the penetration ratio in Spain increased from 10.9% to 25.3%, with the curtailment ratio ranging from 0.3% to 1.6%. The authors of [41] analysed the curtailment ratio of wind power in several countries. In their study, in China, Germany, and Ireland, at penetration levels of 2.6%, 9.8%, and 22.5%, the curtailment ratios were calculated as 11%, 0.7%, and 3%, respectively. The fluctuating behaviour of the curtailment ratio with respect to the penetration level in different countries is justified by [42], where it is stated that countries with high curtailment levels suffer from a lack of sufficient transmission capacity. Notably, curtailment levels vary significantly by region. As a result, in areas with insufficient transmission capacity and infrastructure, enhancing the transmission capacity and grid infrastructure, as well as adding extra technologies and interconnections to provide more flexibility for the power system, could reduce curtailment considerably. Nevertheless, it is worth mentioning that the system operator can also benefit from curtailment to address the uncertainty issue of intermittent RES.
Ancillary Services
The fifth short-term impact of RES on the power system is their effect on the characteristics of the grid. As mentioned before in this article, the dominant difficulty in the transition to RES is related to the intermittent, unpredictable, and non-dispatchable nature of these sources, resulting in a mismatch between electricity generation and electricity consumption. Moreover, the liberalisation of the electricity market and the increased capacity of transactions have led to large fluctuations in the electricity price and complicate managing the security and reliability of the power system [43]. RES can help the power system by providing Ancillary Services (AS). Ancillary services are required to dynamically address the mentioned problems and ensure power system quality and safety. Ancillary services can generally be classified into several groups, that is, frequency control, voltage control, and emergency control services [22]. Wind and solar energy sources have become extremely attractive participants in electricity markets. The authors of [44] considered the feasibility of renewables in developing strategies for sustainable growth. In [45], the authors discussed the possibility of wind power integration in Denmark by supplying the full range of relevant frequency and voltage control. The provision of ancillary services by wind generation resources in the United States is investigated in [46]; the possibility of both inertia emulation and primary frequency support is summarised, along with the impacts of different control architectures. The economic profitability of wind technology as a potential participant in the Spanish secondary regulation market is analysed in [47]. In [48], a wind power trading model, including a regulation strategy, is proposed for the day-ahead California electricity market. The fundamentals of grid code requirements and wind turbine control methods for frequency control participation in Ireland and the UK are investigated by [49]. In addition to wind energy-based ancillary services, a control strategy has been suggested in [50,51] for PV generators to adjust their active power outputs for frequency regulation. Moreover, a state-space dynamic model is proposed in [52] for PV systems with full ancillary services support. In [53], a reactive power regulation method is suggested to provide ancillary services by PV inverters using three-phase control strategies.
Protection and Control Systems
The last short-term effect of RES on the grid concerns protection and control systems. This stems from the fact that the dynamic features of the power system are changing ceaselessly because of the rapid integration of RES into the network. Protection, automation, and control functions defined for relays have to be kept up-to-date in accordance with the changing power system characteristics. Thus, following a reconfiguration in the grid, the protection devices need to modify their operational settings. Protection is related to maintaining the grid in a stable state against abnormal incidents. The control system, on the other hand, is associated with providing access to alter the state of any equipment to preserve the normal operation of the power system [54]. The authors of [55] investigated the impact of RES on the performance of transmission line distance protection, referred to as R21. In this study, the setting of R21 is assessed and, based on three benchmark case studies, it is concluded that wind farms affect the operation of distance protection in a way that varies with the fault type, fault location, and wind generation level. Furthermore, the necessity of revising the protection schemes based on updated models of the grid is highlighted. In [56], the behaviour of power swing protection in the presence of large-scale integration of wind generation is studied. Aiming to ensure the efficiency of the protection scheme, the authors conducted a qualitative study through case studies and concluded that wind generation increases the probability of false operation of power swing blocking and may alter the location of the electric centre of the power system. The authors of [57] conducted a study on the effects of Distributed Generation (DG) on distribution grids. They compared various protection schemes and divided the strategies to mitigate the impact of DG on the distribution network into two groups: maintaining the conventional protection strategies and modifying the current strategies. While the former approach minimises the cost and operational downtime of industrial consumers, the latter is more expensive and paves the way for further penetration of DGs into the grid. Moreover, owing to the multidirectional power flow caused by DGs, it is essential to use directional overcurrent relays.
Power System Reliability
The long-term impact of RES is on power system adequacy. In general, the reliability of a power system can be assessed by three terms: reliability, security, and adequacy [58]. Reliability covers all approaches to deliver the desired amount of electrical energy to all points of utilisation in the power system. Security is a measure of how well a power system can withstand unanticipated occurrences, such as the loss of any element in the network or probable faults like a short-circuit in a part of the grid. Finally, adequacy is a measure of the ability of a power system to supply the electrical energy of customers and meet load demand within limited voltage and frequency ratings while considering operating constraints. The considerable desire to integrate RES into the power system can affect the reliability of the grid due to the volatile nature of RES. Thus, it is crucial to study the impacts of intermittent RES on overall system reliability. The authors of [59] investigated RES effects on the power system to find out whether increasing the share of wind and solar PV in electricity generation has an adverse effect on the power system. Based on their empirical study in the U.S., they concluded that, at first, increasing the net generation from wind and solar PV has a negative impact on grid reliability. Nevertheless, in case their total net generation surpasses 1200 GWh, the duration of disruptions is predicted to lessen. Moreover, the marginal impact of wind and solar PV on power system reliability is evaluated in this study. It is deduced that if the net generation from wind and solar PV increases by one GWh, the customers that are fed by suppliers who provide less than 1200 GWh of their net generation from wind and solar PV may experience power system outages lasting less than one minute.
Capacity credit is an index measuring the amount of load that can be supplied in the power system by a generation unit with no increase in the Loss Of Load Probability, known as LOLP in the literature. LOLP is the probability of a Loss Of Load Event (LOLE) happening in the power system. In fact, the capacity credit of a generating unit shows the contribution of that unit to the adequacy of the power system and its capability to increase the reliability of the power system [60]. Capacity credit is often expressed as a percentage of the capacity of the installed generation unit; a toy Monte Carlo sketch of this definition follows this paragraph. It goes without saying that a capacity credit reduction leads to a higher amount of capacity being required to maintain system adequacy [35]. In [61], based on several studies, it is concluded that for low penetration levels of <5%, the capacity credit of wind energy is almost equal to the average wind power. At larger wind contribution levels, such as >40% of the generation, the capacity credit tends towards a constant percentage, which is determined by the LOLP without wind energy and the probability of wind having zero output power. Moreover, in a power system with a few large power plants, the capacity credit of wind energy tends to be higher than in a power system consisting of many small generation units. Besides, distributing a wind farm over a wider geographical area can raise the capacity credit by about 20% over its value for just one site. Finally, a good correlation between wind power and demand, which is not always the case, can raise the capacity credit by about 20%. The authors of [62] investigated the capacity credit of PV and wind in the Moroccan power system at penetration levels of 16%, 20%, and 27%. In all three cases, the mentioned penetration levels consist of 21.8% PV and 78.1% wind. The calculated capacity credits are 30.0%, 26.6%, and 23.7%, respectively. Additionally, it is concluded that as the penetration level increases, the capacity credit decreases and its value saturates. In Table 1, the quantitative impacts of RES on the power system and their variations with respect to the RES share, as researched in different studies, are tabulated.
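To make the capacity credit definition concrete, the following Python sketch estimates it for a hypothetical test system by Monte Carlo; all parameters (unit sizes, availabilities, load range, wind output distribution) are invented for illustration, and the effective load-carrying capability is found by raising the load until the original LOLP is restored.

import numpy as np

rng = np.random.default_rng(1)
hours = 8760

# Hypothetical test system: 12 x 100 MW conventional units, each 95% available,
# a 100 MW wind farm, and an hourly load between 900 and 1050 MW.
conv = 100.0 * rng.binomial(12, 0.95, size=hours)
wind = 100.0 * rng.beta(2, 5, size=hours)
load = 900.0 + 150.0 * rng.random(hours)

def lolp(extra_load, res):
    """Loss-of-load probability: fraction of hours in which generation falls short."""
    return np.mean(conv + res < load + extra_load)

base = lolp(0.0, 0.0)            # adequacy of the system without the wind farm

# Raise a constant extra load until the original LOLP is restored: that extra
# load is the effective load-carrying capability of the wind farm.
extra = 0.0
while lolp(extra, wind) < base:
    extra += 1.0

print(f"LOLP = {base:.3f}; capacity credit ~ {extra:.0f} MW "
      f"= {extra / 100:.0%} of installed wind capacity")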
Wind Energy
In this section, the most notable milestones in the realm of wind energy are evaluated, and approaches for wind harvesting are compared both qualitatively and quantitatively. Subsequently, wind speed variations resulting in the generated power fluctuations are investigated.
History and Improvements
The exploitation of wind energy began with windmills. While some studies suggest that the first use of wind energy was a wind-powered instrument in Egypt, either in the 1st century A.D. or B.C. [63], others doubt its function, structure, and even existence [64,65]. The second known reference to windmills dates back to the 9th century A.D., when they were in use in the Persian region of Seistan (located in Iran) [66]; later, in the 12th century, windmills appeared in England [67]. However, for several reasons, such as being non-transportable and non-dispatchable, windmills were replaced by coal when the Industrial Revolution happened. In the late years of the 19th century, with the emergence of electrical generators, windmills were again used to rotate generators. In the following years, the authors of [68] suggested three-blade wind turbines, which resemble today's turbines, instead of their two-blade counterparts, as they were superior in that they produced far less vibration. The next breakthrough can be placed at the end of the 20th century, when supporting technologies made remarkable improvements as well, such as aerodynamics, material science, power electronics, control engineering, and computer science, to name but a few [69].
In general, wind energy harvesting can be categorised into onshore and offshore wind energy, with the former harvested on land and the latter in bodies of water, such as the oceans. The first commercial wind turbine, which was onshore, was built in the late 1800s, while almost one century later, an offshore wind farm consisting of 11 turbines was commissioned in Denmark in 1991 [70,71]. Although onshore wind turbines have reached maturity due to their earlier development, and currently most of the existing wind farms are onshore, offshore harvesting has drawn attention in the past years as well due to its desirable features. Not only is the wind speed higher in offshore areas, but it is also more predictable and consistent, making turbine fatigue less critical and increasing the lifetime of the infrastructure. What is more, offshore turbines do not occupy any land; therefore, noise emission becomes less significant and there is less visual impact on the landscape. On the contrary, because of the turbines' location, the costs of the required technology are higher than for onshore turbines. Maintenance of the turbines is also more challenging and time-consuming than for onshore ones [71,72].
The cumulative onshore and offshore installations of Europe and the world are shown in Figure 5. Today, the average size of the onshore wind turbines being manufactured is around 2.5-3 MW, which can power more than 1500 average European Union (EU) households. For their offshore peers, in 2019, the average size of installed offshore wind turbines in Europe was 7.8 MW, a 1 MW increase compared to 2018 [75,76]. It is projected by [77] that commissioned onshore and offshore turbines will have ratings of 4-5 MW and 12 MW, respectively, by the year 2025. However, Vestas has recently announced a new turbine, the V236, with a rated power of 15 MW, which is expected to be installed in 2022 [78].
Another point worth mentioning is the reaction of wind energy in the face of the unprecedented COVID-19 crisis, investigated by [79,80]. While the fossil fuels industry faced market fluctuations during the pandemic, the wind energy sector showed a relatively stable reaction. In the pre-COVID projections for the year 2020, new wind power capacity installations of 75.5 GW were expected. While a roughly 20% reduction in installations was predicted because of the pandemic, with the new predictions and considering the installed capacity in the first half of 2020, only a 6% decrease is now expected. This shows the excellent resilience of the wind industry. Additionally, as some project completion dates were postponed to 2021 because of the pandemic, 2021 is expected to set a record in the wind industry, with projected power capacity installations of 78 GW.
Wind Speed Variations
Wind speed varies both in space and time. Spatial causes are classified into three groups by [32], namely global, regional, and local. Stemming from different altitudes and solar insolation, there are various climate regions on earth. From the regional perspective, whether plains, mountains, or both surround a specific location causes differences in the blowing winds. The size of the land and its distance to the sea is another influential regional factor. Locally, the vegetation type and the shape of the land surface and its features, known as topography, play a leading role in the variability of wind speed. However, based on many studies, such as [81], and confirmed by [32,82], the mean wind power output has been estimated to have a standard deviation of 10% or less from one 20-year period to the next, meaning the spatial uncertainty of the wind resource is not considerable.
In terms of temporal causes, time must be split into intervals in order to use appropriate measures for predicting wind speed. In other words, while some approaches can forecast long-term wind speed, others are only suitable for short-term predictions. Consequently, based on the prediction method, different time intervals are defined, ranging from seconds to years [32,83,84]. Since wind speed varies on all time scales, that is, seconds, minutes, hours, days, months, and years, prediction horizons are commonly categorised into four main groups: very short-term (from a few seconds to a few minutes ahead), short-term (from a few minutes to a few hours ahead), medium-term (from a few hours to a month ahead), and long-term (more than a month ahead). In [32,82], wind variations are divided into turbulence, which comprises changes of wind on the order of seconds or minutes; diurnal variations, which depend on the time of day; synoptic variations, which are related to the passage of weather systems; and seasonal and annual variations. Figure 6 shows the peaks associated with turbulent, diurnal, and synoptic temporal variations in a wind spectrum captured in the U.S. [85,86].
Figure 6. Turbulence, diurnal, and synoptic temporal variation peaks captured at Brookhaven National Laboratory, U.S. [85], modified by [86], reproduced with permission from [85,86].
In contrast to turbulence, which is not predictable, seasonal and diurnal wind changes are predictable. In comparison with seasonal variations, annual fluctuations are less predictable, but their changes are small. Synoptic variations are predictable as well, but not more than a few days ahead. Additionally, each of these variation types has a different effect on the power system, which is beyond the scope of this article but is investigated in detail in [32].
It is worth mentioning that the temporal variations of wind power can be smoothened by aggregating wind turbine outputs, which positively affects both power system operation and power quality. In [32], it is stated that aggregation of wind power production lessens the temporal changes of wind in two ways: an increased number of turbines in a wind farm, and the spreading of wind farms over a wider geographical area. Firstly, an increase in the number of wind turbines within a wind farm smoothens the negative effect of turbulence, as wind gusts do not hit every single turbine simultaneously. This reduces the turbulence peak in Figure 6 [85,86]. The authors of [82] reported that, ideally, with n turbines, the percentage variation of the aggregate output power is reduced by a factor of n^(−1/2), and concluded that the number of wind turbines does not need to be very large to achieve significant smoothing; a numeric illustration is given in the sketch after this paragraph. Secondly, the distribution of wind farms over a wider area reduces the impact of the diurnal and synoptic wind peaks, as changing weather patterns do not affect each individual turbine at the same time, meaning that by expanding the distribution of wind turbines, changing weather patterns face a larger land area [32]. Consequently, up and down ramp rates are much lower for aggregated dispersed wind farms in comparison with a single large wind farm [32]. The exact amount of smoothing from distributing wind turbines depends significantly on local weather effects and the total land area spanned by the aggregated wind farms [32]. Nevertheless, based on 10 min and 1-hour data of a site, it is reported by [87] that aggregating the wind power outputs of 17 dispersed wind farms is estimated to reduce the variability of the resultant output power by 60-70% in comparison with a single wind farm. In Table 2, the temporal variations of wind and their predictability are summarised [82].
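A quick numeric illustration of the ideal n^(−1/2) smoothing rule quoted above (a sketch that deliberately ignores the correlation between real turbine outputs):

# Relative variability of the aggregate of n ideally uncorrelated turbine outputs.
for n in (1, 4, 17, 100):
    print(f"n = {n:3d}: relative variability = {n ** -0.5:.2f}")

For n = 17 the ideal reduction is about 76%; the 60-70% observed for the 17 dispersed wind farms cited above is somewhat lower because real outputs are partly correlated.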
Solar Energy
Investigating solar energy in more detail, this section starts with the breakthroughs related to solar energy utilisation, followed by an evaluation of measures for solar electricity applications. Then, the variations associated with solar irradiance are discussed.
History and Improvements
The first practical use of solar energy is said to have happened sometime between 287-212 B.C. [88]. However, from the electricity point of view, the first milestone of solar energy dates back to 1839, when it was found that light can enhance electricity generation between two metal electrodes placed in an acidic solution, which was defined as the photovoltaic effect [89]. Later, in 1873, it was discovered that selenium could act as a photoconductor, leading to another discovery three years later, when it was observed that selenium generates electricity when exposed to sunlight [90]. Benefiting from this, in 1883, the first solar cell, made from selenium, was created, with less than 1% efficiency [89]. Finally, in 1954, selenium was replaced by silicon, the material used for solar cells today, and the first silicon photovoltaic cell was created, being able to power an electric device for several hours with 6% efficiency [91].
Solar energy can be exploited in various ways. In addition to the several applications of solar energy, such as heating buildings [92], heating water [93], ventilation [92,94], and even vehicular purposes like cars [95] or bicycles [96], which are beyond the scope of this article, solar electricity has attracted significant attention in the past years. It can be divided into two main categories: PV solar systems and Concentrated Solar Power (CSP).
Solar PV Systems
Being the primary technology for solar energy harvesting, the PV solar system directly converts sunlight into electricity. In PV technology, solar irradiance is absorbed by semiconductors, releasing electrons that flow through the semiconductor and thereby generate electricity. The generated direct current (DC) is then turned into alternating current (AC) using inverters. Figure 7 shows a general PV system [97]. As can be seen in Figure 7, a general PV system consists of the following parts: PV arrays, which are responsible for the conversion of solar irradiance into electricity; a battery storage system for storing energy if needed; a controller to decide between charging and discharging the storage system; DC/DC converters for increasing the voltage (if needed) and for Maximum Power Point Tracking (MPPT), usually using a Perturb and Observe algorithm, i.e. a hill climber, as sketched below; and DC/AC inverters for turning DC into AC for AC load consumers [97].
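To illustrate the perturb-and-observe idea mentioned above, here is a minimal Python sketch of such a hill-climbing MPPT loop; the toy power-voltage curve, the step size, and the assumption that the converter tracks the voltage reference perfectly are all simplifications for illustration, not a model of any specific hardware.

class PerturbAndObserve:
    """Minimal perturb-and-observe (hill-climbing) MPPT sketch.

    Each call perturbs the voltage reference and keeps perturbing in the
    direction that increased the measured PV power, reversing otherwise.
    """

    def __init__(self, v_ref, step=0.5):
        self.v_ref = v_ref
        self.step = step
        self._last_p = 0.0
        self._last_v = 0.0

    def update(self, v, i):
        p = v * i
        dp, dv = p - self._last_p, v - self._last_v
        if dp != 0:
            # Same sign of dP and dV -> keep climbing; opposite signs -> reverse.
            self.step = abs(self.step) if (dp > 0) == (dv >= 0) else -abs(self.step)
        self._last_p, self._last_v = p, v
        self.v_ref += self.step
        return self.v_ref

def pv_power(v):
    """Toy concave P-V curve with its maximum power point at 30 V."""
    return max(0.0, 240.0 - 0.8 * (v - 30.0) ** 2)

mppt = PerturbAndObserve(v_ref=20.0)
v = 20.0
for _ in range(60):
    v = mppt.update(v, pv_power(v) / v)   # current = P/V for the toy module
print(f"settled near {v:.1f} V (true MPP of the toy curve at 30 V)")

Note the characteristic steady-state oscillation of P&O around the maximum power point, which is why the perturbation step is usually kept small.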
Solar Thermal Power Systems
In CSP plants, mirrors concentrate the solar irradiance onto a central point or line with a carrier fluid and produce heat and steam to generate electricity via a conventional thermodynamic cycle [98]. Based on the apparatus used for concentrating sunlight, CSP can be further divided into four groups: Parabolic Trough (PT), Solar Tower (ST), Fresnel Reflector (FR), and Solar Dish (SD).
PT, the most mature CSP technology, with a parabolic cross-section and a linear extent in the third dimension, consists of parabolic mirrors that focus the solar irradiance onto heat receivers mounted on the focal line. Both the reflectors and the receivers track the path of the sun from East (sunrise) to West (sunset) [98,99]. In ST plants, a large number of computer-assisted mirrors, known as heliostats, concentrate the solar irradiance onto a single receiver situated on top of a central tower. In ST, tracking of the sun is done individually by each heliostat, which is able to move over two axes [98]. Although FR plants are similar to their PT counterparts, they use flat or slightly curved mirrors to concentrate the solar irradiance onto a fixed linear receiver. In FR plants, mirrors are located at different angles and can follow the sun in a single- or dual-axis regime [98,100]. Similar in shape to a satellite dish, the SD configuration consists of a parabolic dish that concentrates the sunlight onto a receiver placed at the focal point of the dish. In an SD system, both the dish and the receiver track the sun at the same time [98,100]. Schematics of the CSP plants, that is, PT, ST, FR, and SD plants, are depicted in Figure 8 [101,102]. Initially, PT was the dominant technology among the CSP systems, accounting for more than 90% of the total installed CSP capacity. Nevertheless, due to the higher reachable temperature, leading to higher efficiency, and being more cost-effective compared to SD, ST has been favoured since 2010 [103]. Unlike PT and FR plants, which have the sunlight focused on a focal line and reach maximum operating temperatures between 300-550 °C, ST and SD plants concentrate the sunlight on a single focal point and, consequently, can reach higher temperatures between 800-1000 °C. FR is the most recent CSP technology. The capacity factor of CSP plants can be enhanced by utilising a thermal storage system. Evidently, the higher the reachable temperature for a CSP technology, the more expensive the thermal storage system. Due to its relative simplicity, the manufacturing and installation costs of FR plants are lower than those of PT plants, but it is not clear whether FR-generated electricity is cheaper than that of PT. SD is superior to the other CSP technologies in that it does not require cooling systems for the exhaust heat, making it suitable for use in water-constrained regions. Finally, high-temperature ST plants are potentially superior to the other CSP technologies in terms of capacity factors, efficiency, heat storage, and costs, and can provide cheaper electricity if their complementary technologies make progress and more commercial experience is gained [98]. In Table 3, the main characteristics of CSP plants are summarised [98,100].
When it comes to comparing PV solar systems and CSP, the most notable point is the enormous difference between their deployment levels. It is reported by [104] that, by the end of 2019, cumulative solar PV deployment reached 578 GW, while CSP deployment reached 6 GW, being in its infancy. A few facts can justify this. First, it is known that when solar irradiance enters the atmosphere, it is attenuated. Excluding the portion absorbed by suspended particles in the atmosphere, it divides into three parts: the direct component, or Direct Normal Irradiance (DNI), which passes unaffected through the atmosphere; the indirect component, or Diffuse Horizontal Irradiance (DHI), scattered by atmospheric particles as well as the ground; and the reflected component, which can be neglected, as shown in Figure 9 [105]. The summation of these three components, which is the total solar irradiance incident on a horizontal surface, is called the Global Horizontal Irradiance (GHI); the usual closure relation between these quantities is sketched below. High DNI can, however, also be reached at high altitudes, where scattering is low. In the best regions in terms of DNI (DNI > 2800 kWh/m2/year), the CSP generation potential is 100-130 GWhe/km2/year, which is roughly equivalent to the annual electricity generated by a 20 MW coal-fired power plant with a 75% capacity factor [98]. Secondly, the trend of investors choosing solar PV over CSP has led to remarkable declines in PV costs in the last ten years. According to IRENA statistics, for every doubling of the cumulative installed capacity of solar PV, its Levelised Cost Of Energy (LCOE) lessened by 36% between 2010 and 2019. Furthermore, the crystalline silicon (c-Si) PV module price has decreased by more than 90% since 2010 [104]. Lastly, PV systems are far easier to manufacture, as they do not require much time or cost [106]. Additionally, a larger area is required by CSP plants for large-scale applications.
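The closure between these components is commonly written as GHI = DNI·cos(θz) + DHI, where θz is the solar zenith angle and the ground-reflected term is neglected, as above. A minimal Python sketch with hypothetical clear-sky values:

import math

def ghi(dni, dhi, zenith_deg):
    """Global horizontal irradiance from its beam and diffuse components.

    The beam (DNI) contribution is projected onto the horizontal plane with
    the cosine of the solar zenith angle; ground reflection is neglected.
    """
    return dni * math.cos(math.radians(zenith_deg)) + dhi

# Hypothetical reading: DNI = 800 W/m2, DHI = 100 W/m2, sun 30 degrees from zenith.
print(f"GHI = {ghi(800.0, 100.0, 30.0):.0f} W/m2")   # ~793 W/m2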
Nevertheless, it is worth mentioning that as CSP plants convert the energy of the sun into heat, thermal energy storage can be utilised with them to store energy and release it during cloudy days or at night to generate electricity. This can combat the intermittent nature of the volatile RES, for example, solar PV and wind. On the other hand, as stated before, solar PV directly converts the sunlight to electricity, making it unable to use thermal storage systems, and, as mentioned before, storing electricity, especially at large scale, is still a challenging task. In Figure 10, the cumulative installed capacities of solar PV and CSP in the world and Europe are shown [73]. According to [107], new installations decreased by almost 25% and 9%, respectively, with respect to the similar period in the year 2019. Nevertheless, it is worth mentioning that due to the stricter measures taken by governments in the first few months of 2020, and the subsequent easing of these approaches in the following months, no significant decline in new installations is expected, proving the excellent resiliency of solar energy.
Solar Irradiance Variations
Like wind speed, solar irradiance varies both spatially and temporally. Basically, however, the variability of the solar irradiance that reaches the surface of the earth is mostly assessed by studying its constituent components, that is, DNI, DHI, and reflections, of which the latter can be neglected.
From the spatial perspective, solar irradiance varies over distance, and changes in the position of the sun affect the output of solar energy systems. The authors of [108] compared one-year daily and monthly averages of irradiance sums, that is, GHI, derived from satellite images of three sites in Germany. The first site, chosen as the reference, is compared with two other sites situated at distances of 55 km and 633 km, respectively. Although the GHI captured at the two closer locations are not identical, they are closely related. Nevertheless, this is not the case for the more distant site, and the difference in GHI patterns becomes more significant as the distance increases. It is also notable that the daily fluctuations of all three cases around their monthly average values are substantial.
Notably, unlike the PV generation fluctuations stemming from the position of the sun, which are almost uniform, changes in the PV output power due to the motion of clouds are not uniform. In other words, on consistently clear or overcast days, PV plant output variations are of minor importance, but they become considerable on partly cloudy days, when GHI changes are more significant [109]. In [1], the effect of a passing cloud on irradiance on a summer's day in the UK, shown in Figure 11, is clearly visible before 18:00. PV generation can decrease by more than 60% in a matter of seconds as a result of a reduction in solar insolation when a cloud passes. Nevertheless, the time a passing cloud needs to shade an entire PV system depends on the PV system dimensions, cloud speed, cloud height, and a few other factors. For instance, for a PV system with a rated capacity of 100 MW, the time needed by a passing cloud to shade the whole system would be on the order of minutes [109].
Figure 11. The impact of a moving cloud on the time-series of irradiance, reproduced with permission from [1].
In addition to the presence of clouds which influence solar irradiance, other atmospheric conditions such as dust storms, water or ice concentration in a region, types of water particles or ice crystals, the amount of water vapour existing in the atmosphere as well as the aerosol type and its amount all have impacts on the solar generation [110].
When it comes to comparing PV and CSP plants, it is notable that variations in the output power of CSP plants are less significant, because the presence of a thermal mass, such as oil or water, smoothens their output. The thermal mass acts similarly to a storage system, making CSP plants inherently more capable of riding through weather passages even without a major Energy Storage System (ESS).
Similar to wind power plants, in solar plants the output power fluctuations can be smoothened by aggregating solar plants. This occurs because, when plants are interconnected, a passing cloud does not affect the entire system simultaneously and may even leave some plants unaffected.
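A minimal numerical sketch of this smoothing effect, assuming plant-level fluctuations consist of a shared weather component plus independent local cloud noise (all magnitudes are illustrative, not measured values):

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_plants = 10_000, 23   # 23 sites, mirroring the aggregation example later

# Synthetic 1-min output fluctuations: a component shared by all plants
# (large weather systems) plus independent plant-level noise (local clouds).
common = rng.normal(0.0, 0.05, n_steps)
local = rng.normal(0.0, 0.25, (n_steps, n_plants))
plants = common[:, None] + local

print("fluctuation std, single plant: ", plants[:, 0].std().round(3))
print("fluctuation std, 23-plant fleet:", plants.mean(axis=1).std().round(3))
# The independent part shrinks roughly as 1/sqrt(N); only the shared weather
# component remains, which is why geographic dispersion smooths the output.
```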
Wind and Solar Forecasting Methods
Solar and wind energy forecasting is critical for the sustainable integration of these sources into the power system. Forecasting wind and solar energy provides system operators with better insight into the allocation of resources, resulting in secure and economic grid operation. Additionally, it can address the problem of intermittency, as it gives end-users an estimation of future wind and solar generation. Besides, moving towards an inclusive and liberalised electricity market is impractical without accurately predicting the behaviour of these sources. Furthermore, understanding the forecasting errors is of paramount importance. In [111], it is concluded that wind power forecasting error has a serious impact on the fluctuations of the intra-day price. However, owing to the volatility of RES, forecasting the behaviour of wind and solar energy precisely is a demanding task. Several methods are reported in the literature that provide accurate short- and long-term predictions (from a few minutes to the next few days). These methods can be categorised into four groups: physical models, statistical methods, deep learning-based algorithms, and hybrid methods [112]. Physical models are able to simulate atmospheric dynamics according to natural laws and boundary conditions, based on meteorological and geographical information [113]. However, when it comes to short-term forecasting, physical methods are not desirable, as they require calibration and extensive computational resources, especially when unexpected errors occur during prediction [114].
Aiming to find the mathematical relationship between time-series data of renewables, statistical methods are the second approach for forecasting wind and solar energy. The authors of [115] proposed a new Repeated Wavelet Transform (WT) based Auto Regressive Integrated Moving Average (ARIMA) model (WT-ARIMA) to predict wind speed over very short time intervals. Time-series ARIMA modelling is used in [116] to predict monthly solar radiation utilising remote sensing data obtained from the National Aeronautics and Space Administration (NASA)'s POWER (Prediction of Worldwide Energy Resources). In order to reduce wind speed prediction errors, a novel approach is proposed in [117] that applies a sparse Bayesian-based robust functional regression model.
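As a minimal illustration of the statistical family, the sketch below fits an ARIMA model to a synthetic wind-speed series with statsmodels; the (2, 0, 1) order is an arbitrary choice for demonstration, whereas the cited studies select orders via information criteria and, in [115], combine ARIMA with a wavelet transform:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Toy hourly wind-speed series with a diurnal cycle; real studies use
# measured data, this series is synthetic for illustration only.
rng = np.random.default_rng(1)
t = np.arange(500)
wind = 8 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.8, t.size)

model = ARIMA(wind, order=(2, 0, 1)).fit()
print(model.forecast(steps=6).round(2))   # wind speed for the next six hours
```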
Forecasting models based on deep learning are able to overcome the barriers and limitations of existing statistical forecasting models, which are mostly formulated as linear models and struggle with longer forecasting horizons. In [118], a forecasting model for short- and long-term predictions of GHI is suggested, based on integrating a support vector machine with a discrete wavelet transformation algorithm. A feature selection method is proposed in [119], which uses the Long Short Term Memory (LSTM) structure to lessen the errors of short-term wind speed prediction while decreasing its calculation time. A novel intelligent wind power forecasting method based on a fuzzy neural network is presented in [120], taking advantage of the Particle Swarm Optimization (PSO) algorithm as a new training approach. In [121], the authors proposed a hybridised deep learning framework that integrates a convolutional neural network for pattern recognition with an LSTM for half-hourly GHI forecasting. The authors of [122] investigate the performance of the 'Group Method of Data Handling' type neural network algorithm in short-term time-series prediction of wind and solar energy.
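A minimal sketch of the recurrent approach used in several of these studies, assuming Keras is available; the layer size, window length and training settings are arbitrary illustrations, not the configurations of the cited models:

```python
import numpy as np
import tensorflow as tf

# Synthetic GHI-like series with a diurnal cycle (illustration only).
rng = np.random.default_rng(2)
t = np.arange(2000)
ghi = np.clip(np.sin(2 * np.pi * t / 24), 0, None) + rng.normal(0, 0.05, t.size)

# Sliding windows: the previous 24 hourly values predict the next one.
window = 24
X = np.stack([ghi[i:i + window] for i in range(len(ghi) - window)])[..., None]
y = ghi[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print(model.predict(X[-1:], verbose=0))   # one-step-ahead GHI forecast
```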
Despite the promising results of using deep learning methods in forecasting renewables, some challenges remain, for example, demanding parameter determination and computational complexity. Therefore, hybrid models have been developed to overcome these shortcomings and improve forecasting performance. One type of hybrid model takes advantage of data preprocessing and an optimisation algorithm to enhance prediction performance [123]. In [124], the forecasting performance of two hybrid models, that is, ARIMA-Artificial Neural Network (ANN) and ARIMA-Kalman, is compared. In the hybrid ARIMA-ANN model, the ARIMA model is used to decide the structure of the ANN model. In the hybrid ARIMA-Kalman model, the ARIMA model is applied to initialise the state equations for a Kalman model. Both give a reliable performance, which can be employed in non-stationary wind speed prediction. Another type of hybrid model, presented in [125], combines numerical weather prediction and statistical learning using multivariate post-processing procedures to forecast solar irradiance. The authors showed that the proposed method significantly reduces forecasting errors.
Accommodating or Mitigating Intermittency
Finding technological solutions to overcome the adverse effects of intermittent RES is a much-debated topic in the literature. Not only do intermittent RES have a variable generation pattern from day to day, but they also exhibit fast short-term oscillations. Consequently, balancing both types of variation, on long and short time scales, is inevitable. Although there are other ways to mitigate or accommodate intermittency, ESSs are of significant interest to date. Thus, in this study, mitigation or accommodation solutions are categorised into two classes: solutions not associated with ESSs and solutions associated with ESSs. The former can be further divided into Supply-Side Management (SSM) and Demand-Side Management (DSM).
By controlling the production of generation units, including RES plants, it is possible to diminish the negative effects of RES intermittency on the power system; this is known as SSM. The first way of accommodating the volatile output of wind and solar power was described in Sections 3.2 and 4.4, namely aggregating wind farms or solar plants, which reduces their output variability to some extent. In Figure 12, the effect of aggregating wind turbines and solar plants is shown [126,127]. For the PV plants, assuming that the time-series are captured when the PV plants were operating at their rated capacity, for the aggregation of 5 and 23 sites, the maximum 1-min change in the output of all PV plants falls to 40% and 20%, respectively, far below the 80% that occurred for a single site [127]. Furthermore, the wind speed and solar irradiance forecasting methods illustrated in Section 5 can soften the effect of intermittency, as they provide system operators with an approximate insight into near-future wind and solar generation. Besides, the authors of [128] examined the seasonal and annual patterns of wind and solar power output and found that they can complement each other in some periods. That is why several studies have focused on combining these two intermittent sources while evaluating system reliability and the challenges of connecting the system to the grid [129], assessing the hybrid-system cost [129,130], the capability of meeting load demand [131], CO2 emissions [132], sizing the system [131] and so forth. The optimal allocation of generation mainly depends on the region where the hybrid system is installed. In general, any generation unit or technological equipment that can provide a generation pattern complementary to wind or solar can be combined with them to reduce intermittency. That is why combining power plants with a more flexible output, for example gas turbines, benefiting from Plug-in Electric Vehicles (PEV) if a moderate portion of PEV owners can be convinced to provide the service [133], and using diesel generators in combination with other, less intermittent RES can reduce their output fluctuations [134]. In order to decrease the added reserve capacity due to the presence of wind energy, [135] proposed a multi-objective stochastic optimisation to provide wind farm operators with an optimal set-point for the wind farm. In that study, the two objectives are maximising the wind farm set-point and the probability of fulfilling the determined set-point in real-time operation, leading to a reduction in the uncertainty of the output power. Discarding the surplus output power of intermittent RES plants, known as power curtailment, is another supply-side solution for accommodating intermittency. In addition, RES plants can aid the power system by providing ancillary services. Both of these solutions were elaborated in Section 2. Finally, by benefiting from dispatchable power plants, that is, plants with a fast ramp rate, it is possible to compensate for RES fluctuations. For this purpose, reliable power plants with high availability are required. Gas turbine generators, having the mentioned attributes, are the most desirable option for accommodating RES intermittency [136].
In terms of DSM, which is also referred to as demand-side response in the literature, flexibility can be provided for the grid to some extent, both on the residential and the industrial scale. In [137], DSM is categorised into two main groups: energy reduction programmes, aiming to decrease the load through more efficient processes or equipment; and load management programmes, trying to modify the load pattern and encourage consumers to consume less during peak hours by adopting incentive measures. The authors of [138] investigated the contribution of industry to the grid by co-optimising the operation of a chlor-alkali electrolysis process and concluded that the flexible operation of the electrolyser is beneficial for the power system.
ESSs can ease the large-scale integration of intermittent RES. As mentioned before, it is not possible to synchronise demand and electricity production perfectly, especially in the presence of renewable generation units. That is, at some hours, RES power output is insufficient while the load demand is at its peak, or vice versa. Being able to store the surplus of intermittent RES when load demand is low and release it during peak hours, ESSs are one of the most promising options for mitigating RES intermittency, as shown in Figure 13 [139]. Furthermore, ESSs can provide additional system services for the grid, making them a unique technology to be utilised in combination with RES [140,141]. Generally, ESSs can be classified into five main categories in terms of the form of energy in which electricity is stored: mechanical, electrochemical, chemical, electromagnetic and thermal. Each of them, depending on materials, manufacturing process, and form of energy, can be further divided into sub-categories, as shown in Figure 14 [142]. There are three commonly used technologies among mechanical ESSs, namely the Flywheel, Pumped Hydroelectric Storage (PHES), and Compressed Air Energy Storage (CAES). As the name suggests, this type of ESS uses the mechanical form of energy to store and release electricity. The flywheel, consisting of a rotor that can be accelerated to high speeds of 20,000 to 50,000 rpm, stores energy in kinetic form. Having the highest maturity among ESSs, PHES stores and releases energy in the form of potential energy via two water reservoirs at different heights. CAES uses off-peak electricity or surplus intermittent RES power to compress air in either underground or above-ground structures.
Electrochemical energy storage systems work on the basis of reversible chemical reactions and store energy in the form of chemical energy. The Redox Flow Battery (RFB), also known as a Flow Battery, is an electrochemical cell that provides chemical energy via two chemical components dissolved in liquids. Battery Energy Storage Systems (BESSs) are among the most widely adopted storage technologies, consisting of individual battery cells connected in series, in parallel, or both. A battery cell consists of a negative electrode (anode), a positive electrode (cathode), and an ionic conductor (electrolyte), and based on the materials used for these, various types of BESS have emerged. So far, the Lead-acid battery, Nickel-Cadmium (NiCd) battery, Zinc-Bromide (ZnBr) battery, Sodium-Sulphur (NaS) battery, and Lithium-ion (Li-ion) battery have been the most common types of BESSs. The latter, that is, the Li-ion battery, can further be divided according to the electrode materials into Nickel Manganese Cobalt (NMC), Nickel Cobalt Aluminium Oxide (NCA) and Lithium Iron Phosphate (LFP), which are the popular materials for the cathode, while graphite and carbon black are the commonly used materials for the anode [143].
Chemical Energy Storage Systems (CESSs) benefit from the energy released by chemical reactions. Using the energy of a fuel, the Fuel Cell (FC) is the most well-known technology among CESSs. Many fuel types have been investigated in the literature and suggested for use in FCs [144]. Due to its high energy density and other favourable attributes, the Hydrogen Fuel Cell (HFC) has attracted attention in the past years [145,146]. However, as a result of high electrolyser costs and the fairly low hydrogen price, the direct conversion of electrical energy to hydrogen is not yet economically attractive [141]. Among other existing FCs, the Proton Exchange Membrane Fuel Cell (PEMFC), Direct Methanol Fuel Cell (DMFC), Alkaline Fuel Cell (AFC), and Solid Oxide Fuel Cell (SOFC) are the best known [142].
Unlike the other ESS types, no transformation of energy occurs in Electrical Energy Storage Systems (EESSs). Instead, they store energy in an electric or magnetic field, utilising Ultra-Capacitors (UC) or superconducting electromagnets in Superconducting Magnetic Energy Storage (SMES) systems, respectively.
The last category, Thermal Energy Storage Systems (TESSs), storing energy in the form of heat, can be classified, depending on the operating temperature, into low-temperature TESSs and high-temperature TESSs. The latter are categorised into three sub-groups: latent heat, sensible heat, and thermal-chemical sorption energy storage [142].
While SE and Flywheel systems have the highest efficiencies among ESSs, at 95-98% and 90-95% respectively, the HFC, at 20-50%, has the lowest. When it comes to lifetime (in cycles), the Flywheel is superior to the other types with nearly 100k-1M cycles. On the other hand, RFB (0.3-1.4k) and Lead-acid (0.25-2.5k) suffer from a short lifetime in terms of the number of cycles. Concerning response time, most of the mentioned ESSs have a fast response time in the order of milliseconds. However, the response times of PHES and CAES (in the order of minutes) and TESS (in the order of hours) are slower than the others [142]. In Figure 15, the characteristics of the most common ESSs are illustrated.
Belgian Power System
For the reasons mentioned in Section 1, Belgium, like many countries, is investing in switching to RES and has joined the Paris Agreement. Under the framework of this agreement, participating countries have defined and quantified their targets to reduce CO2 and other greenhouse gas emissions. In the delivered pledges, called Nationally Determined Contributions (NDCs), 170 out of 188 participating parties (90% of the total) mention renewables, and 134 of them (71% of the total) quantify a renewable energy target. Based on the NDCs, IRENA estimates a growth of 42% in the global installed capacity for renewable power generation within the decade, from 2523 GW in 2019 to 3564 GW in 2030. Furthermore, IRENA projected that, amid the COVID-19 pandemic, many countries are likely to miss their 2020 NDC deadlines [150].
The Path towards Decarbonisation
In line with the decarbonisation goals, Europe has committed to a reduction in carbon emissions of at least 80% by 2050. Being one of the most critical sectors in this scope, electricity is of paramount importance. The European Commission has defined various scenarios, each of which investigates the final outcome of a plausible development path. To hit the decarbonisation goal, the European Commission forecasts that the share of RES in electricity generation will have to reach between 64% (in a high-efficiency scenario) and 97% (in a high-RES scenario which incorporates substantial ESSs) by 2050 [151]. As a result, European countries plan to replace coal-fired plants with RES plants and to phase out nuclear plants. By doing so, it is estimated that a 20% reduction in conventional generation capacity in Northern Europe, that is, France, Germany, the Netherlands, Great Britain and Belgium, will occur by 2030 [152].
Belgium is one of the countries that have quantified their goals in the renewable energy sector. Considering the commitment of Belgium to the Paris Agreement and the decision to phase out nuclear generation by 2025, it can be deduced that the country needs a clear energy roadmap. In Figure 16, the planned phase-out dates of each nuclear reactor in Belgium, namely Doel 3 (D3), Tihange 2 (T2), Doel 1 (D1), Doel 4 (D4), Tihange 3 (T3), Tihange 1 (T1) and Doel 2 (D2), are shown [153]. The last reactor, D2, is to be decommissioned in December 2025. This is all the more critical given that nearly one-third of the total installed generation capacity of Belgium, and almost half of its electricity generation, consists of nuclear plants. In line with the decommissioning of nuclear plants, the Belgian Transmission System Operator (TSO), Elia, stated that for the nuclear exit to occur in an orderly way, a replacement capacity of 3.9 GW will be needed as of 2025. Besides, the neighbouring countries of Belgium, that is, the UK, the Netherlands, Italy, France and especially Germany, are shutting down their coal plants sooner than expected, harming the ability of Belgium to import electricity during winter. Elia estimated that in the winters of 2022-2023, 2023-2024 and 2024-2025, more than 1 GW of capacity is needed to maintain the security of supply in these periods [153]. To clarify the path towards the goals, Elia has evaluated the strengths and challenges of Belgium, which are summarised in Table 4 [152]. Table 4. Strengths and challenges of Belgium on the way to decarbonisation [152].
Strengths:
- In line with its strategy, Belgium has established and maintained a robust and interconnected electricity and gas infrastructure, as well as a leading position in market design and integration.
- Belgium is surrounded by large countries such as Germany, France and the UK with different energy strategies, giving it the freedom to opt for the best choices.
- Belgium is situated at the centre of Europe, at the crossroads of important renewable generation hubs and close to the main load centres.

Challenges:
- Owing to the small area of Belgium, only a limited part of the country's demand can be covered by domestic renewable generation, so Belgium cannot rely on domestic capacity alone on the way to decarbonisation.
- In the current market design, with higher amounts of renewables in the system, the profitability of conventional units is of concern.
- Meeting load demand becomes increasingly burdensome, as in less than a decade from now nuclear plants are to be phased out in Belgium.

Nevertheless, despite the chosen target of a 13% share of renewable energy for 2020 (10% in transport, 21% in electricity and about 12% in heating and cooling [154]), in the last days of 2020 the authorities announced that the share of renewable energy in energy generation had reached only 11.68%, rising from 9.1% at the beginning of 2019 [155,156]. In addition, Belgium has promised to reach a total renewable goal of 18.3% by 2030. In the draft plan, renewable shares of 40.4% in electricity, 20.6% in transport and 12.7% in heating and cooling are promised by 2030 [156]. This can be compared with the European targets of a 32% and a 50-60% share of RES in energy and electricity consumption, respectively, by 2030 [153]. Regarding historical electricity generation values, Figure 17 shows the share of electricity generated by RES in Belgium in past years [154]. The renewable generation potential of Belgium relies mainly on onshore and offshore wind as well as solar PV. However, biomass, geothermal and hydro energy can still contribute to energy provision, albeit in far lower volumes [152]. Figure 18 depicts the electricity generation mix in 2020, the year marked by the outbreak of COVID-19 [157]. In terms of CO2 production, the emissions of Belgium over the last ten years are depicted in Figure 19 [158]. In order to give a better image of the CO2 production of Belgium, the top five countries in terms of tonnage of CO2 production in the last five years are compared with Belgium, taking the population of the countries into account, as shown in Table 5. This index is known as CO2 emissions per capita. It can be seen that Belgium urgently needs to move towards a more sustainable way of life to decrease its CO2 emissions. This could stem from the fact that, in general, each person in Belgium consumes a higher amount of primary energy than the people in the countries mentioned in Table 5. Statistically, after Norway, which had the highest primary energy consumption per capita in 2019 with 328.5 GJ per person, Belgium, with 235.1 GJ per person, was the second country in Europe [158]. Becoming the seventh country in the EU to eliminate coal, a tremendously polluting fossil fuel, from its electricity generation sector, Belgium closed its last coal-fired power plant, in Langerlo, in March 2016 [167]. However, Elia reported that since October 2018 two gas turbines of 470 MW and 255 MW have returned to the market.
This can be clearly seen in Figure 20, which shows the monthly energy mix for the years 2018 and 2019, in which a significant change in October 2018 is noticeable [168]. It is worth mentioning that in Figure 20 the category 'other' includes hydroelectricity, biomass, and so forth.
Status of the Power System
Concerning the reliability of the power system, different countries have adopted various metrics to evaluate grid reliability based on the configuration of their power systems. For instance, Australia uses an annual Expected Energy Not Served (EENS) standard, considering 0.002% to be an appropriate risk level for the expected unserved energy in one year. In the North-West of the USA, the LOLP metric is applied with a value of 5%, implying that, on average, at most one calendar year in 20 is allowed to have unreliability events. In Europe, on the other hand, the LOLE reliability standard is used, expressed in hours/year. For example, the Netherlands uses 4 h/year for this metric, while France has defined a 3 h/year threshold. Two countries in Europe use dual reliability standards, meaning that two metrics are applied to assess power system reliability. Alongside Portugal, Belgium is the second of these countries. Having a two-part LOLE criterion, Belgium has set an average LOLE of 3 h/year, in addition to an equally binding criterion, known as LOLE95, which should be less than 20 h [169]. The latter metric relies on Monte Carlo simulation, a technique for understanding the consequences of risk and uncertainty in forecasting models [170]. LOLE95 implies that 95% of simulated Monte Carlo calendar years should have a loss of load of less than a certain number of hours, which in Belgium's case is 20 h.
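The sketch below shows, under a deliberately toy system model, how such a dual LOLE/LOLE95 criterion could be computed from Monte Carlo years; the load statistics, fleet capacity and outage model are illustrative assumptions, not Elia's adequacy model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, hours = 2000, 8760

def simulate_year() -> int:
    """Count loss-of-load hours in one simulated calendar year.
    Toy model: normally distributed hourly load against a fixed fleet
    that occasionally loses a 1 GW unit. All numbers are assumptions."""
    load = 10_000 + 800 * rng.standard_normal(hours)      # MW
    outage = rng.binomial(1, 0.02, hours) * 1_000         # a 1 GW unit tripping
    capacity = 13_000 - outage                            # MW available
    return int(np.sum(load > capacity))

lol_hours = np.array([simulate_year() for _ in range(n_years)])

lole = lol_hours.mean()                 # compare with the 3 h/year criterion
lole95 = np.percentile(lol_hours, 95)   # compare with the 20 h criterion
print(f"LOLE = {lole:.2f} h/year, LOLE95 = {lole95:.0f} h")
```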
Every year, Elia publishes a report evaluating its performance in terms of reliability and grid availability. In the report, in line with the international standard, the number of incidents for which the TSO is responsible and which resulted in at least one customer interruption lasting more than 3 min is recorded. That is, interruptions stemming from customer errors, thunderstorms, third parties, birds, and so forth, are excluded. In Figure 21, the number and duration of incidents that occurred in the past three years at different voltage levels are depicted [171]. It can be seen that, even with annual shares of RES, including intermittent sources, in electricity generation of 19.6%, 24.5% and 21.3% in 2017, 2018 and 2019, respectively, the required reliability and availability are still maintained. Concerning power system adequacy, Elia conducted a study to quantify the needed amount of three reserve types, namely the primary reserve or Frequency Containment Reserve (FCR), the secondary reserve or automatic Frequency Restoration Reserve (aFRR), and the tertiary reserve or manual Frequency Restoration Reserve (mFRR), over the ten years from 2017 to 2027. The FCR aims to stabilise the frequency within the range of 49.8 to 50.2 Hz. To achieve this goal, the response time has to be very quick, as in less than 30 s it needs to compensate for any increase or decrease in the frequency. Elia states that it may acquire 70% of the needed R1 (FCR) reserve outside Belgium. The aFRR has two goals: to regulate the frequency to 50 Hz in order to relieve the pressure on the FCR, and to ensure that the physical import/export rate always corresponds to the import/export balance contractually agreed in the market place for a control area (in Elia's case, Belgium and a part of Luxembourg). The set-point signal, based on ongoing measurements of the difference between the mentioned import/export balances, is calculated by Elia every 10 s and transmitted to all the power plants participating in aFRR provision. Relieving the burden on the aFRR reserve, the mFRR reserve is controlled manually by the TSO's dispatchers, with a response time ranging from a few minutes to a quarter of an hour at most. When the aFRR is saturated or at risk of becoming saturated, for instance following the loss of a vital generation unit, the mFRR is activated. The tertiary reserve can be supplied by various sources, such as generation units that are already running, non-running units with a short start-up time, or consumers on the distribution and transmission networks. As far as the Belgian power system is concerned, the transmission capacity between control areas is chiefly devoted to commercial exchanges. Thus, it cannot be assumed that the neighbouring countries of Belgium could supply the requirements for mFRR. Elia estimated required capacities of 80-100 MW, 140-175 MW, and 825-1600 MW for the FCR, aFRR, and mFRR reserves, respectively, in the period from 2017 to 2027 in Belgium [172]. The projected capacities are derived with a statistical method in which historical system imbalances, as well as possible errors in forecasted RES generation, are extrapolated towards the future.
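The FCR behaviour described above can be illustrated with a simple proportional (droop) controller that is fully activated at a ±200 mHz deviation; this is a generic textbook sketch, not Elia's actual control implementation, and the 90 MW capacity is merely the midpoint of the estimated 80-100 MW range:

```python
def fcr_activation(freq_hz: float, fcr_capacity_mw: float = 90.0) -> float:
    """Proportional FCR response: activation grows linearly with the
    frequency deviation and saturates at the band edges (49.8/50.2 Hz).
    Positive output = extra injection (under-frequency support)."""
    deviation = 50.0 - freq_hz                       # Hz, positive when low
    fraction = max(-1.0, min(1.0, deviation / 0.2))  # clamp at full activation
    return fraction * fcr_capacity_mw

for f in (49.80, 49.95, 50.00, 50.10):
    print(f"{f:.2f} Hz -> {fcr_activation(f):+6.1f} MW")
# 49.80 Hz -> +90.0 MW (full activation); 50.10 Hz -> -45.0 MW (half, downward)
```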
RES Share Predictions
When it comes to RES predictions for Belgium, Elia has defined three scenarios and quantified its projections for the installed capacity of onshore wind, offshore wind, and PV. In the main scenario, referred to here as the Central scenario, the assumptions are based on the National Energy and Climate Plan (NECP) of Belgium, which is the most recent source of information from the authorities concerning RES generation and the evolution of demand. Each EU member was obliged to submit a draft NECP to the European Commission by the end of 2018. As far as Belgium is concerned, the NECP quantifies three main terms: firstly, the offshore and onshore wind, PV, and biomass capacity needed to reach the EU 2030 targets; secondly, the nuclear capacity of the country, which, as mentioned earlier, follows a scheduled phase-out; and thirdly, the growth of total electricity consumption, including TSO and Distribution System Operator (DSO) grid losses. The draft NECP for Belgium includes two scenarios, called the With Existing Measures (WEM) scenario, which considers measures that have already been taken, and the With Additional Measures (WAM) scenario, which takes additional measures into account as well [153]. Besides, each country has submitted a National Renewable Energy Action Plan (NREAP) setting out the approaches and energy mix pursued to reach its targets. Historical values and the planned share of RES in energy and electricity consumption based on the NECP-WAM scenario and the NREAP in Belgium are shown in Figure 22 [153].
Along with the Central scenario, High and Low RES scenarios, denoting higher and lower penetration of RES compared to the Central scenario, respectively, are simulated by Elia. The High RES case assumes higher penetration of onshore wind and solar PV capacity in Belgium, as well as accelerated commissioning of the second wave of offshore wind. Conversely, the Low RES case considers lower penetration of onshore wind and solar PV capacity in Belgium, while offshore capacity follows the same assumptions as in the Central scenario [153]. As far as onshore wind is concerned, an average increase of approximately 170 MW/year in Belgium is estimated by Elia in the Central scenario, resulting in 4.5 GW of installed onshore capacity in the country. This rate rises to 250 MW/year in the High RES case, which leads to 5.3 GW in 2030. In the Low RES scenario, a 90 MW/year growth rate is foreseen, bringing the total onshore capacity to 3.6 GW. Figure 23 shows the historical and predicted values for onshore capacity per scenario in Belgium [153].
For offshore wind capacity, while up to 4 GW is predicted in the Central scenario before the end of 2028, in the High RES case the second offshore wave starts earlier, with 700 MW by 2025. The installed capacities forecast for the other years are similar. In Figure 24, the evolution of the offshore capacity in Belgium is depicted [153].
Reaching a total capacity of 11 GW by 2030, PV generation in the Central case is predicted to grow by 600 MW/year. In the High RES case, PV grows by 900 MW per year and reaches 14 GW in 2030. On the other hand, in the Low RES scenario, the growth rate falls to 300 MW/year, leading to 8 GW of installed PV capacity in 2030, shown in Figure 25 [153].
The Year Marked by COVID-19
In the year 2020, there are noteworthy points regarding electricity generation in Belgium. First, there were 119 h in this year during which more than 50% of the electricity consumed in the country was provided by RES. Considering Table 6, this becomes all the more significant given that this number had never been reached before in Belgium [157]. Table 6. Hours and annual share of the whole consumed electricity when wind and solar generated more than 50% of consumed electricity in Belgium [157].
Furthermore, 39.1% of the electricity demand in Belgium was supplied by nuclear power, declining from 48.7% in 2019. Although in 2018 merely 31.2% of the country's needs were met by nuclear plants, it must be mentioned that 2018 was a year marked by the significant unavailability of several reactors, especially in its last few months. The impact can be clearly seen in the annual net import volume of Belgium in that year, shown in Figure 26. However, the almost neutral annual net import balance in 2020 proves that the nuclear power reduction was compensated by national generation, mostly by RES plants, as the production of gas-fired plants was closely similar to 2019 [157].
Lastly, the impact of COVID-19 on load demand must not be neglected. Although it is complicated to isolate the impact of COVID-19 on the power system clearly, the 7% reduction in total load compared with the average of the five previous years, shown in Figure 27, can mainly be related to the COVID-19 pandemic. Elia reported that during the first phase of lockdown in Belgium, electricity consumption was 25% below normal at certain times of the day. The higher load in August can be justified by the heatwave in the country. Finally, the fact that, on average, 2020 was warmer than the past five years should not be overlooked [157].
Discussion and Conclusions
To combat fossil fuel depletion and air pollution, and as a result of joining the Paris Agreement, countries have a strong desire to switch to alternative forms of energy. Due to their abundance and eco-friendliness, among other attributes, RES are one of the primary candidates for replacing fossil fuels. However, the transition to RES, besides its numerous advantages, introduces significant challenges for the TSO concerning power system reliability and the degradation of the frequency response. Specifically, with the large-scale integration of intermittent RES, the following features are influenced: the required power system reserves, the CO2 emissions of conventional power plants, grid losses at both transmission and distribution levels, the generation of wind and solar plants when their output exceeds a threshold, protection and control systems, and the reliability of the power system. Thus, countries need to switch to RES in an orderly and planned way to accommodate the adverse effects of RES integration. According to the latest statistics, countries have indeed been investing in RES towards their decarbonisation goals in recent years.
Wind speed and solar irradiance vary both temporally and spatially, and these variations are the cause of the volatile nature of wind and solar energy. Consequently, forecasting their variations is crucial. It is notable that wind speed is higher in offshore areas, which evidently leads to a higher output power of the connected generator. Additionally, offshore wind is more consistent and foreseeable than onshore wind. Consequently, besides a reduction in the level of intermittency of wind power to some extent, offshore turbines suffer less fatigue, which increases the lifetime of the related infrastructure. On the other hand, due to the presence of a thermal mass, such as oil or water, the variations in the output power of CSP plants are less significant. To accommodate the oscillations of intermittent RES, ESSs have been introduced as the most promising solution. They can be categorised into five groups: mechanical, electrochemical, chemical, electromagnetic and thermal. Based on its features, each of the mentioned types can be utilised to smoothen the adverse effects of volatile RES.
Regarding wind technologies, offshore and onshore turbines are used to generate electricity, with the latter being more mature in technology and the corresponding infrastructure, having evolved almost 100 years earlier than the former. On the other hand, offshore wind harvesting is becoming more widely accepted, especially in countries with access to seas or oceans, due to its desirable features. Furthermore, from an environmental point of view, offshore turbines do not occupy any land, as they are installed at sea. However, this increases the required technology cost and makes maintenance more costly.
Concerning solar energy, PV and CSP are the two dominant technologies for generating electricity globally. However, as CSP is strongly dependent on the geographical location and is mainly cost-justifiable in regions with DNI > 2000 kWh/m²/year, PV is more widely utilised worldwide. While PV solar systems directly convert solar irradiance to electricity, CSP plants concentrate the solar irradiance onto a point or line, first generating heat and steam and then converting it to electricity. Depending on the type of equipment used to concentrate the sunlight, CSP plants can be further categorised into PT, ST, FR, and SD power plants. Although PT plants are the most mature and dominant type, ST has attracted attention in the past years as it strikes a compromise between cost and efficiency.
When comparing solar and wind energy from the intermittency point of view, solar irradiance variations are more predictable than wind speed. In other words, the day-night cycle of solar irradiance gives the output power of solar systems a more foreseeable pattern. Additionally, due to the higher correlation of solar output power with demand, it is easier to integrate solar systems into the power grid. In Table 7, the most significant characteristics of wind and solar energy, including their related sub-technologies, are summarised. Considering the Belgian power system, in line with the decommissioning of nuclear plants by 2025 and the Paris Agreement, Elia, together with the corresponding authorities, is trying to define a clear and feasible roadmap for the country towards its decarbonisation targets. In Belgium, solar PV and wind harvesting make up the dominant part of the shares among the RES types. Due to its access to the North Sea, Belgium is also investing in offshore wind plants, making it one of the leading countries in this respect. Although Belgium failed to reach its 13% goal for the RES share in energy for 2020, statistics such as the net import of the country and the annual hours of consumed electricity generated by RES in the year 2020 show that the country is on the right path, and achieving its goals by 2030 is within reach. The failure to hit the 2020 targets can partly be attributed to the COVID-19 pandemic, as it forced the authorities to postpone commissioned project deadlines while the country experienced two phases of strict lockdown. For this reason, if the pandemic situation improves and the authorities relax the adopted measures, 2021 could be an exceptional year for RES, with new records set. Acknowledgments: This work was performed in the frame of the Energy Transition Fund project BEOWIND of the Belgian federal government, and the Catalisti cluster SBO project CO2PERATE.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
Severe Thrombocytopenia after Zika Virus Infection, Guadeloupe, 2016
Severe thrombocytopenia during or after the course of Zika virus infection has been rarely reported. We report 7 cases of severe thrombocytopenia and hemorrhagic signs and symptoms in Guadeloupe after infection with this virus. Clinical course and laboratory findings strongly suggest a causal link between Zika virus infection and immune-mediated thrombocytopenia.
The Study
We report severe thrombocytopenia (i.e., platelet count <50 × 10⁹/L) (15), which developed during or after the course of acute Zika virus infection in 7 patients who were admitted to the Guadeloupe University Hospital, French West Indies, during May-August 2016. This period coincides with the peak of a Zika outbreak in Guadeloupe.
The 7 patients (5 women and 2 men, mean age 43 years, range 15-74 years) had petechial purpura in the lower limbs (Table). Five of the patients also had additional bleeding signs and symptoms (gingival bleeding, epistaxis, oral hemorrhagic mucosal blisters, and hematuria). These manifestations prompted us to perform blood tests, which showed isolated thrombocytopenia. Results of physical examinations were otherwise unremarkable. All 7 patients had a typical Zika virus infection (median 5 days, range 2-18 days) before diagnosis of thrombocytopenia. Median minimum platelet count was 2 × 10⁹/L (range 1 × 10⁹/L-17 × 10⁹/L). Results of peripheral blood smears were unremarkable for all patients.
We evaluated patients for a differential diagnosis of isolated severe thrombocytopenia. None had recently received a new medication or vaccination or had traveled to an area to which malaria is endemic. No underlying conditions, such as autoimmune or lymphoproliferative disorders, were known or identified for any patient. Four patients had nonsignificantly positive antinuclear antibody titers (1:80-1:160). Patients were treated with either prednisone or methylprednisolone at an initial dosage of 1-2 mg/kg/day. Three patients also received intravenous immune globulins (IVIG). Two patients received platelet transfusions. Except for patient 2, platelet counts returned to reference ranges <15 days after treatment initiation for all patients. After >2 months without treatment, no relapse was observed in any patient. We provide additional information on the atypical clinical course that was observed for 3 of the patients. Patient 1 had a history of refractory immune thrombocytopenia (ITP). She had been treated for primary ITP during 2004-2007 with steroids and IVIG, followed by vinblastine and danazol, and eventually splenectomy, which was curative. In 2014, an acute chikungunya virus infection caused a relapse of ITP, which fully responded to a short-course steroid treatment. During 2014-2016, she remained asymptomatic and had a platelet count >100 × 10⁹/L. In May 2016, she was hospitalized 2 days after onset of a typical Zika virus infection. The patient had petechiae in the upper and lower limbs and a platelet count of 17 × 10⁹/L. Her clinical course rapidly became favorable after steroid therapy, and she had a platelet count of 172 × 10⁹/L by day 5 of steroid therapy. She was the only patient who did not have IgG against DENV.
Patient 2 responded only partially to steroids and IVIG and had a maximum platelet count of 92 × 10⁹/L at day 14 after treatment initiation. While she was undergoing tapering of the steroid treatment, palate petechiae appeared on day 39 (platelet count 9 × 10⁹/L). Prednisone (1 mg/kg/day) was given for 10 days and was followed by a sustained recovery of the platelet count. Patient 7 had severe hemorrhagic manifestations (gross hematuria and oral hemorrhagic blisters) and a platelet count of 1 × 10⁹/L. He was hospitalized in an intensive care unit and received steroid therapy, IVIG, and platelet transfusion. The patient showed a full response to treatment (platelet count 169 × 10⁹/L at day 7 of treatment).
Conclusions
From the beginning of the current Zika outbreak in the Americas to November 2016, nine case-patients with severe Zika virus-associated thrombocytopenia have been reported, 1 in Suriname (10), 4 in Colombia (11,12), 2 in Puerto Rico (13), and 2 in Martinique (14). We report information for these 9 case-patients and the 7 patients we analyzed in Guadeloupe (Table).
The 16 patients had similar characteristics. First, all had severe and profound thrombocytopenia (platelet counts <20 × 10⁹/L). Second, probably as a consequence of thrombocytopenia, hemorrhagic manifestations developed in all but 1 patient. Third, thrombocytopenia was present shortly after acute Zika virus infection, and Zika virus RNA was still detected in urine from 11 of the 12 patients for whom RT-PCR for Zika virus was performed.
Overall, despite the severity of thrombocytopenia, the outcome was generally favorable after conventional steroid treatment with or without IVIG. Among the 4 patients who died, only 1 patient had isolated thrombocytopenia; this patient died of hemorrhagic complications (13). The other 3 patients had various systemic signs and symptoms and thrombocytopenia; thrombocytopenia was the direct cause of death for only 1 patient (11).
For the 7 patients in Guadeloupe, we were able to exclude all other main causes of isolated severe thrombocytopenia, especially DENV infection. Because all 7 patients still had a positive RT-PCR result for Zika virus in urine when thrombocytopenia was diagnosed, we can reasonably assume that ITP was secondary to Zika virus infection. As reported for other viruses, Zika virus-associated ITP might result from stimulation of the immune system, which usually decreases after clearance of viral replication. The mechanism of thrombocytopenia was probably different in patients 1 and 16 (12), in whom thrombocytopenia appeared as a relapse of previous ITP.
In conclusion, thrombocytopenia is a rare complication of Zika virus infection. Our observations strongly suggest a causal relationship between Zika virus infection and ITP. Therefore, Zika virus should be included in the list of viruses that might trigger immune-mediated severe thrombocytopenia.
Business Innovation in The Hotel Industry
Today, business innovation is a hot topic. Therefore, we propose using a competence-based approach to staff assessment as a business innovation in the hotel industry. The paper investigates the theoretical and methodological principles of hotel staff evaluation using a competency-based approach. Scientific approaches are critically assessed, and aspects of managing material incentives for hotel staff are analyzed. The scientific and theoretical basis of managing the staff's material stimulation in the hotel business on the basis of the competency approach is investigated, and practical tools for its implementation in hotel business activities are evaluated. Methodical approaches to evaluating staff competencies as a component of material incentives management have been developed. A remuneration structure and a bonus system based on the evaluation of staff competencies have been formed, which will help increase the efficiency of personnel management in the hotel industry.
Introduction
The current situation in the hotel services market requires the formation of new, innovative and effective approaches and management concepts, the implementation of which is aimed at consolidating staff to achieve the socioeconomic goals of enterprises. This will harmonise relations in the "employee-employer", "employee-employee" and "employee-consumer" systems and increase the professional competence, productivity and quality of staff. For hotel business enterprises, these management aspects are relevant because staff are directly involved in creating the hotel product, a priority factor in generating enterprise income, maintaining and strengthening its market position and ensuring competitiveness.
Modern aspects of staff evaluation using a competency approach are covered in the studies of M. Armstrong [6] and D.C. McClelland [13]. R.E. Boyatzis [11] analyzed the nature and factors of competence and the existence of a competence structure. J. Raven [12] studied the system of features by which the types of competence can be characterised.
Barybina Y.O. and Lysenko M.O. [14] claim that the Ukrainian hotel market is characterised by a mixed incentive model: an effective combination of tangible and intangible incentives focused on staff development and training.
Melnychenko S., Bosovska M. and Poltavska O. [4] propose making decisions about an employee on the basis of competency-based job quality evaluation: drawing up individual plans for competence and career development, as well as rotating staff.
As a result of many studies, various approaches, procedures and methods for organizing and conducting staff evaluations have been developed. However, at the present stage of innovative development, staff evaluation requires using, generalising and combining all scientific approaches in order to achieve greater efficiency, optimality and effectiveness in business innovations in the hotel industry.
The purpose of the study is to generalize and develop the theoretical foundations of the hotel staff evaluation using a competency-based approach and to develop practical recommendations for improving the management system of material incentives for hotel staff.
Results and discussion
The coronavirus pandemic has caused serious damage to the hotel business in Ukraine, and it may take more than a year for the sector to recover. According to the study, 93% of respondents confirmed a general decline in their hotel's revenue: in 21% of hotels, gross revenue decreased by 25-40%; in a third of hotels, by 40-60%; in 30% of hotels, such reductions reached more than 60%; only 4% of hotels showed an increase in annual revenue, and 3% remained at the same level. According to the data, 66% optimised their costs by reducing staff, 2/3 reduced prices for their services, 63% of respondents improved their product and services (repairing, repositioning, updating standards and conditions with suppliers, changing equipment, etc.), 1/3 of respondents introduced digital and marketing tools, 27% introduced alternative services (coworking, renting rooms for offices, etc.), and 7% decided to repurpose some of their premises, for example, for rent to gambling establishments [1].
In these conditions, the issues of optimal use of labor, information and financial resources in the hotel business remain underexamined. The capabilities of the hotel business in terms of cost savings, strengthening the role of security in customer service, automation of its activities and personalisation of services largely depend on how well the resource potential is studied and on what will contribute to its development.
Innovative management methods in modern HR management, used by the world's and the EU's leading countries, are primarily based on a competency-based scientific approach. In the process of personnel management based on the competency approach, HR specialists or auditors determine a system of parameters that combines all the necessary competencies that employees must have in accordance with job descriptions, and obtain the results of assessing the current state of human resources management. This allows identifying the level of each component of the set of employees' competencies.
Research based on this approach should take into account the fact that professional competencies should be understood as dynamic, because a person constantly develops knowledge, skills and abilities; thus, the development of competencies can be regarded as an innovative process that increases the organization's efficiency as a whole by improving the level of employees' professional skills. This is also one of the main tasks of the personnel management system and human resources development.
Of particular note is the definition suggested by J. Raven, who gives a detailed interpretation of competence and understands it as a phenomenon consisting of "a large number of components, many of which are relatively independent of each other", with some components belonging more to the cognitive sphere and others to the emotional one; "these components can replace each other as components of effective behavior" [2].
Obviously, the competency approach in personnel management is distinguished by redirecting personnel management goals from solving operational personnel problems to tasks of a higher strategic level, which go beyond the usual responsibilities of the personnel management service. It is not enough just to increase knowledge and improve employees' skills, expertise and behavior. The result should be increased productivity and organizational change that can boost the competitiveness and efficiency of the corporation as a whole. Therefore, the goals of personnel management in terms of the competency approach are formulated to show that these processes can improve the organization by achieving higher results, changing employees' behavior, and increasing the productivity and efficiency of the organization.
In fact, this is an innovative concept of personnel management which focuses not on the process or operational results, but on the mechanisms and management models based on the competency approach and their impact on the organization's long-term effectiveness.
A focus on organizational development priorities requires, on the one hand, shaping personnel management functions based on a competency-based approach that can help implement the business strategy and, on the other, highlighting the need to activate mechanisms of employees' self-development and self-organization, since it is impossible to take an active part in improving the organization's activities without drawing on developed knowledge and individual abilities. The task of a personnel management system based on the competence approach is to create an innovative environment that supports and guides employees' self-development.
The concept of the competency approach is an integrated concept that forms the basic principles of personnel management in a modern organization. Such principles include:
1. The principle of systematisation: the use of a competency-based approach to personnel management should involve interconnected elements (goals, objectives and personnel management processes) and be focused on short-term and long-term organizational goals.
2. The principle of complexity: tactical and strategic decisions in applying the competency approach should be developed in accordance with the relationships between different areas and aspects of personnel management.
3. The principle of relevance: activities in the field of the competency-based approach to personnel management should correspond to the personnel situation, offering solutions to the current personnel problems of the organization based on best practices and modern scientific developments.
4. The principle of continuity: activities in the field of the competence approach to personnel management should focus on progressive training and employee development in order to improve performance and build capacity for growth and advancement during one's work in the organization.
5. The principle of succession: expanding the dominant values, unique knowledge, skills and experience acquired by employees in the organization in order to improve performance and maintain and increase its competitive advantages.
6. The principle of advanced development: extending professional horizons and improving employees' skills to create a stock of knowledge, skills and abilities that may be needed to solve complex problems or non-standard tasks of the enterprise in the future.
7. The principle of self-development: creating conditions for employees' self-learning and self-expression in order to activate the internal mechanisms of development that motivate more effective work, increase job satisfaction, and unleash professional and personal potential more deeply.
8. The principle of efficiency: the results of activities in the field of the competency approach to personnel management should provide the required level of economic, organizational and social effects, thus helping to increase the efficiency of the organization [3].
It is considered relevant to use the concept of competence in the personnel management of the hotel business, and apply the results of the competence analysis in order to improve the processes of selection and hiring, employee development and motivation.
Over a long period of research, the scientific literature has formed a system of features that can be used to characterise the types of competence (Fig. 1).
Fig. 1. Types of competence
In addition, a competence can be common to the organization, applying to all its employees, or it can apply to a group of related jobs in which the work is similar but performed at different levels.
As hotel enterprises operate in the field of services, a significant part of which is provided to foreign consumers, we consider it appropriate to introduce an additional compensatory factor (criterion) -"level of foreign languages knowledge". It is known that the quality of hospitality services is the main prerequisite for forming demand for them, and therefore the key to the effective hotel business operation.
In this regard, the list of compensatory factors (criteria) for evaluating the positions of the hotel business includes the "consumer orientation". According to expert estimates, the list of compensatory factors (criteria) for the hotel industry can be systematized as follows Table 1.
The most important factors (criteria) for the hotel industry are the level of qualification, work experience, responsibility for decision-making, complexity and intensity of work, etc.
The conducted expert research of domestic practice of hotel enterprises determined the generalized list of compensatory factors (criteria) with validity of each of them according to strategic purposes and specifics of the enterprise's activity (matrix of job evaluation), formed the scale of factors (criteria) with their distinctive descriptions at various levels, and carried out a direct evaluation of each job Table 2. When forming levels, it is necessary to have their calibrated sequence, which provides clear guidelines for assessing a particular factor. Consistent levels can be defined by describing specific skills, competencies or needs for a particular qualification, training or experience.
Job evaluation of the hotel industry was carried out by an expert committee on the example of PJSC "Salyut Hotel" in Kyiv in accordance with the matrices indicating the level of factor detection for each job. The expert committee included the enterprise's CEO, his deputy for organizational development, the head of accommodation service, the head of the personnel department, the authors as consultants. The results of job evaluation of PJSC "Salyut Hotel" are shown in Table 3. Table 3. Job evaluation of PJSC "Salyut Hotel".
Job at PJSC "Salyut Hotel"
Actual factor evaluation, points Then, it is necessary to analyze which jobs are included in the groups. For example, if, according to the results of the expert assessment, the value of jobs with different qualification characteristics is similar, they can be placed at the same level of the job structure.
Thus, the purpose of designing a hierarchical job structure based on their evaluation in the hotel industry is to create a flexible mechanism for managing staff motivation grounded on a clear and transparent system that will unite all existing jobs in the company into certain groups and divide them by value levels for the enterprise.
According to the proposed method of designing the hierarchical job structure (remuneration) of the hotel industry on the basis of a single competency model for all groups of staff, all jobs of the enterprise are divided into seven levels (G1… G7), each of which is relevant to certain pay ranges, and the latter, in their turn, correspond to five groups (A, B, C, D and E). An individual basic salary within the pay range determined for a given job should be based on evaluating a particular employee by quality criteria.
According to the developed job structure (remuneration) of the hotel industry, payment of an individual employee will be decided in accordance with the results of a comprehensive quality evaluation, which in turn is based on estimating the results of his/her work and assessing the level of competence that has a direct impact on these results.
In the scientific professional literature, such a payment model is characterized as a "contributionoriented payment model" [4].
The "contribution-oriented" payment model can be effectively applied within the differential payment structure developed for hotel enterprises. In this case, personal reward will be formed on the basis of work results, competence and motivational incentives for developing competencies, efficiency, career trajectory [9].
Payment for contribution means payment for the obtained results and competence, as well as for the past and future work achievments (Fig. 2). Thus, contribution-oriented payment is based on a mixed model of performance management -the assessment of input and output factors (competencies and results), and allows concluding that the level of payment for a particular employee in a particular job is determined by both past and future work results.
The results of job evaluation in PJSC "Salyut Hotel" and data on average job payments are given in Table 4.
Payment by future work results
Competence Payment for contribultion The salary is assigned to the employee after analyzing the wage market within the range of a certain level of structure, and its specific amount depends on individual competence, work experience, level of efficiency and requires additional assessment of an individual employee's quality of work.
Therefore, each level of the structure (Gn) also needs to be subdivided into sublevels (Gjn,, where j is the number of sublevels of a certain pay range) which will correspond to certain salary estimates within a given range As the results of the conducted analysis established the relationship between the hotel category and the level of payment, it is advisable to determine the average job salary among hotel enterprises in Kyiv with a 3-star category which corresponds to the category of PJSC "Salyut Hotel". . * The average estimate of an employee's average monthly salary in a similar job among the studied 3-star hotels in Kyiv according to 2019. ** 70% of the average estimate of an employee's average monthly salary in a similar job among the studied 3-star hotels in Kyiv according to 2019 data.
In addition, the research results of the structure of employees' average salary at 3-star hotels in Kyiv showed that its main share is less than 54.8%, while the share of allowances, surcharges, bonuses, other compensation and incentive payments averages 45.2 %. In modern market conditions and in the practice of economically developed countries' enterprises, the share of the fixed part of payment should be at least 70%.
In this regard, we propose to accept 70% of the average market payment among the studied 3-star hotels of Kyiv (Sіn) as a basic part of the salary of the i-th profession in the labor market (ZPdіn) in order to develop a new job payment system of PJSC "Salyut Hotel".
The other (30%) will be a variable part -the amount of bonuses, rewards, other compensation payments (ZPdіn).
Specific salaries and bonuses should be assigned due to the staff competence and contribution by the following models in order to determine the amount of payment and appropriate strategies for the enterprise's activities in the regional labor market in the hotel business sector: ZPi вн < ZPi вн Defense strategy; ZPi вн = ZPi вн Follower's strategy; ZPi вн > ZPi вн Leader's strategy.
Thus, within one job, the range of salary increase is 50%, which is a strong motivating factor. The minimum salary of the lowest (first level) is not lower than state norms and guarantees, i. e. not lower than the stateestablished minimum salary.
The salary for a certain job is set within the range of a certain level of structure, and its specific amount depends on an employee's competence, his/her work experience, the level of work efficiency and requires additional assessment of an individual employee's quality of work.
Basic salaries are adjusted annually in accordance with changes in the labor market, inflation, changes in assessing the complexity of work performed, and so on. The salary grid, which is calculated in relation to basic salaries, is adjusted accordingly.
It is established that the structure of remuneration developed for hotel enterprises has seven levels and, accordingly, seven ranges of payment that intersect (overlap each other) by 30% in average.
The general payment structure at the hotel industry after introducing a new system of basic salaries formation is as follows (Fig. 3). At the hotel enterprises it is rational to accept the main part of the salary at the level of 70%, while the other 30% will be the variable part.
It is planned to accrue one amount of bonus for each employee, which will be considered as compensation for individual contribution and level of competence, the other -as a reward based on the enterprise's final results.
Thus, the application of the developed remuneration system on the basis of job evaluation and the staff competence will increase the efficiency of the hotel business and will help in the following areas: 1. To form a team of adaptive managers who are able to develop and implement a program of the hotel survival and development in changing conditions. To identify and preserve the core human resources of the hotel, i. e. managers, specialists and labour force of special value.
3. To restructure human resources in accordance with: organized transformations while restructuring; implementation of innovation processes; diversification; a complete reorganization.
4. To reduce socio-psychological tension in the team.
The paper's scientific significance
According to the analysis of statistics on the hotel services market in Ukraine, it was determined that it was the crisis that forced hotels to reconsider cost management and turn to a more flexible pricing policy, but what is more -to intensify work with hotel staff and consider them as the main resource that ensures the hotel's competitiveness. In this regard, the development of a job evaluation system on the basis of a competencybased approach is recommended as a way of increasing labor resources potential in the hotel industry. Thus, introduction of business innovation based on jobs evaluation and staff competence creates prerequisites for improving efficiency, optimizing internal business processes, simplifying wage management and motivating employees on the principles of fairness, transparency, flexibility, social partnership.
Conclusions
A balanced (hierarchical) payment structure is formed to improve the organization of payment at the enterprise by introducing a transparent and flexible remuneration system based on individualization of employees' salary and determined not only by traditional parameters (an employee's experience, qualifications, experience), but also by his/her individual characteristics, his/her competence on the basis of abilities evaluation and their possible implementation. In addition, a hierarchical job structure at the enterprise is formed to contribute to achieving business goals and strategies of the enterprise, investing effectively in increasing the cost of human capital, attracting and retaining the best professionals.
|
2021-05-11T00:07:03.991Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "3bdf32feab0b20d39d42802936e7760f5efbff1a",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2021/11/shsconf_iscsai2021_01017.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0f8d1df03520f1ef2c7b522e6d2ea86210f0337c",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
}
|
90307053
|
pes2o/s2orc
|
v3-fos-license
|
How familiarity warps representation in the face space
Recognition of familiar as compared to unfamiliar faces is robust and resistant to marked image distortion or degradation. Here we tested the flexibility of familiar face recognition with a morphing paradigm where the appearance of a personally familiar face was mixed with the appearance of a stranger (Experiment 1) and the appearance of one's own face with the appearance of a familiar face and the appearance of a stranger (Experiment 2). The aim of the two experiments was to assess how categorical boundaries for recognition of identity are affected by familiarity. We found a narrower categorical boundary for the identity of personally familiar faces when they were mixed with unfamiliar identities as compared to the control condition, in which the appearance of two unfamiliar faces was mixed. Our results suggest that familiarity warps the representational geometry of face space, amplifying perceptual distances for small changes in the appearance of familiar faces that are inconsistent with the structural features that define their identities.
Introduction
Human beings are adept at detecting, identifying, and discriminating between faces, despite the high degree of visual similarity based on first-order features. A compelling explanation for how we can discriminate different identities reliably comes from the hypothesis that faces are encoded as vectors in a high multidimensional representational space (Valentine, 1991;Lee, Byatt, & Rhodes, 2000;Leopold, O'Toole, Vetter, & Blanz, 2001;Jiang, Blanz, & O'Toole, 2007). Vectors for images of the same identity are located close to each other in this multidimensional face space. Face images for different identities that are located close to each other are harder to discriminate as compared to those that are distant from each other in face space. Several studies suggest that faces we encounter in early life (Slater et al., 2010) or on a regular basis (O'Toole, Deffenbacher, Valentin, & Abdi, 1994) play a dominant role in shaping the dimensions of face space. People can discriminate the identities of faces of their own race better than faces of other races (Feingold, 1914; of faces from one's own race are morphed with images of another race's faces, one tends to perceive morphs of equal mixtures as the other race. The point of subjective equality for mixed-ethnicity morphs is shifted by 8% to 17% toward one's own race. Prior research shows that faces of personally familiar identities are better discriminated and recognized than unfamiliar faces over changes in head angle, lighting, compression, or squeezing across different exemplars of the same identity (di Oleggio Castello, Taylor, Cavanagh, & Gobbini, 2018;Gilad-Gutnick, Harmatz, Tsourides, Yovel, & Sinha, 2018;Harmon & Julesz, 1973;Hole, George, Eaves, & Rasek, 2002;Sinha, Balas, Ostrovsky, & Russell, 2006). Clearly, the neural face system is capable of efficiently extracting the invariant identity across multiple and varied images of the same familiar individual Guntupalli, Wheeler, & Gobbini, 2017). At the same time, small differences in images of familiar individuals can be discriminated more efficiently for familiar than unfamiliar faces (Chauhan, Visconti di Oleggio Castello, Soltani, & Gobbini, 2017;di Oleggio Castello, Halchenko, Guntupalli, Gors, Gobbini, 2017;di Oleggio Castello, Taylor, Cavanagh, & Gobbini, 2018;di Oleggio Castello, Wheeler, Cipolli, & Gobbini, 2017;Ramon & Gobbini, 2018; Visconti di Oleggio Castello, Guntupalli, Yang, & Gobbini, 2014). The visual appearance of a personally familiar face is learned in detail over protracted and repeated exposure during real-life interactions. In a representational space for faces, the sectors populated by familiar faces may be perceptually expanded because of the rich variety of visual experiences with those faces. Much in the same way that faces of people from one's own race are represented more richly and with more discriminating information, faces of personally familiar others are represented perhaps even more richly.
Here, we asked whether personal familiarity affects categorical changes from one facial identity to another. We created morph continua between pairs of familiar (one's own face or faces of friends) and unfamiliar (faces of strangers) identities and measured how frequently different levels of morphed faces were assigned to the original identities. Morph continua with different levels of mixture of the attributes of two stimuli are used extensively in experimental psychology to test categorical perception. Specifically, in the field of face perception, prior research using face morphs has been focused on studying how flexible categorical perception of faces is in a variety of different conditions such as age, gender, and race (Angeli, Davidoff, & Valentine, 2008;Rhodes, Jeffery, Watson, Clifford, & Nakayama, 2003;Webster, Kaping, Mizokami, & Duhamel, 2004). Categorical perception of a stimulus refers to the idea that for a continuous range of morphs between two stimuli, there is a perception of a categorical change around the midpoint of the morph continuum, and distinctions between morph levels within each category are less distinguishable. This has been shown to be the case for a variety of different stimuli (Harnad, 1987), highlighting how humans tend to perceive continuous variations of stimuli as discrete categories. Categorical perception of facial identity has also been reported for both familiar and unfamiliar stimuli (Beale & Keil, 1995;Kaufmann & Schweinberger, 2004;Kikutani, Roberson, & Hanley, 2008;Ramon & Van Belle, 2016;Rotshtein, Henson, Treves, Driver, & Dolan, 2005). In our experiment, we wanted to assess if a shift in categorical boundary for recognizing an identity is observed when morphing a familiar and unfamiliar identity, reflecting an expansion of perceptual distances for variations among stimuli that are perceived as the familiar individual, in support of the hypothesis that exposure to personally familiar faces alters the representational geometry of face space.
We predicted two possible outcomes. The first hypothesis poses that repeated exposures to personally familiar faces leads to the development of perceptually expanded representational subspaces for familiar individuals. Such an expansion would shift the categorical boundary between familiar and unfamiliar identities, making an equal admixture of attributes of familiar and unfamiliar individuals perceptually resemble the unfamiliar individual. This hypothesis is supported by the work of Stevenage (1998), showing that stricter criteria for naming faces of identical twins develop as a result of training, demonstrating the importance of such criteria for discriminating between very similar faces. Alternatively, under a second hypothesis, multiple exposures to faces of familiar individuals may bias the perception of an ambiguous identity (here, a morphed image) toward being labeled more easily as a familiar individual. This hypothesis is based on the evidence that different image manipulations such as squeezing or compressing images of familiar faces do not seem to disrupt the process of recognition of those identities (Gilad-Gutnick, Harmatz, Tsourides, Yovel, & Sinha, 2018;Sinha, Balas, Ostrovsky, & Russell, 2006), despite the alteration of the shape of the face and the shape of the features. Therefore, the manipulation of the identity information with the use of morphs might result in a stable, enhanced recognition with a larger categorical boundary.
In this article, we present results from two different experiments. In the first study, we used the same set of personally familiar targets and unfamiliar controls for all participants who came from the same social group. In the second study, we tested a set of personally familiar faces that varied across participants and unfamiliar faces and, as a further condition, one's own face.
Results from both experiments show that the midpoint of a morph spectrum between a familiar identity and an unfamiliar identity is more likely to be labeled as the unfamiliar identity, suggesting that we use a more conservative threshold for the process of recognizing faces of familiar identities.
Method Participants
Sixteen graduate students from the Dartmouth College community participated in this experiment (five males, 26.8 ± 2.4). Two of the participants were left-handed. For analysis, data for one participant were discarded due to an error in recording responses. Therefore, the results presented in this report are from 15 participants (five males, 26.7 ± 2.4). Sample size was chosen based on the sample sizes used in previous reports investigating categorical perception of facial identity (Jacques & Rossion, 2006;Kaufmann & Schweinberger, 2004;McKone, Jeffery, Boeing, Clifford, & Rhodes, 2014;Natu, Barnett, Hartley, Gomez, Stigliani, & Grill-Spector, 2016). All participants provided written informed consent to participate in the experiment and were compensated with cash for their time. The Dartmouth Committee for the Protection of Human Subjects approved the experiment (Protocol 21200).
Equipment
Participants sat 50 cm from a computer screen in a dimly lit room. The resolution of the screen was 1,440 × 900 pixels. The experiment was run on a GNU/Linux workstation, and the presentation code was written in MATLAB, using Psychophysics toolbox extensions (Brainard & Vision, 1997;Kleiner, Brainard, & Pelli, 2007).
Stimuli
The stimuli used for this experiment were grayscale pictures of three graduate students that were personally familiar to all participants and three unfamiliar faces that were visually matched with the familiar face identities. A morph continuum between each familiar identity and its visually matched control was created with the software FantaMorph (Abrosoft: https://www.fantamorph.com/). This procedure involved placing around 150 points per image on each of the pairs of face images used to create the morphs. These points lay primarily on the internal features of the face and along the silhouette of the face, as depicted in Figure 1A. The image-processing algorithm implemented in FantaMorph used these points as landmarks to align the two images. By regulating the contribution of each identity for each image, we were able to create morphs of different strengths toward one identity or the other that contributed to the morph, resulting in a morph continuum from one original face image to the other ( Figure 1A and Figure 1B). Additionally, with identical procedures as described above, three morph continuums for six independent sets of identities that were all unfamiliar to the participants were created to serve as controls in the experiment. For both experimental and control morph continua, two pairs were male and one pair was female. All of the original pictures for each identity were acquired in a photo studio in the laboratory with the same lighting conditions and the same distance from the camera to minimize low-level visual dissimilarities between stimuli. We also matched the luminance of all stimuli to a target luminance value (128 in RGB) in order to control for differences in visual properties of the images themselves and make the transition from one morph to the other even more homogeneous (Willenbockel et al., 2010).
Paradigm
Pictures of all the original identities used for the experiment were shown to the participants before starting the experiment. Each face was presented for 4 s, and participants were asked to look at the faces as they would under natural viewing conditions. Each identity was presented once.
In the experiment, each trial sequence started with the presentation of a fixation cross that remained on screen for a jittered interval between 500 and 700 ms. This was followed by the presentation of a target, morphed image for 1,000 ms centered on the location of the fixation cross, subtending 3.5 × 4.5 degrees of visual angle. The target stimuli were morphed images from 10% to 90% in steps of 10%. As soon as the target image disappeared from the screen, the two original identities from which the target morphed image was created were presented on either side of the fixation. The distance between the two original faces was 10 degrees of visual angle. The dimensions were the same for all stimuli. The two test faces stayed on screen until the subjects made a response.
The participants performed a two alternative forced-choice identity recognition task. Participants were asked to respond by pressing the left or right arrow key to indicate which of the two original identities was more similar to the target face ( Figure 1B). Participants were instructed to provide their response as quickly as possible, but not at the expense of accuracy.
Three blocks with the familiar/unfamiliar morphs and three blocks with unfamiliar/unfamiliar morphs were presented. Each block consisted of 108 trials, where each morphed identity was repeated four times at each morph percentage (10% to 90%, in steps of 10%). Thus, over the course of the entire experiment, an image from each of the identity continua ( Figure 1A) was presented to the participant 12 times. Different images from the same identity continuum were never presented in consecutive trials.
Data analysis
For the analysis of the percentage of responses, we refer to responses to one of the two faces upon which the morph continuum was built as "Identity B." In the unfamiliar-familiar morphs, "Identity B" corresponded to the familiar identity. In the control condition with unfamiliar-unfamiliar morphs, the analysis of percentage responses required us to designate one identity per morph continuum as "Identity B," and this designation was arbitrary. In order to deal with the randomness of this assignment, we decided to fix the responses to 50% for morphs at 50% by flipping the identities of A and B and estimated the variability in "Identity B" responses to different morphs using a bootstrap procedure. Thus, the percentage of "Identity B" responses for 50% morphs was 50% by definition. The percentage of "Identity B" responses for 40% morphs is the average of "Identity B" responses for 40% morphs and 100 minus the percentage of "Identity B" responses for 60% morphs, and so forth. We calculated 95% bootstrapped confidence intervals around these symmetric averages. These are the values that have been reported in the figures and all the tables and the ones we used for the statistical analyses. We analyzed the percentage of Identity B responses by building a generalized linear mixed model with binomial error distribution and logit model as linking function using the R package lme4 (Bates et al., 2015) and the function "Anova" from package "car" (Fox et al., 2012). All figures were made using the library "ggplot2" in R (Wickham, 2011). We constructed the model with the categorical response ("Identity A" or "Identity B") as the dependent variable, morph percentage and familiarity condition as independent variables, and participant, morph stimulus continuum, and random intercepts for participants as random effects. Scaled values of the morph percentage were used as a continuous variable, whereas the familiarity condition was specified as a categorical variable with a zero-sum contrast. Statistical significance of the main effects and interaction effects was tested using a Type 3 analysis of deviance, as implemented in the package "car" (Fox et al., 2012).
For the analysis of reaction times, we discarded trials that had response times shorter than 150 ms and longer than 5 s. Only correct trials were included in the analysis for reaction time (RT). Trials were considered correct when participants, response choice matched the identity of the face that had a greater contribution to morph target. For example, if the morph was made of 40% Identity A and 60% Identity B, the correct response was considered Identity B. Therefore, this analysis excluded the 50% morph condition, since no "correct" response exists for that condition. A linear mixed model with log-transformed RTs of correct trials as the dependent variable and the morph percentage of the probe and familiarity condition as independent variables was fit to the data. Scaled values of the morph percentage were used a continuous variable, whereas the familiarity condition was specified as a categorical variable with a zero-sum contrast. We also included the participants, morph stimulus continuum, and random intercepts for participants as random effects in the model. The RTs were log transformed in order to fit the assumptions of linear mixed models.
Data availability
Raw data and the code are available https: //github.com/vassiki/CategoricalPerception.
Percentage responses
First, we determined the percentage of responses that corresponded to "Identity B" (Figure 1B) of the morph spectrum. For unfamiliar-familiar morphs, "Identity B" indicates the personally familiar identity. The analysis of this dependent variable with a generalized linear mixed model revealed the main effect of morph percentage (χ 2 (1) = 3,596.5, p < 0.001) but not of familiarity condition (χ 2 (1) = 0.48, p = 0.49). However, the interaction between morph percentage and familiarity condition was significant (χ 2 (1) = 4.9, p = 0.03). The mean percentage responses and bootstrapped confidence intervals are included in Tables 1 and 2. Unstandardized effect sizes depicting the difference in familiar and unfamiliar blocks at each morph level on percentage "Identity B" responses are included in Table 3. The effect sizes indicate that the significant interaction between the morph percentage and familiarity condition is driven by the ambiguous, 50% morph between unfamiliar and familiar identities. There is a significant reduction of 8.7% (CI [1.1, 17.6]) in the percentage of "Identity B" (familiar identity) responses in the unfamiliar-familiar morph condition as compared to the unfamiliar-unfamiliar morph condition. There also is a significant effect of familiarity for 40% morphs, which were less likely to be classified as the familiar "Identity B" face in unfamiliar/familiar morphs than as an unfamiliar "Identity B" face in unfamiliar/unfamiliar morphs (16.3% versus 20.2%, Table 1; effect size of 3.9, CI [2.5, 10.0], In unfamiliar-familiar blocks, "Identity B" corresponds to the familiar face. In unfamiliar-unfamiliar blocks, "Identity B" is arbitrary, and the data points were calculated by bootstrapping across all possible combinations of unfamiliar-unfamiliar morph continua. The error bars represent bootstrapped 95% confidence intervals around the means. This result indicates that when the identity of morph is ambiguous with some resemblance to a familiar face, people are more conservative and less likely to respond "Identity B" when Identity B corresponds to a familiar face as compared to when both Identities A and B are unfamiliar to participants ( Figure 2B).
Reaction times
The estimates of the model revealed that overall, participants were slower in responding to morphs that contained identity information from familiar exemplars as compared to morphs between two unfamiliar identities (unfamiliar-familiar morph mean RT = 800 ± 51 ms, unfamiliar-unfamiliar morph mean RT = 742 ± 64 ms). Participant's, reaction times were also slower for the more ambiguous identities falling in the middle of the morph continuum. The main effects of familiarity and morph percentage conditions on correct, log-normalized reaction times were significant (familiarity χ 2 (1) = 10.50, p = 0.0012 and morph percentage χ 2 (1) = 33.1, p < 0.001) (Figure 2A; Tables 4 and 5). The interaction between the two main effects was also found to be significant (χ 2 (1) = 22.01, p < 0.001). Estimates of the model determined by using the package lmerTest (Kuznetsova, Brockhoff, & Christensen, 2017) revealed that the interaction between the two main effects was driven by the presence of familiarity information for morphs along the morph continua away from the familiar identities. Means and bootstrapped confidence intervals are included in Tables 4 and 5.
Interim discussion
In this experiment, we found that participants were more conservative in labeling an ambiguous image as a friend rather than a stranger. We also found that participants were slower in making responses when the morphed image was created using the image of a friend and when morphs were closer to the center of the morph spectrum than the ends. In this experiment, the same images were shown to all participants and the results are based on three identities.
We wanted to further expand our investigation probing a different type of familiarity (with one's own face) and with different identities of personally familiar individuals (friends). Therefore, we ran a second study with a similar task with a more counterbalanced design. By asking participants to bring in their friends for stimulus collection, we ensured that unique faces were used as familiar identities for all participants and that our results are not driven by idiosyncratic visual features of a specific set of personally familiar faces shared across all the participants. Moreover, we included images of one's own face (self) as a special case of facial familiarity. Last, in this second experiment, we intermixed the trial types corresponding to different morph conditions (stranger-friend, stanger-self, friend-self, stranger-stranger, friend-friend) within the same block.
Method Participants
Fifteen participants were recruited among the Dartmouth Community (14 undergraduate students and a visiting scholar, N = 15) (mean age: 20.1 ± 2.9, all female). All participants were healthy adults with normal or corrected vision. Each participant was accompanied by two of their friends of the same sex to the photo studio in the laboratory at Dartmouth College, where all three individuals were photographed one at a time. All participants provided written informed consent and were compensated with cash for their time. Additionally, all participants signed a model release form in order to allow the use of their photographs as stimuli. The Dartmouth Committee for the Protection of Human Subjects approved the experiments (Protocol 297800).
Equipment
We used the same equipment as for Experiment 1.
Stimuli
Five stimulus identities were used for each subject. For each participant, these stimuli included a picture of the participant herself (self), one picture each of two different friends, and one picture each of two different strangers. The images of unfamiliar identities were collected at Massachusetts Institute of Technology, under similar lighting conditions and with the same equipment used to collect the photographs of the participants at Dartmouth College. For each participant, we created five morph spectra using the procedure outlined in Experiment 1, corresponding to the following conditions: stranger with friend, stranger with self, friend with self, stranger with stranger, and friend with friend. Since we used two photographs of friends and two photographs of strangers for each participant, we created one morph continuum for stranger with stranger, one morph continuum for friend with friend, two morph continua for stranger with self, two morph continua for friend with self, and four morph continua for stranger with friend. This procedure resulted in 10 unique morph continua per participant, with nine images per morph continuum (10% to 90% in steps of 10%). Therefore, for each participant, we created 90 stimuli.
Paradigm
The task for this experiment was identical to Experiment 1. The experimental paradigm was similar to Experiment 1 but, in order to make the task more challenging, we intermixed trials with stimuli from each of the five morph continua. Stimuli were presented in blocks of 90 trials each. The experiment was self-paced, with the participant pressing the spacebar to start each block. Participants performed 10 blocks, with an overall presentation of 10 times for each unique stimulus.
Data analysis
We analyzed participant's, responses designating one face of each morph continuum as "Identity B." For morphs between the face of a friend and the face of a stranger, the "more familiar," or "Identity B," category corresponded to the face of a friend, and we calculated the percentage of times the participants reported that the morph resembled their friend's face. For morphs between one's own face and the face of a friend, we defined one's own face as the "more familiar," or "Identity B" responses for a 50% morph between two identities. The more familiar "Identity B" identities were the "Friend" for stranger-friend morphs, "Self" for stranger-self morphs, and "Self" for friend-self morphs. (B) Reaction times for correct trials as a function of morph percentage; colors represent morph condition.
"Identity B," category and calculated the percentage of times the participants reported that a morph between their own face and the face of a friend resembled their own face. For morphs between two strangers and morphs between two friends, we made the percentage responses for these two morph conditions symmetric around the 50% morph. We flipped labels for which identity was designated as "Identity B," and collapsed the responses across the flipped labels. Reaction times were analyzed similarly to Experiment 1.
Data availability
As for Experiment 1, raw data and the code are available https://github.com/vassiki/ CategoricalPerception.
Percentage responses
The analysis of percentage "Identity B" responses revealed a significant main effect of scaled morph percentage (χ 2 (1) = 2,736.8, p < 0.001) but not of morph condition (χ 2 (4) = 6.00, p = 0.19). Similar to Experiment 1, we found a significant interaction between scaled morph percentage and morph condition (χ 2 (4) = 243.93, p < 0.001) (Figure 4; Table 6). The 50% morph for the stranger with friend condition was labeled as "Friend" 36% of the time [33,40], similar to the results in Experiment 1. The 50% morph for the stranger with self condition was labeled as "Self" 30% of the time [26,35], and the 50% morph for the friend with self condition was labeled as "Self" 37% of the time [33, 40] ( Figure 3A). The effect sizes for all morph percentages are included in Tables 7, 8, and 9.
f(x) is the percentage "most familiar" response to morph percentage x. For a morph percentage of 50, f(y) was set to the value of 50. Negative values of bias indicate a preference for the less familiar identity within a given morph condition ( Figure 5). For morph percentages close to 50% and 40% versus 60%, we observe significantly negative asymmetry biases, suggesting that participants are more conservative in using the "more familiar" label as the morph identity becomes more ambiguous.
Reaction times
Reaction times were found to be slower for stranger with stranger morphs as compared to all other conditions ( Figure 3B, Table 10). Moreover, we found slower reaction times for morphs in the middle (e.g., 40%-60%) as compared to the ends of the morph spectrum (90-80%, 10-20%) ( Figure 3B, Table 10). Analysis of log-transformed reaction times in correct trials revealed a main effect of morph condition (χ 2 (4) = 18.39, p = 0.001) and scaled morph percentage (χ 2 (1) = 14.62, p < 0.001). The interaction between morph condition and scaled morph percentage was not found to be significant (χ 2 (4) = 9.16, p = 0.06). In Experiment 2, the direction of results of the reaction time is in contrast with those from Experiment 1. This discrepancy could be due to the decision of presenting the trials from all conditions intermixed within blocks, unlike the design of Experiment 1, where the trials of one condition (e.g., morph continua between a familiar and an unfamiliar identity) were presented within the same block. Presenting trials of different conditions intermixed in the same block might have forced the participants to use the same response strategy for all morph conditions, as opposed to performing the task as they did in Experiment 1, where the identity for the response was expected in advance.
Interim discussion
In this experiment, we replicated the main finding from Experiment 1, demonstrating that participants were less likely to label an ambiguous face as the more familiar identity, rather than the less familiar or unfamiliar identity. The size of this effect was remarkably consistent across three different familiarity contrasts, ranging from 12.6% to 19.8% for 50% morphs, and larger than the effect size in Experiment 1 of 8.7%. We also found that the participants were less accurate and slower in performing the task for the Stranger with Stranger morph spectrum, in contrast to Experiment 1, possibly due to the use of intermixed trial types in this experiment.
Discussion
We investigated how categorical boundaries of identity are influenced by familiarity. We tested categorical decisions about recognition of identities using morphs between different identities. Using this experimental design, previous work has shown that perception of facial identity is "categorical," reflected in the abrupt transitions in perception of a different identity somewhere along the morph continuum (Beale & Keil, 1995;Ramon & Van Belle, 2016;Rotshtein, Henson, Treves, Driver, & Dolan, 2005). Here, in our first experiment (Experiment 1), unlike previous work reported in the literature, we tested morph continua that were created with a familiar and an unfamiliar identity. Results showed that the categorical decision boundary was shifted toward the personally familiar faces, such that the morphed image at the midpoint was more often Table 9. Effect sizes for percentage more familiar responses in the friend-self morph condition. These values were computed by comparing the percentage more familiar response at each morph percentage with the percentage response for the same morph percentage in the friend-friend morph condition.
Negative values indicate that the friend was chosen as a label more frequently than the self.
judged to be the unfamiliar individual. In our second experiment (Experiment 2), we replicated this result and showed further that the categorical boundary is shifted similarly for one's own face when morphed with the face of strangers or the face of personally familiar others. This finding further supports our hypothesis that the higher degree of familiarity with the appearance of a face affects the categorical boundary that distinguishes that identity from other identities. While familiar faces are flexibly recognized in highly degraded or distorted images (Gilad-Gutnick, Harmatz, Tsourides, Yovel, & Sinha, 2018;Sinha, Balas, Ostrovsky, & Russell, 2006), a more conservative approach is used in labeling an ambiguous identity as a familiar individual when noise from an unfamiliar identity is added to the features of the familiar face. In the light of this result, we propose that multiple exposures to the same individual in a variety of different viewing conditions sharpen tuning to the features that make that familiar identity distinct (Tanaka, Giles, Kremen, & Simon, 1998). Conversely, an ambiguous identity is more likely to be classified as a stranger despite some resemblance to a familiar face. Previous research has provided evidence for categorical discrimination between identities at around the midpoint of the morph continua when two unfamiliar identities are morphed together (Beale & Keil, 1995). When presented with two alternatives for choosing the identity of a morphed face, subjects are able to choose the correct identity with high accuracy if they are familiar with the original identities (Ramon & Van Belle, 2016). In our experiments, the perception of categorical change of identity was closer to the end represented by the familiar identity, indicating that when presented with an ambiguous identity, the visual system is less sensitive to changes in the features of an unfamiliar face. Our results provide strong evidence that learning through a repeated and prolonged exposure to familiar faces warps the representational geometry of face space, resulting in more conservative boundaries for recognition of those familiar identities. Previous research by Wilson and Diaconescu (2006) supports this interpretation by demonstrating that the discriminability between schematic faces that have been learned and visually matched controls improves as a result of training. Our hypothesis is that multiple exposures to faces of familiar individuals under a variety of viewing conditions such as different head views, facial expressions, differences in lighting, and so on result in flexible, enriched representations of these identities that are resilient to distortions in visual features. These enriched representations afford robust recognition across diverse conditions, even when images of familiar individuals are experimentally manipulated by squeezing, flattening, or caricaturization (Gilad-Gutnick, Harmatz, Tsourides, Yovel, & Sinha, 2018; Sinha, Balas, Ostrovsky, & Russell, 2006). At the same time, learning these representations increases our sensitivity to features that are inconsistent with them. For example, distinctions between faces of two siblings of the same sex, or even identical twins, are more easy to discern to family members than they are to strangers (Stevenage, 1998).
Our results are in line with research on face perception of other races. A study by Webster and colleagues (2004) showed that perception of race is influenced by the set of faces that observers are exposed to and that participants have a directional bias in determining the categorical boundary for morphs between two races (Japanese and Caucasian). The direction of this bias is determined by the participants' race. The categorical boundary for a racially ambiguous face was closer to the Japanese face end of the morph spectrum for Japanese observers as compared to Caucasian observers and vice versa. This result suggests that participants use a narrower boundary for categorizing exemplars from their own race due to greater exposure to the features of faces of that race. Similar to these results, our study shows that the categorical boundary for perceiving a face as a familiar individual is shifted toward the familiar original identity.
Our interpretation that personal familiarity changes the geometry of representational space warrants an explanation under the multidimensional face space hypothesis (Valentine, 1991). The norm-based face space hypothesis posits that the average of all the faces encountered by an individual represents the center of a multidimensional space, and unique faces are encoded as points within that space. Tanaka, Giles, Kremen, and Simon (1998) proposed that a face that is atypical, or further away from the norm of the face space, will dominate a 50-50 morph with a more typical face. The authors propose that atypical faces are easier to discriminate than typical faces. This is because representations of atypical faces have larger representational spaces in the absence of a high density of face exemplars competing to occupy the same sectors of the face space. This suggests the possibility that a familiar face similarly occupies an expanded representational space, even though its sector of face space is close to the norm determined by experience and, therefore, should have a high density of face exemplars to compete with. Thus, face space appears to be warped to allocate increased representational space to familiar faces and facilitate discrimination from other face exemplars. Faerber, Kaufmann, Leder, Martin, and Schweinberger (2016) showed that faces of familiar international celebrities are rated as being less typical than their corresponding antifaces, which is not true for faces of strangers (Austrian celebrities and their corresponding antifaces), even though a face and its antiface are equidistant from the norm. Moreover, both may be relatively close to the norm-celebrities often have very regular features-and, therefore, are competing with a high density of other face exemplars. The original face of the familiar celebrity is a distinct point in the multidimensional space, and its antiface is the point that is equidistant from the average in the same space but in the opposite direction. If the subspace for familiar faces in face space is expanded, the perceptual distance between the familiar face and the population average is larger than the perceptual distance between the unfamiliar antiface and the population average, even though these two distances are equal in terms of physical differences. This finding is in line with our results.
In conclusion, our experiment shows that personal familiarity warps face space. A familiar face appears to occupy a sector of perceptual face space that is expanded relative to its size based on differences in measured physical similarity. This expansion of face space for familiar others can enhance view-invariant perception of the identity of a personally familiar other (Jenkins, White, Van Montfort, & Burton, 2011) as well as perception of changes in appearance that have social significance (Chauhan, Visconti di Oleggio Castello, Soltani, & Gobbini, 2017;Visconti di Oleggio Castello, Guntupalli, Yang, & Gobbini, 2014). The expanded representational space for a familiar face allows recognition of identity across varied distortions but at the same time increases the signal to noise to detect features that are inconsistent with an identity.
|
2020-07-23T09:01:59.618Z
|
2020-07-01T00:00:00.000
|
{
"year": 2020,
"sha1": "6114bec4b23312b8aecac988e5e2b219b704b8db",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1167/jov.20.7.18",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0b2a063374e4f125f3b7c98c97ea57f59ba807da",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Biology",
"Computer Science",
"Medicine",
"Psychology"
]
}
|
44043464
|
pes2o/s2orc
|
v3-fos-license
|
Fast electron slowing-down and diffusion in a high temperature coronal X-ray source
Finite thermal velocity modifications to electron slowing-down rates may be important for the deduction of solar flare total electron energy. Here we treat both slowing-down and velocity diffusion of electrons in the corona at flare temperatures, for the case of a simple, spatially homogeneous source. Including velocity diffusion yields a consistent treatment of both `accelerated' and `thermal' electrons. It also emphasises that one may not invoke finite thermal velocity target effects on electron lifetimes without simultaneously treating the contribution to the observed X-ray spectrum from thermal electrons. We present model calculations of the X-ray spectra resulting from injection of a power-law energy distribution of electrons into a source with finite temperature. Reducing the power-law distribution low-energy cutoff to lower and lower energies only increases the relative magnitude of the thermal component of the spectrum, because the lowest energy electrons simply join the background thermal distribution. Acceptable fits to RHESSI flare data are obtained using this model. These also demonstrate, however, that observed spectra may in consequence be acceptably consistent with rather a wide range of injected electron parameters.
Introduction
X-and γ-ray radiations give the most direct window on accelerated electrons in flares. They have revealed that accelerated particles, electrons and/or ions, are an energetically major product of the flare energy release process (e.g. Vilmer & MacKinnon 2003). Brown et al. (2003) have emphasised the importance of the source-averaged electron distribution as a useful 'halfway house' between the observed photon spectrum and the distribution of electrons initially injected into the source region, i.e. the immediate product of the acceleration process. Assumptions about the dominant factors in electron transport then allow deduction from the source-averaged electron distribution of the distribution output by the acceleration process. Quantities like the total energy released in the form of fast electrons follow immediately. Brown (1971) first analysed the case in which electrons slow down via Coulomb collisions in a cold target, i.e. a region in which ambient particle thermal speeds are all very much less than those of the X-ray emitting electrons. Key results were given for a photon spectrum I(ǫ) (photons cm −2 keV −1 s −1 ) depending on photon energy ǫ as a power-law, i.e. I(ǫ) ∼ ǫ −γ for some γ > 0. Such a photon spectrum is often observed. Assuming a cold target from which no electrons escape, it implies an injected electron energy distribution depending on electron energy E as E −γ−1 . The total energy content of such a distribution is governed by the lowest electron energy for which this power-law form holds good. Unfortunately observations remain ambiguous on the likely value of this lower cutoff, so the total flare energy in accelerated electrons remains uncertain by more than an order of magnitude. The flare energy in electrons of energies > 25 keV appears to be a large fraction of the total flare energy (Lin & Hudson 1976;Hoyng et al. 1976;Saint-Hilaire & Benz 2005); observations even exist suggesting a low energy cutoff in the 2-5 keV range (Kane et al. 1992). Emslie (2003) has pointed out that the cold target assumption may be invalid for the lowest energy (few keV) accelerated flare electrons. Spatial structure of RHESSI (Reuven Ramaty High Energy Solar Spectroscopic Imager) images in particular suggests that these electrons stop entirely in the corona, in high temperature (> 10 7 K) regions. Then the test particle slowing-down rate no longer increases monotonically with decreasing particle energy; as particle speed approaches ambient particle thermal speeds from above, the rate of loss of energy to background particles decreases, exhibiting a zero for a fast particle energy E crit very close to the ambient electron thermal energy. Emslie contends that this value should be used as the minimum possible lower energy cutoff when evaluating fast electron total injected energy; electrons below this energy do not slow down monotonically, instead merging with the background thermal distribution. Emslie's suggested procedure has been employed by Lin et al. (2003) to estimate the fast electron energy content in the flare of 23rd July 2002.
Emslie's discussion is expressed entirely in terms of the systematic slowing-down rate. This gives valuable insight but cannot give a complete description of the evolution of injected electrons. Suppose we inject a mono-energetic electron distribution at E crit . Clearly, although the systematic slowing-down rate at E crit is zero, the injected electrons will not stay indefinitely at E crit ; they will spread out in energy in such a way as to eventually join the ambient Maxwellian population, doing so in the first instance via diffusion in velocity rather than systematic slowing down. A more complete treatment is needed to discuss the form of the photon spectrum and what it is telling us about flare fast electrons. McClements (1987) has included velocity diffusion effects, but only as one component of a complicated treatment which also features a number of other processes, and he does not explore the issues we address.
In this contribution we examine the evolution of injected electrons when the cold target assumption breaks down, in the slightly idealised case of a homogeneous source and constant background temperature. We include velocity diffusion as well as systematic slowing down, and reformulate the interpretation of observed photon spectra. The next Section formulates the problem and gives some analytical discussion. Section 3 illustrates our discussion with some calculated spectra, compared to RHESSI data. Section 4 gives brief conclusions.
Assumptions; Fokker-Planck equation
In order to illustrate the consequences of velocity diffusion for photon spectra we consider an idealised problem in which all injected electrons thermalise in a uniform, homogeneous medium, characterised by a single, ambient electron density n e and temperature T e . Loss at boundaries will have a negligible influence on the electron distribution function and pitch-angle information will not be important for the calculation of the total, emergent photon spectrum, particularly since bremsstrahlung directionality is unimportant at the few keV photon energies appropriate here. We can gain significant insight, and also solve a problem appropriate to understanding the X-ray emission integrated over the whole of the event, by studying a steady-state situation, so no quantity depends on time. In practice n e and T e will evolve as a result of the thermal and hydrodynamic response of the atmosphere to the flare energy release, but this is a complication of detail rather than principle and we ignore it in the interests of gaining insight. (Relevant electron timescales such as the thermalisation time are at most on the order of seconds, whereas bulk changes to the plasma, such as changes of temperature as characterised by variation of the soft X-ray flux, take place over timescales of a few minutes.) Thus we can characterise the (pitch-angle integrated) electron distribution everywhere in the source by a single function f (v) ((cm s −1 ) −3 ) of velocity v (cm s −1 ). The normalisation of f is given by where N e is the total number of all electrons in the source. An electron of 10 keV initial energy will stop in a column depth of 2 × 10 19 cm −2 of fully ionised hydrogen, inside the coronal portion of a loop, e.g. for densities > 2 × 10 10 cm −3 and loop lengths > 10 9 cm, conditions not infrequently inferred in flares. Thermalisation, in the alternative case that this energy is close to the thermal speed, will occur in a comparable distance. In addition, magnetic field convergence may further enhance the coronal residence time of electrons and increase the effective distance available for thermalisation. Electrons well above thermal speeds will experience cold target conditions throughout the corona and chromosphere and will in any case be described correctly by what follows. Thus, while possibly not the case in all events, it is not unreasonable that all electrons for which finite thermal velocity ('warm') target effects are important thermalise in the coronal, warm target region.
This steady-state treatment will be valid as long as we apply it on timescales long compared with the relaxation times of most injected electrons, but short enough that the injected electrons do not become significant in number compared to the thermal distribution. Alternatively, we may regard it as giving the time integral of the distribution function in the case of an initial, impulsive injection, in which case the source function $S$ is actually the initial condition on $f$ (MacKinnon & Craig 1991). The time integral of $f$ is the necessary quantity for calculation of the total bremsstrahlung photon yield.
We use the Fokker-Planck formalism for particle evolution under binary collisions (e.g. Rosenbluth et al. 1957; Montgomery & Tidman 1964). We also make the assumption that the fast particles are 'dilute', in the particular sense that they may be ignored in calculating the velocity space drift and diffusion coefficients: these may be evaluated purely from the background distribution. Then the steady-state Fokker-Planck equation for $f(v)$, derived from Helander & Sigmar (2002, p. 37-38), may be written as Eq. (2), in which the function $\Phi$ is the error function: $\Phi(v) = \frac{2}{\sqrt{\pi}} \int_0^v e^{-t^2}\, \mathrm{d}t$. Here velocities $v$ have been normalised to the ambient electron thermal speed $v_T = \sqrt{2kT/m_e}$, and times to the electron thermal collision time $t_c = 4\pi\epsilon_0^2 m_e^2 v_T^3\, n_e^{-1} e^{-4} (\ln\Lambda)^{-1}$. Although there is no time-dependence in this problem, the electron injection function $S(v)$ is of course per unit time.
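For orientation, here is a minimal Python sketch of these two normalisation constants; the formulas are those given above, while the temperature and density values chosen are purely illustrative assumptions.

```python
import numpy as np

# Normalisation constants of the Fokker-Planck treatment (SI units).
k_B  = 1.381e-23   # Boltzmann constant [J/K]
m_e  = 9.109e-31   # electron mass [kg]
e    = 1.602e-19   # elementary charge [C]
eps0 = 8.854e-12   # vacuum permittivity [F/m]

def thermal_speed(T_e):
    """Ambient electron thermal speed v_T = sqrt(2 k T / m_e) [m/s]."""
    return np.sqrt(2.0 * k_B * T_e / m_e)

def collision_time(T_e, n_e, lnLambda=20.0):
    """Electron thermal collision time
    t_c = 4 pi eps0^2 m_e^2 v_T^3 / (n_e e^4 ln Lambda) [s]."""
    v_T = thermal_speed(T_e)
    return 4.0 * np.pi * eps0**2 * m_e**2 * v_T**3 / (n_e * e**4 * lnLambda)

if __name__ == "__main__":
    T_e = 2.0e7    # assumed flare-loop temperature [K] (illustrative only)
    n_e = 1.0e16   # assumed ambient density [m^-3] (illustrative only)
    print(f"v_T = {thermal_speed(T_e):.3e} m/s")
    print(f"t_c = {collision_time(T_e, n_e):.3e} s")
```

For $T_e \approx 20$ MK and $n_e \approx 10^{16}$ m$^{-3}$ this gives $t_c$ of order 0.1 s, consistent with the 'at most on the order of seconds' thermalisation timescale quoted above.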
Equation (2) needs two boundary conditions. The boundary condition at infinity is that the velocity-space particle flux vanishes as $v \to \infty$; this ensures that there is no flux of particles out of the system at infinity. For the other boundary condition we fix $f(v)$ at $v = 0$: $f(0) = f_0$, consistent with the conditions for validity of our steady-state treatment and with the assumption of 'diluteness' that justified the linearisation of the Fokker-Planck equation. $f_0$ describes the background thermal distribution.
We integrate Eq. (2) once from $v$ to infinity and use the boundary condition at infinity, then again from 0 to $v$, employing an integrating factor $e^{v^2}$ and using the boundary condition at 0. Thus we find the solution, Eq. (4), which will be used in Sect. 3 to calculate distributions and resultant photon spectra for various forms of $S$. The result (4) is rendered rather impenetrable, however, by the function $\Phi$. An illuminating, semi-quantitative analytical discussion may be carried out by invoking the large-argument form of $\Phi(v)$, strictly applicable for $v \gg 1$, in which $\Phi(v)$ approaches unity. In this case the Fokker-Planck equation reduces to the approximate form of Eq. (5).
Approximate solution
First, we note that the LHS of Eq. (5) may be rewritten to show explicitly that the systematic rate of change of $v$ (the coefficient of $\partial f/\partial v$) does indeed display a zero, even in this approximate form, at $v = 1/\sqrt{2}$, quite close to the zero found using the full form of $\Phi$ by Emslie (2003). All the necessary qualitative features are included in the description of Eq. (5), in spite of its approximate nature, and its solutions will have the appropriate qualitative properties.
Note that the warm target corrections to the systematic slowing-down rate and the dispersive term both become important in the limit $v \to 1/\sqrt{2}$. The arguments of Emslie (2003) rest on the presence of the zero (here at $v = 1/\sqrt{2}$) in the systematic slowing-down term. Electron slowing-down times approach $\infty$, so a finite emergent photon spectrum demands $S \to 0$ as $v \to 1/\sqrt{2}$. However, in this limit the dispersive term has become important, removing the divergence in electron 'lifetime'. Using the boundary conditions as before, and changing the order of integration in the resulting integral, Eq. (5) has the solution given in Eq. (6). In the absence of any source $S$, Eq. (5) has the background Maxwell-Boltzmann distribution ($f_0 e^{-v^2}$) as its solution, as indeed does Eq. (2). The description in terms only of systematic slowing-down rates divorces the fast particle and background distributions. This is no longer the case in this diffusive treatment: the presence of the boundary condition at $v = 0$, which must be satisfied using the same background density used to calculate the drift and diffusion coefficients, ensures that the fast particle distribution merges smoothly with the thermal 'core'. It follows that we are obliged to include also the contribution to photon emission from the thermal plasma, if we are indeed looking at photon energies such that velocity diffusion is important for the emitting electrons.
In the limit $v \to \infty$, Eq. (6) reduces to the form given in Eq. (7). Recall that our $f$ is identical with the mean source electron distribution, the key quantity in interpreting X-ray emission. With the traditional assumptions of fast electrons slowing down in a cold, thick target, this mean distribution is just the cumulative distribution of the injected energy distribution. Eq. (7) reproduces this result, as it should. We can rapidly recover well-known results in that limit, for instance Brown's (1971) relations between energy power-law spectral indices of observed photons and injected electrons. We see that the three terms in the solution (6) consist of: the Maxwell-Boltzmann core of the distribution; a term which resembles the cold target result more and more closely as $v \to \infty$; and a term which forces these two regimes to merge smoothly.
Mono-energetic injected population
The special case of a mono-energetic form for $S$ is instructive: $S(v) = S_0\, \delta(v - v_0)$ for some velocity $v_0$. Then the solution (6) becomes piecewise in character. For $v < v_0$ the distribution is composed of the original, background Maxwellian distribution plus a component which is identical to the cold target result for $v, v_0 \gg 1$, but which approaches 0 as $v \to 0$. This additional, non-Maxwellian component becomes less and less significant for smaller and smaller $v_0$. For $v > v_0$, the distribution is identical with the original background Maxwell-Boltzmann distribution, only with its normalisation increased. Since we must have $S_0 \ll f_0$ for validity of the original linearisation of the Fokker-Planck equation, we see that the distribution will resemble the original Maxwellian more and more closely as $v_0$ gets closer and closer to 0. This justifies the qualitative comments made in Sect. 1: electrons injected close to the thermal speed diffuse in energy rather than slowing down monotonically, merely adding their number to the original background Maxwellian. Figure 1 illustrates this.
Deduction of S for a power-law photon spectrum
As mentioned in Sect. 1, the photon spectrum may in principle be inverted to yield the mean (source-averaged) electron distribution (Brown 1971; Brown et al. 2003). This is identical with the distribution function $f$ in the special case of our homogeneous source. If observations have given us $f$ in this way, Eqs. (2) or (5) then immediately give $S$. Consider the case of a power-law photon spectrum $I(\epsilon) \sim \epsilon^{-\gamma}$. Assume for the moment that this form holds at all photon energies of interest. Then the results of Brown (1971) give us $f(v) \sim v^{-2\gamma-3}$. Inserting this form into Eq. (5) we find the corresponding source function, Eq. (10). As in the case retaining only systematic energy loss (Emslie 2003), $S$ has a zero, changing sign at a velocity we denote $v_*$. We have seen that a diffusive treatment underlines the necessity of including the radiation from the thermal, 'core' part of the distribution. In assuming the power-law photon spectrum to be appropriate at all photon energies we have implicitly neglected this contribution. $v_*$ represents the lowest velocity at which the single, uninterrupted power-law photon spectrum can still be reconciled with the presence of the Maxwell-Boltzmann core. Below $v_*$ we would have to actually remove particles from this core to prevent deviations from a power-law photon spectrum; hence $v_*$'s dependence on $\gamma$. Moreover, we cannot 'overcome' the core Maxwellian distribution by, for example, injecting a power-law energy distribution of electrons that persists down in energy towards thermal speeds. As we saw in Sect. 2.3, these electrons mostly thermalise diffusively, producing only a slight modification to the core. We might follow Emslie (2003) and evaluate total electron energy content by integrating $S$, given by Eq. (10), from $v_*$ to $\infty$. Rather than adopting a lower energy cutoff for the power-law which evidently holds at high energies, however, this approach underlines the need for a consistent treatment of radiation from both thermal and accelerated electrons.
Numerical illustrations
We return now to the full solution of the Fokker-Planck equation as given in Eq. (4), and provide some illustrative examples relevant to solar observations. We adopt as the source function $S(v)$ a power-law, Eq. (11), of the form $S(v) = \tilde{S}\, v^{-\delta_v}\, H(v - v_0)$, where $\tilde{S}$ is normalised such that per unit time (normalised to the electron thermal collision time) there are $S_0$ particles injected in total at velocities above $v_0$, and $S$ is prevented from going to infinity at low velocities by Heaviside's step function $H$, which removes all particles with velocities less than $v_0$. For a homogeneous source, the emission rate of photons of energy $\epsilon$ per unit energy range per unit volume, $\mathrm{d}j/\mathrm{d}\epsilon$ (photons s$^{-1}$ keV$^{-1}$ cm$^{-3}$), may be found by multiplying the distribution function by $v\, \mathrm{d}\sigma/\mathrm{d}\epsilon$ to obtain the instantaneous rate of emission of photons by electrons in the velocity range $v \to v + \mathrm{d}^3 v$, then integrating over all velocities (Brown 1971), noting that $\mathrm{d}^3 v = 4\pi v^2\, \mathrm{d}v$. This gives Eq. (12),
$$\frac{\mathrm{d}j}{\mathrm{d}\epsilon} = 4\pi n_e \int_{v(\epsilon)}^{\infty} v^3\, f(v)\, \frac{\mathrm{d}\sigma}{\mathrm{d}\epsilon}\, \mathrm{d}v,$$
where $n_e$ is the background plasma number density, $v(\epsilon)$ is the speed of an electron of kinetic energy $\epsilon$, and $\mathrm{d}\sigma/\mathrm{d}\epsilon$ is the Bethe-Heitler cross-section:
$$\frac{\mathrm{d}\sigma}{\mathrm{d}\epsilon} = \frac{Q_0}{\epsilon E} \ln \frac{1 + \sqrt{1 - \epsilon/E}}{1 - \sqrt{1 - \epsilon/E}}.$$
Here, $E$ is the electron kinetic energy and $Q_0$ is given by $Q_0 = \frac{8}{3}\, \alpha\, r_e^2\, m_e c^2$, where the fine structure constant $\alpha \approx 1/137$ and $r_e = 2.82\times10^{-13}$ cm is the classical electron radius. The photon spectrum, $\mathrm{d}J/\mathrm{d}\epsilon$ (photons s$^{-1}$ cm$^{-2}$ keV$^{-1}$), that would be observed by RHESSI is then $\mathrm{d}J/\mathrm{d}\epsilon = (V/4\pi r_\oplus^2)\, \mathrm{d}j/\mathrm{d}\epsilon$, where $V$ is the volume of the source and $r_\oplus$ is the distance from the Sun to the Earth. Since radiation from the whole emitting volume is observed, the value of $V$ will be determined implicitly by the spectral fitting process (see Sect. 3.2) and need not be separately evaluated. As previously stated, the presence of $\Phi$ in Eq. (4) renders a full analytical solution intractable. Therefore we proceed numerically, using Romberg integration (Press et al. 1992) to evaluate Eq. (12) with $f$ given by Eq. (4).
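As a concrete illustration of Eq. (12), here is a minimal, hedged Python sketch of the photon emissivity integral with the Bethe-Heitler cross-section. Everything beyond the formulas above is an assumption: the non-relativistic energy-velocity relation, the upper integration limit, and the placeholder Maxwellian $f$ (normalised per unit volume for this sketch) stand in for the full solution, Eq. (4).

```python
import numpy as np
from scipy.integrate import quad

# Sketch of Eq. (12): photon emissivity dj/d(eps) from a given electron
# distribution f(v). Energies in keV, velocities in cm/s. The Maxwellian f
# used below is a placeholder (the paper uses the full solution, Eq. 4).

mec2  = 511.0                 # electron rest energy [keV]
c     = 2.998e10              # speed of light [cm/s]
alpha = 1.0 / 137.0           # fine structure constant
r_e   = 2.82e-13              # classical electron radius [cm]
Q0    = (8.0 / 3.0) * alpha * r_e**2 * mec2   # [cm^2 keV]

def bethe_heitler(eps, E):
    """Bethe-Heitler cross-section, differential in photon energy eps,
    for electron kinetic energy E (both keV); zero for E <= eps."""
    if E <= eps:
        return 0.0
    s = np.sqrt(1.0 - eps / E)
    return (Q0 / (eps * E)) * np.log((1.0 + s) / (1.0 - s))

def energy(v):
    """Non-relativistic kinetic energy [keV] of an electron of speed v [cm/s]."""
    return 0.5 * mec2 * (v / c)**2

def emissivity(eps, f, n_e, v_max=2.0e10):
    """dj/d(eps) [photons s^-1 keV^-1 cm^-3], Eq. (12):
    4 pi n_e * integral over v of v^3 f(v) dsigma/deps."""
    v_min = c * np.sqrt(2.0 * eps / mec2)   # emitting electron needs E >= eps
    integrand = lambda v: v**3 * f(v) * bethe_heitler(eps, energy(v))
    val, _ = quad(integrand, v_min, v_max, limit=200)
    return 4.0 * np.pi * n_e * val

if __name__ == "__main__":
    v_T = 2.5e9   # assumed thermal speed [cm/s] (~20 MK; illustrative only)
    n_e = 1.0e10  # assumed ambient density [cm^-3] (illustrative only)
    # Maxwellian normalised (per unit volume, an assumption for this sketch)
    # so that 4 pi * Int f v^2 dv = n_e.
    f_mb = lambda v: n_e / (np.pi**1.5 * v_T**3) * np.exp(-(v / v_T)**2)
    for eps in (5.0, 10.0, 20.0):
        print(eps, emissivity(eps, f_mb, n_e))
```

The text uses Romberg integration; ordinary adaptive quadrature is used here purely for brevity, since only the structure of Eq. (12) is being illustrated.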
The parameters which may be varied in the numerical simulations are: the ratio of $f_0$ to $S_0$, i.e. the relative magnitudes of the background Maxwellian population and the injected power-law electrons; the lower cutoff velocity of the injected electrons, $v_0$; and the spectral index $\delta$ of the power-law. It should be noted that in the majority of the literature, the power-law used to model flare electrons is a power-law in energy of the form $S(E) = S_0 E^{-\delta}$. For consistency and ease of comparison we shall refer to this $\delta$ in subsequent discussion. The corresponding $\delta_v$ for our velocity power-law, Eq. (11), is related to $\delta$ by the expression $\delta_v = 2\delta - 1$. Unless otherwise stated, the default values of the parameters are $f_0/S_0 = 10^8$, $\delta = 4.0$, and $v_0 = v_T$.

Figure 2 illustrates the effect of altering the ratio $f_0/S_0$. The logarithmically-plotted photon spectra consist of two main regions: a straight power-law profile at high photon energy, blending smoothly into a Maxwellian profile at lower energy. As would be expected, increasing the relative contribution of the Maxwellian background has no effect at high photon energies since here the profile only contains contributions from electrons of the photon energy or higher. However, a larger $f_0/S_0$ value leads to a correspondingly higher contribution to the Maxwellian portion of the spectra from the background plasma. Furthermore, this larger value also corresponds to an increase in the photon energy up to which the Maxwellian impinges on the otherwise straight power-law: the profile departs from the straight portion at higher energy for a larger $f_0/S_0$.

The alteration of the electron energy spectral index $\delta$ is depicted in Fig. 3. As may be seen, this has minimal effect at low photon energy, but a larger $\delta$ results in a correspondingly steeper slope in the power-law region of the spectrum. A greater value for $\delta$ also causes a more rapid reduction in the total number of electrons as a function of increasing energy in the injected population. Thus, a higher $\delta$ leads to a relative reduction in the intensity of the power-law spectrum for a given photon energy. This reduced contribution from the power-law electrons also increases the photon energy up to which the Maxwellian element forms a significant part of the resultant profile. Consequently, the departure from the straight power-law profile occurs at a higher photon energy for larger $\delta$ values, visually mimicking a non-existent change in the power-law low energy cutoff.
Actual variation of the cutoff, $v_0$, is shown in Fig. 4. Since electrons injected below a few times $v_T$ thermalise rather than slowing down systematically, allowing the cutoff to extend to lower energies merely adds electrons to the 'background' Maxwell-Boltzmann distribution. This explains the counterintuitive result, clearly visible in Fig. 4, that a lower value of $v_0$ results in a spectrum which attains power-law form at higher photon energies: the large number of injected electrons at low energies thermalise and enhance the Maxwell-Boltzmann distribution, concealing the lower-energy portion of the power-law form.
Comparisons to RHESSI data
The recent launch of the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) has opened a new era in high resolution X-ray spectroscopy of solar flares (Lin et al. 2002). The analysis of solar flare spectra has revealed statistically significant deviations from a simple isothermal plus power-law model. The detailed analysis of the X-ray producing spectra using a model-independent inversion technique (Piana et al. 2003) shows deviations from the pure isothermal model. These new findings can be treated as a manifestation of velocity space diffusion in a warm target plasma.
For illustrative purposes, we consider a few example events with sufficiently high count rates to provide reliable photon statistics. We have limited our analysis to the energy range 10-50 keV, where the thermal and nonthermal components merge. Below 9 keV, the bremsstrahlung continuum is contaminated by a complex of strong iron lines. Above $\sim 50$ keV, spectral features not related to the model discussed become dominant. We fit the model spectra to the observed spectra by optimising over 4 parameters: the relative magnitudes of the background and injected electron populations, $f_0/S_0$; the injected electron power-law low-energy cutoff, $v_0$; the background temperature, $T_e$; and the injected electron power-law spectral index, $\delta$. The optimisation seeks to minimise an un-normalised $\chi^2$ fit statistic; in this case absolute values of $\chi^2$ must be treated with caution since the process of deconvolving the RHESSI instrument response from the observed counts spectrum to produce the photon spectrum introduces an element of error on each photon spectrum data point which is difficult to quantify precisely (Smith et al. 2002). However, we only compare relative values of the fit statistic to optimise the model fits, so un-normalised $\chi^2$ is sufficient for our purpose. As may be seen from Fig. 5, sets of optimal model parameters may be found which give model spectra that closely match the observed RHESSI spectra. The model parameters corresponding to the smallest fit statistic in each case are given in Table 1.
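The following Python sketch shows the shape of this 4-parameter fit. It is purely schematic: `model_spectrum` is a hypothetical stand-in for the full forward model (Eqs. 4, 12 and the observed-flux conversion evaluated numerically), and the data arrays, initial guess and optimiser choice are all assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.optimize import minimize

def chi2(params, eps_data, flux_data, flux_err, model_spectrum):
    """Un-normalised chi^2 between model and observed photon spectra.
    `model_spectrum(eps, f0_over_S0, v0, T_e, delta)` is assumed supplied."""
    f0_over_S0, v0, T_e, delta = params
    model = np.array([model_spectrum(e, f0_over_S0, v0, T_e, delta)
                      for e in eps_data])
    return np.sum(((model - flux_data) / flux_err) ** 2)

def fit_spectrum(eps_data, flux_data, flux_err, model_spectrum,
                 guess=(1e8, 1.0, 2e7, 4.0)):
    """Minimise chi^2 over (f0/S0, v0 [in v_T], T_e [K], delta).
    Nelder-Mead avoids derivatives of the numerically-integrated model;
    in practice the parameters should be rescaled to comparable magnitudes."""
    res = minimize(chi2, x0=np.asarray(guess),
                   args=(eps_data, flux_data, flux_err, model_spectrum),
                   method="Nelder-Mead")
    return res.x, res.fun
```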
As previously discussed, our steady-state treatment is valid on timescales longer than the collisional timescale but shorter than the timescale for changes in the temperature of the flaring loop. Thus, the fitted model parameters describe the flare plasma at any instant, but will change with time as the plasma evolves during the flare. To obtain a simple estimate of the energy content of the electrons in the example flares, we fit a model spectrum to an observed RHESSI spectrum from during the impulsive phase, and multiply the instantaneous energy content by the duration of the phase. To obtain the instantaneous energy, we insert the optimal fitted values of the relevant parameters into the source function, Eq. (11), and calculate the total energy content of the fast electrons by integrating the electron kinetic energy, weighted by the source function, over all possible electron velocities. The electron energy content of the thermal background plasma may likewise be obtained from the optimal fitted parameters. (For our present illustrative purposes, we assume a background plasma density of $10^{15}$ m$^{-3}$, typical of the lower corona, to obtain the absolute value of $f_0$ from the value of the emission measure as determined from the fits.) The total energy in all the electrons is the sum of $E_{\rm fast}$ and $E_{\rm therm}$.

(Table caption: Optimal fit parameters for fits to RHESSI data using a simple thermal plus power-law model. $E_{\rm fast}$ and $E_{\rm therm}$ are the energy contents of the fast and background electrons respectively, calculated using the optimal parameters given. $E_{\rm tot}$ is the total energy content of all the electrons.)

We find single values of the source parameters to represent the whole of the data time interval. The main assumption in doing this is that the background parameters ($f_0$, $T_e$) do not change. If this is the case then interpreting the time integral of the data gives us the same result as integrating a temporal sequence of data fits (as mentioned in Sect. 2; see also MacKinnon & Craig 1991). Although these parameters may change, this will be partly because of the relaxation of the fast electrons. Qualitatively, the procedure here may overestimate the injected electron distribution, because the temperature will increase, and thus more of the observed photon spectrum will be due to 'thermal' electrons, as time goes on. To address this issue quantitatively we would have to drop the linearisation of the Fokker-Planck equation, Eq. (2), resulting in a considerably more complex problem that we do not address here. Such a fuller treatment would also allow us to precisely determine the realm of validity of our linearisation.
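A hedged sketch of this energy bookkeeping follows. The velocity-space measure used for the fast-electron integral (a 3-D density, so the injection rate carries a factor $4\pi v^2$) is an assumption made here for illustration, as is the thermal expression $(3/2)\,n_e V k T_e$; the text describes the integrals only in words.

```python
import numpy as np
from scipy.integrate import quad

# Energy bookkeeping sketch (CGS units). Assumptions flagged below are ours,
# not taken from the text.
m_e = 9.109e-28    # electron mass [g]
k_B = 1.381e-16    # Boltzmann constant [erg/K]

def source(v, S_tilde, v0, delta_v):
    """Power-law source, Eq. (11): S_tilde * v**(-delta_v) for v >= v0, else 0."""
    return S_tilde * v**(-delta_v) if v >= v0 else 0.0

def fast_energy(S_tilde, v0, delta_v, duration, v_max=2.0e10):
    """Energy [erg] injected in fast electrons over `duration` [s]:
    duration * 4 pi * Int (1/2 m_e v^2) S(v) v^2 dv
    (assumes S is a density in 3-D velocity space)."""
    integrand = lambda v: 0.5 * m_e * v**2 * source(v, S_tilde, v0, delta_v) * v**2
    val, _ = quad(integrand, v0, v_max)
    return duration * 4.0 * np.pi * val

def thermal_energy(n_e, T_e, volume):
    """Thermal electron energy content [erg]: (3/2) n_e V k T_e (assumed form)."""
    return 1.5 * n_e * volume * k_B * T_e
```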
The time intervals we use include the bulk of the hard X-ray emission from the flares in question, but of course a more complete discussion of flare energetics would also integrate over the entire history of the flare.
Also given in Table 1 are the total energy contents of all electrons and of the injected and background electron populations individually for each event studied. The flares of 21st and 22nd August 2002 have comparable energies in the thermal and fast components, and a total energy consistent with an M class flare. Both also have low-energy cutoffs at only a few times the thermal speed, indicating that velocity diffusion will be relevant. The optimal fitted parameters for the flare of 17th March 2002 result in a much larger total energy which would correspond to a larger and more energetic flare. However, in this case the calculated energy content of the injected electrons is many times greater than that of the background thermal electrons. This arises because the fitted low-energy cutoff is very close to the thermal speed itself, and the power-law spectral index is very large, meaning that the injected population will have a huge number of electrons very near to the thermal speed. These will rapidly thermalise and give rise to the bulk of the Maxwellian portion of the photon spectrum as shown in Fig. 5, dominating the emission from the background plasma. While this set of model parameters corresponds to a minimum fit statistic, and produces a model spectrum which closely reproduces the observations, it also implies a situation where the injected electrons are no longer 'dilute' and our linearisation is no longer applicable.

Lin et al. (2003) employed Emslie's (2003) formulae for flare electron energy content in their analysis of the 23rd July 2002 flare. This analysis assumed that the injected electrons had a power-law low-energy cutoff at approximately the thermal speed ($T \approx 23$ MK) and, unlike our treatment, does not account for any thermalisation of these lower energy injected electrons. Lin et al. found the fast electron energy content for this X4.8 flare to be in excess of $10^{27}$ J. This is very much greater than the highest total energies ever deduced for the largest flares. We also predict very large energies in the injected electrons for cases where $v_0$ is very low. However, our estimates are not as extremely high as those made using Emslie's formulae, since our treatment includes the appropriate velocity diffusion effects for the lower energy injected electrons.
Due to the unambiguous nature of the straight portion of the spectral profile at high energies, the fitted value of $\delta$ is well-constrained. However, as is evident from Figs. 2 and 4, variations in the values of $f_0/S_0$ and $v_0$ lead to similar variations in the shape of the resulting spectral profile. This suggests the possibility of a degeneracy in the fitted values of these parameters, which is indeed the case. The quoted value of the fitted $v_0$ given in the table is that for the most optimal fit. However, it was found that the value of $v_0$ could be varied from around one half to two times the optimal value for only a 10% increase in the value of the fit statistic. Therefore we are reluctantly forced to conclude that the value of $v_0$ is less well-constrained by the data than Emslie's original argument might suggest. Similarly, $f_0/S_0$, and to a lesser extent $T_e$, cannot be unambiguously determined. The value for the total electron energy varies by around an order of magnitude when the value of $v_0$ is varied over our selected 10% range of fit statistic acceptability, with the total energy decreasing as $v_0$ is increased. Thus, while the model can reproduce observed photon spectra, its nature may preclude a precise determination of the flare electron energy content.
For comparative purposes, Table 2 gives fitting parameters as derived from fits using a 'simple' thermal plus power-law model, as may be employed in e.g. the OSPEX package in the standard RHESSI analysis software (e.g. Schwartz et al. 2002). As measured by our fit statistic, these fits are statistically acceptable at a similar confidence level to our model fits. As may be seen, the simple fits give consistently larger values of $f_0/S_0$ (and comparable or slightly greater temperatures) than our model fits, since all of the thermal part of the spectrum must be accounted for by the Maxwellian background with no contribution from thermalising fast electrons. This results in higher energy contents for the thermal background electrons, but the total energy contents are lower than for our fits since the thermalisation process is an energetically expensive way of producing 'thermal' photons. The simple fit values for the low-energy cutoffs are comparable with those of our model in that they are a few times the thermal speed, but again this parameter is not well-constrained. The largest difference is for the 17th March flare, for which our model suggested an extremely low $v_0$. The simple fit is more conservative, and consequently does not give the very large energy content in the injected electrons as derived from our model fits.
Conclusions
We have seen that for the case of accelerated flare electrons impinging on a warm target, the effects of velocity diffusion should not be neglected. The process by which the lower energy electrons of the injected population thermalise and merge with the ambient thermal background is of importance when describing the behaviour of the electrons in this regime. In particular, it has the effect of 'smearing out' this region of the resulting bremsstrahlung photon spectrum, which in many cases is therefore not well described by a simple isothermal and power-law model. However, this region can be modelled effectively by a treatment including velocity diffusion effects.
A consequence of including velocity diffusion in the analysis is that simple interpretation of observed photon spectra can be deceptive. For example, as has been shown in Figs. 2, 3 and 4, determining the photon energy down to which the spectrum remains power-law-like does not allow a simple evaluation of the parameters of the injected electron population, most particularly its low energy cutoff. This in turn hinders determination of the flare electron energy content.
The surprising behaviour of the photon spectra is highlighted by the fits to RHESSI data in Fig. 5: visual inspection of the spectra would seem to suggest that the 17th March flare should have the highest temperature background plasma, since the Maxwellian portion of the spectrum is more prominent and extends to higher photon energy in this flare than in the others considered. However, the fitted parameters imply that the 17th March flare actually has the lowest background plasma temperature. The form and extent of the Maxwellian component of the photon spectrum in this case is completely dominated by thermalising electrons from the lower energies of the injected population. This result emphasises that the thermalising process effectively couples the injected and the background populations, such that their contributions to the overall photon spectrum cannot easily be separated. In effect, the distinction between the background and injected populations becomes rather arbitrary at these low energies, and it is no longer meaningful to distinguish between a background thermal electron and an electron which has thermalised out of the injected population. Further, because of this strong coupling it is also not possible to 'swamp' the thermal region of the photon spectrum by contriving a large injected power-law population with a very low cutoff energy, as the lowest energy electrons will inevitably thermalise. However, this thermalisation process is an energetically expensive way to produce thermal emission.
The steady-state solution presented here has illuminated many of the consequences of velocity diffusion in the context of solar flares. However, an approach which explicitly accounts for time-dependence would be an interesting further development. This would allow modelling of the evolution of the plasma parameters over the duration of a flare, and may also be of benefit for more precise evaluations of the flare energy budget.
|
An aesthetic approach towards the temporary restoration of missing upper lateral incisors during orthodontic treatment
Maxillary lateral incisors make up approximately 20% of all congenitally missing adult teeth. It is the third most commonly missing tooth, the first being third molars and the second being mandibular premolars. There is little gender variation, with females slightly more affected than males. Interestingly, agenesis of both maxillary lateral incisors is more common than agenesis of only one. Missing lateral incisors are also a common finding in cleft palate patients. The prevalence among different ethnic groups is similar, affecting 1–2% of Northwest Europeans and 1.3% of Japanese children.
Maxillary lateral incisors make up approximately 20% of all congenitally missing adult teeth. 1 It is the third most commonly missing tooth, the first being third molars and the second being mandibular premolars. 2 There is little gender variation, with females slightly more affected than males. 3-6 Interestingly, agenesis of both maxillary lateral incisors is more common than agenesis of only one. 1 Missing lateral incisors are also a common finding in cleft palate patients. 7 The prevalence among different ethnic groups is similar, affecting 1-2% of Northwest Europeans 8-10 and 1.3% of Japanese children. 11

Congenitally missing maxillary lateral incisors present many challenges for the orthodontist when treatment planning and during active orthodontic treatment. Treatment often begins during the mixed dentition when agenesis is diagnosed at eight to nine years of age, most frequently by the family dentist. 1,12,13 The key reasons for seeking orthodontic treatment are centreline discrepancies, large residual spaces, arch asymmetry and incorrect tooth inclinations. 14 The two main treatment options for agenesis cases are orthodontic space closure and resultant maxillary canine substitution, or orthodontic space opening to allow for prosthetic replacement. Neither option is ideal, as both carry significant biological and financial costs. Therefore, treatment decisions should only be made after careful consideration of the patient's individual circumstances. 15,16

Orthodontic space opening is often more suited for the older patient who presents with generalised spacing or with more than 6.5 mm of mesiodistal width per lateral incisor. Other factors include a low lip line, a Class III skeletal relationship or otherwise adverse skeletal base growth. 15 Orthodontic space opening can also restore lip contour, re-establish a normal buccal occlusion, redistribute the available space, and retract any protruded maxillary incisors while creating parallel roots if implants are to be placed. 16

A common dilemma faced during active treatment is the need to maintain but also aesthetically mask the lateral incisor space. Often this is achieved with an acrylic denture tooth, bonded to an orthodontic bracket which allows it to be attached to a rectangular archwire. Ideally, the denture tooth should mirror the shade of the natural teeth and, following engagement to the wire, should sit comfortably within the edentulous space with snug mesial and distal contacts. The denture tooth should also be out of occlusion to avoid mobility or dislodgement during speech and function.
Clinically, an appropriate denture tooth is selected based on size, shape and shade. The base of the denture tooth is adjusted to conform to the maxillary lateral edentulous ridge and generate even tissue pressure when engaged. The usual clinical practice is to light-cure a bracket onto the denture tooth outside of the mouth, followed by the attachment of the bracket (and tooth) to the rectangular wire using elastic modules, or for self-ligating brackets by full engagement of the wire in the bracket. This procedure, while convenient, too frequently results in a poorly adapted tooth, as there is no accurate way of determining beforehand the optimal position of the bracket. The outcome is often an unaesthetic and loose denture tooth with gaps gingivally and interproximally, resulting in significant tooth mobility and failure.
This clinical hint demonstrates a predictable and aesthetic method of positioning a denture tooth into the edentulous space. The appropriate denture tooth is selected for size, shape and colour to match surrounding soft and hard tissues before its undersurface is trimmed with an acrylic bur to fit the contour of the soft tissue ridge. The labial surface should be lightly sanded to accept the bracket. The bracket should then be loaded with composite resin and placed on the denture tooth but not cured at this stage (Figure 1). This permits the tooth (with the uncured bracket) to be seated with light finger pressure into the most aesthetic and stable position on the alveolar ridge. The bracket (but not the tooth) carrying the wet composite can then be slowly slid across the tooth surface until it is fully engaged into the rectangular wire, at which time the resin may be light-cured to secure the denture tooth into its optimal position (Figure 2). The denture tooth is then recontoured on the palatal and incisal aspects to remove any functional protrusive and lateral interferences. It needs to be emphasised that, for bracket stability, this procedure will only work with rectangular archwires.

Fig. 1: The denture tooth carrying the unset bracket is carefully manoeuvred behind the loosely ligated archwire. The emphasis is on aesthetic placement of the denture tooth in the edentulous space, without worrying about the bracket position. (Figure labels: 'Wet Composite'; 'Direction of denture tooth placement'.)

Fig. 2: Using finger pressure to hold the denture tooth in position, the archwire is engaged into the bracket slot before curing. The bracket is then secured to the archwire using elastic or steel ligatures.
The above clinical sequence places priority on the denture tooth position over the bracket position. For added stability, a light ligature wire (0.012 inch) can be used to tie the bracket with the denture tooth to the adjacent teeth (Figure 3). In the accompanying example, an adult female patient presented with a congenitally missing upper left lateral incisor and a peg-shaped right lateral incisor. Treatment was provided to regain the upper left lateral incisor space by distalisation of the upper left quadrant. Following successful distalisation, the edentulous space was temporarily restored using a denture tooth (Figure 3). After treatment, the regained maxillary left lateral space was restored with a resin-bonded bridge.
The Beat Generation Meets the Hungry Generation: U.S.-Calcutta Networks and the 1960s "Revolt of the Personal"
This essay explores the relationship between the U.S.-based Beat literary movement and the Hungry Generation literary movement centered in and around Calcutta, India, in the early 1960s. It discusses a trip Allen Ginsberg and Peter Orlovsky took to India in 1962, where they met writers associated with the Hungry Generation. It further explains how Lawrence Ferlinghetti, owner of City Lights Books in San Francisco, was inspired to start a new literary magazine, City Lights Journal, by Ginsberg’s letters from India, which included work by Hungry Generation writers. The essay shows how City Lights Journal packaged the Hungry Generation writers as the Indian wing of the Beat movement, and focuses in particular on the work of Malay Roy Choudhury, the founder of the Hungry Generation who had been prosecuted for obscenity for his poem “Stark Electric Jesus”. The essay emphasizes in particular the close relationship between aesthetics and politics in Hungry Generation writing, and suggests that Ginsberg’s own mid-1960s turn to political activism via the imagination is reminiscent of strategies employed by Hungry Generation writers.
In March 1962, Allen Ginsberg and Peter Orlovsky arrived in India, where they would live for the next fourteen months. Students of Beat literature have long noted that during this time they met up with fellow poets Gary Snyder and Joanne Kyger, and that together they immersed themselves in the local religious and literary cultures. Partly because all four writers kept journals of the period that have since been published, readers have tended to characterize their own interactions as among the more significant of the trip. 1 While there is a wealth of material to be mined in this regard-not the least of which is their meeting with the Dalai Lama, during which Ginsberg brought up the consciousness-expanding potential of LSD-other important connections were forged that tell differing stories about the international reach of the Beat movement. Indeed, as Ginsberg and Orlovsky traveled on without Snyder and Kyger, they met a host of writers, artists, and holy men in Banaras, Calcutta and beyond; as Orlovsky wrote to their old friend Lucien Carr: "Main thing we do in Calcutta is meat [sic] Bengale poets by the dozen". 2

One such Bengali poet who caught Ginsberg's interest was Malay Roy Choudhury, also a playwright and essayist who had distinguished himself by founding a literary movement he called the Hungry Generation or the Hungryalists, a group of young writers and artists centered in and around Calcutta who were united in their pugnaciously antiestablishment attitudes and in their drive to reinvigorate what they took to be the tired, academic modes of traditional Bengali arts and letters. Late in 1962, in one of his long, detailed letters to Lawrence Ferlinghetti, owner of City Lights Books and publisher of Howl and Other Poems (1956), Ginsberg turned particularly rhapsodic about his latest exploits, and folded in a copy of "The Hungryalist Manifesto on Poetry," a broadside by Choudhury filled with audacious pronouncements about poetry and culture. Ferlinghetti was impressed enough by Ginsberg's letter and the suggestive energies of the Hungryalist Manifesto that he was inspired to start a new little magazine intended to showcase the contemporary international avant-garde, including the Hungryalists. As he replied to Ginsberg: "I have just been prodded by your India descriptions to start another Journal and publish your description in it, along with anything else you send, and also publish that beautiful Weekly Manifesto of Hungry Generation of India which you enclosed in letter." 3 This exchange was the germ of what would become City Lights Journal (first run: 1963-1966), the magazine that introduced Choudhury and the Hungry Generation to Western readers, but did so by suggesting their contributions to international letters were broadly comparable to what the Beats had achieved in the States. 4
The Hungry Generation's association with the Beats was a boon insofar as Ginsberg was already an internationally recognized writer, and ever the astute marketer, he was able to package the Hungryalists as something like the Indian wing of a global Beat phenomenon. In those early days, Choudhury also tended to play up his connection to Ginsberg and the Beats, at least when describing the Hungry Generation to non-Indian readers. In 1963, for example, Choudhury wrote a dispatch from India for El Corno Emplumado, Margaret Randall and Sergio Mondragon's bilingual arts journal published out of Mexico City, and described the situation like this: "We have started a literary rebellion here calling ourselves HUNGRYALISTS, mainly fighting for a change, along with some crazy conceptions. Allen Ginsberg, who came to India and stayed with us for about a year or more (he was in my house for a few days and wrote some beautiful poems in this very room where I am now sitting and writing this letter to you), introduced us to his fellow Beats by reprinting and publishing our Manifestoes and poems etc. in U.S. journals." 5 Choudhury has in mind Ferlinghetti's City Lights Journal (No. 1 [1963]; No. 2 [1964]; No. 3 [1966]), which, thanks to Ginsberg's efforts, was the first to print English translations of Hungry Generation manifestos and poetry, a move that at once announced them to the West and cemented Beatdom's international bona fides.

Taking the link between Ginsberg and Choudhury as a starting point, this essay explores the Beat-Hungryalist connections via City Lights Journal, which introduced the Hungry Generation to Anglophone audiences by framing it as an extension of the Beat movement. This framing suggested the internationalist cast of contemporary Beat writing while simultaneously conferring hip or underground legitimacy on the Hungryalists through their supposed association with the Beats. Given this scope, I will not wade very deeply into the intricacies of Bengali poetry or the factional rivalries on the local Calcutta literary scene-which would of course be required to more fully understand the poetic and aesthetic interventions of the Hungryalists-but will instead investigate the version of the Hungry Generation presented in English in City Lights Journal and other venues such as the "Hungry!" issue of Salted Feathers (1967), edited by Dick Bakken (Bakken 1967), and the "Poetry of India" issue of Intrepid (1968), guest edited by Carl Weissner (Weissner 1968a).
There is a paucity of critical work on the Hungryalists available in English, but that which does exist tends to cast their importance in terms of rebellion and iconoclasm. 6 In his introduction to the 1968 issue of Intrepid he guest edited, for example, German writer and Beat associate Carl Weissner announced that the Hungryalists "have established themselves as the largest & most remarkable avantgarde element in the country." 7 More recently, Aditya Misra has called the Hungry Generation "the first avant-garde uprising against modern Bengali poetry which believed in giving the decaying Indian civilization a mortal blow," and Bhaswati Bhattacharya underscores that the movement's "goal" was "to examine the extent to which it could subvert the existing literary and social norms." 8 Reflecting on her interviews with Samir Roy Choudhury, Malay Roy Choudhury's elder brother and original Hungryalist, Maitreyee B. Chowdhury argues that the movement

gave a new vocabulary to Bengali literature, taught new reading habits and made the stench of the road, among other such 'un-poetic' things, poetic . . . the movement became an expression for those frustrated with the culture and ethics of those times . . . the Hungryalists perhaps spoke for an entire city affected by post-Partition poverty politics. New conversations and a new language became the need of the day-a language that would cast aside elitist aspirations and speak of angst, instead. 9

This critical language of subversion and newness, of anti-civilization stances and "post-Partition poverty politics," suggests the degree to which the Hungry Generation was a literary, social, and political movement rolled into one, and as such was characterized by uncertain distinctions between aesthetic interventions and political statements. As Weissner put it in 1968, "the HG poets, most of them anyway, are as much political agitators as they are poetic discoverers." 10 In later years, in fact, Malay Roy Choudhury pinpointed the genesis of the movement not to the writing or publication of a poem, but to the manifesto about poetry Ferlinghetti would eventually print in City Lights Journal: "The Hungry Generation literary movement was launched by me in November 1961 with the publication of a manifesto on poetry in English." 11 According to Choudhury, then, the Hungryalists were "launched" into public visibility via their manifestos, which were printed on broadsides and distributed throughout Calcutta and Patna. Indeed, although Choudhury is widely credited as the founder of the Hungryalists, it was a pointedly social and communal enterprise, and he accordingly insisted that it is recognizable as a "generation" or movement because it sprang not from him alone, but from a coterie of four poets: the Choudhury brothers, Debi Rai, and Shakti Chattopadhyay. There were many other writers and artists who came to be associated with the Hungry Generation, and while I do touch on some in the course of this essay, I'll concentrate primarily on Choudhury and others who were most visible in the States.

6 To my knowledge, no comprehensive, English-language study of the Hungry Generation exists. In fact, there is a surprising lack of work on the Beats and India in general; even the recent, otherwise thorough Routledge Handbook of International Beat Literature, ed. Lee (2018), which has chapters on Beat-associated literature from Australia to China, lacks a chapter on India. Baker (2008), A Blue Hand: The Beats in India (New York: Penguin) is the clear exception insofar as it is a lively account of the American Beats in India, but some critics have questioned the book's accuracy with regard to its brief discussions of the Hungryalists (see pp. 154-59, 177-78, 194-99, 216-20); Chowdhury (2016), for example, calls it "a somewhat fictionalized account of Allen's stay and influences from India" ("Talking Poetry"). See also Café Dissensus 26 (special issue on "The Beat and the Hungry Generation: When Losing Became Hip"), ed. Brahmachari and Kumar (2016), https://cafedissensus.com/2016/06/16/contents-the-beat-and-the-hungry-generation-when-losing-became-hipissue-26/. In addition to this special issue (only some of which actually deals with Beat-Hungryalist connections), fans of the Hungry Generation do actively post brief essays, interviews, and other material about the movement online, and while I refer to some of these sites in this essay, note that they can be laced with misinformation and are often colored by the idiosyncratic memories or biases of the compilers. Although it is somewhat difficult to locate, the best English language collection of Hungry Generation material-including poetry, manifestos, letters, and legal documents pertaining to the 1964 arrests and subsequent trials-remains the "Hungry!" issue of Salted Feathers 4.1-2 (March 1967). Readers of Bengali may also consult Ghosh (1995), Hungry Generation Andoloan [The Hungry Generation Movement] (Kolkata: Pratibhas).
7 Choudhury (1968b), "introduction," Intrepid 10 ("Poetry of India" issue) (Weissner 1968b), n.p.
8 Brahmachari and Kumar (2016), "'I can but why should I Go?': The Romance of Opposition in Shakti Chattopadhaya's Poetry," South Asian Review 37.2, p. 181; and Bhattacharya (2017), Much Ado Over Coffee: Indian Coffee House Then and Now (New Delhi: Social Science Press), p. 196.
9 Chowdhury (2016), "Talking Poetry, Ginsberg and the Hungryalists: Samir Roychoudhury, a Retrospective," Café Dissensus 26 (special issue on "The Beat and the Hungry Generation: When Losing Became Hip") (16 June 2016). https://cafedissensus.com/2016/06/16/talking-poetry-ginsberg-and-the-hungryalists-samir-roychoudhury-a-retrospective/.
10 Choudhury (1968b), "introduction," n.p.
11 Choudhury (2009), "Impact of the Hungry Generation (Hungryalist) Literary Movement on Allen Ginsberg," www.sciy.org/?=6127.
In connection with their poetry and manifestos, the Hungryalists became notorious in Calcutta in the early 1960s for their public acts of protest. As one later observer explains, for example, "the poets started a campaign to personally deliver paper masks of jokers, monsters, gods, cartoon characters and animals to Bengali politicians, bureaucrats, newspaper editors and other powerful people. The slogan was, 'Please remove your mask.'" 12 Such antics became increasingly irritating to municipal authorities, and tensions came to a head on 2 September 1964, when eleven writers who had appeared in a Bengali-language book titled Hungry Generation were arrested and charged with "criminal conspiracy to bring out the aforesaid obscene publication," which, the complaint read, would "corrupt the minds of the common reader." 13 Malay Roy Choudhury, Samir Roy Choudhury, and Debi Rai were among the eleven arrested; Shakti Chattopadhyay, the fourth original Hungryalist, had also published in Hungry Generation, but had managed to avoid arrest by agreeing to testify against Malay Roy Choudhury at his obscenity trial. Chattopadhyay claimed that despite his appearance in Hungry Generation, "I had no relationship with so called Hungry Generation and this book was not published by me." He went on to allege that Choudhury's writing in particular represented "mental pervertion [sic] and [the] language is vulgar," and then "strongly condemned" Choudhury's contribution to Hungry Generation, a febrile, sexually-explicit poem called "Prachanda Baidyutik Chhutar" or "Stark Electric Jesus." 14 Eventually, the charges against the ten other writers were dropped, but Choudhury, as reputed founder of the Hungryalists, was forced to stand trial for obscenity.

The ensuing trial is broadly analogous to an earlier moment in Beat history, when in 1957 Ferlinghetti and bookseller Shig Murao were charged with obscenity for distributing Howl and Other Poems. 15 Although "Howl" was finally determined to have literary merit, "Stark Electric Jesus" was found obscene and Choudhury was fined roughly two months' salary, fired from his civil service job, and the poem was banned and extant copies ordered destroyed. 16 The immediate, material aftermath of the decision was thus dire for Choudhury, but as the "Howl" trial did for Ginsberg, the public battle over "Stark Electric Jesus" propelled him into a new realm of renown because he came to epitomize the right to free expression in the face of government censorship. In rendering his verdict, A.K. Mitra, Presidency Magistrate of the 9th Court of Calcutta, concluded that "Stark Electric Jesus" was "per se obscene" as "it starts with restless impatience of sensuous man for a woman obsessed with uncontrollable urge for sexual intercourse followed by a description of vagina, uterus, clitoris, seminal fluid, and other parts of the female body and organ, boasting of the man's innate impulse and conscious skill as how to enjoy a woman, blaspheming God and profaning parents accusing them of homosexuality and masturbation, debasing all that is noble and beautiful in human love and relationship." 17 Choudhury's trial became a minor cause célèbre in avant-garde circles in India, the States, and beyond, and thus stands as perhaps the landmark moment in the history of the Hungry Generation, even as Chattopadhyay's testimony against Choudhury symbolized the dissolution of the original coterie. 18

After Choudhury's arrest, he and the Hungryalists became the latest example of writers and publishers around the world who had been subject to legal action by backward-looking authorities, and Choudhury received letters of support from a wide spectrum of fellow writers, from Daisy Aldan (editor of the poetry magazines Folder and New Folder) and poet Carol Bergé in New York to Margaret Randall and Octavio Paz in Mexico City; Paz had in fact met some of the Hungryalists while visiting Calcutta and been suitably impressed. 19 However, while Choudhury's obscenity trial served as a rallying-point for sympathetic writers, even prior to this event the Hungry Generation was being characterized as part of an international avant-garde, thanks in no small part to Ginsberg and Ferlinghetti's efforts.

12 Basu (2011), "A Sour Time of Putrefaction," Business Standard (December 10), https://www.business-standard.com/article/beyond-business/-a-sour-time-of-putrefaction-111121000058_1.html
13 The Complaint against the 'Hungry Generation' (1967), reprinted in Salted Feathers 4.1-2 ("Hungry!" issue) (March 1967), n.p.
14 Malay Roy Choudhury's Prosecution (1967), reprinted in Salted Feathers 4.1-2 ("Hungry!" issue) (March 1967), n.p.
15 See Howl on Trial: The Battle for Free Expression, ed. [...]
16 McCord (1966), "Note on the Hungry Generation," City Lights Journal 3 (1966), p. 159.
17 The Verdict Against Malay Roy Choudhury (1967), reprinted in Salted Feathers 4.1-2 ("Hungry!" issue) (March 1967), n.p.
In the first issue of City Lights Journal, Ferlinghetti's headnote described it as "a new international annual," and he announced in the second that its content "circles the world." 20 However much the journal favored eclecticism under the banner of the avant-garde, it was also not above framing these international writers in terms of Ferlinghetti's favored literary provocateurs, the Beats. The inaugural issue, for example, began not with Indian writing, but with Ginsberg's and Snyder's writing about traveling through India. The cover even featured a photograph of Ginsberg somewhere in the "Central Himalayas," wrapped in a blanket and staring frankly at the camera, his hair whipped up in the wind. Readers were presented, in other words, with the celebrated Beat poet in his newly adopted environment, the implication being that while he may have been broadening his worldview through travel, his mere presence also served to confer legitimacy on the region and its writers.
As he had promised to Ginsberg, Ferlinghetti printed "fragments of letters from . . . Allen Ginsberg In India" in the first issue of his new journal. The excerpts he selected emphasize the correspondences among what Ginsberg was witnessing in India and his sense of the American underground: "the common saddhu scene here is, feels like, just about the same as beat scene in US-amazing to see the underlying universality of people's scenes." 21 Elsewhere Ginsberg describes a moment when "drunken saddhus came up-just like mill valley [California] scene" (p. 8), and claims that "all the hip rituals in US involving pot have been developed and institutionalized here" (p. 8). Thus as he draws attention to what he takes to be the more exotic aspects of his Indian experience, Ginsberg also insists on the "universality" of subterraneans the world over. He even calls saddhus "nothing but a bunch of gentle homeless on-the-road teaheads" (p. 8), drawing a direct line from Hindu holy men to the most famous novel of the Beat movement. (Following Ginsberg's letters is Gary Snyder's "A Journey to Rishikesh & Hardwara," a more conventional piece of travel writing that describes yoga and meditation in various ashrams Snyder visited with Ginsberg, Orlovsky, and Kyger. 22 )

It is only after these depictions of India through the eyes of sympathetic Westerners that Ferlinghetti presents an example of Indian writing, the manifesto that had inspired him to create a new magazine in the first place. Previously published in Calcutta as a broadside signed by some 25 poets and "written and translated from Bengali" by Malay Roy Choudhury, the version in City Lights Journal accentuates the idea of a literary "Generation" by presenting a kind of poetic board of directors:

Editor: Debi Rai
Leader: Shakti Chattopadhyay
Creator: Malay Roy Choudhury
Howrah, India 23

Framed as it is by the impressions of India from Ginsberg and Snyder, including Ginsberg's mention of hanging out with Shakti Chattopadhyay "of enclosed manifest" (p. 7), readers of City Lights Journal could be forgiven for interpreting this manifesto as an Indian counterpart to well-known Beat manifestos such as Jack Kerouac's "Belief & Technique in Modern Prose" or "Essentials of Spontaneous Prose," both of which originally circulated in the little magazines Black Mountain Review and Evergreen Review, respectively. The Hungry Generation manifesto announces that traditional Bengali poetry is "cryptic, short-hand,
. . . flattered by own sensitivity like a public school prodigy," and that, by contrast, the Hungryalists have "discarded the blankety-blank school of modern poetry, the darling of the press, where poetry does not resurrect itself in an orgasmic flow, but words come bubbling in an artificial muddle" (p. 24). According to such a view, something called "the Hungry Generation" is best clarified in terms of opposition: there is a "blankety-blank school of modern poetry," underwritten by the media-and, presumably, the academy-that is stuck merely regurgitating tradition. In the context of City Lights Journal, the details of this tradition are less important than the claim that it is outmoded and "artificial": what matters is that the Hungryalists stand opposed to whatever is the dominant strand in Bengali letters. Indeed, in other versions of the manifesto, Choudhury insists that Hungry Generation writing seeks to "convey the brutal sound of breaking values and startling tremors of the rebellious soul of the artist himself, with words stripped of their usual meanings and used contrapuntally. It must invent a new language, which would incorporate everything at once, speak to all senses in one." 24 The shattering of values, the embracing of rebellion, the restless search for "new language"; these are the features of the Hungry Generation that would likewise be recognizable to readers of Beat literature, thus providing a familiar template for seeing Calcutta as the latest outpost in a worldwide literary movement.
Like a teaser trailer for coming attractions, the manifesto announces a new generation of Indian poets but is not accompanied by the work itself, and readers of City Lights Journal would have to wait until the next issue (1964) to encounter actual poetry by these writers, in a special section called "A Few Bengali Poets." As in the first issue, these poets are framed by and filtered through Ginsberg's perspective insofar as he contributed a prefatory statement explaining why the Journal's readers should care about these Bengali poets. First, he establishes a familiar binary between staid traditional poetry and the freshness and immediacy of the selected poets. Reminiscent of the ways Donald Allen's influential anthology The New American Poetry (1960) had positioned the Beats and other writers of the "new poetry" as sharing "a total rejection of all those qualities typical of academic verse," Ginsberg sets the Hungryalists against their national and local traditions. 25 As T.S. Eliot came to embody for many Beats the "closed form" of American academic poetry, Ginsberg figures Rabindranath Tagore, Calcutta's Nobel Prize-winning poet and literary giant, as the elder icon to be smashed: "As a modern literary kelson he seems to be a big bore; that is to say early XX century academic preoccupations in the poetic field are so dominated by Tagore festivals, speeches, recitations, criticisms that his work has become institutional and apparently of little use . . . to the young." 26 Against this "academic" Tagore monolith, Ginsberg identifies another strand of Bengali poetry associated with "'the modern spirit'-bitterness, self-doubt, sex, street diction, personal confession, frankness, Calcutta beggars etc." (p. 117). Substitute "American hobos" for "Calcutta beggars" and this list would be a fair approximation of much Beat poetry, at least in popular conception, and in Ginsberg's telling, the first thing one need know about the Hungry Generation is that its writers attack tradition, that they are iconoclasts invested in remaking language just as he and his circle had done in the States. In fact, Ginsberg insists that the Bengali "poems are interesting in that they do reveal a temper that is international, i.e., the revolt of the personal. Warsaw Moscow San Francisco Calcutta, the discovery of feeling" (p. 118). Inasmuch as Ginsberg is gesturing toward the idea of an international avant-garde by facilitating the publication of Bengali poets in City Lights Journal, his particular framing also has a curiously leveling effect, such that the Hungryalists are elevated mainly via their association with the Beats, rendered significant as Indian brothers-in-arms in the "revolt of the personal."

Ginsberg's notion of the "personal" does not merely signal intimate, confessional energies, but also underscores the importance of community, and the other thing he wants American readers to know is that the Bengali poets are "excellent drinking companions" (p. 117). Such a statement is not as flip as it may seem at first blush because what Ginsberg is really doing is implying that the Hungryalists can be viewed as part of an ever-expanding network of poets who comprise a global literary underground. In this regard, Jimmy Fazzino's recent work on the "worlding" of Beat literature can help us understand Ginsberg's thinking here. Fazzino borrows the concept of "networks" to describe the "expression[s] of felt solidarity and mutual understanding" that the American Beats shared with others outside national bounds, and in fact uses Ginsberg's attraction to the Hungry Generation as his book's opening anecdote. 27 Although Fazzino does not pursue the relationship among the Beats and the Hungryalists beyond noting that both "would be censored . . . for their literary licentiousness and antinomian views," he does claim that the relationship suggests that India was not for Ginsberg "timeless or unchanging or utterly exotic . . . but vital and dynamic" (p. 1). Fazzino's work has been a useful corrective to the perception that Ginsberg and other Beats were facilely orientalist in their thinking, and demonstrates how the Beats could and did see international writers as progenitors of a literary avant-garde and fomenters of social and political dissent in the context of their own local and national cultures.

23 "Weekly Manifesto of the Hungry Generation," City Lights Journal 1 (Choudhury 1963), 24. Note that there are differing methods of transliterating Bengali names into English, and as such writers' names are spelled variously across venues. I have changed the spelling of names in order to maintain consistency in the context of this essay (for instance, Shakti Chattopadhyay appeared in City Lights Journal as Shakti Chatterjee, and Malay Roy Choudhury as Malay Roy Choudhuri).
24 Choudhury (1961), "The Hungryalist Manifesto on Poetry" (Calcutta: H. Dhara, no date), n.p. See also Choudhury (1968a), "The Aims of the Hungry Generation Poets," Intrepid 10 ("Poetry of India" issue) (Choudhury 1968b), which counts among these aims "abolishing the accepted modes of prose and poetry" (n.p.).
25 Allen (1960), "Preface," in The New American Poetry, ed. Allen (New York: Grove, 1960), p. xi.
For Ginsberg, emphasizing the social spaces he shared with the Hungryalists served both to advertise his own fluency with the local literary scene and to render this scene legible in terms of a diffuse, international Beat sensibility.He notes, for instance, that the Hungry Generation is a "big gang of friend poets [who meet] in an upstairs coffee-house across the tramcar-bookstall street from Calcutta University" (p.118).These "friend poets" include the likes of Sunil Ganguly and Shakti Chattopadhyay; Malay Roy Choudhury, Ginsberg explains, "isn't there with his friends, he lives in Patna way up the Ganges" and "sits upstairs in his room and writes manifestos for the 'Hungry Generation'" (p.119).As he did with his own circle of friends, Ginsberg construes a whole Generation from the social bonds of a small group, taking care to write himself and Orlovsky into this group, as when he describes a drinking session with the Hungryalists, during which they apparently begged Orlovsky to read and reread his irreverant poem "Morris." 28Ginsberg in fact insists on his own role in bringing Hungry Generation poetry to Anglophone audiences: "The poems were translated into funny english by the poets themselves & I spent a day with a pencil reversing inversions of syntax & adding 26 Ginsberg (1964), "A Few Bengali Poets," City Lights Journal 2 (1964), p. 117.See also Choudhury (1968c), "Tagore," and Choudhury (1968d) in railroad stations" (p.118).Ginsberg, lodestar of the Beats, presents a Hungry Generation mediated by his own guiding hand: not only is he a drinking companion, but their editor, agent, and publisher, and so figures himself as the embodied link between otherwise far-flung literary movements.
The poems that Ginsberg rendered into less "funny english" do seem to bear traces of the Beat sensibility; the particular work collected in City Lights Journal 2 is Sunil Gangopadhyay's "Age Twenty Eight" and "Interruption"; Sarat Kumar Mukherjee's "Toward Darkness" and "The Lion in a Zoo"; Sankar Chattopadhaya's "Civilization Through Angry Eyes" and "Hateful Intimacy"; and Malay Roy Choudhury's "Drunk Poem" and "Short-Story Manifesto" (in his preface, Ginsberg explains that Shakti Chattopadhyay, "perhaps the finest poet" of the Hungryalists, was nevertheless "not represented here because his poems are such elegant Bengali they're too hard to translate"). 29From its title alone, Gangopadhyay's "Age Twenty Eight" may remind Beat aficionados of Gregory Corso's "I am 25", a poem that announces his "love a madness" for the famously youthful poets Shelley, Chatterton, and Rimbaud, declaring: "I HATE OLD POETMEN!" 30 Like Corso, Gangopadhyay asserts his love of language, but is by age 28 haunted by "dead friends" and surrounded by "married women," suggesting that his coevals have passed into adulthood while he clings to the youthful idealism of the written word, piercing "a hornet's nest with my pen." 31 Insofar as "Age Twenty Eight" rails against those friends who have chosen convention, retreating to their "new-bought bed sheet" and hiding "their faces in / domestic dryness" (p.121), the poem amounts to a critique of bourgeois domesticity that would seem familiar to those white, middle-class Americans worried about the creeping conformity of the long 1950s.In fact, in an essay about the Hungry Generation by Debi Rai and others, the authors leveled similar observations that could have been torn from the pages of classic studies of American conformist culture like William Whyte's The Organization Man (1956) or C. Wright Mills' The Power Elite (1956): "Cultural patterns are crowded by a system of mass production and mass communication in which we all become like one another, speaking the same slang, wearing the same clothes, reading the same magazines.But instead of creating a sense of community, this only creates a crowd of faceless and anonymous men" (p.169).Thus the "hidden . . .faces" of Gangopadhyay's domesticated men resonate with the "faceless and anonymous men" against which the Hungry Generation statement positioned itself, underscoring that the oppositional structure described above was instrumental to both the poetry and prose manifestoes of the movement-or at least to the work chosen for inclusion in City Lights Journal. 32he work by Gangopadhyay and others notwithstanding, it is clear that City Lights Journal was positioning Choudhury as the preeminent Hungry Generation writer even prior to the obscenity trial that would later solidify his countercultural legitimacy.Choudhury's "Drunk Poem" may be read as a Hungryalist expression of what Steven Watson sees as a defining feature of Beat literature: "The artist's consciousness is expanded through nonrational means: derangement of the senses, via drugs, dreams, hallucinatory states, and visions." 33A note appended to "Drunk Poem" informs readers that it was "scribbled" after "taking a peg of 'mamushi,' . . .an interesting wine made with the help of snake venom" (p.128), and the poem begins by hailing the reader then immediately deploying unusual 29 Ginsberg (1964), "A Few Bengali Poets," p. 
118.Note that from Malay Roy Choudhury's perspective, Sunil Gangopadhyay ought not to be associated with the Hungryalists, but rather with the circle around the journal Krittibas, those whom Choudhury calls a "pro Establishment commercial renegade coterie whose machinations had led to the arrest and trial of the Hungryalists."At the time he was gathering material for inclusion in City Lights Journal 1, such sectarian differences were opaque to Ginsberg, and Choudhury later recalled that "One thing which annoyed me at the time was that Ginsberg was unable to differentiate between the members of avant garde Hungryalist movement and the . . .commercially inclined pro-Establishment Krittibas group" (Malay Roy Choudhury (2009), "Impact of the Hungry Generation (Hungryalist) Literary Movement on Allen Ginsberg," www.sciy.org/?=6127). 30Corso (1992), Gasoline (San Francisco: City Lights; Corso 1992), p. 42. 31 Barry (1964), "Age Twenty Eight," City Lights Journal 2 (1964), p. 120. 32See also Sankar Chattopadhaya's "Civilization Through Angry Eyes," which couches an anti-civilization stance in the concept of hunger: "In all the ravaged scenes, civilization gives birth to art, love / and hunger in a continuous process" (Chattopadhaya 1964, p. 124). 33Watson (1995), The Birth of the Beat Generation (New York: Pantheon), p. 40.
word combinations that betoken a confused sensorium, perhaps akin to what happens in a state of drunkenness: "Ahoy!/Gymtwist spangles of shockboom music." 34Choudhury approximates sense derangement with the kinetic energy of "gymtwist" paired with a visual noun ("spangles") that is then connected to the auditory ("shockboom music").These opening lines introduce readers to the poem's basic method, to juxtapose that which is generally seen as dissimilar; as Choudhury asserted in another manifesto, "The Aims of the Hungry Generation Poets," he wanted to "break the traditional association of words and to coin unconventional and heretofore unaccepted combinations of words." 35hile "Drunk Poem" begins with the individuated body, it quickly links the body and its senses to larger political concerns: Supersonic bombers of totalitarian peace hiss inside the adult stew and the adults Sell their hipholes to social sadders for rhymeless chunks of Rupee $ £ (p.126) The oxymoronic phrase "bombers of totalitarian peace" figures the West as a neoimperial power residing inside "adults," rather than vice versa.In other words, rather than depicting adulthood as mere resignation to bourgeois normality, as in "Age Twenty Eight," Choudhury sees existence in the "adult stew" as being infected by Cold War imperatives that cannot be escaped, signaled in this case by "supersonic bombers" and elsewhere in the poem by "the deathskirts of U235" (p.127), the uranium isotope used in the atomic bomb dropped on Hiroshima.And as attested by the rapid-fire catalogue of "Rupee $ £," the autonomy of nation states is compromised by the reach of capitalism, so the differentiation of national currencies becomes irrelevant in a world in which even hipness is for sale.In a purposively disorienting poem, such lines suggest that the true object of attack is figurations of "civilization," which Choudhury had declared treacherous in the opening sentence of "The Hungryalist Manifesto on Poetry": "Poetry is no more a civilizing manoeuvre, a replanting of the bamboozled gardens; it is a holocaust, a violent and somnambulistic jazzing of the hymning five, a sowing of the tempestual hunger" (p.24)."Drunk Poem" does not merely exemplify the speaker's deranged senses, then, but does so to derange civilization itself by attacking markers of cultural, religious, and political authority, from "naked Shiva" to "bureaucracy" to the "Pax Romana upon the windmill" (pp.126-27).If the poem represents an anti-civilizing maneuver, then these and other examples are stripped of their context and authority such that the only course of action is to drunkenly and violently tear down anything smelling of establishment and tradition; the poem concludes: Pardon the sinner but MURDER the criminal.(p.128) "Drunk Poem" is a good representation of the outré, antiestablishment, anti-civilization pose that the Hungryalists cultivated, a pose that was codified in the States when they were presented with perhaps the greatest gift a countercultural movement could be given: withering coverage in the reliably conservative, middle-of-the-road Time magazine.In November 1964, two months after remarked to Ferlinghetti that he could not title his new manuscript "Tasty Scribbles" because he "used the word scribbles too oft in the book already," it seems likely that he wrote the note appended to "Drunk Poem" (Ginsberg to Ferlinghetti [27 August 1962], I Greet, p. 
157).Note that a very different "Drunk Poem" by Choudhury (1968c) appears in Intrepid 10 ("Poetry of India" special issue) (Choudhury 1968c), n.p. 35 Choudhury (1968e), "The Aims of the Hungry Generation Poets," reprinted in Intrepid 10 ("Poetry of India" issue) (Choudhury 1968e), n.p.
the obscenity charges were brought, Time claimed the Hungryalists as an upstart movement overly imitative of the Beats: "Born in 1962, with an inspirational assist from visiting U.S. Beatnik Allen Ginsberg, Calcutta's Hungry Generation is a growing band of young Bengalis with tigers in their tanks.Somewhat unoriginally they insist that only in immediate physical pleasure do they find any meaning in life, and they blame modern society for their emptiness."36Unsurprisingly, Time collapses Ginsberg's instrumentality in introducing the Hungryalists to Western readers with his inspiring their very existence, but the mere act of reporting on the "growing band" as a movement heightened their visibility in the States.The association with the Beats via Ginsberg was one that stuck, not merely because of Time, but also because of statements by Ginsberg, Choudhury, and the Indian press.
In 1965, American scholar and poet Howard McCord, then at Washington State University, became interested in the Hungryalists and traveled to India to meet with Choudhury and others.That same year, he published an English edition of "Stark Electric Jesus" with the aim of raising money for Choudhury's legal expenses, and his Afterword argues that however supportive Ginsberg was to the Hungryalists, it would be inaccurate to say he inspired them: "The Indian press believes to this day that the group's origins can be traced to the 1962 Indian visit of Allen Ginsberg, Peter Orlovsky, and Gary and Jeanne [sic] Snyder.But however stimulating the visit of these American poets . . .I believe the movement is autochthonous and stems from the profound dislocation of Indian life." 37The very fact that McCord felt obliged to claim the Hungryalists as autochthonous suggests how they had already become entangled with the Beats by 1965.
The third issue of City Lights Journal (1966) again enacted such entanglements as its section on Indian poetry contained a single poem, "Stark Electric Jesus," flanked by an expanded version of McCord's Afterword titled "Note on the Hungry Generation," and another seven-page statement by Debi Rai and others simply called "Hungry Generation."By City Lights Journal 3, then, the "Few Bengali Poets" of issue two had been congealed into a "Generation," so despite the protestations to the contrary, it was clear that these writers were being advertised in ways broadly comparable to the Beat Generation.Nevertheless, in his prefatory note, McCord again insists that the Hungryalists did not materialize as a group because of Ginsberg's influence, but he does accede that "[t]here was little notice of the group in the West until 1963, when City Lights Journal No. 1 carried news of them."38He goes on to characterize the Hungry Generation's importance in terms of their on-going refusals to be bullied by the authorities: "In spite of prosecution and harassment, Malay Roy [Choudhury] has published two more long poems, 'Jakham,' (The Wound), and 'Aamar Amimangshita Shubha,' and other members of the Hungry Generation have continued to irritate the authorities with their work" (p.160).Thus, while insisting on the Hungry Generation's distinction from the Beats, McCord relies on the Beats' most visible feature, their rebuke of authority, to argue for their importance.
McCord's insistence that the Hungryalists are best seen in light of their antiestablishment posture is echoed in the other essay accompanying Choudhury's poem.Like McCord, Rai and his co-authors contrast their efforts to what they call "The Establishment" while taking care to distinguish themselves from the Beats.The latter can be somewhat tricky as the authors rely on explicitly hip language to make their case, as when they open by declaring "Modern Bengali writing" is "a lump of academic bullshit" or when they claim a younger generation is "digging" Choudhury. 39Fans of "Howl" might hear shades of the memorable phrase "boatload of sensitive bullshit" or even of Moloch, Ginsberg's catch-all embodiment of normative culture, in the Hungryalists' attack on the "manicured robot hand of the Establishment" (p.164). 40Even so, the authors go out of their way to distance themselves from the Beats: Hunger describes a state of existence from which all unessentials have been stripped, leaving it receptive to everything around it.Hunger is a state of waiting with pain.To be hungry is to be at the bottom of your personality, looking up to be existential in the Kierkegaard, rather than the Jean-Paul Sartre sense.The Hungries can't afford the luxury of being Beats, ours isn't an affluent society.The single similarity that a Beat has with a Hungry is in their revolt of the personal. . . .The nonconformity of the Hungries is irrevocable.(p.166) While this sounds a lot like Ginsberg's later characterization of Beatness as "at the bottom of the world, looking up . . .rejected by society," the Hungryalists force City Lights Journal readers to stretch their notions of dominant culture versus counterculture beyond the confines of the United States.This move reminds American readers that the Beats were far more privileged than their Hungryalist counterparts, a fact indexed by the Beats' very mobility. 41Thus although the Hungryalists rely on some of the same terms Ginsberg and others used to describe the Beats, even borrowing his phrase "revolt of the personal" from his preface to the Hungryalist work presented in City Lights Journal 1, Rai and his co-authors figure themselves as more downtrodden or "beat" than the Beats themselves, for "hunger" can never be a mere pose, but is rather an urgent, all-consuming fact that is therefore "irrevocable".
These prose descriptions are valuable complements to the lone poem included in the issue, Choudhury's "Stark Electric Jesus."The poem was of course notorious by the time it was printed in City Lights Journal, but despite the opinion of the 9th Court of Calcutta, it does not read as particularly obscene, especially by contemporary standards.The poem is a dynamic paean to lust that, like "Drunk Poem", figures the body as something enigmatic that must be investigated.The poem opens with the speaker's skin in a "blazing furore" of desire as he declares "I can't resist anymore, million glass-panes are breaking in my cortex."42This is the "revolt of the personal" identified by Ginsberg and Rai, et al.Choudhury is giving himself permission to articulate feelings he himself does not understand as a route to actuating his embodied existence.The first stanza concludes: "I do not know what these happenings are but they are occurring within me," and the poem takes readers through a catalogue of the speaker's sexual desires that might be achieved were he able to "destroy and shatter" his previous notions about himself and his body (p.161).Taken as a statement of Choudhury's poetics, "Stark Electric Jesus" insists that just as the lover must attend even to those bodily impulses he cannot understand, the poet must find a way to let his body speak through language: "I'll split all into pieces for the sake of art / There isn't any way out for Poetry except suicide" (p.163).With this final turn of the poem, Choudhury analogizes the true lover's experience of bodily defamiliarization with the true poet's need to manifest bodily experience in writing, the very premise of using the word "hunger" to name his generation.Indeed, in the poem's final lines, the speaker says that "Millions of needles are now running from my blood into Poetry," into "the hypnotic kingdom of words" (p.163), such that his body and his poetry are collapsed into one organic being.It is this sense of poetry as something embodied that "Stark Electric Jesus" really argues for, and what has made it a powerful testament to the poetics of the Hungry Generation.Indeed, although the Hungry Generation may have lasted only a few years, "Stark Electric Jesus" has remained one of Choudhury's better-known poems (for instance, it inspired a short film in 2014), even as he has gone on to have a very prolific career writing poetry, drama, and non-fiction in English and Bengali. 43ith respect to the connections among the Hungryalists and the Beats, I think that Choudhury's yoking of the body and poetic utterance offers a suggestive way to understand the shifts in Ginsberg's own poetics after he returned from India.As is well-known, Ginsberg became a prominent political activist in the 1960s while simultaneously developing a pointedly embodied poetics; as Tony Triligio has put it, "Ginsberg's return to the body is not simply a renewal of sensory experience; instead, it claims the body as both product and producer of political experience." 
44While I will sidestep questions of who precisely influenced whom, I do see Ginsberg's use of his own body to blur distinctions among poetry and political activism as roughly analogous to what Choudhury and the Hungry Generation were doing in India.Ginsberg himself signals some of these associations even as he does not explicitly name them.For example, in an interview that appeared in City Lights Journal 2 immediately following Choudhury's "Drunk Poem" and "Short-Story Manifesto," Ginsberg discussed his turn to political activism by linking it to his experiences in India.Ernie Barry interviewed Ginsberg right after a demonstration against repressive government leadership in Vietnam, and when Barry asked: "What other political demonstrations have you been involved in?" Ginsberg replied: "None.This is the first time I've taken a political stand." 45When Barry pressed him on this "new policy," Ginsberg directed him to his protest sign, a poem which read, in part: This poem is notable for a few reasons, not the least of which is that it marks Ginsberg's foray into taking a "political stand."It also abandons the distinction between poetry and protest via the moral authority of the gurus and swamis he met in India.The quoted phrase, "'Oh how wounded, how wounded!'," was in fact important enough to Ginsberg that he would repeat it in later work, including the dedication to Indian Journals, which names its source, a "conversation on bamboo platform in Ganges with Dehorava Baba who spake 'Oh how wounded, how wounded!' after I fought with Peter Orlovsky," and in what is perhaps his greatest antiwar poem, "Wichita Vortex Sutra," in which he again refers to "Dehorhava Baba who moans Oh how wounded, How wounded." 46With this poem on a protest sign, then, Ginsberg identifies the origins of his interest in politics as a conversation with celebrated yogi Devraha Baba, and the protest-poem is itself the germ of "Wichita Vortex Sutra", a long poem in which Ginsberg calls on his own body to declare the end of the Vietnam War.
"Wichita Vortex Sutra," written in 1966 at the height of Ginsberg's fame as a poet-protester, is built around the notion that the Vietnam War is "Black Magic Language," an idea imported from his very first poem-protest sign, which announced "War is black magic," lines written just five months after he left India."Wichita Vortex Sutra" develops this idea: The war is language, 44 Snyder (2007), Allen Ginsberg's Buddhist Poetics (Carbondale: Southern Illinois University Press), p. 67. 45Barry (1964), "A Conversation with Allen Ginsberg," City Lights Journal 2 (1964), p.131. 46 Ginsberg (1996), Indian Journals, n.p.; and Ginsberg (1968), Planet News, 1961-1967 (San Francisco: City Lights), p. 127.language abused for Advertisement, language used like magic for power on the planet: Black Magic language, formulas for reality (p.119) This is a poem that banishes all distinctions between poetry and political protest, that understands the war as underwritten by the "Black Magic language" of politicians, military leaders, and the media, all of whom retreat into euphemism and vagueness in order to defend and justify actions Ginsberg considers indefensible.In a moment that might seem to echo the Hungryalists mailing paper masks to politicians and others, "Wichita Vortex Sutra" asks "Have we seen but paper faces, Life Magazine?"(p.118), the implication in both cases being that those practitioners of "Black Magic Language" are hiding their real selves behind masks (in his interview with Barry, Ginsberg had observed that "everyone wants to feel, and wants to feel loved and to love, so there's inevitable Hope beneath every grim mask [p.137]).In "Wichita Vortex Sutra," Ginsberg's answer to the masked mendacity of language abusers is to collapse poetic utterance and political act so completely that his own embodied voice is given the incredible power to declare the war's end.
As he readies himself for this imaginative act, Ginsberg again calls on those same swamis and gurus from the earlier protest-poem: I call all Powers of imagination to my side in this auto to make Prophecy, all Lords of human kingdoms to come Shambu Bharti Baba naked covered with ash Khaki Baba fat-bellied mad with the dogs Dehorhava Baba who moans Oh how wounded, How wounded (p.127) This catalogue continues on for some time, and grows to encompass not only these compassionate souls he encountered in India but also figures like Christ, Allah, and Jaweh, so that he will claim to counter "Black Magic language" by tapping into positive energies of the world's religions, a universalist gesture that Choudhury, for one, has linked to their time together in India.Choudhury has remarked, for instance, that "I can't claim that I contributed to [Ginsberg's] thinking, though, perhaps in changing the notion that there can not be only one God; there has to be innumerable gods for innumerable human spreads out in order to be eclectic, tolerant and resilient." 47In his 1963 protest poem, as well as in "Wichita Vortex Sutra," figures like Dehorhava Baba and Shambu Bharti Baba (whose photograph, incidentally, appears in Indian Journals) were routes toward the eclectic tolerance Choudhury describes.It is only after Ginsberg names these figures that he can muster the "right magic" to counter the black magicians' "formulas for reality": this Act done by my own voice, [. . .] The War is gone, Language emerging on the motel news stand, the right magic Formula, the language that was known in the back mind before, now in the black print of daily consciousness (pp.127-29) 48 These lines have been subject to a fair amount of critical commentary, but I would point out that they are premised on a collapse of the realms of poetry and political action that are reminiscent of the ways the Hungryalists saw poetry and manifesto as two sides of the same coin. 49Ginsberg insists that the embodied nature of his language-"my voice . . .this Act done by my own voice"-can in and of itself effectuate actual political change.This is a "revolt of the personal" that reverses the mandates of "civilization" and its proxies to invest superhuman power in a single individual, Allen Ginsberg the embodied speaker whose utterance is political intervention.Just as the Hungryalists marshaled defiant theatricality as they circulated manifestos and demotic poetry that were political as much as aesthetic statements, Ginsberg's theatrical declaration of the war's end is politically effective insofar as it rebukes the very terms of the "Black Magic language" that had led to the Vietnam War in the first place.This moment in "Wichita Vortex Sutra" is in fact one among many examples of Ginsberg's sixties-era poems and protests that fused aesthetics and politics, the embodied poet and the embodied protester.He articulated such a fusion in a piece published in the Berkeley Barb in 1965, in which he couched political protest as "spectacle," using the same language of declaration found in "Wichita Vortex Sutra": "Open declarations, 'We aren't coming out to fight and we simply will not fight.'We have come to use imagination.A spectacle can be made, an unmistakable statement OUTSIDE the war psychology which is leading nowhere.Such statement would be heard around the world with relief." 
50ere Ginsberg is recommending strategies for actual political protests, which, he argues, must exploit "imagination" to be effective; otherwise put, he urges tactics that would seem nonsensical to the "war psychology," but that would paradoxically be effective precisely for the ways they expose broader cultural and political ideologies that have become so widespread as to seem reality itself.Ginsberg's political intervention is to reset the terms of reality via the imagination.
While we can finally only remain suggestive as to questions of influence, I do think it is fair to say that the Beats and the Hungryalists were mutually generative literary and cultural movements.This is perhaps most evident in the material history of how the Hungryalists were circulated and packaged to Anglophone readers as a "generation" not unlike the Beats.But there is also a deeper argument to be made about how these movements came to perceive the relationship between poetry and politics.I do not think it is incidental that in the Berkeley Barb piece quoted above, Ginsberg insists that the use of imaginative spectacle in political protest "would be heard around the world with relief," for this underscores his post-India interest in cultivating political solidarities beyond national borders.When thinking about the Beat movement, then, readers and critics must be attentive to the particularities regarding how U.S.-based writers read and interacted with the work of global writers, and vice versa, which helps us understand the profusion of texts produced in the context of an international avant-garde.And as attested by the various connections among the Beats and the Hungryalists traced throughout this essay, there is need for further work that acknowledges these continuities while still attending to the particularities of the writers understood as associated with these movements.
'
Oh how wounded, how wounded!' says the guru Thine own heart says the swami [. . .] War is black magic Belly flowers to North and South Vietnam include everybody End the human war Name hypnosis and fear is the Enemy-Satan go home!I accept America and Red China To the human race Madame Nhu and Mao Tse-Tung Are in the same boat of meat (p.132)
I
lift my voice aloud, make Mantra of American language now, I here declare the end of the War! [. . .] Let the States tremble, let the Nation weep, let Congress legislate its own delight let the President execute his own desire-47 Choudhury (2015), "Sunflower Collective Interview." , "from Subimal Basak-Victim & Spirit," both of which discuss Tagore and link him to what Maitreyee B. Chowdhury calls Calcutta's "post-Partition poverty politics" (both pieces appear in Intrepid 10 ["Poetry of India" issue] [Choudhury 1968e], n.p.). 27achari and Kumar (2016), World Beats: Beat Generation Writing and the Worlding of U.S. Literature (Hanover, NH:Dartmouth College Press), pp. 1, 33.For more on the rich connections among another key Beat writer and transnational poetics, see Brahmachari and Kumar (2016), Kerouac: Language, Poetics, Territory (New York: Bloomsbury).28See"Morris"inOrlovsky(1978), Clean Asshole Poems & Smiling Vegetable Songs (San Francisco: City Lights), pp.61-67.
|
2019-01-22T09:34:04.396Z
|
2019-01-02T00:00:00.000
|
{
"year": 2019,
"sha1": "a1fa52da7240ea907977f3fc6040acbaaf160705",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0787/8/1/3/pdf?version=1546597464",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a1fa52da7240ea907977f3fc6040acbaaf160705",
"s2fieldsofstudy": [
"History",
"Sociology"
],
"extfieldsofstudy": [
"Art"
]
}
|
98657324
|
pes2o/s2orc
|
v3-fos-license
|
An Alternative Approach for Acetylation of Amine Terminated Poly-amidoamine ( PAMAM ) Dendrimer
Aim. Polyamidoamine (PAMAM) dendrimers inherent properties have made it the nanocarrier of choice in the current era of innovation. Dendrimer based products are growing and mushrooming like anything in the current time. Although it suffer from hemolytic toxicity which could be reduced by protecting free amino group. Methods. In the present work alternate acetylated method for PAMAM dendrimers was discussed. 1-Ethyl-3-(3-dimethylaminopropyl) carbodiimide Linker was used for acetylation. The acetylated conjugate was evaluated for color reaction, Ultraviolet–visible spectroscopy, Fourier Transform infrared spectroscopy, Differential scanning calorimetric, Nuclear magnetic resonance spectra studies. Results. The PAMAM dendrimers were synthesized using divergent approach and further acetylated. Change in λmax values from 282.0 to 282.5 nm was observed for acetylated dendrimers. Characteristic peak of N-H stretch of primary amine at 3284.16 cm-1 was disappeared due to conversion of primary Artículo Original Original Article Correspondencia Correspondence Raj K. Keservani School of Pharmaceutical Sciences, Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal, India-462036 Mobile: +917897803904 rajksops@gmail.com Financiación Fundings No funding agency for the research work. Conflicto de interés Competing interest Author declares no conflict of interest. Agradecimientos Acknowledgements Authors are thankful to the Mr. Adi Dev Bharti, for valuable suggestion. Support for research facilities by CDRI, Lucknow, Scope, Indore is thankfully acknowledged. Received: 01.02.2015 Accepted: 06.04.2015 An Alternative Approach for Acetylation of Amine Terminated Polyamidoamine (PAMAM) Dendrimer Un enfoque alternativo para la acetilación de la amina terminal del dendrímero de poliamidoamina (PAMAM) Surya Prakash Gautam1 · Raj K. Keservani2 · Tapsya Gautam3 · Arun K. Gupta3 · Anil Kumar Sharma1 1. Department of Pharmaceutics, CT Institute of Pharmaceutical Sciences, Jalandhar, Punjab 2. School of Pharmaceutical Sciences, Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal (M.P.), India. 3. Smriti College of Pharmaceutical Education, 4/1, Piplya Kumar Kakad, Mayakhedi Road, Dewas Naka, Indore (MP), India.
INTRODUCTION
Despite of its short history of nearly three decades it has proved itself in the market.Dendrimers based products are contributing significantly and efficiently.Top leaders and Pharma giant companies are strengthening their global presence by launching dendrimer based products 1 .
The exposed groups on outer surface are available to bind the moieties that can facilitate in drug targeting.Dendrimers claims plethora of medicinal activities and is carrier of choice in delivering the drug at the area of interest [2][3][4][5][6] .Cross linkers help to couple ligand with dendrimers.Hemolysis toxicity mainly due to free amine groups is major problem that limits their utilization in clinical applications.Research scientist across the world is actively working on this segment to solve the issue.Surface capping may contribute to make nanocarriers nontoxic, biocompatible, biodegradable.
Their pharmacodynamic and pharmacokinetic attributes may be improved 7 .Acetylation is one more approach for capping the free amine groups of dendrimers.Dendrimer Literature survey disclosed that remarkably few methods are reported for the acetylation of PAMAM dendrimers 9 Solubility enhancements of poorly soluble compounds is facilitated by acetylated conjugates 8 .Acetic anhydride is popularly used for acetylation of amine terminated PAMAM dendrimers.Utilization of dendrimers for drug delivery applications while reducing their cytotoxicity is the area were scientist are working actively.Plethora of strategies are used to minimize the cytotoxicity of dendrimers [10][11][12][13] .Neutralizing charges by amine acetylation enhances the transfection efficiency of plasmid DNA 9,14 .Quite a number of examples have appeared recently where researchers have nicely illustrated the biomedical application of surface modified dendrimers 15,16 .In present research work a new approach have been developed that may open new horizons for the drug delivery.
FT-IR Spectroscopy
The 4.0G PAMAM dendrimer and acetylated derivatives were subjected to FT-IR spectroscopy analysis 21 .
Differential Scanning Calorimetric (DSC)
The samples were kept in aluminum disc and set in a Perkin Elmer DSC apparatus (Uberlingen, Germany) over the range of 50-200°C.Analysis was performed at a heating rate maintained at 20°C per minute in a nitrogen atmosphere.
Alumina was used as a reference substance.Spectrum was recorded using TA 60W software and obtained spectra 22 .
NMR Spectroscopy
The dendrimers were evaluated by NMR spectroscopy.Deuterated methanol was used as co-solvent and analyzed at 300 MHz.NMR signal were recorded for 4.0G PAMAM and acetylated analogs 23,24 .
Color Reaction
The plain dendrimers gives violet color due to free -NH 2 groups.The acetylated dendrimers shows less instance violate color due to decreases in free -NH 2 groups on acetylation.The color reaction provides preliminary information regarding the surface modification of 4.0G dendrimers.
Ultraviolet Spectroscopy
Absorption maxima (λ max ) were recorded for 4.0G dendrimers and surface modified dendrimers.The change in λ max values from 282.0 to 282.5 nm was observed for surface modified dendrimers, which indicates change in surface chemistry of PAMAM dendrimers.
FT-IR Spectroscopy
The 4.0G and acetylated derivatives were analyzed by FT-IR spectroscopy analysis by FTIR 470 Plus, Jasco, Japan.The FT-IR peaks provide the proof of modification of free amino group into amide group.The characteristic peak of 4.0G PAMAM dendrimers were of N-H stretch of primary amine at 3284.16 cm -l , N-H stretch of anti-symmetric substituted primary amine at 3146.67cm-1 , C-H stretch at 2919.48cm- 1. IR spectra of Acetylated PAMAM dendrimer, characteristic peak of N-H stretch of primary amine at 3284.16 cm-1 was disappeared due to conversion of primary amine to secondary amine.A new peak of -(CO)-NH stretch was obtained at 1640.28 cm-1 (medium) which shows attachment of acetic acid surface group.FT-IR spectra of 4.0G PAMAM dendrimers and Acetylated dendrimers are given in Figure 1 and Figure 2 respectively.
acetyl conjugates may reduce the hemolytic toxicity and cytotoxicity problems associated with unmodified dendrimers.Pioneering work in this segment was performed by Majoros et al.., 2003 with commercial success of this novel method 8 .
Preparation of Acetylated 4 .
Scheme 1. Scheme for Acetylation of 4.0G PAMAM dendrimers using EDC as a cross linking agent.
D 2 O
was used to solublize 4.0G and acetylated dendrimers.
|
2018-12-07T02:47:26.487Z
|
2015-09-01T00:00:00.000
|
{
"year": 2015,
"sha1": "4ae1c3312a25c094b4b450a365ea5d6b1665e284",
"oa_license": "CCBYNCSA",
"oa_url": "https://scielo.isciii.es/pdf/ars/v56n3/original3.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4ae1c3312a25c094b4b450a365ea5d6b1665e284",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
133092525
|
pes2o/s2orc
|
v3-fos-license
|
On the Origin of Mega-thrust Earthquakes
Out of 17 largest earthquakes in the world since 1900 with magnitudes larger than 8.5, 15 of them occurred along convergent plate boundaries as mega-thrust events. Four of these catastrophic earthquakes have occurred during the last decade. The wealth of observational data from these events offer a unique opportunity for Earth Scientists to understand the underlying processes leading to the deformation in subductions zones, not only along the plate interface, but also in plate interiors in both the subducting slab and the overriding plate.
requires multidisciplinary approaches synthesizing a variety of observational data combined with numerical and analogue modeling. Recent studies of the megathrust earthquakes have shown that there are methodological issues which may require revisiting some earlier wisdom, but they have also shown the capability of new promising techniques. In the following, we illustrate these challenging issues through various studies conducted on the latest earthquakes with a special emphasis on the 2011 Tohoku-Oki, Japan mega-thrust earthquake (M ¼ 9.0).
Mega-thrust Earthquakes
Although there are far more very large earthquakes (M !8.0) that have occurred along the plate interface of various subduction zones which can be considered as mega-thrust events, in this study, we have restricted our definition of mega-thrust earthquakes to those that have magnitudes equal to or larger than 8.5. Among the 17 earthquakes since 1900 ( Fig. 19.1), based on the data from USGS (USGS 2014), we consider 15 of them as mega-thrust events since the 1950 Assam earthquake Fig. 19.1 World's largest earthquakes (M !8.5) since 1900 (data from USGS). Please note that the largest earthquakes have occurred in two clusters in time separated by 39 years. The two earthquakes, Assam 1950 and Sumatra 2012 earthquakes are not considered in this study as megathrust events. The 1950 Assam earthquake have occurred in a different tectonic setting with continent-continent collision, and the 2012 Sumatra earthquake was the largest ever recorded strike-slip faulting event which occurred along one of the fractures zones offshore northern Sumatra have occurred in a different tectonic setting with continent-continent collision, and the 2012 Sumatra earthquake was the largest ever recorded strike-slip faulting event which occurred along one of the fractures zones offshore northern Sumatra. The remaining 15 events have all occurred along the various subduction zones in the Pacific and Indian Oceans. Their space/time correlations indicate that the largest of these earthquakes cluster in time. This is clearly shown in Fig. 19.1, with the two All above earthquakes, except the 2012/04/11 event off the west coast of northern Sumatra, are mega-thrust earthquakes associated with the plate interface of a subduction process. 2012/04/11 event is the largest strike-slip earthquake ever recorded clusters in the time-periods 1950-1964 and 2004-present, separated by a quiescence period of 39 years. The most striking feature of these two clusters is that in the first cluster there were three mega-thrust events with M !9 and there were two M !9.0 in the second. Although it is tempting to suggest duration of approximately 10-15 years for these clusters with a rough repeat time of 40 years, statistically such conclusions are not warranted. This is mainly due to the fact that the total time of observation during the instrumental period is far too small and the two temporal clusters within 114 years cannot be generalized unless we have longer time series available. In spite of increasing evidence for mega-thrust events in the pre-historic period, assessing the occurrence of mega-thrust events during the historic and pre-historic period has some obvious limitations. In general the uncertainties of the source parameters increase significantly backwards in time. This however, should not undermine the importance of paleoseismological data which has proven useful in cases such as the subduction zone mega-thrust paleo-earthquakes of NW-US (1700, Cascadia earthquake; Satake et al. 1996) and in NE-Japan (869, Jogan earthquake; Minoura et al. 2001). It is clear that the occurrence of these mega-thrust earthquakes is governed by global tectonics and the total seismic moment-budget associated with the plate convergence rates in the subduction zones (e.g. Pacheco and Sykes 1992;McCaffrey 2007). Nevertheless, their occurrence in time and space is highly dependent on the history of deformation in individual subduction zones and their internal segmentation within the arc. 
Despite this, there are some common characteristic that can be attributed to the mega-thrust earthquakes. These can be summarized as follows: • All occur on subduction zones along the plate interface and cluster in time.
• All related to strong coupling along the plate interface, where the location and physical properties of the asperities are critical. • Total slip is controlled by the size and the location of the strongest asperity (s) and if shallow, also controls the resulting tsunami size. • Along-dip segmentation of the interface is observed and rupture may include the shallow trench-ward section. • Along-strike segment boundaries are associated with large structural controls on the subducting plate (earlier sea-floor heterogeneities such as sea mount chains, ridges, fracture zones, etc.). • All cause significant stress changes in the neighboring segments (including the outer-rise) and hence increase the likelihood of other mega-thrust events. • All have clear signs of fore-shock activity and significant aftershock activity outside the main asperities.
Based on some of these common features there have been recent attempts to classify the different subduction zones and the associated mega-thrust earthquakes (e.g. Koyama et al. 2013). A simple classification based on three criteria, alongstrike segment boundary, along-dip segmentation and the direction of collision (orthogonal or oblique), although useful to sort out some basic differences, still lacks the necessary details and hence forces one to think in terms of these end members only. However, understanding the subduction zone deformation requires a holistic approach to all controlling factors ( Fig. 19.2).
A complete deformation cycle in a subduction zone starts with the inter-seismic period of strain accumulation due to the plate convergence, which in cases where there is strong coupling along the plate interface, results in internal deformation both in the overriding and subducting plates. In the overriding plate uplift occurs along the coastal areas of the island arc and inland regions whereas subsidence is seen towards the trench in the ocean-ward side. Both of these effects are the consequence of compressional forces due to locking of the plate interface. Observation of the sea-level changes in Sumatra and the response of the coral micro-atoll growth have demonstrated these long-term effects of overriding plate deformation (e.g. Zachariasen et al. 2000;Sieh et al. 2008). Similarly, in the subducting plate in the outer-rise region, compressional deformation occurs during the inter-seismic period, coupled with the down dip extension at depth giving rise to normal faulting deep intraplate events. Once the plate interface is ruptured through a mega-thrust earthquake (co-seismic deformation) the relaxation period following this favors the reversal of the forces acting in the same regions both in the overriding and subducting plates. Subsidence along the shore and inland regions accompanied by the uplift along the trench are typical for the overriding plate deformation. In the subducting plate the same structures that were reactivated as reverse faults now act as normal faults due to extension in the relaxation period (post-seismic deformation). There is off course processes both prior to the rupture of the plate interface (foreshock activity) and immediately after the mega-thrust earthquakes (aftershocks) which is part of the total deformation cycle.
Deformation Cycle in Subduction Zones
Understanding the total deformation cycle in subduction zones and the processes associated with it requires multidisciplinary approaches including a variety of observational data combined with analog and numerical modelling (Funiciello et al. 2013). In recent years indeed a wealth of observational data became available. These include, • Structural data (conventional geology/geophysics) • Petrophysical data (conventional petrology/geochemistry) • Seismological data (conventional source parameters) • Slip inversions based on seismological data for a broad-band of frequencies (backprojection methods for remote arrays etc.) • Seismic tomography at a regional and detailed scales • Seismic anisotropy • Reflection/refraction profiles • Potential field measurements (gravity, magnetics) • Sattelite geodesy (GPS, InSAR, TEC) • Borehole data • Statistical data • Paleoseismological data • Tsunami data (run-up, modeling) • Bathymetric surveys+DEM (digital elevation models at local scales) Synthesizing such a variety of data brings along some methodological challenges as well. In the first place, it is necessary to realize the importance as well as the limitations of each data set before applying an appropriate method. Detailed studies of co-seismic slip-inversions through various data sets for the 2011 Tohoku-Oki, Japan mega-thrust earthquake, illustrate this problem very clearly. Following the earthquake of March 11, 2011 in Japan, there has been a number of co-seismic slip inversions published using tele-seismic data (e.g. Ammon et al. 2011;Ishii 2011;Lay et al. 2011;Koper et al. 2011;Wang and Mori 2011), strong-motion data (e.g. Ide et al. 2011;Suzuki et al. 2011), GPS data (e.g. Linuma 2011Miyazaki et al. 2011;Ozawa et al. 2011;Pollitz et al. 2011) as well as tsunami data (e.g. Fujii et al. 2011;Saito et al. 2011). In addition to these there has also been joint inversions of seismological (teleseismic and strong-motion) and geodetic data (e.g. Koketsu et al. 2011;Yokoto et al. 2011;Yoshida et al. 2011;Kubo and Kakehi 2013). Common for all these inversion results is the shallow asperity with a large slip. In general there is a good agreement on the location of the shallow asperity among the various studies (seismological, GPS and tsunami wave data), where the maximum slip exceeds 40 m. When it comes to the details of the rupture there are significant differences in these inversion results. The main conclusion here is that slip inversions are non-unique and there is strong need for independent data which may help calibrating these. In other words, identifying the location of the strong asperities by multidisciplinary data sets seems critical. Independent evidence for the shallow asperity and the observed large slip came from the sea-bottom GPS measurements (Sato et al. 2011) and shallow seismic data (Kodaira et al. 2012) combined with cores from the borehole drilled at the tip of the sedimentary wedge (Chester et al. 2013).
The down-dip extent of the fault rupture is on the other hand, debated and some of the studies conclude that rupture propagated to the bottom of the contact zone. A number of inversions based on teleseismic data from large and dens arrays (US-array and the Stations from Europe) have revealed a strong short period radiation at deeper part of the rupture plane (e.g. Ide et al. 2011;Ishii 2011;Koper et al. 2011;Meng et al. 2011;Wang and Mori 2011). It is now understood that the slip associated with the shallow asperity was slow and lacking short-period radiation, whereas the deeper asperities produced strong short-period energy (Koper et al. 2011).
Apart from that, arguably, it can be said that the joint inversions smear out the slip distribution and a lot of details such as the short-period radiation at depth is not resolved (Meng et al. 2011). As such the common understanding that the joint inversions are better than individual data sets is questionable. The rupture complexity with a dynamic variation at various frequencies is better resolved by individual analysis of different data sets that are sensitive to these frequencies.
The results from these individual studies, when combined together in a synthesis, seem to be a far better tool than the joint inversion results.
Rupture Preparation and Post-seismic Slip
Mega-thrust earthquakes along subduction zones are mainly controlled by the plate coupling along the interface. Some critical issues related to the degree of coupling are, the location of the strong and weakly coupled zones (asperities and their origin), role of sediments and fluids in coupling, down-dip limit of the coupled zone as well as coupling in the shallow zone close to trench. Regarding the latter the 2011 Tohoku-Oki earthquake has surprised many. Contrary to the common belief that the shallow part of the coupling along the trench is usually weak controlled by the loose sediments of the accretionary prism accompanied by the fluid interaction reducing the friction, more than 40 m of slip is observed along the trench. This very high slip along the trench was also crucial in the development of the following large tsunami wave.
The strongly coupled shallow asperity along the trench was manifested by the various co-seismic slip inversions as discussed earlier. It is also firmly confirmed by the offshore GPS measurements where significant slip was observed (Sato et al. 2011). The maximum horizontal slip measured was as high as 24 m almost 100 km away from the trench. The vertical uplift was as high as 3 m in the same area. The slip was also observed at the very tip of the trench through high resolution seismic data (Kodaira et al. 2012). Later, Chester et al. (2013) have shown the actual plate interface cutting through the contact between the pelagic sediments of the subducting plate and the sediments of the accretionary prism representing the overriding plate.
The rupture process had however started already with the onset of increased earthquake activity just at the periphery of this strong asperity at depth some weeks before the main rupture which culminated in a magnitude 7.5 earthquake at the deeper end of the shallow asperity on March 9, 2011. The static stress transfer from this event was probably the triggering mechanism for the main rupture on March 11, 2011. Such foreshock activity is not unique for the Tohoku-Oki earthquake, similar significant foreshock activity was previously documented in other plate interface thrust events (e.g. Raeesi and Atakan 2009) and more recently during the 2014 Iquique earthquake in northern Chile (Hayes et al. 2014).
Post-seismic slip is usually associated with extensive aftershock activity following the mega-thrust events. This was the case for the Tohoki-Oki earthquake where hundreds of aftershocks were registered in the following weeks after the main shock . The most striking feature of the aftershock sequences was their spatial concentration in areas outside the main asperities that had ruptured during the co-seismic slip. The largest of these aftershocks had a magnitude of 7.9 at the southernmost part of the plate interface off Boso, close to the Sagami trough in the south. In addition to the intensive aftershock activity along the plate interface, there has been also triggered seismic activity both in the overriding plate ) and the subducting slab in the outer rise area such as the M ¼ 7.7 earthquake . Such outer rise normal faulting events can be very large as was the case for the 1933 (M ¼ 8.4) event further to the north. These events are the manifestation of the total deformation associated with the stress transfer from the main shock (Toda et al. 2011)
Segmentation of the Plate Interface
Physical conditions leading to the deformation in subduction zones depend on a variety of factors including: • Direction and speed of the plate convergence. • Differences in the rheology/composition of the two colliding plates. • Age and density difference between and density variations within colliding plates. • Physical/morphological/geological irregularities along the plate interface.
• The degree of coupling along the plate interface between the overriding plate and the subducting slab. • Accumulated stress/strain. • Fluid flow along the plate interface.
• Heat gradient and heat-flow.
• Melting process at the magma wedge. Mantle flow and circulations in the subduction system.
Although the total deformation is controlled by these factors, the physical and the morphological irregularities of the oceanic plate converging to the trench will in time have long term consequences in terms of the segmentation of the plate interface. Iquique ridge entering into the subduction zone in the border area between northern Chile and southern Peru is a good example for this (e.g. Pritchard and Simons 2006;Contreras-Reyes and Carrizo 2011;Métois et al. 2013). The strong coupling along this zone has previously been modelled (e.g. Métois et al. 2013;Chlieh et al. 2014) and is expected to produce large megathrust earthquakes probably larger than the recent Iquique event of 2014 (M ¼ 8.2). Other sea-floor irregularities such as sea-mounts, fracture zones and ridges play thus an important role in the overall segmentation of the plate interface in various subduction zones.
Mapping Asperities
Once the segmentation of the interface is understood, the next critical issue is to find the location of the asperities. Inevitably, slip inversion of earthquakes constitutes an important contribution in this sense. However, there is a need for additional independent data to calibrate and verify the slip inversions as well as to find out more about the location of asperities in subduction zones where there are no recent large mega-thrust events in the latest instrumental period. One promising recent development is the use of satellite gravity data, GRACE in resolving the co-seismic gravity changes due to mega-thrust events (e.g. Tanaka and Heki 2014;Han et al. 2014). These new data opens new possibilities for detecting the location of asperities, because the repetitive slip along the same asperities of the plate interface causes mass dislocations. In the long-term, cumulative mass dislocations in the same part of the overriding plate will lead to permanent density changes. The accumulated density changes then leave an imprint on the overriding plate due to gravity (buoyancy forces) that change the degree of coupling along the plate interface. Cumulative effect of these variations should therefore be detectable as subtle deviatoric gravity changes parallel to the trench. These strongly coupled areas constitute the asperities that will slip in future large mega-thrust earthquakes. Mapping asperities by gravity data was first introduced by Song and Simons (2003), where trench parallel topography and gravity anomalies in the circum-Pacific region have revealed the strongly coupled areas along the plate interface. This was later modified (Raeesi and Atakan 2009;Raeesi 2009) to include also trench parallel Bouger anomaly.
Mapping asperities along the plate interface using these new techniques, if combined with detailed monitoring of seismological as well as geodetic changes in time with the recent observations regarding the short-term precursory phenomena such as total electron content (TEC) in the ionosphere (e.g. Liu et al. 2011;Tsugawa et al. 2011), may provide important opportunities to understand the deformation processes before the occurrence of the mega-thrust earthquakes.
Future Perspectives
In order to understand better the complex processes leading to mega-thrust earthquakes and the total deformation in subduction zones, future studies should focus on identifying the gaps for mega-thrust earthquakes as well as identifying the precursory phenomena in both long-and short-term. Following is a short list of research areas that may be helpful in this sense: Identifying Gaps for Megathrust-Earthquakes • Mapping the location of strongly coupled plate interface along subduction zones (GPS and stress modeling, stress transfer) • Mapping the location and size of the largest asperities (Gravity, TPBA, seismic tomography) • Mapping rupture areas of previous historical and instrumental mega-thrust earthquakes (historical accounts, slip distribution of previous instrumental mega-thrust earthquakes) • Developing segmentation models for the subduction zones (mapping heterogeneities in the ocean-floor)
Identifying Precursory Phenomena
In the long-term: • Monitoring overriding plate deformation (geodetic measurements of interseismic period through GPS, InSAR) • Monitoring space/time variations of seismicity in the interseismic period (dense BB-station networks) In the short-term: • Identifying foreshock activity (detailed seismic monitoring) • Identifying ionospheric disturbances (TEC measurements) Open Access This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
|
2019-04-26T14:21:46.791Z
|
2015-01-01T00:00:00.000
|
{
"year": 2015,
"sha1": "7a53dbcca689955fc7ce096dc9b953910028f286",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-319-16964-4_19.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "f26001dbbcf4bcdb469a9089f91097d3d48de10f",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
}
|
227297288
|
pes2o/s2orc
|
v3-fos-license
|
Novel doses of sacubitril/valsartan in patients unable to tolerate traditional therapy: Effects on N‐terminal pro B‐type natriuretic peptide levels
Abstract Background Widespread use of angiotensin receptor blocker and neprilysin inhibitor (ARNI) remains low, and many patients are unable to tolerate the medication due to hypotension at the currently recommended starting dose. Hypothesis The aim of this study is to assess if lower than standard doses of ARNI, sacubitril/valsartan (S/V), significantly reduces NT‐proBNP and leads to any change in diuretic dose, serum potassium, or creatinine. Methods In a retrospective study of 278 patients who were started on a low dose S/V at a single medical center, 45 patients were selected for the study cohort. Patients were subcategorized to Group 1 (n = 10): very low dose S/V (half a tab of 24/26 mg BID), Group 2 (n = 10): very low dose titrated to low dose S/V, and Group 3 (n = 25): low dose S/V (24/26 mg BID). NT‐proBNP, diuretic dose, serum potassium, and creatinine were compared before and after initiation of S/V. Results Among all groups, there was a significant reduction in NT‐proBNP level (Group 1: p < .01, Group 2: p < .01, and Group 3: p < .001). In addition, there was a significant reduction in diuretic dose across all groups combined (furosemide 53 mg/day vs. 73 mg/day; p = .03), with 17.8% (8/45) patients being able to discontinue their diuretic completely. There was no significant change in potassium or creatinine. Conclusions Lower than standard dose of S/V significantly reduces NT‐proBNP and diuretic requirement without change in potassium or creatinine, which provides hope that patients who cannot tolerate standard doses of S/V due to hypotension may be able to receive the benefits of S/V therapy.
| INTRODUCTION
Heart failure (HF) is one of the most detrimental and costly disease processes in medicine. Its influence is far reaching, affecting patient morbidity and mortality as well as overall healthcare costs. It is estimated that over 6.5 million adults suffer from HF in the United States and nearly 40 million adults affected worldwide. 1,2 Recently, the Prospective Comparison of ARNI with ACEI to Determine Impact on Global Mortality and Morbidity in Heart Failure (PARADIGM-HF) trial demonstrated significant promise for the future of pharmaceutical therapy in treatment of HF patients. 3 Sacubitril/valsartan (S/V) has been shown to be efficacious in many trials that have been completed or are ongoing. [3][4][5][6][7] The ability to act not only on RAAS system and reduce afterload as well as capitalize on the volume titration properties of NPs seems, in hindsight, to be an ideal combination. Furthermore, it is well established that NPs play a pivotal role in the process of natriuresis-and in turn diuresis-ultimately becoming a pivotal regulator of cardiovascular homeostasis through their paracrine and endocrine actions. 4,8,9 In the PARADIGM-HF trial, a 20% reduction in cardiovascular death or hospitalization for HF in the PARADIGM-HF led to the approval of S/V from the US Food and Drug Administration (FDA) as well as the European Commission in July 2015 and November 2015, respectively. 3 Furthermore, S/V earned designation as the first line therapy in a focused update of the ACC/AHA guidelines for the treatment of HF as well as the European HF clinical practice guidelines. [10][11][12][13] After a run-in phase that ensured all patients were able to tolerate enalapril 10 mg twice daily, patients were switched to S/V 49/51 mg twice daily and finally S/V to a goal dose of 97/103 mg twice daily. 6 Ultimately, the FDA approved three doses: a 23/24 mg (low dose), 49/51, and 97/103 mg (high dose). Subsequent to PARADIGM-HF, many post-hoc analysis were conducted to investigate hypotension with S/V. While patients were more likely to experience hypotension in the run-in phase of PARADIGM-HF with S/V therapy as compared to ACEI, these patients still derived benefits from S/V therapy. 14,15 Although the dose of S/V used was still among the standard dosing. 15 Subsequent "real world" studies also show a significant increase in hypotensive episodes with S/V as compared with ACEI. 16 Thus, despite estimates of profound benefit to HF patients, in current clinical practice, many patients cannot tolerate high doses of S/V, or even the smallest approved dose of 24/26 mg twice daily due to several factors including hypotension. 10,17,18 This remains even more so in older patients. 14 It remains unclear if lower doses of S/V are associated with reduction in NT-proBNP levels. In the current study, we aimed to investigate if S/V doses below the lowest approved 24/26 mg tablet correlate with reduction in NT-proBNP and associated beneficial effects in patients with HFrEF.
| Statistical analysis
All statistical analyses were performed using the Wilcoxon Rank-Sum test using GraphPad Prism software, Version (La Jolla, CA). Analysis for NT-proBNP level was performed between the baseline and final assessment after therapy. Difference in diuretic dosing was assessed for statistical significance as well with the Wilcoxon Rank-Sum test.
Significance was assessed by p ≤ .05. All analyses were conducted at the two-tailed level, and the significance was set at α ≤ .05. (Table 1). Furthermore, 18% (8/45) were able to discontinue their diuretic therapy completely (none in Group 1, 3 in Group 2, and 5 in Group 3).
| Changes in serum potassium and creatinine
Overall, creatinine did not change across all groups. There was a trend for improvement in serum creatinine levels although Group 1 had non-significant increase in creatinine (1.14-1.18 mg/dl) ( Table 1).
There was a non-significant trend towards increase in potassium. Interestingly, when comparing the VLDS/V and LDS/V, about 30% of the patients were able to meet this reduction threshold.
In addition, patients were able to achieve reduction in diuretic dosing on S/V therapy, with some being able to discontinue all diuretic therapy. Reduction in diuretic therapy may be tied to the natriuresis resulting from increased circulating NP levels. 22 Given that no patient in Group 1 on VLDS/V therapy was able to discontinue diuretic therapy, it seems that while there is significant reduction in NT-proBNP levels with VLDS/V, concurrent diuretic therapy is still needed to achieve euvolemic state. However, there were patients both in Group 2 who were bridged from VLDS/V to LDS/V and in Group 3 on LDS/V, who were able to discontinue all diuretics while on S/V therapy.
As yet there is minimal data on the efficacy of S/V at doses below the target dose of PARADIGM-HF. One study reported that patients in PRADIGM-HF who were down-titrated to reduced doses of S/V did better than those on reduced doses of enalapril, although the reduction was from high dose to moderate dose S/V therapy. 15
| CONCLUSION
S/V is a hallmark therapy that reduces mortality and morbidity in HFrEF patients as evidenced in PARADIGM-HF trial. Usage of S/V in clinical practice remains low, in part limited by hypotension. This study supports the concept that patients who may not have received S/V due to low blood pressure can tolerate VLDS/V and LDS/V, which results in a significant reduction of NT-proBNP without any significant change in serum potassium or creatinine. Moreover, in patients who were able to tolerate LDS/V, there was a significant decrease in their diuretic dose and some were able to stop their diuretic completely.
While our study demonstrates a significant biomarker response to lower than standard dose of S/V, further investigation is needed to confirm clinical benefit with non-target doses of S/V.
ACKNOWLEDGMENTS
We would like to thank Ed Yoon, PharmD for his valuable technical assistance in querying for patients to be included in the study.
Amitabh C. Pandey is supported from the NIH NCATS CTSA Award to Scripps Research (KL2TR002552).
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
|
2020-12-06T14:06:29.386Z
|
2020-12-05T00:00:00.000
|
{
"year": 2020,
"sha1": "cfe5d83c7d1b2878ddfad07613920ab708876454",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/clc.23509",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "86cfdc325484d7674fe1b23be8515b2dbc6e2a24",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
240046029
|
pes2o/s2orc
|
v3-fos-license
|
The role of antiaggregant agents and anticoagulants in the prevention of aortic valve endocarditis: A double-cohort retrospective study
Objective Antiaggregants (Ag) could prevent infective endocarditis (IE) in preclinical studies. In this study we investigated whether Ag or anticoagulants (Ac) were also protective in humans. Methods In part I we determined the incidence of IE of bioprosthetic aortic valves (PVE) in 333 consecutive patients who underwent aortic valve replacement for noninfective aortic insufficiency between 2009 and 2019. In part II we retrospectively analyzed data of 137 patients who had developed IE of the native aortic valve (NVE) between 2007 and 2015. Multivariable Fine–Gray and logistic regression models were used to investigate associations between Ag and Ac therapy and IE. Results Sixteen of 333 (4.8%) aortic valve replacement recipients developed PVE after a median of 3.72 years. There was no association between Ag and PVE, whereas Ac was associated with a higher IE occurrence (no association for vitamin K antagonists but significant for fondaparinux or low molecular-weight heparins; hazard ratio, 4.61; 95% CI, 1.01-21.9). In contrast, among the 137 patients in part II, vitamin K antagonists (odds ratio [OR], 7.52; 95% CI, 2.51-22.6), double antiplatelet therapy (OR, 44.3; 95% CI, 4.83-407), novel oral Ac (OR, 4.17; 95% CI, 1.15-15.1), and fondaparinux or low molecular-weight heparins (OR, 9.87; 95% CI, 1.81-53.9), but not acetylsalicylic acid, were associated with NVE. Conclusions Ac were associated with IE in both cohorts, whereas Ag were not associated with PVE. This might reflect differences in the studied populations, with Ag and Ac being prescribed for conditions associated with long-term IE risk in the NVE cohort. Therefore, determining the potential protective effect of Ag and Ac will necessitate further well–controlled studies.
Conclusions: Ac were associated with IE in both cohorts, whereas Ag were not associated with PVE. This might reflect differences in the studied populations, with Ag and Ac being prescribed for conditions associated with long-term IE risk in the NVE cohort. Therefore, determining the potential protective effect of Ag and Ac will necessitate further well-controlled studies. (
CENTRAL MESSAGE
The observed effect of antiaggregants and anticoagulants on the risk of infectious endocarditis might depend on study design and the setting in which they are prescribed.
PERSPECTIVE
In this dual retrospective analysis, Ac were associated with the occurrence of IE in patients at risk. In contrast, Ag were not associated with IE after valve replacement, whereas they were in the retrospective native valve IE study. This might reflect differences in the populations. Therefore, determining the possible protective effect of these drugs will necessitate further wellcontrolled studies.
See Commentary on page 313.
Infective endocarditis (IE) is a rare but potentially life-threatening disease. Its incidence has been increasing over the past decade and health care-associated pathogens such as Staphylococcus aureus and Enterococcus faecalis gained ground. 1,2 Despite earlier diagnosis and improved treatment, 30-day mortality remains high at up to 30%. To date, very few effective, long-term prophylactic strategies are at hand. The 2015 European Society of Cardiology guidelines on the management of IE advise to use antibiotic prophylaxis only in high-risk patients undergoing high-risk procedures. 3 Furthermore, this strategy cannot rule out all sources of endocarditis, because transient bacteremia most frequently occurs during day-today activities such as toothbrushing. 4,5 Other strategies such as vaccines against Staphylococcus aureus have been studied since 1910, but have gained no ground in clinical practice so far. 6 Cardiac endothelium is naturally resistant to transient bacteremia. However, it becomes sensitive when injured or locally inflamed, leading to the exposure of the underlying extracellular matrix and subsequent activation of proinflammatory and procoagulant pathways. 2,7 Various antiaggregants (Ag) have been shown to interact with some of the surface receptors and adhesion molecules through which pathogens such as Staphylococcus aureus create biofilms, shielding themselves from the immune system. 8,9 As the processes of adhesion, internalization, and dissemination of pathogens can thus in theory be prevented by Ag, they have gained attention as a potential prophylactic strategy. Initial results from animal studies have indeed shown promising results for among others aspirin, ticlopidine, ticagrelor, and clopidogrel. [9][10][11][12] It remains unclear whether antithrombotic therapies such as anticoagulants (Ac), and Ag in particular, also exert protective effects in humans. In this study, we aimed to investigate the association between the prescription of Ag and Ac and the risk of IE in 2 retrospective cohorts.
Study Population
The institutional review board (IRB) or equivalent ethics committee of the Univeristy Hospitals Leuven approved the study protocol and publication of data (IRB approval number S63228; Nov 15, 2019). Patient written consent for the publication of the study data was waived by the IRB because of the retrospective nature of the study. The study consisted of two parts. In part I, all consecutive patients older than 16 years who underwent aortic valve replacement (AVR) with bioprosthetic valves for noninfective aortic insufficiency between January 2009 and December 2019 were identified through the local cardiac surgery database and the hospitals electronic database. Because bioprosthetic aortic valves resemble the native aortic valve more closely in physiological terms compared with mechanical valves, the latter were excluded from the analysis. The remaining patients were included in the "AVR" cohort and the development of IE of the bioprosthetic aortic valve (PVE) was investigated. In part II, all consecutive patients with confirmed IE of the aortic valve between January 2007 and December 2015 were identified through the same databases. Patients who had IE of the native aortic valve (NVE) were included in the "NVE" cohort. Patients with bioprosthetic valves were excluded from this cohort; a detailed list of reasons for exclusion is provided in Table E1.
All patients were diagnosed according to the modified Duke criteria and treated according to the European Society of Cardiology guidelines. 3 Demographic, clinical, laboratory, and follow-up data were retrieved from information available in electronic medical records, as well as hospitalizations and outpatient consultations. The minimal period of follow-up for every patient was until hospital discharge or death.
Study Design and Objectives
The primary objective of this study was to investigate whether an association exists between antithrombotic therapies and PVE or NVE. In part I, this was achieved through a retrospective time-to-event analysis with the development of PVE after AVR surgery as the event of interest. In part II, groups based according to the prescribed antithrombotic therapy were compared among patients included in the "NVE" cohort. Secondary objectives were to investigate causative pathogens, their associated time to diagnosis of IE, and their relation to antithrombotic therapy.
Abbreviations and Acronyms
Ac ¼ infective endocarditis of the bioprosthetic aortic valve VKA ¼ vitamin K antagonists regression models to account for this. The initiation and duration of treatment with antithrombotic therapies in the PVE group at our center has been published prevoiusly 13 ; details are given in the Appendix E1.
Statistical Analyses
Continuous variables were checked for normality using the Shapiro-Wilk test. Normally distributed variables are presented as mean AE SD, whereas non-normally distributed variables are presented as median (interquartile range [IQR]). Categorical variables are expressed as frequency (%).
In part I, Kaplan-Meier estimates were obtained to quantify the rate of PVE over time. A multivariable Fine-Gray model was developed for each of the antithrombotic therapies, which allows for the estimation of the rate of PVE in the presence of death as a competing event. The model included the additional potential confounders such as age, sex, year of surgery, and combination therapy. Results are presented as hazard ratios (HRs) with 95% CIs. The Lagrange multiplier score test was used to estimate the effect of a therapy in case one of the groups had 0 events and a HR could therefore not be calculated.
In part II, logistic regression models were used. NVE was modeled as the binary response variable, with antithrombotic therapy as the explanatory variable. Hence, the odds of developing NVE were compared between patients who received therapy of a specific type and those who did not. Potential confounders such as age, sex, and combination therapy were included as covariates in the multivariable model. The effect of therapy on the risk of NVE is expressed as odds ratios (ORs) with 95% CIs. All analyses were performed using SAS software (version 9.4; SAS Institute Inc).
Study Population
Between January 2009 and December 2019, 563 patients underwent elective AVR for noninfective aortic insufficiency 137 patients with native aortic valve endocarditis were included in the "NVE" cohort. Demographic characteristics and types of antithrombotic therapy are presented in Table 1. The median age was 74.0 (IQR, 68.7-81.4) years in the "AVR" cohort and 62.1 (IQR, 53.2-74.9) years in the "NVE" cohort. Participants were mainly men in both cohorts (61.6% and 72.9%, respectively). After AVR surgery, more than half of the patients (54.3%) received ASA, followed by NOAC (21.0%) and VKA (12.7%). There were no patients who did not receive antithrombotic therapy in the "AVR" cohort. In contrast, more than half of the patients in the "NVE" cohort had no antithrombotic therapy prescribed before developing IE. In the remainder of the patients in this cohort, ASA (16.1%) was most frequently used, followed by VKA (11.7%) and DAPT (7.3%). No patients in this cohort received monotherapy of P2Y12 receptor antagonists. Table E3. VKA, Vitamin K antagonists; ASA, acetylsalicylic acid; DAPT, dual antiplatelet therapy; NOAC, novel oral anticoagulant; LMWH, low molecular-weight heparins; P2Y12-RA, P2Y12-receptor antagonists. Table E3. PVE occurred in 4 patients (8.2%) receiving VKA, 10 patients (4.8%) receiving ASA, 2 patients (2.5%) receiving NOAC, and 2 patients (20.0%) receiving fondaparinux or LMWH. One of these patients was receiving VKA and ASA, and another was receiving NOAC and ASA. During follow-up, 73 patients (21.9%) died.
Part II: NVE
Results from the logistic regression models in the "NVE" cohort are presented in a league table (
Causative Pathogens
Among patients included in the "AVR" cohort, E faecalis was isolated in 5 (31.3%), Streptococcus species in 4 (25.0%), and coagulase-negative Staphylococcus in 2 (12.5%). Klebsiella pneumoniae, Pseudomonas aeruginosa, and Staphylococcus aureus were found in 1 patient each ( Table 3). The mean time to PVE ranged from 0.05 years for those with Klebsiella pneumoniae to 4.83 years for those with Staphylococcus aureus.
To investigate whether certain types of antithrombotic therapy were associated with specific types of pathogens, a subgroup analysis of the 137 isolates in the "NVE" cohort was performed (Table 4). Streptococcus species was responsible for most cases (n ¼ 56; 40.9%), followed by Staphylococcus aureus (n ¼ 34; 24.8%), E faecalis (n ¼ 20; 14.6%), and coagulase-negative Staphylococcus (n ¼ 11; 8.0%). Other pathogens included Kingella kingae (n ¼ 1 in the VKA group), Propionibacterium acnes (n ¼ 1 in the DAPT group), Candida albicans (n ¼ 1 in the LMWH group), Escherichia coli (n ¼ 2 in patients with no antithrombotic therapy), Abiotrophia defectiva (n ¼ 1 in patients with no antithrombotic therapy), Aspergillus flavus (n ¼ 1 in patients with no antithrombotic therapy), and Gemella haemolysans (n ¼ 1 in patients with no antithrombotic therapy). In 8 cases (5.8%), the culture was negative or undefined. No difference could be observed in the distribution of pathogens between therapies (P ¼ .43).
DISCUSSION
The present study, including 2 retrospective cohorts, investigated whether such protective effects of these and other antithrombotic therapies are also seen in humans ( Figure 3). In part I, we found that in 333 patients who underwent AVR, there was no association between Ag/Ac therapies and PVE, although Ac tended to do less well (this was true for fondaparinux or LMWH but not for VKA). In the 137 patients included in part II, however, VKA, DAPT, NOAC, and fondaparinux or LMWH, but not ASA, were all associated with increased risk for NVE.
Nevertheless, it is worth noting that the clinical history of the 2 studied populations were quite different. In the PVE population the IE risk was rather well defined and decreased over time, in parallel to valve endothelialization ( Figure 2). In the NVE study, the patients (some of them being referred to our tertiary center) were selected by the fact that they had already developed IE, without a complete knowledge of their predisposing risk factors and medical history. In addition, the risk of IE was likely to increase over time in this group. Therefore, patients of the group who received Ag or Ac could well have represented a higher risk population compared with those who did not receive Ag or Ac before infection, thus generating an untoward selection bias for IE. This underlines the potential biases that might be related to different types of analyses.
Epidemiology and Microbiology of IE
PVE is a major complication after AVR surgery, with an estimated incidence of 0.3% to 1.2% per patient-year and a cumulative risk of 5% at 10 years. 14 This corresponds to the numbers observed in the present study (0.5% per patient-year and cumulative risk of 4.8% at 9 years). With regard to timing, early PVE ( 12 months post AVR) and late PVE were found in 50% (8/16) of patients each, in agreement with data from a large Italian multicenter study. 15 Of note, all patients included in our study had received bioprosthetic valves, which tend to have a higher overall risk of IE compared with mechanical valves. 16 Furthermore, the risk of PVE gradually decreased as endothelialization occurred, resulting in late PVE having a risk similar to that of native aortic valves. 17 This might explain why most of the PVE events in our "AVR" cohort (14/16; 87.5%) occurred within the first 2 years after surgery.
From a microbiological perspective, Streptococcus species, Staphylococcus aureus, and E faecalis were the most common causative pathogens in IE, 18 as was confirmed by our finding that these 3 pathogens accounted for 80.3% (110/137) of all cases in the "NVE" cohort and 62.5% (10/16) in the "AVR" cohort. In PVE, the timing after AVR is known to be related to the causative pathogen. The most common microorganisms causing early PVE as
MAIN OBJECTIVE
To assess whether the promising results of antiaggregants (Ag) and anticoagulants (Ac) in the prevention of IE from preclinical studies are also seen in the clinical setting.
IMPLICATIONS
The observed effect of Ag/Ac on the risk of IE might depend on study design and the setting in which they are prescribed. . Key findings of the retrospective double-cohort study. Preclinical research has shown that antiaggregants (Ag) could prevent infective endocarditis (IE). In the present study, including 2 retrospective cohorts, we investigated whether such protective effects of these and other antithrombotic therapies are also seen in humans. In the "AVR" cohort, there was no association between Ag/anticoagulant (Ac) therapies and IE of the bioprosthetic aortic valve (PVE), although Ac tended to do less well (not statistically significant for vitamin K antagonists [VKA], whereas it was significant for fondaparinux or low molecular-weight heparins [LMWH]). In the 137 patients included in the "NVE" cohort, however, VKA, dual antiplatelet therapy (DAPT), novel oral anticoagulants (NOAC), and fondaparinux or LMWH, but not acetylsalicylic acid (ASA), were all associated with increased risk for NVE. This might reflect differences in the studied populations: the first being well-controlled with a progressively decreasing IE risk after AVR; the second in which Ag/Ac might have been prescribed for conditions associated with long-term IE risks, resulting in a selection bias. Therefore, determining the possible protective effect of Ag/Ac will necessitate further well-controlled studies. The 95% confidence limits for the survival curve are shown in Table E3. AVR, Aortic valve replacement; P2Y12-RA, P2Y12-receptor antagonist.
reported in the literature are Staphylococcus aureus, coagulase-negative Staphylococci, and fungi. 19 In PVE occurring later, Enterococci and Streptococcus spp predominate. 20 In our "AVR" cohort, most late PVE was indeed caused by either E faecalis (4/8; 50.0%) or Streptococcus spp (2/8; 25.0%). Interestingly, in 1 patient who developed PVE at 4.83 years after AVR surgery for calcific aortic valve disease, Streptococcus aureus was isolated, a pathogen which tends to be associated with early PVE. Because the patient had not undergone any procedures in the meantime, PVE likely resulted from hematogenous spread of an infection.
Antithrombotic Therapy and IE
Various preclinical studies have shown favorable results of certain types of antithrombotic therapy, and antiplatelet agents in particular, on the risk of developing IE. [9][10][11][12] After entrance into the circulation, bacteria such as Staphylococcus aureus can convert fibrinogen into fibrin through the expression of staphylocoagulase and von Willebrand factor-binding protein. 21 Consequently, fibrin can act as a bridge between clumping factor A on the bacterial membrane and aIIbb3 receptor, which is found on circulating platelets. 7 This makes it possible for Staphylococcus aureus to not only bind to the endothelial wall of damaged valves, but also to recruit platelets for the formation of a biofilm, effectively shielding itself from the immune system.
By interacting with the aIIbb3 receptor, ASA and P2Y12 receptor antagonists can block bacterial-induced platelet activation. In a study using a laminar flow medium, Ditkowski and colleagues 22 reported that bacterial adhesion to tissue valve or conduit heterografts was reduced by approximately 50% when either ASA or ticagrelor were used and 70% when both were combined in DAPT. Moreover, some of these agents have an additional antimicrobial action, as shown by Lancellotti and colleagues 9 in the context of ticagrelor therapy. In time-kill assays, they showed that ticagrelor exerted bactericidal activity against gram-positive strains, including E faecalis and methicillin-resistant Staphylococcus aureus. They also showed that ticagrelor inhibited biofilm growth on Staphylococcus aureus-preinfected implants. Furthermore, Veloso and colleagues 8,10 reported that the combination of ASA and ticlopidine, as well as abciximab, protected against Streptococcus gordonii, Streptococcus gallolyticus, Staphylococcus aureus, and E faecalis IE in an experimental rat model of prolonged low-grade bacteremia. Interestingly, in one of the studies, 8 dabigatran, an anticoagulant, also protected against IE due to Staphylococcus aureus but not Streptococcus gordonii. Supporting the latter finding, Lerche and colleagues 23 reported that dabigatran significantly reduced valve vegetation size, bacterial load, and expression of inflammatory markers when administered in combination with gentamicin in a rat model of severe aortic valve Staphylococcus aureus IE.
As highlighted previously, evidence from preclinical studies thus mainly exists for antiplatelet agents, although limited evidence is also available supporting efficacy of the anticoagulant dabigatran in the prevention of IE. Dabigatran reversibly binds to thrombin as well as staphylothrombin, inhibiting the conversion of fibrinogen to fibrin and enhancing fibrinolysis. Although it thus inhibits Staphylococcus aureus-induced platelet aggregation, it does not interfere with the activity of the bacteria. 8,21 However, experimental animal studies have consistently shown no effect of VKA on IE. 8 Of note, most preclinical research efforts have focused on Staphylococcus aureus, which is the main cause of early PVE but becomes relatively less important in late PVE.
Strikingly, the present study could not confirm the efficacy of any antithrombotic therapy to prevent IE in an actual clinical setting. In part I, all patients received some kind of Ag or Ac, and thus the drugs were to be compared with each other. In contrast, in part II, the prescription of these drugs was associated with an increased risk of IE compared with patients who received no treatment. The question arises as to why such association was observed and whether these results should discourage further investigation of antithrombotic therapies as a potential preventive strategy for IE. Most likely, the main reason is to be found in the fact that antithrombotic therapies are frequently prescribed for patients with concurrent cardiovascular diseases that are considered "high risk" to develop IE according to the 2017 update of the 2014 American Heart Association/American College of Cardiology guideline for the management of patients with valvular heart disease. 24 Patients who are receiving antithrombotic therapies can thus be assumed to carry a higher baseline risk of developing IE. In addition, Strom and colleagues 25 showed earlier that preexisting valve lesions as well as other conditions such as kidney diseases, diabetes, and intravenous access were all strongly associated with community-acquired IE. Interestingly, ASA, which is a drug that is widely prescribed for various cardiovascular conditions and thus not necessarily associated with "high risk" comorbidities, was not associated with IE in our study.
Although sample size restrictions and limited availability of data on comorbidities did not allow us to check this hypothesis, the prescription of antithrombotic therapies in a clinical setting might therefore primarily reflect the risk profile of these patients, rather than providing actual protection against IE. The results of this study should thus not lead to the conclusion that antithrombotic therapies have failed as a strategy to prevent IE. Some antiplatelet agents or Ac might in fact reveal favorable effects in future human trials. In any case, the current finding that none of the
Limitations
Because this was a retrospective study on results of a single tertiary care hospital, only a small sample size was available for some subgroup analyses and some comparisons could not be calculated. For example, no events occurred in patients who received DAPT or P2Y12 receptor antagonists in the "AVR" cohort and no patients who received P2Y12 receptor antagonists were included in the "PVE" cohort, limiting our ability to draw conclusions on these therapies. Furthermore, although age, sex, year of surgery, and combination therapy were included as covariates in the multivariable model, it cannot be excluded that other factors might have influenced the results. Because several of our IE patients were referred cases, we did not have full access to information on comorbidities for all of them.
CONCLUSIONS
This dual analysis of the potential of Ag and Ac therapy to prevent IE highlights the limits of comparing different types of approaches and the untoward biases related to post hoc evaluation. In part I, the cohort and questions were well defined, and the observational follow-up straightforward. However, comparing multiple drug regimens a posteriori might have resulted in statistical underpowering, which could have been solved by determining sample size in a prospective protocol. In part 2, the unavoidable limits of retrospective review of patient files became evident. In addition, the fact that this dual analysis was made in a single investigational center further emphasizes the risk of conclusion biases or misinterpretation when comparing literature data generated by unrelated research groups. Therefore, determining the possible protective effect of Ag and Ac will necessitate further well-controlled studies.
Conflict of Interest Statement
The authors reported no conflicts of interest.
The Journal policy requires editors and reviewers to disclose conflicts of interest and to decline handling or reviewing manuscripts for which they may have a conflict of interest. The editors and reviewers of this article have no conflicts of interest.
APPENDIX E1. METHODS Antithrombotic Management After Bioprosthetic Aortic Valve Replacement
Anticoagulation was initiated when bleeding risk was minimized and all drains were removed. Bridging with low molecular-weight heparins was started when platelet count was > 70,000/mL and/or international normalized ratio < 2. The choice for a certain type of anticoagulant and the duration of therapy was made on the basis of patient comorbidities and risk profile. E1 The minimal duration of therapy was 3 months because the rate of thromboembolism is significantly elevated during this initial period. E2 Aspirin was continued ad vitam.
|
2021-10-28T15:11:52.344Z
|
2021-10-01T00:00:00.000
|
{
"year": 2021,
"sha1": "d033cbfd699ed161a7b4e1ce320660aa59d941fe",
"oa_license": "CCBYNCND",
"oa_url": "http://www.jtcvsopen.org/article/S2666273621003739/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d17f566964c69b796c30555a44396bdb39d5a197",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
125165780
|
pes2o/s2orc
|
v3-fos-license
|
AGE AS MODERATOR OF RELATIONSHIP BETWEEN JOB SATISFACTION AND ORGANIZATIONAL COMMITMENT.
Mohd. Ahamar Khan 1 , Shah Mohd. Khan 2 and Kalyan Kumar Sahoo 3 . 1. Research Scholar, Department of Psychology, Aligarh Muslim University, Aligarh202002. 2. Associate Professor, Department of Psychology, Aligarh Muslim University, Aligarh-202002. 3. Professor, Kings Business School, Accra, Ghana. ...................................................................................................................... Manuscript Info Abstract ......................... ........................................................................ Manuscript History
This article examined the relationships and evaluates whether demographic characteristics such as; teachers' age moderates the relationship between their job satisfaction and organizational commitment. The study was undertaken on 300 teachers employed in central university (AMU), India. Obtained data were analyzed by correlation and moderation analysis. Results of correlation analysis revealed that age was significantly and positively correlated with job satisfaction as well as organizational commitment. A significant positive correlation was also found between job satisfaction and organizational commitment. Moderation analyses confirm the significant interaction effect of age on the relationship between job satisfaction and organizational commitment. These findings substantiate the crucial role of senior university teachers' in determining their job satisfaction and organizational commitment. Implication of research discussed and suggestions for future research proposed.
…………………………………………………………………………………………………….... Introduction:-
Job satisfaction and organizational commitment (OC) are different concepts, but several meta-analyses have concluded that there is a high correlation between the two variables (Mathieu & Zajac, 1990;Meyer, Stanley, Herscovitch, & Topolnytsky, 2002). It is difficult to separate the two concepts completely as the theory suggests that they share many factor, and it is therefore natural to wonder whether these terms actually are different. Several researches have shown a causal relationship between organizational commitment and job satisfaction, while others have shown that job satisfaction as a determinant of organizational commitment (Mathieu, 1991). Porter, Steers, Mowday, and Boulian (1974) conducted a study on organizational commitment, job satisfaction and turnover of psychiatric technicians. They pointed out that the change in job satisfaction was accounted more willingly as compared to the organizational commitment. Williams and Hazer (1986) assessed the commitment model to examine the cause and effect relationship between job satisfaction and organizational commitment and further to find out the determinants of these variables. The results indicated the relationship between personal/ organizational determinants and job satisfaction. The relationship was also reported between job satisfaction and work commitment. Furthermore, commitment was intended as a significant facet of turnover. Loui (1995) reported that the commitment in public organization was significantly correlated with the trust for the organization, job involvement and job satisfaction. On the other hand, Fresko, Kfir, and Nasser (1997) explored the commitment ISSN: 2320-5407 Int. J. Adv. Res. 6(1), 995-1001 996 among teaching staff. Their findings reported that the only teachers' job satisfaction predicted their commitment. Irving, Coleman, and Cooper (1997) found the positive relationship of job satisfaction with affective and normative commitment. However, job satisfaction was negatively correlated with continuance commitment. Khan and Mishra (2002) made an attempt to estimate the canonical correlation between needs satisfaction and organizational commitment among rail engine drivers of Indian Railways. They found that needs of social attachment and esteem were significantly related with affective and normative commitment. Further, the canonical correlation between five needs (Set-I) of need satisfaction and three sub-components (Set-II) of organizational commitment was also found to be significant. Warsi, Fatima, and Sahibzada (2009) reported the significant positive relationship of work motivation, job satisfaction and organizational commitment. Further, influence of job satisfaction on organizational commitment was found more as compared to the employees' work motivation. Hsu and Chen (2012) observed that those university faculty members who achieved a higher organizational commitment score while they held a higher degree of the job satisfaction. Nagar (2012) reported that the higher job satisfaction made greater contribution for organizational commitment. Suma and Lesha (2013) examined that satisfaction with work-itself, the quality of supervision and pay satisfaction had significant positive impact on organizational commitment among municipality employees working in Shkoder. Further, they reported high degree of organizational commitment and satisfaction with work-itself, supervision, salary, coworkers and opportunities for promotion. 
Srivastava (2013) found the significant positive relationship between job satisfaction and organizational commitment. Further, trust and locus of control significantly moderated the relationship between job satisfaction and organizational commitment. Nifadkar and Dongre (2014) revealed the significant positive correlation of job satisfaction and age with organizational commitment of teaching staff. Khan (2015b) revealed the direct effect of satisfaction of social needs, citizenship behavior and recognition at work and work relations on organizational commitment among loco pilots of Indian Railways. Ismail and Abd Razak (2016) conducted a study at Fire and Rescue Department of Malaysia and observed that job satisfaction was significantly related with organizational commitment, intrinsic satisfaction was significantly related with organizational commitment, and extrinsic satisfaction was significantly related with organizational commitment. Findings suggested that employees' intrinsic as well as extrinsic satisfaction with the job lead towards the higher organizational commitment.
The literature provides significant evidence of a persistent positive relationship between job satisfaction and organizational commitment in diverse work setting. Considering the importance of proposed variables and research gap; the present study was undertaken to study the moderating effect of demographic characteristic such as age on the relationship between job satisfaction and organizational commitment of university teachers. Because teachers are an extremely important facet of any society for a number of reasons and their role in society is both significant and valuable. They play an extraordinary role in the lives of children in the formative years of their development and the importance of teachers is something that cannot be understated. They are also considered as the leader of the human capital, play a central role in societal development. They are considered pillars of society because they bear the responsibility of educating and training students upon whom our future trusted. In the university setting, teacher shoulders the different responsibilities such as teaching role, research role, service/administrative role, social role and political role etc. Hence, it was assumed that the teachers' wide range of experience along the time will be significant impact on our societal development.
Objectives of the Study:-1. To examine the relationship between age and job satisfaction. 2. To examine the relationship between age and organizational commitment. 3. To examine the relationship between job satisfaction and organizational commitment. 4. To study the moderating effect of age on the relationship between job satisfaction and organizational commitment.
Hypotheses:-H a 1:-There will be positive relationship between age and job satisfaction. H a 2:-There will be positive relationship between age and organizational commitment. H a 3:-There will be positive relationship between job satisfaction and organizational commitment. H a 4:-Age will moderate the relationship between job satisfaction and organizational commitment.
Sample:-
A representative sample of teachers was selected ensuring the quality and characteristics of the target population. In the present study, faculty members teaching in an Indian Central University (Aligarh Muslim University) were the 997 target population. Based on the criterion developed by Carvalho (1984) a sample size of 200 respondents is sufficient for the research study. To ensure the true variance and minimizing the error variance, systematic and random errors; the sample size of the present study was 300 teachers (150 male and 150 female) selected from different faculties using stratified random sampling. In stratified random sampling, the strata were formed based on teachers' working strength of the faculty. In the sample, the mean age of the teachers was 45.28 years (SD = 10.01) with 26 years as minimum and 64 years as the maximum. The mean teaching-experience of the teachers was 16.51 years (SD = 10.33) with 2 years as a minimum and 35 years as the maximum. In terms of educational qualification, they were 90 Post Graduates and 210 Ph.D. In academic rank they were 134 Assistant Professors, 89 Associate Professors and 77 Professors.
Measures:-
Two standardized psychometric measures were used to study the job satisfaction and organizational commitment of university teachers. The description of the measures used in the present study is discussed in the following paragraphs.
Data Collection Procedure:-
Teachers were contacted individually. They were explained about the utility of the study and requested with due respect to extend their cooperation for success of the study. Great care was taken to address any misunderstanding about the purpose of the study and they were told that it is to be used only for research. They were requested to discuss when they feel any doubt in understanding and resultant response of the items, but don't leave any item un-attempted. They were assured of the confidentiality that their identity would not be disclosed at any stage. The order of the tools administration was job satisfaction scale, organization commitment scale and at last personal data sheet.
Data Analysis:-
Keeping in view the objectives and hypotheses of the present research, statistical analyses and discussion have been carried out in two stages. At the first stage, the Pearson Correlation Analysis (Zero order) was calculated in order to determine the relationship of proposed variables. At the second stage, Moderation Analysis was undertaken to examine the role of age as moderator on the relationship between job satisfaction and organizational commitment. The analyses were carried out using software SPSS ver. 22. .10 (p<.05), 0.15 (p<.01), 0.19 (p<.001), one-tailed D= Age, X= Job Satisfaction, Y= Organizational Commitment.
Results and Discussion:-Pearson Correlation Analysis (Zero order):-
Moderation Analysis:-Hierarchical multiple regression analysis outputs in following paragraphs shows the effects of moderating variable. Variables were standardized to make interpretations easier and to avoid multicollinearity. Table-2 shows hierarchical regression analysis outputs for the moderation effect of age on the relationship between job satisfaction and organizational commitment. In the analysis Model 1 (without the interaction) and Model 2 (with the interaction) was examined using the PROCESS procedure given by Andrew F. Hayes (http:// www.afhayes.com). Moderation schema age as moderator on the relationship between job satisfaction and organizational commitment prepared and showed in Figure 1. Table 2, Model 1 without the interaction accounted for a significant amount of variance in organizational commitment, R 2 = .287, F (2, 297) = 59.738, p< .001. It can be inferred that job satisfaction is a significant predictor of organizational commitment. Next, the interaction between age and job satisfaction was added to the regression model (Model 2) which accounted for a significant amount of variance in organizational commitment, R 2 = 0.034, F(1, 296) = 14.904, p< .001. On the basis of this quantitative analysis it can be inferred that there is a significant moderating effect of age on the relationship between job satisfaction and organizational commitment. Further, for visualizing the conditional effect of job satisfaction (X) on organizational commitment (Y) interaction plot prepared and shown as Examination of the interaction plot showed an enhancing effect as age and job satisfaction increases, teachers' organizational commitment also increases. At low job satisfaction, teachers' organizational commitment was different for their low, average, and high age groups. Teachers with average and high job satisfaction with their low, average and high age groups had an enhancing pattern in organizational commitment and came closer to substantiate the interaction effect. Therefore, H a 4 is supported. The finding is in accordance with the findings of Yucel and Bektas (2012) who confirmed the significant moderating effect of age on the relationship between job satisfaction and organizational commitment.
Aforementioned, findings indicated that as the age of university teachers increases, their job satisfaction and commitment also increases. Further, the job satisfaction of teachers appeared as antecedent of organizational commitment. Thus, the results empirically confirmed that older teachers' are highly satisfied and more committed in their academic profession as compared to the younger teachers. Probably, this result may be due to the fact that the older teachers show high levels of organizational commitment in that they only remain with the organization because it would be hard for them to leave due to fewer employment opportunities, shortage of available alternatives, or distraction of their life. Therefore, teachers in the older age group have more organizational commitment as compared to the younger age group.
Implication of Study:-
The findings of the present piece of research work provide the conceptual implication in understanding the relationship of teachers' age with their job satisfaction and organizational commitment. The higher academic body, university management and trainer (academic staff college) can plan an intervention to uphold an organizational commitment of very important workforce who carry on the responsibilities to integrate critical thoughts, examination of emotions and moral values to broaden the learning experience and make it more relevant to everyday life situations. Further, age emerged as significant moderator on the relationship between job satisfaction and organizational commitment which in turn to set on the eyes of university management that older teacher are invaluable assets in operational perspective of academic setting. Older teachers having massive workingexperiences in different academic domains which can serve as input for the institution to employ them in decision making, in identifying organizations' key issues in order to develop strategies to enhance the institutions' rank.
|
2019-04-22T13:11:05.379Z
|
2018-01-31T00:00:00.000
|
{
"year": 2018,
"sha1": "2093c2429c17edf0db40a5083a82fbf7b2fdd46b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21474/ijar01/6312",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "d2b802f24c7516bc0a65e52a541202aa69840d4e",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
46897502
|
pes2o/s2orc
|
v3-fos-license
|
Proposal of a form for the collection of videolaryngostroboscopy basic findings
Videolaryngostroboscopy is a useful investigation required for a correct diagnosis of laryngeal diseases and voice disorders. We present a form for the collection of basic laryngostroboscopic findings, which provides for the evaluation of the classical six parameters codified by Hirano (symmetry and periodicity of glottic vibration, glottic closure, profile of vocal fold edge, amplitude of vocal fold vibration, mucosal wave) and six other parameters which we have included in the form for an essential and complete laryngostroboscopic evaluation (supraglottic framework behaviour, seat of phonatory vibration, vocal fold morphology and motility, level of the vocal fold, stops of vocal fold mucosa vibration). This form was created in 2002 during the elaboration of the protocol for the assessment of dysphonia of the Italian Society of Phoniatrics and Logopedics, which follows the guidelines of the European Laryngological Society published in 2001. We used this form for 15 years in our daily laryngological practice with great satisfaction. We propose a more detailed version of this form, which provides for drawings which show the various videolaryngostroboscopic findings, helping the laryngologist in the collection of videolaryngostroboscopic examination basic findings.
Videolaryngostroboscopy is one of the most widely performed examination in laryngological and in phoniatric fields. It associates the advantages of stroboscopic observation with the video recording of images and voice.
The stroboscopic effect is based on the particular functioning of the human eye, whereby an image remains imprinted in the retina for two-tenths of a second (Talbot's law); if five images are given for subsequent glottic vibration cycles, they are "assembled" as if they were a single moving image. The selection of images to be "assembled" is made by light flashes synchronized with the glottic vibration frequency picked up by a microphone. The technological evolution of the various components (stroboscopic light sources, endoscopic investigation apparatus, computerized video recording instruments) has made it possible to increase the diagnostic sensitivity and fields of application of this examination. As Rosen [1] states, despite the limitation represented by the subjectivity of the interpretation of video images which reduces their applicability in the field of research, videolaryngostroboscopy is currently the most important clinical tool for diagnostic evaluation and subsequent therapeutic planning of patients with voice disorders. In fact the various pathological situations responsible for dysphonia almost always cause voice disorder through an alteration of glottic vibration. Obviously, this diagnostic tool requires adequate experience on the part of the examiner to capture the large amount of information that it can provide. Videolaryngostroboscopy has some limitations, in particular it is only possible with regular voice signal, time duration is needed and also a regular vibratory pattern. Videolaryngostroboscopy has become a useful examination to confirm the diagnostic suspicion of congenital and acquired vocal fold lesions; in these cases, when the voice signal is too irregular to capture a fundamental frequency for stroboscopy, or if the vibratory pattern is too irregular, high speed videolaryngoscopy is one of the main indications. The alteration of the mucosal wave is University of Rome "Cattolica", Section "Claudiana", Bolzano, Italy due to the adherence of the vocal fold epithelium to the vocal ligament, caused by the lack of the lax tissue which is normally contained in the superficial layer of the lamina propria, causing a stop of the mucosal wave [2], as it happens in the vocal fold scars and in laryngeal pre-neoplastic lesions [3][4][5][6].
The laryngostroboscopic examination provides a "slow-motion" mode, which displays the vibratory cycle in slow motion, with the speed of the displayed glottic vibration varied by means of a pedal, and a "stand-still" mode, in which the displayed glottic vibration is fixed in the closing, semi-opening or opening phase.
The recording of the videolaryngostroboscopic examination is currently carried out in digital mode, with storage of the video recording on a host computer, so that it can be reviewed and compared with previous examinations.
The standard examination is usually performed with rigid 70° or 90° optics; the new flexible endoscopes with a distal-chip camera allow the examiner to perform an examination of the same quality as with rigid optics, but with greater comfort for the patient, and they permit a more physiological evaluation, since the examination does not require protrusion of the tongue.
Videolaryngostroboscopy is currently also performed in the operating room during thyroplasty or fiberendoscopic phonosurgery, with the possibility of immediate control of the results of phonosurgery on glottic vibration.
The main parameters of interpretation of the videolaryngostroboscopic examination are still those codified by Hirano and Cornut [2,7]. In 2002, Bergamini, Ricci-Maccarini and Fustos, during the development of the Protocol for Assessment of Dysphonia of the Italian Society of Phoniatrics and Logopedics (SIFEL Protocol), which follows the guidelines of the European Laryngological Society published in 2001 [8], elaborated a form which also contains other evaluation parameters that complete the collection of essential findings obtained from the videolaryngostroboscopic examination, improving this important diagnostic tool. The form provides for the evaluation of 12 parameters, comprising the 6 classical parameters codified by Hirano [7] and 6 other parameters which complete the clinical-instrumental investigation (Fig. 1). The form was subsequently extended in 2008 on the occasion of the official lecture at the 32nd National Congress of the Association of Italian Hospital Otorhinolaryngologists, but 15 years of use in phoniatric clinical practice have led us to continue using the version contained in the SIFEL Protocol, since it is simple, quick to perform and contains all the essential evaluations for a complete videolaryngostroboscopic examination.
In the present work we have added, for each parameter contained in the form, a schematization of the corresponding videolaryngostroboscopic image, to provide the examiner with a complete working tool that leaves no doubts of interpretation.
In addition to the findings relating to the 12 laryngostroboscopic parameters contained in the form, some lines are also provided for the annotation of remarks regarding other particular findings which may be useful for performing a more complete videolaryngostroboscopic examination.
Also noted are the type of endoscope used, rigid or flexible, the mean fundamental frequency in Hz of the vowel produced during the examination, its loudness in dB and the vocal register (modal or falsetto).
We now illustrate the 12 parameters contained in the form, shown in Fig. 1, which can be digitally filed so as to obtain a database for clinical comparisons between examinations performed on the same patient, for example before and after a treatment and statistical comparisons between examinations performed on different patients.
Supraglottic framework behaviour
This parameter, introduced during the elaboration of the form included in the SIFEL Protocol, records one of the following: normal behaviour of the supraglottic structures during phonation; hypercontraction of the ventricular bands, with possible vibratory contact between them; anteroposterior supraglottic hypercontraction, with possible vibratory contact between the arytenoids and the foot of the epiglottis; or all-round hypercontraction of the supraglottic structures.
This parameter allows the detection of compensatory supraglottic hypercontraction in response to glottic insufficiency of organic or functional type, within a framework of hyperkinetic dysphonia.
Seat of voice source
This parameter was also recently introduced in the SIFEL Protocol. It is particularly useful in the evaluation of the outcomes of cordectomy or partial laryngectomy [5]. The vibratory contact can normally be between vocal fold and vocal fold; or between the ventricular bands, possibly at the same time as the glottic vibration (bitonal voice); or between vocal fold and ventricular band; or between arytenoid and ventricular band; or between the arytenoids, and between arytenoid/s and epiglottis; or between arytenoid/s and tongue base, in the outcomes of supracricoid or supratracheal partial laryngectomy, when the vocal folds, ventricular bands and epiglottis have been removed [5].
Vocal fold morphology
This parameter is useful for annotating the presence of normal morphology of the vocal folds and ventricular bands, with absence of lesions; or the presence of hypertrophy or atrophy of the vocal folds and/or ventricular bands; or the presence of laryngeal lesions, with some lines for describing the type and location of the discovered lesions.
Vocal fold motility
This is an indispensable parameter in the diagnosis of laryngeal paralysis. The vocal fold can be normally mobile, hypomobile, hyperadducted during phonation (as compensation for a paralysis of the contralateral vocal fold), immobile; in case of immobility, the vocal fold can be immobile in median, paramedian, intermediate or abducted position.
Level of the vocal fold
It is another important parameter for the evaluation of vocal fold paralysis. The immobile vocal fold can be at a normal level compared with the contralateral normomobile vocal fold, or it can be under-levelled (in most cases) or over-levelled. This parameter must be evaluated by means of a flexible endoscope, possibly rotating the video camera by 180° (or inverting the image in the case of a distal-chip-camera flexible endoscope), to obtain an image "from behind" which permits better appreciation of the level difference between the two vocal folds.
Symmetry of glottic vibration
It is a classic parameter codified by Hirano [7]. It can be normal, when the opening phase of the two vocal folds during glottic vibration has the same amplitude; altered in amplitude, when one vocal fold has a less wide opening phase than the other (e.g. due to the presence of an intracordal cyst); or asymmetric in phase, when during glottic vibration one vocal fold is in the opening phase while the other is in the closing phase (e.g. in some cases of muscle tension dysphonia). For a correct evaluation of this parameter, it is useful to perform the laryngostroboscopic examination both in "slow-motion" and in "stand-still" mode; the latter allows fixing the various phases of the glottic vibratory cycle, highlighting any phase and/or amplitude asymmetry.
Periodicity of glottic vibration
A classic parameter codified by Hirano [7]. It can be regular, irregular or inconsistent. In the case of irregular glottic vibration, the laryngostroboscopic examination in stand-still mode shows an unclear image of the various phases of the vibratory cycle, since the cycle is not repeated in the same way cycle after cycle. The evaluation of inconsistent glottic vibration is a limitation of laryngostroboscopy; in these cases high-speed laryngeal endoscopy [9] is preferable, as it allows visualization of all the cycles of glottic and/or supraglottic vibration, whether regular, irregular or inconsistent, with a slower but real image (not "reconstructed", as in laryngostroboscopy) of 2 s of phonatory vibration, displayed with a time delay and without sound.
Glottic closure
It is a classic parameter codified by Hirano [7]. It is one of the fundamental parameters of videolaryngostroboscopy, essential in the diagnosis of glottic insufficiency.
Glottic closure can be complete, incomplete or inconstant (sometimes complete and sometimes incomplete). The incomplete glottic closure may be slightly incomplete or very incomplete, with glottic gap morphology that may be: spindle-shaped, posterior triangle, anterior, anterior hourglass, posterior hourglass, irregular (due to the presence of neoformations affecting the vocal fold edge), total (which affects the entire length of the glottis).
Profile of vocal fold edge
Parameter codified by Hirano [7]. It can be straight, concave, convex or irregular (due to the presence of neoformations).
Amplitude of vocal fold vibration
A classic parameter codified by Hirano [7]. It can be normal, small (as in benign intracordal lesions), large (as in denervated and flaccid vocal folds) or absent (as in neoplastic infiltration of the vocalis muscle) [3,4,6]. The evaluation of vocal fold vibration amplitude must be kept distinct from that of the mucosal wave.
Mucosal wave
It is a classic parameter codified by Hirano [7] and one of the fundamental parameters of laryngostroboscopy. It evaluates the progression of the wave generated by the flowing of the "cover" (vocal fold epithelium) over the "body" (vocal ligament and vocalis muscle), made possible by the lax tissue contained in the superficial layer of the lamina propria and driven by subglottic pressure and myoelastic and aerodynamic forces [7]. The mucosal wave may be absent due to adherence between the epithelium and the vocal ligament, with vocal fold vibration still present albeit reduced when the vocalis muscle is normal, as in precancerous vocal fold lesions and early glottic cancer [3,4,6] or in benign vocal fold lesions such as deep vergeture and iatrogenic vocal fold scars [2]; otherwise, the mucosal wave can be small or large.
Stops of vocal fold mucosa vibration
Parameter introduced during the development of the SIFEL Protocol. It defines, more specifically, the areas of adherence of the vocal fold mucosa where the mucosal wave stops. Vibration stops may occur constantly or occasionally and they may affect the anterior third, the middle third or the posterior third of the vocal fold, or they may affect the entire vocal fold.
At the end of the videolaryngostroboscopic examination, together with the specialist report, the four most significant images are printed, in which the vocal folds are in breathing position, in glottic closure, in half-opening and in opening phase.
We recommend a more widespread use of the videolaryngostroboscopic examination in laryngological clinical practice and the use of this form to collect the examination findings, in order to obtain a correct laryngological diagnosis and evaluation parameters which can be compared with those obtained from the same patient at different times, especially before and after medical treatment, voice therapy or phonosurgery. The form is particularly useful for the training of young laryngologists, while for expert laryngologists it could be too long to apply; a shortened version of the form will be elaborated for this purpose in the future.
Compliance with ethical standards
Conflict of interest The authors declare no conflict of interest.
Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Informed consent Informed consent was obtained from all individual participants included in the study.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Asymmetric Engagement of Amygdala and Its Gamma Connectivity in Early Emotional Face Processing
The amygdala has been regarded as a key substrate for emotion processing. However, the engagement of the left and right amygdala during the early perceptual processing of different emotional faces remains unclear. We investigated the temporal profiles of oscillatory gamma activity in the amygdala and the effective connectivity of the amygdala with the thalamus and cortical areas during implicit emotion-perceptual tasks using event-related magnetoencephalography (MEG). We found that within 100 ms after stimulus onset the right amygdala habituated to emotional faces rapidly (with a duration of around 20–30 ms), whereas activity in the left amygdala (with a duration of around 50–60 ms) was sustained longer than that in the right. Our data suggest that the right amygdala could be linked to autonomic arousal generated by facial emotions, while the left amygdala might be involved in decoding or evaluating expressive faces during early perceptual emotion processing. The results of the effective connectivity analysis provide evidence that only negative emotional processing engages both cortical and subcortical pathways connected to the right amygdala, representing its evolutionary significance (survival). These findings demonstrate the asymmetric engagement of the bilateral amygdala in emotional face processing as well as the capability of MEG for assessing thalamo-cortico-limbic circuitry.
Introduction
Rapid detection of facial expressions and emotional salience is critical for social communication and interaction. A dual-route model [1] proposes that rapid brain responses to emotional facial expressions occur through two pathways, one from the subcortical colliculo-pulvinar route to the amygdala and the other from cortical visual areas to the amygdala, within 120 ms [2]. Such rapid amygdala engagement is biologically crucial for survival under threatening confrontation [3,4]. Furthermore, a recent MEG study [17] demonstrated that the STS can help distinguish facial expressions at approximately 45 ms after stimulus onset, indicating that facial features can be decoded or evaluated at a very early perceptual stage. Hence, it is plausible that the STS plays an essential role in the cortical-amygdala pathway.
In the present study, we addressed two questions. First, are the temporal patterns of left and right amygdala activity in response to different facial expressions distinct in early emotion perception (within 100 ms after stimulus onset)? Second, which model, dual-route or many-road, is more representative for each emotion? The two hypotheses of this study are as follows: 1) that the right amygdala would be activated earlier and habituate faster in response to threat-related faces than to positive faces, whereas the left amygdala would be activated longer in response to emotional faces than to neutral faces; and 2) that dual-route processing would be engaged only in response to threat-related faces and not in response to positive or neutral faces.
To answer these questions, the brain dynamics of neural responses to emotional face images were investigated using MEG. We focused on gamma-band oscillations of emotional responses, which act as an integrative mechanism underlying cognitive processing [26] or emotion processing [27]. A beamforming technique [28] was used to reconstruct the brain activity in a voxel-wise manner, followed by an effective connectivity analysis using Granger causality analysis (GCA) [29] to determine the directional relationship between cortical and subcortical regions. Previous fMRI studies have reported that amygdala activity highly depends on relatively passive or implicit processing of an emotion [2]. The reduction of amygdala responses to emotional facial expressions, when the demand for explicit emotion recognition is increased, is a common observation across studies [19,30-32]. A meta-analysis of 385 PET and fMRI studies concluded that passive/implicit processing of emotions is associated with a higher probability of amygdala activation than active task instructions [33]. Based on these findings, we adopted a gender judgment task on visually displayed face images to engage less attentional loading of emotional stimuli to investigate amygdala activity. Finally, we determined connectivity models and compared gamma activity of the left amygdala with that of the right amygdala for each facial expression.
Materials and Methods Subjects
Twenty-four healthy volunteers (nine male, mean age 36.6 ± 11.3 yrs) were enrolled. All participants were right-handed as assessed by the Edinburgh Handedness Inventory and had normal or corrected-to-normal vision. They underwent a Mini International Neuropsychiatric Interview by a psychiatrist in Taipei Veterans General Hospital before the experiments to exclude possible morbidity associated with psychiatric illness. All subjects signed written consent forms and were financially compensated for their participation. The study was approved by the Institutional Review Board at Taipei Veterans General Hospital.
Stimuli and experimental design
Face images with expressive emotions including neutral, sad, happy, and angry, were displayed in a random order at the center of a back-projected translucent screen, located 100 cm in front of the subject, and subtended 14° (width) by 17° (height) of visual angle. To avoid ethnic/cultural differences in emotion processing, the facial expression images used in the present study were all collected from Taiwanese and processed into gray-scaled, face-only images. Each face stimulus was presented for 1500 ms, followed by a preparatory blank image (jittered with a mean duration of 1000 ms) and a response cue of 500 ms, using STIM2 software (Neuroscan Inc.). There were 72 trials for each emotion. Subjects were instructed to perform a gender discrimination task on each presented face image by lifting their left or right index finger for male or female, respectively, when a response cue was displayed. A training phase was conducted before the MEG recordings.
MEG and MRI recordings
Event-related MEG signals at a sampling rate of 1000 Hz with a 0.03-330 Hz band-pass filter were recorded using a whole-head 306-channel neuromagnetometer (Vectorview, Elekta-Neuromag, Helsinki, Finland). Trials contaminated by eye movements or containing deflections exceeding 9000 fT/cm were discarded. MEG signals were processed using the signal space projection method [34] to remove urban interference and obtain noise-free trials for source analysis. Three fiducial landmarks (nasion and bilateral preauricular points) and four head position indicator coils were localized using the Isotrak system (Polhemus Navigation Sciences, Colchester, Vermont, USA). The fiducial points allowed precise co-registration of the MEG and structural magnetic resonance imaging (MRI) data. The anatomical MRI data were acquired by a GE Signa EXCITE 1.5 T system using an 8-channel phased-array head coil with a high-resolution, T1-weighted, 3D fast spoiled gradient-recalled echo sequence (3D FSPGR, TR = 8.67 ms, TE = 1.86 ms, inversion time = 400 ms, matrix size = 256 × 256 × 124, and voxel size = 1.02 × 1.02 × 1.5 mm 3 ). Some of the present data have been published previously [27].
MEG source analysis
Beamformer-based analyses using the trial-by-trial data were performed for each emotion. The noise-free MEG data were filtered in a frequency band of 35 to 55 Hz (gamma rhythm). These gamma-band signals were then analyzed with a beamforming method [28], which yielded a spatial filter designed, under a unit-gain constraint, for each targeted brain location to minimize the variance of the filtered activity over a time interval of interest. In the present study we used a window of 300 ms (from 0 ms to 300 ms after stimulus onset) as the time interval of interest, which is around 10 cycles of gamma activity, to estimate the spatial filter for accurate source reconstruction. A homogeneous spherical conductor model was used in the calculation of the forward solution. Tikhonov regularization (Tikhonov 1977) was adopted as a smoothness term, where the regularization parameter indicates the noise suppression factor. When the noise level is high, more regularization has to be applied to stabilize the solution, at the cost of some spatial resolution.
For each location in the brain, a sliding window of 30 ms with a shift of 5 ms was applied [35,36] to calculate the temporal dynamics of the power ratio of brain activity reconstructed using the estimated spatial filter. The window was slid with its center shifting from 35 ms to 165 ms. Within the sliding window, the beamformer estimated the gamma-band activation index (GBAI), a pseudo-F-statistic value, by calculating the power ratio of the reconstructed gamma activity between the active and control states. Here the interval of the control state was chosen from 300 ms to 200 ms before stimulus onset. Two simulation studies (S1 Fig. and S2 Fig.) were conducted to further demonstrate the effectiveness of the proposed method for estimating the temporal power dynamics of gamma activity. The whole-brain GBAI map was then obtained by iteratively scanning through the brain volume using the same procedure with an isotropic spatial resolution of 4 mm. Because angry and happy emotions have relatively high arousal whereas sadness has relatively low arousal [30,33,37], only the angry and happy conditions, which have similar arousal levels, were analyzed in this study.
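As a rough illustration of this windowed power-ratio computation, the following Python sketch (function name, array layout, and inputs are our own; the original analysis operated on beamformer output inside a full MEG pipeline) computes a GBAI-like time course for one reconstructed source signal:

```python
import numpy as np

def gbai_time_course(source_ts, fs=1000, stim_idx=1000):
    """Sliding-window gamma-band activation index (GBAI) for one
    beamformer-reconstructed, gamma-band-filtered source signal.
    `source_ts` is a hypothetical 1-D array sampled at `fs` Hz with the
    stimulus at sample `stim_idx`. GBAI is the power ratio (pseudo-F)
    between a 30-ms active window and a fixed pre-stimulus control
    window (-300 to -200 ms), following the description in the text."""
    ms = fs // 1000                       # samples per millisecond
    ctrl = source_ts[stim_idx - 300 * ms : stim_idx - 200 * ms]
    ctrl_power = np.mean(ctrl ** 2)
    half = 15 * ms                        # half of the 30-ms active window
    centers = np.arange(35, 166, 5)       # window centers: 35-165 ms, 5-ms shift
    gbai = np.array([
        np.mean(source_ts[stim_idx + c * ms - half :
                          stim_idx + c * ms + half] ** 2) / ctrl_power
        for c in centers
    ])
    return centers, gbai
```

In the actual pipeline, such a time course would be computed per voxel and per subject and then entered into the group statistics described below.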
Localization and extraction of cortical gamma activity
The overall processing flow of the analysis and integration of functional and structural imaging data is summarized in Fig. 1. First, for each individual the deformation field was determined by aligning individual T1-weighted MRIs to a standard stereotactic space (Montreal Neurological Institute, MNI, space) using BIRT software [38]. For group analysis, the corresponding GBAI maps of each individual obtained from the beamformer were then aligned to the same standard stereotactic space by applying the resolved deformation field with an isotropic voxel size of 2 mm.
The whole-brain group analysis was used to locate the brain regions with significant activity in the whole brain and then to calculate the mean temporal profile of gamma activity within the located brain regions for further connectivity analysis. In order to provide a unified approach to model selection in GCA for all emotions, the activation foci were determined by combining both functional and anatomical information to accommodate inter-subject variability in functional neuroanatomy. To obtain the functional activation map, a one-sample t-test was first conducted on GBAI maps at each time point using SPM2 software (http://www.fil.ion.ucl.ac.uk/spm). The significance level was set at uncorrected p < 10⁻⁷ (t(23) = 7.3) and cluster extension > 1000. This threshold also met the false discovery rate (FDR) corrected p value of < 10⁻⁵. Compared to an FDR-corrected p-value, an uncorrected p-value, corresponding to a specific t-value for all time points and conditions, is less dependent on the distribution of the underlying signal. The uncorrected p value (t(23) = 7.3, p < 10⁻⁷) remained significant after multiple-testing correction across time (met Bonferroni-corrected p < 10⁻⁵). The union of the significant voxels surviving at all time points was then taken as the functional activation map. As to anatomical information, we employed the automated anatomical labeling (AAL) atlas [39] to parcel the cerebral cortex into 90 brain regions. Finally, the intersection between each AAL-defined region and the functional map was identified as a region of interest (ROI) if it was larger than 10 voxels. The time course of the mean gamma activity within each ROI was calculated for connectivity analysis.
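As a sanity check on the quoted thresholds (not part of the original pipeline), a short Python snippet verifies that t = 7.3 with 23 degrees of freedom indeed corresponds to a tail probability on the order of 10⁻⁷; the paper does not state the sidedness of the test, so the one-tailed value is shown:

```python
from scipy import stats

# With 24 subjects, the one-sample t-test has df = 23. Check that the
# reported t threshold of 7.3 lies in the ~1e-7 tail of the t-distribution.
df, t_thresh = 23, 7.3
p_one_tailed = stats.t.sf(t_thresh, df)  # survival function = upper tail
print(f"t({df}) = {t_thresh} -> one-tailed p = {p_one_tailed:.2e}")
```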
To further compare the activity between the left and right amygdala, the mean gamma activity of the left and right amygdala ROIs determined by the above-mentioned procedure was analyzed by a 2-tailed paired t-test at each time point. An effect size was computed by a standardized measure (Cohen's d) [40].
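A minimal sketch of this comparison in Python is given below; the input arrays are hypothetical per-subject GBAI values at one time point, and Cohen's d is computed on the paired differences (one of several common conventions for paired designs):

```python
import numpy as np
from scipy import stats

def compare_amygdala_sides(left, right):
    """Two-tailed paired t-test between left and right amygdala gamma
    activity at one time point. `left` and `right` are hypothetical arrays
    of per-subject mean GBAI values, one value per subject."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    t, p = stats.ttest_rel(left, right)      # two-tailed by default
    diff = left - right
    d = diff.mean() / diff.std(ddof=1)       # Cohen's d on the differences
    return t, p, d
```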
ROI identification and GCA
We performed effective connectivity analysis using GCA [29] to investigate directional interactions among brain regions. GCA, a statistical procedure based on lagged time-series regression models that determines the ability of one time-varying signal to predict the future behavior of another, has been widely used in neuroimaging research [11,41,42].
The GCA in this study was performed on the time series of GBAI in the above-defined ROIs with the Causal Connectivity Analysis toolbox [43] in Matlab. For each subject, the time series of GBAI were calculated by averaging across significant voxels in the ROIs at each time point for each facial expression. To ensure that the mean, variance, and auto-covariance of a series remained constant over time, a Dickey-Fuller test (p < 0.01) was performed to verify whether the time series of GBAI were covariance-stationary. In the present study, a model of order two was selected to identify the main characteristics of the networks. Kopell et al. [44] showed that gamma oscillations support robust lag synchronization between two sites of up to 8-10 ms by simulating physiological parameter regimes. According to synchronous firing within a ±10 ms time lag [45] and neuronal synchronization of approximately 10 ms [44,46], a time lag of 10 ms was chosen [41].
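A minimal Python sketch of the per-subject steps, a stationarity check followed by a pairwise Granger test at model order two, is shown below, using statsmodels in place of the original Matlab toolbox (so this is an analogous computation, not the authors' exact multivariate implementation):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

def pairwise_granger(x, y, order=2):
    """Test whether the GBAI series `x` Granger-causes `y` at model order
    `order`. The augmented Dickey-Fuller test flags possible
    non-stationarity first: its null hypothesis is a unit root, so small
    p values indicate stationarity, as required in the text."""
    for name, ts in (("x", x), ("y", y)):
        p_adf = adfuller(ts)[1]
        if p_adf >= 0.01:
            print(f"warning: series {name} may be non-stationary (p={p_adf:.3f})")
    # grangercausalitytests expects columns ordered [effect, cause]
    res = grangercausalitytests(np.column_stack([y, x]), maxlag=order)
    f_stat, p_val, _, _ = res[order][0]["ssr_ftest"]
    return f_stat, p_val
```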
To examine whether core cortical regions of face perception (including cuneus, fusiform gyrus (FG), and STS) and the subcortical region (thalamus) are involved in the efferent pathway to the amygdala during processing of emotional faces, an effective connectivity network consisting of four regions was constructed using lagged multivariate vector auto-regressions on the time series of GBAI. For model space specification, four out of 90 AAL-defined cerebral regions were selected as follows. The first one was either the left or right amygdala because of the essential role of the amygdala in the dual-route model. The second and third regions were selected from the bilateral thalamus and from the three bilateral face-related cortical regions (cuneus/FG/STS), respectively, to ensure that both thalamus-amygdala and cortical area-amygdala pathways were evaluated concurrently in the connectivity model. The fourth region was then selected from the remaining 87 AAL-defined cerebral regions. In total, there were 2,088 (2 × 2 × 6 × 87) models to be examined for each emotion.
For each model under examination, the significance of the directed link between each pair of regions was first estimated by GCA [41,47] for each subject (F-test, p < 0.05, uncorrected) and then verified by group analysis. Significance of connections at the group level was examined using a binomial test (p < 0.0321, 17/24 subjects; that is, at least 17 of the 24 subjects passed the F-test at the individual level) [42,48]. For a success probability of 0.5 (a connection exists or not) in a binomial distribution with 24 trials, 17 is the minimal number of successes whose tail probability falls below 0.05, so the critical number of 17 or more successes (the number of subjects showing the connection) was selected. Finally, the models containing two links to the amygdala, one from the thalamus and the other from face-related areas, were determined as the representative models for each emotion.
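The group-level criterion can be reproduced directly; the following snippet (a check of the quoted tail probability, not part of the original analysis) computes P(X ≥ 17) for n = 24 and p = 0.5, matching the p < 0.0321 threshold in the text:

```python
from scipy import stats

# Under the null, each of the 24 subjects shows a given connection with
# probability 0.5, so the chance that 17 or more subjects pass the
# individual F-test is the upper binomial tail.
n, k = 24, 17
p_tail = stats.binom.sf(k - 1, n, 0.5)   # P(X >= 17) ~= 0.0320
print(f"P(X >= {k} | n={n}, p=0.5) = {p_tail:.4f}")
```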
Results
For each emotion, the spatiotemporal GBAI maps obtained from the beamforming method with a 5-ms sliding window illustrate the significant gamma-band activity of the whole brain (see S3 Fig.).
Temporal profiles of each emotion in the amygdala
The spatial patterns and temporal profiles (35-125 ms) of the bilateral amygdala activity (t-value ≥ 7, extended cluster size ≥ 10 voxels) are illustrated in the left and right panels of Fig. 2, respectively. We found that angry faces elicited amygdala activity on the right side at 60-70 ms and on the left side at 65-70 ms and 90-105 ms (Fig. 2A). In response to happy faces, the left amygdala was activated at 35-70 ms and 90-100 ms, and the right amygdala was activated at 80-110 ms (Fig. 2B). In response to neutral facial expressions, right amygdala activity was detected at 45-95 ms and left amygdala activity at 45-55 ms and 110-125 ms (Fig. 2C). The angry (85, 90, and 95 ms; left > right) and happy facial expressions (95 and 100 ms; right > left) showed significant differences of gamma-band activity between the left and right amygdala (paired t(23) ≥ 2.61, p < 0.016, effect size ≥ 0.53). Neutral facial expressions did not show significantly different activation. The finding of distinct temporal profiles of the left and right amygdala activity for different emotions confirmed our first hypothesis.
Onset and peak latencies for different emotions in the amygdala, thalamus and face-related cortical regions

Table 1 summarizes the activity onset, peak latency, and corresponding MNI coordinates of the bilateral amygdala, thalamus, cuneus, FG, and STS for all expressions. The activity onset denoted the time of the significantly increased gamma oscillatory activity (t(23) = 7.3, uncorrected p < 10⁻⁷, met FDR-corrected p < 10⁻⁵, effect size = 1.49), which was compared to the baseline period (from 300 ms to 200 ms before stimulus onset). The peak latency represented the time point of the maximal value of gamma activity compared to the control period.
In response to angry facial expressions, the early event-related gamma activity in the right thalamus, bilateral cuneus, left FG, and bilateral STS was detected at 35 ms and followed by activity in the right FG (45 ms), left thalamus (55 ms), and amygdala (right, 60 ms; left, 65 ms). The gamma-band responses in the right cuneus, left STS, bilateral thalamus, right amygdala, and bilateral FG peaked at 40-75 ms, which was earlier than the right STS and left amygdala at 100-105 ms and than the left cuneus at 165 ms.
Happy faces elicited early gamma activity in the bilateral STS, bilateral cuneus, left FG, left amygdala, and right thalamus at 35 ms, which was followed by that in the left thalamus (45 ms), right FG (45 ms), and right amygdala (80 ms). With regard to the time points with peak t-values, the right cuneus, the left amygdala, left thalamus, and right STS responded at 40-55 ms, earlier than the right FG (75 ms), right thalamus (95 ms), right amygdala (100 ms), left FG (155 ms), left cuneus (165 ms), and left STS (165 ms). Notably, the onset and peak latencies of the left amygdala were earlier than those of the right.
In response to neutral faces, the bilateral thalamus, bilateral FG, and the left STS were activated at 35 ms, followed by the bilateral cuneus (40 ms), bilateral amygdala (45 ms), and right STS (45 ms). As to the time points with peak t-values, all of the amygdala, thalamus, FG, and
Effective connectivity from subcortical and cortical regions to the amygdala
A GCA was performed for the duration of 35-165 ms with a four-region model for each emotion category. The significant effective connectivity (p < 0.0321, effect size ≥ 4.90) between the regions is displayed in Fig. 3.
Two connectivity models in processing of the angry emotion were found, as shown in Fig. 3. The first model consisted of pathways from the right thalamus and from the right cuneus to the right amygdala (Fig. 3A). The second model consisted of pathways from the left thalamus and from the right STS to the right amygdala (Fig. 3B). These two models also showed that the right thalamus affected the activation of the right parahippocampal cortex (Fig. 3A) and the left thalamus influenced the activation of the right posterior cingulate cortex (Fig. 3B). With respect to happy and neutral facial expressions, no dual-route model was found. These data provided evidence that dual-route processing of the right amygdala from both subcortical (bilateral thalamus) and cortical (cuneus and STS) regions existed only in negative (angry) face perception, and not in positive (happy) or neutral face perception.
Discussion
Our results show asymmetric engagement of the left and right amygdala in the early perceptual processing of emotional faces. The right amygdala responded to angry faces (60-70 ms) earlier and more briefly than to happy faces (80-110 ms), whereas the left amygdala responded to happy faces (35 ms) earlier than to angry faces (65 ms) (Fig. 2). With GCA, we found evidence of the dual-route model, i.e., thalamo-amygdala and visuocortical-amygdala pathways, in angry face perception. No such evidence was found in happy and neutral face perception. These data are in keeping with the view that a dual route facilitates the processing of threatening information in humans.
Asymmetric activation of amygdala in early perceptual processing of emotional faces
We found that the left amygdala activity lasted longer than the right in response to emotional faces, but not to neutral ones. Moreover, our data showed that the left amygdala responded to happy faces (35 ms) earlier than to angry ones (65 ms). These results implicate a left-lateralized involvement in decoding or evaluating expressive stimuli in the early perceptual processing of emotion. It has been proposed that happy facial expression involves greater physical changes of the facial features, including the mouth, compared to negative facial expressions [49]. Our data are also in line with a previous report [8] suggesting a predominant role of the left amygdala in emotion perception. The findings in the present study provide evidence that there could be a left-lateralized amygdala preference in decoding expressive information during the very early perceptual period. Relative to the left side, the right amygdala is more responsible for autonomic arousal. Our results showed that the right amygdala was engaged in processing the angry emotion more quickly (60-70 ms) than the happy emotion (80-115 ms). A previous patient study demonstrated that right amygdala damage resulted in a deficit of the autonomic response as measured by skin conductance [11]. Williams et al. [50] reported that increased amygdala responses to negative facial stimuli appeared to be associated with concomitant autonomic arousal. This suggests that the function of the right amygdala could be linked to autonomic arousal: the more threatening the confrontation, the faster the processing.
Moreover, we found the right amygdala habituated rapidly for both angry and happy faces. The rapid habituation of the right amygdala may reflect efficient detection of emotional information for triggering autonomic responses. This finding of fast habituation of the right amygdala is in line with one previous fMRI study [13]. Wright et al. reported greater habituation of the right amygdala compared to the left in response to repeatedly presented emotional stimuli, suggesting the right amygdala is a part of a dynamic emotional stimulus detection system. In sum, our data might offer a more general perspective on the left and right amygdala subserving functions of detecting and delivering emotional signals, respectively.
Salient detection of neutral faces by the amygdala
Notably, we found the bilateral amygdala was activated by neutral faces, and the right amygdala activation lasted longer than the left. In almost all previous neuroimaging studies, neutral faces were treated as control stimuli to be compared with emotional faces. Our data demonstrated that the temporal profile of the amygdala activity evoked by neutral faces was quite different from those for angry and happy faces (Fig. 2C). The right amygdala is involved in social perception. Social concepts elicit more right amygdala responses [51], and an impaired right amygdala may confer more derangement of social cognition compared to an impaired left amygdala [52]. Our data also showed that the right amygdala responses to neutral faces lasted longer than those to emotional faces, which could be due to computation of facial expressions. Neutral face computation is more demanding on the brain because these faces have no obvious features of emotional meaning. This finding supports the notion that the right amygdala could be involved in the processing of social cues prompted by faces, a specific aspect of the more general role of the amygdala in the detection of salience [53] and relevance [54]. Our results also suggest a need to reconsider the justification of using a neutral face as a control in the comparison of functional studies in the future.
A dual route to the amygdala in response to angry faces

To our knowledge, this is the first report of a dual-route model for angry face processing. Our data provide evidence of LeDoux's dual-route model in humans, that is, thalamo-amygdala and visuocortical-amygdala routes, while perceiving angry faces. This dual-route model has been previously proposed in both rat [1] and human studies [15,19]. These studies proposed a subcortical route that is capable of rapidly sending information raised by fearful stimuli to the amygdala. In our study, angry faces were adopted as negative stimuli, which may contain potentially threatening features, similar to fearful stimuli [3,4]. Our findings of a dual route in processing angry faces suggest a generality for high-arousal negative facial expressions (e.g., angry and fearful), which could involve more effective and efficient brain processing to provide a survival advantage in detecting danger.
Another possible explanation for the dual-route model being observed only in angry face perception is that the thalamus is involved in the early processing stage of negative faces. Our results showed that the thalamus was activated at similar onset times (all approximately 35 ms) but with different peak latencies in response to angry, happy, and neutral faces. Moreover, the right thalamus was engaged in processing angry faces (at 65 ms) more quickly than happy and neutral faces (approximately 95 ms). These data suggest that the thalamus participates in processing emotional signals of faces at the very early perceptual stage. These findings are in line with a recent study using single-cell recordings [55], which demonstrated fast responses of pulvinar (a part of the thalamus) neurons elicited by visual stimuli in monkeys. They reported that some visual-responsive neurons in the thalamus were triggered by angry faces but not by happy or neutral faces, indicating the capability of the thalamus to differentiate distinct emotional faces. Our findings suggest that when perceiving an angry facial expression, the thalamus could efficiently convey emotional information to the amygdala and hence facilitate a rapid response to potentially dangerous events.
Our data demonstrate that rapid activation of cortical areas could affect amygdala activation through visuocortical-amygdala and STS-amygdala routes in very early emotional perception (starting at approximately 35 ms post-onset). The STS plays an important role in cortical connectivity to the amygdala. The STS is involved in the detection of changeable features of facial expressions (mouth, eyes, and eyebrows) [24] and has a strong connection with the amygdala in monkeys [23,56]. For humans, an interconnection between the STS and amygdala has been proposed as a network model for social perception, in which the STS and amygdala cooperate in processing significant social stimuli [57,58]. Moreover, the results of the present study demonstrated that the cortical responses in the occipital-temporal, parietal, and frontal cortices occur within 50 ms (S3 Fig.), consistent with the findings of previous studies [17,59,60]. Our findings provide evidence of rapid cortical activity during early emotional perception, consisting of cortical and subcortical routes to the amygdala for the angry emotion.
Importance and consideration of our methods
The framework of hemispheric functional specialization has been well investigated, indicating that the right hemisphere is relatively biased towards the processing of more global, holistic aspects of a stimulus, whereas the left hemisphere is relatively biased towards the processing of local, finer details of a stimulus [61]. Based on this, we speculate that binding of finer features, such as the valence/expressive features of a face, may involve nearby neural assemblies in the left hemisphere; whereas the global aspect of a face, such as arousal information, may be conceptualized by large-scale integration across distant brain regions in the right hemisphere. Our findings of more connectivity projected to the right amygdala suggest the role of the right amygdala in processing overall arousal information of faces.
To our knowledge, this is the first study to demonstrate a dual route of effective connectivity projected to the amygdala during early perceptual processing of emotions. This dual route to the right amygdala was found only in response to angry faces, which provides novel evidence of a neural mechanism underlying effective and efficient brain processing to provide a survival advantage in detecting danger in humans. This dual pathway could imply more processing of the expressive features of angry faces compared to happy faces, even though the subjects were instructed to judge the gender, not the expression, of the face. Previous neuroimaging studies reported impaired emotional processing in patients with affective disorders [27,31,32,62]. However, whether the impairment occurs at the perceptual level or at the cognitive level in the thalamo-cortico-limbic regions remains unclear. The findings presented in this study suggest that MEG could be a potential tool to examine the alteration of thalamo-cortico-limbic circuitry engaged in the early perceptual processing of emotional information in patients with affective disorders.
GCA identifies a directional influence that one neuronal population exerts on another. A general limitation of GCA connectivity analysis resides in its selection of ROIs, which fails to consider the possibility that inputs from unselected regions (e.g., cerebellar areas) exert influences on the selected regions [42,63]. Consideration of other possible models without the amygdala in future studies would enrich the understanding of the neural mechanisms underlying early perceptual processing of emotional faces.
Numerous human studies have indicated that intra-regional oscillatory activity and inter-regional phase synchrony over different frequency bands are crucial as mechanisms for local-scale (~1 cm, through monosynaptic connections) and large-scale (>1 cm, over polysynaptic pathways) integration of incoming and endogenous activity [64]. Among the different frequencies, rhythmic synchronization of neural discharges in the gamma band (approximately 40 Hz) may provide the necessary spatial and temporal links that bind the processing functions in different brain areas to build a coherent percept [65], for instance, perceptual binding of spatially separated static visual features in the infant brain [65]. A patient study that directly recorded amygdala gamma activity suggested that the amygdala participates in binding perceptual representations of the stimulus with memory, emotional response, and modulation of ongoing cognition, on the basis of the emotional significance of the stimulus [66]. The present study demonstrates the feasibility of using MEG to investigate local-scale (e.g., amygdala) and large-scale (e.g., thalamo-cortical and cortico-cortical) networks.
Conclusions
This study provides the first evidence of functional asymmetry of the amygdala in the early perceptual processing of emotions using GCA on MEG data. We suggest that the left amygdala could be more associated with decoding stimuli for all emotions, whereas the right amygdala could be linked to autonomic arousal and the processing of social information. Our data demonstrate neural evidence for the dual-route model in humans, suggesting that processing of negative emotional information engages cortical and subcortical pathways connected to the amygdala. Negative affect engages the subcortical pathway (thalamus-amygdala), representing its evolutionary significance (survival).

[Supporting information figure caption: The simulated MEG data originated from background activity and one dipole source located at the right amygdala (x = 30, y = −2, z = −26 mm, MNI coordinates), with a temporal profile of gamma-band sinusoidal waves plus added random noise. The structural MRI data and MEG sensor configuration were adopted from one subject in our facial processing experiment.]
The Non-Immune RIP-Kb Mouse is a Useful Host for Islet Transplantation, as the Diabetes is Spontaneous, Mild and Predictable
Chemically-induced diabetic mice and spontaneously diabetic NOD mice have been valuable as recipients for experimental islet transplantation. However, their maintenance often requires parenteral insulin. Diabetogenic chemicals can be cytotoxic to the host's immune system and to other organs, some of which are often used as the transplant site. Procurement of diabetic cohorts in the NOD mouse is problematic due to variability in the age of disease onset. We show that RIP-Kb mice, which spontaneously develop non-immune diabetes due to over-expression of the H-2Kb heavy chain in beta cells, offer many advantages as islet transplant recipients. Diabetes is predictable, with a relatively narrow range of onset (4 wk) and blood glucose levels (23.0 ± 4.0 mmol/l for 39 males at 6 weeks of age). The diabetes is mild enough that most diabetic mice can be maintained to 40 weeks of age without parenteral insulin. This consistency of diabetes ensures that outcomes of intervention can be interpreted with confidence.
INTRODUCTION
Mouse models of pancreatic islet transplantation often use recipients in which insulin-dependent diabetes has either been chemically induced with the beta-cell cytotoxic agents streptozotocin or alloxan, or allowed to develop spontaneously as a result of autoimmune beta-cell destruction in the NOD mouse. These models are valuable, but have some disadvantages. Both chemical agents are hazardous, and streptozotocin is a carcinogen. The diabetogenic effect of these agents depends on multiple factors including mouse strain, sex, age and nutritional status, e.g. [1-8]. Their bioavailability can be variable due to their instability in solution [1,9,10]. As such, it can be difficult to control precisely the degree of induced beta-cell damage and the consequent diabetes. In addition, the toxicity of streptozotocin and alloxan is not limited to the beta cell. Streptozotocin has direct effects on the liver [11], kidney [12] and immune system [13-16]. Likewise, alloxan is nephrotoxic [17] and possibly hepatotoxic [18]. These effects may compromise the interpretation of experimental outcomes.
Spontaneously diabetic NOD mice more closely resemble the clinical situation in humans and thus, it could be argued, are more appropriate recipients than chemically-induced diabetic mice. However, NOD mice are also not ideal. They present with diabetes over a wide time-span of months, such that it is difficult to procure a large cohort of age-matched diabetic mice for transplant experiments. Once diabetic, they can only be maintained for 3-4 weeks before requiring insulin injections for survival. Finally, in studies where allo- or xeno-responses are being analyzed, it can be difficult to distinguish them from autoimmune responses to the graft. A number of transgenic insulin-deficient diabetic mouse models have also been described (reviewed [19]), including several which are not immune-mediated [20-24]. However, none have been characterized with respect to their suitability as diabetic hosts for islet transplantation. One of these, the RIP-Kb mouse, transgenically overexpresses the H-2Kb class I heavy chain under the rat insulin promoter (RIP) in pancreatic beta cells [20]. The mechanism of beta-cell damage by H-2Kb has been proposed to result from misfolding of the over-expressed heavy chain, which somehow impairs the normal secretory pathways [25]. One RIP-Kb lineage, designated 50-1, develops non-lethal diabetes, with mice surviving beyond 20 weeks of age. We postulated that these mice might obviate the need for iatrogenic intervention and develop diabetes with a reliable and predictable natural history. This should make them auspicious hosts for islet transplantation and interpretation of outcomes.
Figure 1. Blood glucose levels in female compared with male RIP-Kb mice. The mean ± SD blood glucose level is shown for n non-diabetic (hatched bars, transgene negative) or diabetic (filled bars, transgene positive) female or male littermates at 6 weeks of age.
BLOOD GLUCOSE MEASUREMENTS
Non-fasting blood glucose was determined between 1-3 pm using Advantage blood glucose test strips (Roche, Castle Hill, Australia) and an Advantage Meter (Roche). A drop of blood was obtained via a glass capillary tube from the retro-orbital venous plexus of unanesthetised mice, or by tail bleed of anesthetised mice undergoing surgery. Mice with confirmed blood glucose measurements in excess of 11 mmol/l were classified as diabetic. Blood glucose levels are shown either for individual mice or as the mean ± SD for n mice.
STATISTICAL ANALYSIS
Differences between blood glucose levels or weights were analysed by two-tailed unpaired t tests using Prism v2.0a (GraphPad Software, Inc., San Diego, USA).
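For reference, an equivalent comparison can be run in a few lines of Python instead of Prism; the sketch below uses made-up placeholder values, not data from this study:

```python
from scipy import stats

# Two-tailed unpaired t-test on blood glucose levels, analogous to the
# Prism analysis described above. All values are hypothetical placeholders.
diabetic = [22.1, 25.4, 19.8, 26.0, 23.3]       # mmol/l, hypothetical
non_diabetic = [7.9, 8.4, 7.1, 8.8, 7.6]        # mmol/l, hypothetical
t, p = stats.ttest_ind(diabetic, non_diabetic)  # equal-variance (Student) t-test
print(f"t = {t:.2f}, two-tailed p = {p:.2g}")
```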
PANCREATIC ISLET TRANSPLANTS
Islet donors were 6-10 week old male and female CBA or B6.C-H2bm1 mice. Pancreata were collagenase-digested, and islets were separated by density gradient centrifugation over Histopaque-1077 (Sigma, St Louis, USA) as described [28]. Islets were hand-picked and counted under an inverted microscope.
Between 400-700 islets were injected, via a yellow micropipette tip, under the kidney capsule of 6-10 week old male RIP-Kb recipients. Blood glucose levels were determined just prior to islet transplantation, then monitored periodically as indicated. The graft plus kidney were recovered at the indicated times from mice which were either killed, or nephrectomized and allowed to recover before additional blood glucose determinations. The graft plus kidney were fixed in Bouin's solution before sectioning and staining with hematoxylin and eosin.
DIABETES IS MORE SEVERE IN MALE RIP-Kb MICE
Diabetes incidence was compared in male and female mice (Figure 1). As expected, non-transgenic offspring do not develop diabetes. At six weeks of age diabetic male mice had a blood glucose of 23.0 ± 4.0 mmol/l compared to 16.8 ± 2.2 mmol/l in diabetic females (p < 0.0001). This rather tight standard deviation allows for reliable determination of any successful intervention. There was no overlap between diabetic and non-diabetic mice (Figure 2). Indeed, the difference in mean blood glucose between non-diabetic and diabetic mice at 6 weeks of age was 9.9 and 14.5 mmol/l for females and males respectively (Figure 1). This margin of difference should allow the use of mice of either sex as transplant recipients, although this margin is greater for male mice. Male recipients are also desirable in that donor pancreas of any sex can be transplanted without concern for H-Y antigen differences. This is particularly important for fetal tissue transplantation, as the sex may be unknown.
DIABETES IN MALE RIP-Kb MICE DEVELOPS EARLY, IS STABLE AND DOES NOT REQUIRE INSULIN INJECTIONS
A more comprehensive analysis of blood glucose levels at various ages was performed in a second cohort of male mice (Figure 2). Diabetes of moderate severity (16.8 ± 3.1 mmol/l) developed by 4 weeks of age in 96% of transgenic mice. The severity of diabetes increased by 6 weeks of age (22.2 ± 4.4 mmol/l) and blood glucose levels remained in excess of 20 mmol/l from 6 to 40 weeks. Five mice were followed out to 40 weeks of age. They continued to fare well, but were clearly smaller than non-diabetic littermates. One of these mice had a blood glucose above the limit of detection (33.3 mmol/l), was slightly hunched and had ruffled fur. All mice were killed at 40 weeks of age.
RIP-Kb mice have delayed weight gain. The development of diabetic compared to non-diabetic mice was assessed by weighing mice at 5, 15 and 27 weeks of age (Table 1). The weight of diabetic mice increased from 16.8 ± 1.3 g at 5 weeks to 23.0 ± 0.7 g at 15 weeks (p < 0.0001), and at these times was only 10-12% less than that of non-diabetic littermates (p < 0.05). Diabetic mice failed to gain weight between 15 and 27 weeks, and in fact lost some weight. The 27-week weight of 20.1 ± 1.7 g was 13% less than that of the same mice at 15 weeks (p < 0.01), and 31% less than that of non-diabetic littermates (p < 0.001). At 27 weeks, 4/5 diabetic mice appeared healthy; one was hunched, had ruffled fur and was killed.
PANCREATIC ISLET GRAFTS
We next examined the suitability of RIP-Kb mice for use as islet transplant recipients. The development of diabetes in RIP-Kb mice is not immune-dependent: diabetes develops even if mice are neonatally thymectomized [20] or are on an athymic nude mouse background [29]. In euthymic mice, diabetes develops in the absence of islet infiltration ([20], Figure 3A; a mouse with blood glucose 25.3 mmol/l). The absence of an islet-specific autoimmune response in the RIP-Kb mice was further supported by the ability of syngeneic CBA islet transplants to reverse diabetes and maintain normoglycemia for the 98-day duration of the experiment (Figure 4A). Normoglycemia could be established by transplanting 400 islets. Transplants of fewer islets have not been attempted. Normoglycemia was reversed by nephrectomy of the graft-containing kidney (Figure 4A), indicating that the function of endogenous RIP-Kb-expressing islets remained impaired in the absence of hyperglycemic stress. The recovered syngeneic grafts contained numerous islets, but no immune infiltrate (Figure 3B). Allogeneic islet transplants initially reversed diabetes in RIP-Kb mice. In 9/11 mice the transplants were rapidly infiltrated by mononuclear cells and rejected (Figure 3C), such that blood glucose levels returned to pre-transplant levels by 21 days post-transplant (Figure 4B). In 2/11 mice, the response was less vigorous. These mice had numerous intact islets surrounded by peri-islet mononuclear infiltrates when the grafts were recovered 24 and 104 days post-transplant. Mild diabetes (around 14 mmol/l) at the time of graft recovery was suggestive of impaired islet function (Figure 4B). Graft recovery at 104 days post-transplant by nephrectomy resulted, as for syngeneic grafts, in a return of the blood glucose to the pre-transplant level (20.1 mmol/l).

Figure 4. Blood glucose levels in RIP-Kb recipients of (A) syngeneic CBA or (B) allogeneic B6.C-H2bm1 islet grafts. Removal of the graft-bearing kidney by nephrectomy in 3/5 recipients of syngeneic and 1/11 recipients of allogeneic islet grafts is indicated by an arrow.
DISCUSSION
RIP-Kb mice develop a spontaneous, non-immune diabetes as a consequence of overexpression of the H-2Kb heavy chain under the control of RIP [20]. It has been proposed that the H-2Kb heavy chain is misfolded in the absence of the β2M light chain, and accumulates in insulin secretory granules, thereby disrupting the processing and secretion of insulin [25]. Indeed, β2M expressed under the control of RIP can be detected in the insulin secretory granules, and its coexpression with the H-2Kb heavy chain reduces the severity of RIP-Kb diabetes [25]. Transgenic expression of various other molecules, including class II MHC molecules [21,22], calmodulin [24], and H-ras [23], under the control of the insulin promoter can also result in non-immune-mediated diabetes. However, it remains moot whether any of these mice will be suitable as diabetic hosts for islet transplantation. Some are clearly not. For example, the RIP-ras mice develop diabetes late (5 months), become severely diabetic and die within a few weeks [23]. The RIP-calmodulin mice develop diabetes within hours of birth and at least some lines suffer from fertility problems [24]. The current study has shown that the RIP-Kb class I heavy chain transgenic mouse has several characteristics that are desirable in recipients of allo- and xenogeneic islet transplants. Diabetes develops early and in a predictable and uniform fashion such that by 6 weeks of age male mice routinely have a blood glucose level of between 20-28 mmol/l. Consequently, groups of mice matched for age, sex and degree of diabetes can be acquired with ease. In practical terms, mice can be screened at 4 weeks of age: there are no false positives, while false negatives (due to incomplete expression of the diabetic phenotype at this early time-point) are rare and unimportant as they are discarded. By contrast, diabetes onset in NOD mice is relatively late and in our colony varies from 14-28 weeks of age, which does not facilitate acquisition of diabetic cohorts. The production of diabetic groups can also be problematic with streptozotocin or alloxan, because even when mouse strain, age and sex are matched, the diabetogenic effect is variable. This can be addressed in part by determining the appropriate dose for a given mouse strain and weighing individual mice prior to injection.
However, other factors such as drug instability are not so easily controlled. Both streptozotocin and alloxan have short half-lives in aqueous solution and plasma (reviewed [1]). The effective dose of streptozotocin is dependent on the ratio of alpha and beta anomers [9], which varies between manufacturers, batches and time in solution [10,30]. Clearly, diabetes induced by streptozotocin or alloxan entails more variables, is less predictable and is more labour intensive than in the RIP-K b model. Streptozotocin and alloxan are also toxic and, in the case of streptozotocin, carcinogenic, necessitating appropriate safety measures to protect researchers and animal technicians.
The general toxicity of diabetogenic drugs may influence graft survival and confound interpretation of results. Islets are usually grafted under the kidney capsule of mice, and less often into the liver, spleen or testes. Notably, the liver is a clinically relevant graft site in humans [31]. Streptozotocin can alter liver morphology and impairs liver function [11]. Nephrotoxic effects have also been described [12]. Alloxan causes atrophy of kidney tubules and interstitial nephritis [17], and is a potential liver toxin [18]. While it is unknown how such effects impact on graft outcome, an advantage of RIP-K b mice is that pathologies at the graft site are limited to the secondary effects of diabetes and the graft procedure itself.
Streptozotocin is also directly toxic to thymocytes and bone marrow cells [13-15], and mutagenic for splenocytes due to its capacity to induce DNA strand breaks [16]. The functional consequences of streptozotocin treatment include suppression of cell-mediated immune responses independent of hyperglycemia [13,14,16]. Thus, with streptozotocin it may be difficult to attribute outcomes to an experimental immunosuppressive regimen. Similarly, the abnormal immune system of NOD mice may influence the nature of graft rejection. Preactivated autoimmune effectors present in diabetic NOD mice may dominate the emerging allo- or xeno-response and thus confound analysis. No such immune perturbations occur in the RIP-K b mouse. Delayed rejection of allogeneic B6.C-H2 bm1 islets occurred in 2/11 RIP-K b mice, but was not considered indicative of an immunocompromised state. In these two cases the host RIP-K b mice had been backcrossed to CBA mice for 5-6 generations, and we consider it likely that they retained some parental B6.C-H2 bm1 histocompatibility genes. More recent B6.C-H2 bm1 transplants into RIP-K b mice, backcrossed for 9-10 generations, have been uniformly rejected by day 21 post-transplant.
Spontaneous reversal of diabetes due to beta-cell neoplasia or neogenesis can occur when using streptozotocin- or alloxan-induced diabetic hosts [3, 32-35]. Thus, it can be difficult to determine whether reversal of diabetes is due to this process or to the grafting procedure. Hence, it is desirable to confirm graft function at the conclusion of an experiment, for instance by nephrectomy of a graft-bearing kidney and reversion to hyperglycemia. Such confirmation is not feasible when grafting into sites such as the liver, since the graft cannot be removed without the death of the host. Our experience in RIP-K b mice shows that diabetes is stable, with blood glucose increasing with age, and that the loss of host beta-cell function is not reversible even when hyperglycemic stress is relieved by syngeneic islet grafts. Logically, even if neogenesis of host beta cells does occur, the function of the neo-beta cells should be impaired by expression of the RIP-K b transgene. Furthermore, the RIP-K b mouse has no predilection to beta-cell neoplasia, unlike models using mutagenic reagents such as streptozotocin. Thus, we propose that reversal of hyperglycemia in RIP-K b mice can be more reliably attributed to the graft than in alloxan- or streptozotocin-treated diabetic mice.
The long-term survival of diabetic mice without the need for insulin therapy is a distinct advantage when grafting stem cell or fetal tissue grafts that do not result in immediate reversal of diabetes. NOD mice require insulin treatment within 3-4 weeks of onset of diabetes in order to survive, while chemically-induced diabetic mice may require immediate insulin treatment. All RIP-K b mice can reliably be maintained without parenteral insulin for >20 weeks after the onset of diabetes; the vast majority even longer. Despite blood glucose levels in excess of 20 mmol/l from 6 weeks of age, they generally remain healthy, although they do not grow as rapidly as wild type mice. By 27 weeks the occasional mouse shows signs of discomfort (hunching, ruffling of fur, reduced activity) and must be killed, but it is possible to maintain most mice out to 40 weeks of age. If mice are transplanted at 6 weeks of age, this provides a period of up to 34 weeks during which grafts can mature in vivo. If glucose toxicity for grafted tissue is a consideration, the stable hyperglycemia in the RIP-K b mouse should be more readily normalized than hyperglycemia in NOD or chemically-induced diabetic mice. Furthermore, the stability of RIP-K b diabetes means that in the absence of parenteral insulin, even partial reductions in blood glucose, e.g. from >20 to about 14 mmol/l in the case of allografts with peri-islet infiltration in the current study, can be interpreted with confidence as evidence of graft function. It is clear that, as pancreatic islet transplant recipients, RIP-K b mice offer advantages with respect to ease of use, stability of diabetes, lack of autoimmune effects, and lack of iatrogenic effects (from injecting cytotoxic agents, or insulin). The introduction of the RIP-K b model is particularly timely in view of current intense interest in the transplant of stem cells and fetal pig tissue, and transplants into the liver. It will be of additional interest to determine if RIP-K b mice are a useful model for studying chronic hyperglycemia and potential diabetic complications.
Hydroxyl Radical Production by Light Driven Iron Redox Cycling in Natural and Test Systems
Photoreduction of ligand-stabilized iron(III) and its fast reoxidation at pH > 7 in the presence of O2 produce reactive oxygen species (ROS) in natural and synthetic fresh water. This redox recycling [1] yields hydroxyl (HO•) and other reactive radicals, generating and transforming toxic compounds and impacting exposed organisms. Both toxicants and organisms influence iron redox cycling; for example, dissolved FeIII reduction is mediated by cell-surface reductases, an established iron uptake pathway for plants [2,3] and microorganisms [4,5]. Appropriate ligands always need to be present. Iron-binding ligands are environmentally ubiquitous; they are produced by biological systems, e.g., as poly-carboxylates [6] and siderophores [7], and introduced by human activities [8] (e.g., agricultural fertilizers [9], food fortifiers [10], detergent stabilizers, etc.).
Both the diverse factors influencing iron redox recycling and the fact that the absorption and metabolism of iron, an essential nutrient, depend on it [11] create difficulties for environmental toxicity testing and risk assessment. Many studies have quantified ROS in diverse matrices. Hydroxyl radicals are the most reactive, toxic and important of these, since they readily form radicals with other matrix components (e.g., carbon [1], nitrogen [12,13] and sulfur [14] species), which are more stable yet still reactive. For example, HO• forms carbonate radicals that contribute significantly to environmental pesticide metabolism [15]. Reactive photo-induced HO•, ¹O₂, and triplet DOM have been quantified in fresh and estuarine waters and presented alongside a kinetic model for xenobiotic solar photo-transformation [16].
To recognize and quantify the extracellular iron- and ROS-dependent effects that HO• mediates, the photo-reductive behaviour of the FeIIIEDTA complex was investigated both in field-collected fresh water under natural conditions and in spent synthetic algal growth media under laboratory conditions. Iron was used because it is the most efficient redox-active metal for producing ROS, and EDTA because it is the chelator most widely used in consumer goods and industry. Since most metal-EDTA complexes, including FeIIIEDTA, are not eliminated by sewage treatment [17], EDTA is the commonest environmental chelator [18]. EDTA is also the only chelator used as a metal "buffer" in biological growth media [19], maintaining constant free FeIII ion concentrations. The light source (PAR: 130 μE m⁻² s⁻¹, UV-A: 0.055 mW cm⁻²) used to photo-generate HO• from FeIIIEDTA imitated low sunlight under a clouded sky; the energy was identical to that used to illuminate algae in the laboratory. Quantifying HO• is problematic because it is highly reactive. Details of its formation and reaction mechanism, namely whether a HO• molecule or a higher-valency FeOL ferryl complex is the reacting species [20], are still debated. Whatever the reactive species, it is here named HO• and is usually quantified by reaction with aromatic compounds such as benzene and benzoate. Where non-fluorescent hydroxylated aromatics such as phenol and benzophenols are produced, GC or HPLC are used for quantification; at lower detection limits, fluorescence [21] read on a plate reader is more convenient for detecting reaction products.
TA
Terephthalic acid (TA) has been used to measure HO• in such matrices as cerebrospinal fluid [22] or in water sonolysis [23], but has only recently been fully assessed for measuring photo-chemically generated HO• [24]. Being photo-stable, its HO• specificity and at most slight darkening effect render it superior to the widely used fluorescein-based dyes for reporting HO• [25]. Fractions of HO-TA precipitated from the synthesis solution were 98%, 95% and 92% pure (analyzed by HPLC and UV spectroscopy). Atlantic Research Chemical HO-TA became available during the study.
Chemicals and media
FeIIIEDTA was synthesized by adding FeCl₃ solution to acidic EDTA solution (1:1 molar ratio) in the dark and raising the pH to 6-7 with NaOH after a few hours. The FeIIIEDTA solution was kept strictly in the dark at 4°C. Various reaction media were used to evaluate the influence of individual components on HO• formation rates. Reaction media were made from a stock solution and the components cited in the text, yielding a medium containing (mM) HCO₃⁻ (1), Cl⁻ (0.5), SO₄²⁻ (0.075), Na⁺ (1), Ca²⁺ (0.25) and Mg²⁺ (0.075) at pH 8.0 ± 0.2, corresponding to natural carbonated water. The medium maintained its pH without further buffer additions. pH measurements were performed initially and after irradiation experiments.
Photoreactions
Philips Tanning fluorescence tubes (16, TL 8W) were used as the light source. The irradiance as a function of distance was measured using Sky Instruments PAR and UV-A sensors (Spectro Sense 2). Unless otherwise stated, the energy at the solution surface was 130 μE m⁻² s⁻¹ (PAR) and 0.055 mW cm⁻² (UV-A). Reaction media samples (10 mL) were irradiated in 50 mL crystallization dishes washed with 3.5% HCl. The low volume enabled complete light penetration through the sample, and a loose glass lid allowed air (O₂) exchange, as for algae illumination in an Erlenmeyer flask.
Unwashed dishes gave higher and unstable background fluorescence. Up to 6 dishes were irradiated together on a nonreflecting support. Aliquots (300 µL) were transferred to plate wells at the corresponding reaction time and covered with a black lid. The samples were measured within 10 minutes and kept in the dark until re-measured at the next sampling time point. Repeated measurements of the same reaction solution showed no change over several hours, proving the reaction was light dependent and stopped in the dark.
Detection of HO-TA
HO-TA fluorescence (λex = 320 nm) was measured in Greiner 96-well black plates (Huber, Basel) using a TECAN M200 plate reader. Readings were taken at every second wavelength between λem = 400-460 nm, for a total of 30 points. The instrument gain was set for an optimal signal-to-noise ratio; the background did not exceed 10% of the full count scale. H₂O UV Raman activity showed that the plate reader irradiance varied by ± 10%. Average readings between 408 and 446 nm were used for quantification. Non-irradiated reaction solutions kept in the dark at the same temperature were used as blanks and subtracted. HO-TA (0-8 nM) in the corresponding media was used to calibrate the fluorescence emission counts by a linear function. HO-TA down to 10⁻¹⁰ M was detectable in samples with a low fluorescence background. The HO-TA (1-25 µM) fluorescence was stable during irradiation in the presence of FeIIIEDTA. HO-TA rates were calculated as the initial linear increase in concentration over time.
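As an illustration of this quantification workflow (blank subtraction, linear calibration against HO-TA standards, initial-rate fit), a minimal sketch in Python; all numbers are placeholders for illustration, not data from this study:

import numpy as np

# Hypothetical blank-corrected fluorescence counts for HO-TA standards (0-8 nM)
std_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])              # nM
std_counts = np.array([0.0, 860.0, 1730.0, 2590.0, 3470.0])

# Linear calibration: counts = slope * conc + intercept
slope, intercept = np.polyfit(std_conc, std_counts, 1)

def to_nM(counts):
    """Convert blank-corrected fluorescence counts to HO-TA (nM)."""
    return (counts - intercept) / slope

# Blank-corrected counts of an irradiated sample over time
t_h = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                   # hours
counts = np.array([5.0, 490.0, 970.0, 1460.0, 1930.0])

# HO-TA formation rate = slope of the initial linear increase
rate_nM_per_h, _ = np.polyfit(t_h, to_nM(counts), 1)
print(f"HO-TA formation rate: {rate_nM_per_h:.1f} nM/h")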
Results
Consistent with reports that TA and HO-TA are stable under higher light energies than those used in this study [24], TA alone, or TA in the presence of either EDTA or Fe, produced no detectable fluorescence. HO-TA was formed at the same rate from both Br-TA and TA (results not shown). Br-TA was not used as a reporter for HO• production, due to unknown side reactions and reactivity with other ROS, but it is representative of the reactions of halogenated herbicides.
Hydroxyl radicals react very fast and non-selectively with media components, at rates close to the diffusion-controlled limit [24], so the TA-probe reaction depends on media composition. The influence of several components present in natural waters and in Talaquil media on the reaction of TA with HO• produced during iron photo-redox cycling was therefore investigated.
Adding inorganic components such as NaCl (≤ 500 mM), KNO₃ (≤ 2 mM) and borate (≤ 5 mM) to the reaction medium did not alter the HO-TA formation rate, but carbonate and phosphate reduced the HO-TA rate: three-fold for 2 mM carbonate and two-fold for 25 µM phosphate.
FeIIIEDTA/TA ratios
FeIIIEDTA and TA were applied at different ratios to the reaction medium to determine the optimal HO-TA yield. HO-TA was undetectable at FeIIIEDTA/TA ratios > 1, but an increasing excess of TA produced detectable amounts. A ten-fold excess yielded maximal HO-TA; greater excesses lowered it (Figure 1). The HO-TA formation rate also depended on the FeIIIEDTA concentration: under the same irradiance and at identical FeIIIEDTA/TA ratios, increasing FeIIIEDTA concentrations (1-5 µM) yielded higher HO-TA formation rates. Reproducibility was better at 2 µM FeIIIEDTA, so this concentration was used for most of the investigations.
Organic components
Organic buffer salts frequently used in algae growth media at 2-50 mmol/L concentrations provide carbon that reacts readily with HO•. The radical-capturing properties of the commonest buffers, TRIS and MOPS, were investigated, showing a decreasing HO-TA formation rate with increasing buffer concentration (Figure 2a). Six- and four-fold molar buffer excesses reduced the TA reaction by 50% in the two cases. The difference between TRIS and MOPS apparent in Figure 2a is due solely to the use of molar concentrations; when allowance is made for the number of reactive atoms per molecule (Figure 2b), a 50% rate reduction per 625 µM reactive buffer atoms can be deduced when 4 times as many reactive TA atoms are present. The reduction deviates strongly from linear behaviour at higher concentrations. A simple competition-kinetics reading of this scavenging effect is sketched below.
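As a rough rationalization, probe and buffer can be treated as competing pseudo-first-order sinks for HO•; the fraction of radicals reaching TA then scales with the rate-constant-weighted concentration of each sink. A hedged sketch; the rate constants below are illustrative placeholders, not measured values:

def fraction_to_probe(k_probe, c_probe, competitors):
    """Fraction of HO* captured by the probe under simple competition
    kinetics: each sink contributes k * c to the total scavenging."""
    probe_term = k_probe * c_probe
    total = probe_term + sum(k * c for k, c in competitors)
    return probe_term / total

k_TA = 4.0e9    # assumed rate constant for HO* + TA, M^-1 s^-1
k_buf = 1.5e9   # assumed rate constant for HO* + buffer, M^-1 s^-1

for buf_mM in (0, 1, 5, 10, 50):
    f = fraction_to_probe(k_TA, 500e-6, [(k_buf, buf_mM * 1e-3)])
    print(f"{buf_mM:>3} mM buffer: {100 * f:5.1f}% of HO* reaches 500 uM TA")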
Naturally occurring organic components such as porphyrins are known to act as photosensitizers, so PPIX served as a model for side-chain-degraded chlorophylls. Low amounts of PPIX gave an acceleration factor of 20 in HO-TA formation (Figure 3).
River water
HO• formation was evaluated in river water, which contained 0.16 mM natural DOC and 30 times more carbonate than AAP medium (US EPA), 8 times more than TG 201 (OECD) and twice as much as Talaquil growth medium. Despite this higher carbon content, the HO-TA formation rate in river water was four times higher. Increasing the TA concentration to match the carbon content, up to 160 µM TA, gave a threefold rate increase compared to Chriesbach river water with 10 µM TA.
Spent algae Talaquil medium
The high organic buffer concentration (10 mM MOPS) required adjusting the TA concentration to 500 µM to counter HO• scavenging. Synthetic reaction medium lacking algae, trace elements and nutrients gave the same HO-TA production as Talaquil containing them, but the spent and filtered Talaquil medium gave a 6-times higher HO•-formation rate; this rate increased by a further factor of 2.5 when 0.1 µM PPIX was added. Higher PPIX concentrations reduced the rate from its 60 nM/h maximum, but it remained higher than without PPIX.
Discussion
The high reactivity of HO• requires sufficient TA-probe concentrations to compete with other components. Although fluorescein-based ROS reporters are widely used, they are rendered unsuitable by their photo-instability and poor HO• selectivity. The advantages of TA for probing HO• production have been discussed [22]; its photo-stability and its formation of HO-TA in the presence of iron were essential for this work, as were its inertness towards other ROS and the absence of darkening of photo-redox-reactive species (if more than a few percent of HO-TA forms, darkening can arise from HO-TA itself [24]). Fluorescence is convenient for detecting HO-TA, providing sub-nanomolar to picomolar detection limits at low background fluorescence, as applied here to both freshwater and test media.
The results revealed that the fraction of HO• hydroxylating the TA reporter depends on the other reacting components present, chiefly those containing carbon, nitrogen and sulphur. When FeIIIEDTA exceeds TA, the radicals produced preferentially react with another FeIIIEDTA molecule and EDTA is oxidatively degraded [27], explaining why an excess of TA was required to convert a significant fraction of the radicals into HO-TA.
Under the same irradiance, adding 1 µmol/L FeIIIEDTA to river water gave a four-fold higher rate than in the synthetic reaction medium. Due to its 2 mg/L DOC and doubled [HCO₃⁻], most HO• radicals are unavailable for reaction with TA, as shown by the fact that increasing TA up to a still suboptimal 1:8 ratio of TA to DOC carbon substantially increased HO-TA formation. The concentration of photoactive FeIII complexes in river water is below 1 µM (total Fe < 1 µM), and the possibility of colored DOM producing ¹O₂ is very low (E₃₂₀nm = 0.02, 1 cm); moreover, ¹O₂ was reported to react with TA 10⁵ times more slowly than HO• does [24]. TA concentrations far below oversaturation suggest that the high rates arise from river-water components accelerating the photo-catalysis.
PPIX, a good photo-sensitizer [28], also strongly affected HO-TA formation. Whether its rate acceleration arises from improved HO• formation or from ¹O₂ production was answered by introducing 0.5% methanol, which quenched HO-TA formation completely (results omitted). Since methanol does not affect reactions of ¹O₂ but efficiently traps HO•, PPIX must increase HO• formation. PPIX did not improve the formation rate in spent Talaquil medium to the extent it did in river water, suggesting that sufficient rate-accelerating components are already present in that medium.
Decaying algae release chlorophylls containing a porphyrin moiety that probably coordinates with FeIIIEDTA to form a ternary complex, improving its light-harvesting efficiency and HO• production. To quantify HO• production, the observed HO-TA formation rates need to be converted into HO• rates. A detailed mechanistic study of HO-TA formation from TA and HO• in water [23] showed that side reactions keep the maximal yield at 35%. In oxygenated water, side reactions further reduce the yield, usually to 15-20% [21]. Side-reaction products were not determined in this work, but it is reasonable to assume that the total HO-TA yield matches that reported in the literature, so the HO-TA rates reported here need multiplying by about 5 or 6 to give the corresponding HO• formation rates (a small numerical sketch of this conversion follows below). HO• production rates can then be calculated as 45 to 54 nmol/h for river water, ~100 nmol/h for algal medium and ~8 nmol/h for the artificial reaction medium. Spent algal growth medium gave higher rates due to the high algal density and the resulting higher concentration of rate-accelerating compounds. A 200-400 times higher concentration of reactive carbon, present as organic buffer, countered the doubled HO• production of the growth medium, reducing levels far below those of natural waters. Allowing for natural DOC (0.1-1 mM C) quenching HO• and producing ¹O₂, and despite the lower photo-reactive iron concentrations, natural conditions impose a much higher burden of HO• and other ROS than 10 mM organic-salt-buffered laboratory test conditions.
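The conversion amounts to dividing the observed HO-TA rate by the trapping yield; a small sketch using the 15-20% yield range cited above (the input rate is illustrative):

def ho_rate(hota_rate_nmol_h, trapping_yield):
    """Convert an observed HO-TA formation rate to the underlying
    HO* production rate, given the HO-TA trapping yield."""
    return hota_rate_nmol_h / trapping_yield

# Example: an observed HO-TA rate of 9 nmol/h
for y in (0.20, 0.15):
    print(f"yield {y:.0%}: HO* rate = {ho_rate(9.0, y):.0f} nmol/h")
# -> 45 and 60 nmol/h, i.e. the 'multiply by about 5 or 6' rule of thumb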
Conclusions
Experiments in this study were not optimized for maximal HO• formation rate but for realistic relative rates under low-light conditions, to give a conservative assessment. TA and Br-TA were used as probes to react with HO•; they represent the susceptibility to HO• of toxicants containing aromatic and halogenated aromatic moieties. Their reaction rates with HO• can therefore be anticipated to be representative of environmental toxicants.
Using TA to trap HO• was straightforward, allowing measurement of HO• production by light-driven iron redox cycling in algal test media and natural freshwater. The TA-trapping efficiency in oxygenated water appears constant, enabling reliable calculation of HO• rates. HO• production depends on the solutes present: apart from the number of reactive atoms in the molecules present, factors such as catalysis need consideration. A combination of redox-active species with particular biological ligands probably forms, increasing HO• production.
Inorganic and organic carbon determine HO• availability. Algal growth and test media usually contain concentrated organic carbon (as buffer salts), rendering more HO• available in river water than in growth media. This has not previously been considered and should be allowed for in order to transfer data reliably between test systems and natural conditions; recommendations for better agreement are presented here. Raising the pH close to its natural equilibrium value associated with natural carbonate, pH 7.8-8.2, typical of both fresh and sea water, reduces the necessary dissolved organic salts. Decreasing the algal density in test experiments would further reconcile the two conditions: decreasing the concentration of rate-accelerating exudates would better represent real aquatic systems, but algal density in test systems will usually remain greater, which can be compensated by lower (1-2 mM) organic salt levels than commonly used (10 mM). Refractory chelated redox-reactive metals are dangerous for microbial communities, a factor which has been vastly underestimated.
Closed 1-Forms and Twisted Cohomology
We show that the first twisted cohomology group associated to closed 1-forms on compact manifolds is related to certain 2-dimensional representations of the fundamental group. In particular, we construct examples of nowhere-vanishing 1-forms with non-trivial twisted cohomology.
Introduction
If θ is a closed 1-form on a smooth manifold M, the twisted differential d_θ := d − θ∧ maps Ω^k(M) to Ω^{k+1}(M) and satisfies d_θ ∘ d_θ = 0, thus defining the twisted cohomology groups

H^k_θ(M) := ker(d_θ : Ω^k(M) → Ω^{k+1}(M)) / im(d_θ : Ω^{k−1}(M) → Ω^k(M)).

These groups only depend on the de Rham cohomology class of θ, since the corresponding twisted differential complexes associated to cohomologous 1-forms are canonically isomorphic. In particular, the twisted cohomology associated to an exact 1-form is just the de Rham cohomology.
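Since everything rests on the identity d_θ ∘ d_θ = 0, it may help to record the one-line verification; the computation uses only dθ = 0 and θ ∧ θ = 0:

\begin{aligned}
d_\theta(d_\theta\alpha)
  &= d(d\alpha - \theta\wedge\alpha) - \theta\wedge(d\alpha - \theta\wedge\alpha)\\
  &= d^2\alpha - d\theta\wedge\alpha + \theta\wedge d\alpha
     - \theta\wedge d\alpha + \theta\wedge\theta\wedge\alpha\\
  &= -\,d\theta\wedge\alpha = 0,
\end{aligned}

since θ is closed (and θ ∧ θ = 0 because θ is a 1-form).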
It is well known that the twisted cohomology defined by the Lee form of Vaisman manifolds, and more generally by any non-zero 1-form θ which is parallel with respect to some Riemannian metric on a compact manifold, vanishes [2].
The twisted cohomology groups, as well as their Dolbeault and Bott-Chern counterparts, play an important role in locally conformally Kähler geometry (cf. [1] or [5], where the twisted cohomology is called Morse-Novikov cohomology).
Twisted cohomology was also used by A. Pajitnov [6], who shows that if θ is a closed 1-form with non-degenerate zeros, then for large t the dimension of H k tθ (M) gives a lower bound for the number of the zeros of θ of index k. This is an analog of Witten's approach to Morse theory, in the more general situation of closed 1-forms.
On the other hand, in [7], A. Pajitnov defined a different twisted Novikov homology theory associated to closed 1-forms θ with integral cohomology class [θ] ∈ H 1 (M, Z), and shows that the twisted Novikov homology vanishes whenever [θ] admits a nowhere-vanishing representative ( [7], Theorem 1.3). We will see in Example 4.2 below that the corresponding result fails for the standard twisted cohomology theory considered here.
Our main result (Theorem 2.3) relates the non-zero elements in the first twisted cohomology group associated to a closed 1-form θ with some set of non-decomposable 2-dimensional representations of the first fundamental group of M which contain a trivial subrepresentation, and whose determinant is the character of π 1 (M) canonically associated to θ.
In Section 3 we derive several applications of this result, like the vanishing of the first twisted cohomology group on manifolds with nilpotent fundamental group (Corollary 3.1), the fact that if the commutator group [π1(M), π1(M)] is finitely generated, then the set of classes [θ] ∈ H^1_dR(M) with non-vanishing first twisted cohomology is finite, and the non-vanishing of twisted cohomology on Riemann surfaces of genus g ≥ 2 (Corollary 3.3). In the last section we give several examples of explicit computations of the first twisted cohomology group on mapping tori and Vaisman manifolds.
The Main Result
Notation: the cohomology class of a d θ -closed 1-form α is denoted by [α] θ .
Let us recall the following well-known result and present a proof for it, whose method will be useful in the sequel. Lemma 2.1. Let M be a manifold. There is a bijection between H^1_dR(M) and the set of group morphisms from π1(M) to (R^*_+, ×). Proof. Let θ be a representative of a cohomology class [θ] ∈ H^1_dR(M) and denote the universal cover of M by π : M̃ → M. Then the pull-back θ̃ := π*θ of θ is an exact form, i.e. there exists φ ∈ C^∞(M̃) such that θ̃ = dφ. Any element γ ∈ π1(M) acts trivially on θ̃, so γ*dφ = dφ, which implies the existence of a constant c_γ ∈ R with γ*φ = φ + c_γ. Since γ₁*γ₂* = (γ₂γ₁)*, we see that γ ↦ c_γ is a group morphism from π1(M) to (R, +). We then associate to [θ] ∈ H^1_dR(M) the representation ρ : π1(M) → (R^*_+, ×) defined by ρ(γ) := e^{c_γ}. The representation ρ does not depend on the choice of the representative θ in its cohomology class. Indeed, if we replace θ by θ + dh, then φ is replaced by φ + π*h, and since π*h is invariant by π1(M), the constants c_γ do not change.
Conversely, for any representation ρ : π1(M) → (R^*_+, ×) we will construct a positive function g on M̃ which is ρ-equivariant, i.e. a*g = ρ(a)g for every a ∈ π1(M). To do this, let us pick a non-negative function f on M̃ satisfying the properties (i) and (ii) of Lemma 2.2 below. We introduce a function g obtained by averaging f over the deck transformation group (a sketch of this standard construction and of the equivariance computation is given after this proof), which is well-defined and smooth on M̃ since the sum is finite in the neighbourhood of any point of M̃ by property (ii). Moreover, g is a positive function on M̃ since f > 0 on V and π1(M) · V = M̃ by property (i), and a*g = ρ(a)g for every a ∈ π1(M). This shows that θ̃ := d(ln g) is an exact 1-form on M̃ which is π1(M)-invariant, hence it descends to a closed 1-form θ on M. We associate to ρ the cohomology class of θ in H^1_dR(M). This does not depend on the choice of f. Indeed, if g₁ is any other positive function on M̃ satisfying a*g₁ = ρ(a)g₁ for every a ∈ π1(M), then g₁/g is π1(M)-invariant, so it is the pull-back to M̃ of some positive function h on M. Then the closed 1-form θ₁ on M satisfying π*θ₁ = d(ln g₁) equals θ + d(ln h), hence is cohomologous to θ. One can easily check that the above defined maps are inverse to each other.
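The averaged function used above is presumably the standard weighted sum over the deck group; a sketch, assuming f is the function provided by Lemma 2.2:

g(x) := \sum_{\gamma \in \pi_1(M)} \rho(\gamma)^{-1}\,(\gamma^* f)(x), \qquad x \in \tilde M.

Its ρ-equivariance follows from the convention γ₁*γ₂* = (γ₂γ₁)*: substituting δ := γa, for any a ∈ π1(M),

a^* g = \sum_{\gamma} \rho(\gamma)^{-1}(\gamma a)^* f
      = \sum_{\delta} \rho(\delta a^{-1})^{-1}\,\delta^* f
      = \rho(a)\sum_{\delta} \rho(\delta)^{-1}\,\delta^* f
      = \rho(a)\, g.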
Let (ρ i ) i∈I be a partition of unity subordinate to the open cover (U i ) i∈I . By definition, we have ρ i ≥ 0, supp(ρ i ) ⊂ U i , and every point y ∈ M has an open neighbourhood U y such that the set (1) there is only a finite number of i ∈ I for which the open set π −1 (U π(x) ) meets the support of f i , so the function f := i∈I f i is well-defined, smooth and non-negative on M . We claim that it also satisfies the properties (i) and (ii).
Let now x ∈ M be any point. We define is empty for every i ∈ I \ I π(x) and has exactly one element for every i ∈ I π(x) . This shows that the set of γ ∈ π 1 (M) for which γ · V x meets the support of f is finite, having the same cardinal as I π(x) . (2) Conversely, if there exists an indecomposable representation ξ : π 1 (M) → GL 2 (R) with det ξ = ρ and which fixes the vector If π : M → M denotes as before the universal cover map and ϕ is a primitive of π * θ on M , then so that d θ α = 0 is equivalent to d(e −ϕ π * α) = 0 on M . Hence, there exists a function h ∈ C ∞ ( M ), such that e −ϕ π * α = dh, and thus γ * (dh) = e −cγ dh = ρ(γ −1 )dh. Therefore, there exists for each γ ∈ π 1 (M) a constant λ(γ) ∈ R, such that which equivalently reads We claim that the map ξ : is a group morphism. Indeed, if γ 1 , γ 2 ∈ π 1 (M), we have by (3): We clearly have that det(ξ) = ρ. It remains to check that ξ is indecomposable. Assuming by contradiction that there exists a one-dimensional subrepresentation V ⊂ R 2 of ξ with Thus V is preserved by ξ if and only if λ(γ −1 ) + c = ρ(γ)c for every γ ∈ π 1 (M).
Together with (3) we obtain γ*(h + c) = ρ(γ^{−1})(h + c) for every γ ∈ π1(M). This shows that e^φ(h + c) is the pull-back through π of a function on M, i.e. there exists s ∈ C^∞(M) such that h + c = e^{−φ}π*s. However, this yields π*α = e^φ dh = d(e^φ(h + c)) − e^φ(h + c)dφ = π*(ds − sθ) = π*(d_θs), i.e. α = d_θs and [α]_θ = 0, contradicting the assumption [α]_θ ≠ 0. We thus conclude that ξ is indecomposable.
(2) We denote by M_γ the matrix of ξ(γ) with respect to the standard basis of R². Consider again the function f ∈ C^∞(M̃, R₊) given by Lemma 2.2, and define the function g : M̃ → R² by averaging, as in the proof of Lemma 2.1. As before, the function g is well-defined and smooth, since the sum is finite in the neighbourhood of any point of M̃, by property (ii) in Lemma 2.2. Writing g = (g₁, g₂) and computing the pull-back a*g for any a ∈ π1(M), we obtain in particular a*g₂ = ρ(a)g₂. Since g₂ > 0 on M̃ and satisfies a*g₂ = ρ(a)g₂ for all a ∈ π1(M), we conclude, as in the proof of Lemma 2.1, that d(ln g₂) is the pull-back of a closed 1-form θ′ on M cohomologous to θ. Up to changing the representative, we may assume that π*θ = d(ln g₂).
We now assume that [α]_θ = 0 in H^1_θ(M), i.e. there exists s ∈ C^∞(M) such that α = d_θs. Using (2), this implies dh = d((1/g₂)π*s); hence there exists a constant c such that h = (1/g₂)π*s + c. We claim that the one-dimensional subspace spanned by the vector (c, 1)^t ∈ R² is invariant under ξ. Namely, according to (5) and to the definition of c, one finds for all a ∈ π1(M) that c + λ(a^{−1}) = ρ(a)c. Hence, for any a ∈ π1(M), the line spanned by (c, 1)^t is preserved by ξ(a). This contradicts the assumption that ξ is indecomposable, hence we conclude that [α]_θ ≠ 0.
Applications
We now derive some consequences of Theorem 2.3. For Corollary 3.1, by Theorem 2.3 we have to show that any representation ξ : π1(M) → GL2(R) with det ξ = ρ which fixes the vector (1, 0)^t is decomposable. We assume by contradiction that there exists such a representation ξ which is indecomposable.
Since [θ] ≠ 0, we have ρ ≠ 1, so there exists a ∈ π1(M) such that det(ξ(a)) = ρ(a) ≠ 1. Then ξ(a) is diagonalizable, so there exists a basis of R² such that the matrix of ξ(a) with respect to this basis is given by M_a = \begin{pmatrix} 1 & 0 \\ 0 & ρ(a) \end{pmatrix}. Since ξ is assumed to be indecomposable, its restriction to the commutator group is non-trivial by Lemma 2.4, so there exists b₀ with λ(b₀) ≠ 0. We then obtain for b₁ := b₀^{−1}a^{−1}b₀a, and inductively for the iterated commutators b_i, that λ(b_i) ≠ 0 for all i, which contradicts the hypothesis that π1(M) is nilpotent. For the finiteness statement, let b₁, ..., b_k be generators of G := [π1(M), π1(M)] and a₁, ..., a_m generators of π1(M). We denote by M_i the matrix of ξ(b_i) with respect to the standard basis of R². Since b_i ∈ G, we have ρ(b_i) = 1, so the matrix M_i has the form M_i = \begin{pmatrix} 1 & x_i \\ 0 & 1 \end{pmatrix}. Let us remark that at least one of the numbers x_i does not vanish, since otherwise the restriction of ξ to G would be trivial and then, by Lemma 2.4, ξ would be decomposable.
For any 1 ≤ j ≤ m and 1 ≤ i ≤ k, the element a_j^{−1}b_ia_j belongs to G, so there exist integers n_{ijℓ} with ξ(a_j^{−1}b_ia_j) = M₁^{n_{ij1}} ··· M_k^{n_{ijk}}. Computing both sides in GL2(R), and using the fact that the unipotent matrices M_ℓ commute, one obtains for all 1 ≤ j ≤ m and 1 ≤ i ≤ k the equality ρ(a_j)x_i = Σ_ℓ n_{ijℓ}x_ℓ. If for each fixed j ∈ {1, ..., m} we define the k × k matrix with integer entries N_j := (n_{ijℓ})_{i,ℓ}, then the above system of equations for fixed j can be equivalently written as N_j x = ρ(a_j)x, where x := (x₁, ..., x_k)^t. As previously noticed, at least one of the x_i's is non-zero. Thus ρ(a_j) must be an eigenvalue of N_j, so each ρ(a_j) can take at most k different values. Therefore, when j varies, there are overall at most k^m different possibilities for defining ρ or, equivalently, for defining a cohomology class [θ] with non-vanishing first twisted cohomology. Proof. It is well known that π1(S) has 2g generators γ₁, ..., γ_{2g} subject to the relation

∏_{i=1}^{g} [γ_{2i−1}, γ_{2i}] = 1.   (6)

Any representation ρ : π1(S) → (R^*_+, ×) is defined by the 2g positive real numbers y_i := ρ(γ_i). According to Lemma 2.4 and Theorem 2.3, we need to show that for every such ρ there exists a two-dimensional representation ξ : π1(S) → GL2(R) with det(ξ) = ρ, which fixes the vector (1, 0)^t ∈ R² and whose restriction to [π1(S), π1(S)] is non-trivial.
We look for ξ of the form ξ(γ_i) := \begin{pmatrix} 1 & x_i \\ 0 & y_i \end{pmatrix}. The commutator of two such matrices is again unipotent and upper triangular (a sketch of the computation is given after this proof), so by (6), the condition that ξ defines a representation reads (7). Moreover, such a representation is non-trivial on [π1(S), π1(S)] provided that (8) holds. Since g ≥ 2, for any positive real numbers y_i (1 ≤ i ≤ 2g), one can choose the real numbers x_i such that (7) and (8) are satisfied.
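For matrices of the above form the commutator computation can be sketched explicitly; the displayed conditions are the natural candidates for (7) and (8):

\left[\begin{pmatrix}1 & x_i\\ 0 & y_i\end{pmatrix},
      \begin{pmatrix}1 & x_j\\ 0 & y_j\end{pmatrix}\right]
= \begin{pmatrix}1 & \dfrac{x_i(y_j-1)-x_j(y_i-1)}{y_i\,y_j}\\ 0 & 1\end{pmatrix}.

Such unipotent matrices multiply by adding their upper-right entries, so the relation (6) translates into

\sum_{i=1}^{g} \frac{x_{2i-1}(y_{2i}-1)-x_{2i}(y_{2i-1}-1)}{y_{2i-1}\,y_{2i}} = 0,

while non-triviality on [π1(S), π1(S)] holds as soon as some pairwise commutator is non-zero, i.e. x_i(y_j − 1) ≠ x_j(y_i − 1) for some i, j. With g ≥ 2 and the y_i prescribed, both conditions can always be met by a suitable choice of the x_i.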
Examples
Let f_A be the diffeomorphism of the torus T² = R²/Z² induced by a matrix A ∈ SL2(Z) and let M_A be the mapping torus of f_A. In other words, M_A is the quotient of T² × R by the free Z-action generated by the diffeomorphism (p, t) ↦ (f_A(p), t + 1). The fundamental group of M_A is isomorphic to the semidirect product of Z acting on Z²: π1(M_A) ≅ Z² ⋊_A Z. We pick some non-zero constant c ∈ R and denote by θ_c the closed form on M_A whose pull-back to T² × R is c dt. The associated representation ρ_c : π1(M_A) → (R^*_+, ×) maps Z² to 1 and the generator of Z to e^c. The map λ is clearly a group morphism from Z² to (R, +), so it extends to a linear form on R². Moreover, by Lemma 2.4, λ is not identically zero, since [π1(M_A), π1(M_A)] = Z².
Since ava^{−1} = Av for every v ∈ Z², where a denotes the generator of the Z-factor, we get λ(Av) = e^{−c}λ(v). By (9), this is equivalent to ᵗA w = e^{−c}w, where w ∈ R² represents the linear extension of λ. Thus e^{−c} is an eigenvalue of ᵗA, and since the spectra of A and ᵗA are the same and det(A) = 1, it follows that e^c is an eigenvalue of A.
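The conclusion that e^c must be an eigenvalue of A is easy to check numerically for a concrete hyperbolic matrix; a sketch with the (assumed, for illustration) matrix A = [[2, 1], [1, 1]] ∈ SL2(Z):

import numpy as np

# Illustrative hyperbolic matrix: det A = 1, so A lies in SL2(Z)
A = np.array([[2, 1],
              [1, 1]])

# This A happens to be symmetric, so eigvalsh returns its real eigenvalues
eigvals = np.linalg.eigvalsh(A)
print("eigenvalues of A:", eigvals)   # ~ [0.382, 2.618], product = det A = 1

# The admissible values of c are the logarithms of the positive eigenvalues:
for mu in eigvals:
    if mu > 0:
        print(f"c = log({mu:.6f}) = {np.log(mu):+.6f}")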
For each such c we can then define a representation ξ as in Theorem 2.3, showing that the twisted cohomology H^1_{θ_c}(M_A) does not vanish. By [2, Theorem 4.5], the twisted cohomology associated to a closed 1-form which is parallel with respect to some Riemannian metric vanishes. The above example thus shows the existence of compact manifolds carrying nowhere-vanishing closed 1-forms which are not parallel with respect to any Riemannian metric.
Our last example concerns the twisted cohomology on Vaisman manifolds. Recall that a Vaisman manifold is a locally conformally Kähler manifold with parallel Lee form [8]. The space of harmonic 1-forms on a compact Vaisman manifold (M, g, J) with Lee form θ decomposes as H¹(M, g) = Rθ ⊕ H¹₀(M, g), where H¹₀(M, g) is J-invariant and consists of harmonic 1-forms pointwise orthogonal to θ and Jθ (see for instance [3, Lemma 5.2]). That means that every harmonic 1-form on M can be written as β = tθ + α, with t ∈ R and α ∈ H¹₀(M, g). By [4, Lemma 3.3], every harmonic form β = tθ + α with t > 0 is the Lee form of a Vaisman metric on M. In particular, for every non-vanishing t, there exists a metric on M with respect to which β is parallel. By [2, Theorem 4.5], the twisted cohomology H*_{tθ+α}(M) vanishes for all t ≠ 0 and α ∈ H¹₀(M, g). It remains to understand the case t = 0, i.e. the twisted cohomology associated to forms α ∈ H¹₀(M, g). It turns out that there exist Vaisman manifolds (M, g) with H¹₀(M, g) ≠ 0, for which H*_α(M) is non-zero for every α ∈ H¹₀(M, g) \ {0}.
Example 4.3. Let S be a compact oriented Riemann surface and let π : N → S be the principal S 1 -bundle whose first Chern class is the positive generator e ∈ H 2 (S, Z). For every Riemannian metric g S on S, the 3-dimensional manifold N carries a Riemannian metric g N making π a Riemannian submersion, and which is Sasakian. Consequently, the Riemannian product (M, g) := S 1 × (N, g N ) is Vaisman. Its Lee form is just the length element of S 1 , denoted by θ = dt.
The Gysin exact sequence associated to the fibration π : N → S reads

0 → H¹_dR(S) →^{π*} H¹_dR(N) → H⁰_dR(S) →^{∪e} H²_dR(S).

By the choice of c₁(N) = e, the last arrow is an isomorphism, thus showing that π* : H¹_dR(S) → H¹_dR(N) is an isomorphism too. Since π : (N, g_N) → (S, g_S) is a Riemannian submersion, we thus have π*(H¹(S, g_S)) = H¹(N, g_N).
Let α be a non-zero harmonic form in H¹(S, g_S) and let ρ : π1(S) → (R^*_+, ×) be the character of π1(S) associated to α, given by Lemma 2.1. Clearly, the character of π1(M) associated to p*α is ρ̃ := ρ ∘ p_*, where p_* : π1(M) → π1(S) is the induced morphism of the fundamental groups. Note that, since the fibers of p : M → S are connected, the exact homotopy sequence shows that p_* is surjective.
Lead concentrations in antlers of European roe deer (Capreolus capreolus) from an agricultural area in Northern Germany over a 119-year period—a historical biomonitoring study
We analyzed the lead content in antlers of 90 adult European roe bucks (Capreolus capreolus) that had been culled between 1901 and 2019 in an agricultural-dominated hunting district in Lower Saxony (Northern Germany). Antler lead values ranged between 0.2 and 10.9 mg/kg dry weight. Median lead concentration was highest after World War II, during a period (1956–1984) of rapidly increasing mass motorization and use of leaded gasoline. Lead levels in antlers decreased markedly after the phase-out of leaded gasoline, but high values were still found in some recently collected antlers. This could indicate persistent lead pollution from former use of lead additives to gasoline, other traffic-related sources, or from agricultural sources (e.g., sewage sludge, fertilizers). This study highlights the suitability of analyzing roe deer antlers for the historical monitoring of changing lead levels in the environment. By collecting antlers and providing them for study, local hunters can significantly contribute to environmental surveillance and the monitoring of environmental pollution by bone-seeking contaminants.
Introduction
Lead (Pb) is a metal that has no physiological function and is toxic even at low concentrations (Ewers and Schlipköter 1991;Pattee and Pain 2003;Ma 2011;Caito et al. 2017;Maret 2017). Various mammalian organs and organ systems are affected by lead toxicity, the most severe impacts concerning the nervous and hematopoietic systems and the kidneys (Ma 2011;Caito et al. 2017;Maret 2017). The developing brain is particularly susceptible to the toxic effects of lead, and therefore, lead neurotoxicity is an issue of special concern in children ( Lidsky and Schneider 2003;Caito et al. 2017).
Lead mining dates back to at least the 4th millennium BC, and Pb mining and smelting activities were widespread in ancient Greek and Roman societies (Retief and Cilliers 2005). The emissions from ancient Greek and Roman lead and silver mining and smelting activities caused widespread lead pollution in the Northern Hemisphere that has been traced both in Greenland ice cores (Hong et al. 1994) and in European bogs (Shotyk 1998).
Compared to the pre-industrial period, the production and anthropogenic discharge of lead dramatically increased during the industrial age, with a particularly steep rise in the second half of the twentieth century (Cullen and McAlister 2017). Thus, global refined lead production amounted to 11.76 million tonnes in 2019 (U.S. Geological Survey 2020). Anthropogenic lead emissions to the atmosphere during the mid-1990s, approximately 120,000 tonnes/year, by far exceeded median fluxes from natural sources, estimated at 12,000 tonnes/year (Cullen and McAlister 2017). The drastically increased discharge of lead from human activities led to widespread lead pollution of the environment on a global scale, with the highest levels of contamination near urban-industrial areas (Nriagu 1990; Hernberg 2000).
Major anthropogenic sources of lead release to the environment include the combustion of fossil fuels for electricity and heat production, the mining and smelting of lead and other metal ores, iron and steel production, cement production, the use of lead-containing products (like batteries, ammunition, and paint), waste disposal, and vehicular traffic (Stroud 2015; Cullen and McAlister 2017; Baranowska-Bosiacka et al. 2019; Pain et al. 2019a, 2019b). Emissions from the latter source were the dominant cause of global lead pollution during much of the twentieth century, resulting in serious impacts on human and environmental health (Cullen and McAlister 2017; Filella and Bonet 2017). Lead emissions from vehicular traffic were largely caused by the use of alkyl-lead additives (mostly tetraethyl lead, TEL) as antiknock agents in gasoline (Stroud 2015; Filella and Bonet 2017). The inorganic lead derived from the organolead additives is highly persistent in the environment and, despite the more recent phasing out of leaded gasoline, high concentrations of legacy lead are therefore present in soils (Filella and Bonet 2017).
Commercial production of TEL started in 1923 and worldwide increased dramatically during the following decades (Hernberg 2000). The struggle to remove lead from gasoline extended over many years (Hernberg 2000; Needleman 2000; von Storch et al. 2003) and was achieved against strong resistance from industry and industry-funded researchers that denied or tried to downplay the public health risks of using leaded gasoline (Hernberg 2000; Needleman 2000). In the Federal Republic of Germany (FRG), the production and importation of gasoline containing more than 0.4 g Pb/liter were prohibited in 1972. Up to then, the usual lead content of gasoline had been 0.6 g/liter. In 1976, the permissible concentration of lead in gasoline was further reduced to 0.15 g/liter (von Storch et al. 2003). Unleaded gasoline (tolerable lead content of 0.013 g/liter) was introduced in the FRG in October 1984, regular leaded gasoline was banned in 1988 (while leaded premium gasoline was still allowed), and since 1997, leaded gasoline is no longer sold in Germany. The lowering of the lead content of gasoline and the subsequent phase-out of leaded gasoline, along with measures reducing lead release from other sources, caused a drastic reduction of anthropogenic lead emissions to the atmosphere, from 16,446 tonnes in 1975 (values for FRG and German Democratic Republic combined, von Storch et al. 2003) to 716 tonnes in 1995 in reunified Germany (European Monitoring and Evaluation Programme (EMEP), Centre on Emission Inventories and Projections 2021). Since then, lead emissions in Germany further declined to 207 tonnes in 2018 (EMEP, Centre on Emission Inventories and Projections 2021). Currently, industry is the main lead emission source, followed by road traffic, the latter now primarily due to tire wear and brake abrasion (McKenzie et al. 2009; Grigoratos and Martini 2015; Adamiec et al. 2016; Adamiec 2017; Bourliva et al. 2018). Recent studies on lead exposure of biota and humans demonstrate that lead contamination of the environment remains an issue of high concern (Müller-Graf et al. 2017; Stokke et al. 2017; Gerofke et al. 2018, 2019; Martin et al. 2019; Pain et al. 2019a; Helander et al. 2019; Taggart et al. 2020; Lermen et al. 2021).
Lead uptake into the mammalian body occurs primarily via the gastrointestinal tract and the respiratory system. Upon absorption, lead rapidly enters the bloodstream and is transported to different tissues (Ma 2011;Caito et al. 2017). In the body, lead accumulates predominantly in mineralized tissues, and approximately 95% of the body burden of lead in adult humans (75% in children) is present in bones and teeth (Caito et al. 2017). Lead is taken up into the bone mineral (carbonated hydroxyapatite), where the Pb 2+ ion is incorporated at Ca 2+ sites during bone formation, while after the stop of crystal growth, it can replace Ca 2+ by ion substitution (Pemmer et al. 2013). Lead stored in the skeleton is considered a biological marker of long-term exposure; however, under certain physiological and pathological conditions associated with increased bone turnover, skeletally stored lead can be mobilized and re-enter the bloodstream (Caito et al. 2017).
Several studies have demonstrated that antlers are well suited to monitor lead pollution of the environment (Sawicka-Kapusta 1979;Medvedev 1995;Kierdorf and Kierdorf, 1999, 2000a,b, 2002a,b, 2003, 2005, 2006Pokorny 2000;Pokorny et al. 2009;Sobota et al. 2011;Wieczorek-Dąbrowska et al. 2013;Cappelli et al. 2020;Giżejewska et al. 2020). Antlers are periodically replaced, bony cranial appendages of male deer (and females in the reindeer, Rangifer tarandus) that are grown and cast from permanent protuberances of the frontal bones known as pedicles (Goss 1983;Landete-Castillejos et al. 2019). The annual antler cycle of male deer is tightly coupled to their reproductive cycle and controlled by changes in androgen concentrations in blood. In deer from temperate and arctic regions, the reproductive cycle is closely linked to the photoperiod (Goss 1983;Bubenik 1990;Lincoln 1992). Antlers are the fastest forming bones in the animal kingdom and grow during a seasonally fixed timespan of a few months. After the shedding of the skin (velvet) that covers the antlers during their growth, they die off and the bare bony ("hard") antlers are exposed (Goss 1983;Lincoln 1992;Landete-Castillejos et al. 2019).
Growing antlers can accumulate large amounts of "bone-seeking" elements like lead during their short, seasonally fixed lifespan (Sawicka-Kapusta 1979; Kierdorf and Kierdorf 1999, 2000a, 2000b, 2002a, 2005, 2006; Pokorny 2000; Pokorny et al. 2009; Sobota et al. 2011; Wieczorek-Dąbrowska et al. 2013; Cappelli et al. 2020; Giżejewska et al. 2020). Antlers can therefore be used to monitor ambient lead levels, as they constitute "naturally standardized" environmental samples (Kierdorf and Kierdorf 1999, 2005, 2006; Tataruch and Kierdorf 2003). In contrast to other bones, the lead content of antlers constitutes a marker of exposure over a medium timescale of a few months (the antler growth period). Furthermore, contrary to other types of bone, the lead content of antlers is not markedly affected by the age of an individual (Pokorny et al. 2004). Since antlers are collected as trophies by hunters and kept in private or public collections, larger sets of antlers with known dates of collection and locality are often available for study. Such antler series constitute environmental archives whose analysis enables the reconstruction of temporal changes in ambient lead concentrations (Kierdorf and Kierdorf 2005, 2006).
Among European cervid species, the European roe deer (Capreolus capreolus) is for several reasons particularly suited as a bioindicator. Firstly, it is by far the most abundant deer species, with a harvest (including deaths from other causes than hunting) of 1,226,169 individuals in the hunting year 2019/2020 in Germany, and an annual harvest of more than 1 million animals over the last 20 years (DJV. Deutscher Jagdverband 2021). Secondly, the species is highly adaptable and therefore inhabits a wide range of habitats (Andersen et al. 1998). Thirdly, it has a relatively small home range, enabling a monitoring with rather high spatial resolution (von Raesfeld et al. 1978;Danilkin 1995;Stubbe 1997).
The present paper reports lead concentrations in the antlers of European roe deer that had been culled in a hunting district in Northern Germany over a period of 119 years (1901-2019). This time span covers the period from before the era of mass motorization and the use of leaded gasoline to after the phase-out of leaded gasoline and the reduction of anthropogenic lead emissions from other sources. We hypothesized that these changes would be reflected by variation in antler lead levels over the study period.
Specimens and study area
We analyzed the antlers of 90 adult roe bucks that had been culled in the hunting district Harsum between 1901 and 2019. All antlers were regenerated ones. Adult roe bucks in Germany cast their old antlers between October and December, followed by antler regrowth during winter. The velvet is shed from the antlers in March/April, and the rutting period extends from mid-July to mid-August (von Raesfeld et al. 1978;Stubbe 1997). Antlers were assigned to the years in which the respective bucks had been taken. For thirteen antlers from the period 1958-1981, the exact year of collection was not known. No antlers from the period 1940-1955 were available, as no hunting was performed in the study area during most of World War II, and private ownership of firearms was not allowed after the end of the war until 1955.
The hunting district Harsum (altitude about 82 m above sea level) lies in the northern part of the county of Hildesheim (federal state of Lower Saxony, Germany) (Fig. 1). The area has a temperate climate, with annual mean temperature of 8.7°C, an annual precipitation of 676 mm (climate-data.org 2020), and prevailing westerly winds (DWD CDC. Deutscher Wetterdienst Climate Data Center 2018). The hunting district is bordered by a branch canal of the Mittelland canal in the west that started operations in 1928 (Hafenbetriebsgesellschaft mbH Hildesheim n.d.). Less than 500 meters to the west of the hunting district, a north-south oriented section of the motorway A7/ E45 is located that was completed in 1962. Further larger traffic infrastructures are the federal highway B494, a section of which is located in the east of the hunting district, and a railway line (electrified in 1965) that divides the district into a larger western and a smaller eastern part. The hunting district has a size of about 1120 hectares and consists mainly of arable farmland (74%) and few patches of deciduous forest (9%). Urban settlements and industrial/commercial areas account for 13% and 4%, respectively (EEA. European Environment Agency 2020). Based on the pattern of land use, the roe deer inhabiting the study area are considered to be of the "field" ecotype (Kałuziński 1982;Demesko et al. 2018). There are nine covered contaminated sites (slurry ponds, dredge material dumps, pond fillings, and a former clay pit) within the hunting district, which add up to an area of approx. 21 ha (2% of the study area; NIBIS Kartenserver 2020). No further information was available about a possible hazardous potential of these areas.
Bone sampling and analysis
Antler bone samples were obtained as described by Kierdorf (2000b, 2002b). Prior to sampling, the antlers were thoroughly cleaned with a nylon brush to prevent contamination of the bone samples by dust or dirt. Subsequently, a hole was drilled into the back of the main beam of each antler approximately 1.5 cm above the antler-pedicle junction using a hand-held electric drill with a tungsten carbide cutter (Fig. 2). The bone powder obtained from each pair of antlers was collected, thoroughly mixed by stirring, individually stored in a small plastic container, and labeled with a consecutive number for each buck sampled.
Precisely weighed samples (approximately 0.2 g for each double determination) of antler bone powder were digested with 5 ml of 65% (w/v) nitric acid (HNO 3 ; Suprapur®) and filtered into glass vials. The digestion vessels were rinsed with high-purity water (AnalaR® NORMAPUR ® ISO 3696 Grade 3), and the content was also filtered into the glass vials, which were made up to 25-ml volume with the high-purity water. Subsequently, each sample was decanted into a 50-ml polyethylene container and stored in a fridge at 4°C until analysis.
Lead concentrations were determined with a graphite furnace atomic absorption spectrometer (ContrAA 800D, Analytik Jena), using a 5-point calibration with 10, 30, 50, 80, and 100 μg Pb/l. After every tenth sample, a certified reference material (NIST Standard Reference Material (SRM)® 1486 Bone Meal) was measured. The recovery rate of the SRM was 96.2 ± 13.4% (mean ± SD), and the nominal concentrations in the antler bone samples were corrected for the recovery rate. The limit of detection (LOD) of the analytical method, calculated using the calibration curve method, was 0.2 mg Pb/kg. All determined lead concentrations in antler bone, given as milligrams per kilogram on an air-dry weight (d.w.) basis, were ≥ LOD. Individual lead values were grouped into three sampling periods for statistical comparison: period 1 (1901-1939), period 2 (1956-1984), and period 3 (1985-2019).
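To make the sample arithmetic explicit (digest volume 25 ml, sample mass ≈ 0.2 g, recovery correction against the bone-meal SRM), a small Python sketch; the instrument reading below is a placeholder, not a measured value:

def antler_pb_mg_per_kg(reading_ug_per_L, sample_mass_g,
                        digest_volume_mL=25.0, recovery=0.962):
    """Lead content of an antler bone sample (mg/kg dry weight) from a
    graphite-furnace AAS reading of the digest, corrected for the
    recovery rate determined with NIST SRM 1486 Bone Meal."""
    pb_ug = reading_ug_per_L * digest_volume_mL / 1000.0  # ug Pb in digest
    nominal = pb_ug / sample_mass_g                       # ug/g = mg/kg
    return nominal / recovery

# Placeholder reading: 26 ug/L for a 0.205 g sample -> ~3.3 mg Pb/kg
print(f"{antler_pb_mg_per_kg(26.0, 0.205):.1f} mg Pb/kg d.w.")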
Results
The lead content of the analyzed antlers ranged between 0.2 and 10.9 mg/kg. Minimum lead values decreased from period 1 over period 2 to period 3, while the coefficients of variation increased (Table 1). Antler lead content differed significantly among the three sampling periods (Kruskal-Wallis H test: chi-squared = 15.184, df = 2, P < 0.001). Concentrations significantly (P < 0.001) increased from period 1 (median: 1.6 mg Pb/kg) to period 2 (median: 3.2 mg Pb/kg), and significantly (P < 0.01) decreased from the latter to period 3 (median: 1.3 mg Pb/kg, Fig. 3). The difference between periods 1 and 3 was not significant (P = 0.549). Before World War II, the clustering of the individual values is rather dense, with only one antler showing a markedly increased lead content of 10.3 mg/kg. In period 2, the scatter of the data points is still rather small, with two outlying low values (0.3 and 0.6 mg Pb/kg) of antlers collected, respectively, in 1982 and 1984. Period 3 shows the highest intra-sample variation, as well as the highest individual value (10.9 mg Pb/kg) for an antler collected in 2018.
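The statistical comparison can be reproduced with standard tools; a sketch with made-up lead values (scipy's kruskal implements the Kruskal-Wallis H test; pairwise Mann-Whitney U tests are one common follow-up, though the post-hoc procedure used in the study is not stated here):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Made-up lead values (mg/kg) for the three sampling periods
period1 = rng.lognormal(np.log(1.6), 0.4, 25)
period2 = rng.lognormal(np.log(3.2), 0.4, 30)
period3 = rng.lognormal(np.log(1.3), 0.8, 35)

h, p = stats.kruskal(period1, period2, period3)
print(f"Kruskal-Wallis: H = {h:.3f}, p = {p:.4f}")

# One possible post-hoc: pairwise Mann-Whitney U tests
for name, a, b in [("1 vs 2", period1, period2),
                   ("2 vs 3", period2, period3),
                   ("1 vs 3", period1, period3)]:
    u, pu = stats.mannwhitneyu(a, b)
    print(f"period {name}: U = {u:.0f}, p = {pu:.4g}")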
Discussion
Our study revealed a marked drop in overall lead concentration in roe deer antlers from the study area following the introduction of unleaded gasoline in the FRG in October 1984. These findings are consistent with the results of other studies that focus on the trend of lead concentrations in antlers of European roe deer over time (Table 2), as well as in antlers of red deer (Cervus elaphus) (e.g., Giżejewska et al. 2020) and in other terrestrial biota (e.g., Schnyder et al. 2018;Helander et al. 2019). This indicates the overall success of the phase-out of leaded gasoline and additional measures taken to reduce lead release to the environment from anthropogenic sources.
While all other European deer species grow their antlers during spring and summer, antler growth in the European roe deer occurs in autumn and winter. This is an important difference, as lead concentrations in plants browsed by deer are markedly higher in autumn/winter than in spring (Reuter et al. 1996;Pattee and Pain 2003) and therefore roe bucks are exposed to higher lead levels than males of other deer species during the period of antler growth, which is a phase of high mineral demand.
The pronounced variation in antler lead concentration observed in period 3 is of special interest. While the overall decline in lead levels compared to the preceding period can be attributed to a reduction of emission from motor traffic, the increased variation in lead concentrations in period 3 compared to periods 1 and 2 points to an increased variability in exposure conditions of the roe deer from our study area in more recent times. Some bucks apparently took up larger amounts of lead, including the individual with the highest antler lead concentration of the entire sample. Further studies are needed to elucidate the sources and pathways of the underlying high but apparently locally restricted exposure. In previous studies, local point sources could be identified as the cause of an increased pollutant exposure of roe deer (e.g., Kierdorf and Kierdorf 2002b; Pokorny et al. 2009). Vehicular traffic is today still an important source of lead release to the environment; after the phase-out of leaded gasoline, however, current lead release from traffic is mainly due to wear of brakes and tires (De Silva et al. 2021). Moreover, due to the persistence and low mobility of inorganic lead, deposition from the previous use of leaded gasoline still contributes to current high lead concentrations in roadside soils (MacKinnon et al. 2011). These authors suggest that previously deposited lead is continuously redistributed, maintaining a more or less constant transfer into the biosphere along roads. In line with this view, other studies also found high concentrations of lead in roadside dust, and in soils and biota along roads (Walraven et al. 2014; Adamiec et al. 2016; Adamiec 2017; De Silva et al. 2021). We assume that roe bucks from our study area were exposed to traffic-related lead near the motorway (A7) and the federal highway (B494). Especially lead bound to the particulate matter fraction of < 2.5 μm in diameter is readily distributed via atmospheric transport and has a high bioavailability (Padoan et al. 2017; De Silva et al. 2021).
Slovenia/Upper Meža Valley
It has been stated that deposition on surfaces of preferred feeding plants is a main exposure route of lead for browsing mammals (Tataruch and Kierdorf 2003). According to Demesko et al. (2018), field roe deer exhibit higher lead burdens than conspecifics from forest habitats, as dry deposition of lead on forest floor vegetation is relatively low compared to open fields (Grönholm et al. 2009;Schaubroeck et al. 2014). Regarding the lead exposure of field roe deer, also the contribution of lead from manure, sewage sludge, and mineral fertilizers must be considered (Knappe et al. 2008). It can further be assumed that lead bound to soil particles is mobilized by wind erosion on agricultural fields after harvest, which is also intensified by agricultural tillage, and deposited on grazing plants. In Northern Germany, this is particularly the case in late autumn/winter (Willand et al. 2014), i.e., during the antler growth phase of European roe bucks.
Conclusion
The present study revealed marked variation in antler lead concentration of roe deer in an agricultural-dominated area of Northern Germany over a period of 119 years. The findings underscore that the analysis of antlers, which constitute "naturally standardized" monitoring samples, provides a suitable tool for assessing temporal variation in environmental lead levels. The widespread, abundant, and highly adaptable European roe deer is particularly suited as a monitoring species in cultural landscapes. A main advantage of using roe deer antlers as monitoring units is that large samples spanning longer periods of time are readily available, thereby allowing the reconstruction of time trends of the level of bone-seeking pollutants in roe deer habitats. As the antlers can be obtained from individuals harvested in the course of regular management operations for population control, there is no need to kill animals only for providing samples. Since antlers are collected and kept as trophies, local hunting communities can provide the necessary material for studies on changing levels of boneseeking contaminants in their neighborhood. In this way, hunters are able to significantly contribute to environmental surveillance and monitoring in an interesting example of citizen science (Cretois et al. 2020).
Author contribution CL, UK, and HK contributed to the conception and design of the study. CL and HK performed the sampling. Sample preparation, data collection, and analysis were performed by CL. CL, UK, and HK participated in the writing and revision of the draft manuscript and approved the final version.
Funding Open Access funding enabled and organized by Projekt DEAL.
Data availability The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate Not applicable.
Competing interests The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
|
2021-05-28T13:58:54.049Z
|
2021-05-27T00:00:00.000
|
{
"year": 2021,
"sha1": "a4a05edfbb8b6791d58a6314fa8de3dde2f241a8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11356-021-14538-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a4a05edfbb8b6791d58a6314fa8de3dde2f241a8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14875615
|
pes2o/s2orc
|
v3-fos-license
|
Cell Division, Differentiation and Dynamic Clustering
A novel mechanism for cell differentiation is proposed, based on the dynamic clustering in a globally coupled chaotic system. A simple model with metabolic reaction, active transport of chemicals from media, and cell division is found to show three successive stages with the growth of the number of cells; coherent growth, dynamic clustering, and fixed cell differentiation. At the last stage, disparity in activities, germ line segregation, somatic cell differentiation, and homeochaotic stability against external perturbation are found. Our results, in consistency with the experiments of the preceding paper, imply that cell differentiation can occur without a spatial pattern. From dynamical systems viewpoint, the new concept of ``open chaos"is proposed, as a novel and general scenario for systems with growing numbers of elements, also seen in economics and sociology.A
Introduction
Why do cells differentiate? The orthodox answer to this question is that the mechanism is completely determined by genetic codes. This belief is widely accepted by most molecular biologists. Is it correct though ? It is known that genes are not changed in the course of cell differentiation [1]. Cells with identical genes can differentiate, even when in the same environment. Hence it is not a trivial question how identical sets of genes can produce a variety of different cells, not only from a biological but also from a dynamical systems viewpoint. Experiments on cell differentiation of E. Coli by one of the authors (TY), however, leads to a serious question to this widely accepted answer. Indeed, as reported in the preceding paper [2], cells with identical genes may split into several groups with different enzymatic activities. Even prokaryote cells with identical genes can be differentiated there. We note that these cells are under liquid culture, thus are in an identical environment. Of course, spatial information is also important in differentiation. However, usually one differentiation. Often, differentiation and pattern formation are discussed together without distinction. The experimental results of the preceding paper, however, suggest that differentiation can occur without such spatial (positional) information. In the present paper we demonstrate theoretically that cell differentiation is possible without spatial information. Here we note that cells interact with the environment, which is affected by all the other cells. All cells interact with other cells through the environment. The purpose of the present paper is to present an alternative answer to the top question, with an emphasis on the cell-environment interaction. Our answer is based on dynamic clustering in a globally coupled dynamical system. One of the simplest examples of such clustering is given by globally coupled dynamical systems [3]. When many identical elements with chaotic dynamics interact globally through a mean field, it is found that the elements differentiate into some clusters. In each cluster, elements oscillate synchronously, while elements in a different cluster oscillate with a different phase, frequency, or amplitude. Thus spontaneous differentiation of elements is possible through interaction among chaotic elements. In the present paper we extend this idea of clustering to spontaneous cell differentiation by introducing a simple metabolic chemical reaction and cell division dynamics. The clustering mechanism, although it provides a universal key to differentiation, is not enough for the explanation of fixed differentiation in multi-cellular organisms. Indeed, differentiation by clustering is temporal in nature: A group of cells with rich substrates at one time turns to be poor after some time in accordance with the oscillation. By introducing a model with a growing number of cells, we find that the appearance of a new stage after this dynamic clustering, where the differentiation of cells is fixed. Disparity in activities emerges. Thus our result will provide a novel clustering behavior, also from the viewpoint of dynamical systems. As a dynamical system, our problem has one important and innovating feature: growth of the dimension of phase space with cell divisions. Since the chemical dynamics in each cell is represented by a set of differential equations of a given number of degrees of freedom, the total number of degrees of freedom of the system increases with the number of cells N(t). 
In our system, the increase is closely associated with chaotic instability. Our model of cell growth consists of (i) nonlinear dynamics in each cell (ii) nonlinear and global interaction among cells through a medium (iii) growth and death of a cell depending on its internal state. Such processes can often be seen in a system with growth, by reinterpreting a cell as any replicating unit. For example, economic developmental processes have the above three features, by regarding resources and money as the chemicals and activities. Emergence of spontaneous differentiation in our cell society, thus, can be related with the sharing of resources, division of labor, and the formation of classes in an economical or sociological system. The present paper is organized as follows. A specific model is introduced in §2, combining the processes of metabolic reaction, active transport of chemicals into cells, cell division, and death. Numerical results of the model, given in §3, show three stages of differentiation; coherent growth, dynamic clustering, and fixation of differentiation. Detailed dynamical aspects are presented. In §4, implications of our results to cell differentiation are given. The origin of germ line segregation, differentiation of somatic cells in multicellular organisms, the possible mechanism of programmed death, and homeochaotic stability anism of chaotic instability in a system with a growing number of degrees of freedom.
Model
The biochemical mechanisms of cell growth and division are very complicated, and include a variety of catalytic reactions. The reaction occurs both at the levels of inter-and intra-cells. Hence it is almost impossible to construct a complete descriptive model. Here we introduce a simpler model which captures the essence of these processes (see Fig.1 for a schematic illustration of our model). --- Fig.1 ---In the model we have to include the following processes: • Metabolic Reaction within each Cell : Intra-cell Dynamics i (t), the concentration of m-th chemical species at the i-th cell, at time t. The corresponding concentration of the species in the medium is denoted as X (m) (t). We assume that the medium is well stirred, and neglect the spatial variation of the concentration. Furthermore we regard the chemical species x (0) ( or X (0) in the media) as playing the role of the source for other substrates. Another assumption we make here is the existence of enzymes (for convenience and simplicity we assume that there is a corresponding enzyme E (m) for each chemicals x (m) ). (B) Metabolic Reaction The metabolic reaction here is schematically shown in Fig.1 b). In the present paper we make further simplifications: (1) Only three chemicals including the source x (0) are considered. In other words, each cell contains the variables x i (t). Of course, this is a vast simplification. There can be several orders of cascade reactions in the course of synthesis of DNA [5]. We will see, however, that our simple reaction scheme is enough for our new scenario of cell differentiation. (2) Simplification of enzyme dynamics: To be specific, we take the reaction scheme shown in Fig.1b). Here the enzyme E 0 is constitutive while the others, E 1 and E 2 , are inductive [6]. First we assume that the concentration of the enzyme E (0) is constant, given by E 0 , for simplicity. Here, the generation of the enzymes E (1) and E (2) are activated by the chemical x (2) . The rate equation within each cell (including the enzymatic dynamics) is written as: (1) The dynamics of enzymes E i (t)). as one of the simplest choices. Each enzyme is created from chemicals by reactions within a cell. We neglect the possibility of transportation of enzymes across cells ( since the sizes of enzymes are much larger). For simplicity, we adiabatically solve the equations to get E i (t), with constants e 1 and e 2 . Throughout the paper we adopt this elimination. Further simplification we make here is the neglection of the reaction x (2) → x (1) . Thus the terms with E (2) are neglected, although some simulations with these terms lead to qualitatively identical results. (C) Active Transport and Diffusion through Membrane A cell takes chemicals from the surrounding medium. The rates of chemicals transported into a cell are proportional to their concentrations outside. Further we assume that this transport rate also depends nonlinearly on the internal state of a cell. Since the transport here requires energy [4], the transport rate depends on the activities of a cell. The rate can depend nonlinearly on the chemicals in the cell. To be specific, we choose the following form; where ν is taken to be 3 throughout the paper, although other nonlinear dependences ( ν > 1, i.e., with positive feedback effect) lead to qualitatively similar results. The summation is introduced here to mean that a cell with more chemicals is more active. 
This form, again, is rather arbitrarily chosen, but similar forms with "active" transport can lead to the same result. Besides the above active transport, the chemicals spread out through the membrane with normal diffusion by Combining the processes (B) and (C), the dynamics for x (m) Since the processes (B) here are just the transportation of chemicals through membranes, the sum of the chemicals must be conserved. The dynamics of the chemicals in the medium is then obtained by converting the sign, i.e., Since the chemicals in the medium can be consumed with the flow to the cells, we need some flow of chemicals (nutrition) into the medium from the outside of the container. Here we assume that only the source chemical X 0 is supplied by a flow into the container. By denoting the external concentration of the chemicals by X 0 and its flow rate per volume of the tank by f , the dynamics of source chemicals in the media is written as (D) Cell Division Through chemical processes, cells can replicate. For the division, accumulation of some chemicals is required. In our model the final product from the chemical species "2" is assumed to act as the chemical for the cell division ( note the term −δ × x (2) i (t) in eq. 2). For example assume that chemical 2 is a mono-nucleotide. Then the DNA synthesis process occurs through the reaction 2 → DNA in Fig.1; thus dDN A dt ∝ x (2) . Accordingly we impose the following condition for cell division: the i-th cell (born at time t = t 0 (i)) divides into two at a time T such that is satisfied, where R is the threshold for cell replication. Here again, choices of other similar division conditions can give qualitatively similar results as those to be discussed. The essential part is that the division condition satisfies an integral form representing the accumulation of DNA as in eq. (10). When a cell divides, two almost identical cells are formed. The chemicals x 2 −ǫ respectively with a small "noise" ǫ, a random number with small amplitude, say over [−10 −3 , 10 −3 ]. We should note that this inclusion of imbalance is not essential to our differentiation. Indeed any tiny difference is amplified to yield a macroscopic differentiation. (E) Cell Death To avoid infinite growth, a condition for cell death is further imposed.
Here we choose either a deterministic or a probabilistic death mechanism. In the former case, we choose the condition for the death as where S is a threshold for "starvation". This choice is again rather arbitrary. We have assumed that a cell dies when the chemicals included therein are too little. Again, a choice of similar other forms is expected to give the same results. Here, the chemicals inside the dead cells are released into the medium. Thus there is a jump in X (j) (t) at every cell death. In the probabilistic case, a number of randomly chosen cells (together with the chemicals included therein) are removed per given time steps (decimation). This choice is closer to experimental situations, since incuvated cells are decimated per some time steps in order to avoid the divergence of the number of cells (see the preceding paper). In fact, both the deterministic and the probabilistic deaths give qualitatively the same behavior with regard to the three stages to be discussed.
Results: Three Stage Differentiation
A typical example for the growth of the number of cells is plotted in Fig.2, as well as the overlaid oscillation of chemicals x (2) i (t) over all cells in Fig.3 . In Fig.2, the number of cells doubles at certain time during the first stage, while the number increases almost linearly with time at the last stage. As can be seen in Fig.3, we have observed the following three stages of evolution, with the growth of the number of cells, for a wide range of parameters. -- Fig.2 of cells oscillate coherently. Thus cells grow coherently. Starting from a single cell, the number increases as 1 → 2 → 4 → 8 → 16 · · · at the stage I. (b) Stage II: Dynamic clustering If there is intra-and/or inter-cell nonlinear dynamics, the coherent oscillation can lose its stability as the cell number increases. As can be seen in Fig. 3 of the overlaid time series of x (2) i (t), the variance of the cells' oscillations is enhanced with time. Then dynamic clustering sets in. Depending on the parameters, the number of clusters can be different. As the diffusion constant D is decreased, the number of clusters increases, and the oscillation gets more complicated. The projected orbit of (x (1) i (t)) is given in Fig.4 a). The phases of oscillations, as well as their amplitudes vary by cells. Some cells start to have large concentrations of chemicals, which prepares them for the differentiation at the next stage. The origin of the clustering (i.e., temporal differentiation) lies in the instability of the reaction and transport dynamics. Any tiny difference between two cells is amplified to a macroscopic difference. The mechanism of clustering has been investigated in globally coupled maps [3], although here there exists a novel feature in dynamical systems, as will be discussed in §5. We note that the clustering here is possible by the interaction among cells through the chemicals in the medium, whose concentrations can also oscillate in time.
i (t)) in Fig.4b). We note that the cells are classified into two completely distinct groups; the population of cells in one group is very few (one or two in the example) but they contain large concentrations of chemicals, while the population of the other group is large but the concentrations of the chemicals are much smaller. We call the former cells active and the latter as sleeping. The active cells replicate much faster than the other (sleeping) ones. In Fig. 4b), orbits with a large amplitude are those of the active group. The segment of straight lines at the upper middle is a result of the cell division. (Here x (2) of the sleeping cells is so small that their orbits in the (x i (t)) plane are hardly visible in Fig. 4b)). In the extreme, and here, typical case the number of cells in the active group is just one. It divides almost periodically in time. After each division, one of the two created cells takes more chemicals than the other. The difference between the two cells is enhanced, till the weaker one belongs to the sleeping group. Thus the number of cells increases by one per some period as long as the "cell death" condition is not satisfied. The sleeping cells, on the other hand, are not identical. They are again weakly differentiated by the concentrations of chemicals ( see Fig.5 ). This differentiation depends on the ordering of the time of birth of the cell. Temporally there are weak oscillations in the chemicals of a sleeping cell. The above regular growth is seen if δ ( production rate of DNA from x (2) ) and D ( diffusion coupling with a medium) are large ( say δ > .2, D > .08). When the parameters δ and/or D are smaller, the dynamics is more complicated: the number of active cells is larger than one (stage "complex-III"). Their oscillations are not periodic, and the division cycle is not regular either. See Fig.6a) for the irregular growth of cells, while the overlaid time series of x (2) i (t) are given in Fig. 6b), as well as the plot of the (x Fig. 6c). Here, the oscillations of active cells are chaotic and display dynamic clustering. By a division of a cell in either group, the balance of the number of cells between the two groups may be destroyed. After this (rare) occurrence, one (or few) of the cells in one group switch to the other group. In Fig.7, sleeping cells "wake up" around t = 4620 and get active by taking chemicals from the medium, while active ones get inactive. Such waking-up of sleeping cells has also been observed in the experiments of the preceding paper. Before closing the paragraph on stage III, we note that for some parameter regimes, ( e.g., much larger D), this stage has not yet been observed, and the cell society remains at stage II. ---- Fig.6 a) Fig. 8. The cell linkage diagram shows from which cell a new cell is born at time t (given by the vertical axis). In Fig.8, we can clearly distinguish active and sleeping cells ( where the cell index ℓ (horizontal axis) satisfies ℓ ≤ 28 or ℓ ≥ 103). Successive division and death is seen for the cells with the index 28 < ℓ < 103. ----- Fig.8 ------If we wait for a very long time, the division condition is satisfied for sleeping cells also (since the condition is of an intergal form). Then divisions of many cells occur successively within a short time scale, but many of the divided cells die right after the divisions. An almost simultaneous death of many cells occurs. Thus the number returns to the level before the multiple division. 
If the number of active cells is larger than one ("complex-III stage"), the death process is irregular, while the total number of cells is constant on average. Here switching between active and sleeping cells can occur through cell death. When the number of active cells is reduced, the cell society goes back to the stage II, where many cells with similar activities compete for resources, and show dynamic clustering. After some duration of stage II, few cells become active, and the society comes back to the complex-III stage. Accordingly temporal oscillation is observed in the ratio between active and sleeping cells.
Implications to Cell Differentiation
The results in the previous section have many implications for cell growth and differentiation.
• Explanation of the results in the preceding paper Since the present toy cell with simple metabolic reaction systems can show dynamic clustering, it is rather natural to assume that the present clustering can appear in an experimental system. In the experiment of the preceding paper, differentiation of cells and dynamic clustering of oscillations of E-coli are observed. The possibility of fixation of differentiation corresponding to our stage III is also suggested experimentally. Furthermore, the temporal oscillation of the populations of active and sleeping cells is found tion. We note that the differentiation in the experiment is performed with the use of liquid culture, where the coupling is global (through a medium) as in our model. Thus cell differentiation occurs even without a spatially local interaction, in strong contrast with conventional models for differentiation and pattern formation ( e.g., reaction-diffusion equation model of Turing-type or a cellular automaton model). One might think that the deterministic death condition is different from the previous experimental situation. In the experiments, some cells are randomly removed to avoid overgrowth in the medium. To take the experiments into account, we have also simulated a model with stochastic decimation. The results are essentially the same as those given so far, obtained by the deterministic death condition. Of course, death is not periodic, thus the growth is not completely periodic in the regular regime of the deterministic case. Still the dynamical behavior is the same except for some effects of noise. Statistically the ratio of active to sleeping cells is kept at the same level. When an active cell is eliminated, for example, one of the sleeping cells starts getting more chemicals and becomes an active cell.
• From time sharing to emergence of disparity in activities In the present model, many cells compete with each other for finite resources ( the source term X (0) ). Generally speaking, coherent growth may not be advantageous in a system with finite resources, since all elements need the same amount of resources at the same time. There can be two remedies to this problem; the time sharing system and disparity of activities by elements. In the former case, time sharing ( also often adopted in computer systems), is accomplished here through temporal differentiation (clustering). Through clustering, elements use resources successively, thus avoiding strong competition for the resources. In the latter case, elements split into (two) fixed groups. One group is "rich" and uses the resources more and grows much faster than the other (poor) group. In our system, these two types of differentiation appear as successive stages. We should note that the active and sleeping cells coexist. Sleeping cells, although they may look like a "loser" for the competition for resources, can live together with active ones. Indeed this coexistence is important for the stability of cell history and society to be discussed. When there are many active cells, they compete with each other for resources. In this case, the oscillations are chaotic and show dynamic clustering. Thus time sharing of resources is still adopted among active cells.
• Germ line segregation To have a stable cell society, cells, once differentiated, often must be fixed in time. As found at stage III, such fixation occurs in our simulation. This splitting into active and sleep cells reminds us of the germ line segregation. Germ cells are distinguished from somatic cells by their very high division ability. The emergence of the germ line segregation in our simple model is rather striking.
• Somatic Cell Differentiation At stage III, concentrations of chemicals in the sleeping group ( somatic cells) are again differentiated into smaller groups. This differentiation is according to the slight difference of the metabolic network with three chemicals has lead to this differentiation. It is expected that the inclusion of more chemicals can easily provide a larger variety of differentiated cells as observed in a real cell society. In our model, the growth speed of somatic cells is very low. This observation agrees rather well with the well-known fact in biology; that the division speed of cells gets much slower, when they are differentiated [7].
• Sleep/Active Switch and Homeochaos Switching between sleeping and active cells is observed in our simulation when the balance between the numbers of active and sleeping cells is destroyed. There are two origins for this destruction; (a) internal dynamics (typically seen at the "complex-III" stage where the dynamics of many active cells is chaotic); (b) external perturbation: The second case is seen, when, for example, some cells are eliminated externally. Then some sleeping cells become active. Then some sleeping cell becomes active, and the balance is restored, implying that our system has stability against external perturbation. In Fig. 9, the active cell is removed around t = 2030. After this removal, sleeping cells get active and compete for resources.
After competition over few periods of oscillations, one of the sleep cells wins and remains active. Thus the original state is restored. This stability might remind one of "homeostasis". However, the stability here is sustained not by a static (fixed-point) state, but by a dynamical state. Such dynamic stability with the use of high-dimensional chaos is noted as homeochaos [8].
------ Fig. 9 ------ • Stability of cell linkage Since chaotic dynamics underlies our cell system, one might be afraid that the scenario here is unstable, sensitive to initial conditions. This is not the case. The scenario here is, for example, invariant under changes of initial conditions, ( except for a variation of the time required for the first division), and also under slight changes of parameters. The oscillation itself can be chaotic ( in stage II), and the time for the division is not exactly identical, by the initial conditions. Still the division time is statistically invariant, and the cell linkage is completely identical. Of course, the scenario depends on genetic and environmental parameters, such as the external supply of resources and internal parameters characterizing chemical reactions in the cell. Since the supply of chemicals from the medium cannot be genetically determined, the genetic information is not enough for the characterization of cell differentiation. We should again emphasize here that spatial information is not necessarily important for differentiation, either. Examples of cell linkage diagrams are shown in Fig.10. In Fig.10a), coherent division (stage I) is observed up to 32 cells ( at around 500 steps), while the cell society enters into the stage III at around 600 steps. Here cells with an index larger than 107 get inactive (sleep cells). Cells with the index less than 108 are active and divide frequently but one of the created cells dies. Around the time step 2200, divisions of many sleeping cells occur leading to the multiple deaths.
Then the system goes back to stage II, till new grouping into active and sleeping cells is organized around 2700 steps. On the other hand, between active and sleeping cells around time steps 2000 and 4600.
---- Fig.10------- • Cell Death In our simulations with the deterministic cell death condition, some cells are programmed to die. This history of cell death is stable against changes of initial conditions. We have often observed simultaneous deaths of multiple cells. The process eliminates the overgrowth of inactive cells in our simulation. Such programmed death is also known in cell biology and immunology. Indeed some of the cell linkages obtained (see Fig. 10) from our simulations agree with those found for some multicellular organisms, (such as the C elegance) in the following points. (i) Not all of the cells divide (emergence of sleeping cells). The number of such cells and the cell linkage diagram are insensitive to a wide-range change of initial condition. Many cells are derived from few active cells. (ii) The timing of divisions and deaths of cells is independent of initial conditions. If our scenario is true for the programmed death, we have to conclude that programmed death is due to the interaction among cells. It is predicted that the cell linkage by a single cell ( with removal of one of divided cells) does not show the programmed death as seen in a cell society. Experimental check for this conjecture is strongly requested.
Open Chaos; novel mechanism of unstable and irregular dynamics
Although the mechanism in our differentiation is based on dynamic clustering in a network of chaotic elements, there is a novel dynamic instability in our system with "growing" phase space. By the active transport dynamics of chemicals, the difference between two cells can be amplified, since a cell with more chemicals is assumed to get even more. Tiny differences between cells can grow exponentially if parameters satisfy a suitable condition. The grown cell is divided into two, with an (almost) equal partition of the contained chemicals. The process is quite analogous with chaos in Baker's transformations; stretching (exponential growth) and folding (division). One difference between our cell division mechanism and chaos is that phase space itself changes after a division in the former, while the orbit comes back to the original phase space in the stretching-folding mechanism of chaos. Thus conventional quantifiers ( such as Lyapunov exponents) in "closed" chaos may not be applicable to our problem, since they require a stationary measure in the closed phase space. A quantifier to measure the instability along the increase of phase space is required just as the co-moving Lyapunov exponent was introduced to measure the instability along a flow [9]. Our "open chaos" here provides a general scenario of instability and irregular dynamics in a system with growing phase space. Such systems with replicating units for the competition of finite resources are often seen in economics and sociology. In open chaos, disparity of elements in activities emerges through time sharing by clustering. Still, "poor" cells are not extinct but coexist. Such coexistence of very active and inactive elements is also seen in economics and sociology; coexistence of very big and small firms, or the very rich and Figure Captions; Fig.1 Schematic representation of our model: a) the whole dynamics of our system; b) metabolic reaction within each cell. Fig.2 Temporal change of the number of cells N. Throughout the present paper we use p = 1, f = 0.1, E 0 = e 1 = 1, and R = 50. Other parameters are set at D = 0.1, δ = 0.5, X 0 = 15, S = 0.1. All figures of the present paper are plotted per ∆t = 0.5. Fig.3 Overlaid time series of x (2) i (t), corresponding to Fig.2. Fig.4 (a) Plots of (x (1) i (t), x (2) i (t)) over the time steps 800-1000 ( stage II) (b) The plots over the time steps 1900-2200 (stage III). Corresponding to Fig.3. Fig.5 Magnified plot of the time series x i (t), corresponding to Fig. 6a). c) Plots of (x i (t)) over the time steps 4000-4600. Fig.7 Overlaid time series of x (2) i (t), for the parameters D = 0.2, δ = 0.5, X 0 = 15, and S = 0.5. At the time step around 4620, switching between active and sleeping cells is observed. Fig.8 Cell linkage diagram for D = 0.1, δ = 0.5, X 0 = 40, and S = 0.3. The vertical axis shows the time step, while the horizontal axis shows the cell index. ( Only for practical purpose of keeping trade of the branching tree, we define the cell index as follows: when a daughter cell j is born from a cell i's k-th division, the value s j = s i + 2 −k is attached to the cell j from the mother cell's s i . The cell index for the cell j is the order of s j , sorted with the increasing order). In the diagram, the horizontal line shows the division from the cell with index n i to n j , while the vertical line is drawn as long as the cell exists (until it dies out). Fig.9 Overlaid time series of x (2) i (t), for the parameters D = 0.1, δ = 0.5, and X 0 = 40. 
Instead of the deterministic death process, the stochastic decimation of a cell is adopted. A cell is randomly removed with a probability of 10 −5 per 0.01 second. Around step 2050, the active cell is removed.
|
1993-11-25T05:05:38.000Z
|
1993-11-25T00:00:00.000
|
{
"year": 1994,
"sha1": "bd2a3c0749bf018ba30b58c5bc22365181bc5949",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "bd2a3c0749bf018ba30b58c5bc22365181bc5949",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Physics",
"Biology"
]
}
|
202781775
|
pes2o/s2orc
|
v3-fos-license
|
The relational structure of a reinforcement learning task is represented and generalised in the entorhinal cortex
The ability to appropriately generalise previously acquired knowledge to novel situations is a hallmark of human intelligence. A possible neural solution to this problem is to devote pools of neurons to represent the relations between entities in the environment explicitly, in a manner that is divorced from the entities themselves. Such an explicit representation can generalise to novel situations with the same relational structure. Grid cells, originally found in the entorhinal cortex, have been proposed as such an explicit representation of the relations between different locations in physical space. However, the neural representations underlying the generalisation of relational structures in abstract tasks remain poorly understood. Here we use fMRI in humans to show that the entorhinal cortex explicitly represents the relations between reward-predicting stimuli in a reinforcement learning task with different underlying correlation structures between the reward probabilities associated with different stimuli. Our results demonstrate that the same brain regions, perhaps with the same mechanisms, represent the relational structure of the task in both spatial and abstract decision-making tasks. This suggests that the brain uses a common coding framework for the structure of tasks across a wide range of domains.
Introduction
The term "cognitive map" was coined by Tolman (1948) to describe the relational internal model underlying the flexible inferences his rats were making in complex spatial mazes. However, the ability of animals and humans to use such internal models to generalise knowledge to novel situations is not unique to the spatial domain (Behrens et al., 2018).
How might this relational knowledge be represented in the brain? One option is to encode the relations between entities (e.g. locations or objects) in the strength of the synapses between pools of neurons representing the different entities. However, this representation is not generalizable: it is tied to the identities of the specific entities. To allow for generalisation of the relational structure, its representation must be explicit -divorced (abstracted) from the sensory particularities of the task or the entities in question (Behrens et al., 2018).
The well-studied domain of spatial cognition has revealed a candidate explicit and generalisable representation of the structure of 2D spatial tasks: "grid" cells, originally found in the entorhinal cortex (EC), fire when an animal is in one of multiple locations on an equally spaced triangular lattice (Hafting, Fyhn, Molden, Moser, & Moser, 2005). Experimentally, grid cells maintain (generalise) their firing covariance structure across perceptually different rooms (Fyhn, Hafting, Treves, Moser, & Moser, 2007). This is only true when in both rooms the animal is required to perform the same task -free foraging. Crucially, the grid code changes when the structure of the task changes (Boccara et al., 2019;Butler, Hardcastle, & Giocomo, 2019). Theoretically, grid-like firing patterns emerge as a low-dimensional representation of the covariance of place cells firing and of 2D open-field state transition matrices (Banino et al., 2018;Dordek, Soudry, Meir, & Derdikman, 2016;Stachenfeld, Botvinick, & Gershman, 2016), suggesting grid cells activity during free navigation encodes the statistical regularities common to 2D open-field environments. Taken together, this suggests that the knowledge embedded in grid cells generalises across environments and tasks with the 235 This work is licensed under the Creative Commons Attribution 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0 same relational structure, but might "remap" when the structure is different (for a review see Behrens et al., 2018). These are exactly the properties that are required from an explicit representation of relational structure.
We hypothesised that the same brain regions where grid cells can be found will code for the relational structure of a non-spatial reinforcement learning task. To test this, we designed a stimulus-outcome association task with two underlying correlation structures between the outcome probabilities associated with stimuli. We could thus test for a brain region that represents the same stimulus differently, depending on the nature of its relationships with another stimulus. Importantly, we also used a second stimuliset, resulting in a 2x2 factorial design of stimuli set x structure. This enabled us to also test for the other requirement of an explicit structural representation: it should generalise across stimuli with the same relational structure, like grid cells generalise when an animal is free foraging in different 2D boxes (Fyhn et al., 2007). As we hypothesized, we found that the entorhinal cortex explicitly coded for the relational structure between stimuli when they were presented.
Task and behaviour
We trained participants on a probabilistic stimulusoutcome association task with two sets of three stimuli. Only one of the stimuli sets was used in each block. In each trial, participants viewed one of the three stimuli in pseudo-random order, and had to indicate their prediction for its associated binary outcome (a "good" or a "bad" outcome) by either accepting or rejecting the stimulus (Fig 1a). Thus, there was always one correct answer in each trial: participants should accept if they predict the outcome to be the "good" outcome, and should reject if they predict the outcome to be the "bad" outcome. Outcome identity was revealed in all trials, including rejection trails, even though the participant's score did not change in these trials (Fig 1b). Predictions of the outcomes could be formed based on the recent history, as the probabilities of outcomes for each stimulus switched pseudo-randomly between 0.9 and 0.1 with an average switch probability of 0.15. Crucially, for a given stimuli set, the outcome probabilities associated with two of the stimuli were positively correlated (+Corr) in half of the blocks, and negatively correlated (-Corr) in the other half, such that participants could learn from the outcome on one correlated stimulus about the other (Fig 1c). The third stimulus served as a control, and had an independent outcome probability (0Corr). Thus, there were four block types, arranged in a 2x2 factorial design of stimuli set by correlation structure (Fig 1d). In the fMRI experiment there were two independent runs of the four block types, with a pseudo-random block order counterbalanced across participants. The current block-type was signaled by the background color of all stimuli in the block. Participants pre-learned the mapping between background color and correlation structure prior to scanning. Hence, the only learning performed during scanning was of reversals/outcome probabilities, not of the relational structure -knowledge of which was available from the first scanning trial. We modelled the subjects' behavior using an adapted delta-rule with cross-terms (CTs) that enable learning from one stimulus to another fitted to behaviour. The fitted CTs indicated that participants indeed used the correlation structure correctly (Fig 1e).
The reward network and the hippocampus use the relational structure
We first wanted to test whether known neural signals of reinforcement learning showed evidence of knowledge about the relational structure. We tested this by comparing how well a model that utilised relational structure explained neural signals relative to one that does not utilise structure (all fMRI analyses were performed on the correlated stimuli only, and ignored the control stimulus). In both models, we calculated the value of the chosen action (accept/reject) and the value prediction error on each trial of the two correlated stimuli. The first model was a naïve Rescora-Wagner model (NAÏVE, cross-terms set to zero), and the second model utilised the relational structure (STRCT, crossterms fit to behavior). The chosen value estimates were used to construct two regressors at the time of stimulus presentation, and the prediction error estimates were used in a similar way to construct two regressors at the time of outcome. Regressors from both models were entered into the same GLM (together with the main event regressors of stimulus presentation, button press and outcome times for all three stimuli). As estimates from both models were pitted against each other in the same GLM, any variance explained by a particular regressor was unique to that regressor, allowing us to compare the neural signals uniquely explained by each model.
A network of regions including the medial prefrontal cortex (mPFC), the amygdala (AMG), the anterior hippocampus (HPC) and the entorhinal cortex coded positively for the chosen action value from the STRCT model, while most of the orbital surface showed strong negative coding (Fig 2A). The difference between the STRCT and NAÏVE chosen value effects was positive in EC, HPC, medial AMG, dorsal mPFC, parietal cortex and the insula, and negative in the orbital surface ( Fig 2B). The STRCT model value prediction error estimates correlated with activity in the ventral striatum, HPC and AMG (Fig 2C). The same regions coded for the STRCT model prediction error more than the NAÏVE model (data not shown). These results are an almost exact replication of (Hampton, Bossaerts, & O'Doherty, 2006), indicating the brain uses the relational structure to calculate value and learning signals. Entorhinal cortex explicitly represents the relational structure of task events An explicit neural representation of the relational structure of the task should be similar for stimuli which are part of the same relational structure, but dissimilar for stimuli under a different relational structure. We asked whether any region on the cortical surface displayed these properties at the times of stimuli presentations, using Representational Similarity Analysis (RSA, (Kriegeskorte, Mur, & Bandettini, 2008) with a searchlight approach (Kriegeskorte, Goebel, & Bandettini, 2006). A searchlight centered on a cortical voxel consisted of the 100 surrounding voxels with the smallest surface-wise geodesic distance from the central voxel. For each searchlight, we obtained 16 patterns of whitened regression coefficients of the responses to presentations of each of the two correlated stimuli in each of the 8 blocks. In other words, we obtained two patterns, one from each of the runs, for each of our 8 experimental conditions (a particular stimulus under a particular correlation structure). To define the "cross-run correlation distance" between conditions and ( %,' ) we first calculated the correlation distance (1 − ) between the condition pattern from run 1 and condition pattern from run 2, and then calculated the correlation distance between the condition pattern from run 1 and condition pattern from run 2. %,' was defined as the mean of these two distances. Importantly, we never correlated conditions from the same block. This resulted in an 8 conditions by 8 conditions symmetric Representational Dissimilarity Matrix (RDM), summarising the representational geometry in the searchlight (e.g. Fig 3b). The ideal explicit structural representation can be formalised as an 8x8 model RDM, where the desired distances between conditions are determined by relational structure (Fig 3a). To test whether the data RDM of a given searchlight was consistent with the model RDM, we calculated the contrast between the means of the data RDM's hypothesised "dissimilar" and "similar" elements (white and black elements in Fig 3a, respectively). We then used permutation tests to ask whether this contrast was significantly positive across participants. We repeated this procedure for each searchlight centre on the cortical surface, resulting in a cortical map of p-values.
The only cluster to survive multiple comparisons correction across a hemisphere was located focally in the right entorhinal cortex (Fig 3B and 3C, P<0.05 FWE corrected on cluster level, cluster-forming threshold P<0.001). This effect did not change when we repeated the analysis using model RDMs where same-stimuli or same stimuli set elements were ignored (data not shown). This suggests the effect was not driven by background color or low-level plasticity between stimuli that appear in the same block, but rather by an explicit representation of the relational structure between the stimuli in the task.
Discussion
Here, we show that the EC explicitly represents and generalises the relational structure of a non-spatial reinforcement learning task. This is the same area where grid cells, suggested to represent relational structure in spatial tasks, are found. Evidence of gridlike coding can also be found in non-spatial, 1D or 2D continuous tasks (Aronov, Nevers, & Tank, 2017;Constantinescu, O'Reilly, & Behrens, 2016), and the EC represents the statistical transition structure of a discrete state-space, even when participants are not aware of this structure (Garvert, Dolan, & Behrens, 2017). Taken together, our results suggest the same brain regions, perhaps with the same coding scheme, represent and generalise task structures in an explicit manner, across a wide variety of domains.
|
2019-09-17T01:09:01.508Z
|
2019-09-01T00:00:00.000
|
{
"year": 2019,
"sha1": "37d36151ca47f9cc91bf132d099b03de9fdf7dc1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.32470/ccn.2019.1193-0",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cbd1933ba4b0799f65ecec85737f4f956c82e477",
"s2fieldsofstudy": [
"Biology",
"Psychology",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
79718518
|
pes2o/s2orc
|
v3-fos-license
|
Distant Metastases in Head Neck Cancer: An Impact of Reconstruction Modality
Background: With an estimated yearly global burden of 550,000 incident cases and 300,000 deaths Head neck squamous carcinoma is the sixth most common malignancy reported worldwide, and the eighth most common cause of cancer-related mortality. Surgery is the mainstay of treatment and needs extensive morbid resections. Morbidity is partly compensated with use of microvascular free flaps and provides a more functional outcome. Recurrence after all possible initial treatment is the harbinger of failure and death in head and neck cancers, especially distant metastatic disease which occurs in 4-26% of patient with almost no reported significant 5-year survival. Study: A series of surgically treated head neck cancer patients developing distant metastatic recurrent disease was reviewed to evaluate surgical and etio-pathological factors prognosticating the chance of having recurrence at distal sites with a special focus on the type of reconstruction used.. Results: Along with generally accepted factors like tobacco intake, higher T and N stage, extranodal spread and need for neoadjuvant and adjuvant treatment; use of microvascular free flap reconstruction was more conspicuous and statistically significant in the patients having distant metastatic disease compared to those who had locoregional reconstruction done. Conclusion: Extensive local disease needing multimodality treatment predicts a higher incidence of distant metastatic recurrence. Patients getting a free flap reconstruction also showed a statistically significant chance of the same as compared to those treated with locoregional reconstruction. In the present scenario, where microvascular free flap reconstruction is universally accepted as a safe modality, further studies are needed to confirm its role in occurrence of distal metastases.
Introduction
With an estimated global burden of approximately 550,000 incident cases and 300,000 deaths per year and a high case fatality rate, Head and neck squamous cell carcinoma (HNSCC) is the sixth most common malignancy reported worldwide, and the 8th most common cause of cancer-related mortality. (1) It is most common cancer in developing countries, especially in Southeast Asia. (2) In India, it accounts for one fourth of male cancers and one tenth of female cancers. With annualized incidence rate of 30/100000 in males and 10/100000 in females there is occurrence of more than 120000 cases every year in India. According to incidence statistics, in India, there has been an 75% increase in cancer death burden in 2015 compared to 2000, mostly attributed to head neck cancers.
Recurrence is the harbinger of failure and death in head and neck cancers. Despite all possible sitespecific multimodality therapy, up to 60% and 30% of patients will eventually develop local and distant recurrence. (3) Overall survival significantly reduces in patients developing any recurrence. Camisasca et al have reported that the 5-year survival rate was 92% in HNSCC patients without recurrence and only 30% in patients with recurrence. (4) Most of the studies done worldwide have shown a 30-35% disease recurrence rate. Amongst recurrences, local and locoregional diseases are commoner and often amenable to some form of surgical salvage, irradiation or reirradiation. Chang JH recommends that regardless of recurrence stage or site, salvage surgery is the recommended treatment of choice for recurrent HNSCC (5) . Inspite of that, outcome of such patients is still dismal with less than 20-25% 5year survival rate. Worse than locoregional recurrent disease, outcome seen in patients having distant metastatic recurrence is dismal with almost no reported 5 year survival. Lix reported that overall survival rates of patients with DMs were 56.8% at 1 year, 9.1% at 3 years, and 6.8% at 5 years, respectively (10) .
Patterns of recurrence have been frequently evaluated by many authors along with the factors associated with overall recurrence. Very few times have the factors specifically associated with distal metastases been studied. We attempt to focus on this aspect of the disease pathology to gain insights on the chances of distal metastatic recurrence in primary hnscc patients.
Study
We assessed clinico-pathological and treatment data of hnscc patients who had distal recurrence of disease after standard treatment of the primary disease. The data was analyzed to evaluate the factors responsible for the recurrence of disease.
Data collection
All the patients having clinically, radiologically or pathologically proven distal metastatic recurrent disease were taken into study. Duration between primary surgery and first occurrence of the new lesion was considered as the time of recurrence. Site of the initial recurrence was documented. Patient data was recorded and stratified on basis of type of received neoadjuvant treatment, clinical and pathological features of the primary disease, history of tobacco intake, type of surgery and reconstruction done, and need and type of adjuvant treatment advised and received. All the patients were operated by qualified oncosurgeons and underwent all the treatment as per standard evidence based protocols. Surgery was done in standardised manner with appropriately required local and regional resections. Frozen section evaluation of margins was selectively done as deemed appropriate. Reconstructions were done in accordance to the type of defect, comorbidities and available resources and the procedures were stratified according to the type of reconstruction needed as 1. Primary closure, local flaps, skin grafts etc and loco regional pedicled flap reconstructions, mainly pectoralis major myocutaneous flap, LD flap etc. (these were done by the primary surgeon or the plastic surgeon) 2. Microvascular free flaps-free radial flap, anterolateral thigh flaps, free fibular flaps etc. These were done by the plastic surgeon only. All free flaps irrespective of recipient or donor vessel or donor site were included. The histopathology evaluation was standard. The pathology factors considered for evaluation were T stage (grouped as early-T1/T2 and advanced-T3/T4), margins of resection (positive or close and negative), n stage(positive and total nodes), presence or absence of perinodal extension or lymphovascular emboli. Follow up. Standard recommended follow up record -3 monthly for first two years, 6 monthly for next 2 yrs and yearly later was available for most patients. Thorough loco regional clinical evaluation, abdominal usg and chest x-rays were done as a routine in addition to symptom based investigations for all patients. CT/PET were done in clinically relevant scenario. The data was collected once recurrence had been documented. Further follow up and treatment remained as per standard protocols.
Results
From a data of 418 patients treated for upfront localized hnscc, 48 patients having distal metastatic disease recurrence were considered for this study. All patients were initially diagnosed to have hnscc between may 2011 and may 2016. They underwent standard surgery followed by appropriate adjuvant treatment for the malignancy and were on regular follow up.
Discussion
According to Sacco, despite the site-specific multimodality therapy, up to 60% and 30% of patients will develop local and distant recurrence respectively (3) . In a study, Vázquez-Mahía et al. have reported that the recurrence rate was 44.9% in patients with hnscc (7) . Overall incidence of distal metastasis has been reported to be between 4-26%. A total of 9.2% developed DM in a study (9) . Lix et al reported an incidence of distal metastasis in 11.3% patients. (10) . Ferlito noted that Pulmonary metastases are the most frequent in hnSCC, accounting for 66% of distant metastases. Other sites include bone (22%), liver (10%), skin, mediastinum and bone marrow (11) . In our study, 48 patients out of 418 developed distant metastases i.e.11.4%, similar to the available literature. Only about a quarter (23%) were associated with locoregional recurrent disease while 77% were pure distant metastasis. Location of metastatic disease was more or less similar to reported studies with lung and pleural mets accounting for half the cases. Liver and bone mets were next common sites with lesser involvement of adrenals, bone marrow and brain. Occurrence of DM was more common in patients who had a history of tobacco abuse compared to patients who were tobacco naïve. Also the patients who had a locally advanced disease and received preoperative chemotherapy showed more propensity to have distal metastatic recurrence. Locally extensive diseases as shown by T stage III/IV, more nodal involvement, lymphovascular emboli or perinodal extension also showed more chances to have distal recurrence. Patients who had a clear negative margin had less distal recurrence when compared to those having close or positive margins but the difference was minimal. Patients who deferred adjuvant treatment, in spite of being advised, had more incidence of distal recurrence. Also it was seen that patients who had microvascular free flap reconstruction were relatively higher in number amongst recurrent disease patients as compared to patients who had a primary closure or a local flap done. Chang jh et al report that along with age and clinical stage, recurrence-free interval is significant independent prognostic factor for overall survival of recurrent HNSCC patients (5) In a study by Vázquez-Mahía I et al, analysis showed that comorbidities, degree of tumor differentiation, and tumor stage were important prognostic factors for recurrence (7) . Bo Wang et al report that T stage, degree of differentiation, pN stage, flap application, resection margin, and lymphovascular invasion were factors of recurrence in univariate analysis (P < 0.05) while multivariate analysis showed that T stage, degree of differentiation, and pN stage were only independent factors for recurrence (P < 0.001) (8) . Looking specifically at incidence of distant metastasis, Garavello W et al reported that age <45 years, hypopharyngeal localization, an advanced T stage and/or N stage, high histologic grade, and locoregional control were found to be significantly associated with the risk of DM (9) . According to Li X et al, clinical N stage, primary tumor site, level of tumor invasion, pathologic N stage and number of levels with pathologic lymph node were found to be significantly associated with the risk of DM (10) . The presence of pathologically positive nodes is the most critical factor to influence the eventual development of DMs (13) . Most of the factors analysed in this study were commensurate with the findings described by various authors. 
Tobacco intake, extensive local disease, nodal involvement, positive or compromised margins and need for neoadjuvant and adjuvant treatments were clearly more evident in patients having distant metastatic disease. Notably, analysis showed that type of surgical reconstruction done was a important factor in patients who had distant metastases. 63% of all patients who had distal mets were having microvascular free flaps done as against to only 37% who had a local or locoregional modality for reconstruction. We analysed this data further to evaluate the statistical significance of this finding. From the data of all the 418 patients forming the substrate group, 125 had received a microvascular free flap reconstruction as against 293 who had a primary closure or a loco regional reconstruction done. This number was 30 and 18 respectively amongst the subgroup of 48 patients having distant metastases. Applying the chi square test it was seen that reconstruction using a free flap was significantly associated with occurrence of distal mets (p<0.02). Microvascular free flaps are a commonly used modality for reconstruction in complex head and neck resection to reduce the eventual surgical morbidity and improve the functionality post op. Safety of these flaps has been evaluated by multiple authors and is accepted by word of mouth. De Vicente et al followed up 98 patients with Hnscc. They found that the mortality was 47.0% in patients with flap repair and was 67.3% in patients without flap repair (P < 0.05) justifying that the application of free flap repair can improve the 5-year survival rate of patients. (12) Mucke et al followed up 773 patients with Hnscc treated with curative intent. 274 patients were immediately reconstructed using free microsurgical flaps. They concluded that reconstruction of defects, especially in patients presenting with higher tumor stages, is not associated with shorter overall survival rates (20) . In an experience of 130 cases of free flap reconstruction, Alamdori et al observed an increase in survival with use of free flaps (19) . In a cohort of 242 patients with locally advanced oral hnscc, 93 with free flaps and 149 with local flaps, authors Hsieh et al comment that although cancer stages were more advanced in patients requiring free flap reconstruction, patient survival, and cancer recurrence in the patients with free flap reconstruction were maintained as patients without free flap (18) . Follow up of a cohort of 98 patients, 49 with free flap and 49 with local reconstruction, free-flap reconstruction after oral cancer resection showed a trend toward better 5-year survival (17) . Follow up of 42 patients enrolled from March 1999 to December 2004, for duration of 1 to 94 months was done. The actuarial 5-year survival rate was 41.9% (SD=9.6%) and the authors concluded that this study would not provide definitive statistical evidence, but it could certainly suggest a trend supporting that microvascular free tissue transfer does not change the survival of these patients (16) . Further a study concluded that the site or location of the free flap donor or application area or the vessel used for the flap did not affect survival overall in a study of 184 patients (15). In a review Wong et al have also justified use of microvascular free flaps (21) All findings of our study other than the use of microvascular free flaps are in concordance with all the available literature. 
In contrast to other studies, Free flaps in this review show increased chance of distant metastasis which is definitely debatable. This can be attributed to several factors: 1. Oral malignancies present in this region in a relatively advanced stage with most being T4 and node positive diseases. In our study, out of the patients who had distant mets, very few had an early disease. 4-12% 2. Use of free flaps for reconstruction are a logical first choice in the western world and at tertiary centers in India even for early diseases. In the geographical area concerned here, free flaps are done more as a necessity when local flaps like PMMC with/without deltopectoral flap or Nasolabial flap or similar are technically not possible as for example with extensive skin involvement by primary, neck nodes involving or close to skin, extensive soft tissue involvement, middle third mandibular resection etc. Eventually, patients receiving free flaps were indirectly biased to have relatively advanced disease. 3. Most of the studies in past have evaluated the overall survival and disease free survival as the endpoint and not the occurrence of distant metastasis as one. We evaluated the patients who had distant metastasis. Locoregional recurrence was also not taken into consideration. Though presence of distant metastasis does project dismal survival, mere occurrence has to be further followed to reach the desired end point which may alter the results to bring them at par with the available literature. Having discussed so, still the statistically significant finding of this study stays that patients with microvascular free flaps did have a higher chance of getting distant metastasis in the long run and eventually had a chance of poorer outcome compared to the patients with local or locoregional flaps used in reconstruction. This needs to be evaluated and possibly refuted by further studies of longer follow ups and preferably randomised trials before the all acceptable oncological safety of microvascular reconstruction is established beyond doubt for all comers in oral cancers.
|
2019-03-17T13:11:52.703Z
|
2018-02-19T00:00:00.000
|
{
"year": 2018,
"sha1": "9b04e031264265cd71d9d2b29be37ff109e102a2",
"oa_license": null,
"oa_url": "https://doi.org/10.18535/jmscr/v6i2.91",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5d07b195fdfaf65e15b5e65debbb9e8f13f5dddd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
8701408
|
pes2o/s2orc
|
v3-fos-license
|
Evaluation of an Active Humidification System for Inspired Gas
Objectives The effectiveness of the active humidification systems (AHS) in patients already weaned from mechanical ventilation and with an artificial airway has not been very well described. The objective of this study was to evaluate the performance of an AHS in chronically tracheostomized and spontaneously breathing patients. Methods Measurements were quantified at three levels of temperature (T°) of the AHS: level I, low; level II, middle; and level III, high and at different flow levels (20 to 60 L/minute). Statistical analysis of repeated measurements was performed using analysis of variance and significance was set at a P<0.05. Results While the lowest temperature setting (level I) did not condition gas to the minimum recommended values for any of the flows that were used, the medium temperature setting (level II) only conditioned gas with flows of 20 and 30 L/minute. Finally, at the highest temperature setting (level III), every flow reached the minimum absolute humidity (AH) recommended of 30 mg/L. Conclusion According to our results, to obtain appropiate relative humidity, AH and T° of gas one should have a device that maintains water T° at least at 53℃ for flows between 20 and 30 L/m, or at T° of 61℃ at any flow rate.
INTRODUCTION
The upper airway is the area of the respiratory system responsible for the conditioning of inhaled gas by providing humidification, heating and filtration. Gas reaches 100% relative humidity (RH) and 29°C-32°C of temperature (Tº) after passing through the nasopharynx. Gas Tº is 32°C-34°C and its RH is 100% at the level of the carina. Finally, at the alveolar level, gas reaches 37°C of T°, 100% of RH and contains 43.9 mg/L of absolute humidity (AH) [1].
The point at which gases reach the alveolar conditions is known as the isothermic saturation boundary (ISB), and usually resides between the fourth and fifth generation of subsegmental bronchi.
With the insertion of an artificial airway, the nasopharyngeal function of conditioning the inhaled gas is bypassed, and the ISB is shifted further down to a zone of the respiratory tract not very well designed to properly condition the inhaled gases. This situation is provided by the fact that medical gases have less moisture than the environmental air [2].
Inadequate humidification of inhaled gas increases the risk of atelectasis, enhances the airway resistance, and promotes a greater incidence of infections, a harder respiratory load, and thickening of airway secretions and destruction of airway epithelium. The slowing down of ciliary activity is a consequence of mucous membrane functional disturbance, and it appears within three hours of mechanical ventilation with gases carrying an AH lower than 25 mg/L [2,3].
The American Association for Respiratory Care recommends that the gas delivered by an artificial airway should have between 31°C and 35°C of Tº, 100% of RH and with a mimimum AH of 30 mg/L [4,5]. These requirements may be achieved by using active humidification systems (AHS) or heated and moisture exchangers (HME), also known as "artificial noses". The HMEs act as the upper airway tract by collecting the heat and moisture exhaled by the patient, which will later be used to condition the inhaled gas [6]. The effectiveness of HMEs is related to tidal volume, inspiratory time, minute volume, and body Tº [1]. HMEs increase dead space and resistance in the airway. Resistance is further increased by the presence of dried secretions, the viral-bacterian filter and/or moisture surplus. These changes may negatively impact weaning from mechanical ventilation, especially in those patients with poor muscular mass [1,[7][8][9][10][11]. The addition of dead space in spontaneous ventilation may cause an increase of pCO2 and work of breathing [1,[12][13][14][15]. Contraindications to the use of HMEs include patients with bronchorrhea, very high or very low tidal volume, bleeding in the airways, dehydration, and hypothermia [16].
AHS add moisture and heat to the inhaled gas in an active way [6]. This system is recommended for patients with a minute ventilation higher than 10 L/minute, or in patients with contraindications to the use of HMEs [7]. Complications associated with its use may include airway burns, decreased hydration and dryness of secretions, increased resistance, excessive condensation of water in the tubing, which may cause a higher risk of infection and contamination of the liquid when filling the chamber.
Chanques et al. [17] tested two different humidification systems, AHS vs. Bubble Humidifier (BH). They evaluated both the level of comfort in patients without an artificial airway and the laboratory performance of these systems using flows of 3, 6, 9, 12, and 15 L/minute. Patients showed higher discomfort (dry mouth and throat) with BH than with AHS. In the laboratory, AHS reached 34.1°C of T°, RH 77.6% and AH 29.7 mg/L, while BH reached 26.7°C of T°, RH 60.7% and AH 15.6 mg/L.
Considering the contraindications to the use of HMEs, and the need of conditioning the inhaled gas in patients with an artificial airway who no longer require mechanical ventilation, we decided to study if AHS is a good humidification and heating alternative for these patients.
The objective of this study was to evalute the efficiency of AHS, in terms of RH, T°, and AH of delivered gas, in a laboratory environment.
MATERIALS AND METHODS
This study was carried out at the Basilea Neurological, Orthopaedic and Respiratory Rehabilitation Clinic, in Buenos Aires, Argentina, from January 29th through February 25th, 2008. Setting A 1.5-m-long and 22-mm diameter open circuit with a concentric lid was created. One end of the tubing was connected to an inflow of the open circuit chamber. A "T" piece was connected to the other end of the tubing. Then, the active heater chamber was filled with 200 mL of distilled water and the AHS was switched on. Later, the flowmeter was opened, and the oxygen therapy piece was connected with the "T63 guide" to obtain the desired flows of 20, 30, 40, 50, and 60 L/minute. Three levels of temperature were set: level I (low), level II (middle), and level III (high). They were compared at different flows (20 to 60 L/minute) (Fig. 1).
Measurements
These temperature settings were measured at different flows over 5 days. A total of 1995 measurements and 300 calculations were carried out during eighteen 10-hour work days. It took three days to quantify values of water temperature without gas flow (Tº water/flow) for each of the three levels of the equipment. Measurements were performed using the OHMEDA 5420 VOLUME monitor (Boc Healthcarem, Manchester, UK); Thermohygrometer Testo 605-H1 (Testo AG, Ciudad Autónoma de Buenos Aires, Argentina), AHS MR810 (Fisher & Paykel Healthcare, East Tamaki, New Zealand); Mercury Thermometer [-10ºC/110ºC] (Luft, Germany); 1.5-m-long, 22-mm-diameter tubing with water trap; Chrome Flowmeter Precision Medical (Precision Medical, Northampton, UK); oxygen therapy piece 24%. The gas used for the study was compressed air. These measured and calculated variables are described in Table 1.
AH was calculated according to the following formula: AH=216.9 ×vapor pressure (VP)/Tº. Where AH was absolute humidity, VP was water vapor pressure in millibars, 216.9 was a constant and Tº is the gas termperature in Kelvin. Water vapor pressure was based on gas Tº according to the values stated by bibliography [18].
The obtained data was included in a table for compilation and further analysis. Environmental temperature and environmental relative humidity (EHR) of the laboratory were measured with a thermohygrometer over a period of two minutes so that the values could reach a plateau. Relative gas humidity without humidification (RGHwo/hum) and gas temperature without humidification (GTºwo/hum) were measured by using a thermohygrometer over two minutes. The maximum water temperature reached at each of the 3 (three) AHS temperature levels, with (water temperature with flow, WATER Tºw/flow) and without gas current flow, was quantified. In order to quantify this temperature, a 400-mL capacity chamber was filled with 200 mL of distilled water. A mercury thermometer was dipped into the distilled water to half its depth, avoiding contact with the base of the chamber. The setup was changed at each preset temperature level and the WATER T°w/flow was recorded once every hour during ten hours. A similar procedure was repeated for the other two temperature levels. Gas flow was measured at the inflow (GFhdi) and outflow (GFhdo) of the humidifier device.
Flow at the distal end of the tubing, i.e., proximal to the patient (GFpp) was measured using a Ventilometer Ohmeda 5420. Flows were measured in the following way: we connected one end of the "T63" guide to the flowmeter for central compressed air. On the other end we adapted an oxygen therapy piece to concentrations of 24%. We connected the oxygen therapy piece to the ventilometer and the latter to the humidification chamber. We gradually increased the flow from the flowmeter until the final flow, measured by the ventilometer, reached the desired values. The original outflow from the flowmeter (between 1.5 and 6 L/minute) changed to 20, 30, 40, 50, and 60 L/minute according to the Bernoulli´s Principle and the Venturi effect.
A Thermohygrometer Testo 605-H1 was used to measure temperature and relative gas humidity of the humifier device outflow (GFhdo and RGHhdo). We added an adaptor piece to the gas outflow of the chamber and tubing was attached to the other end of the piece. Therefore, the adaptor piece was between the water chamber and the tubing.
A Thermohygrometer Testo 605-H1 was used to measure Temperature and RH proximal to the patient, i.e., delivered to a patient (GTºpp and RGHpp). The Thermohygrometer was placed at the end of the tubing and before the "T" piece to obtain measurements over a two-minute period.
Condensation proximal to the patient was evaluated through observation once an hour. Distilled water was poured into the chamber up to 200 mL once every hour, and after each measurement was completed.
Analysis
An analysis of variance (ANOVA) test was used to perform the statistical analysis for repeated measurements with post hoc test, Normality test, multiple linear regression test and Student t-test. We considered the p value of <0.05 as significant.
RESULTS
A total of 1995 measurements and 300 calculations were carried out for the 3 levels of temperature of the AHS and for the 5 levels of flows that were studied.
ETº and ERH were stable during the whole study, with average values of 27.63ºC (±0.13ºC) and HR 51.63% (±1.27%). Tº and RH of gas without humidification were also regular during the evaluation period: 27.63ºC (±0.65ºC) and HR 6.09% (± 0.61). Mean values±SD of all measured variables for each of the three temperature levels and for all flows are shown on Tables 2-4. On average, water Tº without gas flow at AHS levels I, II, and III was 44ºC, 59ºC, and 68ºC, respectively.
AGHpp increased due to the temperature level that was used, and it decreased due to the increase of flow. At T° level I, AGHpp underwent a fall of 40% when the flow increased from 20 to 60 L/minute (20.5 mg/L vs. 12.43 mg/L), whereas at level II the decrease of AGHpp was of 21% (33.5 mg/L vs. 26.7 mg/L). Finally, AGHpp suffered, at level III, a decrease of 11% (36.4 mg/L vs. 32.4 mg/L). ANOVA test was applied to carry out statistical analysis for repeated measurements, showing significant differences between the different Tº levels and the different flow levels (P<0.0001 overall).
No evidence of correlation was found between the Tº level and the flow in the device that we used. A post hoc analysis showed a significant difference between every one of the Tº levels and flows that were used.
The RGHpp analysis also showed significant differences be- Values are presented as mean±SD. WATER Tºw/flow, water termperature with flow; GFhdi, gas flow at the humidifier device infow; GFhdo, gas flow at humidifier device outflow; GFpp, gas flow proximal to patient; RGHhdo, relative gas humidity at humidifier device outflow; GTºhdo, gas temperature at humidifier device outflow; WATERVPhdo, water vapor pressure at humidifier device outflow; AGHhdo, absolute gas humidity at humidifier device outflow; RGHpp, relative gas humidity proximal to patient; GTºpp, gas temperature proximal to patient; WATERVPpp, water vapor pressure proximal to patient; AGHpp, absolute gas humidity proximal to patient; CONDpp, condensation proximal to patient. *P <0.05, statistically significant differences between groups. Values are presented as mean±SD. WATER Tºw/flow, water termperature with flow; GFhdi, gas flow at the humidifier device infow; GFhdo, gas flow at humidifier device outflow; GFpp, gas flow proximal to patient; RGHhdo, relative gas humidity at humidifier device outflow; GTºhdo, gas temperature at humidifier device outflow; WATERVPhdo, water vapor pressure at humidifier device outflow; AGHhdo, absolute gas humidity at humidifier device outflow; RGHpp, relative gas humidity proximal to patient; GTºpp, gas temperature proximal to patient; WATERVPpp, water vapor pressure proximal to patient; AGHpp, absolute gas humidity proximal to patient; CONDpp, condensation proximal to patient. *P <0.05, statistically significant differences between groups.
(P=0.0003). These values show a significant difference in the decrease of AGH in the path of the tube (Fig. 3).
DISCUSSION
In terms of inhaled gas conditioning, the main variable is AH. This variable is closely related to gas Tº, and gas Tº depends directly on the water Tº of the chamber. Consequently, it can be stated that AH delivered by AHS depends directly on the water Tº of the heater. If we focus on AH, we will observe that its behavior varies according to flow rate and water Tº. Flows greater than 40 L/minute at a water Tº of 53.2ºC (level II) do not reach the minimum recommended values. If flows greater than 40 L/minute are needed, the water Tº will have to be increased to at least 61.4ºC (level III).
Relating AH to RH, it was observed that gas may be saturated at 99.9% without having the minimum recommended AH. This is a consequence of a decrease of water Tº, or a decrease of the contact time of gas with the water surface; i.e., the greater the flows, the lower the evaporation rate; thus, the resulting mass of water vapor will also be lower.
Condensation proximal to the patient proved to be an independent predictor of an AH of at least 30 mg/L for all flows at level III, for 20 and 30 L/minute at level II, and an environmental Tº of 27ºC-28ºC, as it was also reported by Ricard et al. [19]. In colder environments, condensation does not necessarily imply the minimun recommended AH. Chanques et al. [17] evaluated condensation by using AHS and HF in patients without an artificial airway and found condensation only when using AHS. However, this might have been influenced by the air exhaled by the patient wearing the mask.
Observing the AH behavior again, this time as it relates to the tubing length, an AH decrease is noted while approaching the end of the tubing proximal to the patient. This decrease could be caused by heat loss along the circuit.
It is quite possible that measured variables may behave differently if they were measured in patients. Measurements made with AHS and referred to as "proximal to patient" in clinical practice could be affected by the minute ventilation or the tidal volume amplitude of the patient. Hence, AH, T°, and RH of inhaled gas could fall as a result of a possible mixture with environmental air Table 4. Mean values±SD of all measured variables for temperature "level III" and for all flows during inhalation. In order to avoid this potential problem, this circuit should have a reservoir of 75 mL at the proximal end to patient. During the exhalation phase, this reservoir is filled with conditioned gas delivered by an AHS, so if the tidal volume of the next inhalation exceeds the volume delivered by the AHS, the reservoir would provide the missing part. Taking into account that an AHS conditions flows from 20 to 60 L/minute effectively, and that a patient´s minute volume is usually between 8 and 10 L/ minute, the AHS delivers higher amounts of humidified gas than necessary. Such device (AHS) might be useful in patients with poor spontaneous ventilation due to the resistance and the dead space created by the passive humidifier.
Finally, we consider that an AHS could be applied to patients with an artificial airway being weaned from mechanical ventilation or during testing periods of spontaneous breathing.
In conclusion, according to our results, it may be necessary to have a device that keeps water at a temperature of at least 53°C for flows of 20 and 30 L/minute, and at a T° of 61°C for flows of 20, 30, 40, 50, and 60 L/minute in order to achieve appropriate RH, AH, and gas T°.
|
2018-04-03T00:05:48.589Z
|
2015-02-03T00:00:00.000
|
{
"year": 2015,
"sha1": "82662ba9e95532f8e096e0262931adeeeb318617",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3342/ceo.2015.8.1.69",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "82662ba9e95532f8e096e0262931adeeeb318617",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
59417106
|
pes2o/s2orc
|
v3-fos-license
|
A Study of Pansharpened Images Based on the HSI Transformation Approach
A pan-sharpen technique artificially produces a high-resolution image by image fusion techniques using high-resolution panchromatic and low-resolution multispectral images. Thus, the appearance of the color image can improve. In this paper, the effectiveness of three pan-sharpening methods based on the HSI transform approach is investigated. Three models are the hexcone, double hexcones, and Haydn’s approach. Furthermore, the effect of smoothing the lowresolution multispectral image is also investigated. The smoothing techniques are the Gaussian filter and the bilateral filter. The experimental results show that Haydn’s model is superior to others. The effectiveness of smoothing the lowresolution multispectral image is also shown.
Introduction
Recently, there are many Earth observation satellites, which usually provide both high-resolution panchromatic and low-resolution multispectral images.In remote sensing, it is important to acquire high-resolution images.A pan-sharpen technique artificially produces a high-resolution image by fusing a high-resolution panchromatic image and a low-resolution multispectral image [1]. Figure 1 illustrates the outline of generating a pansharpened image.By this pansharpening technique, the appearance of the color image can improve.The pansharpening method is not limited to only a remote sensing image.If there are both high-resolution panchromatic and lowresolution multispectral images, the pansharpening approach is an adaptive technique for every image.Therefore, the fundamental study of a pansharpening technique is believed to contribute to the development of the image processing techniques.J. Zhang reviews current techniques of multi-source remote sensing data fusion [2].In [2], a considerable amount of effort has been devoted to summarize the data fusion techniques of remote sensing data.Moreover, the evaluation of pansharpening fusion methods is reported [3].However, there is little study of a pan-sharpening method of HSI transformation models [4,5].Therefore, in this paper, the effectiveness of three pan-sharpening methods based on the HSI transformation approach is investigated.Three models are the hexcone, double hexcones, and Haydn's methods [6].From the experimental results, the HSI transformation approach based on the Haydn's model works well.Furthermore, the effectiveness of smoothing the low-resolution multispectral image before the HSI transformation is also investigated.The smoothing methods are the use of the Gaussian [4] and bilateral filters [7].From the experimental results, the pansharpened image is shown to be improved by using the smoothing techniques of Gaussian and bilateral filters.
Pansharpened Images
A color is known to be expressed by various color feature models [4,6].In this paper, we used the HSI family of color model for making pansharpened images.The HSI model is a color appearance system.It consists of hue, saturation, and lightness (or intensity).The model based on intensity, hue, and saturation is considered to be better suited for human interaction.As the advantage of the use of the HSI model, each of hue, saturation, and intensity is considered to be independent.In the studies of the pansharpening method, the HSI transformation approach is usually used.Figure 2 shows a flow of pan-sharpening by the HSI transformation.We describe the pansharpened images based on the HSI transformation approach as follows: Suppose that there exist both the low-resolution RGB color image and the high-resolution panchromatic image.Firstly, each RGB element of the low-resolution color image transforms into the HSI element, hue, saturation, and intensity.Secondly, the gray value at the high-resolution panchromatic image is replaced by the intensity obtained from the HSI transformation at the low-resolution color image.Finally, the pseudo high-resolution RGB color image is generated by inverse transformation of the HSI transformation.In this paper, we examine the pansharpened images generated by three representative HSI transformations.Three models are the hexcone, double hexcones, and Haydn's models [6].Note that the intensities (I) of the hexcone, double hexcones, and Haydn's models are defined by the brightness as I = max{R,G,B}, I = (max{R,G,B}+min{R,G,B})/2, and I = R + G + B, respectively.The details of the HSI transformation and inverse transformation are shown in the image processing handbook [6].
Furthermore, in order to improve the pansharpened images, we consider applying the smoothing of the lowresolution multispectral image.The smoothing methods are the use of the Gaussian and bilateral filters.The smoothing technique is expected to use not only one attention pixel but also the color information of its neighbouring pixels.The Gaussian filter is defined as follows: where g(i,j) and f(i,j) denote the intensity of after and before transformation at (i,j) coordinates, respectively.The parameter 2 σ is the spatial variance.The effect of smoothing is determined by a value of σ.The image gets more blurred as the value of σ is getting larger.On the other hand, if the value of σ is small, the image is considered to get sharpen.
The bilateral filter [7] is considered to smooth the image, while preserving the edge information of the image.In smoothing an image, the bilateral filter takes into account of not only pixel difference but also intensity difference.The bilateral filter [7] is defined as Equation ( 2).Here, the parameters 2
Experiments
In the experiments, both the low-resolution color and the high-resolution panchromatic images are generated from an original image.Then, we use 3 types of a ratio.
where the RGB elements of the original image are R, G, and B. On the other hand, the RGB elements of the pansharpened image denote r, g, and b.The smaller value of the RMSE means that the pan-sharpening method works better.Table 1 is the RMSE values of three types of HSI transformation approach for each of a ratio.From Table 1, the Haydn's model gives a smaller RMSE value at any images and a ratio.The results also show that the double hexcones type is usually superior to the hexcone one.From the experimental result, we recommend to use the Haydn's model.
Figure 3. A ratio of the image resolution between the multispectral and panchromatic images.
Table 1.RMSE values of three types of HSI transformation approach for each of a ratio.From the results of Table 1, we focus on improving the Haydn's model.The effects of the parameter value σ of the Gaussian filter are investigated while changing the values from 0.2 to 2.0.On the other hand, for the bilateral filter, the value of 1 σ varies from 0.2 to 2.0, and that of the 2 σ is from 10 to 400.The size of the Gaus- sian and bilateral filter is 3 x 3. Table 2 is the RMSE values of the Gaussian, bilateral, and without filters using the Haydn's model for each of a ratio.The results of the Gaussian and bilateral filters show the minimum RMSE values.Then, Table 3 shows the optimal parameter value which gives the minimum RMSE value.From Table 2, the results of the Gaussian and bilateral filters outperform that of without filter at any images and a ratio.Furthermore, the RMSE value of the bilateral filter is equal to or slightly superior to that of the Gaussian filter.By using the Gaussian or bilateral filters, the pansharpened image can improve.This shows the effectiveness of smoothing the low-resolution multispectral image.From Table 3, the optimal parameter values of σ for Gaussian filter and 1 σ bilateral filter are comparatively very small.This means that the low-resolution multispectral image may be getting sharpen.And, its image quality is considered to improve for making pansharpened images by appropriately adjusting the parameter values.
Conclusion
In this paper, we have examined the effectiveness of pansharpened images using three types of HSI transformation approach.Experimental results show that the Haydn's model is a promising method in the limited study.And the use of the Gaussian or bilateral filters for the low-resolution multispectral image is shown to be effective.Therefore, the Haydn's model with image smoothing techniques such as Gaussian or bilateral filters for the low-resolution multispectral image should be used.In the future study, the effectiveness of other pansharpening methods [3] will be investigated.
1 σ and 2 2 σ 1 σ or 2 2 1 σ
are the spatial and intensity variance.The larger the value 2 σ is, the more an image gets smoothed.Otherwise, the image gets sharpened.The role of 2 is the same as the 2 σ in the Gaussian filter.The effect of smoothing is determined by values of 1 σ and 2 σ .
Figure 1 .
Figure 1. Outline of generating a pansharpened image.
Figure 2 .
Figure 2. Flow of pan-sharpening by the HSI transformation.
Fig- ure 3
shows a ratio of the image resolution between the multispectral and panchromatic images.It indicates that one pixel of the multispectral image corresponds to two by two pixels of the panchromatic image.This shows a ratio 1:2.In the experiment, we change the ratio: 1:2, 1:3, and 1:4.It is fundamentally important to investigate the influences of the ratio on the pansharpened images.For these images, we obtain the pansharpened images using three types of HSI transformation techniques.The effecttiveness of pansharpened images is examined in terms of the RMSE value.The RMSE value is one of the effective measures of the image quality performance.The RMSE value is defined by
|
2018-12-22T03:09:35.199Z
|
2012-01-01T00:00:00.000
|
{
"year": 2012,
"sha1": "db3c63594be430911be894a916359eec424e05c7",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=27141",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "db3c63594be430911be894a916359eec424e05c7",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
116893347
|
pes2o/s2orc
|
v3-fos-license
|
Magnetic field dependent impact ionization in InSb
Carrier generation by impact ionization and subsequent recombination under the influence of magnetic field has been studied for InSb slab. A simple analytic expression for threshold electric field as a function of magnetic field is proposed. Impact ionization is suppressed by magnetic field. However, surface recombination is dependent on the polarity of magnetic field: strengthened in one direction and suppressed on the opposite direction. The former contributes quadratic increase to threshold electric field, and the latter gives additional linear dependence on magnetic field. Based on this study, electrical switching devices driven by magnetic field can be designed.
There are many reports about the impact ionization in semiconductors. However, it is hard to find the one explaining the influence of magnetic field on the impact ionization. To the best of our knowledge it is only recently that a reasonable pilot model has been proposed for the magnetic field effect and compared with experimental results 1 . The previous model is restricted to a special case: electric transport is quasi-ballistic, and carrier recombination is independent of magnetic field. We approach to this issue from more general background, in which an electron can experience many scatterings before reaching the impact ionization, and magnetic field affects carrier recombination process.
Our model gives a result that magnetic field contributes to the carrier generation and recombination process: the field reduces the generation rate and increases the threshold voltage, and it also makes recombination rate sensitive to the polarity of magnetic field. After describing some picture regarding our model qualitatively, quantitative treatment will be followed.
When a high bias voltage is applied, electrons accelerate to a high speed. If the kinetic energy acquired from the electric field equals the ionization energy, impact ionization occurs. Upon impact with the lattice, the electron expends its kinetic energy on ionizing a valence electron (refer Fig. 1 (a)). This process produces electron-hole pairs and abruptly increases the electric current in the device. Impact ionization makes equal number of excess electrons and holes. Because electron mobility is more than 100 times larger than that of hole in InSb, we consider only electronic conduction in impact ionization regime. Before reaching the ionization energy, the energetic electron can experience energy loss due to inelastic scatterings. To achieve impact ionization, the electron should accumulate kinetic energy despite the inelastic scatterings. Magnetic field affects this carrier generation process. When magnetic field is applied, the Lorentz force deflects the electronic trajectory, and the net gain of kinetic energy for a given path length is reduced (Fig. 1 (c)). To achieve the ionization energy a longer trajectory is required; however, this longer trajectory gives rise to the greater possibility of inelastic scattering. Thus, the deflection of the electronic trajectory caused by magnetic field leads to suppression of the impact ionization. To restore impact ionization, a greater electric field is needed to increase the net energy gain between the scatterings. Consequently magnetic field suppresses the carrier generation and increases the threshold electric field.
Recombination is an elimination process of electronhole pairs and generally follows carrier generation. We are interested in recombination at the two interface, S1 and S2, as depicted in Fig. 1(d). S1 has higher recombination velocity than that of S2 and carrier electrons are readily recombined near S1. Magnetic field produces the Lorentz force. Electrons accumulate or deplete near S1 according to the polarity of magnetic field. When the polarity of magnetic field is negative, negative z-direction in Fig. 1(d), the Lorentz force deflects electrons to S1 and recombination process is facilitated, whereas positive polarity makes electrons near S1 depleted and results in relatively slow recombination.
When the bias voltage exceeds threshold voltage, generation rate is larger than recombination rate, and then number of electrons increases with time, which is known as avalanche state. Under steady-state, however, generation process is balanced with recombination, i.e., generation rate is same as recombination rate. In this work we consider the limit of steady-state, which is on the border of avalanche state. Now we are in position to treat the model quantitatively. The model proposed in this work presents a simple analytic expression for the threshold electric field as a function of magnetic field. For small band gap semiconductors such as InSb, the ionization energy is approximately equal to the band gap energy ε g 2 . Energetic electrons undergo inelastic scatterings before their kinetic energies reach ε g . The dominant scattering process of InSb at room temperature is optical phonon scattering 3,4 , and the optical phonon energy ω o is known to be 23 meV 5 . Hence, in the present model each scattering makes an energy loss of ω o . We will obtain a probability for an electron to acquire ε g in spite of energy losses due to in- Recombination is dependent on the polarity of magnetic field when recombination strength at S1 and S2 are different. Axis and two interface, S1 and S2, in the sample are depicted in the bottom of (a).
elastic scatterings. Assuming this probability is proportional to carrier generation rate, generation rate will be expressed in terms of electric and magnetic field. Introducing steady-state condition and recombination parameters, a simple analytic relation between the threshold electric field and magnetic field will be proposed. Adopting Dumke's theory for InSb 6 , scatterings are classified into two groups according to scattered directions: small-and large-angle scattering ( Fig. 1(b)). For an electron incident along the direction of an electric force, the small-angle scattering produces a scattered angle less than π/2, and the electron is ready to be accelerated again by the electric field after the scattering. The large-angle scattering gives a scattered angle greater than π/2, which results in deceleration and lose a chance to acquire further kinetic energy from the electric field. Thus, large-angle scattering should be avoided to achieve impact ionization.
Our model starts from this classification. The largeangle scattering probability in a time interval of dt can be given by dt/τ L , where τ L is the relaxation time of the large-angle scattering. Then, the probability P of an electron surviving large-angle scattering is P = exp(− 1/τ L dt) for a finite period of time. P can be expressed with an electric field. An electric field E sup-plies energy to an electron at a rate of eE v x , where e is the electron's unit charge, and v x is an average velocity parallel to the electrostatic force in the time interval between the successive small-angle scatterings, i.e., v x ≡ (1/τ s ) τs 0 v x dt, where τ s is the relaxation time of the small-angle scattering. Energy loss by the smallangle scattering is given by ω o /τ s . Thus, the net rate of energy gain is expressed by A reasonable choice for the lower bound in the integrand is ω o because optical phonon scattering is absent for an electronic energy less than ω o . When an electron with an initial energy of ω o moves to a final energy states of ε g , various paths are possible, and P is dependent on these paths. We only consider a path which gives a maximum value of P . The path having the shortest electronic path-length gives the maximum P . A diffusion effect in the energy space is not considered; it is a second order effect 7 and ignored in this work.
To elucidate v x in Eq. (1), more details for the smallangle scattering are needed. Velocity is generally depen- dent on kinetic energy. However, due to the band nonparabolicity 8 of InSb, the velocity of InSb is a slow function of energy for energetic electrons (refer Fig. 2(a)). Hence, the variation of the magnitude of velocity between the successive small-angle scatterings is considered to be negligibly small even though the corresponding energy change is considerable. After a scattering event the electron may have various scattered directions, and these directions work again as incident directions for the next scattering. The shortest path is achieved when this scattered direction is in parallel with the electric force (along x-axis). Thus, the direction of electronic velocity just after the small-angle scattering is considered to be in parallel with the electric force.
It is only a few electrons which survive large-angle scattering and obtain kinetic energies equal to ε g and finally contribute to the impact ionization 10 . Because the impact ionization is governed by the energy-gain process specified by P , a generation rate per unit carrier due to the impact ionization is assumed to be proportional to P . Therefore, P/P 0 − 1 in Eq. (2) represents a normalized generation rate.
Recombination relies on magnetic field when the recombination strength at the two interfaces are different (see Fig. 1 (d)). In low magnetic field regime the first order approximation of recombination rate R(B) with respect to B gives R(B)/R 0 − 1 = −C R µ B, where R 0 is recombination rate at zero field 11 . When surface recombination is dominant over bulk recombination and sample thickness is smaller than carrier diffusion length, the following simple expression can be obtained from Lile's results 12 : s 1 and s 2 are surface recombination velocities at the two interfaces, S1 and S2, respectively, and difference of them gives non-zero value of C R . The steady-state condition asserts that the generation rate is equal to the recombination rate, which leads to P/P 0 = R(B)/R 0 and therefore P/P 0 −1 = −C R µ B. Then, the threshold field in Eq. (2) is expressed by Plotting threshold field E according to magnetic field permits an overview of the present model. The plots in Fig. 3 are calculated ones using Eq. (4) and Eq. (5). SI unit for mobility and magnetic field makes µ B dimensionless. For identical surface recombination at the two interface, s 1 = s 2 , the normalized threshold field increases quadratically with magnetic field. This increase is caused by the suppression of impact ionization by magnetic field. Difference in the surface recombination velocities, s 1 > s 2 , adds the linear term and the threshold field becomes asymmetrically dependent on magnetic field. Note that the positive (negative) polarity of magnetic field corresponds to the middle (bottom) diagram in Fig. 1 (d). Weak (strong) recombination results in small (large) threshold electric field.
The curves in Fig. 3 represent boundaries between normal and avalanche states in the space of electric and magnetic field: the avalanche and normal states correspond to the upper and lower areas of the curve, respectively. The yellow area corresponds to avalanche state for s 1 /s 2 = 3, for instance. By varying magnetic or electric field, one of the two conducting state, normal and avalanche state, can be selected. An interesting application of this phenomenon is switching device. For a given electric field, electric current can be changed abruptly by varying magnetic field, which can be a good candidate of magnetic-field-driven electrical switching device (refer the inset of Fig. 3).
|
2012-06-06T00:01:30.000Z
|
2012-06-06T00:00:00.000
|
{
"year": 2012,
"sha1": "d7fa736ceeb00af7dd2bd27f45f1af28a4e227c9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d7fa736ceeb00af7dd2bd27f45f1af28a4e227c9",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
}
|
6833690
|
pes2o/s2orc
|
v3-fos-license
|
preference
Psychologists have recently begun to develop computational accounts of how people infer others' preferences from their behavior. The inverse decision-making approach proposes that people infer preferences by inverting a generative model of decision-making. Existing data sets, however, do not provide sufficient resolution to thoroughly evaluate this approach. We introduce a new preference learning task that provides a benchmark for evaluating computational accounts and use it to compare the inverse decision-making approach to a feature-based approach, which relies on a discriminative combination of decision features. Our data support the inverse decision-making approach to preference learning.
Introduction
Social choice theory (SCT) [5] is a theoretical framework that is widely deployed in various domains such as economics, political science, computer science etc. In recent years, there is an increasing interest in the SCT subdomain relating to multiwinner elections. Specifically, multiwinner elections employ mechanisms or voting rules that elect a subset of candidates instead of a single winner [10]. Such mechanisms can find application in a variety of domains, from picking parliaments to choosing contest winners to recommending products or services [11]. However, to the best of our knowledge multiwinner election mechanisms have not yet been used in real-world recommender systems, since such mechanisms have mostly been studied at a theoretical level.
In general, Recommender Systems (RS) are tools that help users browse and retrieve relevant information or items of interest (e.g., products or services) from large collections [28]; RS is an inherently multiagent domain, in which both users and the RS system itself can be viewed as (perhaps bounded) rational agents. Many RS provide personalized recommendations by analyzing the users' previous interactions with the system, or by exploiting available data (e.g., ratings from similar users). Recommender systems increase users' satisfaction, since they do not waste their time with items not matching their interests and preferences. Thus, popular travel platforms incorporate RS to enhance customer experience. Now, most of RS research focuses on recommendations for a single user, while in real life there is a plethora of scenarios where users form groups in order to experience some product or service, e.g., music in a car ride or a movie in the theater. However, group recommendations come in naturally in many domainssuch as tourism, since usually users travel with company, e.g., friends or family. Thus, an RS in such a domain should take this aspect into account [9]; and, importantly, needs to ensure that recommendations are fair [32] with respect to the preferences of the individual group members. In general, the generation of group recommendations by a system is achieved by employing an aggregation mechanism that considers individuals' preferences [20]. However, there is a multitude of alternative ways to produce recommendations for groups of users [15], e.g., find the recommendations for each member of the group separately and produce the group recommendations by selecting items based on a preference aggregation technique; or build a group recommender model by merging group members into one [18].
In the tourism industry, RS serve as digital guides for the many activities a group of visitors may partake in according on their individual interests. On top of the preferences aggregation problem, RS in tourism have to deal with very sparse items ratings originating from the user(s) of interest, and thus the employment of classic RS approaches (e.g., collaborative filtering [28]) can be a complicated task [9]. Additionally, there are many kinds of short-term visitors, e.g., cruise tourists. This category of tourists have limited time when visiting a travel destination. As such, the cost of recommending wrong or irrelevant items is high. Thus, an RS should be able to provide effective recommendations in order to maximize user satisfaction via a lightweight user-system interaction process.
Against this background, we employ, for the first time in the literature, a multiwinner election mechanism, Reweighted Approval Voting (RAV) to the group recommendation problem; and show via a systematic experimental evaluation process that it outperforms several other well-known aggregation mechanisms with respect to standard fairness metrics. We used a real-world dataset for our evaluation, specifically one created for the needs of a real-world short-term visits planning recommender system built for the municipality of Agios Nikolaos, a popular travel destination in Crete. Real world data on points of interests (POIs) and users' preferences were collected by: (i) local knowledge, (ii) online sources, and (iii) questionnaires that were filled by tourists. Our mechanisms can be applied on top of any standard single-user recommender system. In this work, this was a personalized Bayesian recommender we put forward, along with a novel picture-based preference elicitation process.
Background and Related Work
In this section, we provide the necessary background for this work.
Social Choice Theory
In general, social choice theory studies aggregation mechanisms of individual preferences in order to reach a collective choice or decision [5]. Over the years, particular emphasis has been given to the scenario of electing a single "winner" over a set of items or alternatives. In more detail, given a set of voters or agents and their corresponding preferences, social choice theory studies efficient mechanisms and rules in order to elect the best alternative with respect to voters' preferences. Many single-winner voting rules have been proposed in the literature with most known plurality, Borda count, Copeland, etc. [5].
However, there is another type of elections that has recently gained the interest of researchers. Specifically, in this type of elections, the purpose is to select a k-sized group of alternatives, i.e., a committee of size k, rather than just a single winner, i.e., a single alternative. This type of elections are also known as Multiwinner elections [11]; and can be categorized as: (i) Shortlisting, (ii) Diverse Committee, and (iii) Proportional Representation mechanisms [10], based on their type and their properties. Intuitively, a Shortlisting mechanism elects a committee consisting of the alternatives that have the best quality with respect to some feature(s), e.g., on a job interview scenario the application of a shortlisting mechanism would result to a committee consisting of candidates that have similar skills and characteristics [11]. By contrast, a Diverse Committee mechanism elects a committee consisting of alternatives that are diverse based on some feature(s). For instance, consider a travel agency that recommends travel destinations to a customer. The employment of such mechanisms in this case could produce a set of k travel destination that differ with each other based on their location (or any other feature). As such, a capital city in South America, an exotic island in the Pacific ocean and a traditional village in Asia could be elected as the recommendations of the agency. Finally, a Proportional Representation mechanism selects a committee that captures all the different preferences of the voters proportionally.
An important class of voting rules are the so-called approval-based rules, in which voters indicate the alternatives they "approve". Perhaps the most wellknown such mechanism is the Approval Voting (AV). Given a list of candidates, Approval Voting allows each voter to express her approval, i.e., her support, for many candidates. Then the mechanism elects the candidate who earned the greatest number of approval votes. In general, AV satisfies several desirable properties in the situation of a single winner, but it fails to attain proportionate representation in the case of multiwinner elections [1]. On the other hand, the PAV multiwinner election method is an approval-based rule that meets strong theoretical guarantees for election proportionality. In more detail, according to the PAV rule, the weight of each voter to the committee's final score is based on how many candidates from the voter's approval set were elected [1]. However, Skowron et al. [33] showed that winners determination under PAV is an NP-hard problem. To tackle this, a "sequential" PAV variant, namely the Reweighted Approval Voting (RAV), which is essentially a "greedy" approximation of the PAV rule, has been introduced in the literature [11]: Definition 1. (Reweighet Approval Voting -RAV [11]) Consider an election with n voters where the i-th voter approves candidates in the set A i . RAV starts with an empty committee S and executes k rounds. In each round it adds to S a candidate c with the maximal value of i:c∈Ai 1 |S∩Ai|+1 .
That is, according to this definition, at each iteration RAV re-adjusts the weight of each voter's ballot in order to achieve proportional representation in the final committee.
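To make the greedy procedure concrete, the following minimal Python sketch implements RAV as stated in Definition 1. The function name and the input format (one approval set per voter) are our own illustrative choices, not part of the cited formulation; ties are broken by candidate iteration order.

def rav(candidates, approval_sets, k):
    """Reweighted Approval Voting: greedily build a committee of size k.

    candidates: iterable of candidate identifiers.
    approval_sets: list of sets A_i, one per voter.
    k: desired committee size.
    """
    committee = set()
    for _ in range(k):
        best_candidate, best_score = None, float("-inf")
        for c in candidates:
            if c in committee:
                continue
            # Each voter approving c contributes 1 / (|S ∩ A_i| + 1), so
            # voters already well represented in S carry less weight.
            score = sum(1.0 / (len(committee & a_i) + 1)
                        for a_i in approval_sets if c in a_i)
            if score > best_score:
                best_candidate, best_score = c, score
        committee.add(best_candidate)
    return committee

# Toy example: three voters, committee of size 2.
print(rav(["a", "b", "c"], [{"a", "b"}, {"a", "c"}, {"b"}], k=2))  # {'a', 'b'}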
To the best of our knowledge, such mechanisms have mostly been studied as theoretical tools and have not been implemented in real-world recommender systems-with the notable exception of Gawron and Faliszewski [13]: in that paper, the authors introduced a system that exploits multiwinner election mechanisms in order to produce a set of resources (or items) that are similar to a given query. In more detail, by employing different mechanisms the system is able to control the degree of relation between the recommended resources and the given query. However, their proposed system operates more like a search engine than a classic recommender system, since it elects items that are somewhat related to the given query, instead of recommending items that can increase the satisfaction of a specific user (i.e., it does not take into account user-specific features in order to provide personalized recommendations).
Personalized Recommendations
Tourism is a natural application domain for personalized RS, since users with varying preferences need to select items of different types (e.g., leisure or cultural POIs). Our work in this paper is related to the development of a real-world tour planning application. As such, we now discuss a few tourism-related personalized recommender systems relevant to this specific sub-problem.
Ziogas et al. [36] proposed a content-based tourism RS which provides personalized recommendations for touristic attractions. Specifically, the authors constructed hierarchies of items-to-be-recommended, and employed various hierarchical and non-hierarchical similarity measures in order to provide the final set of recommended POIs. In [31] the authors proposed a Crow Search Optimization-based Hybrid Recommendation model in order to produce recommendations to tourists by combining collaborative filtering and content-based filtering techniques. The accuracy of their approach was evaluated experimentally on a dataset provided by TripAdvisor, the well-known tourism platform. Kashevnik et al. [16] proposed a context-driven tour planning service that exploits a user's past interactions with the system. In this approach, the authors employed the SCoR algorithm [26] in order to predict the ratings of a specific user for any given item and subsequently produce the final recommendations. In [19] personalized recommendations are provided to users by a hybrid recommender which employs three different types of filtering, namely collaborative, content-based and demographic filtering. Gavalas et al. [12] introduced another personalized tour planner: a context-aware web/mobile application that produces tailored multimodal tours based on a selection of urban attractions. On top of that, each individual is able to define the starting and ending points of her daily tours. The system was evaluated by actual users in the city of Berlin. Finally, an extensive overview of recommender systems in tourism can be found in [4].
Group Recommendations
Traditionally, most RS research focuses on individuals, recommending items that maximize each user's satisfaction. However, users often interact with one another, forming groups. This social aspect gives rise to the development of group recommender systems. In the literature, several RS have been proposed for groups in various domains, such as movies, music, TV programs, travel, etc. [8]. Briefly, in the movie domain such systems may operate by exploiting users' social and behavioral data [30], or by calculating the probability that a user likes a movie or not [14,35]. Additionally, some collaborative filtering approaches have been introduced [3,17]. MusicFX [22] is a group recommender system for music in shared environments, e.g., in gyms, which has also been extended to other domains, e.g., restaurants [21].
In terms of work in our domain of interest, there is a number of tourism- or travel-oriented group RS. McCarthy et al. [23] introduced the CATS travel RS for groups of users. Specifically, CATS is a collaborative advisory travel system that recommends travel destinations to groups of at most 4 individuals. An extension of CATS was proposed in [29], where an extra negotiation module was added. Moreover, Bayesian methods, which maintain and exploit probabilistic beliefs regarding user preferences, are able to provide high-quality recommendations for both individual tourists and groups; and, most importantly, such techniques can be applied to real-time mobile recommendation services [6]. The travel RS in [6] exploits data from "community-contributed" photos by assigning tags to the groups. Moreover, users can be categorized based on user-specific features (e.g., age, gender), whereas groups are categorized based on the type of the formed group (e.g., group of friends, family). Finally, [27] employs Bayesian networks to recommend restaurants to groups of people in mobile environments.
Our Approach
The main aim of this work is to tackle the group recommendation problem in the tourism domain. To this end, we use "tools" from social choice theory to effectively aggregate tourist group members' preferences in a fair manner. Specifically, in this section we show how to tackle the group recommendation problem by employing various preference aggregation mechanisms and, importantly, for the first time in tourism-oriented RS, a multiwinner voting rule. We also describe the main workings of a Bayesian recommender system that we use in order to generate single-user preferences and feed these into the aggregation mechanisms tackling the group recommendation problem. Figure 1 provides an overview of our approach. We see there that any single-user recommender method of choice can be employed in order to generate lists of single-user preferences. These can be used either for recommending items to individual users, or, importantly, as input to a group recommendation stage that employs preference aggregation mechanisms in order to come up with (hopefully effective and fair) recommendations for groups of users.
Recommendation Process for Groups
We first describe how to apply our preference aggregation methods and mechanisms on top of any single-user recommendation technique. In more detail, we consider a set of items (or alternatives) and a set of users (or agents), denoted I and U respectively. We can employ any recommender system technique of choice (e.g., collaborative filtering, content-based, Bayesian, or any other) in order to generate a predicted score, r_{u,i}, for any possible combination of i and u, where i ∈ I and u ∈ U-i.e., the (single-user) recommender system predicts that user u will rate item i with a score of r_{u,i}. Moreover, for each individual u, we can make the natural assumption that u prefers item i over j if the predicted score of i is larger than that of j, i.e., i ≻_u j if and only if r_{u,i} > r_{u,j}. Thus, our system is able to create the preference list for any individual u. Now, assume that a group, denoted g, consists of |g| members corresponding to tourists, i.e., individual users. Our system is able to exploit the aforementioned preference list of every member of the group, derived by applying the single-user recommendation technique of choice during the (independent) interaction of each user with the system. Then, our group recommender system is able to produce group recommendations for any group g, by exploiting the preference lists of the group members and applying any aggregation mechanism of choice.
In the experimental evaluation of our system, we explore several well-known aggregation strategies, such as the Least Misery (LM), Most Pleasure (MP) and Additive Utilitarian (AU) strategies. Formally, the LM mechanism provides recommendations (or items) that maximize the minimum individual rating among the members of the group, i.e., for each item i ∈ I, LM assigns a score equal to min_{u∈g} {r_{u,i}} and recommends the item(s) with the highest score(s). Similarly, the MP strategy elects the items that maximize the maximum individual rating among the members of the group, i.e., for each item i ∈ I, MP assigns a score equal to max_{u∈g} {r_{u,i}} and recommends the item(s) with the highest score(s); while the AU mechanism recommends the items that maximize the average individual rating among the members, i.e., for each item i ∈ I, AU assigns a score equal to avg_{u∈g} {r_{u,i}} and recommends the item(s) with the highest score(s). Note that the LM, MP and AU aggregation strategies have in the past been employed for tackling the problem of group recommendations in the tourism domain [8].
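As an illustration, the three score-based strategies can be written in a few lines of Python; here ratings[u][i] is a hypothetical dictionary of predicted scores r_{u,i} for group members u and items i (the names are ours, chosen for the sketch).

def aggregate_scores(ratings, group, items, strategy):
    """Rank items for a group under the LM, MP or AU strategy.

    ratings: dict mapping user -> {item: predicted score r_{u,i}}.
    group: the members of group g.
    items: the candidate items I.
    """
    funcs = {
        "LM": min,                           # least misery: worst-off member
        "MP": max,                           # most pleasure: best-off member
        "AU": lambda xs: sum(xs) / len(xs),  # additive utilitarian: average
    }
    agg = funcs[strategy]
    scores = {i: agg([ratings[u][i] for u in group]) for i in items}
    # Items with the highest aggregated scores are recommended first.
    return sorted(items, key=scores.get, reverse=True)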
In addition, we use the RAV multiwinner election mechanism (see Sect. 2.1) for the first time in the recommender systems literature. We remind the reader that RAV is a greedy multiwinner voting rule that elects a committee (i.e., a set of items) that proportionally represents the preferences of the voters, i.e., the members of the group. Generally, in the problem of group recommendations, such mechanisms can be very useful due to their properties, since the proportionality achieved by the elected committee provides a notion of fairness among the members of the group with respect to their preferences. Furthermore, the RAV mechanism was selected since our purpose is to find a computationally efficient mechanism that is able to satisfy fairness in a real-world application, where the group size can be large and the recommended items have to be displayed quickly in order to ensure that visitors do not waste their time waiting for the results, i.e., the final group recommendations. Note that all the aforementioned aggregation mechanisms can be employed for any possible group size.
Deriving the Preferences of Individuals
In this section, we describe a recommendation process for individuals, which can be used to derive the preference lists to be fed into our aggregation mechanisms for the group recommendation stage (see Fig. 1). To this end, inspired by the work of [2], we designed a Bayesian RS that performs updating of beliefs in order to learn the users' preferences. The main idea of this approach is to model the users and the items (the tourist POIs) by a common representation; evolve user models via a Bayesian updating procedure; and provide as recommendations the items whose representations best match the evolved user models. The common representation used for users and POIs is multivariate normal distributions of dimension D over ranges of values, describing the degree to which each feature describes a specific user or item.
On top of that, our single-user recommendation module employs a lightweight, picture-based elicitation process in order to determine user's preferences. Such picture-based elicitation approaches have been shown to be efficient for tourism recommender systems [24,25], effectively tackling the inherent complexity of the domain [9], but have never been used for Bayesian recommenders, such as the one we use in this work.
To describe this process in some detail, in each iteration of the elicitation process our system presents n alternative generic, travel-related pictures to the user. As mentioned, each picture is represented as a multivariate Gaussian, and corresponds to a specific POI type, i.e., a restaurant, a monument, a beach and so on.
Note that our agent uses the well-known Boltzmann exploration technique [2] to select which generic pictures to present to the user, based on the available preference-related information in the user model (i.e., the multivariate distribution representing the user). However, the well-known "cold start" problem plaguing recommender systems is an issue to deal with in our case also: there is no information about new users entering the system. In such cases, our recommender randomly picks some generic pictures to show to the user during her very first interaction with the system-barring the possibility that some prior information regarding a user "category", "type", or "class" already exists. In our work, we were able to exploit prior knowledge regarding the preferences of actual tourists of specific age categories visiting Agios Nikolaos (as we later explain), in order to show new users pictures relating to tourists of the same age. Now, once the user is presented with a set of generic pictures, she clicks the one image that she "likes" most, i.e., the one most "attractive" to her with respect to her interests, and subsequently rates it by giving it 1-5 stars. (Note that all this functionality is actually included in a real-world mobile app developed for the needs of Agios Nikolaos' visitors.) A "5-star" rating is interpreted by the system as a "perfect match" between the generic picture's representation model and the actual user model, while lower ratings correspond to lower levels of matching between the models. The Bayesian updating process effectively aggregates all such information via a sampling-based approach, as we now describe.
Similarly to the work of [2], we compute the similarity of any item (POI or generic picture) and any user model by exploiting the well-known Kullback-Leibler (KL) divergence criterion to assess the "distance" between their corresponding multivariate Gaussian distributions. As mentioned, we make the natural assumption that the more similar the Gaussians of a user u and an item i are, the higher the rating (of user u for item i) would be; while the smaller the KL-divergence between a Gaussian x and a Gaussian y is, the more similar these distributions are. As such, the (predicted) rating of a user u for an item i (each represented as a Gaussian) is defined as a decreasing function of their KL-divergence (Eq. 1), with M being the maximum possible rating (i.e., 5). Given this predicted rating, the Bayesian recommender uses the logistic function [7] in order to draw an appropriate number of samples from the selected generic picture's distribution. Then, Bayesian inference [2,34] is employed so as to combine the prior user model and the newly collected samples in order to produce an updated user model (that is, a posterior Gaussian distribution describing the user). This process then repeats for a pre-specified number of iterations, with the posterior becoming the new user prior, exploited to pick new generic pictures to present to the user in her next interaction with the system. Following this elicitation process, the single-user recommendation module exploits the built user model in order to output a list of the POIs most preferred by this user (i.e., those POIs having the lowest KL-divergence when compared with the user model), and provides it as input to the aggregation mechanisms used for group recommendations.
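Since Eq. 1 itself is not reproduced in this excerpt, the Python sketch below only illustrates the two ingredients the text does describe: the closed-form KL-divergence between two multivariate Gaussians, and some monotonically decreasing mapping of that divergence onto a [0, M] rating scale. The specific mapping M / (1 + KL) is our own placeholder assumption, not the paper's Eq. 1.

import numpy as np

def kl_mvn(mu0, cov0, mu1, cov1):
    """KL divergence D_KL(N(mu0, cov0) || N(mu1, cov1)) in D dimensions."""
    d = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def predicted_rating(user, item, M=5.0):
    """Placeholder rating: decreases as the user/item Gaussians diverge."""
    kl = kl_mvn(user["mu"], user["cov"], item["mu"], item["cov"])
    return M / (1.0 + kl)  # KL = 0 (perfect match) gives the maximum rating M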
Experimental Evaluation
In this section we present a series of experiments, mainly to evaluate the fairness that is achieved by several well-known aggregation mechanisms for group recommendations. We use a real-world dataset including 430 POIs located in or around the city of Agios Nikolaos, Crete, Greece.
Evaluating Aggregation Mechanisms for Group Recommendations.
In our experiments we evaluate the performance of different aggregation mechanisms for group recommendations, in terms of the standard m-PROPORTIONALITY and m-ENVY-FREENESS fairness metrics [32]. These respectively signify the percentage of users who find at least m items of a recommended set in their top Δ_p% of preferred items, and the percentage of users who, for at least m recommended items, belong to the top Δ_e% of users favored by the recommendation of these items (both are clarified below). We created synthetic groups g of users of various sizes-specifically, |g| = {5, 10, 15, 20}-and applied the following aggregation strategies: (i) Least Misery (LM), (ii) Most Pleasure (MP), (iii) Additive Utilitarian (AU), and (iv) Reweighted Approval Voting (RAV). In more detail, for the RAV mechanism, we assume that a user approves an item, i.e., a POI, if and only if her expected score (computed from Eq. 1) for this item is larger than 3. We note that in order to compute the expected user rating for an item we use her inferred model (i.e., the model that our approach constructed via our preference elicitation process) and not her real one (since we want to assess our system's ability to provide fair recommendations, and the real user model is not known to the system). For the m-PROPORTIONALITY fairness metric we set Δ_p = 0.1 for all users-i.e., a user likes a POI if this POI is ranked in the top-10% of the user's preferences over all available POIs in the dataset. Additionally, for the m-ENVY-FREENESS fairness metric we set Δ_e = 0.4-i.e., a user is envy-free for a POI if, for this POI, the user is in the favored top-40% of the group (i.e., in the 40% of the group members that prefer the POI more than the remaining 60%). Finally, for both the m-PROPORTIONALITY and m-ENVY-FREENESS metrics, the number m of items for which the corresponding property is required is set to m = (# of recommended POIs)/|g| (Eq. 2), where # of recommended POIs = 20. Table 1 illustrates the results of our approach on this set of experiments. Note that the presented results are the average values over 500 simulations of experiments on settings with the same properties-i.e., we randomly generated 500 groups for each group size |g| value, and ran one such simulation per generated group (i.e., we ran 2000 simulations in total). We can see that the RAV mechanism is able to provide very effective group recommendations in every setting with respect to our fairness metrics, achieving consistently better performance compared to the other aggregation mechanisms. In fact, the RAV mechanism usually achieves a score over 92%, i.e., the proportionality and envy-freeness properties are achieved for almost all members of the group, irrespective of group size (with only one exception, for |g| = 5, when a 4-ENVY-FREENESS of 79% is achieved). Notice also that RAV's performance is significantly better than that of the other mechanisms for the smaller group sizes. In the case of larger groups, however, every aggregation mechanism achieves very high scores (signifying fair group recommendations), which are also comparable to each other. Such a result is expected, since for larger groups the m parameter for both metrics decreases (see Eq. 2). Thus, we need fewer items (i.e., m items) in the final set of recommendations in order to consider this set of POIs fair for a member of the group with respect to our metrics.
Additionally, for larger groups it is easier to find members that share similar interests-e.g., it is easier to recommend POIs that satisfy more than one member of the group.
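For concreteness, the following Python sketch encodes the two fairness metrics as we read them from the description above (the formal definitions are in [32]); the helper names and data layout are ours, and Δ_p = 0.1, Δ_e = 0.4 follow the experimental setup.

def m_proportionality(recommended, rankings, m, delta_p=0.1):
    """Fraction of users with at least m recommended items inside their
    personal top delta_p share of all items.

    rankings: dict user -> list of all items, ranked best-first."""
    satisfied = 0
    for ranking in rankings.values():
        top = set(ranking[:int(delta_p * len(ranking))])
        if sum(1 for i in recommended if i in top) >= m:
            satisfied += 1
    return satisfied / len(rankings)

def m_envy_freeness(recommended, ratings, group, m, delta_e=0.4):
    """Fraction of members who, for at least m recommended items, sit in
    the top delta_e share of the group when ranked by predicted rating."""
    cutoff = int(delta_e * len(group))
    satisfied = 0
    for u in group:
        favored = sum(
            1 for i in recommended
            if u in sorted(group, key=lambda v: ratings[v][i],
                           reverse=True)[:cutoff])
        if favored >= m:
            satisfied += 1
    return satisfied / len(group)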
Validating the Underlying Single-User Bayesian RS. As noted earlier in the paper, any type of single-user recommender can be used to provide input to the aggregation mechanisms used for group recommendations. The experiments conducted above used as input the preference lists associated with user models created by the single-user Bayesian recommender presented in Sect. 3.2. An extensive evaluation of this single-user recommendation algorithm is provided in a RecTour-2022 workshop paper that is freely available online [34]. Since this single-user recommender approach is not the main focus of our work in this paper, and in the interest of conciseness, here we only briefly present the main findings of that evaluation.
Specifically, our results showed that the more a given user interacts with the system (by providing information regarding her interests via clicking on pictures she likes among those presented to her by the system), the more closely the Bayesian RS is able to model her preferences. Additionally, when more options are provided to the user by the system (i.e., when more pictures are included in the set of pictures presented to her), the quality of the recommendations increases. Indicatively, in the case where the user interacts with the system three times, i.e., each time she selects an image from a tuple of three different pictures and then provides a rating for the selected picture during the elicitation process, our approach achieves a similarity score of 3.67 out of 5 with respect to the Kullback-Leibler metric, while in the case that the user interacts with the system five times the corresponding score stands at 3.92 out of 5. However, there is a trade-off between user satisfaction and the effectiveness of the recommender: as the number of user-system interactions increases, the system is able to provide better recommendations, since it can exploit further information regarding the interests of the user, but it also creates a feeling of dissatisfaction, since the user has to provide more answers to the system. As such, our results indicate that our algorithm can produce a good model even with limited interaction, i.e., three iterations, since the difference in terms of effectiveness does not exceed a 5% (= 0.25/5) "penalty" compared to the case of five iterations. Finally, the employment of prior information regarding tourists' average preferences (in our experiments, this prior was constructed from information relating to the preferences of real tourists grouped by age) results in better performance (i.e., higher-quality single-user recommendations), especially when the user-system interaction is limited.
Discussion
In this work, we employed the RAV multiwinner election mechanism for the challenging problem of group recommendations. Our choice of this particular mechanism was motivated by the following factors. First of all, RAV can be considered a good greedy approximation algorithm for the PAV rule [11]. Notably, PAV is the only w-AV rule which satisfies the property of Extended Justified Representation (EJR) [1,11]. However, we highlight the fact that the purpose of this work is to produce recommendations of POIs for a real-world mobile application. Thus, we have to employ a computationally efficient mechanism, since mobile phones have limited processing power.
To the best of our knowledge, this work applies multiwinner elections in a real-world recommender system for the first time. As such, we chose to focus on the metrics of m-PROPORTIONALITY and m-ENVY-FREENESS, which are derived from the recommender systems literature, and exploit them for the first time for the evaluation of multiwinner rules. Our results clearly demonstrate that such techniques outperform other well-known mechanisms that have been employed for the group recommendation problem, especially when the size of the group is small, i.e., |g| = 5. Now, note that many researchers have focused on fairness notions in the recommender systems literature. Note also that a recommender can use only the inferred model for generating recommendations. As such, as is natural and common in the literature, we exploit the inferred user model instead of the real one for producing recommendations.
Of course, the real user model can be exploited for the evaluation of the elicitation procedure-i.e., for answering the question "How well has our system learned the (real) preferences of the user via the selected elicitation process?". (We tackle this question when evaluating single-user recommendations, using the real model of the user and the corresponding metrics [34].) However, we believe that it is not appropriate to exploit the real user model to evaluate the mechanisms for group recommendations in real-world systems.
To explain this, assume that we employ a multiwinner mechanism that performs perfectly with respect to our metrics. If our elicitation process cannot learn the users' preferences well (e.g., due to a small number of interactions), then evaluating our multiwinner mechanism with respect to m-PROPORTIONALITY and m-ENVY-FREENESS against the real users' models would suggest that the mechanism does not perform well. This effect, however, would be due to the fact that the inferred models do not accurately describe the real users' preferences, and not because the mechanism is unable to provide a committee that satisfies our selected metrics to a large extent. Hence, we do not consider it appropriate to use the real user models for evaluating the multiwinner election mechanisms, as by doing so we would not be able to draw clear conclusions as to which component is responsible for potential poor performance of the real-world system, as our example indicates.
On the other hand, one could add to the pipeline a step in which the real users evaluate the final group recommendations (indeed, our real-world application includes such a step). This would result in an update of the inferred users' models (using any technique of choice), leading to improved recommendations in future interactions with the system.
Finally, we note that in [34] we introduced a novel social choice theory framework that is able to provide diverse personalized recommendations to a single user. In more detail, the system exploits the mean vector constructed via the elicitation process in order to create a personalized election consisting of auxiliary voters that proportionally represent each feature, based on the corresponding scores of a specific user. Our results show that such an approach can improve the performance of our system, especially when the user-system interaction is limited; moreover, due to the nature of this framework, one can employ any multiwinner election mechanism, resulting in a committee, i.e., final recommendations, satisfying any desired property.
Conclusions and Future Work
In this work we put forward a recommender system to tackle the group recommendation problem in the tourism domain. Our approach utilizes a number of well-known preference aggregation mechanisms alongside a multiwinner voting rule, the Reweighted Approval Voting (RAV) mechanism. We also outlined the principal operations of a Bayesian recommender system that we employed in order to build single-user preferences and feed them into the aggregation techniques addressing the group recommendation problem. Finally, we evaluated our methodology using a real-world dataset of a tourist destination. Our results confirm that the employment of the RAV multiwinner election mechanism results in fair group recommendations with respect to the fairness metrics of m-PROPORTIONALITY and m-ENVY-FREENESS derived from the recommender systems literature. We note that RAV clearly outperforms its competitors for small tourist group sizes (which are in fact quite common in real life).
As future work, we intend to further evaluate our approach in scenarios in which different types of prior knowledge are available-i.e., when we have and can exploit information regarding the general preferences of a type of visitor based not only on their age group, but also their nationality, gender, etc. We also plan to equip our system with an additional negotiation module that helps the members of the group to decide fairly among the recommended POIs. Another interesting line of work would be the employment of other multiwinner election mechanisms that provide stronger theoretical guarantees derived from the social choice literature, e.g., Justified Representation (JR) and Extended Justified Representation (EJR), in order to (i) evaluate them with respect to fairness metrics from other domains; and (ii) find a break-down point in terms of computational complexity. Finally, we plan to extensively test our group recommendations approach with actual tourists, by employing our real-world mobile application for short-term visit planning to this purpose; and try these ideas in other related domains of practical interest, such as road-trip planning.
CRISPR-dependent endogenous gene regulation is required for virulence in piscine Streptococcus agalactiae
ABSTRACT

The clustered regularly interspaced palindromic repeats (CRISPR)-Cas (CRISPR-associated) system is a prokaryotic defense against invading mobile genetic elements, such as bacteriophages or exogenous plasmids. Beyond this, the system has been shown to play an important role in controlling the virulence of some bacterial pathogens. Streptococcus agalactiae strain GD201008-001, a causative agent of septicemia and meningitis in tilapia, contains a single type II CRISPR-Cas system with Cas9 as its signature protein. In this study, we found that the deletion of CRISPR significantly reduced adhesion, invasion, cytotoxicity and haemolysis, and caused severely attenuated virulence in this piscine S. agalactiae strain. RNA-Seq identified 236 endogenous genes regulated by CRISPR, with 159 genes upregulated and 77 genes downregulated. The resulting change in gene transcription by CRISPR was much more pronounced than that by cas9 in this bacterium, indicating that CRISPR-mediated endogenous gene regulation is mostly independent of cas9. Subsequent studies showed that the CovR/S two-component system was transcriptionally upregulated upon CRISPR deletion, which repressed the expression of the cylE gene coding for a cytolytic toxin and thus decreased the activity of β-haemolysin/cytolysin. However, upregulation of CovR/S was not responsible for the attenuation phenotype of ΔCRISPR. Further, we demonstrated that CRISPR is capable of repressing the expression of the Toll-like receptor 2 (TLR2)-activating lipoprotein Sag0671 and thus dampens the innate immune response. This study revealed that the CRISPR system of S. agalactiae exhibits an extraordinary capability to regulate endogenous transcripts, which contributes to bacterial innate immune evasion and virulence.
Introduction
The clustered regularly interspaced palindromic repeats (CRISPR)-Cas (CRISPR-associated) system is widely distributed in most archaea and many bacteria, where it acts as a defense system against invasion by foreign nucleic acids derived from phages, plasmids and viruses [1,2]. The principles and effector-module design differentiate the CRISPR-Cas system into two main classes, which further branch into six main types and at least 33 subtypes [3]. CRISPR RNA (crRNA), which harbours the spacer sequence, helps Cas proteins recognize and cleave foreign genetic elements [4]. This cleavage requires a trans-activating crRNA (tracrRNA) to bind the repeat region of crRNAs via base pairing to form a mature duplex RNA for guidance [5,6]. In addition to the canonical function in immune defense against foreign nucleic acids, the roles of the CRISPR-Cas system in bacterial physiology are being uncovered. An increasing number of studies have indicated that CRISPR-Cas is involved in the regulation of endogenous genes, including some genes involved in virulence. The type II-C CRISPR-Cas system is indispensable for invasion and replication of Neisseria meningitidis in host cells [7]. In Francisella novicida, the type II-A CRISPR-Cas system downregulates the expression of a bacterial lipoprotein (BLP) and ultimately promotes both pathogenesis and commensalism [8]. The type I-F CRISPR-Cas system in Pseudomonas aeruginosa has been proven to inhibit biofilm formation through crRNA-guided targeting and damaging of integrated prophage DNA [9]. Another study in P. aeruginosa [10] showed that the CRISPR-Cas system targets the mRNA of the quorum-sensing regulator LasR to evade recognition by Toll-like receptor 4 (TLR4), consequently diminishing proinflammatory responses and escaping innate immunity.
Streptococcus agalactiae, or group B Streptococcus (GBS), is a Gram-positive zoonotic bacterium that can infect multiple hosts, including humans, bovines and other mammals, as well as fish. As a primary pathogen causing meningoencephalitis in cultured tilapia, this bacterium is considered a major threat to the tilapia aquaculture industry [11][12][13]. Although various virulence factors are known, the exact pathogenesis of this bacterium remains unclear. To date, two different CRISPR-Cas systems have been identified in S. agalactiae: types II-A and I-C [14,15]. Liu et al. [16] reported that the chromosome of S. agalactiae strain GD201008-001 harbours only a single type II-A CRISPR-Cas system, consisting of four cas genes, namely cas9, cas1, cas2 and csn2, and a CRISPR array with eight spacers. The signature protein Cas9 of the type II system has previously been demonstrated to regulate endogenous genes and to be involved in the virulence of strain GD201008-001 [17]. Here, we showed that the deletion of CRISPR caused dramatically attenuated virulence in zebrafish and mouse infection models. Further investigation demonstrated that the upregulated CovR/S two-component system is responsible for the decreased haemolytic activity and adhesion, but is not the contributor to the attenuation phenotype of ΔCRISPR. CRISPR-mediated repression of the Toll-like receptor 2 (TLR2)-activating lipoprotein Sag0671 is critical for S. agalactiae to dampen the host innate response. The findings of the current study advance our understanding of CRISPR-Cas system function and provide new insights into the contribution of this system to bacterial pathogenesis.
Materials and methods

The bacterial strains and plasmids used in this study are listed in Table S1. The S. agalactiae strain GD201008-001, which is β-haemolysin/cytolysin positive and belongs to serotype Ia and multilocus sequence type (MLST) ST-7, was isolated in 2010 from tilapia with meningoencephalitis at a fish farm in Guangdong Province, China [16]. S. agalactiae strain GD201008-001 was grown in Todd-Hewitt broth (THB) (Oxoid, Basingstoke, England) or on THB medium with 1.5% (wt/vol) agar. Escherichia coli strain DH5α was used as the host for plasmids and was cultured in Luria-Bertani (LB) broth or on LB agar medium. The antibiotic spectinomycin (Spc) (Sigma, St. Louis, MO, USA) was added to the solid medium or broth at 100 μg/mL for S. agalactiae and 50 μg/mL for E. coli when necessary.
Construction of S. agalactiae mutants and complemented strains
To delete the CRISPR array from S. agalactiae GD201008-001, a thermosensitive pSET4s suicide vector carrying the homologous CRISPR deletion cassette was constructed. The upstream and downstream arm fragments were first amplified using two sets of primer pairs, CRISPR-A/B and CRISPR-C/D, and then fused into one fragment lacking the CRISPR cassette by overlap PCR. All primers are listed in Table S2. Both pSET4s and the fusion fragment were digested with the restriction enzyme BamHI and joined using the ClonExpress II One Step Cloning Kit (Vazyme, Nanjing, China) to generate the CRISPR deletion vector pSET4s-CRISPR. The pSET4s-CRISPR candidates were transformed into E. coli DH5α for propagation, and the construct was verified by colony PCR and sequencing before electroporation into S. agalactiae GD201008-001 competent cells; transformants were selected on THB agar medium with 100 μg/mL Spc [18]. Additional deletion mutants were constructed using the same approach.
To construct the corresponding complementary strain for a deletion mutant, a fragment containing the promoter and complementary locus was amplified and ligated to the pSET2 vector. Then, the recombinant plasmid was electroporated into mutant competent cells. Complementation vector-transformed mutants were cultured on Spc-containing THB agar medium, and positive clones were verified by PCR.
In vitro growth curve assay
Overnight S. agalactiae cultures of the wild-type (WT) and its derivative mutant strains were prepared, and the cell densities were equalized by dilution. Bacterial growth in THB (optical density at 600 nm, OD600) was measured every 2 h from 0 h to 12 h of incubation.
Adhesion assay
The adhesion assay was performed as described previously [19]. bEnd3 brain microvascular endothelial cells were cultured in DMEM supplemented with 15% FBS at 37°C with 5% CO2. Cells were seeded in 24-well plates at a density of 10^5 cells/mL a day before the experiment. Bacterial cells were pelleted at 5000 × g for 5 min and then resuspended in phosphate-buffered saline (PBS). After being washed three times with PBS, the bacterial pellet was resuspended in serum-free DMEM. Cell monolayers were washed three times with PBS prior to being cultured with bacteria at a multiplicity of infection (MOI) of 1:1. Co-cultured cells were incubated at 37°C with 5% CO2 for 2 h and washed five times with PBS before being lysed. Lysates were serially diluted in PBS and plated on THB agar medium, and the colony-forming units (CFUs) were counted after overnight incubation at 37°C.
S. agalactiae intracellular survival assay

RAW264.7 macrophages were cultured in DMEM with 10% FBS at 37°C with 5% CO2. RAW264.7 cells at a density of 10^5 cells/mL were seeded in 24-well plates a day before the experiment. Bacteria and cell monolayers were processed in the same way as described for the adhesion assay. Co-cultured cells were incubated at 37°C for 1 h. Extracellular bacteria were removed by washing with PBS five times, refilling the wells with 1% FBS-DMEM containing 100 μg/mL penicillin G, and incubating at 37°C with 5% CO2 for 1 h, which represented the 0 h time point. After 2, 4, 6, 8 and 12 h, monolayer cells were washed and lysed. The lysates were serially diluted in PBS and plated on THB agar medium to count the CFUs after incubation at 37°C overnight.
Cytotoxicity assay
A lactate dehydrogenase (LDH) cytotoxicity assay was performed as previously described [20]. The CytoTox 96 Non-Radioactive Cytotoxicity Assay (Promega, Madison, WI, USA) was utilized to measure LDH activity. Bacteria were cultured and diluted as described above. RAW264.7 macrophages cultured in 96-well plates were infected with 100 μL of bacterial suspension at an MOI of 1:1 and incubated for 4 h at 37°C with 5% CO2. Cells were lysed with Triton X-100 at a final concentration of 1% (vol/vol) as the maximum-release positive control. LDH released by untreated cells and bacteria was measured as the spontaneous-release control. The LDH release value (OD492) was measured with a microplate reader. The percentage of cell cytotoxicity was calculated as 100 × [(sample LDH release − spontaneous LDH release) / (maximum LDH release − spontaneous LDH release)], as described in the manufacturer's protocol.
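As a quick numerical illustration of the cytotoxicity formula (the OD492 readings below are made up, not data from this study):

def cytotoxicity_percent(sample, spontaneous, maximum):
    """Percent cytotoxicity from OD492 LDH-release readings."""
    return 100.0 * (sample - spontaneous) / (maximum - spontaneous)

# Hypothetical readings: sample 0.62, spontaneous 0.10, maximum 1.30.
print(cytotoxicity_percent(0.62, 0.10, 1.30))  # ~43.3% cytotoxicity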
LD50 determination in zebrafish
The zebrafish used in this study were raised for over a week before being challenged, and their care and feeding were performed according to established protocols [21]. Before injection into zebrafish, bacterial cells in late log phase in THB were washed and resuspended in PBS. Zebrafish were anaesthetized with 90 mg/L tricaine methanesulphonate (MS-222) and then intraperitoneally (i.p.) injected with 20 μL of 10-fold serially diluted suspensions of bacteria (10–10^6 CFU/mL). Each treatment group included 11 zebrafish. Fish in the control group were injected with an equal volume of PBS. Mortality was recorded twice per day for the next 7 days. The 50% lethal dose (LD50) values were calculated by the Reed-Muench method [22].
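For reference, the following Python sketch shows the Reed-Muench calculation as commonly presented (cumulative dead pooled from lower doses upward, cumulative alive from higher doses downward, then log-linear interpolation across 50% mortality); the dose-response counts below are hypothetical, not data from this study.

import math

def reed_muench_ld50(doses, dead, alive):
    """Estimate the LD50 from dose-response data by the Reed-Muench method.

    doses: doses (CFU) in ascending order; dead/alive: counts per group."""
    n = len(doses)
    # An animal dead at a low dose is assumed dead at any higher dose, and
    # one surviving a high dose is assumed to survive any lower dose.
    cum_dead = [sum(dead[:i + 1]) for i in range(n)]
    cum_alive = [sum(alive[i:]) for i in range(n)]
    mortality = [d / (d + a) for d, a in zip(cum_dead, cum_alive)]
    for i in range(n - 1):  # find the doses bracketing 50% mortality
        if mortality[i] < 0.5 <= mortality[i + 1]:
            pd = (0.5 - mortality[i]) / (mortality[i + 1] - mortality[i])
            log_ld50 = (math.log10(doses[i])
                        + pd * (math.log10(doses[i + 1]) - math.log10(doses[i])))
            return 10 ** log_ld50

# Hypothetical 10-fold dilution series, 11 fish per group:
print(reed_muench_ld50([1e1, 1e2, 1e3, 1e4, 1e5],
                       dead=[0, 2, 5, 9, 11], alive=[11, 9, 6, 2, 0]))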
Murine infection
For the bacterial burden assay, female BALB/c mice (5 to 7 weeks of age) were purchased from the Experimental Animal Center of Yangzhou University. Mice were challenged with 5 × 10^2 CFU of the indicated strains. Each treatment group had 6 mice. At 16 h post-infection, brain, spleen and blood samples were harvested, weighed and homogenized in PBS. Homogenates were serially diluted and plated to enumerate the CFUs. For survival experiments, groups of 10 mice were infected i.p. with 5 × 10^2 CFU of the indicated strains and monitored for death every 4 h until 7 days post-infection.
Detection of blood-brain barrier (BBB) opening
To investigate the effect of CRISPR on BBB opening, we used a BALB/c mouse model based on the intravenous injection of β-galactosidase-positive E. coli M5 as an indicator. This investigation was carried out as described previously [23]. S. agalactiae strains at mid-log growth phase were washed twice in PBS and resuspended in PBS to 1 × 10^3 CFU/mL. The concentration of E. coli M5 was adjusted to 2 × 10^9 CFU/mL. Three groups of mice were infected with 100 µL of the indicated strains by intraperitoneal injection. At 3, 9, and 15 h post-infection, five mice from each group were selected randomly and inoculated with 100 µL of E. coli M5 by the intravenous route. At 5 min post-inoculation with E. coli M5, the brains were aseptically removed and homogenized in PBS. The homogenates were then serially diluted and spread onto M63 plates for E. coli M5 counting. The bacteria were counted and reported as CFU/g per mouse.
Transcriptomic analysis
The VAHTSTM mRNA-seq v2 Library Prep Kit for Illumina® (Vazyme, Nanjing, China) was used to generate the transcriptome library for RNA sequencing. Transcriptome reads were mapped against the reference sequence of S. agalactiae GD201008-001 using TopHat2 software. Cuffdiff program was used to identify differentially expressed genes (DFGs). DFGs were identified as those with a P value <0.05 and a fold-change of >2 between two samples.
Real-time quantitative PCR (qRT-PCR)
qRT-PCR was carried out as described previously [24]. Total RNA from bacterial cultures at mid-log phase was extracted with an E.Z.N.A. Total RNA Kit I (Omega, Norcross, GA, USA) and then reverse transcribed to cDNA using HiScript II QRT Supermix (Vazyme, Nanjing, China). Two-step relative qRT-PCR was used to measure mRNA transcription levels. The 16S rRNA housekeeping gene was used as the internal control. The primers used for qRT-PCR assays are listed in Table S2. SYBR Green PCR was performed in triplicate using SYBR FAST qPCR Master Mix (KAPA, Boston, MA, USA) following the manufacturer's protocol on an ABI 7500 RT-PCR system. Changes in gene transcription were determined using the comparative cycle threshold (2^(−ΔΔCT)) method [25].
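To illustrate the comparative cycle threshold calculation (all CT values below are invented for the example, not measurements from this study):

def fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Relative transcription via the 2^(-ΔΔCT) method.

    ΔCT = CT(target) - CT(reference, here 16S rRNA) for each sample;
    ΔΔCT = ΔCT(test, e.g. ΔCRISPR) - ΔCT(control, e.g. WT)."""
    ddct = (ct_target_test - ct_ref_test) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** -ddct

# Hypothetical CT values: covS in ΔCRISPR vs WT, normalized to 16S rRNA.
print(fold_change(22.1, 10.0, 24.3, 10.1))  # ~4.3-fold upregulation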
Haemolytic activity
The haemolysin assay was performed as described previously [26]. Bacterial cells in mid-log phase were pelleted by centrifugation at 3000 × g, washed with PBS twice and resuspended in 1 mL of PBS with 0.2% glucose. The bacterial suspension (0.1 mL) was pipetted into the first row of a 96-well conical-bottom plate, and serial twofold dilutions in PBS with 0.2% glucose from 1:2 to 1:256 were then prepared, each in a final volume of 0.1 mL. Glucose (0.2% in PBS) and 0.1% sodium dodecyl sulphate (SDS) alone were used as negative and positive controls, respectively. An equal volume of washed 1% tilapia red blood cells in 0.2% glucose-containing PBS was then added to each well, and the plate was incubated at 37°C with 5% CO2 for 1 h. After incubation, the plate was centrifuged at 3000 × g for 10 min, and 0.1 mL of the supernatant was transferred to a new plate. Haemoglobin was assessed by measuring the OD420 in a spectrophotometer. The reciprocal of the greatest dilution of the supernatant from a given strain that showed at least 50% lysis compared to the SDS control was taken as the haemolytic titre.
Cytokine assay

RAW264.7 cells were grown in DMEM containing 15% FBS in 24-well tissue culture plates. The monolayers were washed with sterile 10 mM PBS to remove unattached cells. S. agalactiae was grown overnight in THB medium at 37°C and washed three times with PBS. The collected bacteria were diluted to 4 × 10^7 CFU/mL. To inactivate TLR2 signalling, the cells were incubated with 100 μg/mL of the antagonist C29 for 1 h. The macrophages were infected at an MOI of 1:1 for 2 h. Extracellular bacteria were removed by washing the monolayers with PBS and replacing the medium with DMEM containing 100 μg/mL penicillin G. To measure cytokine expression, the infected cells were sampled at 8 and 16 h after the addition of antibiotics and treated with 0.02% Triton X-100 for 15 min at 37°C. Uninfected RAW264.7 cells in medium served as controls. The levels of IL-6, IL-1β and TNF-α in macrophages were measured by qRT-PCR. The β-actin housekeeping gene was amplified as an internal control. The primers used for the qRT-PCR assay are listed in Table S2.
Statistical analysis
Data were analysed with SPSS Statistics version 20.0. Multiple comparisons were performed by analysis of variance (ANOVA) for the qRT-PCR results. The nonparametric Mann-Whitney U test was used for analysis of the data obtained from animal experiments and intracellular assays. A value of P < 0.05 indicated a significant difference, and all error bars in the figures represent the standard deviation of independent experiments.
Results
Analysis of the CRISPR-Cas system of S. agalactiae GD201008-001

Computational analysis of the whole-genome sequence using the CRISPRFinder program (https://crispr.i2bc.paris-saclay.fr/Server/) revealed a single type II-A CRISPR-Cas system (spanning ∼6.6 kb) in S. agalactiae GD201008-001, typically consisting of a CRISPR array and four cas genes organized in an operon. The CRISPR array contains nine unique spacers of 20-31 bp in length, separated by the eight identical 36-bp repeat sequences. Four cas genes are sequentially located upstream of the CRISPR array: cas9 (locus_tag: A964_0899), cas1 (locus_tag: A964_0900), cas2 (locus_tag: A964_0901), and csn2 (locus_tag: A964_0902). A tracrRNA sequence is located upstream of the cas9 gene and is encoded on the opposite DNA strand. The details of the CRISPR system are shown in Figure S1.
CRISPR deletion significantly decreases S. agalactiae adhesion, invasion and cytotoxicity to host cells
The ΔCRISPR mutant had a growth curve similar to that of the WT strain, in terms of both growth rate and the maximum density at stationary phase, when cultured in THB (Figure 1A), suggesting that under nutrient-rich conditions the deletion of CRISPR does not affect S. agalactiae growth. To elucidate the role of CRISPR in bacterial adhesion, we compared the relative levels of S. agalactiae adhesion to bEnd3 brain microvascular endothelial cells. Compared to the WT strain, ΔCRISPR exhibited approximately 4-fold decreased adhesion to bEnd3 cells, and the adhesion ability was restored in the complemented strain CΔCRISPR (Figure 1B). Consistent with the adhesion results, ΔCRISPR also exhibited a 1.5-fold decrease in invasion rate compared to the WT and CΔCRISPR strains (Figure 1C). Additionally, CRISPR was necessary for S. agalactiae-induced macrophage injury: after 4 h of co-incubation with S. agalactiae strains at an MOI of 1:1, the cytotoxicity of ΔCRISPR towards RAW264.7 cells was 1.4-fold lower than that of the WT strain (Figure 1D). Taken together, our results clearly demonstrate the importance of CRISPR in S. agalactiae colonization and host cell injury.
CRISPR is positively involved in S. agalactiae virulence and contributes to BBB penetration in vivo
To investigate the role of CRISPR in S. agalactiae virulence, zebrafish were injected i.p. with the WT, ΔCRISPR or CΔCRISPR strains. The LD50 value of the ΔCRISPR strain (1.72 × 10^4 CFU) was 71-fold higher than that of the WT strain (2.43 × 10^2 CFU), and was restored to 5.46 × 10^2 CFU after complementation with CRISPR (Table S3). Furthermore, we tested mortality in infected mice. Mice infected with the WT or CΔCRISPR strains rapidly succumbed, with 100% mortality within 28 h after injection. However, ΔCRISPR did not cause any death, even 128 h after infection (Figure 2A). To better understand the effect of CRISPR on the multiplication and distribution of S. agalactiae in hosts, the bacterial burdens in the blood, spleen and brain were determined. At 16 h post-infection, the deletion of CRISPR resulted in significantly decreased bacterial loads in the spleen (368-fold) (Figure 2B), blood (210-fold) (Figure 2C) and brain (433-fold) (Figure 2D). To colonize the brain, S. agalactiae must traverse the BBB. We used a BALB/c mouse model to assess the integrity of the BBB. Mice infected with the WT strain exhibited a significantly greater amount of E. coli M5 in their brains at 9 h post-infection compared to mice infected with ΔCRISPR, and this trend was more pronounced at 15 h post-infection (Figure 3). CRISPR complementation partially restored the capacity of ΔCRISPR to disrupt the BBB. Thus, the marked defect of ΔCRISPR in colonizing the brain may, at least in part, be explained by the reduced capacity of this strain to penetrate the BBB.
Identification of the DEGs in ΔCRISPR by RNA sequencing
To better understand the mechanisms by which CRISPR influences S. agalactiae virulence, we performed transcriptome analysis to compare the WT and ΔCRISPR strains. A total of 236 DEGs were identified in ΔCRISPR, with 77 genes downregulated and 159 genes upregulated (Figure 4A; Table S4). In order to determine whether there is a link between the CRISPR array and the cas9 gene in regulating endogenous gene expression, we compared the transcription profile obtained from ΔCRISPR (236 genes) in this study with that previously reported for Δcas9 (29 genes) [17]. As shown in Figure 4B, there was an overlap of 26 genes, among which 16 genes are located on the lambdaSa04 prophage gene cluster (Table S4). Notably, 210 genes were identified only in ΔCRISPR and not in Δcas9, with 58 genes downregulated and 152 genes upregulated. By comparing the sequences of the 159 upregulated genes in ΔCRISPR on the Freiburg RNA platform (https://rna.informatik.uni-freiburg.de/IntaRNA/Input.jsp), we found that the mRNAs of 147 genes could partly hybridize with one or more CRISPR spacers (Table S5), including covS, the sensor gene of the CovR/CovS (CsrR/CsrS) two-component system that has been suggested to be a negative regulator of bacterial virulence in several studies [27,28]. We quantified the mRNAs of covR and covS in the WT, Δcas9 and ΔCRISPR strains by qRT-PCR. The deletion of cas9 did not impact the mRNA levels of covR and covS, but in ΔCRISPR both covR and covS were significantly upregulated (Figure 4C), suggesting that covS and covR might be regulated by CRISPR independently of cas9. Sequence alignment of each crRNA spacer with covS mRNA revealed eight covS mRNA regions that may be recognized by the CRISPR-Cas system (Figure S2).
Decreased haemolysin and adhesion activities in ΔCRISPR are closely related to the CovR/S two-component system
Given that CovR/S is a well-studied virulence control system in S. agalactiae [29] and was shown in this study to be downregulated by crRNA, we speculated that CovR/S might be involved in the repression of virulence in ΔCRISPR. To verify this hypothesis, we deleted covR/S in both the WT and ΔCRISPR strains. As shown in Figure 5A, both the ΔcovR/S and ΔCRISPR-covR/S mutants exhibited increased expression of orange pigment, which changed the colony colour from white to light orange. The amount of pigment produced by GBS correlates with the amount of haemolysin produced [30]. Compared to the WT strain, ΔCRISPR exhibited a 3.2-fold decrease in haemolysin activity, but this activity was greatly improved, even exceeding that of the WT, when covR/S was deleted in the ΔCRISPR background. Not surprisingly, ΔcovR/S showed an over 16-fold increase in haemolytic titre compared to the WT strain, while the haemolytic activity in CcovR/S-ΔCRISPR was restored to a level similar to that of ΔCRISPR. The cylE gene has been reported to be necessary for haemolysin production in S. agalactiae [31]. We therefore compared the cylE transcription level in these strains. Consistent with the haemolysin activity, cylE transcription was significantly enhanced in both ΔcovR/S and ΔCRISPR-covR/S but reduced in CcovR/S-ΔCRISPR compared with the WT strain (Figure 5B).
To better evaluate the role of CovR/S in the interaction between S. agalactiae and host cells, we compared the bacterial adhesion capacity to bEnd3 endothelial cells. The adhesion rate of ΔcovR/S was 1.9-fold higher than that of the WT strain. Deleting covR/S in the ΔCRISPR background raised bacterial adhesion to endothelial cells from a repressed level to one 1.5-fold higher than that of the WT, while the adhesive abilities of ΔcovR/S and ΔCRISPR-covR/S were restored to the WT or ΔCRISPR levels after covR/S complementation (Figure 6). We therefore speculate that the reduced adhesion of ΔCRISPR was due to the upregulated expression of the CovR/S negative regulator.
Upregulation of CovR/S is not associated with virulence attenuation in ΔCRISPR
Some previous studies have suggested that haemolysin production and adhesion are essential virulence factors of GBS [31][32][33]. In this study, we demonstrated that CovR/S acts as a repressor of haemolysin and adhesion activities. Therefore, we assumed that the repression of virulence in the ΔCRISPR mutant might be due to CovR/S upregulation. To test this idea, we monitored the mortality rates of the WT and its derived mutant strains in mice. Similar to the WT, the ΔcovR/S mutant also resulted in 100% mortality in infected mice, but the time of death was 20 h later than that caused by the WT (Figure 7A). At 16 h post-infection, colonization by ΔcovR/S in the brain (Figure 7B), blood (Figure 7C) and spleen (Figure 7D) was lower than that by the WT. Furthermore, loss of covR/S in the ΔCRISPR background did not render the strain more virulent than ΔCRISPR, as evidenced by similar mortality rates and bacterial loads in tissues. All these data indicated that the virulence attenuation of ΔCRISPR cannot be explained by the upregulated CovR/S.
Upregulation of the lipoprotein Sag0671 activates TLR2-mediated IL-6 expression in ΔCRISPR
It has been suggested that lipoproteins can trigger a proinflammatory innate immune response to combat pathogens [34]. Based on our transcriptome data, we found that the expression of the lipoprotein gene sag0671 was significantly upregulated upon the deletion of CRISPR. In silico analysis predicted that crRNA could partially base pair with the Sag0671 transcript (Figure S3). Next, we measured the expression of IL-6, IL-1β and TNF-α in RAW264.7 cells after infection with the WT or ΔCRISPR strains. Macrophages infected with ΔCRISPR showed upregulated expression of IL-6 at 8 and 16 h, similar to those infected with the WT + psag0671 strain (overexpressing the sag0671 gene in the WT background), while macrophages infected with the Δsag0671 mutant expressed lower levels of IL-6 than WT-infected macrophages (Figure 8A, B). In addition, the deletion of sag0671 in the ΔCRISPR background markedly reduced the ability of infected macrophages to produce IL-6, to a level similar to that caused by the WT strain, indicating that upregulation of sag0671 in ΔCRISPR was largely responsible for the high level of IL-6 expression.
Subsequently, to verify whether the increased expression of IL-6 is related to TLR2, a host innate immune receptor activated upon sensing bacterial lipoproteins, we disrupted TLR2 signalling with the antagonist C29. The data showed that the increased expression of IL-6 in macrophages infected with the ΔCRISPR and WT + psag0671 strains was restored to the level seen in WT-infected macrophages after addition of the TLR2 inhibitor C29 (Figure 8C, D), suggesting that the increased expression of IL-6 in response to infection by ΔCRISPR was due to hyperstimulation of TLR2. Apart from IL-6, no significant difference was observed in the expression of IL-1β and TNF-α among the five groups.
Discussion
Beyond protection from invading nucleic acids, CRISPR-Cas systems, especially CRISPR-Cas9, have been shown to play an important role in regulating bacterial endogenous genes [35]. However, most of the previous information on the physiological role of the CRISPR-Cas9 system comes from studies on Cas9, whereas little attention has been paid to the association of crRNA with bacterial physiology and disease. Considering the link between Cas9 and crRNA, we hypothesized that crRNA may also relate to bacterial virulence. Indeed, our study demonstrated that the deletion of CRISPR caused a dramatic decrease in S. agalactiae virulence in challenged zebrafish and mice.
Meningitis is the most common clinical syndrome of S. agalactiae infection. The process of penetrating the BBB and invading the central nervous system is essential for the ability of this bacterium to cause meningitis in the host. As the primary elements of the BBB, endothelial cells form capillaries and tight junctions between cells [36,37]. Here, we used bEnd3 brain microvascular endothelial cells to evaluate bacterial adhesion and invasion. CRISPR deficiency caused significantly reduced bacterial adhesion to and invasion of bEnd3 cells, suggesting that CRISPR might be involved in the breaching of the BBB by S. agalactiae. Furthermore, we confirmed that CRISPR is necessary for S. agalactiae to disrupt BBB integrity using the BALB/c mouse model based on the intravenous injection of β-galactosidase-positive E. coli M5 as an indicator. Next, we investigated how CRISPR contributes to bacterial virulence. Transcriptomic RNA-Seq provided more details of the genes impacted by CRISPR. A total of 236 transcriptionally altered genes involved in various physiological processes were identified, suggesting the complexity of the mechanisms via which CRISPR might act. Having observed an overlap of 26 DEGs with those previously identified in Δcas9 [17], we hypothesize that CRISPR and cas9 might be consistently involved in the regulation of these genes. Intriguingly, the regR gene, which has previously been reported to be upregulated in Δcas9 and to negatively regulate S. agalactiae virulence by repressing hyaluronidase activity [17], was also identified among the upregulated genes in ΔCRISPR. This finding supports the involvement of crRNA-Cas9 complexes in virulence regulation. Notably, however, the virulence attenuation phenotype of ΔCRISPR may not depend entirely on the effect of Cas9, since the decrease in virulence in Δcas9 is not as pronounced as that in ΔCRISPR. This idea was further supported by the finding that, among the 236 DEGs, 210 were identified only in ΔCRISPR, indicating that the regulation of diverse physiological functions mediated by CRISPR is largely independent of Cas9 guidance. This is reminiscent of an earlier report in which a CRISPR RNA (originally named RliB) was identified as being involved in the virulence of Listeria monocytogenes, despite the absence of cas genes [38].
Among the differentially expressed genes identified only in ΔCRISPR, the upregulation of covS attracted our attention. CovS is the sensor of the CovR/S (alternate designation CsrR/S) two-component regulatory system, which contributes to bacterial pathogenicity by negatively regulating various genes in S. agalactiae, including many virulence factors, such as β-haemolysin/cytolysin, pili, and surface proteins [29,39,40]. In this study, base pairing analysis showed that eight covS mRNA regions could be recognized by crRNA spacers, indicating the possibility of direct regulation by CRISPR RNAs. To determine whether there are any associations among crRNAs, CovR/S and β-haemolysin in S. agalactiae, we analysed the cylE transcription level and haemolytic activity. Our data suggested that the deletion of CRISPR resulted in remarkably upregulated expression of CovR/S, which repressed the transcription of cylE and thus decreased the activity of β-haemolysin/cytolysin. A similar effect was also observed for the in vitro adhesion of ΔCRISPR to bEnd3 endothelial cells. Surprisingly, however, upregulation of CovR/S was not responsible for the virulence attenuation of ΔCRISPR, since the deletion of covR/S in the WT or ΔCRISPR background did not increase bacterial virulence. This finding indicated that the negative regulation of virulence described for CovR/S in most other bacteria appears not to apply to CovR/S of S. agalactiae strain GD201008-001. We hypothesize that virulence regulation by the CovR/S two-component system may operate differently among bacterial species or strains. This idea is further supported by two earlier observations: S. agalactiae strain A909 with decreased CovR expression showed a dramatically increased capability to cause bloodstream infections and penetration of the BBB [27]; in contrast, inactivation of the CovR/S system in strains 515 and 2603 caused significantly decreased virulence in mice [41].
Previous studies on F. novicida have shown that CRISPR-Cas components can downregulate the expression of the lipoprotein FTN_1103 by promoting its mRNA degradation and thereby facilitate bacterial immune evasion [8]. In agreement with this, we found that CRISPR reduced TLR2-dependent expression of the proinflammatory cytokine IL-6 by repressing the lipoprotein Sag0671. IL-6 has been demonstrated to be important for primary resistance to several pathogens [42][43][44]. Thus, we speculate that CRISPR-mediated suppression of Sag0671 might dampen recognition by TLR2, thus diminishing proinflammatory responses and leading to a virulence-enhanced phenotype. The mechanism of action of CRISPR on Sag0671 is unclear. Notably, however, crRNA partially base pairs with the Sag0671 transcript according to in silico prediction. This supports the idea that CRISPR might regulate the expression of the lipoprotein Sag0671 via base pairing of the crRNA with the target mRNA, resulting in silencing or degradation of the target transcript. Certainly, we cannot rule out the alternative possibility that CRISPR participates in the regulation of endogenous genes indirectly. In F. novicida, the CRISPR-Cas system is involved in bacterial pathogenicity by repressing the production of an immunogenic membrane protein via a tracrRNA-based silencing mechanism [8]. In this study, Northern blot analysis demonstrated that the absence of CRISPR could impact the maturation of tracrRNA (Figure S4). We have not yet investigated whether the tracrRNA is involved in the regulation of endogenous genes in S. agalactiae strain GD201008-001; further studies will be specifically designed to address this issue.
It should also be pointed out that, regarding the attenuated phenotype of ΔCRISPR, effects of CRISPR deprivation on other regulatory pathways cannot be excluded, since a large number of genes involved in diverse physiological processes (Table S4) were altered. The present investigation, together with our previous study of cas9 [17], suggests that the type II-A CRISPR-Cas system plays an important role in S. agalactiae virulence by modulating endogenous gene expression. We analysed the CRISPR/Cas locus among 128 S. agalactiae strains with published whole genome sequences using the online CRISPRFinder program and identified four strains with a single type II-A system, in addition to strain GD201008-001 used in this study. BLAST results showed that all the genes that were differentially expressed in the CRISPR array deletion mutant of S. agalactiae GD201008-001 could be found in these four strains (Figure S5), implying that endogenous gene regulation mediated by CRISPR RNAs of type II-A might be conserved in S. agalactiae strains. Considering that the five bacterial strains analysed here were isolated from tilapia suffering from streptococcosis in southern China, the significance of this type II-A system in the pathogenesis of piscine S. agalactiae warrants particular attention.
In conclusion, our work presents evidence that CRISPR is widely involved in virulence-associated traits in S. agalactiae. Although the molecular mechanism of crRNA-mediated endogenous gene regulation remains to be clarified, our data provide a rich resource for future studies to better characterize the function of the CRISPR-Cas system in the regulation of diverse biological characteristics, extending beyond bacterial virulence.
Author contributions
YD and KM performed most of the experiments described in the manuscript and wrote the article; QC, HH, MN and MJ participated in the design of the study and performed the statistical analysis; CL provided expertise in study design; GL provided supplementary materials and revised the manuscript; YL conceived and designed the study. All authors read and approved the final manuscript.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Ethical approval
Animal experiments were implemented according to animal welfare standards and were approved by the Ethical Committee for Animal Experiments of Nanjing Agricultural University, China [permit number: SYXK (SU).2017-0007]. All animal experiments were performed in compliance with the guidelines of the Animal Welfare Council of China.
Physiologic IGFBP7 levels prolong IGF1R activation in acute lymphoblastic leukemia
Insulin and insulin-like growth factors (IGFs) are mitogenic and prosurvival factors to many different cell types, including acute lymphoblastic leukemia (ALL). Circulating IGFs are bound by IGF binding proteins (IGFBPs) that regulate their action. IGFBP7 is an IGFBP-related protein (IGFBP-rP) that in contrast to other IGFBPs/IGFBP-rPs features higher affinity for insulin than IGFs and was shown to bind the IGF1 receptor (IGF1R) as well. The role of IGFBP7 in cancer is controversial: on some tumors, it functions as an oncogene, whereas in others, it functions as a tumor suppressor. In childhood ALL, higher IGFBP7 expression levels were associated with worse prognosis. Here we show that IGFBP7 exerts mitogenic and prosurvival autocrine effects on ALL cells that were dependent on insulin/IGF. IGFBP7 knockdown or antibody-mediated neutralization resulted in significant attenuation of ALL cell viability in vitro and leukemia progression in vivo. IGFBP7 was shown to prolong the surface retention of the IGF1R under insulin/IGF1 stimulation, resulting in sustained IGF1R, insulin receptor substrate 1 (IRS-1), protein kinase B (AKT), and extracellular signal-regulated kinase (ERK) phosphorylation. Conversely, the insulin receptor was readily internalized and dephosphorylated on insulin stimulation, despite IGFBP7 addition. The affinity of homodimeric IGF1R for insulin is reportedly >100 times lower than for IGF1. In the presence of IGFBP7, however, 25 ng/mL insulin resulted in IGF1R activation levels equivalent to that of 5 ng/mL IGF1. In conclusion, IGFBP7 plays an oncogenic role in ALL by promoting the perdurance of IGF1R at the cell surface, prolonging insulin/IGF stimulation. Preclinical data demonstrate that IGFBP7 is a valid target for antibody-based therapeutic interventions in ALL.
Introduction
Insulin and insulin-like growth factors (IGF1 and IGF2) are well-known mitogenic and prosurvival factors to many different cell types, including both B-cell precursor (BCP) and T-cell acute lymphoblastic leukemias (ALLs). 1 Insulin/IGFs act by binding to receptor tyrosine kinases made of homo- or heterodimeric insulin receptor (INSR) and IGF1 receptor (IGF1R) chains, which recruit and phosphorylate insulin receptor substrate proteins (IRS1-4), thus initiating the downstream activation of the phosphatidylinositol 3-kinase/protein kinase B/mammalian target of rapamycin (PI3K/AKT/mTOR) and Ras/Raf/MAPK signaling pathways. 2,3 Circulating IGFs are normally bound by 1 of the 6 IGF binding proteins (IGFBPs), which can either inhibit or potentiate IGF action. 4 In ALL, IGFBP1, 3, and 4 were shown to inhibit, whereas IGFBP2, 5, and 6 had no influence on, IGF1-induced proliferation of the NALM16 and RS4;11 cell lines in vitro. 5 The IGF-binding domain of IGFBPs is shared by some other extracellular proteins, collectively called IGFBP-related proteins (IGFBP-rPs). IGFBP-rPs bind IGFs with low affinity (100-1000 times lower than IGFBPs 1-6) and have multiple IGF-independent roles; therefore, their physiologic significance in the IGF system remains undefined. 6 However, IGFBP-rP1, best known as IGFBP7, is a special case among IGFBP-rPs or IGFBPs, because it binds insulin with relatively high affinity, although with lower affinity than that exhibited by the INSR. 7 In short-term experiments (3 minutes), IGFBP7 was shown to inhibit insulin-stimulated INSR signaling. 7 In 72-hour cell proliferation assays, on the contrary, IGFBP7 was shown to enhance the mitogenic activities of IGFs and insulin, 8 suggesting that physiologically, it may not compete with the INSR/IGF1R for their ligands but instead may augment the half-life of these growth factors.
We previously reported that ALL cells are the main source of IGFBP7 in the leukemia bone marrow microenvironment. The leukemia-secreted IGFBP7 was shown to stimulate bone marrow stromal cells to produce more asparagine, thus counteracting the effect of the antineoplastic drug L-asparaginase. 9 In that study, we also found that the knockdown of IGFBP7 in 2 BCP-ALL cell lines (REH and 697) resulted in reduced proliferation, 9 suggesting that IGFBP7 could play an autocrine role as well. Here we characterize the autocrine effects of IGFBP7 in ALL, performing preclinical studies to validate it as a target for therapeutic intervention.
Methods
Cell culture
ALL cell lines were cultured in RPMI-1640 (Cultilab, Campinas, São Paulo, Brazil) with 10% fetal bovine serum (FBS), 50 U/mL penicillin, and 50 µg/mL streptomycin (Cultilab). Cryopreserved mononuclear cells from diagnostic bone marrow samples of children with ALL were thawed, depleted of dead cells by Ficoll-gradient centrifugation, and cultured in AIM-V medium (Thermo Fisher Scientific, Waltham, Massachusetts). The study was approved by the Research Ethics Committee of the State University of Campinas (CAAE: 0014.0.144.146-08 and 0018.0.144.146-08) and was conducted in accordance with the Declaration of Helsinki. Animal experiments were approved by the Animal Experimentation Ethics Committees of the State University of Campinas (CEUA/UNICAMP, protocol 1133/2008) and Centro Infantil Boldrini (CEUA/BOLDRINI, protocol 0006/2020).
5-bromo-2′-deoxyuridine cell cycle assay
We used the FITC 5-bromo-2′-deoxyuridine (BrdU) Flow Kit (Becton Dickinson). Cells (50 000 per well in 96-well plates) were starved for 4 hours in serum-free RPMI medium for synchronization, then cultured in RPMI-10% FBS for 24 hours and labeled for 4 hours with BrdU. Finally, cells were labeled with 7AAD and analyzed in an LSR Fortessa cytometer using the FlowJo software.
Migration assay
Thirty thousand cells were seeded in the upper chamber of transwell culture plate inserts with 5-µm pores (Corning, New York) in RPMI-10% FBS. Stromal cell-derived factor 1 (SDF-1, 100 ng/mL; Sigma-Aldrich) or vehicle was added to the lower chamber to stimulate migration. After 4 hours of incubation, cells in the lower chamber were counted in an LSR Fortessa cytometer using the FlowJo software.
Anti-IGFBP7 monoclonal antibody production
Balb/c mice were immunized with an IGFBP7-derived peptide antigen (sequence 100% homologous in human and mouse) by standard methods. 10 Hybridomas were selected by ELISA against the native IGFBP7. 9 The detection limit reached by clone C311 in ELISA against the bovine serum albumin (BSA)-conjugated antigen peptide was 0.05 ng (data not shown). Of note, the peptide antigen has no relevant similarity to any other protein, as evaluated by BLASTp.
INSR and IGF1R internalization assay
ALL cells (2.5 × 10⁵) were starved in serum-free medium (RPMI for cell lines and AIM-V for primary cells) for 4 hours and then were left untreated or stimulated with insulin (500 ng/mL) and/or IGFBP7 (100 ng/mL) for 15 minutes or 4 hours. When indicated, the C311 anti-IGFBP7 antibody was added 210 minutes after the beginning of the 4-hour starvation time. After washing with PBS, the surface expression of INSR and IGF1R was analyzed by labeling cells with the anti-hCD220-PE (clone 3B6; Becton Dickinson) and anti-hCD221-BV421 (clone 1H7; Becton Dickinson) antibodies or the corresponding isotype controls (mIgG1k-PE, clone MOPC-21, and mIgG1k-BV421, clone X40; Becton Dickinson) diluted in 0.5% BSA in PBS for 30 minutes at 4 °C. Cells were analyzed in an LSR Fortessa cytometer using the FlowJo software.
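As a hypothetical sketch of how such surface-receptor measurements might be summarized downstream of the cytometer (the input file and column names are assumptions, not part of the published workflow): the median fluorescence intensity of the receptor stain is background-subtracted with the matched isotype control and normalized to the untreated condition.

```python
import pandas as pd

# Hypothetical export: one row per (condition, stain) with its median
# fluorescence intensity (MFI), e.g. condition in {"untreated", "insulin",
# "insulin+IGFBP7"} and stain in {"CD221-BV421", "IgG1k-BV421", ...}.
df = pd.read_csv("surface_receptor_mfi.csv")

def background_subtracted(frame: pd.DataFrame, receptor: str,
                          isotype: str) -> pd.Series:
    """Receptor MFI minus the matched isotype control, per condition."""
    r = frame[frame["stain"] == receptor].set_index("condition")["mfi"]
    i = frame[frame["stain"] == isotype].set_index("condition")["mfi"]
    return r - i

igf1r = background_subtracted(df, "CD221-BV421", "IgG1k-BV421")
# Surface IGF1R in each condition, relative to the untreated control.
print(igf1r / igf1r["untreated"])
```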
In vivo experiments
NOD/SCID (NOD.CB17-Prkdc scid /J) mice (Jackson Laboratory, Bar Harbor, Maine) were provided by the animal facility at the State University of Campinas (CEMIB, UNICAMP, Brazil). The NSGS (NOD.Cg-Prkdc scid Il2rg tm1Wjl Tg(CMV-IL3,CSF2,KITLG)1Eav/MloySzJ) mice (Jackson Laboratory) were provided by Boldrini's animal facility. Ten million IGFBP7-knockdown or Scramble ALL cells were injected via the tail vein into nonirradiated NOD/SCID mice. Blood from the retro-orbital plexus was collected weekly to monitor by flow cytometry the percentage of leukemia cells (cells positive for hCD45-FITC, clone HI30; Becton Dickinson) in total CD45+ mononuclear cells, that is, the sum of hCD45+ and mCD45+ cells (mCD45-PE, clone 30F-11; Becton Dickinson). After 4 or 6 weeks, mice were killed, and blood, spleen, liver, and bone marrow were collected to evaluate the percentage of leukemia cells. For survival analyses, animals were killed in the moribund state, and Kaplan-Meier curves were compared using the log-rank test. The therapeutic efficacy of C311 anti-IGFBP7 vs polyclonal Balb/c antibody, given intraperitoneally at 25 µg 3 times per week for 4 weeks, was tested against a T-cell ALL (T-ALL, patient T979) xenograft in nonirradiated NSGS mice. Treatment initiated 24 hours after transplantation. In another experiment, using a BCP-ALL (patient B1421) and nonirradiated NOD/SCID mice, treatment awaited overt leukemia (≥0.5% hCD45+ cells in the peripheral blood of half of the animals) and consisted of 20 µg of the C311 anti-IGFBP7 or an irrelevant anti-PSA (Rheabiotech) antibody, intraperitoneally, 3 times per week for 4 weeks. The percentage of ALL cells in peripheral blood mononuclear cells was monitored weekly. The study was approved by the Animal Experimentation Ethics Committees of the State University of Campinas (CEUA/UNICAMP, protocol 1133/2008) and Centro Infantil Boldrini (CEUA/BOLDRINI, protocol 0006/2020).
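For the survival comparison described above, the following is a minimal sketch of Kaplan-Meier estimation and the log-rank test in Python with the lifelines package (the study's actual analysis software is not stated here; the input file and column names are assumptions):

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical input: one row per mouse, with days of follow-up, an event
# indicator (1 = died/moribund, 0 = censored) and the transplanted line.
df = pd.read_csv("xenograft_survival.csv")
kd = df[df["group"] == "sh.IGFBP7"]
sc = df[df["group"] == "Scramble"]

kmf = KaplanMeierFitter()
kmf.fit(kd["days"], event_observed=kd["event"], label="sh.IGFBP7")
ax = kmf.plot_survival_function()
kmf.fit(sc["days"], event_observed=sc["event"], label="Scramble")
kmf.plot_survival_function(ax=ax)

# Log-rank comparison of the two Kaplan-Meier curves.
result = logrank_test(
    kd["days"], sc["days"],
    event_observed_A=kd["event"], event_observed_B=sc["event"],
)
print(result.p_value)
```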
IGFBP7 knockdown or neutralization decrease the proliferation, survival, and migration of ALL cell lines
To determine whether IGFBP7 has an autocrine role in ALL, different BCP- and T-ALL cell lines were stably transduced with lentiviral particles carrying shRNA constructs directed against IGFBP7 (sh.959 and sh.812) or a noncoding random sequence (sh.Scramble). Downregulation of secreted IGFBP7 levels was confirmed by ELISA and Western blot (supplemental Figure 1A-B). IGFBP7 knockdown markedly reduced the growth rate of sh.IGFBP7 cells in comparison with sh.Scramble-transduced controls (Figure 1A; supplemental Figure 1C), confirming our previous findings on the mitogenic effect of IGFBP7 in BCP-ALL cell lines. 9 Accordingly, BrdU incorporation assays revealed a decreased rate of nucleotide incorporation during S-phase on IGFBP7 downregulation, accompanied by a rise in the sub-G1/G0 population of apoptotic cells (Figure 1).
Chemokines and cell migration play a key role in establishing the leukemia bone marrow niche and in organ infiltration on disease progression. IGFBP7 knockdown significantly reduced the migration of ALL cells in the transwell system toward SDF-1 (Figure 1D). Being an extracellular protein, IGFBP7 is an attractive target for therapeutic intervention using monoclonal antibodies. We produced an anti-IGFBP7 antibody (clone C311) and tested its effects on ALL cell viability. Both our own antibody (Figure 1E) and a commercial one (supplemental Figure 2D) significantly reduced ALL cell viability when added to the culture medium, whereas no effect was seen with an anti-PSA control.
IGFBP7 potentiates the insulin prosurvival effect on primary ALL cells
After confirming by orthogonal approaches that IGFBP7 exerts an autocrine mitogenic and prosurvival effect in ALL, we investigated whether this effect was dependent on the presence of its ligand insulin. As shown in Figure 2A-B, the mitogenic action of IGFBP7 was clearly dependent on the combined addition of insulin. Although IGFBP7 or insulin alone promoted cell proliferation in half of the samples tested, their effect was clearly stronger and more frequent when the two were used in combination (supplemental Figure 3).
Because cell lines are not good surrogates for primary ALL cells, some experiments were repeated using short-term cultures of primary BCP- and T-ALL cells. As shown in Figure 2C-D (supplemental Figure 4A), the combined addition of IGFBP7 and insulin protected primary ALL cells from apoptosis, as visualized by the increased number of Annexin-V/7AAD-negative cells. Likewise, addition of the anti-IGFBP7 antibody (clone C311) resulted in a drastic reduction of cell viability (Figure 2E; supplemental Figure 4B). No clear association could be found between the survival of ALL cells and the corresponding mRNA expression levels of INSR, IGF1R, or IGFBP7 (supplemental Figure 5A-B). Of note, these experiments were performed using physiologic levels of IGFBP7, which in the diagnostic bone marrow plasma from children with ALL is ≈50 ng/mL. 9 When nonphysiologic, much higher concentrations (20 µg/mL) were used to reproduce the contradictory results of other groups, 11,12 IGFBP7 caused an inhibitory effect on primary ALL cell viability (Figure 2F; supplemental Figure 4C).
IGFBP7 prolongs IGF1R but not INSR activation in primary ALL cells
Insulin/IGFs act by binding to homo- or heterodimeric INSR and IGF1R cell surface tyrosine kinase receptors that recruit and phosphorylate insulin receptor substrate proteins (IRS1-4), thus initiating the downstream activation of the phosphatidylinositol 3-kinase/Akt/mTOR and Ras/Raf/MAPK signaling pathways. 2 Here we found that IGFBP7 significantly enhanced IRS-1 (pan-tyrosine) phosphorylation on insulin stimulation (Figure 3A). Interestingly, treatment of ALL cell lines with IGFBP7 plus insulin resulted in increased IRS-1 phosphorylation for at least 4 hours, but not when these factors were added separately. Likewise, IGFBP7 plus insulin prolonged Akt (S473) and Erk1/2 phosphorylation in IGFBP7 knockdown cell lines (Figure 3B). Sustained Akt (S473) phosphorylation by the combined stimulation of cells with IGFBP7 plus insulin was confirmed in 18 primary ALL samples analyzed (Figure 4A; supplemental Figure 6; supplemental Table 1). When the status of IGF1Rβ (Tyr1131) and INSRβ (Tyr1150/1151) was addressed, both showed similar levels of activation at the 15-minute time point, both for the insulin and the IGFBP7 plus insulin treatments. At the 4-hour time point, however, only IGF1Rβ (Tyr1131) remained phosphorylated, and exclusively when primary ALL cells were treated with the IGFBP7 plus insulin combination. Insulin treatment alone was not sufficient to keep the receptor active (Figure 4B-C; supplemental Figures 7 and 8). Similar results were obtained with the use of IGF1; however, doses 5 times lower than insulin were required, because 5 ng/mL IGF1 resulted in IGF1Rβ (Tyr1131) phosphorylation at levels equal to those obtained with 25 ng/mL of insulin (Figure 4D). Two conclusions could be drawn from these findings: first, IGFBP7 did not simply increase the half-life of insulin, otherwise both IGF1R and INSR should have been activated at the 4-hour time point; and second, the IGF1R seemed to be the candidate molecule mediating the IGFBP7 potentiation of the insulin/IGF1 stimulus. To picture the importance of IGF1R in ALL, we silenced the IGF1R gene in the REH and Jurkat cell lines using an isopropyl β-D-1-thiogalactopyranoside (IPTG)-inducible shRNA vector. IGF1R knockdown was strongly detrimental to the proliferation and survival of both cell lines (Figure 4E-F; supplemental Figure 9A-B).
A previous work has shown that the N-terminal 97-amino-acid portion of IGFBP7 binds to the extracellular portion of IGF1R and suppresses its internalization in response to IGF1. 11 Here we confirmed these findings both in ALL cell lines and in primary ALL cells (Figure 5A-B; supplemental Figure 10). As expected, insulin treatment of serum-starved ALL cells resulted in significant internalization of both the INSR and IGF1R receptors. When insulin was added in conjunction with IGFBP7, however, only the INSR was internalized, whereas IGF1R remained at the cell surface for as long as the 4 hours tested. As expected, preincubation of ALL cells with the anti-IGFBP7 antibody (clone C311) abolished IGFBP7-induced IGF1R retention at the cell surface. Treating ALL cells with primaquine or cycloheximide did not interfere with IGFBP7-mediated cell surface retention of IGF1R (Figure 6A-B), suggesting that receptor recycling or synthesis did not contribute to the surface maintenance of IGF1R. Interestingly, treating IGFBP7-silenced cell lines with the endocytosis inhibitor dansylcadaverine partially restored their proliferative response to insulin (Figure 6C-D). These results are consistent with the notion that IGFBP7 exerts its effects through binding and stabilization of the IGF1R receptor at the surface of ALL cells, thus prolonging its response to insulin.
IGFBP7 knockdown or neutralization decreases the progression of ALL in vivo
Considering the complexity of the insulin/IGF system, in vitro experiments are just a rough approximation of the real situation. For instance, several proteases are known to cleave IGFBPs/IGFBP-rPs, liberating insulin/IGF for INSR/IGF1R binding. In addition, interaction of IGFBPs/IGFBP-rPs with extracellular matrix components can modulate their affinity for insulin/IGFs. 13 To address the leukemogenic potential of autocrine IGFBP7 secreted by ALL cells, under physiologic paracrine/endocrine IGFBP7 levels and under interaction/competition with other cells and molecules, we transplanted the sh.959 cell lines into immunocompromised NOD/SCID mice. Silencing of IGFBP7 resulted in significant attenuation of leukemia progression, as evaluated by the percentage of leukemia cells (human CD45+) in the peripheral blood, bone marrow, liver, and spleen, in relation to mice transplanted with Scramble cells (Figure 7A). Moreover, IGFBP7 knockdown resulted in a significant improvement in the Kaplan-Meier survival curves of the mice (Figure 7B). Similar results were obtained in mice transplanted with patient-derived xenografts (PDX-ALL; Figure 7C-D) or the RS4;11 cell line (Figure 7E). Mice treated with the anti-IGFBP7 antibody (clone C311) showed decreased leukemic progression and survived longer than animals treated with isotype or polyclonal antibody controls. Of note, the mouse ortholog of IGFBP7 is functional on human IGF1R (supplemental Figure 11), and our anti-IGFBP7 antibody binds both human and mouse IGFBP7 (supplemental Figure 2E).
Discussion
Although insulin and IGF1 have been known to enhance ALL survival/proliferation in vitro for decades, [14][15][16] their role in ALL has not been fully explored, except in T-ALL, where IGF1R expression was shown to be under Notch1 control and to play a fundamental role in T-ALL progression and transplantability in mice. 17,18 Interestingly, tumor-associated dendritic cells support T-ALL growth via IGF1R activation. 19 Our gene expression data indicate that primary BCP-ALL and T-ALL express both the INSR and IGF1R, although INSR seemed lower in T-ALL (supplemental Figure 12B). This was confirmed by us (Figure 5A; supplemental Figure 10) and others 5,18 by flow cytometry analyses. Thus, ALL cells seem ready to respond to insulin/IGFs. However, as we showed here, addition of insulin/IGFs alone has a rather small effect on ALL survival (supplemental Figure 13). Association of IGFBP7 with the insulin/IGF stimulus seemed mandatory to obtain any significant effect on primary ALL survival, and this may be the reason why the role of insulin/IGF1 in ALL biology has been underappreciated in previous studies.
High IGFBP7 expression has been associated with treatment resistance 20 and/or worse prognosis in ALL. 9,21,22 Here, we demonstrate that IGFBP7 exerts an autocrine prosurvival and mitogenic effect in ALL.
On activation and autophosphorylation, receptor tyrosine kinases undergo rapid internalization, mainly by clathrin-mediated endocytosis. 23 The INSR has a C-terminal motif for MAD2 binding, which in turn recruits BUBR1 and the clathrin adaptor protein complex AP2, which facilitates clathrin coating and endocytosis on receptor activation. 24 Conversely, IGF1R has no such MAD2 binding motif and exhibits prolonged perdurance at the cell surface, apparently mediated by IRS-1 binding to and blocking of AP2. 25 In keeping with previous findings in breast cancer, 11 we found that IGFBP7 inhibited IGF1R internalization despite insulin stimulation. The cell surface retention of IGF1R in ALL was found to prolong the insulin-induced phosphorylation of IGF1Rβ, IRS-1, ERK, and AKT. In contrast, in the breast cancer MCF10A and MCF10CA1a cell lines, the cell surface-retained IGF1R was insensitive to IGF1 or insulin. Hypothetically, IGFBP7 binding to unoccupied IGF1R sterically restricts or allosterically prevents subsequent binding of IGF1/insulin. 11 However, this inhibitory effect required preincubation of breast cancer cells for 2 hours with IGFBP7 before addition of growth factors; simultaneous addition of IGFBP7 with insulin/IGF1 had no effect. 11 Here, IGFBP7 and insulin or IGF1 were added simultaneously. Although differences in the INSR/IGF1R signaling and/or internalization complexes between ALL and breast cancer cells should not be undervalued, we suspect that the amount of IGFBP7 added to cells likely contributed to the discrepancy. Evdokimova et al 11 used IGFBP7 at 20 µg/mL, whereas we used 100 ng/mL. At 20 µg/mL, we also found an inhibitory effect of IGFBP7 on ALL cells. It is plausible that at this high concentration, IGFBP7 may indeed restrict or prevent IGF1/insulin binding to the IGF1R, but 20 µg/mL is far above the physiologic levels of IGFBP7 in adult serum (21-35 ng/mL) 26 or childhood ALL bone marrow plasma (49 ng/mL). 9 Unfortunately, the use of supraphysiologic amounts of IGFBP7 has been the rule rather than the exception in the literature. As shown here, physiologic levels of IGFBP7 promoted the perdurance of IGF1R at the surface of ALL cells, prolonging insulin/IGF stimulation. Accordingly, a recent study in mouse hepatocytes showed that IGFBP7 (20 ng/mL) binds to the INSR, potentiating its activation by insulin. 27 Likewise, ectopic expression of IGFBP7 in the breast cancer MDA-MB468 cell line enhanced IGF1Rβ/INSRβ phosphorylation by insulin, 11 an effect probably resulting from IGF1Rβ activation because this cell line expresses very low levels of INSR. 28 ALL cells express both the INSR (INSR-A isoform) and the IGF1R (supplemental Figure 12A-B). 29 At the supraphysiologic levels (500 ng/mL or 100 nM) used in this study, insulin was expected to bind INSR, IGF1R, and INSR/IGF1R receptors. However, only IGF1R was retained at the cell surface and showed prolonged phosphorylation on IGFBP7/insulin treatment of ALL cells. Previous coimmunoprecipitation assays demonstrated that IGFBP7 binds the extracellular part of IGF1R but not INSR. 11 Thus, we deduce that IGFBP7 exerted its effect by binding to homotypic IGF1R. Otherwise, we would expect the INSR chain, at least from heterotypic INSR/IGF1R receptors, to be phosphorylated as well.
Insulin is at least 200-fold less specific for the IGF1R than is IGF1. 29 Interestingly, monoclonal antibodies directed against the extracellular part of IGF1R are able to dramatically increase insulin binding to homotypic IGF1R, to affinity levels approaching IGF1 binding. Apparently, these antibodies promote insulin binding by inducing a conformational change in the IGF1R that results in the loss of inhibitory constraints. 30 We speculate that IGFBP7 may produce something similar, because 25 ng/mL insulin was able to activate the IGF1R to levels equivalent to those obtained with 5 ng/mL IGF1 (ie, only a fivefold difference).
Different drugs and antibodies targeting IGF1R have been developed, but most did not advance to clinical trials. 31 The similarity between the IGF1R and INSR receptors and their heterodimerization has been a challenge for the design of specific therapeutic drugs targeting IGF1R. 3 The SCH 717454 monoclonal antibody against IGF1R has shown promising in vivo results against 2 of 8 ALL samples tested. 31 Here we present IGFBP7 as a new candidate target for therapeutic antibodies directed at modulating the insulin/IGF system. We showed that IGFBP7 neutralization using a monoclonal antibody was safe (in mice) and resulted in significant increases in the survival of mice transplanted with patient-derived xenografts. Although dose-response experiments were not performed, the anti-IGFBP7 antibody was used at 1 mg/kg every 3 days, which is comparable to the dosing of a recent preclinical study with an Fc-engineered CD19 antibody. 32 In conclusion, we confirmed the notion that insulin/IGFs are important mitogenic and prosurvival factors in ALL and revealed IGFBP7 as a relevant player in this context and as a valid target for therapeutic intervention in the treatment of leukemia and possibly other cancers that involve the IGF system.
Genetic findings in miscarriages and their relation to the number of previous miscarriages
Purpose Early pregnancy loss leads to a devastating situation for many couples. Genetic disorders found in the pregnancy tissue are a frequent cause of miscarriages. It is unclear whether maternal age or previous miscarriages are associated with a higher chromosomal anomaly rate. This study aimed to determine the cytogenetical distribution of chromosomal disorders in couples after one or more previous miscarriages as well as the influence of maternal age. Methods 406 fetal tissue samples obtained after spontaneous abortion between 2010 and 2014 were successfully karyotyped. This included 132 couples with at least two losses and 274 couples with sporadic miscarriage. Normal and abnormal karyotype rates were determined for age, parity, gravidity, gestational week and number of previous miscarriages by logistic regression analysis. Results 145 (35.7%) fetal tissue samples had a normal karyotype, and 261 (64.3%) did not. After adjusting for age, older patients had a statistically significantly higher probability of genetic disorders in the pregnancy tissue (p < 0.001, OR 1.064, 95% CI 1.03–1.11). With each additional year, the probability of finding chromosomal abnormalities in a miscarriage increased by 6.4%. Patients younger than 35 years have a lower probability of having chromosomal disorders in the aborted material after two or more miscarriages than after sporadic miscarriages (50.7 vs. 58.9%) (p = 0.014, OR 0.67, 95% CI 0.48–0.914). Nevertheless, the risk of embryonic chromosomal disorders in patients aged 35 and above increased from 75.5% in sporadic miscarriages to 82.4% after more than one pregnancy loss (p = 0.59, OR 1.14, 95% CI 0.72–1.92). Conclusion Chromosomal disorders found after one or more previous miscarriages are related to patients’ age. Couples suffering two or more miscarriages should be investigated further, especially younger patients.
Introduction
Around 13% of pregnancies intended to be carried to term end with fetal loss [1]. Sporadic miscarriages are described in the literature with an incidence between 0.88 and 5% [2,3]. For recurrent miscarriages, conflicting definitions have been proposed. American and European guidelines regard the loss of two or more pregnancies as recurrent pregnancy loss (RPL) (ESHRE GDG early pregnancy loss 2017, ASRM 2013), while the WHO criteria consider three or more losses as RPL. The overall incidence under the WHO definition is around 1-3% [4][5][6]. Even though its prevalence seems to be lower than that of sporadic miscarriage, for the individuals concerned it is an even more devastating diagnosis.
One of the most important causes of pregnancy loss, found in around half of first-trimester miscarriages, is fetal chromosomal abnormality [7][8][9][10]. In the case of recurrent miscarriages, the role of fetal chromosomal disorders needs to be better elucidated. At least half of the cases of recurrent miscarriage remain unexplained [11][12][13]. This disappoints help-seeking couples and clinicians searching for further causes, and the resulting emotional distress can affect future attempts to conceive because no standardized therapy can be established.
With the exception of studies by Sugiura et al. [14], Van der Berg [8], and Popescu [15], most studies on recurrent miscarriages are > 10 years old, making it difficult to compare their results with current early pregnancy detection methods. Today, the overall prevalence of fetal loss and recurrent miscarriage, including clinical and biochemical pregnancies, might be higher.
In this study, we aimed to clarify the influence of chromosomal abnormalities on miscarriages. Therefore, we analyzed fetal chromosomal anomalies throughout the fertility lifespan in both sporadic and recurrent miscarriages.
In this retrospective approach, chromosomal analyses from 406 aborted specimens were examined. We were especially interested in evaluating the impact of chromosomal disorders, as well as maternal age, on sporadic miscarriages and on miscarriages following previous losses.
Materials and methods
This is a retrospective, single-center cohort study performed by the gynecological staff at the University Hospital in Mainz.
Population
From January 2010 to December 2014, 752 generally healthy patients were admitted for curettage because of miscarriage in early pregnancy. Each patient was offered a chromosomal exam with determination of the embryonic karyotype. The inclusion criteria were sonographic presence of a gestational sac and the patient's consent to a chromosomal exam. In 346 cases, no chromosomal exam was performed, for several reasons: no tissue growth, tissue contamination, or refusal to consent to a chromosomal exam. Patients with known genetic anomalies in either parent were not included in the study.
In total, 406 fetal tissue specimens from spontaneous abortions were obtained and analyzed. Documented parameters included parity, gravidity, maternal age, number of previous miscarriages, gestational age and cytogenetic results.
Tissue culture and cytogenetic analysis
Upon receipt in the laboratory, chorionic villi (approximately 30 mg) were retrieved and cleaned with 0.9% NaCl solution. The samples were assessed using an inverted microscope and stored in Leibovitz medium (L15) until further processing. Half of the tissue was cultivated overnight in Chang media B+C (Irvine Scientific, Giessen, Germany) at 37 °C with 5% CO2 and then used for direct preparation. The remaining material was minced with a scalpel, split into two cultivation bottles with Bio Amf-2 medium (Biological Industries, Israel) and stored at 37 °C with 5% CO2 for 8-14 days (long-term cultivation). The cultures were processed and stained by QFQ (quinacrine) banding using standard procedures. Chromosomes were analyzed with the GenASIs Bandview imaging system (Applied Spectral Imaging, ADS Biotec, Glasgow).
Statistics
Statistics Package for Social Sciences (SPSS 22.0, Chicago IL; USA) was used for the statistical analysis.
For our overall modeling approach, we chose maternal age, parity, gravidity, number of previous miscarriages and gestational age as known relevant continuous variables to describe a pregnancy, and the outcome variable normal or abnormal embryonic karyotype to describe chromosomal factors as a cause of miscarriage.
Abnormal karyotypes were divided into numerical and structural aberrations. Numerical aberrations included trisomies, double trisomies, other trisomies (such as triple, quadruple and quintuple trisomies), sex monosomies, triploidies and tetraploidies. Mosaicisms were considered if a proportion above 10% of distinct chromosomal cell lines was found. The group others included a combination of multiple numerical and structural genetical disorders. Categorical variables were used for the descriptive analysis, summarized as total numbers, and percentages. Continuous variables were examined by reporting means with standard deviations.
The association between continuous variables, such as age and parity, and the outcome variable (presence or absence of a chromosomal disorder) was studied using a logistic regression model. For crosstabs, Fisher's exact tests were performed. The significance level (α) was set at 5%. Odds ratios with 95% confidence intervals were used to present the results of the models. This should be considered an explorative analysis; as such, p-values are given for descriptive reasons only and should be interpreted in association with the effect estimates (OR/mean difference). To analyze the interaction between the terms age and number of previous miscarriages, we performed a separate logistic regression using the number of previous miscarriages as a continuous variable. Maternal age, parity, gravidity, number of previous miscarriages and gestational age were used as independent variables. The ORs shown are expressed on a per-year basis.
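For illustration, the following is a minimal sketch of this modeling approach in Python with statsmodels (the study itself used SPSS; the data file and column names here are assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per karyotyped miscarriage, with a binary
# outcome (1 = abnormal embryonic karyotype) and the baseline covariates.
df = pd.read_csv("miscarriage_karyotypes.csv")

# Main-effects model with the five continuous predictors named in the text.
main = smf.logit(
    "abnormal_karyotype ~ age + parity + gravidity"
    " + prev_miscarriages + gestational_age",
    data=df,
).fit()

# Separate model testing the age x previous-miscarriages interaction term.
interaction = smf.logit(
    "abnormal_karyotype ~ age * prev_miscarriages", data=df
).fit()

# Odds ratios with 95% confidence intervals, as reported in the paper.
print(np.exp(main.params))
print(np.exp(main.conf_int()))
print(np.exp(interaction.params["age:prev_miscarriages"]))
```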
Results
The study population included a total of 406 fetal tissue specimens obtained during dilation and curettage after the diagnosis of spontaneous miscarriage. The characteristics of the patients are shown in Table 1.
Patients were divided in two groups. Group 1 comprised all patients with an abnormal embryonic karyotype in the aborted material. Group 2 comprised all patients with a normal embryonic karyotype in the aborted products.
Miscarriages were considered to be sporadic if a single miscarriage was reported. 274 patients experienced a single sporadic miscarriage, whereas 132 patients experienced more than one miscarriage: 87 patients had two previous miscarriages and 45 had three or more previous losses. 243 patients were younger than 35 years and 163 patients were at least 35 years old.
The role of previous miscarriages
The overall prevalence of an abnormal karyotype was 65.3% in sporadic miscarriages and 62.1% after more than one miscarriage. Patients with at least two miscarriages showed a lower number of chromosomal disorders than patients after one miscarriage (p = 0.039, OR 0.768, 95% CI 0.60-0.99, logistic regression, χ²). Overall, each additional miscarriage reduced the probability of chromosomal embryonic abnormalities by 23.15%.
The role of maternal age
Maternal age was the only significant predictor of chromosomal embryonic abnormalities found (p < 0.001, OR 1.06, 95% CI 1.03-1.11, logistic regression, χ²). To specify the role of age, patients were separated into two subgroups: patients younger than 35 years and patients aged 35 years and above. The embryonic anomaly rate was 56.4% in patients younger than 35 years and 76.1% in patients at least 35 years of age. Across all patients, the probability of cytogenetic aberrations in the aborted material increased by 6.4% with every year. Figure 1 shows the prevalence of chromosomal abnormalities of the embryo with increasing maternal age.
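As a side note, the per-unit percentage changes quoted here and above follow directly from the reported odds ratios; strictly speaking, they describe changes in the odds of an abnormal karyotype rather than in its probability. A minimal sketch of the arithmetic, using the rounded ORs from the text:

```python
# Converting the reported odds ratios into per-unit percentage changes.
# Note: an OR describes the multiplicative change in odds, not probability.
or_age = 1.064           # per additional year of maternal age
or_prev_losses = 0.768   # per additional previous miscarriage

print(f"odds of abnormal karyotype per year: {(or_age - 1) * 100:+.1f}%")
# -> +6.4%
print(f"odds per additional miscarriage: {(or_prev_losses - 1) * 100:+.1f}%")
# -> -23.2% (quoted as 23.15%, presumably computed from the unrounded OR)
```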
The role of maternal age and previous miscarriages
Normal/abnormal embryonic karyotypes in sporadic and recurrent miscarriages were estimated year by year. Focusing on women younger than 35 years, patients with previous miscarriages showed a lower rate of abnormal embryonic karyotypes than those with sporadic miscarriages (50.7 vs. 58.9%) (p = 0.014, OR 0.668, 95% CI 0.48-0.91, logistic regression). Each further miscarriage in patients younger than 35 years reduced the probability of finding a chromosomal disorder in a miscarriage by 33.16%. A lower rate of abnormal embryonic karyotypes after previous miscarriages compared to sporadic miscarriages was seen year by year until the age of 34 (Fig. 2). In patients aged 35 years and above, the number of abnormal embryonic karyotypes found in a miscarriage was higher after previous losses (82.4%) than after one miscarriage (75.5%) (p = 0.59, OR 1.14, 95% CI 0.72-1.92, logistic regression).
The interaction between the terms 'number of previous miscarriages' and 'age' was significant (p = 0.03227, OR 1.061, 95% CI 1.009-1.125, logistic regression).
Other parameters, such as parity and week of gestation, did not influence the prevalence of abnormal embryonic karyotypes.
Distribution of genetic disorders by age
The male/female distribution of the aborted embryos was quite similar, with 50.34% normal karyotypes in females.
No remarkable differences in chromosomal anomalies were found after one or more miscarriages among different age groups (see Table 2).
Discussion
Pregnancy loss and especially repeated pregnancy loss are traumatic and overwhelming situations for many couples. Embryonic chromosomal disorders are a frequent and identifiable cause of early miscarriages. This study provides an overview of genetic disorders in aborted material in a university hospital in Germany using cytogenetics.
On the one hand, maternal age has a clear effect on embryo genetics: the prevalence of abnormal embryonic karyotypes increases with advancing maternal age.
On the other hand, our study approach tried to explain the role of chromosomal abnormalities after spontaneous and after repeated miscarriages: the overall prevalence of embryonic abnormalities in sporadic miscarriages was significantly higher than after repeated miscarriages.
Analyzing maternal age, previous miscarriages and abnormal embryonic karyotypes, we could demonstrate that in women younger than 35 years, the probability of finding chromosomal aberrations in embryonic tissue decreased with every miscarriage. In other words, in younger patients it was less probable to find chromosomal aberrations in the embryonic tissue after two or more miscarriages than after one single miscarriage. This inverse correlation was observed until the age of 34 years. In patients aged 35 and above, by contrast, the probability of an abnormal embryonic karyotype increased with every miscarriage.
This study sample suggests that an abnormal embryonic karyotype is the most common cause of miscarriage [16,17]. The overall prevalence of chromosomal abnormalities in aborted material was 64.3% in our study. Comparably high genetic aberration rates after miscarriages have been reported in previous publications [14,[16][17][18][19].
Maternal age can be considered a significant predictor of chromosomal aberrations in aborted material. This has been clearly demonstrated in earlier literature as well [16,17]. Compelling data suggest that the recurrence risk of chromosomal aberration mainly depends on maternal age, because increasing age is associated with a higher aneuploidy rate, especially the risk of trisomy. Even in the absence of other risk factors, being older than 30 years is a risk factor for sporadic and repeated pregnancy loss [18][19][20]. In accordance with the well-known age-dependent decrease in oocyte quality associated with increasing aneuploidy rates, particularly above the age of 35 years [21], we found a higher total number of chromosomal embryonic abnormalities in the older group (at least 35 years old) compared to their younger counterparts. Our finding accords with the Scandinavian study of Roepke et al. [23], which reported an increasing prevalence of genetic aberrations in miscarriages over the last 10 years due to older age at childbearing. Similar abnormality rates after three or more losses as after sporadic losses (78% vs. 70%, p = 0.28) were reported by Marquard et al. in patients at least 35 years old [22].
The prevalence of abnormal embryonic karyotypes was higher after one miscarriage than after two or more miscarriages, year by year, up to a maternal age of 34 years. A clear cut-off was found at the age of 35 years.
Due to the known effect of ovarian ageing on oocyte and embryo quality, the risk of finding an abnormal embryonic karyotype increased with every miscarriage in women aged 35 years and above. Therefore, in the older population (where age has the greatest impact), it was more probable that fetal chromosomal abnormalities increased year by year, regardless of the number of previous miscarriages.
On the contrary, in the younger population, with presumably shorter times to pregnancy and good embryo quality, every additional miscarriage might hint at other pathologies related to miscarriage. The impact of age was minimized here, and the probability of finding chromosomal disorders was reduced by 33.16% with every miscarriage.
In the overall study sample and in the younger population up to 34 years of age, our findings were consistent with results from other groups reporting similarly higher rates of normal karyotypes after previous miscarriages than after sporadic losses [20,21,23,24].
Interestingly, for our older population, which showed more chromosomal abnormalities after previous miscarriages, discordant results have been reported: some groups found an association between the number of abortions and chromosomal abnormalities [23,24], while others did not [20,22,25,26].
Plausible explanations for this conflicting finding may lie in the sample stratification. While some groups included only patients aged 35 years and above [22], other groups analyzed a wider range of ages [25,26], which may act as a confounding factor. The lack of homogeneous criteria for recurrent spontaneous abortion (after two or after three miscarriages) as well as imbalances in group sizes may also have contributed to the differences observed.
Other pathologies, such as thrombophilia, thyroid dysfunction, parental genetics or uterine malformations, are related to recurrent pregnancy loss and should be taken into account [27]. Since chromosomal aberrations are the leading cause of miscarriage, genetic analysis of aborted material is indispensable in cases of recurrent pregnancy loss, despite the high costs.
Popescu's group of American researchers described an overall cost-effective algorithm: independent of age, a chromosomal microarray analysis should be performed on the second miscarriage. Only after euploid results should further diagnostic procedures, such as parental genetics, hysterosalpingography, thrombophilia, thyroid function, HbA1c and prolactin testing, be performed according to American guidelines. Using this algorithm, the reason for recurrent miscarriage can be found in more than 90% of cases [9]. A similar algorithm for further diagnostics in cases of a second euploid miscarriage has been suggested by other groups [23,24]. The European ESHRE guidelines also give a conditional recommendation to perform genetic analysis on the pregnancy tissue for explanatory purposes only, preferring array CGH (ESHRE GDG Early Pregnancy Loss 2017).
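The decision flow of this algorithm can be rendered schematically as follows (a sketch after Popescu et al. [9]; the function and category names are illustrative only, not part of the published algorithm):

```python
from typing import Optional

def recurrent_miscarriage_workup(miscarriage_count: int,
                                 tissue_karyotype: Optional[str]) -> str:
    """Next diagnostic step under the cost-effective algorithm sketched above."""
    if miscarriage_count < 2:
        return "no genetic work-up after a single sporadic miscarriage"
    if tissue_karyotype is None:
        # Independent of maternal age, test the pregnancy tissue first.
        return "chromosomal microarray analysis of the aborted material"
    if tissue_karyotype == "aneuploid":
        return "loss explained by the aneuploidy; no further testing"
    # Euploid loss: proceed to the extended evaluation.
    return ("parental karyotypes, hysterosalpingography, thrombophilia, "
            "thyroid function, HbA1c and prolactin testing")

print(recurrent_miscarriage_workup(2, "euploid"))
```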
A strength of this study is its cohort size. The study sample had the distinct advantage of being composed of a defined cohort of patients with sporadic or recurrent miscarriages who were treated at the same institution with identical laboratory techniques, ensuring consistency of results.
Limitations to be acknowledged in this study include missing data on other fertility parameters and the missing cytogenetic results of preceding miscarriages or of the couples themselves. Another limitation is the well-known risk of maternal cell contamination in female fetuses in conventional karyotyping, resulting in an inflated euploid female rate [9,28].
Specific test failure rates were unfortunately not recorded. Reasons for culture failure might have included insufficient specimens in the early weeks of pregnancy or after incomplete abortion. The high number of samples excluded from further analysis may also suggest a high culture failure rate for karyotyping. There is evidence that chromosome microarray analysis may be more effective at detecting aneuploidies [29,30]. Microarray testing has been shown to be preferable to karyotyping due to a lower culture failure rate, and it offers a small incremental diagnostic yield by detecting submicroscopic copy-number variations [31][32][33].
The findings of this study lead us to the assumption that, especially in patients younger than 35 years suffering from repeated miscarriages, the probability of finding chromosomal disorders in the embryonic tissue is low. In patients aged 35 years and above, the effect of maternal age seems to outweigh the impact of the recurrence itself.
Chromosomal analysis should be offered after previous miscarriages before other pathologies are taken into account and further diagnostic methods are performed.
Human and animal rights
This article does contain studies with human participants.
Informed consent Informed consent was obtained from all individual participants included in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Development of angle closure and associated risk factors: The Handan eye study
Abstract Purpose To investigate the development of angle closure from baseline open angle and associated risk factors in a rural Chinese population through a longitudinal study over a 5‐year period. Methods Subjects aged ≥30 years with bilateral open angles at baseline of the Handan Eye Study who participated in the follow‐up and had undergone both baseline and follow‐up gonioscopic examinations were included. Subjects with any form of angle closure, glaucoma, incisional ocular surgery or other conditions that could influence the results were excluded. The development of angle closure was defined as the presence of primary angle closure suspect (PACS) or primary angle closure (PAC)/primary angle closure glaucoma (PACG) during the follow‐up in normal subjects with baseline bilateral open angles. Logistic regression was performed to identify the baseline risk factors for the development of angle closure. Results A total of 457 subjects with bilateral open angles at baseline, aged 53.0 (45.5, 58.0) years, were enrolled. 94.7% of the included cases developed PACS, 5.3% developed PAC and none developed PACG after 5 years. In logistic regression, significant risk factors for the development of angle closure were shallower central anterior chamber depth (ACD) (p = 0.002) and narrower mean angle width (p < 0.001). Conclusions This study reports the development from baseline open angle to angle closure after a 5‐year follow‐up. We confirm that mean angle width and central ACD were independent predictive risk factors for the development of any form of angle closure.
Introduction
Glaucoma has long been recognized as a major cause of ocular morbidity and the leading cause of irreversible blindness worldwide (Foster et al. 2001;Quigley et al. 2006). One of the major types of glaucoma, primary angle closure glaucoma (PACG), is an aggressive condition which can lead to severe vision loss and has the highest prevalence among Asian populations, especially Chinese (Foster et al. 2001;Quigley et al. 2006;Tham et al. 2014). With 10 million people estimated to be affected with PACG in China by 2020 (about half of the total worldwide), the disease is a serious challenge for health care in China (Foster et al. 2001).
Primary angle closure disease (PACD) is considered to be a potentially preventable disease. If detected and treated with prophylactic intervention in the PACS and early PAC stages, the progression to PACG can be prevented to some extent or slowed (Nongpiur et al., 2011b;Sun et al. 2017). However, there is a paucity of information about the risk factors associated with the development of angle closure in those who initially have open angles (Tham et al. 2014;Sun et al. 2017).
According to previous studies, shallower limbal ACD, shallower central ACD, rapid shallowing of the ACD, increased lens thickness (LT), shorter axial length, anteriorly positioned lens, hyperopia, higher intraocular pressure (IOP) and narrower anterior chamber angle have been reported to be associated with the development of angle closure in different populations (Alsbirk et al. 1992;Ye et al. 1998;Yip et al. 2008;George et al. 2012;Kashiwagi et al. 2013;Vijaya et al. 2013;Wang et al. 2019). No studies referring to the rural Chinese population were reported.
The aim of this study was to report the associated risk factors for the development of angle closure from baseline bilateral open angles in a rural Chinese population.
Subjects
The Handan Eye Study (HES) was a population-based cohort study conducted on a sample of rural Chinese adults initiated in 2006. At the baseline, 6830 eligible subjects aged 30 years or older were included from 13 villages in Yongnian County, Handan City, Hebei Province, Northern China, using a clustered random sampling method (Liang et al. 2009). The follow-up research was implemented between 2012 and 2013, following the same protocol. All participants from the baseline study were invited to take part in follow-up examinations 5 years later. This follow-up study included 5394 participants who returned for the repeat examinations (85.3% of survivors).
The subjects enrolled in our study were participants who received gonioscopic examinations at both baseline and follow-up and were diagnosed with baseline bilateral open angles. Those who satisfied the following criteria were excluded: subjects who were diagnosed with any form of angle closure, primary open angle glaucoma (POAG) or any form of secondary glaucoma, leucoma, keratoconus, iridocyclitis, iris or ciliary body cysts or tumours, spherophakia, congenital microphthalmia etc., had incisional ocular surgery or ocular trauma, which could have influenced the results at the baseline examination. Subjects who had bilateral cataract surgery during the 5-year follow-up (if it was unilateral cataract surgery, untreated eyes were used for outcome analysis) were also excluded from the analysis.
This study was conducted in accordance with the Helsinki Declaration and was approved by the Ethics Committee of Beijing Tongren Hospital. The subjects were adequately informed of the study, and verbal and written informed consent was obtained from all of them.
Examination
The ophthalmic examination consisted of measuring the presenting visual acuity (PVA) and best-corrected visual acuity (BCVA) using logarithm of minimum angle of resolution (log-MAR) 4-metre charts, objective and subjective refraction, slit-lamp biomicroscopy, visual field examination, intraocular pressure (IOP) measurement, gonioscopy, A-scan ultrasound biometry and fundus examination.
Refraction and corneal curvature radius (CCR) were measured using a KR-8800 auto kerato-refractometer (Topcon, Tokyo, Japan), visual field test using the standard 24-2 Swedish Interactive Testing Algorithm (SITA) program on a visual field analyser (Humphrey Visual Field Analyzer 740i or 750i; Carl Zeiss, Jena, Germany). Slit-lamp biomicroscopy was performed, and peripheral anterior chamber depth was graded according to the modified van Herick system, in which the limbal chamber depth was graded as a percentage fraction of the thickness of the adjacent cornea at the most peripheral point in the following seven categories: 0%, 5%, 15%, 25%, 40%, 75% and ≥100% (Foster et al. 2000). Intraocular pressure (IOP) was recorded using a Kowa applanation tonometer (HA-2, Kowa Company Ltd. Tokyo, Japan) under topical anaesthesia using proparacaine 0.5% and fluorescein staining of the tear film. The right eye was measured first, and 2 measurements of IOP were taken per eye; if they differed by more than 2 mmHg, a third measurement was taken. The mean value of two measurements with smaller differences was identified as the IOP value.
Gonioscopy was performed on one in ten participants as well as on all persons with limbal anterior chamber depth (LACD) ≤40%, IOP >21 mmHg, and those having a history of glaucoma or suspect, with a one-mirror Goldmann gonioscopic lens (Ocular Instruments, Bellevue, WA) at 925 magnification by experienced ophthalmologists at baseline and follow-up.
The gonioscopic observations were standardized. The baseline gonioscopic examinations were performed by a single observer. The follow-up gonioscopic examinations were performed by one of two observers who had a weighted kappa score of 0.76. Static examination was performed in dim ambient illumination with a shortened slit that did not fall on the pupil. The anterior chamber angle width was graded according to the Spaeth system and recorded as 0, 10, 20, 30, 40 and 50 degrees (°), and the peripheral iris contour, degree of trabecular meshwork pigmentation and other angle abnormalities were recorded in all four quadrants of each eye. The mean angle width was calculated as the mean value of the angle widths in four quadrants. Indentation gonioscopy was performed with increased illumination after static gonioscopy, to assess angle opening, and findings on indentation were recorded.
A-scan ultrasound was performed by a 10-MHz A/B-mode ultrasound device (CineScan; Quantel Medical, Clermont-Ferrand, France), using a hard-tipped, corneal contact probe mounted on a slit lamp at baseline and an OcuScan RxP (Alcon, Inc., Fort Worth, TX, USA) at the follow-up. The anterior chamber depth (ACD), lens thickness (LT) and axial length (AL) were measured before mydriasis. Absolute lens position (ALP) was defined as ACD + 1/2 9 LT and relative lens position (RLP) as ALP/AL. All subjects except those with a broad peripheral anterior synechiae (PAS) on gonioscopy (≥6 clock hours) who have a high risk of acute angle closure (AAC) underwent pupillary dilation using 1% tropicamide. Lens grading for cataract using the lens opacity classification system III (LOCS III) was performed, comparing with standard photographs for nuclear opalescence (NO), nuclear colour (NC), cortical cataract (CC) and posterior subcapsular cataract (PSC), after pupil dilation by two trained graders in baseline and follow-up examinations (Chylack et al. 1993). The NO scores (ranged from 0.1 to 6.9 with one decimal) and CC scores (ranged from 0.1 to 5.9 with one decimal) were used in our study to evaluate the severity of cataract.
Stereoscopic evaluation of the optic nerve head was performed using a +78 diopter lens or +90 diopter lens at 916 magnification after pupil dilation and the slit lamp. The vertical and horizontal cup-to-disc ratios (CDRs) were measured and recorded. The presence of any notching, splinter haemorrhages or peripapillary atrophy was documented.
All participants underwent height, weight, waist circumference and hip circumference measurements. Body mass index (BMI) and waist hip ratio (WHR) were calculated for all participants. We also administered questionnaires for assessing the socioeconomic status, education level, demographic and personal history (smoking, alcohol consumption and diet), and any history of ophthalmic problems or surgeries, systemic diseases such as diabetes mellitus, hypertension, use of systemic or topical medication were also elicited and recorded.
Definition of primary angle closure disease
The definitions developed by the International Society for Geographical and Epidemiological Ophthalmology (ISGEO) were used for the states of PACD in our study: PACS: an eye in which 180°or more of the posterior pigmented trabecular meshwork could not be seen during a static examination, with IOP <21 mmHg and no PAS, previous AAC or glaucomatous optic neuropathy (GON) (Foster et al. 2002). PAC/PACG: a PACS eye with established PAS and/or IOP >21 mmHg and/ or GON (Foster et al. 2002).
The development of angle closure in this study was defined as the presence of PACS (occludable angle on gonioscopy) or PAC/PACG (the presence of increased IOP and/or the presence of PAS in PACS subjects with or without GON) during the follow-up in normal subjects with bilateral open angles at baseline.
Statistical analysis
All statistical analyses were performed using SPSS statistical software (Version 25.0; SPSS, Inc., Chicago, IL, USA). We used ocular factors of the right eye for cases where either both eyes and the right eye only developed angle closure. For those with development of angle closure only in the left eye, ocular factors of the left eye were used.
Comparison of variables between subjects with developed angle closure and controls was done using the independent t-test (for variables demonstrating a normal distribution) or Mann-Whitney U-test (for variables failing to demonstrate a normal distribution) for continuous variables and Pearson's chi-square test for categorical variables. Statistical significance was assessed at p values less than 0.05. Univariate and multivariate logistic regression was performed to identify the baseline risk factors for the development of any angle closure; these included age, sex, IOP, biometric parameters, BMI, socioeconomic status, education level, demographic and personal history. Variables with a p value less than 0.05 in the univariate logistic regression were included in the multivariate regression analysis.
Receiver operating characteristic (ROC) curves and area under the curve (AUC) were used as an index of testing the performance of baseline ocular parameters on predictive detecting development of any forms of angle closure.
Results
The number of participants who received gonioscopic examination in the baseline study was 2046. Of these, 376 were not eligible for the follow-up study because (1) died, 153 subjects (7.48%), (2) had severe physical or mental diseases, 48 subjects (2.35%), (3) were at work, 103 subjects (5.03%), (4) refused to attend, 54 subjects (2.64%), (5) were out of contact, 18 subjects (0.88%), leaving 1670 participants who completed the follow-up study. Among them, 16 refused or could not tolerate gonioscopic examination. 552 did not meet the requirements for gonioscopic examination in the follow-up study. Hence, a total of 1102 participants received gonioscopic examinations in both baseline and in the follow-up study ( Fig. ).
In comparison with non-examinees, the enrolled examinees tended to be older (p < 0.001); female (p < 0.001); have lower income (p < 0.001); likely to be hypertensive (p = 0.006); and have larger SE (p < 0.001), smaller ACD (p < 0.001), larger LT (p < 0.001) and smaller AL (p < 0.001) ( Table 1). No difference was noted in education level, prevalence of diabetes, prior cataract surgery, BMI and IOP. Among the 1102 subjects who completed the baseline and follow-up gonioscopic examinations, 623 subjects presented with PACD, 21 with POAG and 1 with secondary glaucoma at baseline. Three subjects received bilateral cataract surgery during the fiveyear period, including 2 with PACD and 1 with POAG at baseline. Eight subjects with unilateral cataract surgery (4 in the left eye and 4 in the right eye) during the 5 years were included, and the eyes on which the cataract surgery was not performed were analysed. A total of 457 subjects with bilateral open angles at baseline aged 53.0 (45.5, 58.0) years who had sufficient data were enrolled in the study (Fig. 1). 160 were male, and 297 were female. The overall development of any form of angle closure disease was 150 cases.
Eighty-one cases developed unilateral angle closure (33 involved the right eye and 48 involved the left eye), and 69 cases developed bilateral angle closure. Primary angle closure suspect (PACS) was the most common form of angle closure to develop, with 142 of 150 cases (94.7%) being classified as PACS. Eight of 150 cases (5.3%) developed PAC in either eye, and none developed PACG. All the 8 cases with PAC were unilateral and presented with PAS, but not an elevated IOP.
Compared with those who did not develop any form of angle closure over the 5-year period, those who developed angle closure were more likely to be female (p = 0.016), had shallower limbal ACD (p = 0.040) and central ACD (p < 0.001), had narrower mean angle width (p < 0.001), had thicker lenses (p = 0.005), had smaller ALP (p < 0.001) and smaller RLP (p = 0.004), and had shorter AL (p < 0.001) at baseline (Table 2). No difference was found in age, education level, income, prevalence of hypertension, prevalence of diabetes, BMI, WHR, cataract surgery, SE, CCR, IOP, NO score and CC score.
In multivariate logistic regression, significant risk factors for the development of any form of angle closure were shallower central ACD (p = 0.003) and narrower mean angle width (p < 0.001) ( Table 3).
ROC analysis was used to assess the potential performance of mean angle width and central ACD as a combined determinant of development of any form of angle closure (Fig. 2). The AUC was 0.703 (95% CI, 0.650-0.753).
Discussion
This study on the development of angle closure from baseline normal subjects with open angle was based on the 5year follow-up of a cohort of subjects who participated in the Handan Eye Study. Since not all the subjects at baseline and follow-up received gonioscopic examinations, we were not able to provide the incidence of angle closure. We could, however, provide data on the progression of angle closure from baseline, the demographic and biometric characteristics of subjects who developed angle closure compared with those who did not, as well as explore associations between pre-existing risk factors at baseline and the development of any form of angle closure. There were no cases of AAC; all subjects with progression developed either PACS or PAC.
Previous cross-sectional studies have shown that the prevalence of eyes with angle closure is high among the elderly and women, which we found in the baseline study of the HES (Yamamoto et al. 2005;Casson et al. 2009;Liang et al. 2011). Our study suggested that female subjects with bilateral open angles at baseline were more likely to develop angle closure compared with those who did not, but no statistically significant difference in age was found between the two groups. We did find that the progression from open angle to angle closure peaked in those in their 50s.
Ocular biometric parameters are known risk factors for PACD. Eyes with angle closure tended to have shallower ACD, narrower angle width, more hyperopic spherical equivalence, thicker lens, greater lens vault and shorter AL than the eyes of those without angle closure (George et al. 2003;Lavanya et al. 2008;Casson et al. 2009;Nongpiur et al., 2011a). Shorter AL, shallower ACD and narrower angle recess width are all associated with a crowded anterior segment which makes the eye susceptible to PACD (Vijaya et al. 2013). Hyperopia is a known risk factor; this could be due to two possible reasons. First, eyes with hyperopia tend to have shorter AL and crowed anterior segment which makes these eyes susceptible to angle closure (Vijaya et al. 2013). Second, hyperopia may be caused by cortical cataracts which increase the lens thickness and contribute to angle closure (Wong et al. 2001;Vijaya et al. 2013). The lens is believed to play a crucial role in the pathogenesis of PACD because of increased lens thickness and/or a more anterior position (Vijaya et al. 2013). However, the exact association between lens and angle closure is equivocal and inconsistent (Yamamoto et al. 2005;Casson et al. 2009;Liang et al. 2011;Nongpiur et al., 2011a;Vijaya et al. 2013). One possible explanation for that is the inability to control the accommodation while measuring the lens thickness (George et al. 2012). We found that compared with those who did not develop angle closure after 5-year of follow-up, subjects who developed angle closure from baseline open angle tended to be female, had shallower central ACD and limbal ACD, narrower anterior angle width, thicker lens, smaller ALP and RLP and shorter axial length. We further observed that the mean angle width and central ACD were determinants for the development of angle closure. However, these two parameters as a combined determinant had an AUC of 0.703, not enough predictive ability in deciding who will develop angle closure in open angle subjects.
To date, only a few prospective studies conducted on Eskimos, Chinese, Caucasian, Indians, Mongolians or Japanese have reported the progression from open angle to angle closure and the associated risk factors, as summarized in Table 4 (Alsbirk et al. 1992;Wilensky et al. 1993;Erie et al. 1997;Ye et al. 1998;Yip et al. 2008;Kashiwagi et al. 2013;Vijaya et al. 2013;Wang et al. 2019).
Although the rates of development of angle closure differed among these studies which may be caused by different inclusion criteria and populations, the important common risk factors which have consistently been shown to be associated with the development of angle closure in longitudinal studies were shallower ACD and narrower anterior chamber angle and this was also the case in our study (Alsbirk et al. 1992;Ye et al. 1998;Thomas et al. 2003a;Thomas et al. 2003b;Yip et al. 2008;George et al. 2012;Kashiwagi et al. 2013;Vijaya et al. 2013;Wang et al. 2019).
However, we found that the combined determinant of central ACD and mean angle width did not have strong enough predictive ability to warrant its use in determining who requires more intensive monitoring for the development of angle closure. The possible reason might be that other dynamic risk factors involved in the pathogenesis of PACD were not included, such as dynamic iris changes with pupil dilation (Zhang et al. 2014;Zhang et al. 2015;Zhang et al. 2016).
In our previous studies, we reported an AUC of 0.844 with inclusion of angle recess area at 750 lm, anterior chamber volume, lens vault and iris cross-sectional area change / pupil diameter change after physiologic mydriasis in the prediction model for detection of PACS (Zhang et al. 2020). In the future study, we tend to investigate the demographic, ocular anatomical and dynamic risk factors associated with the development of angle closure.
Risk factors other than shallow ACD and narrow anterior chamber angle have also been reported. For example, in the community-based study conducted in Japanese residents aged ≥40 years old, rapid shallowing of the ACD assessed by scanning peripheral anterior chamber depth analyser (SPAC) grades were demonstrated to be associated with angle closure development (Kashiwagi et al. 2013). The ACD grades provided by SPAC were obtained by quantitatively measuring ACD in a non-contact fashion from the optical axis to the limbus and comparing with the ACD values derived from a sample of Japanese subjects (Kashiwagi et al. 2004). In our study, the central ACD was measured using Ascan, but with different machines and different positions of subjects when measuring at baseline and at followup. Hence, we could not investigate the association of shallowing of central ACD with the development of angle closure.
Increased LT has also been reported as risk factors associated with angle closure development in Indian and Chinese populations (Vijaya et al. 2013;Wang et al. 2019). Moreover, hyperopia, shorter AL and anteriorly positioned lens were found to be associated with angle closure development in Indian population (Vijaya et al. 2013). We did find the differences in LT, AL, ALP and RLP between the subjects who developed angle closure and those did not but failed to demonstrate the association between these variables and the development of angle closure. It should be noted that in these two studies, there were relatively higher ratios of developed PAC/PACG and relatively lower ratios of developed PACS during the follow-up of 6 years and 10 years, respectively, compared with those in our study during the 5year follow-up. Hence, this might be the reason.
The strength of our study lies in the population-based cohort design with international standardized protocols and data collected by trained researchers under strict quality control. There are some limitations of this study. One of the main limitations in our longitudinal study is loss to gonioscopic examination at follow-up. At baseline and the 5-year follow-up, we did not perform gonioscopy on all subjects but only did so on those with LACD ≤40% of corneal thickness or other sign of glaucoma and suspects, as well as one in ten of the examined subjects each day. Liang et al had reported that at baseline, this strategy might miss some PACS and PAC (one in over 400 PAC cases) cases but were not likely to miss any PACG case (Liang et al. 2011). However, this same strategy for gonioscopic examination used at follow-up would cause bias in our study.
To address this issue, we compared available baseline parameters between the two groups who did and did not receive the follow-up gonioscopic examination. Our data suggested that subjects who received follow-up gonioscopic examinations tended to be older and female, have lower income, lower prevalence of hypertension, larger SE, shallower central ACD, thicker lens and shorter AL. Given that female gender, larger SE, shallower central ACD, thicker lens and shorter AL were risk factors for angle closure, the development of angle closure is likely to have been overestimated. The underrepresentation of men and those with higher income was probably due to occupational reasons: men and those with higher income were more likely to be at work during the examinations.
Secondly, in our study, after a duration of 5 years, 94.7% subjects with baseline open angle developed PACS, 5.3% developed PAC and no one developed PACG. The reason for no PACG developed in the follow-up might be the relatively short duration. Hence, caution is warranted in extrapolating the findings to all the spectrum of PACD.
Thirdly, gonioscopy is a subjective measurement using a standardized grading system, and therefore, interobserver and physiological variation can alter the interpretation of the results. Last but not least, the study population was Chinese, and the results may not be extrapolatable to other racial / ethnic groups.
Conclusions
In conclusion, this study reported the development from baseline open angle to angle closure after a 5-year followup. The study confirmed that the mean angle width and central ACD were independent predictive risk factors for the development of any form of angle closure. Greater baseline LT, shallower ACD and narrower angle width Prospective cohort study 2019 AAC = acute angle closure, ACD = anterior chamber depth, AL = axial length, CI = confidence interval, IOP = intraocular pressure, LT = lens thickness, PAC = primary angle closure, PACG = primary angle closure glaucoma, PACS = primary angle closure suspect.
|
2021-05-08T06:17:04.656Z
|
2021-05-07T00:00:00.000
|
{
"year": 2021,
"sha1": "bb7e68eee1362046f2422f06d02e667f6d598281",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1111/aos.14887",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "add0445e5efeda6f9d20753562008180261d72af",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
234022364
|
pes2o/s2orc
|
v3-fos-license
|
The Effectiveness of Blended Learning in Chemistry Creative Media Course
This study has purpose to figure out the effectiveness blended learning activity in creative media course through blended learning for education chemistry students year 2017. The effectiveness declared from learning outcome and students’ response. The study was held in odd semester 2019-2020 for 15 meetings. After the instrument of blended learning (Students worksheets, visual and video/audio media) showed validity then learning activity was conducted from August to December 2019. There were combination of online and offline in teaching-learning activity for 15 meetings. Discussion, Assignments and its submission activities was using vi learn for 4 meetings, and the rest of meetings were used offline. As final project, students in team presented chemistry creative media in internal exhibition. The theme of exhibition is physical chemistry creative media. The media assessed by lecturers and students (other years or other study programs) who attended the exhibition. The data of learning outcome were obtained from final project score and assignments. While the data of students’ response were gained after the exhibition was held. The result of this study showed 100% students could achieve mastery learning outcome and student responded this course in very good criteria. It can be concluded that chemistry creative media course effective to be conducted by using blended learning activity.
Introduction
The 21 st century education requires adaptation in every line which is marked by very tight competition in the fields of technology, management and human resources (HR). Mastery of technology and science is a must so that the nation's competitiveness increases. The learning process with e learning is increasingly developing as a result of the development of science, knowledge, and the need for distance education.
The use of e learning as distance learning is increasingly being used in the world of education because of its practicality. Distance learning is structured learning that takes place without the presence of direct education in front of students [1]. Other terms of distance learning that apparently synonymous and interchangeable and are merely the preferred delivery mechanism for most distance learning are online learning, digital learning, e-learning and virtual learning [2]. Distance learning can be combined with traditional teaching which need face-to-face class interaction, this strategy can be called as blended learning. Blended learning is an innovative concept that embraces the advantages of both traditional teaching in the classroom and ICT supported learning including both offline learning and online learning for the same students studying the same content in the same course [3], [4], [5]. Blended learning forces teacher and learner to consider the characteristics of digital technology, in general, and information communication technologies (ICTs) [6]. Blended learning also known as hybrid or mixed-mode learning [5]. Asynchronous on-line learning experiences provide students with opportunities for meaningful reflection [7]. Most campus-based classrooms with their large class sizes do not provide students with an environment conducive for reflection [7]. Education institutions all around the world for over decades concern to certain skills, such as language skills and critical thinking, while some other skills are more recently emergent, namely, digital literacies [8].
In blended learning implementation, digital literacies also to be one concerned by education institution. For this reason, requirements that must be met in the implementation of blended learning are: well trained teachers, teachers with scientific attitude, teachers with a wider outlook and positive approach towards change, complete facilities, students have access to the internet at their private computers, flexibility in the system, fully aware and agreed parents, and formative evaluation and continuous internal assessment [4]. Blended learning provides an effective learning environment that can ease students' access, success, withdrawal, and perception [6].
Blended learning also can be conducted in Creative Media course, one of the elective courses provided, for undergraduate Chemistry Education study program students. In the 2019/2020 academic year this course was provided for undergraduate Chemistry Education student year 2017. Creative Media courses aim to meet the demands of 21 st century education which requires students to be creative, communicative, collaborative, and critical individuals. Some of the topics raised in the creative media course are 1) the characteristics of chemistry competence standard for student in high school and vocational school, 2) the characteristics of creative media, 3) student characteristics and their relationship with creative learning media and chemistry materials.
Hopefully, blended learning in this course could improve the quality of activities, strengthen and improve online lectures. The use of visual and audio-visual media at online lecture meetings will be optimized to facilitate and facilitate the quality of learning and learning experiences received by students so that they can achieve the expected competencies.
Method
This research aims to describe the effectiveness of blended learning in creative media course in project based activity.The research was conducted on odd semester in 2019-2020 academic year. There were 35 students enrolled creative media course that were devided into 11 groups containing 3-4 students. From 15 meetings, there were 4 meetings conducted online, while the rest of them were conducted offline. Online meetings were on third, fourth, sixth, and seventh meetings. Which every online meeting students were given assignment. Online meeting using vilearn Unesa (Universitas Negeri Surabaya). This vilearn used moodle 3.0.
Research instrument and data collecting method were assignment, middle test, final project, and student response questionnaires. For final project, students in groups presented chemistry creative media in internal exhibition. Assignment and final project conducted in group so the score was group score. While middle test was individual paper test that was held on 8 th meeting. The distribution of questionnaires was used to determine student responses to the effectiveness of blended learning in creative media courses. Questionnaire sheets were given to each students after the final test was held. This questionnaire was using google form.
Students were said to have completed their studies if they have succeeded in obtaining a success rate of 70% (assignment, middle test, and final project) and classical completeness is 85%. The results of the student response questionnaire were analyzed using descriptive statistics. Data from student response questionnaires were analyzed by adding up student ratings then calculating the average score. The rating ranges from 1 (not good) to 5 (very good). From the average value, conclusions are drawn which is determined by the score as follows: Not good = 1.00-1.80 Not good = 1.81-2.60 Good enough = 2.61 -3.40 Good = 3.41-4.20 Very good = 4.21-5.00 Chemistry creative media course can be concluded effective to be conducted by using blended learning activity if the response of the students in good and very good criteria.
Results and Discussion
The blended learning design in this study is the implementation of e learning in creative media courses that has been carried out starting from the first lecture meeting in the odd semester 2019-2020. The obstacle that occurred at the first meeting was that many students were unfamiliar with e learning, so some students still needed to be accompanied to enter the e learning menu of the Creative Media course. This phenomenon is in accordance with the results of research by Arkorful 3 that e learning can lead to inefficient use of time, especially for the first experience of using e learning [9], [10].
The learning material tool (PPt and worksheet) that already developed for Creative Media course have been validated by 2 experts and the results are obtained that the course of the development of Creative Media courses is used to support online lectures (e-learning) [11]. At the first meeting, students were asked to visit e learning in the Creative Media course. The menu at the first meeting was a discussion forum related to student understanding of the term creative media. The discussion forum was used by all students (35 students who enroll) in the first meeting. In this activity there were no obstacles experienced by students. The students are getting used to the new e learning site. This can be seen from the upload time of each student's comments, which do not differ too much between comments from one student to another. In online discussion (in this case is discussion forum), all students have a voice and no students can dominate the conversation [12]. If an information is to be retained in memory, and is related to the previous information stored in memory, then the individual (in this case students) who learns must be involved in cognitive rearrangement or elaboration of the material [13].
When students use e-learning in the discussion forum menu, students individually gave their opinion about the meaning of creative media based on the material read in PowerPoint (PPt). PPt is one of the learning media used in lectures which are carried out by e learning. The use of e learning also involves digital-based learning tools such as PPt [9].
For the second meeting, lecturing activities through blended learning by utilizing e learning, especially when uploading group work for worksheet 1. At worksheet 1 students were asked to look at the competency standards for chemistry graduates in high school and vocational school, then the groups were asked to analyze its characteristics. The results of student work are then uploaded via e learning. The menu selected for this activity is an assignment. The task of the supervising lecturer after the assignment upload activity is to assess the assignment through e learning.
In this assignment, students were asked to upload individually in e learning even though this assignment was a group assignment. However, the menu at e learning still could not accommodate this need. So, when entering grades through e learning, the lecturer pays close attention to the names of the groups so that they were not mistaken for giving the same score to students in the same group. The use of vi learn has advantages, that is able to maintain student discipline in doing and submitting assignment/task [14].
Student Learning Outcomes
The implementation of e-learning using e learning was carried out in 4 meetings. The four meetings were the 3 rd meeting, 4 th meeting, 6 th meeting, and 7 th meeting. There were also 4 assignment that must be completed. The first assignment (at 3 rd meeting) deals with standard competency for high school and vocational school student and Media analysis. In this assignment, students were asked in groups to look at the competency standards for chemistry graduates in high school and vocational school, then they write down the characteristics of chemistry competence standard for student in high school and vocational school. Students in groups were also asked to plan appropriate learning media to achieve the standard competency. In the 2 nd assignment students in groups analyzed suitable media to teach material with various student learning styles. From the analysis of the selected media, students were also asked to describe the media design. The 3 rd assignment was to compile a storyboard and flowchart of the media plan to be developed. Students were also asked to write down tools and materials from the media to be developed. For the material to be developed, the learning media was devoted to physical chemistry material. From the planned material tools, students were also asked to determine the costs required in media development. The 4 th assignment was the development of creative media. In this fourth assignment, this was not done in only one meeting, but gradually until the 15 th meeting. The results of this assignment are illustrated by the student scores in Table 1. The 1 st assignment (from worksheet 1) has been done by students in their groups with an assessment range between 84 and 90. The group who scored 84 gave more answers using IT media such as PowerPoint compared to real-life media, this is not in line with expectations in the creative media course. In the creative media course, students were expected to be able to use material tools that are easily found around them to be able to transfer chemistry to students. For groups of students who scored 90, they were more likely to choose media that prioritized hands on activity and minds on activity, not only in the form of visual (PPt). For example, gilding coins with copper using electric cables, batteries, CuSO4 0.1 M solution, copper metal, and zinc coins. This activity train student creativity by constructing their prior knowledge and collaborated with creative idea to take advantage tools and materials available around them. Constructivist teaching approaches create opportunities for learners to extend their own knowledge by engaging them in stimulating learning experiences [8].
The 2 nd assignment (from worksheet 2) regarding student characteristics, linkage of student characteristics, creative learning media, and chemical materials was carried out online through vi learn Unesa at the fourth meeting. For the results of assignments, the assessment range is between 82 and 90. The group that scored 82 actually wrote down the media that would be used to facilitate students with different types of learning, but it was more likely for students who had visual learning types only. Because this group said was only making/developing the atomic form of several atomic theories. The group that scored 90 has provided a more concrete picture of the media that can be used for students with different types of learning (audio, visual, and kinesthetic learning styles).
The third online activity at the 6 th meeting was carried out with the help of worksheet 3 assignments, regarding storyboards. The results of the worksheet 3 assessment were in the assessment range in the score 75 to 90. The group that scores 90 has displayed a design drawing on the storyboard. Whereas for the group that got a score of 75, the results of the assignment did not present a description of the media design that would be developed, only contained a description of the media design.
On assignments from worksheet 4 which were carried out online through vi learn at the 7 th meeting, the assessment range obtained by the student group was between 86 and 90. Score 86 is given if the assignment is done does not list sequentially how to use creative media that will be developed by students. A score of 90 is given to groups that have listed how to use the creative media to be developed. In worksheet 4, which assigns students to produce creative learning media, they could practice their higher-order thinking skills. This is in line with Valente's statement that this training provides a marginal increase in creative problem-solving skills by reducing perceptual and functional fixations and mental blocks [15].
This group assignments were also related to practicing tolerant attitudes and communication between students. As they work on the task, they monitor their performances and judge whether their performances are moving them toward outcome attainment [16].
The activities of doing assignments were carried out not in class but outside the classroom with the hope that students can freely express ideas without being limited by space and time. Students can communicate directly or through communication tools when discussing and consulting with lecturers regarding assignments. Students may be able to choose the social and physical settings they use to work MISEIC 2020 Journal of Physics: Conference Series 1747 (2021) 012037 IOP Publishing doi:10.1088/1742-6596/1747/1/012037 5 on the task given by teacher, so that they structure their environments to make them conducive to learning and seek help when they need it [16].
At 8 th meeting, the students did middle test individually. This test consist of students' knowledge about storyboard and flowchart in chemistry material. At 9 th to 15 th meeting the students in group continued preparing and developing creative media in physical chemistry topic. The topics were: exothermic and endothermic reaction, factors affecting reaction rate, electrolysis, voltaic cell, colligative characteristic of solution, and chemistry equilibrium. Each group selected one topic that differ from other group. Table 2 illustrate the total score of each component (assignment in average, middle test, and final project). Table 2, the lowest score of middle test is 72.00 and the highest score is 86.00, this score shows that students already mastery the knowledge of components in developing learning media for chemistry matter. The lowest score shows the ability to create storyboard still can be optimized during the rest of meetings (9 th to 15 th meeting). At the final project, it is not only assess the storyboard and flowchart, but also assess creativity of group in creating storyboard into real learning media. By doing hands on activity and minds on activity, students can process the information to be stored in long term memory. Information processing theories emphasize the transformation and flow of information through the cognitive system. It is important that information be presented in such a way that students can relate the new information to known information (meaningfulness) and that they understand the uses for the knowledge. Learning should be well structured so that it builds on existing knowledge and can be clearly comprehended by learners. Teachers also should provide advance organizers and cues that learners can use to recall information when needed and that minimize cognitive load [16].
Result of Student Response Questionnaire
Students were asked to fill out a response questionnaire at 15 th meeting. This was done with the assumption that all worksheet and assignment have been done by students. The results of the student response questionnaire presented in Table 3. Table 2, the range of assessment is generally between 3 and 5. However, in terms of "the terms used are appropriate and understandable", the student assessment range is between 4 and 5. The criteria for all aspects are very good. Based on the suggestions and criticisms submitted by students, in general students think that the Creative Media course requires discussion time of more than 2 credits. Even though students think that vi learn can help in this lecture, based on the criticism and suggestions that are provided, it shows that this course is more comfortable if it is carried out with more face-to-face discussions. Some of the criticisms conveyed by the students were related to limiting material for media development, which according to students could limit creativity. The reason for this limitation is so that the materials and concepts that are carried through the creative media do not deviate far and become more focused. Each year a different theme will be given for students programming Creative Media courses.
Conclusion
Based on the research results obtained, namely the learning outcome data which has a minimum value and the results of the student response questionnaire which have very good criteria, it can be concluded that Creative Media lectures are effective to be conducted when combined with E-Learning (blended learning). The effectiveness of blended learning in this course was showed by the student discipline in doing and submitting assignment on time because in vi learn (other e learning platform) the teacher can set and adjust the due date time. If the student submits late, the teacher can reduce the score. Another advantage of using e learning is the student creativity and critical thinking can be optimized because the student can communicate and argue freely in discussion forum.
Suggestions
Technical problems can be overcome by using the internet network from the provider. Students need to use e learning not only for uploading assignments but for simple communication such as discussions in forums so that communication skills are getting better.
|
2021-05-10T00:03:53.599Z
|
2021-02-01T00:00:00.000
|
{
"year": 2021,
"sha1": "d5c1e63241b3e2e296f077e8d2fd70da5eac43cb",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1747/1/012037",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "dd83a14dbb09503b2a4d13705e5be74a2288c1d6",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Engineering",
"Physics"
]
}
|
119449179
|
pes2o/s2orc
|
v3-fos-license
|
Measuring High Energy Neutrino-Nucleon Cross Sections With Future Neutrino Telescopes
Next generation kilometer-scale neutrino telescopes, such as ICECUBE, can test standard model predictions for neutrino-nucleon cross sections at energies well beyond the reach of collider experiments. At energies near a PeV and higher, the Earth becomes opaque to neutrinos. At these energies, the ratio of upgoing and downgoing events can be used to measure the total neutrino-nucleon cross section given the presence of an adequate high energy neutrino flux.
This length is equal to the diameter of the Earth for a cross section of which is predicted (but not yet measured) to occur near E ν ∼ 100 TeV for neutrino-nucleon interactions. The fraction of neutrinos which are absorbed by the Earth is a function of cross section. This can be expressed independently of the flux, as the ratio of downgoing events to upgoing events, at a given energy or in a given energy range. Figure 1 shows this relationship. The simulation used for this calculation considered a detector located at a depth of 1.2 to 2.4 km beneath the Earth's surface. The Earth was taken to have a core of radius 2500 km and density 11, 000 kg/m 3 and a 2 km later of ice/water along the surface. Figure 1 shows that below ∼ 10 −7 mb, the ratio of downgoing to upgoing events changes between 1 and 1.2 fairly slowly and may be difficult to observe. Conversely, above ∼ 10 −4 mb, the ratio grows rapidly and well above the number of events that we may expect to observe, making a measurement difficult due to poor statistics. For this reason, this technique is most well suited for energies within this range of cross sections. Above these energies, ground-level fluorescence cosmic ray detectors, such as EUSO and OWL, may be able to make accurate measurements [5].
To predict how effectively we will be able to measure high energy neutrino cross sections, knowledge of the flux of neutrinos at the relevant energies is needed. A variety of such fluxes have been discussed in the literature. These include, but are not limited to, neutrinos from compact objects such as Gamma-Ray Bursts (GRB) [6] and Active Galactic Nuclei (AGN) [7], cosmogenic neutrinos generated by cosmic rays scattering off of the photon background [8] and top-down scenarios where neutrinos are generated in mechanisms such as the decay of supermassive particles, topological defects or primordial black holes [9]. In my calculations, I considered four cases. First, the Waxman-Bahcall flux for transparent sources of cosmic rays. This is a conservative choice because a more opaque source will yield higher neutrino fluxes. This flux is given by E 2 ν dN ν /dE ν = 10 −8 GeV cm −2 s −1 sr −1 for each of ν e , ν µ andν µ [10]. Secondly, I used the present flux limit for AMANDA-B10 of E 2 ν dN ν /dE ν = 9 × 10 −7 GeV cm −2 s −1 sr −1 for each of ν µ andν µ [4]. Finally, I used a fluxes of E ν dN ν /dE ν = 6.3 × 10 −12 cm −2 s −1 sr −1 and E ν dN ν /dE ν = 5.7 × 10 −10 cm −2 s −1 sr −1 for each of ν µ andν µ for comparison. The last two fluxes are normalized to the same number of events between 1 PeV and 1 EeV as for the Waxman-Bahcall flux and the AMANDA-B10 limit, respectively.
The distributions of upgoing and downgoing events are each fit by Poisson statistics. The ratio of these rates is, therefore, described by a binomial distribution. Using the astrophysics convention of 84.13% confidence upper and lower limits containing a 68.27% confidence interval, error bars can be fit for any pair of values for the number of upgoing and downgoing events [11]. Figures 2 through 5 show the ability of a cubic kilometer neutrino telescope to constrain the total neutrino-nucleon cross section after 1 and 10 years of integrated observation for each of the neutrino fluxes described above. The energy has been divided into bins, each a factor of 10 wide. The quantity being measured is in the total cross section averaged among events in a given bin.
Below ∼ 1 PeV, even for the conservative Waxman-Bahcall flux, the cross section can be measured to a factor of 3 or better with only 1 year of observation. After ten years, the accessible energy range increases to 10 PeV or higher (see figure 2). For the optimal flux of the AMANDA-B10 limit, cross sections can be measured accurately over 100 PeV and to within one order of magnitude up to 10 EeV (see figure 3). Figures 4 and 5 show that for a less sharply falling flux, normalized to the same number of events between 1 PeV and 1 EeV, cross sections for PeV-EeV energies are well-measurable, while cross sections at TeV energies are more challenging. Even for the most energetic colliders planned, these measurements will be impossible. For a discussion on the ability of colliders to study such effects, see Ref. [12].
The systematic uncertainties involved in high energy neutrino astronomy can, presumably, be understood and limited by calibration with the atmospheric neutrino spectrum. The remaining systematic errors will result from a detector's finite angular and energy resolution. ICECUBE is expected to achieve angular resolution below 1 degree. Also, energy resolution for shower events is expected to be at the level of 30% or better. This is significantly more precise than the energy resolution for lepton track events.
In conclusion, next generation neutrino telescopes may be capable of constraining the total neutrino-nucleon cross section by comparing the number of upgoing events to the number of downgoing events. This method is independent of the shape of the neutrino flux. Optimal energies for this measurement are in the range of 100 TeV-100 PeV where the Earth becomes opaque to neutrinos and large enough neutrino fluxes may exist for observation. This energy range is complimentary to lower energy collider experiments and higher energy cosmic ray air shower experiments.
|
2019-04-14T02:34:02.391Z
|
2002-03-25T00:00:00.000
|
{
"year": 2002,
"sha1": "3346efd514f82a32107793c2a143dedfca4c3edd",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0203239",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3346efd514f82a32107793c2a143dedfca4c3edd",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
12237537
|
pes2o/s2orc
|
v3-fos-license
|
Preejaculatory illness syndrome: Two cases of a rare psychosomatic disorder Case Report
to stop the prescribed medication. On the last follow-up, they still take fluoxetine on a regular base with satisfactory sexual life.
This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.
For reprints contact: reprints@medknow.com Case Report death. This would occur just before ejaculation during sexual intercourse resulting in his inability to ejaculate.
His history included reactive depression and anxiety disorder as well as family history of anxiety and panic disorder.
As per patient's history, he is married with a monogamous relationship with his wife with five siblings. The patient did not have such symptoms prior to his psychiatric problems.
Clonazepam 0.5 mg was prescribed empirically for a couple of months with no improvement that was substituted by fluoxetine 20 mg OD and propranolol 10 mg.
After 2 weeks, the patient reported significant improvement. On follow-up, the symptoms disappeared, and the ability to ejaculate was restored. Propranolol was stopped as the patient developed diabetes mellitus to avoid masking of the hypoglycemic symptoms. He has maintained on fluoxetine 20 mg daily that restored the preejaculation loss of muscle tone and treated the associated anxiety.
Case two
A 30-year-old recently married patient reported similar symptoms as case one during sexual intercourse in his first marriage.
He had a history of a recently sustained car accident with serious head injury and brain hemorrhage. After surgery, injury to the optic and olfactory nerves ended with a loss of smell sensation and blindness. The patient has had depression from the associated multiple disabilities and breakdown of his finances that was treated with psychotherapy and selective serotonin reuptake inhibitors (SSRIs) as citalopram.
After he had treated and recovered from depression, he got married. The patient admitted failure to ejaculate since marriage, decided to ask for medical advice and disclosed his problem after 10 times of unsuccessful attempts of sexual intercourse.
Due to the previous diagnosis, the patient was started on a treatment of fluoxetine 20 mg OD. There was a significant improvement after 2 weeks. When the patient stopped the medication for few weeks, he reported the recurrence of the symptoms. Fluoxetine was prescribed for a second time with restoring of the symptoms.
Comment
A midline search did not reveal any reported cases of PEIS. It is a group of psychosomatic symptoms that include episodes of palpitation, sweating, fainting, loss of muscle tone, and sense of impending death. The syndrome occurs during sexual intercourse with subsequent inability to ejaculate.
As first described by William Masters and Virginia Johnson, the human sexual response consists of four discrete phases; excitement, plateau, orgasm, and resolution phase. PEIS can be considered a disorder at the end of the plateau and the beginning of orgasm phases. [4] In men, orgasm is triggered by a subjective sense of ejaculation followed by forceful emission of semen. Orgasm lasts for 3-15 s and is associated with changes in the genital organs that include rhythmic contraction of pelvic floor muscle with a slight clouding of consciousness. Extragenital changes include general cardiovascular (tachycardia and elevated blood pressure) and respiratory changes as well as increase skeletal muscle tone (characteristic spastic contractions of the feet). [5] Temporary loss of muscle tone at a critical point of impending ejaculation has a devastating effect on the psychological equilibrium of males leading to anxiety and depression in addition to primary and secondary symptoms.
As medical conditions that may lead to panic attack type symptoms as anemia, mitral valve prolapse, hyperthyroidism, etc., may give similar symptoms, our work up first was ruled out.
The clinical picture of PEIS is similar to cataplexy, which is a sudden, short-lived loss of muscle tone, and paralysis of voluntary muscles induced by strong emotions. As there are no reported cases of PEIS, we tried SSRI (fluoxetine) as a primary treatment of cataplexy and sleep paralysis.
If we would consider PEIS part of depression or anxiety disorders, we do not have an explanation why all SSRIs -that were prescribed-improved all symptoms of depression except the inability to ejaculate which is the core symptoms of the PEIS syndrome. Fluoxetine is among SSRIs "it is known to work on muscle tone" which was able to improve PEIS symptoms and restored the muscle tone.
Compared to POIS, the etiology seems to be different, because Waldinger and Schweitzer [3] hypothesized that males with POIS develop an immunogenic reactivity to their own semen. This was supported by the positive skin tests with diluted autologous seminal fluid and by successful hyposensitivity therapy in men with POIS.
Our report is lacking the data regarding the situation of patient's female partners. Although we asked our patient, their wives did not attend. It seems to be an embarrassing situation for patients in our community.
The full explanation of this syndrome (PEIS) remains unclear in view of the rarity of the condition and may differ from that of POIS. However, the two cases shared a past history of depression and family histories of panic disorder and anxiety. We aim to alert physicians and urologists to this group of symptoms for future reporting and a possible explanation.
CONCLUSION
PEIS is a rare psychosomatic disorder. Patients may have symptoms of sympathetic overactivity, muscle atonia, and a sensation of impending death. Depression, anxiety disorders, and family histories of psychiatric problems were noted as risk factors. Although this condition is embarrassing, it is equally serious. Both patients were prescribed fluoxetine 20 mg OD, resulting in the successful restoration of their sexual activity.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Cell Nanomechanics Based on Dielectric Elastomer Actuator Device
Highlights
The main components, principle, and technology of dielectric elastomer actuators (DEAs) are reviewed to illustrate that DEAs can be an effective carrier for mechanobiology research. A comparison between DEA-based bioreactors and current commercial devices is provided, together with an outlook on future bio-applications of DEAs.
Introduction
Mechanobiology is an emerging science concerned with the effects of mechanical loadings and physical forces on cell behaviors and diseases [1]. Biological cells and tissues living in the in vivo environment are exposed to several mechanical stimulations, such as stretching and contraction. As reported so far, mechanical loadings can be sensed by cells and then influence cellular behaviors such as migration, proliferation, orientation, and gene expression [2-9]. Additionally, some diseases such as atherosclerosis [10] and cancers [11] have been shown to bear a similar relation to mechanical cues. For example, Kim et al. [12] found that mechanical effects can affect cellular remodeling and the regeneration of tissue, which suggests the possibility of developing cell-based therapies. Besides, Park et al. [13] confirmed that equiaxial and uniaxial strains have different induction effects on the differentiation of mesenchymal stem cells. In a word, people are now trying to achieve a better understanding of cellular mechanisms for the purpose of developing more effective and advanced biomedical technology.
However, studying mechanical stimulus in vitro directly remains difficult because traditional cell culture technology cannot provide such mechanics, so the first problem to solve is to apply mechanical stimulation to cells cultured in vitro so as to mimic the true in vivo environment. As reported so far, several methods have been used to apply mechanical loadings to cells, including hydrostatic pressure and fluid shear [14-16], as well as other interesting devices such as biochips [17], wrinkled skin-on-a-chip [18], pneumatic stretching systems [19], motor-driven systems [20], piezoelectric [21], and optical actuation methods [22,23]. More recently, a custom-built open-source stretch system assembled from laser-cut acrylic plates emerged [24]. In general, this gap between engineering and biology is attractive and even profitable; some companies have entered the market and are selling their products, including the Flexcell system [25,26] from Bio-Equip, the STB-1400 [27] from Strex, etc. Generally, these approaches require complex designs and result in complicated system structures as well as high costs. In contrast, dielectric elastomer actuators (DEAs) are simpler, have the advantages of highly controllable deformation, submillisecond response time, and optical transparency, and can be integrated conveniently with the cell culture environment. Since Pelrine et al. [28] presented their landmark discovery of electrostatically activated elastomeric actuators in 2000, this novel technology has been used in many fields, such as energy harvesters [29], tactile displays [30], soft robotics [31,32], and, as emphasized here, the mechanical stimulation of biological cells, with potential application as biosensors to measure cellular contraction forces.
In this review, we first briefly introduce basic dielectric elastomer actuator technology, including the components and actuation mechanism of DEA devices and the characterization methods. Secondly, we overview the applications of DEA-based devices in the field of cellular mechanical loading, which can be divided into bio-stretching devices and biosensors. Thirdly, comparisons between popular commercial methods and DEA-based devices are made. Lastly, we provide our prospects for DEA applications in future mechanobiology research.
Components of the DEA Devices
DEAs are typically simple in structure and require very different materials from traditional actuators such as electric motors. Briefly, all that is required is a dielectric elastomer membrane (DEM) and compliant electrodes: the pre-stretched DEM is sandwiched between electrodes [32,33] and then fixed by rigid frames. These two main components are the key determinants of the performance of DEA-based devices and combine with various pre-stretch settings to diversify DEA forms.
Dielectric Elastomer Membrane
The DEM belongs to one subcategory of electroactive polymers (EAPs), which can respond to electrical stimulation with significant size or shape change, and it has already emerged as a new actuation material [34]. As one of the most important components of DEAs, the material properties of the DEM directly determine the actuation performance of DEAs. Since the 1990s, researchers have conducted massive experiments to find proper DEM materials, such as silicones, polyurethanes, acrylics, and nitrile rubbers. Among these, silicones and acrylics are the two most commonly used materials. The most widely used acrylic DEMs are 3M VHB 4910 and 3M VHB 4905 [35]. Both are made of a mixture of aliphatic acrylates, which shows high viscosity, flexibility, and tensile resistance. However, VHB-based DEAs show serious viscoelastic nonlinearity that makes precision tracking control challenging [36-41].
Silicone rubber, which has good elastic properties, a fast strain response, and a modulus that remains constant at higher temperatures, is one of the commonly used matrices for the preparation of DEM materials, although the attainable deformation of silicone membranes is lower. Because of its weak viscoelasticity, the response of silicone film is faster and more efficient. Besides, Akbari et al. [42] have presented theoretical guidelines for improving the actuation deformation of silicone-based DEMs by changing the pre-stretch ratios.
Actually, DEA devices for biomedical and bioinspired systems have already been reported [43], such as refreshable braille displays for the blind and bioinspired tunable lenses for the visually impaired. However, as reported by Herbert Shea and colleagues, for cell- and tissue-related applications the DEM materials should satisfy some special requirements [44]: firstly, they should be non-cytotoxic and compatible with standard cell culture protocols such as sterilization and incubation; secondly, they need to be optically transparent for convenient integration with optical microscopes. Beyond that, the selection of DEMs can be flexible, since various designs and fabrications may be chosen. For example, some works used Sylgard 186 (Dow Corning) as the DEM and covered it with Silbione LSR 4305 (BlueStar Silicones) as the biocompatible membrane that contacts the biological samples directly [44,45]. Besides, other PDMS materials have been used as alternatives [46-48]. In our group, ELASTOSIL Film 2030 250/100 from WACKER was used to meet the principles above.
Materials and Techniques for Electrodes
Another indispensable element of DEAs is the compliant electrode; well-designed electrode patterning delivers the charges to the target shape and area and therefore forms the desired deformation. As commonly accepted, the electrode materials should have the following properties: (1) the ability to maintain conductivity during large strains; (2) a stiffness that is negligible compared with that of the DEM; (3) the ability to maintain good stability [49]; (4) preferably, patternability, to allow flexible electrode designs [50]. For applications on cells and tissues, as reported by Samuel Rosset et al. [51], manufacturability, miniaturization, the impact on DEA performance, and compatibility with low-voltage operation need to be taken into consideration. In this section, some widely used electrode materials are introduced.
Carbon-Based Electrodes
Because of their low stiffness and ability to remain conductive at large strain [50-52], carbon-based electrodes are the most popular electrode materials for DEAs; typically, they can be divided into three main categories: carbon powder, carbon grease, and conductive rubber.
Carbon Powder Electrodes The main outstanding merit of powder-based electrodes is their small contribution to the stiffness of the DEM. Applying loose carbon powder directly on the membrane was the standard choice in the early stage. However, the disadvantages of carbon powder are obvious: it is difficult to maintain conductivity at large strain [53,54], and the lifetime is also limited because conductive particles detach from the electrodes [51].
Conductive Rubber Electrodes Similar but not identical to carbon grease, conductive rubbers are produced by directly mixing conductive particles with silicone. As a result, ablation or migration of the conductive particles can be avoided, and the lifetime of the electrodes can be extended. However, the impact on the stiffness of the DEM is not negligible [51].
With fewer requirements on precision, thickness homogeneity, and electrode shape, the electrodes can simply be painted onto the DEM. Nevertheless, for cellular research, DEAs usually demand accurate electrode patterns. Here, we introduce several techniques to precisely fabricate carbon electrodes on the DEM, as shown in Fig. 1.
Clearly, a mask covering the surface of the membrane helps to paint the carbon material into the desired shape. Pelrine et al. [54] presented their work on fabricating loose carbon grease and powder-based electrodes. To improve uniformity, Schlaak et al. [55] proposed the use of spray coating, as presented in Fig. 1a, an efficient manufacturing method that can be used for commercial applications. Similarly, the electrodes can be stamped onto the DEM [56]; in this case, a soft stamp is fabricated into the desired pattern by replication on an etched silicon negative master, as shown in Fig. 1b. Besides, printing techniques can also be used to pattern electrodes, as presented in Fig. 1c, since carbon-based materials can be made into conductive ink [51]. As reported by Krebs, printing methods have been utilized to manufacture flexible devices [57]. More advanced, high-resolution and robust compliant electrodes for silicone-based soft actuators and sensors have been realized through laser ablation technology [58] (Fig. 1d).

Fig. 1 Techniques for creating carbon-based compliant electrodes. a Shadow masking: a shadow mask is used to selectively spray the carbon material onto the target area and is then removed to obtain the final electrodes. b Stamping: a patterned elastomeric stamp picks up the carbon material and stamps it onto the DEM. Redrawn from Ref. [56] with permission. c Printing: carbon-based materials made into conductive ink are patterned by printing. Adapted from Ref. [51] with permission. d Laser ablation: thick PDMS-carbon composite layers are patterned by laser ablation and bonded to the PDMS membrane by oxygen plasma activation. Adapted from Ref. [58] with permission.
Metallic Electrodes
Due to their low resistance, metallic electrodes are also an alternative in the field of DEAs. However, their high Young's modulus and relatively small strain make it difficult to pattern them directly. To overcome this drawback, several fabrication methods have been proposed, such as photolithography, deposition [51], and implantation. Among these methods, DEAs with implanted metallic electrodes present the best performance. Here, we introduce two methods to implant metallic electrodes: filtered cathodic vacuum arc (FCVA) implantation and supersonic cluster beam (SCB) implantation. The principle of FCVA can be depicted simply [59-62]: the plasma generated from the source consists of metal ions, electrons, and undesirable macroparticles; a magnetic filter removes the macroparticles from the plasma, and nanometer-sized clusters can be generated through this technology (Fig. 2a). Similarly, the SCB process is shown in Fig. 2b: the metal is first vaporized, and a pulse of inert gas then quenches the plasma to form neutrally charged clusters. The clusters are then injected into the deposition chamber and form a metal/PDMS nanocomposite layer. Figure 2c, d shows the nanostructure of the metallic electrodes produced via FCVA and SCB, respectively.
Transparent Electrodes
Basically, the conductive materials that meet the requirements of DEA electrode applications are non-transparent. However, several transparent electrodes have been proposed that could potentially be used in optical applications. As reported, Hu et al. [64] have studied the electrical and optical properties of transparent and conductive nanotube thin films. Kovacs et al. [65] found that loose carbon black of extremely small thickness can be partly transparent when applied on an adhesive acrylic membrane. Implanted palladium and gold electrodes have also been reported to show transparency of 35 and 70%, respectively, the value depending on the metal and the implanted dose [62]. Besides, ionic hydrogel is a novel and ordinarily transparent material for DEA electrodes [66,67]. Combining ionic hydrogels with 3D printing, people have successfully fabricated electrically driven soft actuators that achieve a maximum vertical displacement of 9.78 ± 2.52 mm at 5.44 kV [68]. As mentioned above, DEAs for cellular research need to be optically transparent. Therefore, the development of transparent electrodes can accelerate the application of DEAs for cellular use; before that, much work is still needed to develop such novel electrode materials and techniques.
Actuation Mechanism of DEAs
In general, a DEA is a device that converts electrical energy into mechanical deformation [28]. The basic actuation mechanism is presented in Fig. 3: a dielectric elastomer film is sandwiched between patterned electrodes on the top and bottom sides, and high-voltage (HV)-induced compression along the thickness direction causes in-plane expansion. This physical response of dielectric elastomers is linked to the Maxwell stress effect [49]. Since the electrode patterns can vary, the specific strain characteristics of DEAs differ from case to case. Typically, the strain along the thickness direction, S_M [69], and the effective electromechanical pressure, P [70], of the elastomer membrane can be used to describe the strain level.
In the small-strain approximation these read

S_M = -s P = -s ε_0 ε_r E²,    P = ε_0 ε_r E²,

where ε_0 is the permittivity of vacuum, ε_r is the dielectric constant of the elastomer, E is the electric field applied, and s is the elastic compliance.
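As a minimal illustration of these relations (a sketch of ours; all numerical values are hypothetical and not taken from the cited papers):

```python
# Minimal sketch of the Maxwell-stress relations above; the numbers are
# illustrative assumptions, not parameters from the cited works.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage, thickness, eps_r):
    """Effective electromechanical pressure P = eps0 * eps_r * E^2."""
    e_field = voltage / thickness        # E = V / t, in V/m
    return EPS0 * eps_r * e_field ** 2   # Pa

def thickness_strain(voltage, thickness, eps_r, compliance):
    """Small-strain thickness strain S_M = -s * eps0 * eps_r * E^2."""
    return -compliance * maxwell_pressure(voltage, thickness, eps_r)

# Hypothetical example: 3 kV across a 50-um silicone film, eps_r ~ 2.8,
# compliance s ~ 1/Y with Y ~ 1 MPa.
print(maxwell_pressure(3e3, 50e-6, 2.8))        # ~8.9e4 Pa
print(thickness_strain(3e3, 50e-6, 2.8, 1e-6))  # ~ -0.089
```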
Fig. 2 a Schematic of FCVA (adapted from Ref. [59] with permission): a high-voltage (600 V) impulse initiates the main arc from the cathode, the filter traps the macroparticles, and the negatively biased substrate holder accelerates the positive ions through the plasma sheath. b Schematic of the SCB process (adapted from Ref. [63] with permission): the Au nanoparticles generated from the cluster source are accelerated by a carrier gas in a supersonic expansion and then focused by an aerodynamic lens; the cluster beam is injected into the deposition chamber and impacts the surface of a thermo-retractable polystyrene (PS) sheet. c Product of FCVA (adapted from Ref. [47] with permission). d TEM image of the product of SCB (adapted from Ref. [63] with permission).

Generally, for DEA-based devices, the pre-stretch of the DEM is another indispensable design parameter that affects performance. First, pre-stretching prevents buckling of the dielectric elastomer when it is electrically activated. Second, pre-stretching the elastomer improves the performance of DEAs, since it can suppress the electromechanical instability (EMI) of the membrane [28,72-75]. Additionally, equiaxial and uniaxial pre-stretch help to generate equiaxial and uniaxial strain, respectively (a minimal geometric sketch follows this paragraph). It is therefore common to vary the amplitude and ratio of the pre-stretch to generate the desired DEA behavior. Figure 3c, d shows the classical configuration of DEAs for cell mechanical stimulation [71]. A rigid frame is usually necessary to hold the pre-stretch of the DEM. Once the voltage is applied, the area with electrodes expands while the remaining area is compressed. As a result, the expanded area is defined as active and can be used to stretch cells, and the compressed area is called passive and can be adopted to compress cells.
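A minimal sketch of how the pre-stretch settings fix the membrane geometry, under the assumption (ours here, invoked again in the sensor model later) that the elastomer is incompressible:

```python
# Minimal sketch: for an incompressible membrane, the in-plane stretches
# (lx, ly) determine the thickness stretch through lx * ly * lz = 1.
def thickness_stretch(lx, ly):
    return 1.0 / (lx * ly)

# Hypothetical pre-stretch settings:
print(thickness_stretch(1.2, 1.2))  # equiaxial 1.2 x 1.2 -> lz ~ 0.69
print(thickness_stretch(1.5, 1.0))  # uniaxial 1.5 x 1.0  -> lz ~ 0.67
```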
Evaluation of DEAs
It is important to understand the cellular environment when studying cells' responses to mechanical loadings, so precise characterization of DEA devices is indispensable; that is, we need to obtain the strain distribution of the target area in the DEA. Currently, finite element modeling (FEM) and image processing (machine vision and digital image correlation) techniques are widely used to calculate the strain distribution. FEM is a non-contact method commonly used in many fields to calculate the stress and strain distribution of a mechanical or structural system [76]. To complete an FEM analysis, some basic material parameters are required, such as Poisson's ratio and elastic modulus. The model is constrained with certain boundary conditions and divided into a mesh; the strain in every element is calculated, and finally the whole strain distribution is generated. For example, Akbari et al. [47] optimized the geometric configuration of their actuator by using a simplified FEM.
Image processing technology can be helpful as well and seems more popular among people who study cells via DEAs. Similar to FEM, digital image correlation (DIC) is a non-contact method to obtain the strain distribution. A region of interest (ROI) is set on the original image; the algorithm then performs a correlation calculation on the image after deformation to find the points/pixels most correlated with the original ones, from which displacement and strain can be calculated (a minimal sketch of this correlation step follows this paragraph). Recently, Blaber et al. [77] published their open-source 2D digital image correlation software. With this software, the strain distribution of the actuator can be measured by processing pictures in the actuated and rest states. For instance, as shown in Fig. 4, Poulin et al. [78] obtained the strain distribution in the ROI in both compressive and tensile modes through DIC. Analogously, it is easy to complete the characterization when the shapes of the electrodes change [45].
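A minimal sketch of the correlation step behind DIC, using plain zero-normalized cross-correlation with an integer-pixel search; real tools such as the software of Blaber et al. [77] add subpixel refinement and full-field strain computation, and the image arrays and ROI used below are hypothetical:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_roi(ref, cur, top, left, h, w, search=10):
    """Integer-pixel displacement of the ROI (top, left, h, w) from the
    reference frame `ref` to the deformed frame `cur`."""
    tmpl = ref[top:top + h, left:left + w].astype(float)
    best_score, best = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            v, u = top + dv, left + du
            if v < 0 or u < 0 or v + h > cur.shape[0] or u + w > cur.shape[1]:
                continue
            score = zncc(tmpl, cur[v:v + h, u:u + w].astype(float))
            if score > best_score:
                best_score, best = score, (dv, du)
    return best  # (row, col) displacement in pixels
```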
Besides the strain distribution, the basic deformation-voltage response (average strain) can be obtained through image processing as well. As reported by Rosset et al., the strain of an actuator was measured using machine vision, via LabVIEW image processing, to track the four corners of the electrodes [79]. After calibration, the coordinates of these four corners in both the actuated and non-actuated states are recorded, so that the curve of voltage-induced strain can be plotted; a minimal sketch of this corner-based strain estimate follows.
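A minimal sketch of the corner-based strain estimate, assuming four calibrated corner coordinates in the rest and actuated states (the coordinates below are hypothetical, not data from Ref. [79]):

```python
import numpy as np

def engineering_strains(corners_rest, corners_actuated):
    """Average engineering strains of a quadrilateral electrode from its four
    tracked corners, ordered [top-left, top-right, bottom-right, bottom-left]."""
    r = np.asarray(corners_rest, float)
    a = np.asarray(corners_actuated, float)

    def width(c):   # mean of top and bottom edge lengths
        return 0.5 * (np.linalg.norm(c[1] - c[0]) + np.linalg.norm(c[2] - c[3]))

    def height(c):  # mean of left and right edge lengths
        return 0.5 * (np.linalg.norm(c[3] - c[0]) + np.linalg.norm(c[2] - c[1]))

    eps_x = width(a) / width(r) - 1.0
    eps_y = height(a) / height(r) - 1.0
    return eps_x, eps_y

# Hypothetical calibrated pixel coordinates (x, y):
rest = [(100, 100), (300, 100), (300, 250), (100, 250)]
act = [(95, 102), (310, 102), (310, 246), (95, 246)]
print(engineering_strains(rest, act))  # ~ (0.075, -0.04)
```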
DEA-Based Devices for Cellular Research
According to the number of cells to which DEA-based bioreactors may be applied, they can be divided into two categories: one for single cells and the other for small populations of cells, in other words, tissue engineering.
DEA-Based Bio-Stretcher/Reactor for Single Cell
For single-cell mechanical stimulation, technologies like microfluidics [80], atomic force microscopy [81], microelectromechanical systems [82], and optical tweezers [83] have been widely used. DEA-based devices for single-cell mechanical loading rely on the principle that cells deform together with the stretchable substrate to which they adhere. Currently, such devices are at the conceptual design stage. Figure 5 shows the schematic of a DEA designed for stimulating a single cell. Implanted gold-ion electrodes are patterned on both sides of a PDMS film [47]: continuous electrodes are implanted on the top side of the membrane, while the bottom side has narrow implanted electrodes (red lines are ion-implanted electrodes on the bottom of the PDMS membrane, and horizontal lines are trenches with square cross section). Mechanical stretch occurs in the intersection regions of the top and bottom electrodes. Such a design forms numerous units, and Fig. 5c actually shows four units for four cells to be stretched [46]. When the DEA is actuated by a voltage of 3.8 kV, the membrane expands by 56% along the x-axis (Fig. 5d).
DEA-Based Bioreactor for Small Population of Cells
For a small group of living cells, it is more flexible to generate tensile or compressive strains through DEAs, and the corresponding research is shown in Fig. 6. In 2014, Alexandre et al. [71] reported that the stress in the passive region of DEAs can be utilized to compress cells. Then, in 2016, they developed a DEA-based cell stretcher to stimulate lymphatic endothelial cells (LECs) and demonstrated that DEAs can be interfaced with living cells and used to apply mechanical loading [44]. After that, an innovative muscle-like bioreactor (mimicking the small intestine) for the investigation of physiological phenomena was described by Cei et al. [70]. The bioreactor maintained its performance even when incubated with Caco-2 cells for 21 days, until the differentiation of the cells could be observed. In 2018, an actuator that can generate alternating tensile and compressive strains was proposed by Poulin et al. [78]. Afterward, as the newest work, aimed at investigating drastic cases such as the effect of rapid stretch on cardiac tissue, Imboden et al. [85] presented a high-speed mechano-active multielectrode actuator, which can provide stimulus of the mechanoelectrical coupling mechanism. Such progress indicates the diversified future of the DEA-based bioreactor: DEAs can be adapted to different purposes and fabricated into bioreactors with specific functions.
DEAs Designed for the Measurement of Traction Force of Cells
Monitoring the biological indicators of cells, e.g., metabolic analysis, biomarker detection, and cell force and strain, is necessary for investigating cell behavior and understanding some diseases. Metabolic analysis and biomarker detection rely mainly on mass spectrometry and electrochemistry [86-88], while force-sensing technologies rely predominantly on optical methods [89], which hinders scaling up devices for parallelized, real-time measurements. Apart from their function of transferring mechanical loading to living cells, DEAs can, as reported, also serve as sensors to measure the traction force of cells. For example, in 2017, Rosset et al. [90] used DEAs to achieve subcellular-resolution measurement of cell traction forces. A DEA-based sensor system for measuring the contraction force of smooth muscle cells has also been reported [91]; the principle is shown in Fig. 7a. The system contains three main parts: the DEA-based cell culture support (a 24-well unit designed for high-throughput parallel measurement), the read-out electronics, and a computer that displays the measured data. Every culture well works independently to sense the expansion caused by cellular contraction and the corresponding change in device capacitance. Cell contraction changes the geometry, i.e., the diameter of the cell region from d to d′ and the thickness of the DEM from t to t′; these changes in turn change the device capacitance, which can be measured.
In 2016, the same group proposed a further optimization of thin-film sensors for the contractility measurement of muscle cells [92]. In this work, they proposed a model to predict the sensor behavior, as shown in Fig. 7b. They describe the system in cylindrical coordinates (R, θ); assuming material incompressibility, the sensing capacitance takes the form of a parallel-plate integral over the membrane,

C = (2π ε_0 ε_r / t) ∫_0^B R / λ_Z(R)² dR,

where R is the radial position of an arbitrary point in the sensing region before pre-stretch and cell contraction, t is the initial sensing-layer thickness before pre-stretch, and ε_0 and ε_r are the permittivity of free space and the relative permittivity of the sensing layer, respectively. A is the radius of the cell region before pre-stretch, B is the radius of the DEM before pre-stretch, and λ_Z is the stretch in the thickness direction (assuming material incompressibility). A numerical sketch of this integral is given below.
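A minimal numerical sketch of the capacitance integral above; the thickness-stretch profile and the dimensions are hypothetical illustration values of ours, not parameters from Ref. [92]:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(lambda_z, B, t, eps_r, n=2000):
    """C = (2*pi*eps0*eps_r/t) * integral_0^B R / lambda_z(R)^2 dR,
    evaluated with a trapezoidal rule."""
    R = np.linspace(0.0, B, n)
    y = R / lambda_z(R) ** 2
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(R))
    return 2.0 * np.pi * EPS0 * eps_r / t * integral

# Hypothetical geometry: cell region of radius A on a DEM of radius B.
A, B, t, eps_r = 2e-3, 5e-3, 30e-6, 2.8
rest = lambda R: np.full_like(R, 0.80)              # uniform pre-stretch
contracted = lambda R: np.where(R < A, 0.85, 0.80)  # cell contraction perturbs
                                                    # lambda_z in the cell region
dC = capacitance(contracted, B, t, eps_r) - capacitance(rest, B, t, eps_r)
print(f"capacitance change: {dC * 1e12:.2f} pF")    # ~ -1.9 pF
```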
Comparison Between Usual Commercial Bioreactors and the DEA-Based Ones
In this part, we provide a comparison between the widely used commercial devices and DEA-based ones. The traditional mechanical systems can typically be divided into two categories: the motor-driven (STB series from Strex) and the pneumatic (FX series from Flexcell). Generally, almost all mechanical methods of stimulating cells in vitro rely on a membrane (usually PDMS or silicone, which are biocompatible) to deliver the stimulus. For the pneumatic devices, the membrane is placed on a holder, a loading post is fitted to form an air chamber, and a channel allows pumping (Fig. 8a). Once air is pumped out of the chamber, atmospheric pressure squeezes the membrane into the chamber and consequently creates a tensile strain. In contrast, the motor-based system is more direct. The membrane (or chamber) is installed on two holders; one is fixed, while the other is movable and connected to the motor through drive components. When the activation signal is applied, the motor generates the corresponding rotation, and the drive components translate the rotation into traction of the movable holder to stretch the membrane (Fig. 8b).
For the DEA-based bio-stretcher, a uniaxial pre-stretch of the DEM is usually set to produce strain along one desired direction. The biocompatible membrane in which the cells are cultured is first placed on the DEA after electrode patterning, and rigid frames are then used to fix the membranes. Once HV is applied, the compliant electrodes expand along the direction opposite to the pre-stretch, and the cells can be stretched (Fig. 8c). Usually, the volume of the pneumatic and motor-driven devices is larger than that of DEAs because of extra elements such as air chambers and movable holders. Besides, the pneumatic and motor-driven methods require a pump or motor to activate the membrane; in other words, these are indirectly controlled systems, facing the problem that the strain can be limited, since the original activation signal, which needs to be converted, may exceed the performance of the pump or motor. In contrast, DEA-based devices are driven directly by the electric signal, which is another unique advantage. When it comes to specific use, several parameters are important for assessing the devices, including the available frequency and strain. Here, we obtained the information on several well-known commercial motor-driven and pneumatic systems by consulting their manuals on the Web [93,94] and list it in Table 1. The performance of the pneumatic devices depends strongly on the membrane and the vacuum pump, while for motor-driven devices the key element is the motor itself. As mentioned above, DEAs are directly activated via the electric signal, without the conversion from control signal to motor driver or vacuum pump that can restrain performance; i.e., faster response times and higher frequencies may be obtained. Thus, DEA-based devices can theoretically provide more varied loadings than the other two approaches. In addition, due to the compact volume and highly flexible design of DEAs, real-time monitoring of cells via a microscope is feasible and relatively simple. For instance, Poulin et al. [44] presented a DEA-based real-time cellular stimulation and monitoring system (Fig. 9a). The cell-seeded DEA device is placed in a simplified transparent incubator over the microscope objective, and the microscope is programmed to periodically capture pictures of the cells. In addition, for clear observation of cellular internal elements such as the nucleus under mechanical stimulus, staining is also compatible with such a system. They stained human lung carcinoma cells (A549) and recorded the dynamic positions of cellular DNA and mitochondria during uniaxial stretch. The nuclei displacements they measured show a linear relation with the initial nuclei positions, which is evidence that the DEA-induced strain is transferred to the cells [78]; a minimal sketch of this strain estimate follows the figure caption below.

Fig. 9 Real-time monitoring system based on DEA devices. a Schematic of the whole system with the DEA-based cellular stretcher and inverted microscope: the DEA is driven and controlled by the high-voltage source, a mini-incubator provides the standard environment for the living cells (lymphatic endothelial cells, LECs), and the microscope monitors the cells. Adapted from Ref. [44] with permission. b Imaging of A549 lung cells during stretching: the DNA and mitochondria are stained blue and green, respectively. The top picture shows the displacement track of the intracellular content of A549 at ×40 magnification; arrows of different colors and lengths indicate different degrees of displacement. The bottom picture shows the nucleus track of A549, with ε_x = 0.12 measured along the stretch orientation through this method; a line fits the nuclei displacements well. Adapted from Ref. [78] with permission.
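A minimal sketch of that strain estimate, fitting nucleus displacement against initial position (all numbers are hypothetical, not data from Ref. [78]):

```python
import numpy as np

# Hypothetical tracked nuclei: initial x-positions and measured displacements (um).
x0 = np.array([12.0, 25.0, 40.0, 55.0, 71.0])
ux = np.array([1.5, 3.1, 4.7, 6.8, 8.4])

# For a homogeneous transferred strain, u_x = eps_x * x0 + rigid drift.
eps_x, drift = np.polyfit(x0, ux, 1)
print(f"estimated strain eps_x = {eps_x:.3f}, drift = {drift:.2f} um")
```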
In a word, DEAs can be a highly suitable carrier for cellular mechanical loading research. Therefore, we give a brief outlook on DEA applications in the next section and hope to inspire the combination of mechanobiology and DEAs.
Outlook of DEAs in Cellular Mechanical Loading Research
The adjustment of cellular and tissue behavior is strongly associated with changes in the extracellular matrix (ECM) and in protein expression, which are determined by the loading and the cell type. For in vitro cellular stimulus, the cell membrane can be deformed by stretching the cell-adhesion substrate [95,96]. Over the past years, people have conducted interesting research on load-sensitive cells, trying to discover the relations between cellular responses and different loading conditions. Although most of the published results were obtained with pneumatic and motor-driven devices, the comparison above suggests that DEA-based devices share the required properties or even improve on them, so the following applications can serve as references for the development of DEA-based bioreactors.
Among the various cellular responses, reorientation of the cells is intuitively visible, and the relevant research has been carried out for a long time. In 1986, Dartsch et al. [97,98] presented their work on cyclically stretching smooth muscle cells, and the results show uniform reorientation of the uniaxially stretched cells compared with the control group. Subsequent experimental results suggested that cells form weak adhesions on soft substrates [99], which allows reorientation under in vitro mechanical loading. Based on these results, people attempted to model this phenomenon, proposing that the mechanical sensor system of cells is the reason why cells respond to the loadings, and focusing mainly on the state dynamics of cells during reorientation, such as focal adhesions (FAs), stress fibers (SFs), and the actin cytoskeleton. Some representative theories were put forward. For example, a widespread theory proposed that cells realign along the zero (or minimal) strain direction to maintain their original undisturbed state [100]; for principal strains this condition can be expressed as (Eq. 4)

ε_xx cos²θ + ε_yy sin²θ = 0,

where ε_xx and ε_yy refer to the strains along the x and y directions, respectively, and θ is the cell orientation (a minimal numerical sketch follows this paragraph). However, Livne et al. [8] found deviations between the experimentally observed cellular reorientation and these theoretical predictions and proposed a new theory that regards the reorientation as the result of a dissipative process relaxing the passively stored elastic energy; the resulting prediction (Eqs. 5 and 6 of the original work) is valid for r ∈ [1 − 1/b, 1 + 1/(b − 1)], where b is a dimensionless parameter related to the cellular Young's moduli along the polarized reference system (Fig. 10). For verification, they tracked cells under various initial angles and stretch parameters and obtained a good fit between the experimental results and the predictions. More recently, Chagnon-Lessard et al. [101] demonstrated that strain gradients guide the orientation as well. The mechanism was described as a gradient-avoidance response, and the statistical results show great similarity of the cellular arrangement between the high-strain region and the low-strain but high-gradient area.
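A minimal sketch of the zero-strain-direction prediction of Eq. 4 (the strain values below are hypothetical):

```python
import numpy as np

def zero_strain_angle(eps_xx, eps_yy):
    """Orientation (degrees from the x-axis) at which the normal strain
    eps_xx*cos(theta)**2 + eps_yy*sin(theta)**2 vanishes. Requires the two
    strains to have opposite signs (e.g., stretch along x with Poisson
    contraction along y)."""
    if eps_xx * eps_yy >= 0:
        raise ValueError("strains must have opposite signs")
    return np.degrees(np.arctan(np.sqrt(-eps_xx / eps_yy)))

# Hypothetical loading: 10% stretch along x, -4% effective strain along y.
print(zero_strain_angle(0.10, -0.04))  # ~57.7 degrees
```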
Besides, some basic functions of cells, such as endocytosis and exocytosis, are associated with membrane tension. In fact, a decrease in tension promotes endocytosis, while an increase in tension suppresses it [102,103], and work has been conducted to study such modulation of endocytosis through mechanical stimulus. For example, Boulant et al. [104] used a mechanical (motor-driven) stretching device to stretch the cell substrate and verify the assisting role of actin in clathrin-mediated endocytosis. They found that jasplakinolide treatment of the stretched cells causes a dramatic increase in pit lifetimes and in the percentage of arrested pits under 25% stretching, which shows that clathrin-mediated endocytosis must be assisted by actin in order to complete the vesicle dissociation process under high membrane tension. Thottacherry et al. [105] applied a 6% stretch strain to Chinese hamster ovary (CHO) cells and observed a remarkable reduction in fluorescent dextran (F-Dex) endocytosis compared with the static case. However, F-Dex uptake increased significantly at the moment of stretch release (Fig. 11b); they thus proposed that the CLIC/GEEC (CG) pathway, a dynamin-independent pathway, reacts to changes in membrane tension and regulates F-Dex uptake.

Fig. 11 Effect of membrane tension differences on cellular endocytosis. a Jasplakinolide (Jasp) treatment increases the pit lifetime and the percentage of arrested pits of stretched cells; adapted from Ref. [104] with permission. b Stretching regulates F-Dex uptake; adapted from Ref. [105] with permission.
For broader discussion, tumor research and rehabilitation engineering may be appropriate fields [106-108]. Uncontrolled proliferation of a tumor exerts forces on the ECM and nearby tissues, which can usually be classified into shear, compressive, and stretching stresses [106]. Hofmann et al. [109] found that mechanical stretching increases the proliferation of cancer cells, while Helmlinger et al. [110] found that compressive stress inhibits the growth of tumor spheroids. Besides, cells and tissues in human joints undergo complex forces during daily life, which means that rehabilitation research on athletic injury can also be informed by mechanical stimulus. As reported, the state of tendon and meniscus can be associated with mechanical loadings: dynamic compression at 10% strain promotes anabolism of the meniscus, while strain at 20% shifts the state toward catabolism [111], and the responses can be frequency- and time-dependent. More interestingly, cyclic tension strains inhibit inflammation of meniscal cells [107]. In addition, experimental results demonstrate that tendons also respond to mechanical loads: appropriate loads strengthen tendons, while chronic mechanical loading may accelerate tendinopathy [112,113].
In this section, we briefly introduced several potential applications of DEA-based devices for mechanobiology research, including cellular reorientation, endocytosis, etc. Actually, many more concrete scenarios in this field are still waiting to be explored. Similar to motor-driven devices, DEAs sit at the junction of medical (biological) and engineering science; they can be designed into various mechanical-loading bioreactors while maintaining cell and tissue affinity. As a promising tool, we hope that DEAs can contribute to the development of mechanobiology.
Conclusions
Exploring the response of cells is an exciting, evolving, but challenging task that is significant for biomedical engineering. Many illnesses can be directly linked with disordered cell and tissue functions, which means that it makes sense to study cellular responses under various mechanical loadings for a better understanding of some diseases or even cancers. Furthermore, if we can regulate cells and tissues into proper states or functions through mechanical stimulus, some effective and promising treatments can be developed.
In this work, we first provided a brief introduction to DEAs, including their components, actuation principle, evaluation methods, and several applications in cellular mechanical loading. Then, we compared DEA-based bioreactors with the currently widely used custom-built bioreactors, showing their connections and differences, from which some prominent properties of DEAs stand out. At last, we gave a short outlook on DEA technology in future mechanobiology research.
In a word, although corresponding examples are still lacking, employing DEAs as bioreactors and biosensors for cellular applications is opening the door of cellular mechanobiology through a novel method. As a new generation of actuators, DEAs bring some irreplaceable advantages compared with traditionally used peers such as motor-driven and pneumatic devices: they have a simpler structure, faster response, and higher controllability. In addition, DEAs are more flexible to design, can easily be made biocompatible, and can be combined with a microscope to form an experimental system. Among these advantages, the rapid response makes DEA-based devices suitable for simulating some extreme conditions, such as sudden cardiac death, which is very difficult to realize with other bioreactors. Furthermore, continuous advances in materials science and microfabrication make it feasible and promising to study cellular responses to mechanical stimulus through DEA devices, since they can be manufactured at the micro-nanoscale and then designed into high-throughput devices that are meaningful for cellular research. What is more, because of the use of powerful algorithms and image-processing tools, this field can be multidisciplinary and a hot issue in the future, which means a low threshold for people to enter this area of study. Nevertheless, some challenges remain. Firstly, the drive voltage for DEAs is usually very high (several thousand volts), which makes the technique risky and limits broad application; much work is still needed to reduce the required voltage or electric field. As reported, for example, Shea's group has tried to reduce the voltage by decreasing the thickness of the DEM [114]. Secondly, optimization of the basic performance of DEAs is crucial, including larger strain, higher energy density, longer lifetime (number of tolerated cycles, longer shelf time), and better stability; all of these will determine DEAs' further applications in the mechanobiology of cells and tissues and in other possible fields.
On Gelfand models for finite Coxeter groups
A Gelfand model for a finite group $G$ is a complex linear representation of $G$ that contains each of its irreducible representations with multiplicity one. For a finite group $G$ with a faithful representation $V$, one constructs a representation which we call the polynomial model for $G$ associated to $V$. Araujo and others have proved that the polynomial models for certain irreducible Weyl groups associated to their canonical representations are Gelfand models. In this paper, we give an easier and uniform treatment for the study of the polynomial model for a general finite Coxeter group associated to its canonical representation. Our final result is that such a polynomial model for a finite Coxeter group $G$ is a Gelfand model if and only if $G$ has no direct factor of the type $W(D_{2n}), W(E_7)$ or $W(E_8)$.
Introduction
Let G be a finite group. A Gelfand model for G is a complex linear representation of G that contains each of its irreducible representations with multiplicity one. One is interested in finding "natural" Gelfand models for classes of finite groups. Klyachko and others ([Kl-1983], [IRS-1990], [Ba-1991]) gave a construction of Gelfand models for the groups W(A_n), W(B_n) and W(D_{2n+1}). It is obtained by taking a sum of inductions of certain one-dimensional representations of involution centralizers, and is hence called an involution model. It is easy to see that, for a finite group G, the dimension of an involution model for G is equal to the dimension of a Gelfand model for G (i.e., the sum of dimensions of all irreducible representations of G) if and only if every irreducible complex linear representation of G can be realized over R. Therefore, if a group G has a non-real irreducible representation, then an involution model for G is never a Gelfand model. Thus, the study of involution models is rather restricted.
In this paper, we study another approach to constructing a Gelfand model. For a finite group G and a faithful representation V of G, we define the polynomial model for G associated to V, denoted by N(V), to be the space of complex valued polynomial functions on V that are annihilated by all G-invariant differential operators with polynomial coefficients of negative degree (see §2 for more details). Araujo and others ([AA-2001], [Ar-2003], [AB-2005]) proved that the polynomial models associated to the canonical representations of the Weyl groups W(A_n), W(B_n) and W(D_{2n+1}) are Gelfand models.
The purpose of this paper is to study polynomial models for all finite Coxeter groups, irreducible or not, associated to their canonical faithful representations. In Theorem 2.4, we give another description of the polynomial model N (V ) associated to a faithful representation V of a finite group G. Our description is easier to work with and gives much more information than the original description in [AA-2001]. For instance, it is quite clear from Theorem 2.4 that an irreducible representation of G is always contained in any polynomial model. Thus, N (V ) is a Gelfand model for G if and only if the multiplicity of each irreducible representation of G in N (V ) is one.
The question now is to compute the multiplicity of an irreducible representation of G in its polynomial model N (V ). If G is a finite Coxeter group, then this question is related to the study of the fake degree of an irreducible representation of G. In Section 3, we recall some well-known facts about finite Coxeter groups and the fake degrees. Section 4 is devoted to proving the following main result.
Theorem 4.2. Let G be a finite Coxeter group and let V be its canonical faithful representation. The polynomial model N (V ) for G is a Gelfand model if and only if G has no direct factor of the type W (D 2n ), W (E 7 ) or W (E 8 ).
Remark. Every irreducible representation of a finite group G is contained in any polynomial model for G (see Theorem 2.4). Hence a polynomial model for G is a Gelfand model if and only if its dimension is equal to the dimension of a Gelfand model. On the other hand, as discussed above, the dimension of an involution model for a finite Coxeter group G is always equal to the dimension of a Gelfand model for G. Therefore, an involution model is a Gelfand model for a finite Coxeter group G if and only if it contains each of its irreducible representations.
Thus, in the case of finite Coxeter groups, involution models and polynomial models have complementary properties and it would be interesting to study the interplay between these two models. We hope to take up this study in a subsequent paper.
We also believe that our description of the polynomial model in Theorem 2.4 can be used to describe Gelfand models for finite groups which are not Coxeter groups. A trivial example is that the polynomial model for a finite cyclic group G associated to any of its one dimensional faithful representations is a Gelfand model for G.
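To make the last example explicit (the computation is ours, with the notation of §2 below): let G = Z/nZ act on V = C by a primitive n-th root of unity ζ, so that A = C[x] and a generator g sends x^m to ζ^{-m} x^m. The n characters of G thus occur for the first time in degrees 0, 1, ..., n−1, each with multiplicity one. The G-invariant monomial operators are the x^a ∂^b with b ≡ a (mod n); those of negative degree have b ≥ a + n ≥ n and hence annihilate 1, x, ..., x^{n−1}, while ∂^n does not annihilate x^n. Therefore

N(V) = span_C { 1, x, x^2, ..., x^{n−1} },

which contains each irreducible representation of G exactly once.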
Acknowledgements. The first author thanks D.-N. Verma for introducing him to Gelfand models. Thanks are also due to Dipendra Prasad, Ulf Rehmann and Anuradha Garge for their comments at various stages of this work. This work was done when the first author was visiting the Institut de Mathématiques de Jussieu, Paris, and Universität Bielefeld, Germany. He acknowledges the hospitality of the members of both the places as well as a fellowship from "Région Ile-de-France" and the support from "SFB 701: Spektrale Strukturen und Topologische Methoden in der Mathematik" which made these visits possible. The authors also thank the anonymous referee for a careful and speedy report.
The polynomial model
In this section, we define the polynomial model for a finite group G together with a faithful C-representation V . This is a certain subspace of the ring of complex valued polynomial functions on V . In Theorem 2.4, we give another description of this representation space which turns out to be easier and more useful than the one given in [AA-2001].
Let G be a finite group and let V be a faithful C-linear representation of G. Let A denote the ring of complex valued polynomial functions on V . Then G acts on A by (g · P )(v) = P (g −1 · v), for all g ∈ G, P ∈ A and v ∈ V.
Let W denote the Weyl algebra consisting of differential operators on A with polynomial coefficients. Then W admits an action of G determined by the following property: (g · D)(P ) = g · (D(g −1 · P )), for all g ∈ G, D ∈ W and P ∈ A.
By choosing a basis of V, we identify A with C[x_1, ..., x_n] and the Weyl algebra W with C[x_1, ..., x_n, ∂_1, ..., ∂_n], where ∂_i = ∂/∂x_i. Note that the multiplication in W is non-commutative. Any element D ∈ W can be written in a unique way as a finite sum D = Σ c_{α,β} x^α ∂^β, where x^α = x_1^{α_1} ··· x_n^{α_n}, ∂^β = ∂_1^{β_1} ··· ∂_n^{β_n} and c_{α,β} ∈ C. For D = Σ c_{α,β} x^α ∂^β ∈ W, we define the degree of D as

deg(D) = max { |α| − |β| : c_{α,β} ≠ 0 },

where |α| = Σ_i α_i and |β| = Σ_i β_i. Note that the degree is invariant under the G-action. Let Z be the set of G-invariant elements of negative degree in W. Finally, let N(V) ⊆ A denote the space of polynomials annihilated by Z, i.e.,

N(V) = { P ∈ A : D(P) = 0 for all D ∈ Z }.

It is clearly G-stable. We define the G-representation N(V) to be the polynomial model for G associated to V. We now begin our study of the polynomial model N(V) with the following basic observation.
Lemma. The regular representation of G embeds into A.

Proof. Since G is finite and acts faithfully and linearly on V, there exists v ∈ V whose stabilizer in G is trivial. Further, there exists a polynomial P ∈ A such that P(v) ≠ 0 and P(g · v) = 0 for g ≠ 1. We define the required map by sending g to g · P; evaluating these functions at the points h · v, h ∈ G, shows that they are linearly independent.
Let A_m denote the subspace of homogeneous polynomials in A of degree m. Since G acts on A through its linear action on V, each A_m is stable under the action of G. It now follows from the above lemma that each irreducible representation of G is contained in some A_m.
Further, let W_{p,q} denote the subspace of W generated by the elements x^α ∂^β where |α| := Σ_i α_i = p and |β| := Σ_i β_i = q. Again, each W_{p,q} is stable under the G-action. We recall that Hom_C(A_q, A_p) admits an action of G determined by (g · φ)(P) = g · (φ(g^{-1} · P)), where φ ∈ Hom_C(A_q, A_p) and P ∈ A_q. By letting W act on A, we get a G-equivariant linear map Ψ : W → Hom_C(A, A). From this map, we obtain a G-equivariant linear map Ψ_{p,q} : W_{p,q} → Hom_C(A_q, A_p) for each p and q.
We now state and prove the main theorem of this section.
Theorem 2.4. Let V be a faithful representation of a finite group G. Let N(V) denote the polynomial model for G associated to V. For an irreducible representation U of G, let p(U) be the smallest integer such that U is isomorphic to a subspace of A_{p(U)} and let C_U denote the U-isotypical component of A_{p(U)}. Then

N(V) = ⊕_U C_U,

where the sum runs over the irreducible representations U of G.

Proof. If A_q, for q > p(U), has a subspace U′ isomorphic to U, then there exists a non-zero G-equivariant map from A_q to A_{p(U)}. This, by Corollary 2.3, comes from an element D ∈ W^G_{p(U),q} such that D(U′) ≠ 0. Since q > p(U), we have deg(D) < 0 and hence D ∈ Z. This proves that the only homogeneous component of N(V) which has a subspace isomorphic to U is of degree p(U).
Corollary 2.5. With the above notations, the polynomial model N (V ) is a Gelfand model for G if and only if each irreducible representation U of G appears with multiplicity one in its first occurrence in the homogeneous components of A.
We complete this section with an easy observation and its two consequences.
Lemma 2.6. Let G_1 and G_2 be finite groups with faithful representations V_1 and V_2 respectively, and let W = V_1 ⊕ V_2 denote the resulting faithful representation of G_1 × G_2. Then N(W) ≅ N(V_1) ⊗ N(V_2) as representations of G_1 × G_2.

Proof. Let A(W), A(V_1) and A(V_2) denote the rings of complex valued polynomial functions on the vector spaces W, V_1 and V_2 respectively. It is then clear that A(W) ≅ A(V_1) ⊗_C A(V_2) as G_1 × G_2-representations. The lemma now follows from Theorem 2.4.
Finite Coxeter groups and fake degrees
In this section, we recall some basic facts about finite Coxeter groups and their representations. More precisely, we recall the notion of the fake degree of an irreducible representation of a finite Coxeter group. The basic results about Coxeter groups are recalled from [Bo-2002] and those about the fake degree are recalled from [Ca-1985]. Towards the end of the section, we describe the fake degrees of irreducible representations of some finite Coxeter groups.
A Coxeter system is a pair (G, S) where G is a group generated by a finite set S with relations s² = (ss′)^{m(s,s′)} = 1 for s, s′ ∈ S. If (G, S) is a Coxeter system then, by abuse of notation, we call G a Coxeter group. In this paper, we are concerned with finite Coxeter groups. Let G be a finite Coxeter group and let V denote the C-vector space generated by the basis (e_s)_{s∈S}. Then G admits a natural action on V determined by the following property:

s(e_{s′}) = e_{s′} + 2 cos(π/m(s, s′)) e_s.
This is a canonical faithful representation of G associated to the generating set S. In the rest of this paper, we shall not mention the generating set S and say that Φ : G ↪ GL(V) is the canonical faithful representation of the Coxeter group G.
As described in the previous section, G acts on the ring A of complex valued polynomial functions on V . Let A G denote the subalgebra of G-invariant polynomials in A. It is known that A G is a polynomial algebra over C and, moreover, one has that A G = C[f 1 , . . . , f n ] where the polynomials f i can be chosen to be homogeneous. The generating set {f 1 , . . . , f n } is not unique, however, the degrees d i = deg f i are uniquely determined by the group G and its representation V ( [Ca-1985, 2.4.1]).
The quotient ring of A modulo the ideal generated by the non-constant G-invariants, Ā = A/(f_i), admits a natural G-action. We decompose Ā into (G-stable) homogeneous components as Ā = ⊕_{i≥0} Ā_i. It is known that Ā is isomorphic to the regular representation of G as a G-representation ([Ca-1985, 2.4.6]). For an irreducible representation U of G of degree d, let a_1 ≤ ··· ≤ a_d denote the degrees of the homogeneous components of Ā, counted with multiplicities, that contain a subrepresentation isomorphic to U.
The polynomial f_U(t) := Σ_{i=1}^{d} t^{a_i} is called the fake degree of the representation U. If χ denotes the character of the representation U, then the fake degree of U can be computed using the formula ([Ca-1985, §11])

(3.1)   f_U(t) = Π_{i=1}^{n} (1 − t^{d_i}) · (1/|G|) Σ_{g∈G} χ(g) / det_V(1 − tg),

where the d_i are the degrees of the homogeneous generators f_i of A^G, as discussed above.

Remark 3.1. If we write f_U(t) = t^{q(U)} · g_U(t), where g_U(t) is a polynomial with non-zero constant term, then q(U) is equal to p(U), the smallest degree of the homogeneous component of A which has a subrepresentation isomorphic to U. Further, the constant term of g_U(t) is equal to the multiplicity of the representation U in A_{p(U)}.
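As a quick sanity check of (3.1) (this verification is ours): take G = W(A_1) ≅ S_2 = {1, s} acting on V = C by s · v = −v, so that n = 1 and d_1 = 2. For the sign character χ, with χ(1) = 1 and χ(s) = −1, formula (3.1) gives

f_U(t) = (1 − t²) · (1/2) ( 1/(1 − t) − 1/(1 + t) ) = (1 − t²) · t/(1 − t²) = t,

so the sign representation occurs for the first time in degree 1, with multiplicity one: it is spanned by x in A_1.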
The finite Coxeter groups are classified by the corresponding Coxeter graphs ([Bo-2002, IV.1.9]). A finite Coxeter group is said to be irreducible if its Coxeter graph is connected, and a general finite Coxeter group is a (finite) direct product of irreducible ones. The irreducible finite Coxeter groups have been classified and, up to isomorphism, they are of the following types: (1) classical Weyl groups: W(A_n), W(B_{n+1}) and W(D_{n+3}) for n ≥ 1; (2) exceptional Weyl groups: W(G_2), W(F_4), W(E_6), W(E_7) and W(E_8); (3) non-crystallographic groups: H_3, H_4 and the dihedral groups G_2(n) of order 2n for n = 5 and n ≥ 7.
We shall use these notations for the rest of this paper. The fake degrees of irreducible representations have been computed for various finite Coxeter groups. Steinberg calculated them for the groups W(A_n) in [St-1951], whereas Lusztig and others computed them for many of the remaining finite Coxeter groups ([Lu-1977], [BL-1978], [AL-1982]). We now reproduce the formulae for the fake degrees of the irreducible representations of the classical irreducible Weyl groups from [Lu-1977, §2]. At the end, we also compute the fake degrees for the groups H_3 and G_2(n).
Fake degrees for W(A_n). It is known that the irreducible representations of W(A_n) ≅ S_{n+1} are in 1-1 correspondence with partitions of n + 1, α : α_1 ≥ α_2 ≥ ··· ≥ α_m ≥ 0 with |α| = Σ α_i = n + 1. For a partition α, we define λ_i = α_i − i + m. Then the fake degree of the corresponding representation U_α is given by

(3.2)   f_{U_α}(t) = ( Π_{k=1}^{n+1} (t^k − 1) · Π_{1≤i<j≤m} (t^{λ_i} − t^{λ_j}) ) / ( t^{m(m−1)(m−2)/6} · Π_{i=1}^{m} Π_{k=1}^{λ_i} (t^k − 1) ).

Fake degrees for W(B_n). The irreducible representations of W(B_n) are in 1-1 correspondence with ordered pairs (U_α, U_β) of irreducible representations of S_k and S_l where k + l = n (see [Lu-1977, §2.3] for the details of this correspondence). For the partitions α : α_1 ≥ ··· ≥ α_{m′} ≥ 0 and β : β_1 ≥ ··· ≥ β_{m″} ≥ 0, we define λ_i = α_i − i + m′ and μ_j = β_j − j + m″. Then the fake degree (3.3) of the representation U_{α,β}, corresponding to the ordered pair (U_α, U_β), is given by an analogous explicit product formula in the variables t^{λ_i} and t^{μ_j}, for which we refer to [Lu-1977, §2].
Fake degrees for W(D_n). The Weyl group W(D_n) is a subgroup of index 2 in W(B_n). It is known that the restriction Ū_{α,β} := U_{α,β}|_{W(D_n)}, for an irreducible representation U_{α,β} of W(B_n), remains irreducible if α ≠ β and that Ū_{α,β} ≅ Ū_{β,α}. Further, Ū_{α,α} splits into two distinct irreducible representations U′_{α,α} and U″_{α,α}. All irreducible representations of W(D_n) are obtained in this way ([Ca-1985, 11.4.4]). The fake degrees of the irreducible representations of W(D_n) can be written down in terms of those of W(B_n) (see [Lu-1977, §2]).

Lemma 3.2. For an irreducible representation U of a finite Coxeter group G, we decompose the fake degree of U as f_U(t) = t^{p(U)} · g_U(t) such that the constant term of g_U(t) is non-zero (as in Remark 3.1). If G is W(A_n), W(B_n) or W(D_{2n+1}) then the constant term of the polynomial g_U(t) is equal to 1 for all irreducible representations U of G. Further, there exist irreducible representations U of W(D_{2n}) such that the constant term of g_U(t) is 2.
Proof. We use the notations U_α, U_{α,β} and Ū_{α,β}, as above, for the irreducible representations of the groups W(A_n), W(B_n) and W(D_n) respectively.
It is clear from the formulas (3.2) and (3.3) for the fake degrees f_{U_α}(t) and f_{U_{α,β}}(t) discussed above that for any irreducible representation U of W(A_n) or W(B_n), the constant term of g_U(t) is 1.
Fake degrees for H_3. The fake degrees of the irreducible representations of the group H_3 can be easily computed using formula (3.1). The group H_3 is isomorphic to the direct product of the alternating group A_5 with Z/2Z = {1, −1}. The group A_5 has five irreducible representations, which we denote by U_1 (= trivial), V_4, W_5, Y_3 and Z_3, where the subscripts indicate the degrees of the representations. Each irreducible representation of A_5 gives rise to 2 irreducible representations of H_3 depending on whether the action of Z/2Z is trivial or not. Thus, we get 10 irreducible representations of H_3. We denote the irreducible representations in the same way as for A_5 if the action of Z/2Z is trivial, and the ones with the non-trivial action of Z/2Z are denoted by U′_1, V′_4, W′_5, Y′_3 and Z′_3. It is clear that either of Y′_3 and Z′_3 can be taken as a defining representation of H_3; we choose Y′_3. The degrees of H_3 are 2, 6 and 10. The fake degrees of the irreducible representations of H_3 are as follows: U_1 : 1, U′_1 : t^{15}, V_4 : t^4 + t^6 + t^8 + t^{12}, V′_4 : t^3 + t^7 + t^9 + t^{11}, W_5 : t^2 + t^4 + t^6 + t^8 + t^{10}, W′_5 : t^5 + t^7 + t^9 + t^{11} + t^{13}, Y_3 : t^6 + t^{10} + t^{14}, Y′_3 : t^1 + t^5 + t^9, Z_3 : t^8 + t^{10} + t^{12}, Z′_3 : t^3 + t^5 + t^7.
Fake degrees for G_2(n). The degree of an irreducible representation of a dihedral group is at most two. The group G_2(n) is generated by a rotation ρ of order n and a reflection σ, with the relation ρσ = σρ^{−1}. The group G_2(n) has [(n−1)/2] irreducible representations of degree 2. These representations W_j, for 1 ≤ j ≤ [(n−1)/2], can be described by the images of σ and ρ as follows:

σ ↦ \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, ρ ↦ \begin{pmatrix} \cos jθ & \sin jθ \\ -\sin jθ & \cos jθ \end{pmatrix},

where θ = 2π/n. We take W_1 to be the defining representation of the Coxeter group G_2(n). Using formula (3.1) and the fact that the degrees of G_2(n) are 2 and n, one has f_{W_j}(t) = t^j + t^{n−j}.
If n = 2k + 1, then G_2(n) has 2 one-dimensional representations, with fake degrees 1 and t^{2k+1}. If n = 2k, then it has 4 representations of degree one, and their fake degrees are 1, t^k, t^k and t^{2k}.
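The same dimension-weighted check applies to G_2(n): the fake degrees above must sum, with multiplicity equal to the degree of each representation, to [2]_t [n]_t. A hedged sympy sketch covering both parities:

```python
# Sanity check of the G_2(n) fake degrees: the dimension-weighted sum over all
# irreducibles must equal [2]_t [n]_t (the degrees of G_2(n) are 2 and n).
import sympy as sp

t = sp.symbols('t')
qint = lambda k: sum(t**i for i in range(k))

def check(n):
    two_dim = sum(2 * (t**j + t**(n - j)) for j in range(1, (n - 1) // 2 + 1))
    if n % 2 == 1:                       # n = 2k + 1: two linear characters
        one_dim = 1 + t**n
    else:                                # n = 2k: four linear characters
        k = n // 2
        one_dim = 1 + t**k + t**k + t**n
    assert sp.expand(two_dim + one_dim) == sp.expand(qint(2) * qint(n))

for n in range(3, 12):
    check(n)
```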
Polynomial models for finite Coxeter groups
In this section, we prove the main result of this paper (Theorem 4.2). By our description of the polynomial model in Theorem 2.4, it is enough to study the polynomial models for irreducible finite Coxeter groups. Using the same description, we connect the study of polynomial models for a finite Coxeter group G with the study of the fake degrees of its irreducible representations. We then prove the main result using classical results of Steinberg, Lusztig and others, which were recalled at the end of the previous section. Finally, we give an explicit construction of the polynomial model for the dihedral group G_2(n) corresponding to its canonical faithful representation W_1.
Theorem 4.1. Let G be an irreducible finite Coxeter group and let V be its canonical faithful representation. Then the corresponding polynomial model N(V) of G associated to V, constructed as in §2, is a Gelfand model for G if and only if G is not of type W(D_{2n}), W(E_7) or W(E_8).
Proof. Let G be a finite Coxeter group and let V denote the canonical faithful representation of G. We recall, from Corollary 2.5, that the polynomial model N(V) of G is a Gelfand model if and only if the multiplicity of each irreducible representation U of G in A_{p(U)} is one. By Remark 3.1, we further deduce that N(V) is a Gelfand model for G if and only if, for each irreducible representation U of G, the polynomial g_U(t) (associated to the fake degree f_U(t) of U) has constant term one.
The irreducible finite Coxeter groups consist of the Weyl groups of split simple linear algebraic groups, the dihedral groups, the symmetry group of the icosahedron, denoted by H_3, and a group denoted by H_4 (Bourbaki [Bo-2002, 6.4.1]). Beynon and Lusztig, in [BL-1978, §§2, 4, 5], computed the fake degrees of the irreducible representations of all exceptional Weyl groups. These computations, combined with Lemma 3.2, imply that when G is the Weyl group of a simple algebraic group, the polynomial model N(V) for G is a Gelfand model if and only if G is not of type W(D_{2n}), W(E_7) or W(E_8).
Similarly, from the computation of the fake degrees of the irreducible representations of the group H_4 in [AL-1982, §3], it follows that the polynomial model N(V) associated to the canonical faithful representation of H_4 is a Gelfand model.
Finally, from the computations carried out in the previous section, it follows that the polynomial model for the dihedral groups and for the group H_3 is a Gelfand model. This completes the proof of Theorem 4.1.

Proof (of Theorem 4.2). Follows from Theorem 4.1 and Corollary 2.8.
The polynomial model for G_2(n). In the case of the dihedral group G_2(n), one can explicitly describe the polynomial model corresponding to the representation W_1. Under this representation, the images of the elements σ and ρ are as follows:

σ ↦ \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, ρ ↦ \begin{pmatrix} \cos 2π/n & \sin 2π/n \\ -\sin 2π/n & \cos 2π/n \end{pmatrix}.
Proof. Observe that, by formulae (4.1) and (4.2), the operators ∂∂̄ and z^{n−m} ∂̄^m + z̄^{n−m} ∂^m, where m = [n/2] + 1, belong to Z. It follows that N(W_1) is contained in the vector subspace of A, say V, generated by {1, z, …, z^{[n/2]}, z̄, …, z̄^{[n/2]}, z^n − z̄^n}. Further, by Theorem 4.2, N(W_1) contains each irreducible representation of G_2(n), and the sum of the degrees of the irreducible representations of G_2(n) is 2[n/2] + 2 ([Se-1977]). It therefore follows that N(W_1) = V, and that N(W_1) is indeed a Gelfand model for G.
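For small n, the containment N(W_1) ⊆ V used in this proof can be checked mechanically. The following sympy sketch (treating z and z̄ as independent symbols, and assuming the two operators exactly as written above) verifies that every spanning element of V is annihilated by both ∂∂̄ and z^{n−m} ∂̄^m + z̄^{n−m} ∂^m:

```python
# Check that every spanning element of V is annihilated by the two operators
# above, with z and zbar treated as independent symbols and m = [n/2] + 1.
import sympy as sp

z, zb = sp.symbols('z zbar')

def check(n):
    m = n // 2 + 1
    basis = [sp.Integer(1)] \
        + [z**k for k in range(1, n // 2 + 1)] \
        + [zb**k for k in range(1, n // 2 + 1)] \
        + [z**n - zb**n]
    for f in basis:
        assert sp.diff(f, z, zb) == 0                       # d/dz d/dzbar
        op = z**(n - m) * sp.diff(f, zb, m) + zb**(n - m) * sp.diff(f, z, m)
        assert sp.expand(op) == 0

for n in (4, 5, 6, 7):
    check(n)
```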
Alcohol significantly lowers the seizure threshold in mice when co-administered with bupropion hydrochloride
Background Bupropion HCl is a widely used antidepressant that is known to cause seizures in a dose-dependent manner. Many patients taking antidepressants will consume alcohol, even when advised not to. Previous studies have not shown any interactions between bupropion HCl and alcohol. However, there have been no previous studies examining possible changes in seizure threshold induced by a combination of alcohol and bupropion HCl. Methods Experimentally naïve female Swiss albino mice (10 per group) received single doses of bupropion HCl (ranging from 100 mg/kg to 120 mg/kg) or vehicle (0.9% NaCl) by intraperitoneal (IP) injection in a dose volume of 10 ml/kg, with single-dose ethanol (2.5 g/kg) or vehicle given 5 min prior to bupropion dosing. The presence or absence of seizures, the number of seizures, and the onset, duration and intensity of seizures were all recorded for 5 h following the administration of ethanol. Results The results show that administration of IP bupropion HCl alone induced seizures in mice in a dose-dependent manner, with the 120 mg/kg dose having the largest effect. The percentages of convulsing mice were 0%, 20%, 30% and 60% in the 0 (vehicle), 100, 110, and 120 mg/kg dose groups, respectively. Pretreatment with ethanol produced a larger bupropion HCl-induced convulsive effect at all doses (70% each at 100, 110 and 120 mg/kg) and a 10% effect in the ethanol + vehicle only group. The convulsive dose of bupropion HCl required to induce seizures in 50% of mice (CD50) was 116.72 mg/kg for bupropion HCl alone (CI: 107.95, 126.20) and 89.40 mg/kg for ethanol/bupropion HCl (CI: 64.92, 123.10). Conclusion These results show that in mice alcohol lowers the seizure threshold for bupropion-induced seizures. Clinical implications are firstly that there may be an increased risk of seizures in patients consuming alcohol, and secondly that formulations that release bupropion more readily in alcohol may present additional risks to patients.
Introduction
Bupropion HCl is known to cause seizures in a dose-dependent manner, both when given at therapeutic doses and following accidental or intentional overdose [1-7]. It is also known that factors including the excessive use of alcohol and sedatives, a history of head trauma or prior seizure, and substance abuse, to mention a few, are associated with an increased risk of bupropion-induced seizures [7]. In addition, postmarketing surveillance reports have indicated rare cases of adverse neuropsychiatric events or reduced alcohol tolerance in patients drinking alcohol during treatment with bupropion [7]. Despite these latter reports, previous studies of the pharmacokinetic and/or pharmacodynamic interactions between alcohol and bupropion have revealed no significant pharmacodynamic interactions in animals [8], and no pharmacokinetic interactions in healthy human volunteers [9]. Furthermore, there are no studies specifically investigating the interaction between alcohol and bupropion-induced seizures in animals or man. Therefore, the objective of this study was to evaluate the effect of ethanol pretreatment on single-dose bupropion HCl-induced seizures in the Swiss albino mouse model.
Materials and methods
The study protocol and any amendment(s) or procedures involving the care and use of animals were reviewed and approved by an appropriate ethics committee following internationally approved guidelines (Charles River Laboratories Preclinical Services Inc.'s (CRM) Institutional Animal Care and Use Committee; Charles River Laboratories, Wilmington, MA, USA). During the study, the animals were maintained in a facility fully accredited by the Standards Council of Canada (SCC) and the care and use of the animals was conducted in accordance with the guidelines of the Canadian Council on Animal Care (CCAC).
Animals
Experimentally naïve female Swiss Crl: CD1 (ICR) albino mice (Mus musculus; Charles River Canada Inc., St. Constant, Quebec, Canada) of approximately 7 weeks of age, and weighing 17.3 to 28.6 g, were housed individually in stainless steel wire mesh-bottomed cages equipped with an automatic watering valve in an environmentally controlled vivarium (temperature 22 ± 3°C; relative humidity 50 ± 20%) with a 12-h light/dark cycle. All animals were acclimated to their cages and to the light/dark cycle for 3 days before the initiation of treatment. In addition, all animals had free access ad libitum to a standard certified pelleted commercial laboratory diet (PMI Certified Rodent Diet 5002; PMI Nutrition International Inc., St Louis, MO, USA) and tap water, except during designated procedures. Animals were randomly assigned to 8 treatment groups of 10 mice per group, using a computer-generated randomisation scheme, ensuring stratification by body weights. Four groups were pretreated with ethanol followed by treatment with increasing doses of bupropion HCl as follows: group 1, ethanol 2.5 g/kg + 0 mg/kg (vehicle); group 2, ethanol 2.5 g/kg + 100 mg/kg; group 3, ethanol 2.5 g/kg + 110 mg/kg; and group 4, ethanol 2.5 g/kg + 120 mg/kg. The other four groups were only treated with the same increasing doses of bupropion HCl as follows: group 5, 0 mg/kg (vehicle only); group 6, 100 mg/kg; group 7, 110 mg/kg; and group 8, 120 mg/kg. The doses of bupropion HCl of 100 to 120 mg/kg selected for this study are higher than the low dose of 12.5 mg/kg used in a previous study [8] because more recent studies have revealed that bupropion HCl at low doses of 15 to 30 mg/kg does not induce seizures but protects albino mice against seizures induced by maximal electroshock (i.e., is anticonvulsant), and at high doses of 100 to 160 mg/kg is proconvulsant in mice [10]. Animals in poor health or at the extremes of the prespecified body weight range (18 to 30 g) were not assigned to treatment groups, and unassigned animals were released from the study.
Drugs
Bupropion HCl was obtained from Biovail Corporation, Steinbach, Manitoba, Canada, in white powder form. The dose formulations of bupropion HCl were prepared on each day of treatment. The appropriate amount of bupropion HCl was weighed, dissolved in an appropriate amount of 0.9% NaCl, and then vortexed until a solution was obtained. On each day of treatment, the single doses of bupropion HCl were administered by intraperitoneal (IP) injection in a dose volume of 10 ml/kg and dose concentrations of 0, 10, 11, and 12 mg/ml for the 0, 100, 110, and 120 mg/kg doses, respectively. The actual dose administered was based on the most recent body weight of each animal. In the applicable treatment groups (groups 1 to 4), each animal was pretreated with ethanol in a dose volume of 10 ml/kg 5 min prior to bupropion dosing. Ethanol was obtained in liquid form from Les Alcools de Commerce Inc., Montreal, Quebec, Canada. Ethanol 2.5 g/kg was administered at a dose volume of 10 ml/kg and a dose concentration of 0.25 g/ml. The vehicle was 0.9% sodium chloride (NaCl) for injection USP and was obtained from Baxter Healthcare Corporation, Deerfield, IL, USA.
Study procedure
All animals were examined twice daily for mortality and signs of ill health or reaction to treatment, except on the days of arrival and necropsy when they were examined only once. After the acclimation period and randomisation, on the day prior to the initiation of treatment, all animals were weighed and the individual body weights were used for dose volume calculation. Treatment was then initiated and lasted for 4 consecutive days with equal numbers of animals from each group dosed on each day.
On the days of treatment, approximately 5 min prior to bupropion HCl or vehicle dosing, animals in groups 1 to 4 were pretreated with a single dose of ethanol 2.5 g/kg IP in a dose volume of 10 ml/kg. These animals then received the assigned dose of bupropion HCl or vehicle IP. Animals in groups 5 to 8 were not pretreated with ethanol but received their assigned dose of bupropion HCl or vehicle by the IP route. Thereafter, the animals were placed in clear perspex observation boxes containing a foam base for padding and observed for the occurrence of seizures for 5 h, followed by a 5 min assessment at 24 h post dose. The presence or absence of seizures, the number of seizures, the onset, duration and intensity of seizures were all recorded. The intensity of each convulsion was graded using Charles River Laboratories, Inc.'s grading system of mild: head and tail slightly extended and little jerking; moderate: head and tail fully extended and some jerking; or severe: head and tail fully extended and strong jerking. In addition, the presence or absence of ataxic gait, paralysis, and catatonic episodes (without a grading of the intensity or number) were recorded over each 15 min observation period. Any animal that had a single episode of severe seizure lasting longer than 1 min or any animal displaying greater than 40 separate episodes of severe seizures over a 1-h period was sacrificed for humane reasons. At the end of the 5-h observation period, all animals were returned to their home cages, and as deemed necessary, additional bedding, food (on cage floor) and water bottles were provided if an animal was still showing adverse effects from the administration of study drugs.
Assessment of convulsant activity
The primary outcome variable was the percentage of mice that had seizures: the number of animals with seizures (mild, moderate or severe) divided by the total number of animals in each group, multiplied by 100. In addition, the convulsive dose of bupropion HCl required to induce seizures in 50% of mice (CD50) was calculated from the dose-response curves for bupropion HCl treatment alone and for the ethanol/bupropion HCl treatment. The secondary outcome variables were the mean (SD) seizures per mouse in each group and the duration of seizures.
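For illustration, the CD50 for the bupropion-alone arm can be reproduced approximately from the group-level responses reported below with a probit regression on log dose. The study itself used SAS PROC PROBIT with Litchfield-Wilcoxon confidence limits (see the statistical analysis section); the statsmodels call here is only a stand-in, not the original analysis:

```python
# A hedged sketch of the CD50 calculation. Data are the reported
# bupropion-alone responses: 2/10, 3/10 and 6/10 convulsing mice at
# 100, 110 and 120 mg/kg (the 0 mg/kg vehicle group is omitted because
# log dose is undefined at zero).
import numpy as np
import statsmodels.api as sm

dose = np.array([100.0, 110.0, 120.0])
resp = np.array([2, 3, 6])                    # convulsing mice per group
n = np.array([10, 10, 10])

X = sm.add_constant(np.log10(dose))
model = sm.GLM(np.column_stack([resp, n - resp]), X,
               family=sm.families.Binomial(link=sm.families.links.Probit()))
fit = model.fit()

b0, b1 = fit.params
cd50 = 10 ** (-b0 / b1)                       # dose at which the probit is 0 (50%)
print(f"CD50 ~ {cd50:.1f} mg/kg")             # lands near the ~117 mg/kg reported
```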
Data presentation and statistical analysis
Data were summarised and presented in tables by treatment group for the primary outcome variable, the percentage of convulsing mice, and for the two secondary outcome variables, the mean (SD) seizures per mouse in each group and the duration of seizures. The CD50 values were calculated using the PROBIT procedure in SAS (SAS Inc., Cary, NC, USA). The 95% confidence limits for CD50 were calculated according to the method of Litchfield and Wilcoxon [11]. A total of 10 mice per group (40 animals in total) were used to calculate the CD50 for the bupropion alone treatments, and 39 animals for the CD50 for the ethanol/bupropion HCl treatments. The number of seizures per mouse was analysed using analysis of variance (ANOVA) on the rank-transformed values, with presence of ethanol (yes/no), bupropion dose, and the ethanol-by-bupropion dose interaction as fixed effects in the model. p values of ≤ 0.05 were considered statistically significant.
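A minimal sketch of the rank-transformed ANOVA described above, on hypothetical long-format data (the file and column names are ours, not the study's):

```python
# Rank-transformed two-way ANOVA: rank the seizure counts, then fit an
# ordinary least-squares model with ethanol, dose, and their interaction.
import pandas as pd
from scipy.stats import rankdata
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("seizures.csv")              # columns: seizures, ethanol, dose
df["rank_seizures"] = rankdata(df["seizures"])

model = smf.ols("rank_seizures ~ C(ethanol) * C(dose)", data=df).fit()
print(anova_lm(model, typ=2))                 # ethanol, dose, interaction effects
```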
Results
In all groups except the group treated with vehicle only (group 5), a convulsive effect was observed following the administration of bupropion HCl and/or ethanol. The onset of convulsion was about 9 min following the administration of single doses of bupropion HCl; however, this was highly variable between animals in the same group and across the dose levels for the bupropion HCl alone and ethanol/bupropion HCl treatments. The seizures observed following bupropion HCl alone treatment were only of mild and moderate intensity (Table 1). Following ethanol pretreatment, there was an overall increase in the intensity of the bupropion HCl-induced seizures at all doses. In the 100 mg/kg dose group (group 2), there were marked increases in the numbers of mild, moderate and severe seizures. In the 110 and 120 mg/kg dose groups, there was a redistribution of the intensity of the seizures, resulting in reductions in the mild seizures but a fivefold and twofold increase, respectively, in the moderate seizures (Table 1).
There were no deaths in the study. One animal treated with ethanol/bupropion HCl 110 mg/kg had excessive convulsions and was therefore euthanised for humane reasons. A variety of clinical signs were observed in the mice following the administration of bupropion HCl, some of which include paralysis, ataxic gait, catatonia, increased respiratory rate, twitching, tremors, increased activity, decreased activity, partially closed eyes, etc. Clinical signs were not dose dependent and pretreatment with ethanol had no effect on the signs observed.
Percentage of convulsing mice
Administration of single doses of IP bupropion HCl alone induced seizures in mice in a dose-dependent manner, with the 120 mg/kg dose showing the largest effect. The percentages of convulsing mice were 0%, 20%, 30% and 60% in the 0 (vehicle only = 0.9% NaCl), 100, 110, and 120 mg/kg dose groups, respectively (Table 2 and Figure 1). Pretreatment with ethanol produced a larger bupropion HCl-induced convulsive effect at all doses, including in the ethanol + vehicle only group. There was a marked increase in the percentage of convulsing mice (70%) at the ethanol/bupropion HCl 100 mg/kg dose compared to bupropion HCl alone treatment, which was maintained at the ethanol/bupropion HCl 110 and 120 mg/kg doses, resulting in a flat dose-response curve (Table 2 and Figure 1). Ethanol/vehicle (group 1) treatment induced a 10% incidence of seizures.
Mean convulsions per mouse
The analysis of variance results showed a significant overall effect of ethanol pretreatment and of bupropion dose on the number of bupropion HCl-induced seizures, and a borderline significant overall ethanol-by-bupropion interaction effect at the p ≤ 0.10 level (Table 3). Single-dose bupropion HCl alone treatment induced a dose-dependent increase in the mean (SD) seizures per mouse, from 0 in the vehicle only-treated group (bupropion HCl 0 mg/kg) to 2.20 (4.49) seizures per mouse in the 110 mg/kg dose group, which was maintained in the 120 mg/kg dose group (mean (SD) seizures per mouse = 2.10 (1.97)). Pretreatment with ethanol markedly and significantly increased the mean (SD) seizures per mouse compared to bupropion HCl alone treatment only in the 100 mg/kg dose group (ethanol/bupropion HCl = 10.90 (17.28); bupropion HCl alone = 0.20 (0.42); p = 0.0019). There were no statistically significant differences between the mean (SD) seizures per mouse for the ethanol/bupropion HCl versus bupropion alone treatments in the 0, 110 and 120 mg/kg dose groups (Table 3).
Duration of convulsions
Administration of single doses of bupropion HCl alone induced only short and medium duration seizures. The number of short seizures increased with dose to a maximum of 22 at the 110 mg/kg dose, with a slight decrease to 18 at the 120 mg/kg dose (Table 4). In contrast, pretreatment with ethanol increased the total numbers of bupropion HCl-induced short and medium seizures, and also caused long seizures. In addition, the numbers of short, medium and long seizures were markedly highest at the 100 mg/kg dose, followed by a marked reduction at the 110 mg/kg dose and, for the medium and long seizures only, a further reduction at the 120 mg/kg dose (Table 4).
Discussion
The pharmacokinetic and pharmacodynamic interactions of ethanol with antidepressant drugs are well known [12-17]. Interactions between ethanol and psychotropic drugs can be additive, synergistic (potentiation) or antagonistic [15]. Even though there are published reports of animal [8] and human [9,18] studies investigating the pharmacokinetic and/or pharmacodynamic interactions between alcohol and bupropion, there are no published studies precisely evaluating the effects of alcohol on the convulsive liability of bupropion. This study was therefore designed to investigate the effect of ethanol pretreatment on single-dose bupropion HCl-induced seizures in Swiss albino mice. The results for the primary outcome variable showed that bupropion HCl alone treatment in the dosage range of 0 to 120 mg/kg was associated with a dose-dependent increase in the percentage of mice with bupropion HCl-induced seizures. This finding is consistent with previous reports indicating that bupropion induces seizures in a dose-dependent manner in animals [10,19] and humans [2,3,7]. Pretreatment with ethanol resulted in a markedly increased percentage of mice with bupropion HCl-induced seizures at the 100 mg/kg dose, which was maintained at the 110 and 120 mg/kg doses. The latter results correspond to 3.5-, 2.3- and 1.2-fold increases in the percentage of convulsing mice at the 100, 110 and 120 mg/kg doses, respectively, following ethanol pretreatment. In addition, ethanol pretreatment resulted in a flat dose-response within the dosage range of 100 to 120 mg/kg studied. The CD50 of 116.72 (CI: 107.95, 126.20) mg/kg for bupropion HCl alone treatment, a well-known index of convulsive liability, is similar to the value of 119.7 (CI: 104.1, 137.6) mg/kg reported previously for bupropion HCl in Swiss mice [10], and confirms the validity of this animal model. Pretreatment with ethanol resulted in a 23% reduction in the CD50 value for bupropion HCl-induced seizures.
The results for the secondary outcome variables were generally consistent with those for the primary outcome variable. Bupropion HCl alone treatment induced a dose-dependent increase in the mean seizures per mouse up to the 110 mg/kg dose, which was maintained at the 120 mg/kg dose. Ethanol pretreatment resulted in a marked and statistically significant 54-fold increase in the bupropion HCl-induced mean seizures per mouse, but only at the 100 mg/kg dose. There were no significant differences in bupropion HCl-induced mean seizures per mouse at the 110 and 120 mg/kg doses following ethanol pretreatment. With respect to the duration of seizures, bupropion HCl alone treatment induced only short and medium duration seizures, which, when combined, were dose dependent up to the 110 mg/kg dose. Ethanol pretreatment increased the duration of the seizures overall, resulting in more episodes of short, medium, and long duration bupropion HCl-induced seizures, particularly in the 100 mg/kg dose group.
The results of this study conflict with the results of previous studies that reported no pharmacodynamic interactions between alcohol and bupropion in mice [8], and no pharmacokinetic interactions in normal healthy volunteers [9]. The discrepancy may arise because those studies of the pharmacokinetic and pharmacodynamic interactions between alcohol and bupropion in normal healthy volunteers [9,18] used a low dose of bupropion (100 mg, approximately 1.5 mg/kg) that is unlikely to be associated with the occurrence of seizures, since bupropion-induced seizures are dose dependent. Similarly, a previous study [8] investigating the interactive effect of combined treatment with alcohol and bupropion in adult albino mice utilised a low dose of bupropion (12.5 mg/kg IP), which is much lower than the convulsive doses of 100 to 160 mg/kg IP, with a CD50 of 119.7 (CI: 104.1, 137.6) mg/kg and a CD97 of 156.7 mg/kg, that were subsequently reported for bupropion in mice by other investigators [10]. In addition, lower doses of bupropion (15 to 30 and 5 to 10 mg/kg, respectively), which did not induce seizures, have been reported to protect against seizures evoked by maximal electroshock [10] and nicotine [20] in mice. However, one group has reported that the combination of bupropion with alcohol abolished the impairment in auditory vigilance and the mental slowness observed following the administration of alcohol alone in normal healthy volunteers (a pharmacodynamic interaction), even though they used a low dose of bupropion (100 mg) and found no pharmacokinetic interaction [18].
The mechanism of bupropion HCl-induced seizures is unknown [21,22]. Similarly, the mechanism for the synergistic interaction reported here between ethanol and bupropion HCl is also unknown. This interaction is unlikely to be solely due to pharmacokinetic reasons, since a previous crossover study that investigated the interactions between alcohol and bupropion found no such interactions [9]. That study, also in normal healthy human volunteers, examined the effect of administration of oral bupropion HCl 100 mg followed by the administration of ethanol, and found no changes in the pharmacokinetics of bupropion, and vice versa [9].
The observed interaction between ethanol and bupropion reported in the present study has potential clinical implications. It has been recognised that the seizure risk of bupropion is increased in subjects undergoing abrupt withdrawal from alcohol [3,4]; hence, bupropion administration is contraindicated in such patients [7]. However, the more recent, although rare, postmarketing reports of adverse neuropsychiatric events or reduced alcohol tolerance in patients who drink alcohol during treatment with bupropion [7] suggest that there is an interaction between alcohol and bupropion following coadministration, consistent with the findings of this study. Consequently, patients should be cautioned not to consume alcohol with bupropion. Nonetheless, there is good evidence that many patients on bupropion, as well as on other antidepressants, continue to use alcohol [23].
In conclusion, the results of this study demonstrate that ethanol pretreatment followed by single-dose IP bupropion HCl resulted in an increase in the number and percentage of convulsing mice, the mean seizures per mouse, and the intensity and duration of the seizures. Following ethanol pretreatment, the CD50 for bupropion HCl treatment was reduced from 116.7 to 89.0 mg/kg, representing a 23% reduction. The dose-related increase in the percentage of convulsing mice and in the mean seizures per mouse is consistent with previous reports that bupropion-induced seizures are dose dependent in animals and humans. The pharmacodynamic interaction observed in this study between ethanol and bupropion-induced seizures is novel, and its mechanism is unknown. However, it has potential clinical implications for the prescribing of bupropion. It also implies that caution should be used when bupropion is prescribed to patients who are either using alcohol or at high risk of doing so.
Here, There, and Everywhere: Applying Vignettes to Investigate Appraisals of Job Demands
The job characteristics literature has revealed that job demands can be differentiated into hindrance and challenge demands. However, there has been little consensus on this categorization. Additionally, studies have revealed that job demands can be perceived as hindering and challenging at the same time. The present study aims to bring nuance to this topic by investigating two job demands (i.e., time pressure and emotionally demanding situations) and to what degree they are appraised as challenging and hindering for two occupational groups (i.e., nurses and real estate agents). This study also investigates the impact of emotional dispositions on demand appraisals. A convenience sample (N = 851 Norwegian students) read vignettes and reported their appraisals for six different job situations. A factor analysis revealed that our measures of demand appraisals differed from those reported in previous studies. We therefore labeled the two kinds of appraisals as hindrance-like and challenge-like, since they overlap with, without being identical to, the previously reported labels of hindrance and challenge, respectively. Furthermore, we found that job demands were appraised as hindrance-like and challenge-like at the same time, but to different degrees. Job demands for core tasks were typically appraised as more challenge-like than hindrance-like. Job demands for non-core tasks were typically appraised as more hindrance-like than challenge-like. Positive trait emotions predicted challenge-like appraisals. By documenting how imagined job demands appear as hindrances and challenges, our study supports previous studies showing that challenge-like demands may play a role in the motivational process in the job demands-resources model. Limitations to vignette studies are discussed.
Introduction
The job demands-resources (JD-R) model [1,2] proposes that working conditions initiate two distinct processes that lead to well-being and ill-being at work. Specifically, job resources start a motivational process that leads to engagement and positive outcomes, whereas job demands start a health impairment process that leads to burnout [3], workaholism [4,5], and negative outcomes. Thus, job demands are positioned as predictors in the health impairment process but have no role in the motivational process. However, it has been argued that job demands can also be motivating. For example, LePine, Podsakoff, and LePine [6], as well as Podsakoff, LePine, and LePine [6], made a distinction between hindrance and challenge demands, in which hindrance demands have a negative impact and challenge demands have a positive impact on employee well-being. In their paper summarizing the development of the JD-R theory and addressing issues that need to be solved, Bakker and Demerouti [2] specifically raise the concern about the two types of job demands (i.e., with a positive or negative impact on well-being) and suggest that new research may try to uncover the conditions under which job demands act as challenges versus hindrances.
The JD-R model states that several job resources and job demands should be grouped into general higher-order factors of resources or demands. However, some studies have suggested that job demands may not always belong to one overarching construct. For example, Luchman and González-Morales [7] found that a model in which several job demands were included as individual factors fit the data better. It is possible that this finding is due to the notion that demands can be differentiated into demands that have a positive or negative impact (i.e., challenge or hindrance) on employee well-being. Additionally, the confirmatory factor analyses reported in the study of Van den Broeck and De Cuyper [8] supported the differentiation between job hindrances and job challenges, and structural equation modeling revealed that job challenges were positively associated with vigor and unrelated to exhaustion, while job hindrances were positively related to exhaustion and negatively related to vigor. Furthermore, Searle and Auton [9] found that even when the effects of the demands themselves were accounted for, it was the individual differences in the appraisal of the demands that consistently explained unique variance in the outcomes (i.e., affective states). Webster and Beehr [10] revealed that although a demand was primarily perceived as either challenging or hindering, it could also be perceived as both challenging and hindering at the same time. Taken together, this suggests that more research is necessary to clarify the role and denomination of job demands by investigating them in various jobs and work situations, and by assessing how individual characteristics influence appraisals of job demands.
Differentiation of Job Demands
LePine, Podsakoff, and LePine [6], as well as Podsakoff, LePine, and LePine [6] introduced the differentiation of job stressors into challenge stressors and hindrance stressors. Hindrance job stressors have been defined as "job demands or work circumstances that involve excessive or undesirable constraints that interfere with or inhibit an individual's ability to achieve valued goals" ( [11], p. 67). This description corresponds with the definition of job demands described in the JD-R model, which describe it as "physical, psychological, social, or organizational aspects of the job that require sustained physical and/or psychological (i.e., cognitive or emotional) effort" ( [12], p. 296). Examples of hindrance job demands reported in previous studies include role ambiguity (e.g., [13,14]) and illegitimate work tasks (e.g., [15]). These job stressors are considered negative. Conversely, stressors that have the potential to promote personal growth as well as goal achievement are defined as challenge stressors [6]. Examples of challenge stressors reported in the literature include high workload levels (e.g., [16]) and responsibility (e.g., [17]). These demands, although they require effort, may lead to beneficial individual and organizational outcomes, and are therefore considered stressors with positive potential.
It is not yet known whether the differentiation between job demands as challenging and hindering is valid due to the lack of evidence regarding this issue. Moreover, it is still unclear whether such a differentiation between job demands is valid for every occupation [18]. For example, some studies have classified role conflict as a hindrance demand [19], while others have considered it a challenge demand [20]. Similarly, emotional demands have been considered a hindrance demand by some [21,22] and a challenge demand by others [23]. Hence, regarding job demands, the observations, opinions, categorization, and conclusions are not always the same in the scientific literature, and the same job demands are not consistently classified as either hindrances or challenges. Furthermore, it has been shown that employees will not always experience job demands as either hindrances or challenges; indeed, several researchers have argued that the categorization of job demands into being either a hindrance or challenge demand is too simplistic (e.g., [24,25]). A more fruitful approach to the hindrance-challenge framework of job demands may be to investigate the degree to which employees experience job demands as hindering and challenging at the same time. For example, Bakker and Sanz-Vergel [18] showed that nurses perceived work pressure more as a hindrance than as a challenge demand. This approach in which employees report the degree to which they experience each job demand as hindering and challenging may provide more nuanced insight into the differentiation and role of job demands.
Appraisal of Job Demands
Whether the same job demand is appraised similarly by different individuals has seldom been tested [25], but some studies have reported that individual subjective appraisal accounts for the differences in whether a job demand is classified as hindering or challenging (e.g., [9,25]). Appraisal, in the context of the present study, can be defined as an individual's perception and interpretation of specific job characteristics, and of whether these job characteristics hold potential for personal growth, gain, and goal achievement (i.e., are challenging) or are constraints that are hindering [26]. Research by Lazarus [27], Lazarus [28], and Bagozzi [29] has contributed to the literature on occupational stress models and appraisals, arguing that employees make continuous appraisals of their work environments. Based on these appraisals, they form mental representations of the behavior they may apply to cope. Specifically, the Lazarus [27] transactional model of stress and coping (TMSC) suggests that individuals first make primary appraisals, that is, evaluate the significance and importance of a stressful episode, followed by secondary appraisals, that is, an evaluation of the available options and resources to handle stressful events [30]. The TMSC suggests that not all stressful episodes will lead to negative stress reactions; this will happen only when the stressors are appraised as exceeding the available resources, in which case they will impact well-being negatively. Thus, individuals will appraise stressors or stress episodes differently, and the same stressor may therefore be appraised negatively by one person and not by another [28]. Hence, according to the TMSC, appraisals can function as mediators between job demands, well-being, and work outcomes. Research has also revealed that appraisals of job demands may function as moderators of the relationship between job demands and work outcomes [25,26]. In line with the person-context interaction theory [31], which states that individual functioning is a result of the interaction between the individual and the environment, Li and Taris [25] as well as Li and Peeters [26] argue that individuals may appraise a stressor (i.e., a job demand) as potentially impacting them positively (i.e., challenging), negatively (i.e., hindering), or both. This may, in turn, moderate the relationship between job demands, well-being, and other work outcomes. Although a body of research has revealed a link between job demands, employee strain, and well-being (e.g., [32-34]), this relationship is not fully understood. Job demands may be perceived and experienced in several ways. Investigations of appraisals of job demands are needed to gain knowledge and to validate the hindrance-challenge framework of job demands.
Nature of Work Belonging to an Occupational Group
One of the reasons why a given job demand has been classified as hindering or challenging may be the nature of the work to which it is related. For example, Bakker and Sanz-Vergel [18] found that nurses perceived workload and time pressure as hindering rather than challenging. These hindering demands were experienced as inhibitory and destructive for both personal growth and achieving work goals. Specifically, high levels of time pressure reduced the quality of patient care. In the same study, the authors describe how a different occupational group, namely journalists, appraised time pressure as a challenging job demand. The nature of many journalists' jobs is to work under a strict time regime. Time pressure may be a job demand that does not hinder journalists from achieving their work goals; rather, it is a challenge demand they often and successfully overcome, which leads to goal achievement.
Emotional demands at work have been perceived as positive indicators for better performance by some occupational groups. Bakker and Sanz-Vergel [18] reported that emotional demands were experienced as more challenging than hindering among nurses. Nurses experienced both interactions with patients and the need to confront emotional demands as part of their everyday work lives and as part of their job. Conversely, it might be that other occupational groups find emotional demands hindering and not a natural part of their jobs. For instance, it might be that real estate agents can experience emotional demands as something outside their core job tasks and as something that will hinder them from achieving their work goals.
Although there are individual differences among employees in occupational groups, there are some characteristics belonging to the job performed by certain occupational groups that may influence the appraisal of job demands. Hence, the role of a given job demand may vary by occupation and may therefore be appraised differently (i.e., as hindering or challenging) not only individually but also based on the nature of the work belonging to that occupational group.
Positive Trait Emotions
In addition to the context in which job demands occur (i.e., occupation), individual traits and differences may also impact appraisals of job demands, of which positive trait emotions may play a role. Over the last decade, positive emotions related to work have received increased attention in the literature [35]. For example, evidence has revealed that positive emotions are associated with beneficial job attitudes [36], productivity [37], creativity [38], job crafting [39], organizational citizen behavior, and cooperation with others [40]. However, the majority of the research on the relationship between positive emotions and work outcomes has focused on general positive emotions and thus suggested that all positive emotions are equally related to other work variables [41]. By applying the functional wellbeing approach (FWA, [42,43]), we aim to bring nuance to this topic. According to the FWA, two distinct categories of positive emotions are particularly important for well-being: hedonic feelings, such as pleasure and happiness, and eudaimonic feelings, such as interest and immersion. Hedonic feelings are important because they help sustain homeostatic stability, whereas the major function of eudaimonic feelings is to facilitate change. Hedonic feelings are typically experienced when goals are achieved or needs are fulfilled, i.e., when an equilibrium has been reestablished. Hence, hedonic feelings signal to our minds that our current actions appear to succeed in maintaining our well-being. Relatedly, hedonic feelings also facilitate a kind of mental flexibility, including broadened attention, thus preparing the organism for a change of goals and plans. In contrast, eudaimonic feelings narrow attention to help us stay focused in the process of reaching a difficult goal. Eudaimonic feelings commit us to put in extra effort and to value the striving toward goals, even when the going is rough. Thus, eudaimonic positivity feels different and functions differently than hedonic positivity.
Previous research on emotions in the workplace corroborated the association between hedonic feelings and goal achievement, on the one hand, and that between eudaimonic feelings and the process of overcoming a challenging work task, on the other. For example, Stone and Schwartz [44] found that happiness increased when the workday ended, while building competence had the highest levels during midmorning when demands were dealt with. Similarly, Straume and Vittersø [45] found that hedonic feelings decreased during challenging work tasks, whereas eudaimonic feelings increased. Additionally, research on goal pursuit and goal achievement has revealed similar findings. Thorsteinsen and Vittersø [46] reported in their longitudinal study that eudaimonic wellbeing initiated and sustained goal pursuit processes, while hedonic well-being was more related to goal achievement.
FWA encompasses both momentary state feelings and more stable and trait-like feelings. According to the FWA, high levels of hedonic feelings predict well-functioning stability, while high levels of eudaimonic feelings predict well-functioning change processes. The orientation to life in hedonic feelings is typically the tendency to evaluate the environment and oneself as good rather than bad, while for eudaimonic feelings, it is the proneness to develop and attain personal growth. Thus, when facing demanding job situations, it is likely that individuals with higher levels of hedonic and eudaimonic feelings will more often evaluate those demands positively and possibly overcome (i.e., good rather than bad) as well as see them as opportunities for utilizing and developing abilities to experience personal growth. This is also in line with [47], in which the authors reported that positive emotions at work did not decrease hindering demands but increased challenging demands.
Applying Vignettes
Vignettes are usually short stories portraying a made-up person and/or a made-up scenario, and vignette studies can be very powerful. For example, Kahneman and Tversky's contributions to economics and psychology were to a large extent due to their observations of the responses people provided to small vignettes [48]. By identifying salient characteristics in a specific context, a vignette approach makes it possible to elicit critical patterns in human thinking and emotions [49,50]. One of the advantages of the methodology for the current study is that it makes standardization of a demanding job situation possible, thus allowing all participants to respond to the same stimuli [51]. The imaginary nature of vignettes poses a limitation to the design; hence, a probable association between the imagined and the real-life response must be established to generalize the results.
In the present study, we have chosen to investigate appraisals of time pressure and emotional demands for two occupational groups, namely nurses and real estate agents. There are several reasons for choosing these occupational groups. Firstly, we made an effort to choose occupational groups that differ regarding core work tasks. In line with this, we aimed to investigate job demands (i.e., emotional demands and time pressure) that could have different positive and negative denominations (i.e., hindering or challenging) depending on how the demands relate to the core tasks of those occupational groups. For example, one of the core tasks for nurses is to care for and comfort their patients (i.e., emotional demands), while this is not the case for real estate agents. On the other hand, time pressure is related to core tasks for real estate agents, particularly during bidding rounds. According to Norwegian regulations, when real estate agents receive bids, they are obliged to communicate them in writing to the sellers and other potential bidders, and deadlines of 5 to 15 min are common practice, a process that is commonly known to be hectic. Time pressure, by contrast, is not recognized as a built-in part of nurses' work tasks but may rather be understood as a consequence of understaffing. Finally, both occupations are well known in Norway, and the general population has at least basic knowledge about their core work tasks. Hence, it is reasonable to assume that the participants were able to read vignettes about nurses and real estate agents and appraise the job situations described.
Aims of the Study
With the present study, we aim to contribute to the job characteristic literature by applying vignettes to investigate the degree to which participants will appraise two job demands (i.e., time pressure and emotionally demanding situations) for two occupational groups (i.e., nurses and real estate agents) as hindering and/or challenging. Additionally, we aim to reveal how the participants' positive trait emotions are related to their appraisals. A vignette study applied on a convenience sample of Norwegian students provides empirical data for the study. We posit the following hypotheses:
Design
To investigate the circumstances under which time pressure and emotional demands are perceived as hindering or challenging, we developed a quasi-experimental study with vignettes. Specifically, we provided two vignettes to the participants, three times each. For each subsequent time the vignette was presented, additional information about the occupation of the person in the vignette was provided. The first time the vignettes were presented, only the employee's name (Hans or Hanna) and demand category (time pressure or emotionally demanding situation) were included. The second and third times the vignettes were presented, we included the occupation of the fictional person in the vignette, who was either a nurse or real estate agent.
The first vignette described a job situation with high time pressure: "Hanna/Hans has been at work for a few hours. She or he has not been able to take a break yet. It is not certain that she or he will have time to sit down during the rest of the workday. There are many job tasks to be done, and the tempo is high. It is often like this at Hanna's/Hans' job. She or he must often choose which job tasks should be prioritized and which job tasks must wait. A hectic day at work often means that Hanna/Hans is not able to perform all the tasks of the day before she or he goes home, and it is not unusual that she or he must work extra hours and at unfavorable times of the day. To what degree do you think Hanna/Hans is experiencing her or his job as…" Then, six appraisal items were presented, as detailed in the Measures section below.
The second vignette described an emotionally demanding job situation: "Hans/Hanna has been at work a few hours when he or she gets into a situation with a woman who is having a very hard time. The woman cries a lot. Hans/Hanna feels like the woman is overwhelmed with emotions and that she is seeking help from him or her to handle the situation she is currently in. It is hard for Hans/Hanna to understand what the woman is trying to tell him or her; she cries so much that it is hard to have a conversation. The woman takes a long time to be able to find the words to describe what she wants and appears somewhat chaotic when meeting Hans/Hanna. To what degree do you think Hans/Hanna is experiencing his or her job as…". Again, the six appraisal items were presented (see below).
The participants were randomly assigned to one of two conditions, in which the gender and profession of the person in the vignettes varied. See Figure 1 for the flow diagram.
Participants
Of 1453 students, 851 in the age range from 16 to 56 (M = 25.22, SD = 5.25) completed the survey and were included in the analyses, of which 77.6% were women (N = 664) and 21.8% were men (N = 187). The students came from a broad variety of study fields: 191 (22.3%) in psychology, 221 (25.8%) in nursing, 30 (3.5%) in real estate, 84 (9.8%) in economics, 111 (13%) in law, and 213 (24.9%) in "other study fields". The students were invited to participate in an electronic survey through various social media platforms and by e-mail.
Measures
Data were analyzed using IBM SPSS 25 (IBM, Armonk, NY, USA) and Mplus version 8 [52]. Age and gender were controlled for.
Appraisal of Job Demands
To measure the participants' appraisal of job demands as hindering or challenging, we applied six items previously used by Bakker and Sanz-Vergel [18]. The adjectives were introduced after the vignettes with the text: "To what extent do you believe that Hans/Hanna experienced the situations as…". Responses were given on endpoint-labeled scales ranging from 1 (to a small degree) to 5 (to a large degree). A principal axis exploratory factor analysis with promax rotation suggested that two factors may account for the correlations between the demand variables. Two eigenvalues were higher than 1, and a parallel analysis [53] also supported the choice of a two-factor solution. Since the correlation between the two factors was trivial (r = −0.05), we reran the final model with varimax rotation. Different from Bakker and Sanz-Vergel [18], who conceptualized hindrance demands as consisting of the three items "hindering", "stressful", and "demanding", our analysis revealed that the item "challenge" also belonged to this factor. Furthermore, our results revealed that the second factor consisted of the items "interesting" and "motivating". We believe this result is due to the Norwegian language, in which the term "challenge" has a more negative connotation than in English, and even more so when reading about demanding situations (i.e., the vignette stories). Hence, "challenge" is associated with negative appraisals, while "interesting" and "motivating" represent positive appraisals. Accordingly, we believe that our factor structure does correspond to the hindrance-challenge framework reported in previous studies. Nevertheless, to make visible that our factor structure is different, we chose to apply the terms "hindrance-like" for hindering (i.e., negative) appraisals and "challenge-like" for challenging (i.e., positive) appraisals when reporting our findings. Two mean-score demand variables were computed, with Cronbach's α = 0.82 for the hindrance-like subscale and α = 0.83 for the challenge-like subscale.
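For readers wishing to replicate this step, the sequence of analyses (principal-axis factoring with promax rotation, Horn's parallel analysis, and Cronbach's α) can be sketched as follows; the file and column names are hypothetical, and the study itself used SPSS and Mplus rather than Python:

```python
# A hedged sketch: principal-axis EFA with promax rotation, parallel analysis,
# and Cronbach's alpha for the six appraisal items.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("appraisal_items.csv")    # six appraisal items, 1-5 scale

# Principal-axis factoring with promax rotation, two factors
fa = FactorAnalyzer(n_factors=2, rotation="promax", method="principal")
fa.fit(items)
print(fa.loadings_)

# Horn's parallel analysis: compare observed eigenvalues with the mean
# eigenvalues of random normal data of the same shape
ev_obs, _ = FactorAnalyzer(rotation=None).fit(items).get_eigenvalues()
rng = np.random.default_rng(1)
ev_rand = np.mean([np.linalg.eigvalsh(np.corrcoef(
    rng.standard_normal(items.shape), rowvar=False))[::-1]
    for _ in range(100)], axis=0)
print("retain factors where observed > random:", ev_obs > ev_rand)

def cronbach_alpha(df):
    k = df.shape[1]
    return k / (k - 1) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

print(cronbach_alpha(items[["hindering", "stressful", "demanding", "challenge"]]))
```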
Emotions
Trait-level emotions were measured with the Basic Emotions Trait Test (BETT, [54]). The short version of the scale comprises nine items reflecting five basic emotions (happiness, interest, fear, anger, and sadness). The two positive emotions represent hedonic (i.e., happiness) and eudaimonic (i.e., interest) emotions, respectively, whereas the three negative emotions may be summarized as a single negative composite score [43]. The participants were asked to report the overall frequency of the five basic emotions in their lives. The introduction reads "In general, how often do you feel …", followed by nine adjectives or adjective phrases, for example, "happy" or "scared" (adjectives) or "completely absorbed in what I am doing" (adjective phrase). The response options ranged from 0 = never to 6 = all the time. To check the three-dimensional structure of the test, we ran a principal axis exploratory factor analysis with promax rotation. Three eigenvalues were higher than 1, and a parallel analysis [53] also supported the choice of a three-factor solution. We take this result as evidence for H10. Negative emotions were not used in the present study; hence, two mean-score emotion variables were computed for subsequent analyses, with Cronbach's α = 0.86 for the hedonic emotions subscale and α = 0.79 for the eudaimonic emotions subscale.

Results

Table 1 presents the means, standard deviations, skewness, and varimax-rotated factor loadings for the demand items. The participants' gender (man = 0, woman = 1) was significantly related to hindrance-like appraisals (B = 0.18, p < 0.001), whereas age was not (p = 0.485). Similarly, the participants' gender (B = 0.20, p < 0.001), but not age (p = 0.720), was related to challenge-like appraisals. Hence, age was excluded from subsequent analyses. A multilevel (mixed model) regression analysis with grand-mean-centered variables showed that the intraclass correlations (ICC) were 0.29 for hindrance-like demands and 0.19 for challenge-like demands. Overall, no mean differences were found between time pressure and emotional demands, neither for hindrance-like (p = 0.401) nor challenge-like (p = 0.061) demands. Looking more closely at the different vignettes, however, provides a more differentiated picture. A factorial repeated measures analysis (GLM) was conducted with gender as the between-participant covariate. Separate models were run for hindrance-like demands and challenge-like demands, and the results are summarized in Figures 2 and 3, respectively, showing means and standard errors for the six vignettes for males and females separately. For hindrance-like demands, the Huynh-Feldt sphericity was ε = 0.81. The main effect was significant, F(4.02, 3400) = 68.64, p < 0.001, as was the interaction with gender, F(4.02, 3400) = 4.79, p = 0.001. In line with the significant overall interaction, the 95% CIs for males and females did not overlap in the no-job emotional vignette and the two nurse vignettes. For challenge-like demands, the Huynh-Feldt sphericity was ε = 0.92. The main effect was significant, F(4.62, 3891) = 186, p < 0.001, but the interaction with gender was not, F(4.02, 3400) = 1.81, p = 0.113. Although the overall interaction test was non-significant, the 95% CIs for males and females did not overlap in the no-job emotion, nurse time pressure, and real estate time pressure vignettes, indicating a post-hoc interaction effect for these three conditions. To further test the hypotheses, we conducted post-hoc paired sample t-tests.

As hypothesized in H1, time pressure and emotional demands were appraised as both hindrance-like and challenge-like. Specifically, the overall mean hindrance-like score, M = 3.89, SD = 0.50, was higher than the challenge-like score, M = 3.02, SD = 0.62. A paired sample t-test showed that this difference was significant, t(850) = 34.61, p < 0.001 (two-tailed). We further divided the two variables into time pressure hindrance-like and time pressure challenge-like scores, and observed that the former (M = 4.01, SD = 0.54) was significantly higher than the latter (M = 3.2, SD = 0.73), t(850) = 24.6, p < 0.001 (two-tailed). For the emotional demands, the hindrance-like scores (M = 3.77, SD = 0.59) were also higher than the challenge-like scores (M = 2.77, SD = 0.75), t(850) = 32.61, p < 0.001.
The hindrance-like scores for nurses during time pressure situations (M = 4.07, SD = 0.66) were higher than those during emotionally demanding situations (M = 3.56, SD = 0.85) and a paired-sample t-test showed that the difference was significant, t(846) = 18.75, p < 0.001 (two-tailed), supporting H2. The challenge-like scores for nurses during emotionally demanding situations (M = 3.25, SD = 1.00) were lower than those during time pressure situations (M = 3.47, SD = 1.03) and a paired-sample t-test showed that the difference was significant, t(846) = 5.98, p < 0.001 (two-tailed). H3 was not supported.
In line with H4, the hindrance-like scores for real estate agents during emotionally demanding situations (M = 4.01, SD = 0.87) were higher than those during time pressure situations (M = 3.80, SD = 0.76) and a paired-sample t-test showed that the difference was significant, t(846) = -7.02, p < 0.001 (two-tailed). The challenge-like scores for real estate agents during time pressure situations (M = 3.36, SD = 0.98) were higher than those during emotionally demanding situations (M = 2.01, SD = 0.98) and a paired-sample t-test showed that the difference was significant, t(845) = 30.28, p < 0.001 (two-tailed). H5 was supported.
The challenge-like scores for time pressure were higher for nurses (M = 3.47, SD = 1.03) than for real estate agents (M = 3.35, SD = 0.98) and a paired-sample t-test showed that the difference was significant, t(848) = 2.76, p = 0.006 (two-tailed). H8 was not supported. The challenge-like scores for emotionally demanding situations were also higher for nurses (M = 3.25, SD = 1.00) than for real estate agents (M = 2.10, SD = 0.98) and a paired-sample t-test showed that the difference was significant, t(845) = 28.53, p < 0.001 (two-tailed), confirming H9.
The only effect of gender of the employee in the vignette was seen for hindrance-like scores among nurses: Hanna was assigned higher scores for the emotional vignette, (M = 3.64, SD = 0.81) than Hans (M = 3.46, SD = 0.89) and a paired-sample t-test showed that the difference was significant, t(846) = 3.04, p = 0.002 (two-tailed).
In line with H10, our results revealed that hedonic and eudaimonic feelings are different concepts accounted for by different factors. Table 2 presents the means, standard deviations, skewness, pattern matrix, and factor correlations for the trait-level emotions. Finally, we fitted a multilevel path model to the data (Figure 4). The model included hindrance-like and challenge-like appraisals as dependent variables, alongside hedonic feelings, eudaimonic feelings, and gender as the independent variables. The model depicted in Figure 4 was saturated, with zero degrees of freedom (hence, no goodness-of-fit estimates were available). Gender predicted both hindrance-like appraisals (β = 0.18, p < 0.001) and challenge-like appraisals (β = 0.18, p < 0.001). Hindrance-like appraisals were not significantly associated with emotions (ps > 0.239), whereas challenge-like appraisals were predicted by both hedonic feelings (β = 0.12, p = 0.008) and eudaimonic feelings (β = 0.12, p = 0.014). This result was not consistent with H11.
Discussion
We aimed to contribute to the job characteristics literature by using a vignette study. Norwegian students with no specified work experience imagined how time pressure and emotionally demanding situations might have been appraised as hindrances and/or challenges for nurses and real estate agents. We also analyzed the participants' own trait emotions and how these were related to the vignette appraisals.
Typically, when differentiating job demands, the scientific literature has presented this in a hindrance-challenge framework (e.g., [6,16,21,55]). In previous research, the items that have been used to measure challenge and hindrance demands were most often decided a priori; that is, researchers decided which items (i.e., adjectives) measure hindrance demands and challenge demands before the measures are done. In our study, we wanted to explore to what degree the appraisals could have both a positive and negative denomination at the same time. Thus, we chose to apply the six adjectives previously applied to measure hindrance and challenge demands [18], but also applied a data-driven approach that grouped the items in accordance with the result from the factor analysis. Our results revealed a similar division between "good" and "bad" job demands, as did the Bakker and Sanz-Vergel [18] study, but differed from those of previous studies in that the item "challenge" was loaded with items belonging to the previously reported subscale of hindrance demands (i.e., "hindering", "stressful", and "difficult") and not with the more positive appraisal items "motivating" and "interesting". Several reasons may account for these results. First, it might be due to language. Although the Norwegian word for challenging (i.e., utfordring) holds both positive and negative connotations, depending on the context, the term has more negative connotations than the English term. It is not unreasonable to assume that this difference in meaning contributed to the different factor structures. Second, when applying vignettes, the reader might underestimate the engagement of the person in the vignette in demanding situations. When a person is engaged, the term challenge is often positively charged. When a person is disengaged or stressed, the term challenge is often negatively charged. In connection with our previous argument regarding the Norwegian term for challenge, when evaluating the fictional persons' experience (i.e., how do you think Hanna/Hans experienced this situation), it might be that underestimation of engagement led to challenge having mostly negative connotations in our study. Taken together, we believe that our factor structure does correspond with the previously reported labels of hindrance and challenge demands. Nonetheless, since our results did have a different factor structure than previous studies, instead of using the labels hindrance and challenge, we applied the labels hindrance-like and challenge-like, respectively. However, when we used the terms hindrance-like and challengelike, our intention was merely to make visible that our results revealed that one item (i.e., challenge) loaded differently from previous studies. Hence, the new labels (i.e., hindrance-like and challenge-like) are, in our opinion, representing the same meaning as the previously used labels (i.e., hindrance and challenge).
In line with H1, when no job was specified, both job demands (i.e., time pressure and emotionally demanding situations) were appraised as hindrance-like and challenge-like to different degrees, specifically more hindrance-like than challenge-like. This is in line with the literature reporting that all job demands require sustained effort and even if some job demands have motivational potential, all job demands have costs [56]. This is also in line with the literature reporting that the same job demands can be appraised as hindering and challenging at the same time [18,26]. Thus, it seems that imagined and real-life job demands share some basic characteristics, although the results from the two approaches are not identical.
We argued that the nature of work belonging to an occupational group could impact the degree to which job demands were appraised as hindering or challenging and that this was related to whether job demand typically hindered the occupational group from achieving their work goals. Thus, we hypothesized that time pressure would be appraised as more hindering for nurses than emotional demands (H2). Specifically, and as expected, time pressure for nurses was appraised as more hindrance-like than emotional demands, in line with the literature that revealed how time pressure prevents nurses from achieving their work goals and attending to patient care [57]. In addition, emotional demands in which a nurse is offering care and comfort are viewed as one of the core work characteristics for nurses and therefore as less preventive of goal achievement, although these situations require effort. Moreover, among the appraisals of hindrance-like demands, the vignette with nurses facing emotional demands received the lowest score. We also hypothesized that emotional demands would be appraised as more challenging than time pressure (H3). This hypothesis was not supported, as we unexpectedly found that time pressure was appraised as more challenge-like than emotional demands. One of the reasons for this finding may be that when the participants, who were not nurses, read the vignettes, they interpreted emotional demands as a very clear part of the nurse's daily job tasks. Thus, their appraisal may reflect that they believe the nurse will solve these situations (i.e., they are to a little degree hindering) and that emotional demands are such an integrated part of their daily jobs that they were not appraised as challenging, as expected.
Furthermore and in line with H4, we found that for real estate agents, the participants appraised emotional demands to be more hindrance-like than time pressure. Moreover and in line with H5, time pressure was appraised as more challenge-like than emotional demands. This may be explained by the nature of work belonging to this occupational group, in which emotional demands may not be considered a core work experience, while time pressure is part of real estate agents' daily activities (e.g., bidding rounds). Additionally, emotional demands were appraised to be less challenge-like for real estate agents than the other six job situations described in the vignettes. These findings are also in line with the literature that describes how short-term time pressure (e.g., during a workday with deadlines), which is something real estate agents regularly face during their workday, can be motivational [58].
When comparing appraisals of the job demands for the two occupational groups, we found, as hypothesized in H6, that time pressure was appraised as more hindrance-like for nurses than for real estate agents and, in line with H7, that emotional demands were appraised more hindrance-like for real estate agents than for nurses. These results align with the literature which revealed that job demands that are typically part of the nature of work in an occupation are appraised as less hindering than job demands that are faced less frequently as part of the work (e.g., [18,55]). Although nurses are struggling with time pressure on a frequent level, it is not considered a part of their work in a way that helps them achieve their work goals. Thus, time pressure is appraised as hindering them to a greater degree than time pressure is hindering real estate agents, who frequently encounter time pressure as a part of their work tasks. Conversely, real estate agents are not as experienced in facing emotional demands as part of their job; therefore, real estate agents may appraise emotional demands as more hindering compared to nurses who are expected to handle emotional demands as an integrated part of their work.
We hypothesized that time pressure would be appraised as more challenging for real estate agents than for nurses (H8). However, and unexpectedly, time pressure was appraised as more challenge-like for nurses than for real estate agents. This result may be related to the finding that time pressure unexpectedly was appraised as more challengelike than was emotional demands for nurses (H3). Thus, overall, time pressure for nurses was appraised as more challenge-like than we expected, both when we measured this only for nurses (i.e., comparing challenge-like appraisals between time pressure and emotional demands for nurses) and between occupational groups (i.e., comparing challenge-like appraisals of time pressure between nurses and real estate agents). These findings may also reflect what the participants, who are not nurses, believe about nurses' jobs. For example, in the Norwegian media, nurses are often portrayed as working under intense time pressure. This portrayal of nurses working under constant time pressure may lead others (i.e., participants) to interpret time pressure as a core job characteristic that nurses must overcome, different from nurses themselves who report time pressure as preventing them from doing their job in the way they want to. Hence, the appraisals of time pressure for nurses may therefore be appraised as more challenge-like than we expected. Additionally, emotional demands were, as expected and in line with H9, appraised as more challengelike for nurses than for real estate agents. This result is in line with how we expect nurses to handle emotionally demanding situations as a part of their daily work, while the same is not expected for real estate agents. Additionally, it is in line with the literature reporting on how some demands do have motivational potential, although they require sustained effort, e.g., [8].
Altogether, our findings from H1-H9 revealed that the same job demands can be appraised as hindrance-like and challenge-like to different degrees within an occupational group and that when two occupational groups are compared, the same pattern follows. Thus, categorizing job demands a priori as having either a negative or positive impact on employee well-being does not seem to bring enough nuance to the understanding of job demands. Rather, it seems that the degree to which job demands are appraised as hindrance-like or challenge-like is not only due to the job demand itself but is also connected to the context within which the job demand occurs (i.e., occupation). Even though H3 and H8 were not supported, the overall results were meaningful and supportive of our suggestion that job demands are better understood when approached more nuanced, as opposed to categorizing them a priori. Moreover, our findings support our proposal that each job demand should be measured in such a way that the degree of positive (i.e., challenge-like) and negative (i.e., hindrance-like) appraisals may be captured when they occur simultaneously. Additionally, our results support the notion that some job demands (i.e., challenge-like) may also play a role in the motivational process of the JD-R model and not only in the health-impairment process.
We wanted to investigate how the participants' positive trait emotions were related to their appraisals of job demands. Specifically, we hypothesized that hedonic and eudaimonic feelings would be differently related to hindrance and challenging demands (H11). Our Hypoth was, however, not confirmed. This result was surprising given the large number of previous studies showing how hedonic feelings are unrelated or even negatively related to challenging tasks, whereas eudaimonic feelings are positively associated with such tasks (see Vittersø, 2016, for an overview). Again, a possible reason might be that our data derives from participants imagining how other people might be feeling in challenging situations and not from real feelings in such situations. Some studies indicate that people underestimate the positivity evoked in the process of being immersed in overcoming a challenging task (e.g., [59]) and we speculate that an underestimation of eudaimonic feelings in challenge-like demand appraisals may account for the current result.
Finally, some gender effects were found. We observed gender differences among the participants in which women reported higher scores on all appraisals of job demands, both in the hindrance-like and challenge-like conditions. This finding may be explained by a relatively consistent finding in the literature, namely that women are expected to display stronger emotional expressivity than men. These differences are observed both for negative and positive emotions [60]. The underlying reason for these differences may stem from role development by which women are socialized to be emotionally expressive and men are socialized to express fewer emotions [61]. According to poststructuralist feminist theories, different emotional roles for women and men have also been found in workplaces integrated as part of organizational norms and practices [62]. Thus, when responding to the questionnaires used in our study, women may tend to score higher than men. Nonetheless, although women reported higher scores than men on all 12 appraisal conditions, the responses followed the same patterns, as depicted in Figures 2 and 3.
We also found one effect of the gender of the employee in the vignette, namely that when reading about nurses, the emotional demands were rated as more hindering for Hanna than for Hans. This one-employee gender effect may be explained by the shifting standards model [63], which suggests that when we make judgments about members of a social category (e.g., men) based on stereotype-relevant dimensions, these judgments are based on comparing standards for the within-group (e.g., judging a man relative to a male standard). Society still views nursing as a gender-specific occupation and the public perspective is that nursing consists of female-associated qualities, such as compassion and caring [64]. Additionally, women are overrepresented as nurses; for example, in Norway, only 11.4% of nurses were men in 2020 [65]. Thus, when the participants evaluated Hans' experience in the emotionally demanding situation, they may have attributed female-associated traits of nursing (i.e., care and compassion) to him and with that, according to the shifting standards model, compared him to other men, which again led to lower hindrance-like scores for Hans in the emotionally demanding situations. These findings were not obtained when the participants appraised the job demands faced by the real estate agents. One reason for this may be that this is a profession with more gender equality, as almost 40% of this profession in 2020 in Norway were women [66]. Moreover, for the other vignettes, there were no effects of the gender of the employee.
Limitations and Future Research
There are some limitations to this study that need to be acknowledged. First, when applying vignettes, it is possible that the assessments of the hypothetical job demands were less externally valid than if they were obtained by actual nurses and real estate agents. Moreover, the external validity could also be stronger if the situations were experiences in the field and not in fictional stories with fictional characters. Furthermore, our participants were relatively young and their job experiences were unknown. Nevertheless, previous studies have found that hypothetical situations can evoke similar reactions to those obtained in the field [67], even if it cannot be guaranteed that the same reactions and appraisals would have found place in real-life settings [49]. Another limitation that must be recognized is that we do not know if the participants based their appraisal on occupational stereotypes and how this may have influenced the results. Moreover, all participants were students and it is unknown whether they had previous work experiences. Thus, our findings cannot be generalized to other populations. Clearly, a replication study with nurses and real-estate agents reporting from their actual work experiences would strengthen the generalizability and external validity of the presented results.
Although our factor analyses resulted in similar differentiations of job demands as in previous studies, that is, positive and negative, our study differed in that the items "hindrance" and "challenge" belonged to the same factor (i.e., hindrance-like demands). Future studies should attempt to validate the differentiation of challenge-like and hindrancelike demands, particularly in Norway but also in other areas of the world.
Finally, we focused only on job demands (i.e., time pressure and emotionally demanding situations) and on how knowledge of an occupational group and individual trait emotions affected the appraisals of these demands. We did not investigate how these demands were related to, for example, work engagement and burnout, or other outcome variables. To validate that challenge-like job demands have motivational potential, it would be fruitful to design studies that also measure these relationships.
Conclusions
Despite the limitations, our study extends the understanding of the challenge-hindrance framework for job demands. Using a vignette approach, the present study showed that hindrance and challenge are separate, though related, dimensions of the concept of job demands. We also found that the same job characteristics were appraised differently depending on the occupational group they belonged to. In addition, our study revealed that positive trait emotions predicted challenge appraisals but not hindrance appraisals. Furthermore, our results revealed that job demands can be appraised as challenges and hindrances at the same time. This indicates that it is too simplistic to categorize a job demand as hindering or challenging a priori. Knowledge about the positive and negative potential of job demands is important when researching the nature and consequences of job demands, and calls for a nuanced approach in future job characteristics research.
Our findings also have implications for the development of sustainable work environments. That is, knowledge concerning to which degree demands have a motivational potential (i.e., challenge) and/or are distressing (i.e., hindrance) is important for facilitating and strengthening employee well-being. Funding: This work was supported by the Northern Norway Regional Health Authority (grand number HST1186-14). The publication charges for this article were funded by a grant from the publication fund of UiT The Arctic University of Norway.
Institutional Review Board Statement: This study followed the guidelines of the Declaration of Helsinki and further approval by an ethics committee was not required as per applicable institutional or national guidelines and regulations.
Informed consent: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The data are protected and not openly available.
|
2021-10-24T15:07:40.891Z
|
2021-10-21T00:00:00.000
|
{
"year": 2021,
"sha1": "163a134b156c202285809fc4413845879c732288",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/21/11662/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b50c37bde5b2554981ea0c579c38ae24600f3238",
"s2fieldsofstudy": [
"Psychology",
"Business"
],
"extfieldsofstudy": []
}
|
186320117
|
pes2o/s2orc
|
v3-fos-license
|
Competitive Mechanism For The Distribution Of Labor Resources In The Transport Objective
This article proposes a close labor force allocation model based on the use of a competitive mechanism. Many practical economic activity tasks and some economic theory issues are associated with the tasks of determining the optimal variant for solving the problem of the distribution of labor resources. One of these solutions is a compromise set. This article is devoted to the search for this set in a non-antagonistic non-coalition game related to the transport problem of integer programming.
Introduction
The study of methods of mathematical programming, especially dynamic programming, becomes necessary for the practical work of an economist. In mathematical economics, the task of the most efficient transfer of labor resources from one point to another is of great importance. Tasks of this type are relevant and constantly arise in various areas of our life, such as economics, industry, etc. This article discusses decision-making tasks with multiple participants. In such tasks, the compromise value of the income function for each of the participants depends on the decisions made by all other participants [1][2][3][4][5]. The basis of the work is the transport problem of integer programming. In this task, we consider the process of moving labor resources in order to obtain maximum income. The paper considers the game-theoretic version of this problem and for which the static model is built. However, unlike the transportation problem, in the constructed model, the goal of each participant in addition to maximizing their income is to reach a compromise with the other participants. As a result, it is possible that not all participants in the process will receive their maximum possible income. In a static model, a displacement plan will play the role of compromise, satisfying all participants [6][7].
Formulation Of The Problem
L,R,K), where L is the set of nodes consisting of N production points, M consumption points and intermediate nodes; R is the set of edges; K is the bandwidth function defined on the edges of the network [8]. All N production points produce different goods. We will enumerate all labor resources so that the resource number will correspond to the number of the manufacturer who produced this product. The quantity of manufactured goods s production point i is given and is denoted by . In -cooperative game [9][10][11].
is the set of player strategies, situations X, is called non-cooperative game. We will describe the game-theoretic version of the above model. In accordance with the definition, in order to define a game, it is necessary to determine a plurality of players, a variety of game situations and functions of player income [12]. Consequently, the set of players P are points of consumption and production . The set of situations S of the game will be the plans of transportation of the form: is the quantity of the s goods, transported from the i production point to the j consumption point.
The income functions of players are introduced as follows. For production points: where revenue from the sale of goods, production costs, the cost of In this game we need to find a compromise point, that is, such a situation , the implementation of which is a compromise between all players [16][17][18]. are known. Also known for the needs of each product and prices for each consumer [26][27][28][29][30]. On the edges of the network given capacity and the cost of transporting a unit of product. The network itself can have an arbitrary appearance, but it should be noted that with the addition of only one edge of the network or another participant in the process, the complexity of the task increases several times. For example, choose the network of the simplest form N=2, M=3. 10 Acting exactly according to the algorithm described above, in the problem thus posed we find a compromise point [31][32][33][34]. Thus, in this particular example will be a compromise :
Solution Algorithm
.
Conclusion
Thus, a description was given of the game model of the transportation problem, in which the parameters do not depend on time. For this model, an algorithm for finding a compromise point is considered, the application of which is illustrated with a specific example.
|
2019-06-13T13:23:10.373Z
|
2019-03-01T00:00:00.000
|
{
"year": 2019,
"sha1": "ecf463e20751d9a8dd26184dd515983f0620df5e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1172/1/012089",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0f500707eb0cbebb62155613fa1cac2e5831d08e",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
253315354
|
pes2o/s2orc
|
v3-fos-license
|
Mechanical Properties of Cement Reinforced with Pristine and Functionalized Carbon Nanotubes: Simulation Studies
Concrete is well known for its compression resistance, making it suitable for any kind of construction. Several research studies show that the addition of carbon nanostructures to concrete allows for construction materials with both a higher resistance and durability, while having less porosity. Among the mentioned nanostructures are carbon nanotubes (CNTs), which consist of long cylindrical molecules with a nanoscale diameter. In this work, molecular dynamics (MD) simulations have been carried out, to study the effect of pristine or carboxyl functionalized CNTs inserted into a tobermorite crystal on the mechanical properties (elastic modulus and interfacial shear strength) of the resulting composites. The results show that the addition of the nanostructure to the tobermorite crystal increases the elastic modulus and the interfacial shear strength, observing a positive relation between the mechanical properties and the atomic interactions established between the tobermorite crystal and the CNT surface. In addition, functionalized CNTs present enhanced mechanical properties.
Introduction
Recent years have witnessed an increasing interest in cement composites with the incorporation of different types of carbon nanotubes (CNTs) [1][2][3][4]. These are long cylindrical carbon molecules found by Iijima in 1995 [5]. They look like a layer of graphene rolled up on itself. Different authors [6,7] highlighted their excellent electrical, chemical and mechanical properties, which have revolutionized composite materials, microelectronics, biomedical applications and energy storage [8][9][10]. CNTs are characterized by a high elastic modulus as well as tensile strength [11], which makes them a very suitable option to reinforce materials such as cement. Therefore, it is possible to obtain composite materials with improvement in both tensile and compression strengths, as well as better durability, since crack propagation is inhibited [12][13][14][15][16][17].
Although most studies on cement reinforced with CNTs have been carried out only on a laboratory scale, some works attempted to extend the use of these materials to the large scale. The most critical point of the problem is the production of CNTs with controllable size and length. Due to great advances in material science, CNTs have already been mass produced in orders of several to several tens of kilograms per hour [18,19]. Large-scale production of CNT-cement composites has also been reported by several authors. Silva et al. [20] patented a method to produce CNTs embedded in a cement matrix in a continuous and large-scale stage. The authors state that this process could produce several tons per day and thus, be appropriate for the conventional cement industry. Jianguo et al. [21] developed a method to disperse CNTs in a cement matrix, appropriate for large-scale application. Jianlin et al. [22] described the fabrication of an intelligent
Model Systems
With the aim of analyzing the effect of the addition of pristine and functionalized single-walled CNTs (SWCNTs) on the mechanical properties of tobermorite 11 Å, different crystalline models were used. A SWCNT (2,2) was inserted in one of the interstices of the tobermorite crystal, as shown in Figures 1 and 2. It was decided to work with a SWCNT (2,2) due to its small diameter (2.71 Å), which does not cause much distortion on the tobermorite structure. on the CNT surface, the higher the values obtained for E and ISS. These results are a consequence of better interactions that are established at the cement-CNT interface.
Model Systems
With the aim of analyzing the effect of the addition of pristine and functionalized single-walled CNTs (SWCNTs) on the mechanical properties of tobermorite 11 Å, different crystalline models were used. A SWCNT (2,2) was inserted in one of the interstices of the tobermorite crystal, as shown in Figures 1 and 2. It was decided to work with a SWCNT (2,2) due to its small diameter (2,71 Å), which does not cause much distortion on the tobermorite structure. When calculating the ISS, the pulling out of the CNT must be done in a certain direction, pulling it out from the crystalline structure until it does not interact with it anymore. Therefore, the length of the cell has been increased to 500 Å (see Figure 2) in the pullingout direction in order to simulate the pull-out process and avoid the interaction with the surrounding cells and the crystal. Figure 3 depicts the different pristine and functionalized CNTs that were used to calculate the mechanical properties. on the CNT surface, the higher the values obtained for E and ISS. These results are a consequence of better interactions that are established at the cement-CNT interface.
Model Systems
With the aim of analyzing the effect of the addition of pristine and functionalized single-walled CNTs (SWCNTs) on the mechanical properties of tobermorite 11 Å, different crystalline models were used. A SWCNT (2,2) was inserted in one of the interstices of the tobermorite crystal, as shown in Figures 1 and 2. It was decided to work with a SWCNT (2,2) due to its small diameter (2,71 Å), which does not cause much distortion on the tobermorite structure. When calculating the ISS, the pulling out of the CNT must be done in a certain direction, pulling it out from the crystalline structure until it does not interact with it anymore. Therefore, the length of the cell has been increased to 500 Å (see Figure 2) in the pullingout direction in order to simulate the pull-out process and avoid the interaction with the surrounding cells and the crystal. Figure 3 depicts the different pristine and functionalized CNTs that were used to calculate the mechanical properties. When calculating the ISS, the pulling out of the CNT must be done in a certain direction, pulling it out from the crystalline structure until it does not interact with it anymore. Therefore, the length of the cell has been increased to 500 Å (see Figure 2) in the pulling-out direction in order to simulate the pull-out process and avoid the interaction with the surrounding cells and the crystal. Geometrical features of the models used to calculate the mechanical properties are listed in Table 1.
Calculation Method
The Forcite Module of Materials Studio Software [50] was used to calculate mechanical properties. Within the Forcite Module, the NPT ensemble was selected (N: number of particles, P: pressure, T: temperature), with both a constant temperature (298 K with a Nose-Hoover thermostat [51]) and pressure (1 × 10 −4 GPa with a Berendsen barostat [52]). The MD simulation was carried out for 6000 ps simulation time and 1 fs as a time interval. The chosen times were enough to achieve the equilibrium in the potential energy of the system.
The forcefield used to calculate the interaction between the cement and the CNT was the condensed-phase optimized molecular potential for atomistic simulation studies forcefield (COMPASSII) [53]; a forcefield based on ab initio calculations that allows describing the structure and properties of molecules and condensed phase systems in a wide range of temperature and pressure values. COMPASSII has been successfully applied in the simulation of systems containing CNTs and different materials derived from cement [37,[54][55][56][57][58][59].
To calculate the mechanical properties (E, ν, G and K), the elastic method was used, in which the response to an applied strain is derived from the second derivative of the potential energy with respect to strain. The relaxation of the system under applied strain is determined from the Hessian matrix. This method was applied to the last 10 frames of the MD trajectory and the elastic constants, K and G, were averaged over all frames. The Geometrical features of the models used to calculate the mechanical properties are listed in Table 1.
Calculation Method
The Forcite Module of Materials Studio Software [50] was used to calculate mechanical properties. Within the Forcite Module, the NPT ensemble was selected (N: number of particles, P: pressure, T: temperature), with both a constant temperature (298 K with a Nose-Hoover thermostat [51]) and pressure (1 × 10 −4 GPa with a Berendsen barostat [52]). The MD simulation was carried out for 6000 ps simulation time and 1 fs as a time interval. The chosen times were enough to achieve the equilibrium in the potential energy of the system.
The forcefield used to calculate the interaction between the cement and the CNT was the condensed-phase optimized molecular potential for atomistic simulation studies forcefield (COMPASSII) [53]; a forcefield based on ab initio calculations that allows describing the structure and properties of molecules and condensed phase systems in a wide range of temperature and pressure values. COMPASSII has been successfully applied in the simulation of systems containing CNTs and different materials derived from cement [37,[54][55][56][57][58][59].
To calculate the mechanical properties (E, ν, G and K), the elastic method was used, in which the response to an applied strain is derived from the second derivative of the potential energy with respect to strain. The relaxation of the system under applied strain is determined from the Hessian matrix. This method was applied to the last 10 frames of the MD trajectory and the elastic constants, K and G, were averaged over all frames. The mentioned method has been used by other authors, who obtained results comparable to the experimental values of the mechanical properties of both the C-S-H gel and polymers reinforced with CNTs [54,57,59]. Using the Voigt-Reuss-Hill (VRH) approximation [60], it is possible to compute E and ν, with the following equations, respectively: The Gou et al. [61] equation was used in order to calculate the ISS. They defined the pullout energy, E pullout , as the energy difference between the fully embedded CNT configuration and the complete pullout configuration. In turn, E pullout can be related to the ISS through the following equation: where r and L are the radius and length of the CNT, respectively. Table 2 shows the values obtained for bulk modulus (K), shear modulus (G), Young's modulus (E) and Poisson's ratio (ν), which were calculated using Equations (1) and (2). Several authors have calculated tobermorite modules by MD, obtaining similar results. B. Bhuvaneshwari [55] computed all the modules for the three types of tobermorite (9 Å, 11 Å and 14 Å) and jennite, obtaining very similar values to those presented in this work. M. Arar [62] also reported similar results.
Mechanical Properties
On the other hand, other authors have studied the mechanical behavior of tobermorite when adding pristine [63,64] and functionalized [65][66][67][68] CNTs, showing that the tensile strength of C-S-H reinforced with CNT (functionalized and non-functionalized) is significantly enhanced in the CNT direction, when compared to pure C-S-H.
To better understand the results: Figure 4 represents the E obtained for each structure. It can be clearly seen that E increases when adding pristine and functionalized SWCNTs. In addition, functionalized CNTs perform better. The enhancement seems to be a consequence of the interaction between the tobermorite and the functionalized CNT thanks to the added functional groups. The larger the number of carboxyl groups, the higher the elastic modulus.
It is important to mention that the experimental values seem to be lower than the computed values. Velez et al. [69] measured the elastic modulus of different clinkers contained in the cement as a function of porosity and found out higher values for those materials with null porosity. Jennings [70] proposed a colloidal model for the cement structure, which consists of globular C-S-H particles. Depending on the compaction level of the cement, it is possible to obtain two different structures: high-density C-S-H (HD C-S-H) and low-density C-S-H (LD C-S-H), with a greater average porosity for the second one.
There exists a simple method to compute E as a function of porosity using the equation proposed by Knudsen and Helmuth [71]: where E o is the elastic modulus in the absence of porosity, p is the porosity and 3.4 represents a coefficient obtained from many experimental measurements [69]. It is important to mention that the experimental values seem to be lower than the computed values. Velez et al. [69] measured the elastic modulus of different clinkers contained in the cement as a function of porosity and found out higher values for those materials with null porosity. Jennings [70] proposed a colloidal model for the cement structure, which consists of globular C-S-H particles. Depending on the compaction level of the cement, it is possible to obtain two different structures: high-density C-S-H (HD C-S-H) and low-density C-S-H (LD C-S-H), with a greater average porosity for the second one. There exists a simple method to compute E as a function of porosity using the equation proposed by Knudsen and Helmuth [71]: where Eo is the elastic modulus in the absence of porosity, p is the porosity and 3.4 represents a coefficient obtained from many experimental measurements [69]. Table 3 shows the value of E as a function of the average porosity for HD (0.26) and LD (0.36) structures. As it is possible to appreciate, an increase in porosity causes a significant decrease in E. In addition, our values for tobermorite are similar to those measured by Arar [62], who obtained 15 GPa and 25 GPa for LD and HD, respectively. Moreover, González et al. [72] measured 32.2 GPa for HD and 16.3 GPa for LD, while Fu et al. [27] attained 31.45 GPa (HD) and 18.11 GPa (LD). The experimental values for HD and LD were measured by Constantinides [73] and Keinde [74] using nanoindentation techniques, who obtained 21.7 GPa for LD and 29 GPa for HD. Table 3 shows the value of E as a function of the average porosity for HD (0.26) and LD (0.36) structures. As it is possible to appreciate, an increase in porosity causes a significant decrease in E. In addition, our values for tobermorite are similar to those measured by Arar [62], who obtained 15 GPa and 25 GPa for LD and HD, respectively. Moreover, González et al. [72] measured 32.2 GPa for HD and 16.3 GPa for LD, while Fu et al. [27] attained 31.45 GPa (HD) and 18.11 GPa (LD). The experimental values for HD and LD were measured by Constantinides [73] and Keinde [74] using nanoindentation techniques, who obtained 21.7 GPa for LD and 29 GPa for HD. Table 4 shows the values obtained for ISS and non-bond energy for pristine CNTs and CNTs functionalized with four and six COOH groups. Moreover, the ∆E non-bond , which represents the difference between the non-bond energies of the structures without and with the CNT inside the crystal (E pulled-out configuration non-bond − E fully embedded configuration non-bond ), is also shown in this Table. The structures with functionalized CNTs show higher ISS and ISS is increased for CNT with larger number of COOH groups. Since there is no chemical bonding at the interface between the CNT and the tobermorite, the interaction between both is mainly dominated by Van der Waals and electrostatic forces. When the CNT is removed from the structure, these interactions disappear and a decrease in the non-bonding energy is observed. The values of ∆E non-bond are positive as E non-bond is negative. 
CNTs with polar groups, such as COOH, show better values (more negative) of the E non-bond , because the interaction with the tobermorite structure is enhanced, which, in turn, increases ISS [37,75,76]. As is seen from Table 4, there is a positive relation between E non-bond and ISS.
Hydrogen Bonds (H-Bonds)
CNTs functionalized with COOH groups can establish H bonds with oxygen atoms in the tobermorite. Geometry requisites that have been used to define the presence of a H bond are: the distance between H from the donor group (D) and O from the acceptor group (A) is lower than 2.5 Å and the angle DHA is higher than 90 • . Several authors [29,37,43,64,77] have found a positive relation between the mechanical properties of a cement matrix reinforced with carbon nanostructures and interactions, chemical bonds or non-bond interactions, which are established at the interface of the materials.
The number of H bonds (NHb) established between the CNT and the tobermorite (for the models used to compute E) and the average length of those bonds can be seen in Table 5. Bond lengths between the CNT and the matrix are slightly larger for the structures containing pristine CNTs. The results for the models used for the pull-out process are listed in Table 6. As an example, Figure 5 shows the H bonds, as black dashed lines, which are created between the tobermorite and the CNT functionalized with four COOH groups.
As an example, Figure 5 shows the H bonds, as black dashed lines, which are created between the tobermorite and the CNT functionalized with four COOH groups.
According to our above results, the larger the functionalization degree, the better the ISS and E values. The values in Tables 5 and 6 show that a larger number of carboxyl groups allows for more H bonds at the interface and, consequently, a more favorable nonbond energy.
Conclusions
After carrying out MD simulations to study the mechanical properties of a tobermorite matrix, with either a pristine or a functionalized CNT with different concentrations of carboxyl groups, the following conclusions can be highlighted:
•
Young's modulus of tobermorite is enhanced when incorporating nanotubes in its composition; • Young's modulus presents higher values when the concentration of carboxyl groups in the CNT is increased. This means that compounds with functionalized nanotubes show better mechanical properties, if compared to pristine nanotubes; • The obtained values for E are significantly higher than those obtained by other authors with experimental techniques, since it is not possible to simulate the porosity of the cement matrix. When correcting the values with the equation proposed by Knudsen and Helmuth [66], it is possible to obtain values with an order of magnitude very similar to other authors, keeping the tendency of a higher E as a function of the number of functional groups; and • The functionalization of CNT with carboxyl groups promotes the formation of a Hbond network with tobermorite. The larger the number of functional groups, the more H bonds established at the interphase, causing enhanced adhesion and thus, improving mechanical properties. According to our above results, the larger the functionalization degree, the better the ISS and E values. The values in Tables 5 and 6 show that a larger number of carboxyl groups allows for more H bonds at the interface and, consequently, a more favorable non-bond energy.
Conclusions
After carrying out MD simulations to study the mechanical properties of a tobermorite matrix, with either a pristine or a functionalized CNT with different concentrations of carboxyl groups, the following conclusions can be highlighted:
•
Young's modulus of tobermorite is enhanced when incorporating nanotubes in its composition; • Young's modulus presents higher values when the concentration of carboxyl groups in the CNT is increased. This means that compounds with functionalized nanotubes show better mechanical properties, if compared to pristine nanotubes; • The obtained values for E are significantly higher than those obtained by other authors with experimental techniques, since it is not possible to simulate the porosity of the cement matrix. When correcting the values with the equation proposed by Knudsen and Helmuth [66], it is possible to obtain values with an order of magnitude very similar to other authors, keeping the tendency of a higher E as a function of the number of functional groups; and • The functionalization of CNT with carboxyl groups promotes the formation of a Hbond network with tobermorite. The larger the number of functional groups, the more H bonds established at the interphase, causing enhanced adhesion and thus, improving mechanical properties.
|
2022-11-05T15:26:00.005Z
|
2022-11-01T00:00:00.000
|
{
"year": 2022,
"sha1": "06b41dd45cb0d9d07870b30a90db683bdfd4b80d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/15/21/7734/pdf?version=1667458814",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f11b5568a484eab52ae42c2a260680f942bf7bb3",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
}
|
16485427
|
pes2o/s2orc
|
v3-fos-license
|
Critical Behavior of the Kramers Escape Rate in Asymmetric Classical Field Theories
We introduce an asymmetric classical Ginzburg-Landau model in a bounded interval, and study its dynamical behavior when perturbed by weak spatiotemporal noise. The Kramers escape rate from a locally stable state is computed as a function of the interval length. An asymptotically sharp second-order phase transition in activation behavior, with corresponding critical behavior of the rate prefactor, occurs at a critical length l_c, similar to what is observed in symmetric models. The weak-noise exit time asymptotics, to both leading and subdominant orders, are analyzed at all interval lengthscales. The divergence of the prefactor as the critical length is approached is discussed in terms of a crossover from non-Arrhenius to Arrhenius behavior as noise intensity decreases. More general models without symmetry are observed to display similar behavior, suggesting that the presence of a ``phase transition'' in escape behavior is a robust and widespread phenomenon.
Introduction
Noise-induced transitions between locally stable states of spatially extended systems are responsible for a wide range of physical phenomena [1] . In classical systems, where the noise is typically (but not necessarily) of thermal origin, such phenomena include homogeneous nucleation of one phase inside another [2] , micromagnetic domain reversal [3,4,5] , pattern nucleation in electroconvection [6] and other non-equilibrium systems [7] , transitions in hydrogenbonded ferroelectrics [8] , dislocation motion across Peierls barriers [9] , instabilities of metallic nanowires [10] , and others. In quantum systems, the problem of tunneling between metastable states is formally similar, and problems of interest include decay of the false vacuum [11] and metastable states in general [12] , anomalous particle production [13] , and others.
The modern approach to these problems, beginning with the work of Langer on classical systems [2] and Coleman and Callan on quantum systems [11] , considered systems of infinite spatial extent (for a review, see Schulman [14] ). In certain systems, however, finite size may lead to important modifications, and in some instances qualitatively new behavior. Approaches to noise-induced transitions between stable states in finite systems modelled by nonlinear field equations have been investigated by a number of authors [15,16,17,18,19] .
In a recent paper [20] , Maier and Stein studied the effects of weak white noise on a bistable classical system of finite size whose zero-noise dynamics are governed by a symmetric Ginzburg-Landau φ 4 double-well potential. Their surprising result was the uncovering of a type of second-order phase transition in activation behavior at a critical value L c of the system size. That a crossover in activation behavior must take place is clear from both simple physical and mathematical arguments (cf. Sec. 2). What is not so obvious is that the crossover is an asymptotically sharp, second-order phase transition in the limit of low noise. The change of behavior arises from a bifurcation of the transition state, from a zerodimensional (i.e., constant) configuration below L c , to a spatially varying (degenerate) pair of "periodic instantons" above L c .
The quantitative effects of the transition are significant. In the weak-noise limit, the activation rate is given by the Kramers formula Γ ∼ Γ 0 exp(−∆W/ǫ), where ǫ is the noise strength, ∆W the activation barrier, and Γ 0 the rate prefactor. The barrier ∆W is interpreted as the height, in dimensionless energy units, of the transition state, and by analogy with chemical kinetics, the exponential falloff of the rate is often called 'Arrhenius behavior'. The dependence on system size L of ∆W changes qualitatively at L c . Also, the rate prefactor Γ 0 diverges as L c is approached both from above and below. Precisely at L c , Γ 0 becomes ǫ-dependent in such a way that it diverges as ǫ → 0. This is 'non-Arrhenius' behavior. (For boundary conditions that give rise to a zero mode, such as periodic, there is in addition a noise dependence of Γ 0 above L c , and the prefactor divergence as L → L + c may be affected.) Given the increasingly anomalous behavior of the escape rate as L c is approached from either side, a few words should be said about the domain of validity of the Kramers formula Γ ∼ Γ 0 exp(−∆W/ǫ) for the escape rate, which displays Arrhenius behavior with both Γ 0 and ∆W independent of ǫ. Strictly speaking, this formula is asymptotically valid, that is, only in the limit ǫ → 0. In a looser sense, the formula can often be applied to physical situations when the noise strength ǫ is both small compared to ∆W , and so that the prefactor Γ 0 is small compared to exp(−∆W/ǫ). These represent minimal requirements; in all cases applications need to be made with care. For a fuller discussion on these and related issues, see [21]. For the models presented here, a discussion of the regions of validity of all derived formulae will be presented in Sec. 4.3.
A question that naturally arises is whether this critical behavior is generic: could it depend on special features of the potential studied in [20], in particular its φ → −φ symmetry? It was noted [20] that in more complicated models [19] , the transition may become first-order; in others, it could conceivably disappear altogether. The purpose of this paper, however, is to provide support for the claim that the transition found in [20] is at least not confined to models with φ → −φ symmetry; that it should in fact appear in a wide range of models and corresponding physical situations. To support this, a nonsymmetric φ 3 model will be studied and solved, and a second-order transition similar to that found in [20] will be uncovered. This will be followed by a brief discussion of general nonsymmetric Ginzburg-Landau models with smooth polynomial potentials up to degree four, and it will be argued that this second-order transition should appear in typical representations of these models.
The Model
We consider on [−L/2, L/2] a classical field φ(x, t) subject to the potential as shown in Fig. 1.
It is already clear that a crossover in activation behavior must occur. In the limit ℓ → 0 the gradient term in the integrand of the energy in Eq. (4) will diverge for a nonuniform state; while for ℓ → ∞ the V (u) term will diverge for a uniform state. In this paper we will employ periodic boundary conditions throughout, and it is clear that there must be a crossover from a uniform to a nonuniform transition state as ℓ increases from 0. Physically, the crossover arises from a competition between the bending and bulk energies of the transition state.
This crossover will be analyzed in succeeding sections; we will see that it corresponds to an asymptotically sharp, second-order phase transition in the activation rate. Both stable and transition states are time-independent solutions of the zero-noise Ginzburg-Landau equation, that is, they are extremal states of H[φ], satisfying As already noted, we will assume periodic boundary conditions throughout. So there is a uniform stable state u s = +1, and a uniform unstable state u u = −1. In the next section we will see that the latter is the transition state for ℓ < ℓ c = √ 2π. At ℓ c a transition occurs, and above it the transition state is nonuniform.
The Transition State
Following the notation of [20], we denote by u inst,m (x) the spatially varying, time-independent solution ("instanton state") to the zero-noise extremum condition Eq. (5), for any m in the range 0 ≤ m ≤ 1. The instanton state is (see Fig. 2) where dn(·|m) is the Jacobi elliptic function with parameter m, whose half-period equals K(m), the complete elliptic integral of the first kind [22] . Accordingly, imposition of the periodic boundary condition yields a relation between ℓ and m: The minimum length that can accommodate this condition is ℓ c = √ 2π, corresponding to m = 0. In this limit, dn(x|0) = 1, and the instanton state reduces to the uniform unstable state u u = −1. As m → 1 − , ℓ → ∞, and the instanton state becomes as shown in Fig. 3. As we will see, the instanton given by Eq. (6) is a saddle, or transition state, above ℓ c ; it will be seen to have one unstable direction (in addition to a zero mode resulting from translational symmetry). Physically, it can be thought of as a pair of domain walls, each of which separates the two uniform states over a region of finite extent.
For small but nonzero noise strength, the leading-order asymptotics of the escape rate from the metastable well are governed by the energy difference between the transition state, which is u_u = −1 below ℓ_c and u_inst,m(x) above, and the stable state u_s = +1. (A stability analysis justifying these identifications will be given in Secs. 4.1 and 4.2.) In the Kramers formula of Sec. 1, the activation barrier ∆W, which governs the exponential dependence of the escape rate on the noise strength, equals twice this energy difference. So below ℓ_c, ∆W/2E₀ = (4/3)ℓ. Above ℓ_c, we find
∆W/2E₀ = (24√2/5) β(m) E(m) + 2√2 K(m) [(1 + m)β(m) + 2/(3β(m)) − (5m³ + 33m² − 39m + 41) β(m)⁵/15], (8)
where E(m) is the complete elliptic integral of the second kind [22]. The activation barrier for the entire range of ℓ is shown in Fig. 4. As ℓ → ∞, ∆W/2E₀ → 24√2/5; this value is simply the energy of a pair of domain walls.
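The ℓ → ∞ value can be checked by direct quadrature of the energy density of the limiting profile u = 1 − 3 sech²((x − x₀)/√2); a short sketch (ours, in reduced units):

```python
# Sketch: check the l -> infinity barrier 24*sqrt(2)/5 by integrating the
# energy density of the limiting instanton u(x) = 1 - 3*sech(x/sqrt(2))**2
# relative to the stable state u_s = +1, with V(u) = u**3/3 - u.
import numpy as np
from scipy.integrate import quad

V = lambda u: u**3 / 3.0 - u

def density(x):
    u = 1.0 - 3.0 / np.cosh(x / np.sqrt(2.0))**2
    du = 3.0 * np.sqrt(2.0) * np.tanh(x / np.sqrt(2.0)) / np.cosh(x / np.sqrt(2.0))**2
    return 0.5 * du**2 + V(u) - V(1.0)

val, _ = quad(density, -40.0, 40.0)
print(val, 24.0 * np.sqrt(2.0) / 5.0)     # both ~ 6.788...
```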
Rate prefactor
For an overdamped multidimensional system driven by white noise, the rate prefactor Γ₀ can be computed as follows [21,23] (see also [2,14]). As in [20], let ϕ_s denote the stable state, and let ϕ_u denote the transition state; it will be assumed (as is the case here) that this state has a single unstable direction. Consider a small perturbation η about the stable state, i.e., ϕ = ϕ_s + η. Then to leading order η̇ = −Λ_s η, where Λ_s is the linearized zero-noise dynamics at ϕ_s. Similarly, Λ_u is the linearized zero-noise dynamics around ϕ_u. Then [21,23]
Γ₀ = (|λ_{u,1}|/2π) [det Λ_s / |det Λ_u|]^{1/2}, (11)
where λ_{u,1} is the only negative eigenvalue of Λ_u, corresponding to the direction along which the optimal escape trajectory approaches the transition state. In general, the determinants in the numerator and denominator of Eq. (11) separately diverge: they are products of an infinite number of eigenvalues with magnitude greater than one. However, their ratio, which can be interpreted as the limit of a product of individual eigenvalue quotients, is finite.
ℓ < ℓ_c
In this regime, both the stable and transition states are uniform, allowing for a straightforward determination of Γ₀ by direct computation of the determinants. Using reduced variables, the stable state is u_s = +1 and the transition state is u_u = −1. Linearizing the zero-noise dynamics about the stable state gives η̇ = −Λ[u_s]η, and similarly for a perturbation about the transition state, with Λ[u] = −d²/dx² + V″(u) = −d²/dx² + 2u. The spectrum of eigenvalues corresponding to Λ[u_s] is
λ^s_n = 2 + (2πn/ℓ)², n = 0, ±1, ±2, …, (14)
and the eigenvalues corresponding to Λ[u_u] are
λ^u_n = −2 + (2πn/ℓ)², n = 0, ±1, ±2, …. (15)
This simple linear stability analysis justifies the claims that u_s is a stable state and u_u a transition state, or saddle point. Over the interval [0, ℓ_c) all eigenvalues of Λ[u_s] are positive, while all but one of those of Λ[u_u] are. Its single negative eigenvalue λ^u_0 = −2 is independent of ℓ, and the corresponding eigenfunction, which is spatially uniform, is the direction in configuration space along which the optimal escape path approaches u_u.
Putting everything together, we find
Γ₀ = (1/π) sinh(ℓ/√2)/sin(ℓ/√2), (16)
which diverges at ℓ_c = √2π as expected; in this limit, Γ₀ ∼ const × (ℓ_c − ℓ)^{−1}. The divergence arises from the vanishing of the pair of eigenvalues λ^u_{±1} as ℓ → ℓ_c⁻ (each eigenvalue contributing a factor (ℓ_c − ℓ)^{−1/2}). This indicates the appearance of a pair of soft modes, resulting in a transversal instability of the optimal escape trajectory as the saddle point is approached.
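The interpretation of the determinant ratio as a limit of eigenvalue quotients can be made concrete numerically; the sketch below (ours) compares a truncated product over the spectra of Eqs. (14) and (15) with the closed form of Eq. (16):

```python
# Sketch: the prefactor below l_c as a convergent product of eigenvalue
# quotients (Eqs. (14)-(15)), compared with the closed form of Eq. (16).
import numpy as np

def gamma0_product(L, nmax=200000):
    n = np.arange(1, nmax + 1)
    k2 = (2.0 * np.pi * n / L) ** 2
    ratio = np.prod((2.0 + k2) / (k2 - 2.0))   # each |n| >= 1 appears twice
    lam_u0 = 2.0                               # |lambda^u_0|
    return (lam_u0 / (2.0 * np.pi)) * np.sqrt((2.0 / 2.0) * ratio**2)

def gamma0_closed(L):
    return np.sinh(L / np.sqrt(2.0)) / np.sin(L / np.sqrt(2.0)) / np.pi

L = 4.0                                        # any L < l_c = sqrt(2)*pi
print(gamma0_product(L), gamma0_closed(L))     # should agree closely
```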
ℓ > ℓ_c
Computation of the determinant quotient in Eq. (11) is less straightforward when the transition state is nonconstant. This occurs when ℓ > ℓ_c, where the transition state u_u is given by Eq. (6), and its associated linearized evolution operator is
Λ[u_u] = −d²/dx² + 2 u_inst,m(x). (17)
Evaluation of Γ₀ therefore requires determination of the eigenvalue spectrum of Λ[u_u] with periodic boundary conditions.
An additional complication follows from the infinite translational degeneracy of the instanton state (i.e., invariance with respect to the choice of x₀). This implies a soft collective mode in the linearized dynamical operator Λ[u_u] of Eq. (17), resulting in a zero eigenvalue. Removal of this zero eigenvalue can be achieved with the McKane-Tarlie regularization procedure [18] for functional determinants.
That procedure is implemented as follows (see [18] for details). Let y₁(x, x₀; m) and y₂(x, x₀; m) denote two linearly independent solutions of Λ[u_u] y_i = 0, i = 1, 2. Let det′ Λ refer to the functional determinant of the operator Λ with the zero eigenvalue removed.
Then, with periodic boundary conditions, the regularization procedure yields a closed formal expression for det′ Λ[u_u] in terms of the solutions y_i and the zero-mode norm ⟨y₁|y₁⟩, evaluated at an arbitrary point z (Eq. (18)); the explicit solutions involve β(m) = (m² − m + 1)^{−1/4} and E(·|m), the incomplete elliptic integral of the second kind [22]. Inserting these solutions into Eq. (18) yields the denominator of the determinant ratio, Eq. (21). Using a similar procedure (see the Appendix of [18]), we find the corresponding numerator for the determinant ratio in Eq. (11), Eq. (22), consistent with the numerator of Eq. (16) obtained through direct computation of the eigenvalue spectrum. We emphasize again, however (cf. the discussion above Eq. (16)), that it is only the ratio of the determinants that is sensible, not the individual determinants themselves: these each diverge for every ℓ. In contrast, the expressions in Eqs. (21) and (22) are well behaved for all finite ℓ > ℓ_c (m > 0), but still separately diverge in the ℓ → ∞ (m → 1) limit. Nevertheless, here also the divergences cancel, giving the finite ratio quoted in Eq. (23). We next compute the eigenvalue λ_{u,1} corresponding to the unstable direction. With the substitution w = β(m)(x − x₀)/√2, the eigenvalue equation Λ[u_u]η = λη becomes
d²η/dw² + [12 dn²(w|m) − E] η = 0, (24)
where E = −2λ/β(m)² + 4(2 − m). Using the identity dn²(z|m) = 1 − m sn²(z|m), we observe that Eq. (24) is the l = 3 Lamé equation [24,25], a Schrödinger equation with a periodic potential of period 2K(m). Its Bloch wave spectrum consists of four energy bands, and its eigenfunctions can be expressed in terms of Lamé polynomials [24]. A fuller discussion, especially for higher l-values, is given in [26]; for our purposes here we do not need to utilize the full machinery of the Hermite solution (a detailed treatment is given in [24]).
It is easy to check that the eigenfunction η^u_1 with the smallest eigenvalue is
η^u_1(w) ∝ dn(w|m) [5 dn²(w|m) − 4 + 2m + √(4m² − m + 1)], (25)
with corresponding eigenvalue
λ_{u,1} = (β(m)²/2) [m − 2 − 2√(4m² − m + 1)], (26)
which approaches −2 as m → 0, in agreement with the single negative eigenvalue of Eq. (15).
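Equations (24)-(26) can be confirmed by discretizing Λ[u_u] directly; a short sketch (ours) using finite differences with periodic boundary conditions (grid size and m chosen only for illustration):

```python
# Sketch: diagonalize Lambda[u_u] = -d^2/dx^2 + 2*u_inst(x) (Eq. (17)) with
# periodic boundary conditions. Expected: a single negative eigenvalue,
# matching Eq. (26), and a (near-)zero translational mode.
import numpy as np
from scipy.special import ellipj, ellipk

m, N = 0.9, 1500
beta = (m**2 - m + 1.0) ** -0.25
L = 2.0 * np.sqrt(2.0) * ellipk(m) / beta            # Eq. (7)
x = np.arange(N) * L / N
sn, cn, dn, _ = ellipj(beta * x / np.sqrt(2.0), m)
u = beta**2 * ((2.0 - m) - 3.0 * dn**2)              # Eq. (6), x0 = 0

h = L / N
A = np.diag(2.0 / h**2 + 2.0 * u)
A -= np.diag(np.ones(N - 1), 1) / h**2 + np.diag(np.ones(N - 1), -1) / h**2
A[0, -1] = A[-1, 0] = -1.0 / h**2                    # periodic wrap-around
evals = np.sort(np.linalg.eigvalsh(A))

lam_u1 = 0.5 * beta**2 * (m - 2.0 - 2.0 * np.sqrt(4.0 * m**2 - m + 1.0))
print(evals[:3], lam_u1)   # lowest eigenvalue ~ lam_u1, next ~ 0 (zero mode)
```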
As noted in [20], the translational zero mode gives rise to an overall factor proportional to ℓ, indicating that the physical quantity of interest above ℓ_c is the transition rate per unit length. The general procedure for including this correction is described by Schulman [14].
(Our case differs by a factor of 2 from his due to the lack of symmetry in our model.) The net result is to multiply the prefactor by the zero-mode factor ζ given in Eq. (27). The most important qualitative changes are the ǫ^{−1/2} factor, leading to a non-Arrhenius transition rate above ℓ_c, and the effect on the behavior as ℓ → ℓ_c⁺; both will be discussed in more detail below.
The above discussion also makes clear that it is not necessary to separately evaluate ⟨y₁|y₁⟩. For completeness' sake, however, we present it as well; its evaluation is straightforward and yields Eq. (28). Putting everything together, we find the rate-per-unit-length prefactor for ℓ > ℓ_c given in Eq. (29). The prefactor over the entire range of ℓ is plotted in Fig. 5. The prefactor divergence has a critical exponent of 1 as ℓ → ℓ_c⁻. Above ℓ_c the prefactor is non-Arrhenius everywhere, and the vertical axis is rescaled to account for the singular ǫ^{1/2} behavior.
The rescaled prefactor above ℓ c as a function of m is shown in Fig. 6, in order to indicate more clearly the m → 0 (ℓ → ℓ c ) behavior.
The behavior of the rate prefactor Γ₀ for all ℓ > ℓ_c is unusual in two ways. First, it is non-Arrhenius; that is, it scales as ǫ^{−1/2} for all ǫ → 0. Second, it does not formally diverge as ℓ → ℓ_c⁺, as seen in Fig. 6 (in fact, the divergence is present but "masked", as discussed below). Both of these features are consequences of the translation invariance of the periodic boundary conditions used here, and would not appear if translation-noninvariant boundary conditions, such as Dirichlet or Neumann, were used. (See, for example, Fig. 3 of [20].) There is, however, boundary condition-independent anomalous behavior exactly at the critical length. By boundary condition-independent, we mean behavior that is seen in all four of the most commonly used boundary conditions in this type of problem, namely periodic, antiperiodic, Dirichlet, and Neumann; all were considered for symmetric quartic potentials in [27]. In the present case of periodic boundary conditions, the removal of the zero mode that is present for all ℓ > ℓ_c renormalizes the prefactor by the factor ζ in Eq. (27). This renormalization masks the divergence of the determinant ratio as ℓ → ℓ_c⁺, because the factor ζ includes the Jacobian of the transformation [14] from the translation-invariant normal mode to the variable x₀; this in turn equals the norm ⟨y₁|y₁⟩, which vanishes as m → 0. The crucial point is that a divergence is still embedded within the prefactor, in the sense that the square root of the determinant ratio diverges with a critical exponent of 1/2 as ℓ → ℓ_c⁺. Upon closer examination, this arises from the lowest stable eigenvalue, λ_{u,3}, of Λ[u_u] approaching zero as ℓ → ℓ_c⁺, in a similar fashion to the eigenvalue behavior below ℓ_c (cf. n = ±1 in Eq. (15)). This eigenvalue and its corresponding eigenfunction η_{u,3} are given by Eqs. (30) and (31). Fig. 7 shows the lowest three eigenvalues of the operators Λ[u_u^<] and Λ[u_u^>], where u_u^< (u_u^>) indicates the transition state below (above) ℓ_c. This figure illustrates the evolution of the eigenvalues (λ^<_{±1} and λ^>_3) that control the formal prefactor divergence as ℓ passes through ℓ_c. We note in particular the merging of the second and third eigenvalues of Λ[u_u^>] as ℓ → ℓ_c⁺, consistent with the double degeneracy of the corresponding eigenfunction when ℓ < ℓ_c. The eigenvalues are everywhere continuous. Fig. 8 displays the behavior of the full determinant ratio above ℓ_c.
Interpretation of the Prefactor Divergence
The formal divergence of the Kramers rate prefactor at a critical length ℓ_c (cf. Fig. 5) requires interpretation. It is interesting that a prefactor divergence was also found [28,29] in a completely different set of systems, namely spatially homogeneous (i.e., zero-dimensional) systems out of equilibrium, in which detailed balance is not satisfied in the stationary state. That divergence arose for an entirely different reason: the appearance of a caustic singularity in the vicinity of the most probable exit path as a parameter in the drift field varied. The caustic singularity arises from the unfolding of a boundary catastrophe; a detailed analysis is given in [31]. In contrast, the problem considered here is that of a spatially extended system in equilibrium, so no such singularities can be present. Moreover, no parameter in the stochastic differential equations describing the time evolution of the system is being varied in the case under discussion here; rather, the variation is in the length of the interval on which the field is defined. The "phase transitions" in the stochastic exit problem in the two classes of systems are therefore physically unrelated.
What does it mean for the prefactor to (formally) diverge? In fact, at no lengthscale is the true prefactor infinite, for any ǫ > 0. Indeed, given that the analysis presented here is, strictly speaking, valid only asymptotically as ǫ → 0, the escape rate is always small where the above results are applicable. What the formal divergence of the prefactor does mean is that the escape behavior becomes increasingly anomalous as ℓ c is approached, and it is asymptotically non-Arrhenius exactly at ℓ c .
That is, when ℓ = ℓ c the true rate prefactor should scale as a (negative) power of ǫ for all ǫ → 0. As in [28,29], this can be treated quantitatively by studying the 'splayout' of the 'tube' within which fluctuations are largely confined, as the saddle is approached.
'Splayout' here simply means that the fluctuational tube width, which for ℓ ≠ ℓ_c is O(ǫ^{1/2}), becomes O(ǫ^α), with α < 1/2, as the saddle is approached when ℓ = ℓ_c. In the model studied in [28,29], the Lagrangian manifold comprising optimal fluctuational trajectories has a more complicated behavior than in the model under study here. As a result, in [28,29] fluctuations near the saddle occur on all lengthscales, while in the present case fluctuations near the saddle occur on a definite lengthscale, but one larger than O(ǫ^{1/2}), leading to non-Arrhenius behavior for all ǫ → 0. Details will be presented in [30].
Our main interest here is in the region close to ℓ_c, where the rate prefactor Γ₀ grows anomalously large (but remains everywhere finite for all ℓ strictly away from ℓ_c). As long as the relevant eigenvalues remain bounded away from zero, the prefactor formula applies, but in an ǫ-region driven to zero as ℓ → ℓ_c by the rate of vanishing of the eigenvalue(s) of smallest magnitude. Therefore, as ℓ → ℓ_c from either side, the Kramers rate formula applies when ǫ scales to zero at least as fast as |ℓ − ℓ_c|^{1/2} (of course, the constraints already mentioned in Sec. 1 must continue to hold as well). More precisely, the criterion considered here (which is necessary but not a priori sufficient) for Arrhenius behavior to hold on either side of ℓ_c is that the noise strength ǫ be small compared to λ_m ⟨η_m|η_m⟩, where λ_m is the eigenvalue of smallest magnitude and η_m its corresponding eigenfunction(s). This criterion arises from the condition, used in the derivation of Eq. (11), that the noise strength be small compared to the size of quadratic fluctuations about the extremal action. For ℓ slightly below ℓ_c, these quantities are given in Sec. 4.1, and above ℓ_c, by Eqs. (30) and (31). The computation is straightforward, and the resulting ǫ-region is sketched in Fig. 9 (in the dimensionless units used here, the coefficients of the scaling terms are of O(1)).
The result is that, for fixed ℓ close to ℓ_c, there should be a crossover from non-Arrhenius to Arrhenius behavior at sufficiently weak noise strength (cf. [29]). The figure represents a type of 'Ginzburg criterion' that describes, at a given ℓ near ℓ_c, how far down as ǫ → 0 the non-Arrhenius behavior persists. It should be emphasized that Fig. 9 sets an upper bound on the scaling of the region of ǫ vs. |ℓ − ℓ_c| below which asymptotic Arrhenius behavior sets in. It would also be interesting to consider the behavior at very small but fixed noise as ℓ increases through ℓ_c. Here one would observe a crossover from Arrhenius to non-Arrhenius behavior and back again as ℓ passes through the critical region. An interesting problem for future consideration is to analyze this phenomenon in greater quantitative detail. Figure 9: A sketch of the scaling of the regions where the Arrhenius prefactor formulae given by Eqs. (16) and (29) are valid, when ℓ is very close to ℓ_c. For fixed ℓ, ǫ must be small enough so that it lies below the shaded region for these formulae to apply. Non-Arrhenius behavior is expected in the shaded region. Because this figure is intended to illustrate only the rate of scaling of ǫ with |ℓ − ℓ_c| for which the Kramers rate formula is valid, the axes are unmarked (except for ℓ = ℓ_c, where the ǫ-range has shrunk to zero).
To summarize: strictly away from ℓ c , the prefactor formulae Eqs. (16) and (29) hold (corresponding to Arrhenius behavior of the rate), but in an increasingly narrow range of ǫ as ℓ c is approached. A crossover from non-Arrhenius to Arrhenius behavior as ǫ → 0 should be observed, along a boundary that scales as shown in Fig. 9. Strictly at ℓ c , the formulae do not hold: the prefactor is finite, but acquires a power-law (in ǫ) character. That is, the rate behavior is non-Arrhenius all the way down to ǫ → 0. This (boundary condition-independent) non-Arrhenius behavior at criticality should be distinguished from the (noncritical) non-Arrhenius behavior strictly above ℓ c that appears only when translation-invariant boundary conditions, such as periodic, are used.
Asymmetric quartic potentials
In this section we will consider more general asymmetric quartic potentials, of the general form shown in Fig. 10.
We will consider only the small-ℓ regime, and show that a transition at finite ℓ c exists with a divergence of the prefactor as ℓ → ℓ c .
Conclusion
We have found an explicit solution for the Kramers escape rate in an asymmetric φ³ field theory of the Ginzburg-Landau form. This result, together with the brief discussion in Sec. 5 of more general asymmetric potentials, suggests that the critical behavior found in [20] might hold for a more general class of models than those with a high degree of symmetry. How widespread the transition phenomenon is remains uncertain, but it appears to hold at least for arbitrary smooth potentials with terms up to and including φ⁴. It would be interesting to find models with other types of behavior. One interesting possibility, discussed in [20], is a class of models that display a first-order transition: for example, a discontinuity in the derivative of the activation barrier height with respect to the interval length, at a critical length. A possible candidate for such a model is the sixth-degree Ginzburg-Landau potential of Kuznetsov and Tinyakov [19], but a detailed analysis of its transition behavior remains to be done.
Sensitization and cross‐reactivity patterns of contact allergy to diisocyanates and corresponding amines: investigation of diphenylmethane‐4,4′‐diisocyanate, diphenylmethane‐4,4′‐diamine, dicyclohexylmethane‐4,4′‐diisocyanate, and dicyclohexylmethane‐4,4′‐diamine
Summary Background Isocyanates are used in polyurethane production. Dermal exposure to isocyanates can induce contact allergy. The most common isocyanate is diphenylmethane diisocyanate used for industrial purposes. The isomer diphenylmethane‐4,4′‐diisocyanate (4,4′‐MDI) is used in patch testing. Diphenylmethane‐4,4′‐diamine (4,4′‐MDA) is its corresponding amine. Concurrent reactions to 4,4′‐MDI and 4,4′‐MDA have been reported, as have concurrent reactions to 4,4′‐MDI and dicyclohexylmethane‐4,4′‐diisocyanate (4,4′‐DMDI). Objectives To investigate the sensitization capacities and the cross‐reactivity of 4,4′‐MDI, 4,4′‐MDA, 4,4′‐DMDI, and dicyclohexylmethane‐4,4′‐diamine (4,4′‐DMDA). Methods The guinea‐pig maximization test (GPMT) was used. Results The GPMT showed sensitizing capacities for all investigated substances: 4,4′‐MDI, 4,4′‐MDA, 4,4′‐DMDI, and 4,4′‐DMDA (all p < 0.001). 4,4′‐MDI‐sensitized animals showed cross‐reactivity to 4,4′‐MDA (p < 0.001) and 4,4′‐DMDI (all p < 0.05). 4,4′‐MDA‐sensitized animals showed cross‐reactivity to 4,4′‐DMDA (p = 0.008). Conclusion All of the investigated substances were shown to be strong sensitizers. Animals sensitized to 4,4′‐MDI showed cross‐reactivity to 4,4′‐MDA and 4,4′‐DMDI, supporting previous findings in the literature. The aromatic amine 4,4′‐MDA showed cross‐reactivity to the aliphatic amine 4,4′‐DMDA.
Isocyanate handling is a well-known occupational health hazard, mainly because of the adverse effects on the respiratory tract (1,2). Strict rules on monitoring of air exposure limits apply to isocyanate work. Although skin exposure has been suggested to be an important route to diisocyanate asthma (3,4), dermal exposure and the risk of developing contact allergy have not gained as much attention as the risks associated with airway exposure.
This prompted an investigation of the pattern of cross-reactivity between isocyanates and their corresponding amines. In this context, the term cross-reactivity refers to the situation in which an individual initially sensitized to one chemically defined substance (A) reacts to a second chemically defined substance (B) that he or she has not been in previous contact with. The first compound is the primary sensitizer, and the other is the secondary sensitizer (23). Cross-reactivity can occur because A and B are structurally similar, or because A is metabolized to a compound that is similar to B and vice versa, or because A and B are both metabolized into similar compounds (24). Cross-reactivity does not need to go in both directions; that is, if A is a primary sensitizer giving rise to a reaction to the secondary sensitizer B, it does not automatically imply that primary sensitization to B will also cause a reaction to A.
Materials and Methods
The study was approved by the Lund Ethical Committee on Animal Experiments, Lund, Sweden, and conducted in accordance with ethical standards (approval number M 340-12).
Guinea-pig maximization test
The GPMT was essentially performed according to the original description (25)(26)(27), which is also the method described in OECD test guideline 406 that can be used to classify skin sensitizers according to the Globally Harmonized System of Classification and Labelling of Chemicals (GHS) (28). However, in order to standardize the test and objectify the evaluation of the patch test reactions, some modifications were made regarding, for example, the statistical calculations used to evaluate potency, blind readings, induction concentrations, and the introduction of a positive control group (29)(30)(31)(32). The background for the introduction of these modifications can be found in Appendix S1.
Topical irritancy
Before sensitization and cross-reactivity patterns can be assessed, the topical irritancy thresholds must be determined, in order to ensure that the chosen test concentrations do not give rise to irritant reactions. This was performed by applying different concentrations of each of the investigated substances intended for induction as a closed patch test for 2 days on both the neck and the flank of one side of 4 animals. All animals were pretreated with FCA. In order to maximize the number of test concentrations that could be evaluated, the animals were tested first on one side of the body and then on the other side ( Fig. 1). Concentrations that did not cause irritation were chosen for topical induction and elicitation (Table 2).
Concentrations
Equimolar concentrations were used for all substances used in the study, with the exception of sensitization series A and C ( Table 2). In series A, as a precautionary measure when testing with 4,4 ′ -DMDI, because it was suspected of causing irritant reactions in the animals, it was tested at a non-equimolar concentration in relation to the test substance in challenge I. This also determined the concentration for the rest of the substances in challenge II. 4,4 ′ -DMDI was later tested at two different concentrations in challenge II in series B, one of which was equimolar. However, as a precaution, the non-equimolar concentration for challenge I was used in series C. The concentrations used for induction and challenge are shown in Table 2. The use of equimolar concentrations constitutes a modification to OECD test guideline 406, in which it is stated that the concentration used for topical induction should be the highest that causes mild-to-moderate skin irritation, and that the concentration in the challenge should be the highest non-irritant concentration. The use of equimolar concentrations enables better comparisons in cross-reactivity studies, but may result in an underestimation of the sensitizing potential (for more information regarding the modification of test concentrations, see Appendix S1).
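Because equimolar testing is central to the cross-reactivity comparisons, the conversion is worth making explicit. The sketch below (ours; molar masses taken from standard references, and the 1.0% (wt/vol) reference concentration chosen purely for illustration) computes the (wt/vol) concentrations of the other substances that are equimolar to a given 4,4′-MDI preparation:

```python
# Sketch: equimolar (wt/vol) patch test concentrations relative to a
# reference preparation. Molar masses (g/mol) from standard references;
# the 1.0% reference concentration is an illustrative choice.
molar_mass = {
    "4,4'-MDI": 250.25,    # C15H10N2O2
    "4,4'-MDA": 198.26,    # C13H14N2
    "4,4'-DMDI": 262.35,   # C15H22N2O2
    "4,4'-DMDA": 210.36,   # C13H26N2
}

ref, ref_conc = "4,4'-MDI", 1.0          # % (wt/vol)
for name, mw in molar_mass.items():
    print(f"{name}: {ref_conc * mw / molar_mass[ref]:.2f}% (wt/vol)")
```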
Induction
Twenty-four test animals, 12 control animals, and 6 positive control animals were used for induction in each of the six sensitization series (Table 3), according to the following scheme, which is also described in Fig. 2.
Day 0. All animals were shaved on the neck, and three intradermal injections in a row on each side of the shoulder were then given, resulting in a total of six injections. For the test animals, the following injections were made in duplicate: (i) 0.1 ml of 40% FCA in water (wt/vol); (ii) 0.1 ml of the test substance (wt/vol) in propylene glycol or liquid paraffin; and (iii) 0.1 ml of a mixture of the test substance and FCA in propylene glycol or liquid paraffin, in which the concentration of the test substance was the same as in (ii) and the concentration of FCA was the same as in (i). For (ii) and (iii), the vehicle varied according to whether the sensitizing substance was an isocyanate or an amine, as isocyanates can react with propylene glycol, which is normally the vehicle of choice. For sensitization series A, C, E, and F, liquid paraffin was used, and for sensitization series B and D, propylene glycol was used. For the control animals, the following injections were made in duplicate: (i) 0.1 ml of 40% FCA in water (wt/vol); (ii) 0.1 ml of propylene glycol; and (iii) 0.1 ml of 40% FCA in propylene glycol (wt/vol). For the positive control animals, the following injections were made in duplicate: (i) 0.1 ml of 40% FCA in water (wt/vol); (ii) 0.1 ml of 25% 2-MP in propylene glycol (wt/vol); and (iii) 0.1 ml of 25% 2-MP and 40% FCA in propylene glycol (wt/vol). Day 7. Topical induction was performed with the test substance, in a vehicle chosen according to the nature of the sensitizing substance, on a 2 × 4-cm piece of filter paper placed on adhesive bandages. The patches were covered with impermeable plastic adhesive tape, and held in place with adhesive bandages. The patches were left in place for 48 h. The control animals were patch tested with the vehicle alone in the same manner as the test animals and the positive controls.
Challenge
The challenge procedure consisted of two parts: challenge I, in which the sensitization rate of the test substance used in the induction was assessed; and challenge II, in which cross-reactivity to other substances was assessed. Challenges I and II were performed at the same time but on different flanks of the animal; challenge I was performed on the left flank and challenge II on the right flank, according to the scheme shown in Fig. 2. Day 21. In challenge I, 12 test animals were patch tested with the induction substance in acetone or ethanol, depending on whether it was an isocyanate or an amine, on both the cranial and caudal patch. Six + 6 test animals were challenged with the induction substance on either the cranial or the caudal patch, and the vehicle (acetone or ethanol) alone on the other patch. Six of the control animals were tested with the induction substance on both patches, and 3 + 3 animals were patch tested with the induction substance on either the cranial or the caudal patch, and the vehicle alone on the other patch. Two of the positive control animals were tested with 2-MP on both patches, and 2 + 2 animals were patch tested with 2-MP on either the cranial or the caudal patch, and the vehicle alone on the other patch. Al-test® on a Durapore® adhesive band was used for patch testing. Thirty microlitres of test solution was applied. The patches were covered with impermeable plastic adhesive tape, and held in place with adhesive bandages. Challenge II (right flank, six patches) was performed on 24 test animals and 12 control animals by patch testing with putatively cross-reacting substances. The distribution of the positions of the test substances was based on a Latin square table. In this article, the results of sensitization with 4,4′-MDI, 4,4′-MDA, 4,4′-DMDI and 4,4′-DMDA and their cross-reactivity patterns are described. Cross-reactivity to the substances tested on the remaining two patches in challenge II is described elsewhere (Hamada et al., manuscript in preparation 2017). Evaluation, day 23. The minimum criterion for a positive reaction was confluent erythema covering the test area. All tests were evaluated blindly 24 h after the patch tests had been removed, that is, 48 h after test application. First, the left flanks of all the animals were read; then, still blindly and without knowledge of the test outcome of the left side, the right flanks of the test animals and the control animals were read (Fig. 2).
Statistics
The proportion of positive animals in the test group was compared with the proportion of positive animals in the control group. Among the animals challenged with the induction substance on both the cranial and caudal patches (12 test animals and 6 negative control animals; Fig. 2), only one of the patches, chosen in advance, was included.
Statistical significance for the sensitizing capacity and cross-reactivity was calculated with a one-sided Fisher's exact test. When significant values (p < 0.05) were obtained, the compound was considered to be a sensitizer or to show cross-reactivity to other compounds on the basis of set criteria (p < 0.001, strong; p < 0.01, moderate; p < 0.05, weak).
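This comparison is straightforward to reproduce with standard statistical software. A minimal sketch (ours; the test counts correspond to the 18 of 24 positive animals in the first 4,4′-MDI series described below, and the assumed 0 of 12 positive controls is an illustrative choice):

```python
# Sketch: one-sided Fisher's exact test comparing the proportion of positive
# test animals with that of controls. Counts are illustrative (18/24 test
# animals positive, assuming 0/12 controls positive).
from scipy.stats import fisher_exact

table = [[18, 24 - 18],   # test group: positive, negative
         [0, 12 - 0]]     # control group: positive, negative
odds, p = fisher_exact(table, alternative="greater")
print(p)                  # p < 0.001 -> classified as a strong sensitizer
```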
Results
Six different sensitization series were performed on different occasions with the same method during the study period; the results are shown in Tables 3 and 4. In Fig. 3, the cross-reactivity patterns for sensitization series B, C, D and F are compiled. In all sensitization series, at least 4 of the positive control animals showed positive reactions, indicating good performance of the method without negative influences resulting from, for example, sick animals or adjuvant with impaired effectiveness (Table 3).
Discussion
The GPMT is a well-recognized method used to detect contact sensitizers and their cross-reactivity patterns.
The method was first developed by Magnusson and Kligman in 1969, and has been described in several articles (25,26,27,33). It is also one of two guinea-pig tests described in OECD test guideline 406 (the other being the non-adjuvant Buehler test) that can be used to classify skin sensitizers according to the GHS (28). The GHS has been implemented in the EU by Regulation (EC) No. 1272/2008 on classification, labelling and packaging of substances and mixtures (the CLP regulation) (33), and thus results from GPMT studies can affect the classification of chemicals and mixtures within the EU. This study was not performed for regulatory purposes, but rather for diagnostic and clinical reasons, and some changes from the original method, as suggested by Bruze (29,34), were made in order to standardize the test and make the evaluation of the patch test reactions objective. In Appendix S1, all changes from the original method and the rationale for making them are described.
Sensitizing capacity
In order to elicit allergic contact dermatitis, a chemical must have physicochemical characteristics suitable for penetration of the stratum corneum. Once in the viable epidermis, it must be able to form reaction products with proteins for the elicitation of an immune response. Thus, contact allergens are either protein-reactive in themselves or are metabolized in the skin into protein-reactive species (35). Isocyanates are theoretically potent contact allergens, because they possess electrophilic carbons that can be readily attacked by nucleophilic atoms present on macromolecules in the skin. However, it has been proposed that their reactivity is so high that they might polymerize before they penetrate the skin (36). Amines are lipophilic and penetrate the skin quite readily. However, in order to react with proteins in the skin, the amines need to be metabolized.
In the literature, there are some animal studies investigating the sensitizing capacity of 4,4′-MDI. In 1976, Duprat et al. used the GPMT to study the sensitizing capacity of 4,4′-MDI, and concluded from the proportion of test animals that reacted to 4,4′-MDI 10% pet. that it is a strong allergen. In general, Duprat et al. used higher concentrations than in the present study, with intradermal injections of 5.0% 4,4′-MDI in olive oil and epicutaneous sensitization with 25.0% 4,4′-MDI in pet. In the study presented here, there were apparent difficulties in sensitizing with 4,4′-MDI. It was used as an induction substance on three different occasions. On the first occasion, it was found to be a strong sensitizer, with 18 of 24 test animals reacting to 1% in acetone (p < 0.001). However, in this first sensitization series, there was a suspicion that 4,4′-DMDI might cause irritant reactions if patch tested equimolar to 1% 4,4′-MDI. Therefore, the concentrations of 4,4′-MDI in challenge I and challenge II were not the same: the concentration in challenge II was lower, so that we were able to patch test equimolar to a 'safe' concentration of 4,4′-DMDI. As expected, a lower proportion of test animals were positive in challenge II, in which they were patch tested with a lower concentration of 4,4′-MDI than in challenge I (18 of 24 positive animals in challenge I versus 7 of 24 animals in challenge II). In the second sensitization series, two concentrations of 4,4′-DMDI were investigated, and it was concluded that 1% did not cause irritant reactions. Hence, a new series was performed to induce with 4,4′-MDI and perform challenge II with concentrations equimolar to those in challenge I. On this occasion, the induction failed, and only 2 of 24 test animals were sensitized. As the positive controls reacted, there were no obvious reasons for the failure. 4,4′-MDI was used as an induction substance for a third time. On this occasion, 8 of 24 test animals (p < 0.05) reacted, making it a weak allergen according to the set criteria.
In order to explain the different results, all steps in the study procedure were carefully reviewed. The only factor found that could have varied on the three occasions was, possibly, the concentration of 4,4′-MDI in the preparation when it was mixed with liquid paraffin and FCA to be used for the intradermal injections. Chemical analysis, presented elsewhere (Hamada et al., manuscript in preparation 2017), showed that 4,4′-MDI readily reacts with constituents in FCA, and that the injected concentration can vary according to the mixing procedure, the duration between preparation and injection, and the storage temperature. 4,4′-MDA, 4,4′-DMDI and 4,4′-DMDA were shown to be potent sensitizers. This is in accordance with clinical observations (18)(19)(20)(22). In fact, 4,4′-MDA is known to sensitize patients when tested at 0.5% pet. (37,38).
Notably, all of the investigated substances fulfil the criteria for classification as subcategory 1A skin sensitizers according to the GHS and the CLP regulation, as ≥ 60% of the test animals responded at an intradermal induction dose of > 0.1% to ≤ 1%. Admittedly, 4,4′-MDI failed to induce sensitization in sensitization series E, and would only have been classified as a subcategory 1B skin sensitizer on the basis of the results from series F, as only 33% of the test animals responded at an intradermal induction dose of 0.1% to ≤ 1%. However, as suggested by Basketter et al., the higher-potency category should apply when multiple animal datasets lead to different categorization of the same substance (39).
Cross-reactivity
The results obtained in this study correspond to the clinical observations made in other studies, namely that 4,4′-MDA is a marker for 4,4′-MDI allergy, as animals primarily sensitized to 4,4′-MDI also react to the amine. However, in the clinical situation it is doubtful whether 4,4′-MDA is a good screening substance for 4,4′-MDI. In 2012, Engfeldt et al. published the results of consecutive patch testing in Belgium and Sweden with 4,4′-MDA and 4,4′-MDI (37). They concluded that positive reactions to 4,4′-MDA seem to be associated with contact allergy to p-phenylenediamine (PPD). As PPD is one of the most common contact allergens in the baseline series, this possible cross-reactivity might make 4,4′-MDA too blunt a tool to single out contact allergy to 4,4′-MDI; a positive reaction might say more about the patient's hair-dyeing habits than about his or her exposure to isocyanates. In order to give advice on the use of 4,4′-MDA as a marker for 4,4′-MDI in a patch test series, further exploration of the relationship between 4,4′-MDA and PPD is needed.
However, for an individual who is primarily sensitized to 4,4 ′ -MDI, the fact that cross-reactivity to 4,4 ′ -MDA can occur can be of clinical relevance. 4,4 ′ -MDA is used as a hardener in PUR production, so a contact allergy to 4,4 ′ -MDI in a worker at a PUR plant might lead to multiple exposure sources if 4,4 ′ -MDA is used as a curing agent. Furthermore, 4,4 ′ -MDA is also used as hardener in other plastic applications, such as epoxy, and possible exposure to 4,4 ′ -MDA needs to be taken into consideration before a worker is reassigned other tasks because of a confirmed contact allergy to 4,4 ′ -MDI. 4,4 ′ -MDA is also a known rubber additive, and it is possible that an individual who has acquired contact allergy to 4,4 ′ -MDI at work might react to rubber items later in life.
In the present study, animals primarily sensitized to 4,4 ′ -MDI showed cross-reactivity to the secondary allergen 4,4 ′ -DMDI. However, when 4,4 ′ -DMDI was the primary sensitizer, no cross-reactivity to 4,4 ′ -MDI was shown. There are, to our knowledge, no reports in the literature describing concurrent reactions between the two isocyanates. Instead, concurrent reactions between 4,4 ′ -MDA and 4,4 ′ -DMDI have been described (18)(19)(20). Possibly, the lack of concurrent reactions between the two isocyanates stems from the fact that commercially available patch test preparations of 4,4 ′ -MDI have a high risk of false-negative reactions (5). The reason for the suggested cross-reactivity might seem evident when the two-dimensional structures of the isocyanates are considered. However, the spatial orientation of cyclohexane is quite different from that of the aromatic ring. Finally, it was shown that animals sensitized to 4,4 ′ -MDA also showed cross-reactivity to 4,4 ′ -DMDA. Concurrent reactions between 4,4 ′ -MDA, 4,4 ′ -DMDA and 4,4 ′ -DMDI have been described in 2 patients working at a medical company where a lacquer based on 4,4 ′ -DMDI was used (20). As with the isocyanates, the spatial orientation of the cyclohexane ring versus the benzene ring differs between the two amines.
Conclusions
All investigated substances were shown to be sensitizers. Regarding the evaluation of cross-reactivity, the previously noted clinical observation that 4,4 ′ -MDA is a marker for 4,4 ′ -MDI was verified, as animals sensitized to the isocyanate also reacted to the amine. Furthermore, animals sensitized to 4,4 ′ -MDI cross-reacted to 4,4 ′ -DMDI, and animals sensitized to 4,4 ′ -MDA cross-reacted to 4,4 ′ -DMDA.
Global Persistence in Directed Percolation
We consider a directed percolation process at its critical point. The probability that the deviation of the global order parameter with respect to its average has not changed its sign between 0 and t decays with t as a power law. In space dimensions d >= 4 the global persistence exponent theta_p that characterizes this decay is theta_p = 2, while for d < 4 its value is increased to first order in epsilon = 4-d. Combining a method developed by Majumdar and Sire with renormalization group techniques we compute the correction to theta_p to first order in epsilon. The global persistence exponent is found to be a new and independent exponent. We finally compare our results with existing simulations.
Directed Percolation
At the initial time, A particles are placed randomly with density ρ₀ on the sites of a d-dimensional hypercubic lattice. They perform independent simple random walks with a diffusion constant λ. Multiple occupancy is allowed. The A particles undergo three reaction processes: coagulation upon encounter, branching, and spontaneous death,
A + A → A (rate k), A → 2A (rate k′), A → ∅ (rate γ). (1.1)
As the branching rate k′ is decreased below a threshold value k′_c (equal to γ in mean-field), the steady state of this system exhibits a continuous transition from a state in which a finite positive density of A's survives indefinitely to an absorbing A-free state. The order parameter of the transition is ρ_A, the average of the local density of A's.
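To illustrate the process of Eq. (1.1), a minimal lattice simulation can be written in a few lines; the sketch below (ours, with arbitrary illustrative rates and time step, and not the simulations of [11] discussed later) updates site occupation numbers by diffusion, branching, death and on-site coagulation:

```python
# Sketch: minimal stochastic simulation of the reaction-diffusion process
# Eq. (1.1) on a 1d ring, with site occupations crudely capped at zero.
# Rates, time step, and system size are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
Lsites, steps, dt = 1000, 400, 0.05
lam, k, kprime, gamma = 1.0, 1.0, 0.7, 0.7   # diffusion, coagulation, branching, death
n = rng.poisson(0.5, Lsites)                  # initial density rho_0 = 0.5

for _ in range(steps):
    # diffusion: each particle hops left or right with probability lam*dt
    hops = rng.binomial(n, lam * dt)
    left = rng.binomial(hops, 0.5)
    n = n - hops + np.roll(left, -1) + np.roll(hops - left, 1)
    # branching A -> 2A and spontaneous death A -> 0
    n += rng.binomial(n, kprime * dt) - rng.binomial(n, gamma * dt)
    # coagulation A + A -> A among pairs on the same site
    pairs = n * (n - 1) // 2
    n = np.maximum(n - rng.binomial(pairs, k * dt), 0)

print(n.mean())   # near threshold (kprime ~ gamma) the density decays slowly
```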
We have used here the language of the Schlögl reaction-diffusion process to describe directed percolation, as in [2]. Various alternative formulations exist ([3,4]). Furthermore, the scope of directed percolation reaches far beyond chemical kinetics, as an overwhelmingly large class of nonequilibrium systems possessing a phase transition in their steady state fall in the same universality class (cellular automata, surface growth, reaction-diffusion processes). This makes the process Eq. (1.1) a paradigm for nonequilibrium systems with a transition in their steady state. Our knowledge of the behavior of the system in the steady state and during the relaxation stages rests on numerical simulations (in low space dimensions, d = 1, 2) and on analytical techniques (short-time series expansions in d = 1, 2, renormalization group in d = 4 − ε). The critical regime is characterized by a set of three independent exponents: the dynamical exponent z, the anomalous dimension of the order parameter η, and the correlation length exponent ν. Scaling laws for ρ_A can be extracted as special cases of the general scaling form of Eq. (1.2), which holds in the limit b → ∞ with the arguments of F fixed. Similar scaling relations exist for correlation functions.
Global Persistence
In this article we want to focus on a property that cannot be deduced from the knowledge of the scaling properties of correlation functions alone. We first define the deviation of the global time-dependent order parameter with respect to its average:
Ψ(t) = Σ_x [n_A(x, t) − ⟨n_A(x, t)⟩]. (1.3)
In Eq. (1.3) we denote by n_A(x, t) the number of A particles at site x at time t, in a particular realization of the reaction-diffusion process. The brackets ⟨…⟩ denote an average with respect to the set of microscopic realizations consistent with the initial conditions, the rules Eq. (1.1) and the diffusion.
We define the global persistence probability as the probability that Ψ remains of constant sign between 0 and t. Similar quantities have been considered (see [1,5]) in the critical dynamics of magnetic systems, Ψ simply being the total magnetization. There it was shown that, following a quench from a high-temperature disordered state to the critical point, the global persistence probability decays with time as a power law characterized by a universal exponent θ_p. In critical dynamics the persistence probability is a quantity that appears naturally in the description of the system while it relaxes to its equilibrium state. Our motivation for the present work lies in the lack of both a qualitative and an analytical picture of the onset of long-range correlations in nonequilibrium systems relaxing to their steady state. We believe that the knowledge of the global persistence probability will shape our picture of the way the system organizes at criticality.
The remainder of the article is divided as follows. We first recall in Sec. 2 the well known correspondence between directed percolation and field theory. Following Majumdar and Sire [1], it is possible to obtain the global persistence probability from a careful analysis of the autocorrelation of the global order parameter. This analysis is performed in great detail in Secs. 3 and 4. In Sec. 5 we turn to the explicit calculation of the persistence exponent. In our conclusion we compare our results with existing simulations.
Field theoretic formulation
There are several ways of mapping directed percolation onto a field theory ([2,3]). The resulting field theory involves a field ψ whose average is the local density of A individuals, and a conjugate field ψ̄; dropping terms irrelevant in the vicinity of the upper critical dimension d_c = 4, the corresponding action reads
S[ψ, ψ̄] = ∫ d^dx dt { ψ̄ [∂_t + λ(σ − ∇²)] ψ + (λg/2) (ψ̄ ψ² − ψ̄² ψ) } − ρ₀ ∫ d^dx ψ̄(x, 0). (2.1)
The parameter g can be expressed in terms of the original reaction rates k, k′, γ, and the mass in the propagator is λσ = γ − k′. The action Eq. (2.1) is the starting point of the subsequent analysis. Renormalization group techniques allow us to focus on scaling laws close to or at criticality, during the relaxation process or in the steady state. From here on, as we shall eventually focus on phenomena taking place at criticality, we set σ = 0. We now summarize a few well-known results on the renormalization of the action Eq. (2.1) that can be found e.g. in [6].
One first defines renormalized parameters and fields in the standard fashion (Eq. (2.2)), where µ is a momentum scale. From the one-loop expressions of the two- and three-point vertex functions one deduces the values of the Z-factors, using dimensional regularization and the minimal subtraction scheme (Eq. (2.3)). The β-function has the one-loop expression of Eq. (2.4), with a nontrivial zero at the fixed-point coupling u*. Critical exponents are then obtained from linear combinations of the γ_i(u*), e.g. as in Eq. (2.5).
We find it convenient to shift ψ by its mean-field expression, ψ_mf(t) = ρ₀/(1 + λgρ₀t/2), so that the action expressed in terms of the fields φ ≡ ψ − ψ_mf and φ̄ ≡ ψ̄ no longer contains the initial term localized at t = 0. We will use the following notation: G^(n,m) denotes the (n + m)-point correlation function involving n fields φ and m fields φ̄, as defined in Eq. (4.4), and W^(n,m) denotes its connected counterpart. The basic ingredients for a perturbative expansion are the free propagator G and the free correlator C, defined by the zero-loop expressions of G^(1,1) and G^(2,0), respectively. We shall need the large-time behaviors of G and C; at criticality, G(q; t, t′) = θ(t − t′)(t′/t)² e^{−λq²(t−t′)}, with a corresponding expression for C. Note that the dependence on the initial density ρ₀ has disappeared; this is because we are focusing on times large with respect to the time scale set by ρ₀. We now have in hand the building blocks for a perturbation expansion of the expectation values of time-dependent observables.
Autocorrelation function
Our aim is to find the one-loop correction to the function C(t, t′), defined as the autocorrelation function of the field ψ at zero external momentum. In order to determine C(t, t′) we carry out a perturbation expansion in powers of the coupling constant g. The first term of this expansion is of course C(k = 0; t, t′). The first nontrivial corrections come in six pieces, each depicted by a one-loop connected Feynman diagram shown in Fig. 1. The explicit calculation of these diagrams, combined with Eqs. (2.2) and (2.3), allows one to determine the renormalized autocorrelation function C_R(t, t′). Using again the shorthand notations t_< = min{t, t′}, t_> = max{t, t′}, we find the expression given in Eq. (3.3), which holds for t_<, t_> large with t_>/t_< finite.
We have listed in the appendix the individual contributions to C(t, t ′ ) arising from the corresponding Feynman diagrams.
Short time expansion
The result of the previous section, Eq. (3.3) for C_R(t′, t), holds for all times t and t′ with t/t′ finite, but the limit t ≪ t′ is singular. In this section we show that for t ≪ t′ the autocorrelation function C_R(t′, t) displays power-law behavior with respect to both time arguments, and we determine the corresponding exponents. In this limit the random variable Ψ(t) becomes a Markovian process, for which the persistence exponent may be expressed in terms of well-known critical exponents. In the case of the Ising model the Markovian approximation for θ_p is already close to the values obtained by simulations [7,5]. An appropriate method to study the correlation function for t ≪ t′ is the short-time expansion (STE) of the field ψ(r, t) in terms of operators located at the 'time surface' t = 0. Since the Gaussian propagator and correlator are of order t² for t → 0, we expect that the leading term in the STE is the second time derivative ψ̈ of ψ, i.e.
ψ(r, t) − ⟨ψ(r, t)⟩ = c(t) ψ̈(r, 0) + … (4.1)
To compute the scaling dimension of ψ̈ in an ε-expansion one could determine the additional renormalization that is necessary to render correlation functions with ψ̈ insertions finite. Fortunately, it is possible to express this dimension to every order in ε in terms of other critical exponents. For the initial density ρ₀ = ∞ there is a similarity between directed percolation and the semi-infinite Ising model at the normal transition (i.e., for infinite surface magnetization). In the latter case the short-distance expansion of the order parameter field near the surface is governed by the stress tensor [8,9]. Due to the translational invariance of the bulk Hamiltonian the stress tensor requires no renormalization. Here we look for an initial field which remains unrenormalized as a consequence of the translational invariance (with respect to time) of the stationary state.
Our argument applies to any dynamic field theory defined by a dynamic functional S[ψ, ψ̄] of the form of Eq. (4.2). We assume that ψ satisfies the sharp initial condition ψ(r, 0) = ρ₀. For directed percolation the functional is the one given in Eq. (4.3). Correlation functions may be written in the functional-integral form of Eq. (4.4), where the functional integral runs over all histories {ψ, ψ̄} which satisfy the initial condition. We now introduce a new time variable t → t′ = t + a(t) (with ȧ(t) > −1 to maintain the time order) and the transformed fields ψ̄′ and ψ′, with ψ̄′(t) = ψ̄(t′) and ψ′(t) = ψ(t′). At lowest order in a(t) the dynamic functional acquires a correction of first order in a(t) (Eq. (4.6)), where a(0) = 0 has been assumed. Performing the time shift in the correlation function G^(n,m) and comparing the terms of first order in a(t) on both sides of Eq. (4.4), one finds the identity of Eq. (4.7). Here the angular brackets indicate the average with respect to the weight exp(−S[ψ, ψ̄]). We may choose a(t) appropriately (Eq. (4.8)), where T₊ denotes the operator T[ψ, ψ̄] in the limit t → 0⁺. (We have assumed that all time arguments of the correlation function are nonzero.) This result shows that T₊ remains unrenormalized to every order of the perturbation theory. Therefore its scaling dimension is given by d(T₊) = d + z. At the upper critical dimension d_c = 4 we find d(T₊) = d(ψ̈) = 6. In fact, one can show that T₊ and ψ̈(0) differ for ρ₀⁻¹ = 0 only by a constant prefactor. To see this we express T[ψ, ψ̄] in terms of the shifted field φ = ψ − ψ_mf. Since φ(t), ψ̄(t) ∼ t² for t → 0 while ψ_mf(t) ∼ t⁻¹, only the term −(λg/2)ψ_mf² ψ̄ contributes. Thus T₊ ∼ ψ̈(0), and the STE in Eq. (4.1) becomes
ψ(r, t) − ⟨ψ(r, t)⟩ = c(t) T₊(r) + … (4.9)
with c(t) given in Eq. (4.10). Combining the STE with the general scaling form of the autocorrelation function, one obtains the power law of Eq. (4.11), which holds for t/t′ → 0.
A detour via the Ornstein-Uhlenbeck process
Let X(τ) be a Gaussian process with the autocorrelation function
⟨X(τ)X(τ′)⟩ = e^{−ω|τ−τ′|} (5.1)
for τ, τ′ large (but arbitrary τ − τ′). The random variable X is thus a Gaussian stationary Markov process (of unit variance). It satisfies a Langevin equation
Ẋ = −ωX + ζ,
where ζ is Gaussian white noise: ⟨ζ(τ)ζ(τ′)⟩ = 2ω δ(τ − τ′). Therefore X is an Ornstein-Uhlenbeck process. For such a process the probability that X be positive between 0 and τ decays exponentially, as e^{−ωτ}. These are standard results.
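These statements are easily checked numerically. The sketch below (ours; the symbol ω and all parameters are illustrative) simulates the stationary unit-variance Ornstein-Uhlenbeck process using its exact discrete-time update and estimates the decay rate of the persistence probability, which should approach ω:

```python
# Sketch: persistence of an Ornstein-Uhlenbeck process. For the stationary
# unit-variance process with <X(t)X(t')> = exp(-omega*|t-t'|), the
# probability of no sign change up to time tau decays as exp(-omega*tau).
import numpy as np

rng = np.random.default_rng(0)
omega, dt, nstep, nsamp = 0.5, 1e-3, 4000, 20000
x = rng.standard_normal(nsamp)                   # start in the stationary state
sign0 = np.sign(x)
alive = np.ones(nsamp, dtype=bool)
surv = []
a, b = np.exp(-omega * dt), np.sqrt(1.0 - np.exp(-2.0 * omega * dt))
for _ in range(nstep):
    x = a * x + b * rng.standard_normal(nsamp)   # exact OU update
    alive &= (np.sign(x) == sign0)
    surv.append(alive.mean())

tau = dt * np.arange(1, nstep + 1)
# fit the log-survival slope over a window where the decay is clean
rate = -np.polyfit(tau[1000:], np.log(np.array(surv[1000:])), 1)[0]
print(rate, omega)                               # rate ~ omega
```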
Expansion around an Ornstein-Uhlenbeck process
We now consider a Gaussian stationary process X(τ) with an autocorrelation function of the perturbed form
⟨X(τ)X(τ′)⟩ = e^{−ω|τ−τ′|} [1 + ǫ f(τ − τ′)],
with f(0) = 0 and ǫ ≪ 1. Then X is not a Markovian process. Majumdar and Sire [1] have shown how to evaluate the probability that X be positive between 0 and τ to first order in ǫ; they find an exponential decay with a rate given, to first order in ǫ, by Eq. (5.7). Hakim [10] has extended this result to O(ǫ²).
Application to the global order parameter
At any fixed time t there exists a dynamical correlation length ξ ∼ t^{1/z} such that the system may be considered as a collection of effectively independent blocks of linear size ξ. Hence Ψ is the sum of (L/ξ)^d independent degrees of freedom, which is a Gaussian variable in the limit L → ∞. We are now in a position to apply the result Eq. (5.7) to the normalized random variable X(τ) built from Ψ, whose autocorrelation function follows, after Eqs. (4.11) and (5.1), the perturbed exponential form above. Substitution into Eq. (5.7) yields the first-order correction to θ_p, in which the integral has a closed-form analytic expression in terms of Catalan's constant C and the hypergeometric function ₃F₂ of order (3,2) (Eq. (5.13)); its numerical value is 0.630237… . The final result for θ_p to first order in ε then follows.

Discussion
Comparison to existing simulations
Recently Hinrichsen and Koduvely [11] have performed a numerical study of one-dimensional directed percolation in order to determine the asymptotic behavior of the global persistence probability. In terms of the variable Ψ defined in Eq. (1.3), they find the following results. For the probability that Ψ remain negative between 0 and t they indeed find a power-law decay characterized by a universal exponent θ_p with the numerical value θ_p = 1.50(2). However, they find an exponential decay for the probability that Ψ remain positive between 0 and t. The latter assertion is in contradiction with our finding that the global persistence probability decays algebraically irrespective of the sign of Ψ. A plausible interpretation for such an asymmetry could be the following. On the one hand, the global persistence exponent is well-defined in the regime in which the system has lost the memory of the initial condition. This regime takes place for times t such that t^{(d−η)/(2z)} ρ₀ ≫ 1. On the other hand, Ψ is well defined in the limit of infinitely large systems, and then it is the sum of a large number of effectively independent contributions, which, on a lattice of size L, forces L ≫ ξ ∼ t^{1/z}. Hence, for numerical simulations to yield acceptable results, care must be taken that the double limit L^z ≫ t ≫ ρ₀^{−2z/(d−η)} is satisfied. Whether the simulations in [11] fulfill these bounds is questionable. Finally, in mapping the random process Ψ(t) to X(τ) we have assumed that the time interval under consideration contains only times large compared with ρ₀⁻¹, so that the regime in which X is stationary is reached. Strictly speaking, we should have defined the persistence probability over a time interval [t₀, t], with t ≫ t₀ ≫ ρ₀⁻¹. In a simulation, the choice t₀ = 0 leads to a persistence probability that enters the asymptotic regime only after times t very large with respect to ρ₀⁻¹. This may be another problem with [11].
Taxus baccata intoxication: the sun after the electrical storm
European yew (Taxus baccata) is a tree with alternate branchlets, green needles and reddish-brown bark. A high-dose ingestion of Taxus baccata for suicidal purposes usually results in death. The systemic toxicity is mainly cardiac. The authors describe the case of a young patient who ingested a high dose of yew needles and presented to the emergency department with a serious intoxication, which manifested as a chaotic malignant arrhythmia that was successfully treated after exhaustive supportive care.
An arterial catheter and a central venous line were placed. The arterial blood gas analysis revealed a compensated metabolic acidosis, with a pH of 7.406, bicarbonate (HCO₃⁻) of 16.9 mmol/L, a lactate level of 4.9 mmol/L, a partial pressure of oxygen (PaO₂) of 307 mmHg and a partial pressure of carbon dioxide (PaCO₂) of 21.5 mmHg.
After that, the baseline electrocardiogram (ECG) showed tachycardia at a rate of 119/minute with a wide QRS complex (0.257s) (Figure 1). The patient's condition worsened, with a relapse of extreme bradycardia and decreased level of consciousness, and rapid sequence intubation with 20mg etomidate and 50mg of rocuronium bromide was performed.
After 4 hours of exhaustive Advanced Life Support, we verified progressive rhythm organization into sinus rhythm (ECG, Figure 2). The dose of norepinephrine was reduced to 0.1 µg/kg/minute. A change in rhythm was then noted: a new tachycardia with a broad QRS complex. One hundred milligrams of lidocaine was administered, and the patient presented with asystole. After 2 minutes of cardiopulmonary resuscitation (CPR) and 1 mg of epinephrine, the patient recovered to ventricular tachycardia (VT) with a pulse, which rapidly evolved into a new relapse of extreme bradycardia, and the patient went into asystole again after transcutaneous pacing support. The patient returned to VT with a pulse after 2 minutes of CPR and 1 mg of epinephrine.
A decision regarding synchronized electrical cardioversion was made. After a 200J synchronized shock, the patient degenerated into ventricular fibrillation and CPR was restarted.
The return of spontaneous circulation was reached after 4 minutes of CPR and two defibrillations (defibrillation = 200J biphasic electric shock).
The vasopressor support was changed to norepinephrine (1.9µg/kg/minute), 10mL of calcium gluconate 10% solution and 10 vials of digoxin-specific antibody fragments were administered, and an infusion of amiodarone (1,200mg/24 hours) was started.
The echocardiogram documented normal-sized chambers and normal systolic and diastolic biventricular function, without evidence of valvopathies. The patient was transferred to the intensive care unit (ICU). The laboratory tests upon ICU admission showed changes in renal and hepatic function tests, reflecting acute kidney injury and ischemic hepatitis, respectively.
The patient was extubated after 24 hours of invasive mechanical ventilation. No neurological damage was objectified, and the patient was discharged to the internal medicine ward on day 2.
The clinical condition of the patient improved. She was observed and medicated by a psychiatrist. She was discharged home on day 7.
DISCUSSION
Taxus baccata is a moderately-sized ornamental evergreen conifer tree. It is widely used in landscaping and is considered an ideal plant for hedging due to its relatively slow growth and tolerance of pruning. (1) In the hospital setting, Taxus baccata is the source of paclitaxel, a chemotherapeutic agent used in the treatment of lung cancer.
All parts of a yew, except the fleshy aril surrounding the seed, are toxic. They contain a complex mixture of compounds, including phenolic constituents (e.g., 3,5-dimethoxyphenol), nonalkaloid diterpenoids (e.g., 10-deactetylbaccatin III), alkaloid diterpenoids (e.g., taxine B, taxine A), flavonoids (e.g., myricetin) and bioflavonoids (e.g., bilobetin). (5) The lethal dose for an adult is reported to be 50g of yew needles. Estimating that 1g of yew needles contains approximately 5mg of taxines, the minimal toxic dose for humans is calculated to be 3.0 -6.5mg taxines/kg. (1) The time from ingesting a lethal dose to death is usually 2 to 5 hours, with symptoms occurring from 30 minutes to 1 hour following ingestion. (6) The major compounds of the alkaloid fraction are taxine A and taxine B. (5) The latter represents 30% of the total alkaloid fraction extracted from T. baccata and is more potent than other taxine alkaloids, therefore having more cardiac toxicity. Taxine A represents only 1.8% of the alkaloid fraction. (3) A pharmacological investigation of taxine alkaloids revealed that the hypotension induced by taxines is not mediated via the sympathetic or parasympathetic nervous system, but rather, by a direct action on the myocardium and vascular smooth muscle. (1) Taxines, particularly taxine B, are cardiac myocyte calcium and sodium channel antagonists that inhibit calcium and sodium channels in the same manner as drugs such as verapamil, although taxines are more potent and cardioselective. (3) This cardiotoxicity is manifested by negative inotropism and an atrioventricular conduction delay that increases the electrocardiographic QRS complex duration; the P wave can also be absent, as seen in the first ECG. The QRS width can be explained by the degree of inhibition of the fast cardiac sodium channels during phase 1 of the action potential.
Like other calcium channel antagonists, taxines also suppress vascular smooth muscle contraction and can produce marked arterial vasodilation-mediated hypotension. (7) This is the reason why the patient was in vasoplegic shock at admission.
Our patient initially presented with a disorganized, wide QRS complex bradycardic rhythm, which usually precedes electromechanical dissociation, subsequent asystole and death. (8) In the early stage of poisoning, the ECG may show multiple extrasystoles, followed by persistent VT.
Since there is no specific antidote or evidence-based anti-arrhythmic therapy, rapid supportive care is crucial to avoid progression to death. (9) In our case, despite the exhaustive supportive treatment carried out, we saw recurring malignant ventricular arrhythmias. Due to this, we used intravenous lidocaine, but it was ineffective.
Although we did not consider the possibility of provisional invasive pacing, we think it should be considered in similar cases due to the possibility of having to perform overdrive pacing during VT.
After the two episodes of ventricular fibrillation, ten vials of digoxin-specific antibody fragments were administered. After this, we verified a progressive rhythm organization to sinus rhythm, possibly due to the absorption of taxine by the Fab fragments, thereby eliminating the free, unbound portion of the alkaloid. (9,10) In the literature, there are several case reports that describe the successful use of extracorporeal membrane oxygenation to support the circulation of patients with refractory ventricular arrhythmias and consequent cardiogenic shock, thus allowing time for toxin metabolism, rhythm stabilization and myocardial recovery. (11) Continuous renal replacement therapy or hemodialysis was not considered since they are unlikely to be effective due to the large volume of distribution and the high protein binding of taxine A. (12) CONCLUSION Yew (Taxus baccata) is an evergreen commonly used for ornamental landscaping. Taxines are poisonous constituents in yew plants that block sodium and calcium channels in the heart, leading to life-threatening cardiotoxicity (atrioventricular block, ventricular tachycardia and refractory ventricular fibrillation). Patients who ingest a lethal dose frequently die, despite resuscitation efforts. Serious poisoning occurs in the setting of suicidal ingestion. In patients who present with signs of high toxicity, the clinician should rapidly perform an initial assessment of clinical findings and support the airway, breathing and circulation as needed.
Since there is no known antidote, management of yew intoxication is essentially supportive, requiring intensive care management with vasopressor support, invasive mechanical ventilation, eventual temporary cardiac pacing, and a temporary period of extracorporeal membrane oxygenation. Treatment with calcium infusions and the administration of digoxin-specific antibody fragments may play an important role in the management of this cardiotoxic plant.
|
2021-04-24T06:17:57.055Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "e180fb7be52adcb32e61f3a489ab1b2103665b78",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ba6368d2f64d4515d8295cee5fc6198f7f28757e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
225368006
|
pes2o/s2orc
|
v3-fos-license
|
A Composite Sketch of Fast-Spiking Parvalbumin-Positive Neurons
Abstract Parvalbumin-positive neurons are inhibitory neurons that release GABA and are mostly represented by fast-spiking basket or chandelier cells. They constitute a minor neuronal population, yet their peculiar profiles allow them to react quickly to any event in the brain under normal or pathological conditions. In this review, we will summarize the current knowledge about the fundamentals of fast-spiking parvalbumin-positive neurons, focusing on their morphology and specific channel/protein content. Next, we will explore their development, maturation, and migration in the brain. Finally, we will unravel their potential contribution to the physiopathology of epilepsy.
Introduction
The brain is constituted of several cell types, interacting together in a fine-tuned network to allow individuals to perform complex tasks. Novel cellular and molecular actors in this neuronal network are continuously brought to light, increasing the complexity of the yarn we are trying to unravel. The human brain contains approximately one hundred billion neurons and roughly as many glial cells (von Bartheld et al. 2016). In mice, the balance is more tilted towards neurons (Herculano-Houzel et al. 2006). Although it is challenging to assess the proportion of each cell population of the brain, it is considered that 80% of neurons in the mouse brain are excitatory and 15% of them are inhibitory (Keller et al. 2018). The remaining 5% fall into several specific neuronal populations, whose description is beyond the scope of this review.
The considerable heterogeneity of the brain has been known for over a century, and inhibitory neurons have been particularly targeted by many classification attempts based on markers, electrophysiological profile(s), connectivity, morphology or other characteristics. These various endeavors indicate in fact that their identification and classification remain unsatisfactory. None of the aforementioned features is sufficient to distinguish an interneuron per se, and inhibitory neurons are more intricate than excitatory neurons. Thanks to single-cell RNA sequencing, efforts to sort inhibitory neurons by gene expression recently resulted in even more molecular subclasses than previously suggested (Zeisel et al. 2015;Tasic et al. 2016;Tasic et al. 2018). Interestingly, these subclasses seem universal as they can be found almost in all areas of the brain, whereas many glutamatergic neuronal groups differ from one brain region to another (Zeisel et al. 2015;Tasic et al. 2018). Inhibitory neurons are usually divided into 3 subpopulations, based on molecular markers: they are either positive for parvalbumin (PV-a calcium-binding protein), somatostatin (SOM-a neuropeptide), or 5HT 3 (a serotonin receptor). Besides, inhibitory interneurons can be distinguished by the expression of other calcium-binding proteins such as calbindin (CB) or calretinin (CR), other neuropeptides as cholecystokinin (CCK), vasoactive intestinal peptide (VIP), or neuropeptide Y (NPY) (Defelipe et al. 2013). In this review, we will focus on the parvalbumin-positive neurons (PV+ neurons), and more specifically on PV+ neurons from the cortex and the hippocampus.
Discussion
General Features of Fast-Spiking PV+ GABAergic Interneurons PV+ neurons are one of the most abundant subtypes of GABAergic interneurons, accounting for 30%-40% of them (Tremblay et al. 2016). These neurons are usually characterized by fastspiking profile and can be found almost everywhere in the brain. The term "fast-spiking" (FS) refers to the firing pattern of those cells. It is associated with short-action potentials and the capability to sustain a high firing frequency. Having a closer look at the available tools for studying PV+ neurons, one could notice that those cells are built for speed in almost every aspect of the transmission.
Based on their axonal arborization, PV+ interneurons of the cortex are morphologically described either as "basket cell" (BC) or "chandelier cell" (ChC, also named axo-axonic cell) (Katsumaru et al. 1988a;DeFelipe et al. 1989;Hendry et al. 1989). While almost all fast-spiking BCs are PV+, things are more complicated for ChCs. With their typical morphology and low number, ChCs rarely required any specific labeling, so the observation that only a fraction of ChCs is PV+ was made quite recently (Taniguchi et al. 2013). In the cortex, PV + -ChCs seem to express vasoactive intestinal peptide receptor 2 (Vipr2) as a secondary marker and a recent study identified the protein Fgf13 as expressed by virtually all ChCs (Tasic et al. 2018;Favuzzi et al. 2019). Vipr2 is a G protein-coupled receptor for the corresponding neuropeptide, while Fgf13 is a multifunctional protein implicated in microtubule stabilization for instance.
BCs and ChCs are also found in the hippocampus, along with 2 other PV+ populations: oriens-lacunosum moleculare cells (O-LM) and bistratified cells (BiC) (Halasy et al. 2002). BCs, ChCs, and BiCs can be observed mainly in stratum pyramidale (Fig. 1), while O-LMs are found in the stratum oriens of the hippocampus (Yamada and Jinno 2017). However, as O-LMs and BiCs also express other markers, we will not go into further details about them (Somogyi and Klausberger 2005;Jinno and Kosaka 2006).
Mode of Action of PV+ Neurons
Recruitment Mechanisms: The Dendritic Tree PV+ cells display one of the most extensive dendritic trees among interneurons, as well as the largest number of received inputs, most of them being excitatory (Fig. 2). In the hippocampus, PV+ cells receive between 16 000 and 35 000 contacts mainly originating from pyramidal cells (PC) or granular cells (GC), only around 10% are inhibitory (Gulyás et al. 1999;Tukker et al. 2013). The dendritic tree of cortical PV+ neurons displays a low density Figure 1. Schematic view of layers most commonly occupied by PV+ neurons in the cortex (green) and the hippocampus (blue). I-VI: layers of the cortex, so: stratum oriens, sp: stratum pyramidale, sr: stratum radiatum, slm: stratum lacunosum-moleculare, po: polymorphic layer, gc: granule cell layer, mo: molecular layer.
of spines with excitatory synapses (Sancho and Bloodgood 2018), the majority of inputs being received on dendritic shafts (i.e., axo-dendritic contact). A small proportion of those axodendritic inputs come from SOM+ or other PV+ interneurons. Inhibitory axo-somatic inputs are mainly coming from VIP+ interneurons, whereas axo-axonic contacts are probably arising from ChCs (Hioki et al. 2013;Tukker et al. 2013). One recent study has reported that PV+ neurons with local high spine density, associated with few perineuronal nets (PNN, see Maturation), could be found in the dentate gyrus. (Foggetti et al. 2019).
In terms of glutamate receptors, it has been shown that the post-synaptic compartment of PV+ neurons presents different types of NMDA and AMPA receptors (NMDAR and AMPAR) to suit their specific electrophysiological profile. NMDARs differ from each other in their diverse subunits and therefore exhibit different decay kinetics (Tovar and Westbrook 2017). Adult cortical and hippocampal PV+ neurons mainly rely on the GluN1 and GluN2A NMDAR subunits, the duo with the fastest decay kinetics, but the association of GluN1-GluN2B and GluN1-GluN2D (in the hippocampus only) can also be found Geiger et al. 1997;von Engelhardt et al. 2015;Mierau et al. 2016). NMDARs are involved in calcium influx when Mg 2+ blockade is alleviated and may be linked to PV and GAD67 expression in the cortex (Kinney 2006). Interestingly, NMDARs tend to be concentrated in spiny synapses, in contrast to AMPARs, which are uniformly distributed along the dendrites of cortical PV+ neurons (Sancho and Bloodgood 2018).
AMPARs are also made of different subunits, conferring specific gating and kinetics to the complex (Lee 2012). In cortical and hippocampal PV+ neurons, it has been shown that the majority of AMPARs lacks the GluA2 subunit, making it permeable to Ca 2+ (Jonas et al. 1994;Geiger et al. 1995;Talos et al. 2006). These calcium-permeable AMPARs are largely responsible for Ca 2+ influx into the postsynaptic compartment. They also ensure NMDAR activation through the clearance of Mg 2+ (Goldberg et al. 2003b;Camiré and Topolnik 2014). AMPARs allow PV+ neurons to quickly generate excitatory post-synaptic potential (EPSP) during a short period following excitatory inputs. Because one EPSP is rarely enough to induce an AP generation, this system allows PV+ neurons to fire only if they received several synchronous EPSPs. Indeed, AMPARs' conductance, rapid rise, and decay parameters associated with strong desensitization and small recovery time make them particularly suitable to the task (Geiger et al. 1997;Traynelis et al. 2010).
Aside from these glutamate receptors, EPSPs are also shaped by ions channels in PV+ neurons. At least 2 L-type Ca 2+ voltagegated channels can be found in hippocampal PV+ neurons, which also influence their firing properties (Jiang and Swann 2005;Xu et al. 2006). Besides, voltage-gated K + channels type 3 (Kv3) are highly concentrated in cortical and hippocampal PV+ dendrites (Rudy and McBain 2001). Kv3 channels contribute to the propagation and the quick decay time of EPSP, ensuring a tight summation window for generating action potential (AP) (Fricker and Miles 2000;Hu et al. 2010). Besides, the concentration of Na + channel is relatively low, specifically at the apical dendrite. This particular cocktail of ion channels explains why cortical and hippocampal BCs are unable to produce dendritic spikes. Different teams showed that short high-intensity somatic currents fail to trigger an AP in the dendrite of FS neurons, contrary to the situation observed in PCs. Upon long somatic current pulse, an AP can be observed in the basal dendrite but fails to ascend further (Goldberg et al. 2003a;Hu et al. 2010). The ascension of APs is also called "backpropagation" and probably participates to synaptic plasticity, which we will approach in the section The long-term Plasticity in PV+ Neurons. Finally, cortical and hippocampal dendrites of PV+ neurons are connected by gap junctions, further contributing to the high synchronic power of this neuronal network (Katsumaru et al. 1988b;Galarreta and Hestrin 1999).
Action Potential Generation: The Axonal Compartment
The signature of PV+ interneurons lies in their axonal thickness and ramifications. In cortical BCs, the axon usually originates from the apical part of the soma and extends mainly around it, targeting proximal dendrites and soma of postsynaptic cells. In cortical ChCs, the axon arises from the basal part of the soma and has a typical candlesticks-like shape, connecting to the axon of their target (Markram et al. 2004;Jiang et al. 2015). The location of synapses that connect PV+ interneurons with their targets improves their efficiency: the closer it is to the AP generation site, the stronger the synapse is (Kubota et al. 2015). Recent studies have demonstrated that most cortical BC neurons present patches of myelin sheets on their axon, both in mice and humans. Authors suggest an effect on the energy needed for the AP propagation (rather than the classical role in AP velocity), but pieces of supporting evidence are scarce (Micheva et al. 2016;Micheva et al. 2018).
All PV+ neurons initiate the AP in a structure particularly close to the soma, called the Axon Initial Segment (AIS) and characterized by a specific cocktail of ions channels that are in high densities (Hu et al. 2010;Hu and Jonas 2014;Höfflin et al. 2017). The AIS of PV+ neurons contains classical voltagegated channels (Kv1.1, 1.2 and Nav1.6), along with some other channels (Lorincz and Nusser 2008). Indeed, the voltage-gated sodium channel Nav1.1, enriched in the AIS of all FS cells, contributes to AP generation by sustaining high-frequency firing, while Kv3.2 channels complete the AP during repolarization at least in the cortical PV+ neurons (Lau et al. 2000;Ogiwara et al. 2007). A comparison between CCK+ and PV+ neurons revealed that they use different types of channels to trigger Ca 2+ entry into the pre-synapse. Cortical and hippocampal BCs mainly rely on Cav2.1 (also called P/Q-type) channels to release neurotransmitter rapidly following APs, in a synchronous manner. Neurons in which fusion events can occur several seconds after APs mediate Ca 2+ entry partly or entirely using Cav2.2 (N-type) channels (Hefft and Jonas 2005;Zaitsev et al. 2007;Rossignol et al. 2013).
Similar to dendrites, axons of cortical and hippocampal PV+ neurons are connected by gap junctions, allowing them to propagate newly generated AP (Kosaka and Hama 1985;Tamás et al. 2000). Interestingly, a recent study notably reports that simultaneous or sequential excitation of several connected cerebellar BCs increased the probability of AP generation and reduced their latency (Alcami 2018).
Signal Transmission: Pre-Synapses and Synapses
The synapses of PV+ neurons also contribute to their ability to fire very rapidly following an input. All hippocampal interneurons apparently display a similar presynaptic terminal density (i.e., 21-28 synapses/100 µm), which means that 1 BC innervates around 1500 cells (with an average of 6 contacts per target) (Gulyás et al. 1993;Buhl et al. 1994;Sik et al. 1995). Their synapses also present a low failure rate and a rapid release of GABA following to AP arrival, again stressing out the need for synchronous communication (Kraushaar et al. 2000). Hippocampal BCs organize their synaptic machinery into small boutons, bearing a small number of active zones, considered as "nanodomains." As a result of their spatial promiscuity, 2 or 3 Ca 2+ channels are sufficient to induce the vesicle release via Ca 2+ sensor proteindependent exocytosis (Bucurenciu et al. 2008;Bucurenciu et al. 2010). These well-organized nanodomains also facilitate Ca 2+ clearance following neurotransmitter release and save energy, as less ion exocytosis is needed to return to the resting condition.
Besides, the parvalbumin protein has a buffering ability and helps to rapidly decrease the Ca 2+ concentration. PV is a calciumbinding protein with 3 EF-hand domain also found in muscle cells (Bottoms et al. 2004). Even if the dissociation constant of PV and Ca 2+ is very low, this protein presents rather slow binding kinetics. Indeed, the binding capacity of PV to Mg 2+ at physiological concentration makes it the favored partner, impeding or slowing the reaction with Ca 2+ (Schwaller 2009). As a result, PV co-exists in 3 states: free of ions, bound to Mg 2+ or Ca 2+ . It turns out that this three-state organization could contribute to the buffering capacity of PV. Indeed, computational studies performed on cerebellar BCs revealed that free PV is replenished only from the Mg 2+ -bound fraction, thus ensuring a constant buffer capacity during stimulation . The high mobility of this protein combined with the organization in nanodomains could, therefore, contribute to easier maintenance of buffering capacity, as even small changes in the absolute number of ions could impact their total concentrations (Schwaller 2009;. PV could also participate in the depressing profile of BCs synapses, as suggested by the facilitation phenotype of cerebellar and hippocampal BCs in PV-deficient mice (Vreugdenhil et al. 2006;. A depressing synapse, in contrast with a facilitated one, presents a reduced amplitude of post-synaptic current following a train of APs. Neurons with a high probability of release usually harbor depressing synapses, as their efficient machinery rapidly clear transient Ca 2+ and/or they will use the majority of their readily releasable pool (RRP) of vesicles faster than its replenishment potential (Jackman and Regehr 2017).
The Synaptotagmins in PV+ Neurons
Synaptotagmins (Syt) also take part in the peculiar profile of PV+ neurons. This family of proteins includes 17 members, divided into 3 groups according to their ability to bind 0, 5, or 10 Ca 2+ ions. Syt also differ by their binding kinetics. A majority of them are also able to bind SNARE proteins, a complex responsible for synaptic vesicles docking to the plasma membrane of neurons Wu et al. 2019). Hippocampal PV+ neurons express 9 paralogs of the synaptotagmin family (i.e., 1-5, 7, and 11-13), although their genuine role in inhibitory neurons is unclear (Kerr et al. 2008). The fast-release sensors Syt1 and Syt2 are found in cortical and cerebellar PV+ neurons (as well as in a fraction of hippocampal BC) and ensure a fast and synchronous neurotransmitter release, contributing to the depressing profile of PV+ neurons' synapses (Sommeijer and Levelt 2012;Bouhours et al. 2017;Bornschein and Schmidt 2019). Contrary to Syt1 and Syt2, Syt7 is not located in the vesicle membrane but at the plasma membrane of cerebellar and hippocampal PV+ neurons and is implicated in an asynchronous release, facilitation, and managing of the RRP (Jackman et al. 2016;Li et al. 2017). Even though asynchronous release and facilitation are not typical characteristics for PV+ neurons, Syt7 ensures sustainable neurotransmission by spreading vesicle release over time following one or several APs, thus prolonging inhibition . Besides, Syt1 and Syt7 may contribute to vesicle recycling through clathrin-mediated endocytosis (CME) and a slower calcium-independent mechanism, respectively, (Haucke et al. 2000;Li et al. 2017).
Syt11 could play a balancing role by inhibiting the CME in hippocampal and ganglionic neurons, thereby maintaining a reasonable number of active endocytosis sites in a calciumindependent manner (Wang et al. 2016;Wang et al. 2018). Other teams reported the presence of Syt11 in dendritic endosomelike structures, linking the protein to long-term potentiation (LTP, see below) rather than vesicle recycling. Indeed, Syt11 knock-out (KO) neurons display a normal secretion of neurotransmitters and peptides (Dean et al. 2012;Shimojo et al. 2019). The role of Syt4 seems to be subtler, as several experiments on neurons and neuroendocrine cells showed that Syt4 can impair vesicle fusion by unproductively competing for SNARE binding with other Syt. In contrast, it enhances exocytosis under high Ca 2+ concentration (Wang et al. 2001;Wang et al. 2003;Bhalla et al. 2008;Zhang et al. 2009;Huang et al. 2018). These concentrationdependent opposite effects of Syt4 is surprising given its inability to bind ions (Dai et al. 2004) but could be associated with its interaction with other calcium-dependent Syt members. The absence of effects following Syt4 overexpression in hippocampal neurons may argue against its putative fusion-impairing role or highlight a failsafe mechanism preventing the system from overinhibiting fusion (Ting et al. 2006). Another paper has shown that the absence of Syt4 in presynaptic terminals increases the spontaneous release of vesicles via BDNF, thus confirming the inhibitory effect of Syt4 on vesicle fusion ). The same team has also reported a link between Syt4 and LTP, as observed for Syt11.
Syt12 also seems able to compete with Syt1 for SNARE binding, potentially acting as another inhibitory protein (to a lesser extent compared to Syt4, likely not biologically relevant) (Bhalla et al. 2008). By contrast, it was reported that Syt12 could support spontaneous release when phosphorylated after binding to Syt1 in an SNARE-independent manner (Maximov et al. 2007). Finally, Syt3 has recently been linked to AMPAR internalization following NMDA or AMPA activation in hippocampal neurons primary cultures (Dean et al. 2012;Awasthi et al. 2019). Removing Syt3 in mice leads to a lack of forgetting ability, but since its activity is mainly mediated by GluA2 binding (a subunit of AMPA receptor weakly expressed in BC), its effect in PV+ neurons could be limited.
Most of the information about Syt5 functions is inferred from its activity in secretory cells. Syt5 has been found in peptidecontaining, dense-core vesicles from the adrenal medulla or pancreatic-derived cell lines. Syt5 could act as a positive modulator of calcium-dependent exocytosis (Saegusa et al. 2002;Iezzi 2004). To date, no genuine evidence for a defined function of Syt13 in the brain has been generated. However, the glucoseinduced secretion of insulin is significantly reduced in pancreatic cells with lower expression of Syt13 (Andersson et al. 2012). Together, all these studies show the wide variety of molecular tools that PV+ neurons can use to fine tune their activity and ensure fast and efficient responses.
The Long-Term Plasticity in PV+ Neurons
Once generated in the AIS, the AP can backpropagate towards the proximal dendrite and usually fails to reach to the distal dendrites, probably due to the lack of Na channels. In cortical BCs, this phenomenon has minimal impact on Ca 2+ accumulation and is regulated by A-type K + channel (Goldberg et al. 2003b;Cho et al. 2010). Nonetheless, recent findings indicate that sharp wave oscillations and nicotinic cholinergic receptors (nAChR) could carry the AP further into the distal dendrite and sustain high Ca 2+ signal in hippocampal BCs (Chiovini et al. 2010). Moreover, type I metabotropic glutamate receptor (mGluR) were reported on PV+ neuron membranes and could also contribute to Ca 2+ accumulation in the post-synaptic compartment (Muly et al. 2003;Sun et al. 2009;van Hooft et al. 2018). Unlike axons, Ca 2+ concentration in dendrites of hippocampal BC is mainly managed by a fixed rather than a mobile buffer. A fixed buffer should induce a slow Ca 2+ release close to the source, elongating the decay time of transient Ca 2+ . This could provide an efficient summation of Ca 2+ influx between 2 events and explain why successive inputs are able to accumulate Ca 2+ in dendrite and support AP backpropagation (Aponte et al. 2008).
Once accumulated in the post-synaptic compartment, the calcium contributes to the synaptic plasticity of the neuron. The phenomenon has been mainly studied in the hippocampus, and it was shown that yet again PV+ neurons have several tools available to modulate their activity. In vivo experiments showed that theta burst stimulation (TBS) in rat hippocampi can either induce long-term potentiation (LTP), facilitating the next AP generation or long-term depression (LTD), inhibiting the next burst of AP generation (Lau et al. 2017). Unlike PCs, interneurons of the CA1 showed a special Hebbian LTP that is independent of NMDAR but involve group I mGluR and AMPAR (Perez et al. 2001). Further ex vivo experiments on mice hippocampal slices showed that subthreshold TBS (thus triggering no AP) can induce anti-Hebbian LTP through CP-AMPAR-driven Ca 2+ accumulation (Lamsa et al. 2007;Camiré and Topolnik 2014). On the contrary, suprathreshold TBS (thus producing a signal) creates LTD, weakening the next input. In this case, the transient Ca 2+ increase is higher and relies on internal storage in addition to CP-AMPAR contribution (Camiré and Topolnik 2014). Moreover, group I mGluR and cannabinoid receptors have also been implicated in LTD of hippocampal FS neurons (Péterfi et al. 2012). A computational study revealed that internal stores, clearance mechanisms and the specific morphology of the dendrite have a major impact on the calcium summation system of the neuron (Camiré et al. 2018).
Development of PV+ Neurons Migration
During mouse development (Fig. 3), the majority of PV+ interneurons populating the cortex and the hippocampus comes from the rostral part of the medial ganglionic eminence (MGE), with weakened Wnt signaling (McKenzie et al. 2019). MGE is present from mouse embryonic day 9 (E9) to E16 and generates interneurons from E13.5, which migrate through the marginal or subventricular zone to reach their final destination (Wichterle et al. 2001;Xu 2004). A smaller proportion of fast-spiking BCs arises at E11.5 from the preoptic area (POA), after migration via the marginal zone, the subplate to the cortex and the hippocampus (Gelman et al. 2009;Gelman et al. 2011). Each population begins to migrate tangentially around E14 to E18, then switch to radial migration to invade the cortical plate between E18 and postnatal day 2 (P2) and finally reach the correct layer at P2 to P6.
Several waves of interneurons follow these steps, the first ones settling in deep layers of the cortex while the late ones invading the superficial layers (Bartolini et al. 2013). The existence and the relevance of these diverse paths are not clearly understood yet. Nonetheless, results obtained in a sub-class of SOM+ interneurons indicate that future interneurons could choose one road or another based on their mature morphology and final destinations (Lim et al. 2018b). On the other hand, neurons assigned to the hippocampus preferentially migrate tangentially through the marginal zone towards the stratum lacunosum moleculare and populate all layers of the hippocampus (Tricoire et al. 2011). Recent results suggest that the final concentration of intracellular PV decreases with each wave of interneurons, as early-born PV+ neurons display stronger PV signal than late-born PV+ neurons in the hippocampus, somatosensory cortex, and dorsal striatum (Donato et al. 2015).
After the disappearance of the MGE around E16, the ventral germinal zone of the lateral ventricle (VGZ) continues to generate interneurons, including the majority of ChC (even though a small fraction of them were generated earlier by the MGE) (Inan et al. 2012;Taniguchi et al. 2013). After the tangential migration around P0, neurons will cross the cortical plate around P2 to spread a little more at the cortical surface between P4 and P6 and finally invade cortical layers around P7 (Fig. 3) (Taniguchi et al. 2013). If some other subtypes of interneurons have postnatal sources to replenish their ranks (Inta et al. 2008;Wu et al. 2011;Riccio et al. 2012), it is yet to be proven for PV+ neurons. The migration of interneurons and their early diversification is a wellorchestrated mechanism that we are slowly beginning to grasp. As 2 recent reviews have dissected them in detail (Peyre et al. 2015; Lim et al. 2018a), we will not linger on those phenomena. As reported by several authors, the first 2 postnatal weeks are characterized by a 40%-50% decrease in the density of different cell types, including interneurons (either in cortical or hippocampal structures) (López-Bendito et al. 2004;Tricoire et al. 2011). This phase corresponds to an increase in the whole brain volume, but stainings have proven that programmed cell death is also at play (Verney et al. 2000;Southwell et al. 2012;Denaxa et al. 2018;Priya et al. 2018). Indeed, the number of interneurons is to be tightly regulated: the loss of specific inhibitory cortical subpopulations is compensated by other subtypes or grafted interneurons to preserve the ratio of excitatory versus inhibitory neurons (Azim et al. 2009;Batista-Brito et al. 2009;Lodato et al. 2011;Denaxa et al. 2018). This reduction of inhibitory neuron density could be linked to network activity, as blocking NMDA receptors increases the number of apoptotic cells in the cortex (Roux et al. 2015). Also, brain regions showing higher network activities present reduced apoptosis of cortical neurons (Blanquie et al. 2017). A recent report shows that, in most interneurons, the network activity reduces postnatal cell death through activation of the Calcineurin protein (Priya et al. 2018).
Maturation
Regarding PV+ neurons, the first weeks of life are also marked by profound changes and maturations. Different studies have found that the fast profile of cortical and hippocampal BCs is reached between P7 and P25 (Itami et al. 2007;Doischer et al. 2008), after the switch from excitatory to inhibitory GABAergic signal (Rivera et al. 1999).
In the cortex, this modification of the firing profile is accompanied by changes in gene expression that could explain some electrophysiological parameters, e.g., 1) the down-regulation of Kcnn2, encoding for the small conductance Ca 2+ -activated K + channel, which could favor the depressing profile observed from P10 or 2 the up-regulation of Kcnc1 and Kcnc2 genes (corresponding to the Kv3 potassium channel type) could account for the increased firing rates and reduced spikes from P10 (Okaty et al. 2009). Indeed, the blockade of Kv3 channels at P10 has little effect in PV+ neurons, whereas inducing a massive increase in IPSC in PCs at P18, confirming the upregulation of Kv3 channels and their effect on synaptic depression (Goldberg et al. 2011). Between P10 and P18, a K + leak current appears, influencing the resting membrane potential (RMP) and the membrane resistance (R m ) of PV+ neurons. At least K ir 2 and K 2P channels participate, as RMP and R m are modified upon application of specific blockers (Goldberg et al. 2011).
This critical period also corresponds to the settling of the Ca 2+ managing system. Several Ca 2+ channel subunits, which form low-voltage threshold (T-type: Cacna1g) or long-lasting activation (L-type: Cacng4, Cacnb1) channels, are down-regulated, presuming narrower regulation of Ca 2+ flux. Indeed, the PV protein and the plasma membrane Ca 2+ -ATPase are also upregulated, inducing a tighter control of intracytoplasmic Ca 2+ concentration (Okaty et al. 2009). The fastest calcium-sensor Syt2, used as BC marker in the visual cortex, is also upregulated from P10 to P18 (Sommeijer and Levelt 2012). Interestingly, electrical synapses between PV+ neurons can already be observed at P10, even if the arborization is not yet fully matured (Goldberg et al. 2011).
As for apoptosis, the correct maturation of PV+ interneuron also relies on network activity. In the cortex, GluN2C and D subunits of NMDA receptors help to establish the neuronal arborization (Hanson et al. 2019), whereas the GluN2A subunit is needed to face oxidative reactions and to establish the perineuronal net (PNN) (Cardis et al. 2018). The PNN is a specific type of extracellular matrix that surrounds several neurons and their dendrites. The PNN is composed of proteoglycan (PG), hyaluronan and smaller molecules synthesized by neurons and neighboring glial cells (John et al. 2006;Carulli et al. 2007). Different associations of PG can form these PNNs. PNNs in cortical BCs, (but not in ChC), are composed of chondroitin sulfate, keratan sulfate, and brevican PG (Wegner et al. 2003;Takeda-Uchimura et al. 2015;Favuzzi et al. 2017;Yamada and Jinno 2017). The PNN is settled from P10 to P30, mostly around cortical PV+ neurons and its establishment is influenced by received inputs and stimuli (McRae et al. 2007;Ye and Miao 2013;Ueno, Suemitsu, Murakami et al. 2017a). Some studies demonstrated that the magnitude, shape, and content of PNN associated with PV+ neurons can vary between the regions of the brain (Yamada and Jinno 2013;Ueno, Suemitsu, Okamoto et al. 2017b). PNNs are well known to influence synaptic plasticity and may have a subtler role than only a physical barrier preventing synapse establishment. PNNs can bind molecules such as β-Integrin in the hippocampus or Sema3A in the cortex, respectively, preventing their promoting role in spine formation (Orlando et al. 2012) or enabling their inhibitory action (Vo et al. 2013). Indeed, the PNN may impact the cortical neuron without complete net digestion, but rather via the change of sulfation pattern involved in protein binding (Miyata et al. 2012;Miyata and Kitagawa 2016). PNNs also modulate receptor activity by acting 1) directly as a physical fence, as demonstrated for the AMPAR GluA1 and 2 receptors (Frischknecht et al. 2009), or 2) indirectly by sequestrating or accumulating partners. For example, the neuronal-pentraxin 2 (NP2 encoded by the Narp gene) modulates GluA4 in hippocampal PV + neurons after being secreted, only in the presence of PNN (Chang et al. 2011). Finally, PNNs are also involved in the trafficking and the clustering of K + channel Kv1.1 and 3.1 in the hippocampus (Favuzzi et al. 2017).
Concerning maturation, less information is known about ChCs. Some papers have highlighted a slower establishment of synapses and FS properties in mouse cortical ChCs, compare to BCs (Miyamae et al. 2017;Pan-Vazquez et al. 2020). Nonetheless, one specific feature of maturating ChCs is still under debate: ChCs may remain excitatory longer than other inhibitory neurons. At first, the high intraneuronal Cl − concentration makes GABA inputs depolarizing, and consequently all "inhibitory" neurons remain excitatory. Around P7 in mouse, the expression of symporter KCC2 rises and shifts the Cl − gradient to induce the hyperpolarizing effect of inhibitory neurons (Rivera et al. 1999;Ben-ari 2002). Yet, several teams have observed depolarizing input coming from ChCs in cortices and hippocampi of rodent between P15 and P35 (Szabadics et al. 2006;Khirug et al. 2008;Woodruff et al. 2011). As interneurons connect to different parts of their targets, those observations could be explained by local change of Cl − concentration gradient. In mouse cortical neurons, KCC2 expression rises later in the AIS, which could explain why ChCs would remain depolarizing longer than BCs (Rinetti-Vargas et al. 2017;Pan-Vazquez et al. 2020).
The late maturation of PV + neurons, simultaneous to the establishment of the synaptic network, makes them a pivotal actor in neurodevelopmental and neurodegeneration diseases (van Bokhoven et al. 2018;Ferguson and Gao 2018;Wen et al. 2018).
Parvalbumin Neurons in Epilepsy
PV+ neurons display a strong ability to decrease the global brain excitability and are thought to play a role in epilepsy. Epilepsy is a group of disorders commonly characterized by the recurrent appearance of seizures, which consist of abnormal and highly synchronous brain activity and AP discharges. The cause of these diseases can sometimes be identified (i.e., genetic, traumatic, hemorrhagic . . . ), but is unknown in most cases (i.e., idiopathic or cryptogenic epilepsy). Epilepsy is a chronic disease with very acute expression, and patients suffering from epilepsy are treated to control seizures onset rather than handling the cause of the disease. Epilepsy being an evolving disorder (frequency and severity of seizures can change over time), treatments have to be regularly adjusted. Moreover, one-third of epileptic cases are "refractory" or "intractable" to current antiepileptic treatments (Chen et al. 2018;Faraji and Richardson 2018).
Epilepsy is thought to be linked to a failure of the excitatory to inhibitory balance (E/I balance), and PV+ neurons were rapidly assumed to participate in ictogenesis (seizure onset) or epileptogenesis (epilepsy appearance and evolution). Indeed, several studies have shown that PV+ neuron density is reduced in epileptic tissue of animal models, as well as in human patients (Zamecnik et al. 2006;Kuruba et al. 2011;Marx et al. 2013;Nakagawa et al. 2017;Cameron et al. 2019;Alhourani et al. 2020). Interestingly, a recent paper observed an increased mitochondrial fragmentation in rat PV+ neurons following induced status epilepticus. Moreover, they show that reducing mitochondrial fission with chemical inhibitor mitigate PV+ neuron loss (Kim and Kang 2017). However, other teams observed a normal number of PV+ neurons with an altered electrophysiological profile or morphology in different animal models with induced seizures (Sun et al. 2007;Gu et al. 2017;Miri et al. 2018). Following experimental observation in mice, it has also been proposed that the parvalbumin protein itself could be lost, rather than the whole PV+ neuronal population, due to Ca 2+ overload and excessive recruitment of interneuron (Wittner et al. 2001;Wittner and Maglóczky 2017). Given the diversity of causes and symptoms of epilepsy, as well as animal models, finding a consensus is a struggle. Anyway, these observations suggest an increased vulnerability of PV+ neurons in epilepsy but give little information on the role of PV+ neurons in epileptogenesis.
Many genetic mutations in PV+ neurons have been proposed to play a role in epilepsy and were even detected in patients. Many of those have been reviewed by Jiang, Lachance andRossignol in 2016 (Jiang et al. 2016). In this paper, authors analyzed alterations of PV+ neurons in terms of migration, maturation, excitability, or connectivity. But, from 2016, the list is still growing Figure 4. Hypothesis of network mechanism potentially giving rise to seizure. The pyramidal neuron (in green/black) receives input from Som + neurons (in red) and PV neurons (in orange). Vertical bars represent activities of the corresponding neuron along the horizontal timeline. The green gradient represents Cl − ion concentration inside the pyramidal neuron. and we will thus focus here on several recent findings. Ankyrin-G participates in the clustering of ions channels in nodes of Ranvier and the AIS. In absence of the PV+ specific ankyrin-G isoform 1b, PV+ neurons have reduced excitability, leading to seizures and behavioral alteration associated with bipolar disorder (Lopez et al. 2017). CNTNAP2, a protein belonging to the pre-synaptic cell-adhesion protein family neurexin, is associated with many neurological disorders, including epilepsy. A team recently showed that FS cortical neurons mutated for CNTNAP2 present altered AP width or intern spike interval when transplanted into wild-type mice cortices (Vogt et al. 2018). Even more recently, it was shown that the absence of NHE1 (the Na + /H + exchanger expressed in neurons and astrocytes) in mouse hippocampal PV+ neurons decreases the frequency and increases the amplitude of mIPSC, probably by affecting the loading of GABA vesicles (Bocker et al. 2019). On the other hand, Soh and colleagues associated an increase of spontaneous IPSC frequency with a shorter latency of chemically induced seizures in mice mutated for the K + channel KNCQ2 (Soh et al. 2018), implying that stronger inhibition could also lead to seizure susceptibility.
Despite the classical consideration of epilepsy, being either due to increased excitability of glutamatergic cells or reduced inhibition and unleashing of excitatory cells, increasing data highlights the role of increased activity of inhibitory neurons in seizures. Indeed, several articles have described a modified activity of PV+ neurons followed by a "depolarization block" event right before the seizure onset in rodents (Fujiwara-Tsukamoto et al. 2004;Ziburkus et al. 2006;Grasse et al. 2013;Parrish et al. 2019). These depolarization block (DB) events correspond to a high-frequency train of truncated AP. Other teams have reported an increase of GABA release by hippocampal PV+ neurons during high-frequency train in kindled mice model (Hansen et al. 2018), or an increased firing frequency of hippocampal PV+ neurons during pre-ictal phase in chemo-induced seizures mice model (Miri et al. 2018). Interestingly, a reduction of VIP+/CR+ contacts made on OLMs, BiCs, and BCs was reported in the CA1 in pilocarpin mice model. While all interneurons displayed a reduction of spontaneous IPSC frequency, only IPSC amplitudes recorded in BiCs or BCs were decreased after light-evoked VIP+ neuron stimulation. This particular impact on BCs could explain the specific increased activity of PV+ reported by Hansen and Miri (David and Topolnik 2017). The intensity of these inhibitory currents could trigger a massive accumulation of Cl − inside postsynaptic neurons and an increase of extracellular K + concentration, probably due to K + /Cl − symporter (Fujiwara-Tsukamoto et al. 2007). Once high, the extracellular K + concentration eases the depolarization of the excitatory cell, completely free to fire as inhibitory cells are not active anymore because recovering from the DB. Furthermore, it has been demonstrated that PV+ neurons are recruited before SOM+ neurons and that these 2 populations could be needed to dam seizures (Parrish et al. 2019).
When excessive excitation stimulates interneurons (Fig. 4), the first-line cells are PV+ neurons targeting the soma of the glutamatergic neurons. PV+ neurons strongly and quickly induce inhibition, leading to a surge of Cl − inside the somatic compartment of the PC, which could sustain ion entry for a while, thanks to K + /Cl − symporter like KCC2. Once PV+ neurons are depressed, the second line is represented by SOM+ neurons, which then prevent massive excitation of glutamatergic neurons. SOM+ neurons provide a long-lasting inhibition, maintaining the non-responsive state of PCs that PV+ neurons have begun to build. If PV+ neurons remain active or reactivate, they could saturate PC in Cl − that may start diffusing, overtaking KCC2 capacity. This would impact the dendritic domain, inducing an increase of extracellular K + where PC receives excitatory drive. SOM+ neurons will not be sufficient to maintain inhibition, and the PC will begin to frenetically fire. Another possibility could involve extrasynaptic GABA receptor, as recently found in SOM+ neurons (Bryson et al. 2020). This could explain why PV+ neurons have been reported as a "double agent" in ictogenesis (Shiri et al. 2016;Wang et al. 2017;Lévesque et al. 2019). It has been reported that overexpression of KCC2 prevents the pro-ictal effect of PV+ neurons when stimulated 2 s after a seizure (Magloire et al. 2018). The increased number of KCC2 channels on PC could prevent the spreading of Cl − and keep the increase of extracellular K + minimal around dendrites.
Conclusion
The last decades have brought to light the crucial role that interneurons play in neurotransmission. Although a minor population, these inhibitory neurons are key fine tuners that regulate signals of projection neurons and have been closely examined after years of being wrongly omitted. As our knowledge piles up, we unravel the broad diversity of interneurons and gain new insights into their various structural, biochemical, and functional aspects. Parvalbumin-positive (PV+) interneurons were characterized based on their specific fast-spiking profile, but beyond this specific characteristic, we keep collecting evidence that indicates they are probably much more diverse than we think. The variety of their neurite arborization, ion balance, synaptic components, and spatial distribution probably confers them miscellaneous functions that we slowly start to grasp. Epilepsy physiopathology is one example of this specific functionality of these PV+ neurons and by itself, it is not an exception to this rule of diversity. Indeed, it includes a large phenotypical spectrum and is associated with a number of dysregulated molecular and biochemical mechanisms. Getting back to PV+ neurons, growing evidence from the literature sheds light on their role(s) on neuronal networks as they are particularly wired and equipped for influencing those grids. Moreover, these neurons could be the ideal targets for treatment: only a tiny amount of drug acting on this small cell population would be able to trigger a massive effect. Still, this powerful PV+ neuron machinery could turn out to be both ally and enemy for a correct brain function, and the genuine role of these neurons in ictogenesis is yet unclear. Whether an inadequate firing profile is directly linked to seizure, or whether their firing profile is itself impacted by dysregulated essential cellular processes (e.g., energy, transport, cytoskeletal remodeling, etc.) indirectly causing seizure, are questions that should be further investigated.
Much remains to untangle in order to fathom the way PV+ interneurons play in harmony with neighboring neurons, to identify which player in out of tune in pathological conditions, and to ultimately succeed in conducting this orchestra to play a symphony without a false note. Bucurenciu I, Kulik A, Schwaller B, Frotscher M, Jonas P. 2008. Nanodomain coupling between Ca2+ channels and Ca2+ sensors promotes fast and efficient transmitter release at a cortical GABAergic synapse. Neuron. 57(4):536-545. Buhl EH, Halasy K, Peter S. 1994
|
2020-06-25T09:07:16.655Z
|
2020-06-19T00:00:00.000
|
{
"year": 2020,
"sha1": "c820d79de14b4e014bddb36f472cfe8878739525",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/cercorcomms/article-pdf/1/1/tgaa026/33679958/tgaa026.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "daf41f2f10c37341ec818e2dc488f8cfc0cb4a0f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
}
|
133538302
|
pes2o/s2orc
|
v3-fos-license
|
Assessment of Multimodel Ensemble Seasonal Hindcasts for Satellite-Based Rice Yield Prediction
Several pre-harvest rice yield estimation methods have often failed to accurately estimate rice yields due to weather variability. We attempted to assess the APEC Climate Center Multimodel Ensemble (APCC MME) seasonal hindcasts to a satellite-based rice yield prediction model to timely provide estimates of rice yields for efficacious intervention plans. The developed model by a multiple regression analysis is Yield = 5.635NDVI – 0.0012P9 + 0.91 (where yield is the white rice yield in t haand P9 is the observed monthly precipitation in September in mm month-). The goodness-of-fit measures were 0.66, -0.14%, 0.13 t ha-, and 2.25%, for adjusted R (coefficient of determination), Percent bias (PBIAS), Root Mean Square Error (RMSE), and Mean Absolute percentage Error (MAPE), respectively. A statistical downscaling method using Empirical Orthogonal Function Analysis (EOFA) and Singular Value Decomposition Analysis (SDVA) was used to predict monthly precipitation hindcasts in September required for the developed model. Even though the estimates of rice yield using the predicted monthly precipitation for whole study period were not as good as the estimates using the 9.15 sampling method, the estimates for the two years of 2008 and 2009, when the 9.15 sampling method largely underestimated, were better than those using the 9.15 sampling method. It is concluded that the proposed approach can be used to timely provide rice yield estimates that reflect the meteorological conditions for more effective intervention plans in the rice market.
Introduction
Rice (Oryza sativa L.) is one of the most important crops as a staple food crop for more than half of world's population. Rice yields have been estimated and reported based on various methods including sampling methods, remote sensing techniques, empirical-statistical methods, and crop growth modeling so that estimates of rice yields can be used in plans for managing supply and demand, and price stabilization of rice. Timeliness, in addition to accuracy, for rice yield estimations is very important for efficacious intervention plans (Bastiaanssen and Ali, 2003;Hayes and Decker, 1996;Reynolds et al., 2002). Traditional crop yield estimations, like a sampling method which typically collects the required data from ground-based field visits is often subjective, costly, and can contribute to appreciable errors in the estimation (Reynolds et al., 2000).
There have been studies on estimation of crops using Vegetation Indices (VIs) derived from remotely sensed data (e.g., Kogan et al., 2013;Li et al., 2011;López-Lozano et al., 2015;Mkhabela et al., 2005;Mkhabela et al., 2011;Müller et al., 2008). For example, Panda et al. (2010) estimated corn yields using Normalized Difference Vegetation Index (NDVI). Bolton and Friedl (2013) used NDVI and Enhanced Vegetation Index (EVI) to estimate corn and soybean yields. In several studies, Land Surface Temperature (LST) with NDVI was used for crop yield esti-mation (Doraiswamy et al., 2007;Prasad et al., 2007). There have been studies on the estimation of the rice yields using satellite-derived NDVI (Huang et al., 2013;Hong et al., 2012;Mkhabela et al., 2011). Since this estimation is typically made with a lead-time of several weeks before harvest, crop yields can be poorly estimated due to weather variability during this period (Hayes and Decker, 1996). Observed meteorological variables with NDVI were used to improve crop yield estimation by reducing the weather variability. This approach has been used for winter wheat yield (Heremans et al., 2015) and for rice yield estimation (Hong et al., 2012;Na et al., 2012). However, this approach using observed meteorological variables may not be appropriate for the timeliness of the estimation because the meteorological variables collected during ripening period are not available until close to harvest.
Seasonal climate forecasts may be useful to overcome this shortcoming by replacing those observed meteorological variables. Seasonal climate forecasts have been used to agricultural applications. Lobell et al. (2007) suggested that crop yield losses may be substantially reduced by accurate seasonal climate foreacasts. Multimodel ensemble (MME) techniques have been used to produce skillful seasonal forecasts by reducing uncertainties in the individual global climate model (e.g., Krishnamurti et al., 1999;Palmer and Shukla, 2000). The Asia-Pacific Economic Cooperation Climate Center (APCC) has produced 6-month lead predictions based on a fully operational MME seasonal forecast system since 2005 (Min et al., 2009).
The objectives of this study were to develop a MODIS-based model to more accurately estimate rice yields reflecting the meteorological conditions and to apply the APCC MME seasonal hindcasts for the developed model to timely provide the estimates of rice yield.
Study region and data collection
NDVI and the meteorological conditions were used to develop a rice yield estimation model in South Korea. The Korean government has reported white rice yield (hereinafter referred to as rice yield) estimates based on a sampling method (denoted by the 9.15 sampling method). Various information including the number of hill m -2 , the number of effective panicles per hill, the number of filled grains per panicle, and any damages on rice plants is collected from over 3300 sampling fields for about a week from September 15 (KREI, 2011a). Both the estimates by the 9.15 sampling method and observed rice yields in South Korea were collected from Korean Statistical Information Service (KOSIS, http://kosis.kr) for this study. MODIS (Moderate Resolution Imaging Spectroradiometer) NDVI products (MODIS tile numbers h27v04, h27v05, and h28v05) were collected from Land Processes Distributed Active Archive Center (LP DAAC, https://lpdaac.usgs.gov/) for this study. The 16-day composite NDVI products at 1-km resolution from the Aqua satellite were provided from the year 2002. Considering the sizes of rice paddy field in Korea, finer resolution (at least 1-km resolution) of NDVI products was required to accurately calculate NDVI values at rice paddy fields in Korea. The Global Inventory Modelling and Map-ping Studies (GIMMS) Normalized Difference Vegetation Index (NDVI) datasets (https://daac.ornl.gov/ISLSCP_II/guides/ gimms_ndvi_monthly_xdeg.html) might not be appropriate for this study due to their coarse resolutions (i.e., 0.25, 0.5, and 1.0 degree).
For this study, observed meteorological datasets collected from the Automated Surface Observing System (ASOS) operated by the Korea Meteorological Administration (KMA) were used to represent meteorological conditions (Lee and Lee, 2016). The 61 ASOS sites with more than 30 years of records were selected to collect observed precipitation, sunshine hours, and temperatures (Fig. 1), which are widely known to influence crop yields. These meteorological variables for August, September, and October, and their three-month averages, were collected to cover the period from the representative heading stage through the ripening stage. In 2009, mid-late maturing rice cultivars accounted for approximately 84.4% of the total rice paddy area in Korea, followed by early maturing cultivars (about 10.9%) and mid maturing cultivars (about 4.7%). Odaebyeo and Ungwangbyeo are the major early maturing cultivars in Korea, Hwahyeongbyeo and Surabyeo the major mid maturing cultivars, and Dongjin1ho, Chucheongbyeo, Nampyeongbyeo, Junambyeo, and Ilmibyeo the major mid-late maturing cultivars. The heading stages of the major (mid-late maturing) cultivars in Korea generally fall between Aug. 12 and Aug. 23, and harvest is generally recommended 50 to 55 days after the heading stage for these cultivars. The representative heading stage of the major cultivars in Korea is Aug. 20 (Lee et al., 2010).
Datasets from 1983 to 2013 were collected for this study, given the availability of the NDVI products (2002 to 2013) and the APCC MME hindcasts (1983 to 2010). The NDVI products and the observed meteorological variables from 2002 to 2013 were used to develop the rice yield estimation model. The APCC MME hindcasts from 1983 to 2010 were used to statistically predict the selected meteorological variables through a statistical downscaling method. To investigate the applicability of the APCC MME hindcasts, the period from 2002 to 2010, when both the NDVI products and the APCC MME hindcasts were available, was selected for this evaluation.
Rice yield estimation model
To calculate NDVI at rice paddy fields in Korea, a mask map (30-m resolution) of rice paddy fields was extracted from a land use map provided by EGIS (Environmental Geographic Information System, http://egis.me.go.kr). Only a 0.0006% change in the area of rice paddy fields was observed after the vector map of rice paddy fields was converted into a 30-m raster format. However, the mask map does not reflect any changes in land use over the study period, and an inaccurate mask map introduces errors into the rice yield estimation; a more accurate rice paddy mask map covering the study period should therefore be developed in future work to improve the estimation model. The NDVI values at each rice paddy grid cell (30-m resolution) of this mask map were extracted using the ArcGIS software package (ESRI, 2007) and averaged over the country. Since the 30-m mask consists of 30 m by 30 m rice paddy cells, all NDVI values overlapping these cells contributed to the national average of rice paddy NDVI.
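As a minimal sketch of this masking-and-averaging step (assuming the paddy mask and the NDVI grid have already been co-registered onto a common grid as NumPy arrays; all array names here are hypothetical):

import numpy as np

def mean_paddy_ndvi(ndvi, paddy_mask, fill_value=-3000):
    """Average scaled MODIS NDVI over rice paddy cells.

    ndvi       : 2-D integer array of MODIS NDVI, co-registered with the mask
                 (MODIS VI products store NDVI * 10000; fill_value marks no-data)
    paddy_mask : 2-D boolean array, True where the land-use map labels the
                 cell as rice paddy
    """
    valid = paddy_mask & (ndvi != fill_value)
    return ndvi[valid].mean() * 1e-4  # undo the 0.0001 MODIS scale factor

# Hypothetical usage: one national average per 16-day composite.
# country_ndvi = mean_paddy_ndvi(ndvi_doy233, paddy_mask)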
A multiple regression analysis was used to develop the rice yield estimation model, with NDVI and meteorological variables (precipitation, maximum and minimum temperatures, and sunshine hours) as the predictor variables. The annual rice yields for South Korea from 2002 to 2013 were used as the response variable (i.e., the number of samples = 12). The PROC REG procedure of the SAS software package (The SAS System for Windows, 9.2, SAS Institute Inc., Cary, NC, USA) was used for this analysis, with its stepwise selection method used to select the predictor variables; this stepwise process retains predictor variables with an F statistic significant at the 0.15 level (SAS Institute Inc., 2011). The Leave-One-Out cross-validation method was used to assess the robustness of the model.
RMSE = sqrt[(1/n) Σ (O_i - P_i)^2] (1)

where O_i is the observed value and P_i is the predicted value.
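A minimal sketch of this fitting and leave-one-out procedure, using scikit-learn rather than the SAS PROC REG pipeline actually employed (the arrays are hypothetical placeholders):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def loocv_rmse(X, y):
    """Leave-one-out cross-validated RMSE of a linear yield model."""
    errors = []
    for train, test in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train], y[train])
        errors.append((y[test] - model.predict(X[test])).item())
    return float(np.sqrt(np.mean(np.square(errors))))

# Hypothetical 12-year sample (2002-2013):
# columns = [NDVI on DOY 233, September precipitation]
# X = np.column_stack([ndvi_doy233, p_sep]); y = observed_yields
# print(loocv_rmse(X, y))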
Prediction of the predictor variables
To apply the APCC MME hindcasts to the developed rice yield estimation model, a statistical downscaling method was used to predict the selected predictor variables, replacing the observed meteorological variables in the model. The APCC MME hindcasts were used so that rice yield estimates could be provided in time for intervention planning. Although 17 Global Circulation Models (GCMs) are generally used in the APCC MME technique, we selected the GCMs with the longest records since 2002, when the MODIS NDVI products from the Aqua satellite became available. Six GCMs were selected: APCC-CCSM3 (APCC, Korea), MSC_CANCM3 (MSC, Meteorological Service of Canada, Canada), MSC_CANCM4 (MSC, Canada), NASA (NASA GSFC, National Aeronautics and Space Administration Goddard Space Flight Center, USA), PNU (Pusan National University, Korea), and POAMA (BOM, Bureau of Meteorology, Australia). These six dynamical seasonal prediction models are summarized in Table 1, where "T" denotes the spectral resolution and "L" the number of model levels. A simple composite method (SCM), an ensemble method assigning equal weights to each GCM (Kang et al., 2009; Lee et al., 2011; Peng et al., 2002), was used to construct the multimodel ensemble predictions; the performance of this equal-weighting scheme is comparable to that of the best available operational MME techniques (Peng et al., 2002; Lee et al., 2009). However, the APCC MME hindcasts do not currently provide monthly precipitation, sunshine hours, or maximum and minimum temperatures. A statistical downscaling method using Empirical Orthogonal Function Analysis (EOFA) and Singular Value Decomposition Analysis (SVDA) was therefore applied to statistically predict these variables, which replace the selected predictor variables in the rice yield estimation model developed with NDVI and the observed meteorological variables. To reconstruct the time series of the large-scale variables, their respective EOF modes and principal components were employed as a noise-filtering technique in the statistical downscaling method. SVDA was then applied to obtain coupled modes between the large and station scales. The following downscaling transfer function was constructed:
PR_j(t, x) = Σ_{i=1}^{n} S_i(t) R_i(x)

where PR_j(t,x) is the downscaled prediction, S_i(t) is the time expansion coefficient of the ith SVD mode for the large-scale predictor, and R_i(x) is the singular vector of the predictand in the ith mode. For this study, n is equal to 10 (i.e., the first 10 leading modes of the large-scale variables were retained). The skill of the downscaling method was evaluated through leave-one-out cross-validation.
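A rough numerical sketch of this coupled-mode construction is given below; the variable names are hypothetical, and the EOF prefiltering and the calibration of the forecast expansion coefficients used in the operational scheme are omitted:

import numpy as np

def svd_downscale(large, station, new_large, n_modes=10):
    """Downscale a large-scale anomaly field to station scale via SVD.

    large     : (t, g) large-scale predictor anomalies (e.g., U200) over t years
    station   : (t, x) station predictand anomalies (e.g., Sep precipitation)
    new_large : (g,)  large-scale anomaly field to be downscaled (the hindcast)
    """
    C = large.T @ station / large.shape[0]     # (g, x) temporal cross-covariance
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    L, R = U[:, :n_modes], Vt[:n_modes]        # leading coupled modes
    S = new_large @ L                          # expansion coefficients S_i
    return S @ R                               # PR(x) = sum_i S_i * R_i(x)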
Through this statistical downscaling method, the station-scale meteorological predictand variables required for the rice yield estimation model were statistically predicted from large-scale atmospheric predictor variables. More detailed information on the statistical downscaling method can be found in Kim et al. (2004) and Chu et al. (2008). The predictors used in this study were SLP (Sea-Level Pressure), T2M (temperature at 2 m), T850 (850 hPa temperature), U200 (200 hPa zonal wind), U850 (850 hPa zonal wind), V200 (200 hPa meridional wind), V850 (850 hPa meridional wind), and Z500 (500 hPa geopotential height).
Rice yield estimation model
We investigated the correlation coefficients between NDVI on day-of-year (DOY) 201, DOY 217, and DOY 233 and rice yields from 2002 to 2013. The highest correlation coefficient (approximately 0.7) was found for DOY 233, followed by DOY 217 (approximately 0.3) and DOY 201 (approximately -0.2). This result is in substantial agreement with Hong et al. (2012), who reported a correlation coefficient of about 0.62 between NDVI on DOY 233 and rice yields. This day is close to the representative heading stage (Aug. 20) of the mid-late maturing rice cultivars that are the major cultivars in Korea (Hong et al., 2012).
The rice yield estimation model (eq. (5)) was developed using NDVI and the observed meteorological datasets from 2002 to 2013 and was compared with a model using only NDVI (eq. (6)) to investigate whether including the observed meteorological variables improves on the NDVI-only model. The annual rice yields from 2002 to 2013 (i.e., the number of samples = 12) were used as the response variable for both models.

Yield = 5.635 NDVI - 0.0012 P9 + 0.91 (5)

Yield = 6.983 NDVI - 0.294 (6)

where Yield is the rice yield in t ha⁻¹, NDVI is the 1-km resolution NDVI on DOY 233, and P9 is the observed monthly precipitation in September in mm month⁻¹. The adjusted R² values were approximately 0.66 for the model using NDVI and the observed meteorological variables (eq. (5)) and 0.43 for the model using only NDVI (eq. (6)). The adjusted R² for eq. (5) was slightly lower than the R² of 0.80 reported by Hong et al. (2012) and higher than the adjusted R² of 0.37 reported by Na et al. (2012). These equations show that NDVI is positively correlated with rice yields, while monthly precipitation in September is negatively correlated with rice yields, in substantial agreement with Hong et al. (2012) and Na et al. (2012). The P-values were 0.0031 and 0.0126 for eqs. (5) and (6), respectively. The RMSE between the observed and estimated (using eq. (5)) rice yields was 0.13 t ha⁻¹, while that between the observed rice yields and those estimated via the Leave-One-Out cross-validation method was 0.21 t ha⁻¹. The RMSE between the rice yields estimated using eq. (5) and those from the Leave-One-Out cross-validation was 0.13 t ha⁻¹.

Table 2. Goodness-of-fit measures for the regression models against observed rice yields.
PBIAS, RMSE, and MAPE were used to evaluate the developed model (eq. (5)) against the observed rice yields; the results are summarized in Table 2. The performance of the rice yield estimation model using both NDVI and monthly precipitation in September (eq. (5)) was higher than that of the model using only NDVI (eq. (6)). However, the MAPE of the model of eq. (5) was slightly higher than that of the 9.15 sampling method. The largest PBIAS (approximately 0.97%) was found in the 9.15 sampling method, while the largest RMSE (about 0.19 t ha⁻¹) and MAPE (about 2.79%) were found in the model of eq. (6). Fig. 2 compares observed and estimated rice yields (a) and residual rice yields (b). The estimates by the model of eq. (5) were better than those by eq. (6). Interestingly, the errors of the rice yields estimated by the model using only NDVI (i.e., sim. (VI) in Fig. 2a) were relatively high in the two years when the estimation errors of the 9.15 sampling method were also high. The Korea Rural Economic Institute (KREI) reported that the errors in 2008 and 2009 arose because the meteorological conditions after the sampling time were not properly reflected in the 9.15 sampling method (KREI, 2011a; 2011b).
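The two fitted models and the three goodness-of-fit measures are straightforward to express in code. The metric definitions below are the standard formulations of PBIAS, RMSE, and MAPE, assumed here because the original equations did not survive extraction:

import numpy as np

def yield_eq5(ndvi, p_sep):
    """Eq. (5): yield (t/ha) from NDVI on DOY 233 and Sep precipitation (mm/month)."""
    return 5.635 * ndvi - 0.0012 * p_sep + 0.91

def yield_eq6(ndvi):
    """Eq. (6): NDVI-only model."""
    return 6.983 * ndvi - 0.294

def pbias(obs, pred):
    """Percent bias (%)."""
    obs, pred = np.asarray(obs), np.asarray(pred)
    return 100.0 * np.sum(pred - obs) / np.sum(obs)

def rmse(obs, pred):
    """Root-mean-square error (t/ha)."""
    obs, pred = np.asarray(obs), np.asarray(pred)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def mape(obs, pred):
    """Mean absolute percentage error (%)."""
    obs, pred = np.asarray(obs), np.asarray(pred)
    return 100.0 * float(np.mean(np.abs((pred - obs) / obs)))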
In the year 2007, the model of eq. (6) overestimated the rice yield by as much as 0.44 t ha⁻¹, whereas the difference between the observed and estimated rice yields was only 0.12 t ha⁻¹ for the model of eq. (5). These results can be explained by the monthly precipitation anomaly in 2007 (Fig. 5a). Since the monthly precipitation anomaly in September is negatively correlated with rice yield, the large positive anomaly (approximately 240.9 mm month⁻¹) in 2007 likely offset the error of the model using only NDVI. Typhoon Nari was the main contributor to this large positive anomaly (KREI, 2011b). Hayes and Decker (1996) reported that weather variability can cause poor estimates in such pre-harvest yield estimation methods. These results imply that accounting for the meteorological conditions after the time of estimation can improve the performance of rice yield estimation models.
Application of the APCC MME hindcasts
As shown in Fig. 3, the correlation coefficients between observed and predicted precipitation ranged from 0.29 to 0.54 across the eight large-scale atmospheric predictor variables (i.e., SLP, T2M, T850, U200, U850, V200, V850, and Z500). We selected the large-scale atmospheric predictor variable that predicted the station-scale predictand (monthly precipitation anomalies in this study) with the highest correlation coefficient. The highest correlation between observed and predicted monthly precipitation in September was obtained when U200 was used as the large-scale predictor (Fig. 3). The correlation coefficient between the observed monthly precipitation and the hindcasts over the study region increased from 0.35 to 0.54 after the statistical downscaling was applied. Fig. 4 shows the spatial patterns of the first SVD mode for U200 (a) and station precipitation in September (b), and the time series of expansion coefficients corresponding to the first SVD mode for U200 and precipitation (c). As shown in Fig. 4c, the correlation coefficient between the expansion coefficients is 0.91 for the leading SVD mode, and this mode accounts for 23.6% of the total covariance. The U200 pattern (Fig. 4a) shows changes in the upper-level jet and a north-south oriented pattern around the Korean peninsula. Several positive centers of U200 were found over China (about 80°E-100°E) and east of Japan (about 150°E), and positive anomalies were found over the Korean peninsula (Fig. 4b).
The monthly precipitation anomalies were averaged over all stations for the period 2002 to 2010 and are depicted in Fig. 5. The observed (Fig. 5a) and predicted (Fig. 5b) monthly precipitation anomalies have the same sign, with slightly different magnitudes, except in 2003. However, these anomalies could not directly replace the observed monthly precipitation in September in the rice yield estimation model (eq. (5)). To replace the observed monthly precipitation, we added the anomalies to the climatology, defined as the long-term average of a given variable (here, the average monthly precipitation in September between 1983 and 2013). This approach is widely used because it is simple and has a bias-correcting effect (Reynolds et al., 1998; White and Toumi, 2013; Xu and Yang, 2012).
These predicted monthly precipitation values for September were applied to eq. (5) to provide timely rice yield estimates for effective intervention plans in the rice market. For the study period (2002 to 2010), the PBIAS, RMSE, and MAPE of the model using the predicted instead of the observed monthly precipitation in September were 0.42%, 4.35%, and 0.24 t ha⁻¹ respectively for PBIAS, MAPE, and RMSE, similar to the values for eq. (6) (Table 2). While the PBIAS was lower than that of the 9.15 sampling method, the MAPE was higher. The estimation errors of eq. (5) with predicted precipitation were consistently higher than those of eq. (5) with observed precipitation. Improving the skill of the MME seasonal prediction for the period including September would clearly reduce these errors; it is therefore suggested that the MME prediction skill for this period be improved to increase the effectiveness of the rice yield estimation model driven by predicted meteorological variables.
However, the MAPE for 2008-2009, when the 9.15 sampling method largely underestimated rice yields, was 2.38%. This value was lower than that of the 9.15 sampling method (4.80%) and that of the model of eq. (6) (3.74%) for those two years. This result implies that the proposed model with the APCC MME seasonal forecasts can be used to estimate rice yields at the end of August (i.e., earlier than the 9.15 sampling method). To estimate rice yields more accurately using the proposed approach (i.e., the model of eq. (5) with the APCC MME seasonal forecasts), both the rice yield estimation model and the MME prediction skill should be improved for more effective intervention plans in the rice market.
Conclusions
In this study, we developed a rice yield estimation model using NDVI and observed meteorological variables, and applied the APCC MME hindcasts to the model. Since accurate calculation of NDVI over rice paddy fields contributes to better yield estimation, a more accurate rice paddy mask map covering the study period is suggested for improving the model. Through multiple linear regression with stepwise variable selection, monthly precipitation in September was selected from among the candidate meteorological variables, which also included sunshine hours and maximum and minimum temperatures. The model was evaluated by comparing its estimates with the observed rice yields, with those of the NDVI-only model, and with those of the 9.15 sampling method. The performance of the model using both NDVI and monthly precipitation in September was better than that of the NDVI-only model and that of the 9.15 sampling method. Even though the rice yield estimates obtained by applying the APCC MME hindcasts to the model of eq. (5) were not as good as those of the 9.15 sampling method over 2002 to 2010, the estimates for the two years (2008 and 2009) in which the 9.15 sampling method largely underestimated rice yields were better than those of the 9.15 sampling method. This study suggests that both the rice yield estimation model and the MME prediction skill should be improved to estimate rice yields more accurately and provide them in a more timely fashion. It is concluded that the proposed approach can be useful for providing timely rice yield estimates that reflect meteorological conditions, supporting the development of more effective intervention plans in the rice market.
Does the Somatosensory Temporal Discrimination Threshold Change over Time in Focal Dystonia?
Background The somatosensory temporal discrimination threshold (STDT) is defined as the shortest interval at which an individual recognizes two stimuli as asynchronous. Some evidence suggests that STDT depends on cortical inhibitory interneurons in the basal ganglia and in primary somatosensory cortex. Several studies have reported that the STDT in patients with dystonia is abnormal. No longitudinal studies have yet investigated whether STDT values in different forms of focal dystonia change during the course of the disease. Methods We designed a follow-up study on 25 patients with dystonia (15 with blepharospasm and 10 with cervical dystonia) who were tested twice: upon enrolment and 8 years later. STDT values from dystonic patients at the baseline were also compared with those from a group of 30 age-matched healthy subjects. Results Our findings show that the abnormally high STDT values observed in patients with focal dystonia remained unchanged at the 8-year follow-up assessment whereas disease severity worsened. Conclusions Our observation that STDT abnormalities in dystonia remain unmodified during the course of the disease suggests that the altered activity of inhibitory interneurons—either at cortical or at subcortical level—responsible for the increased STDT does not deteriorate as the disease progresses.
Introduction
Dystonia is a movement disorder characterized by sustained or intermittent muscle contractions that cause abnormal, often repetitive, movements and postures [1,2]. Depending on its distribution in the body, dystonia is classified under generalized, segmental, and focal forms, with the last being the most common in adult patients [1,2]. Although the underlying pathophysiological mechanisms of dystonia are still debated, a large body of evidence suggests that reduced inhibitory activity at various levels of the central nervous system and altered cortical plasticity are involved [3][4][5][6][7][8].
The somatosensory temporal discrimination threshold (STDT) is the shortest interval at which an individual recognizes a pair of stimuli as separated in time [9,10], and previous studies have shown that the STDT depends on the integrated activity of an extensive network that includes sensory cortex and basal ganglia [11][12][13][14]. Consistent findings have also shown that the STDT in patients with focal and generalized dystonia is abnormal [15][16][17][18].
In healthy subjects, inhibitory interneurons in primary somatosensory cortex play a role in STDT by sharpening and focusing sensory information in the temporal domain [13,22]. Several authors have suggested that an abnormal activity of inhibitory interneurons in S1 is likely to be responsible for the increased STDT values in dystonia [23][24][25]. No longitudinal studies have yet investigated whether STDT values in focal dystonia change during the course of the disease. A better understanding of this issue may shed light on the pathophysiological mechanisms underlying STDT abnormality. To this end, we designed a follow-up study to investigate whether the STDT values of patients with different forms of focal dystonia change during the course of the disease. For this purpose, we tested a group of patients with focal dystonias (blepharospasm and cervical dystonia) twice: the first time upon enrolment and the second time 8 years later. STDT values from dystonic patients at baseline were compared with those from a group of age-matched healthy subjects. We also investigated possible correlations between changes in STDT values and changes in disease severity.
Methods
Twenty-five patients with primary focal dystonia (15 with blepharospasm and 10 with cervical dystonia) (Table 1) were enrolled in the study from the outpatient clinic of movement disorders, Department of Neurology and Psychiatry, Sapienza University of Rome. Thirty age-matched healthy subjects (age: 59 ± 13 years) were enrolled as controls. All dystonic patients were studied 4 months after the last botulinum toxin injection. Information regarding the patients' demographic features, medical and family histories, disease course, and treatment was collected during a face-to-face interview (Table 1). Since STDT testing assesses a psychometric function, it yields reliable data only in the absence of cognitive impairment or overt psychiatric conditions. Exclusion criteria for this study were therefore a medical history of psychiatric conditions and a Frontal Assessment Battery (FAB) score lower than 15. To rate disease severity, we used a three-point clinical scale (1 = mild, 3 = severe) for blepharospasm [15,19] and the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS) [26] for cervical dystonia. The study was approved by the local institutional review board and performed in accordance with the Declaration of Helsinki.
The STDT was investigated by delivering paired stimuli starting with an interstimulus interval of 0 ms (simultaneous pair) and progressively increasing the interstimulus interval in 10 ms steps, according to the experimental procedures used in previous studies [15,19,[27][28][29]. Paired tactile stimuli consisted of square wave electrical pulses delivered with a constant current stimulator (Digitimer DS7AH) through surface skin electrodes, with the anode located 0.5 cm distal to the cathode. Since the body part affected by dystonia differs between blepharospasm and cervical dystonia, we tested STDT values on the volar surface of the right index finger in order to compare STDT values between groups at the same body part. The stimulation intensity was defined for each subject by delivering a series of stimuli at an intensity that increased in 0.5 mA steps starting from 2 mA; the intensity used for the STDT was the minimum intensity perceived by the subject in 10 of 10 consecutive stimuli. The first of three consecutive interstimulus intervals at which the participant recognized the stimuli as temporally separated was taken as the threshold for that run, and the STDT entered in the data analysis was the average of three such values. The STDT was tested and measured by neurophysiologists who were blinded to the clinical assessment, both at baseline and 8 years later.
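The ascending procedure can be written as a short simulation. The sketch below illustrates the published rule against a hypothetical observer model; it is not the acquisition software used in the study, and the threshold and noise parameters are invented for illustration:

import random

def perceives_separate(isi_ms, true_threshold=100.0, noise_sd=10.0):
    """Hypothetical observer: reports 'separate' above a noisy internal threshold."""
    return isi_ms >= true_threshold + random.gauss(0.0, noise_sd)

def stdt_run(step_ms=10):
    """One ascending staircase: the ISI starts at 0 ms and grows in 10 ms steps;
    the first of three consecutive 'separate' responses defines the threshold."""
    isi, consecutive, first_separate = 0, 0, None
    while consecutive < 3:
        if perceives_separate(isi):
            consecutive += 1
            if consecutive == 1:
                first_separate = isi   # start of the current run of 'separate'
        else:
            consecutive = 0
        isi += step_ms
    return first_separate

def stdt(n_runs=3):
    """STDT entered in the analysis: the average of three staircase runs."""
    return sum(stdt_run() for _ in range(n_runs)) / n_runs

print(stdt())  # e.g., ~100 ms for this simulated observer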
Statistical Analysis.
We first compared STDT values in dystonic patients upon enrolment with those from a group of age-matched healthy controls using an unpaired sample t-test. We then ran a paired sample t-test to evaluate changes in clinical scores and STDT values between enrolment and the 8-year follow-up assessment in patients with dystonia. To evaluate whether STDT values changed to a different extent in patients with blepharospasm and cervical dystonia across the two assessments, we also ran a between-group repeated measures ANOVA with factors GROUP (blepharospasm versus cervical dystonia) and TIME (two levels: enrolment and 8-year FU). Spearman's correlation coefficient was used to evaluate any relationships between clinical and neurophysiological variables.
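As a minimal SciPy sketch of the two main comparisons (the arrays are hypothetical placeholders; the repeated measures ANOVA, available in packages such as statsmodels, is omitted):

from scipy import stats

# Hypothetical paired STDT values (ms) at enrolment and at the 8-year follow-up
stdt_baseline = [96, 110, 123, 88, 104, 131, 99, 117]
stdt_followup = [99, 108, 120, 92, 101, 135, 97, 119]
severity_change = [1, 2, 1, 0, 2, 1, 1, 2]  # hypothetical severity-score changes

t, p = stats.ttest_rel(stdt_baseline, stdt_followup)          # paired sample t-test
stdt_change = [f - b for b, f in zip(stdt_baseline, stdt_followup)]
rho, p_rho = stats.spearmanr(stdt_change, severity_change)    # Spearman correlation
print(f"paired t = {t:.2f} (p = {p:.3f}); rho = {rho:.2f} (p = {p_rho:.3f})")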
Results
The unpaired sample t-test comparing STDT values in patients upon enrolment with those in healthy subjects showed that STDT values in dystonic patients were higher than those in healthy subjects (p < 0.001) (Figure 1). When we compared the clinical severity scores at the first evaluation with those at the 8-year follow-up evaluation, the paired sample t-test revealed a significant increase in disease severity scores in patients (blepharospasm: p = 0.007; cervical dystonia: p = 0.008) (Table 1). Only 1 patient with cervical dystonia and 7 patients with blepharospasm had spread to other body parts (upper limb in the patient with cervical dystonia and oromandibular dystonia in those with blepharospasm) at the follow-up assessment.
The paired sample t-test performed to investigate any changes in STDT values in patients between the baseline evaluation and the 8-year follow-up evaluation showed that STDT values remained unchanged (baseline: 106 ± 25 ms versus 8-year follow-up: 107 ± 32 ms; p = 0.83) (Figure 1). A paired sample t-test on the 8 patients with clinical signs of spread likewise showed no significant change in STDT values at follow-up (p = 0.85).
Repeated measures ANOVA to evaluate whether STDT changed differently in patients with blepharospasm and cervical dystonia across the two assessments showed neither a significant factor TIME (F = 0.04, p = 0.83) nor a significant GROUP × TIME interaction (F = 0.001, p = 0.97) (Figure 1). Spearman's correlation coefficient did not disclose any significant relationship between STDT values and changes in disease severity scores.
Discussion
This is the first longitudinal study, based on an 8-year follow-up, to evaluate STDT values during the course of disease in patients with blepharospasm and cervical dystonia. The novel finding of our study is that the abnormally increased STDT values observed in patients with focal dystonias remained unchanged at the 8-year follow-up assessment, whereas disease severity worsened.
We took several precautions to ensure that the data we obtained were reliable. The neurophysiologist who tested the patients' STDT was blind to the clinical assessment, and the investigators who performed the clinical assessment were not informed of the purpose of the study. Since botulinum toxin leaves STDT values unchanged in dystonic patients [29], but is known to affect disease severity scores, the assessments both upon enrolment and at follow-up were conducted at least 4 months after the last botulinum toxin injection.
A recent study on the effect of aging on STDT measurements showed that STDT values increase with aging [30]. In a large sample of healthy subjects, Ramos et al. [30] found that the STDT increases by 0.66 ms every year in subjects older than 65 years. Therefore, the dystonic patients, who were on average aged 60 years upon enrolment and 68 years at the follow-up assessment, may have been subject to age-related changes in STDT values. However, our findings only appear to be in contrast with this observation. If we bear in mind that the interstimulus interval during STDT testing is increased in 10 ms steps whereas the STDT increases spontaneously by 0.66 ms every year after the age of 60 years, a 10-year age increase would yield only a 6.6 ms increase in STDT values. Thus, with the interstimulus interval increased in 10 ms steps, as in our procedure, age-related increases in STDT values only start having an effect well after 10 years.
Owing to the psychophysical nature of STDT testing, an altered STDT in dystonia may be caused by behavioural/attentional dysfunctions or psychiatric conditions, both of which are known to occur in patients with dystonia [5,20,[31][32][33]. Since the STDT relies on the activity of the basal ganglia combined with that of several cortical areas, including the prefrontal areas, and since covert attentional deficits or mood disorders may be responsible for increased STDT values, we expected the dystonia patients' STDT values to change when tested at the follow-up. Our findings showing that the STDT values remained unmodified 8 years after the first assessment contradict this hypothesis.
Previous studies on healthy subjects have demonstrated that STDT values are modulated by plasticity mechanisms in S1 induced by repetitive transcranial magnetic stimulation at the cortical level [13,22]. A recent study also showed that high-frequency electrical stimulation of an area of skin on a finger improves tactile temporal discrimination and that the improvement is reversed within 24 hours [34]. The authors of that study concluded that the perceptual effects on the STDT they observed are likely to be dependent on plastic changes in the somatosensory cortex, which is in accordance with the concept that the timing of sensory stimuli is, at least in part, encoded in the primary somatosensory cortex. In keeping with this hypothesis, we have recently observed [35] that the temporal discriminative acuity of tactile stimuli is affected by the number of stimuli in the task and suggested that stimulus-driven rapid plasticity is the main mechanism underlying somatosensory temporal encoding in S1.
Investigating the neurophysiological correlates of abnormal somatosensory temporal discrimination in dystonia, Antelmi et al. [25] reported that STDT values were increased in dystonic patients and were associated with reduced suppression of cortical and subcortical paired-pulse somatosensory evoked potentials as well as with a smaller area of the high-frequency oscillation early component. Overall, these findings point to a reduced activity in dystonic patients of the inhibitory interneurons within the primary somatosensory cortex although a possible contribution of altered inhibitory activity in the basal ganglia cannot be excluded.
Our observation that the STDT did not change either in patients with blepharospasm or in those with cervical dystonia, with or without clinical signs of spread, at the follow-up assessment suggests that STDT abnormalities in dystonia reflect a background alteration in inhibitory mechanisms. We hypothesize that this alteration may be considered a "fingerprint" that remains stable over time and is a predisposing factor for the disease. Since cortical plasticity mechanisms rely on a dynamic balance between excitatory and inhibitory interneurons [36,37], it is conceivable that altered inhibitory interneuron activity may contribute to other pathophysiological mechanisms in dystonia, such as aberrant cortical plasticity [3,8].
Our findings that STDT values are unrelated to the severity of motor disturbances and that they do not change after 8 years, despite the progression in dystonia severity, suggest that an abnormal STDT is not a marker of disease progression but an endophenotypic marker of the disease. Along the same lines, STDT changes are already present when dystonic features are not yet manifest in patients with increased blinking, a condition now considered a prodromal manifestation of blepharospasm [38,39]. In contrast, in other basal ganglia conditions, such as Parkinson's disease, STDT changes reflect dopaminergic depletion [40,41] and disease progression [42,43].
A limitation of the present study is the lack of a control group at follow-up. However, since STDT values in dystonic patients were already altered at baseline and remained unmodified at follow-up, the lack of a control group at follow-up is unlikely to affect the interpretation of our findings.
In conclusion, the results of our study showing that STDT abnormalities in dystonia remain unmodified during the course of the disease suggest that the abnormal activity of inhibitory interneurons does not deteriorate further as the disease progresses.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Recent advances of multimodal ultrasound in image-guided prostate-targeted biopsy
Prostate-targeted biopsy is usually preferred over systematic biopsy because it can effectively detect prostate cancer using only a few puncture cores and with fewer complications. With the development of ultrasound, the modality has gained multimodal technological upgrades, such as contrast-enhanced ultrasound, ultrasound elastography, and three-dimensional ultrasonography. As a result, multimodal ultrasound has played an increasingly significant role in prostate-targeted biopsies.
Two-dimensional transrectal ultrasonography (2D-TRUS)
Two-dimensional transrectal ultrasonography (2D-TRUS) is the most common ultrasound technique, encompassing gray-scale ultrasound and color or power Doppler ultrasound. With gray-scale ultrasound, the prostate shape, capsule, and internal echo can be carefully observed, and asymmetrical focal hypoechogenicity is usually indicative of PCa. Color or power Doppler ultrasound can additionally be used to assess prostate blood flow signals. If abnormal hypoechogenicity or blood flow signals are found, an image-guided targeted biopsy can be performed (Fig. 1, A-C). Although 2D-TRUS is routinely applied in prostate disease diagnosis because it is easy to learn and control, its relatively low sensitivity for PCa localization limits its independent use in targeted biopsies. 6 Moreover, its low resolution of detailed anatomical structures also reduces the positive biopsy rate. 7 Many researchers have recommended adding puncture cores to 2D-TRUS-targeted biopsy to improve the rate of PCa detection, yet some PCa lesions may still be missed even when a "saturation" biopsy approach with a large number of cores is employed. 8 Nevertheless, 2D-TRUS retains an important function in guiding MRI-targeted biopsies in real time.
MRI-ultrasound fusion/cognitive targeted biopsy (MRI-TBx)
The value of multiparametric magnetic resonance imaging (mp-MRI) for identifying PCa lesions has been verified. 9,10 Many international guidelines already recommend prebiopsy mp-MRI examinations, as mp-MRI has been shown to improve the detection rate of PCa and to reduce unnecessary biopsies by 25%. 11 The strong performance of mp-MRI in PCa detection further established MRI-guided targeted biopsy as a possible option. However, some intrinsic shortcomings (e.g., complex operations, specialized settings, low availability, and high cost) have restricted the widespread use of MRI-guided targeted biopsies. Therefore, to offset the limitations of mp-MRI, 2D-TRUS has attracted attention as a feasible complement and is fused with MRI to guide targeted biopsy (Fig. 2). Moreover, given the complicated and time-consuming process of image fusion, MRI-ultrasound cognitive targeted biopsy is gradually being adopted, although its PCa detection rate largely depends on the experience of the operator, which introduces some variability. 12 As MRI-ultrasound fusion/cognitive targeted biopsy (MRI-TBx) has become increasingly recognized in clinical practice, 2D-TRUS has become an essential part of prostate biopsy. Despite the growing acknowledgment of mp-MRI, there are still some limitations, such as moderate specificity and variable negative predictive values of 63%-98%. 11,13 In addition, mp-MRI may miss some PCa lesions or even characterize them as benign, at a rate of approximately 58%. 14 Furthermore, biopsy has revealed PCa in 5%-15% of men with negative MRI findings. 9,15 Therefore, it is essential to combine MRI-TBx with multimodal ultrasound to diagnose PCa more accurately.
Contrast-enhanced transrectal ultrasonography (CE-TRUS)
Contrast-enhanced transrectal ultrasonography (CE-TRUS) has been widely used in prostate disease diagnosis in recent years because it can dynamically display blood perfusion and vascularity, especially the micro-nourishing vessels of tumors that may not have sufficient native flow to be detected by conventional color or power Doppler ultrasound. 16 Focal asymmetric hyperenhancement during the early phase on CE-TRUS is usually indicative of PCa and can mark a lesion for targeted biopsy (Fig. 1, D). In addition, owing to the angiogenesis of PCa at an early stage, CE-TRUS can be highly sensitive for lesion localization. Compared to 2D-TRUS, CE-TRUS is better at detecting tumor blood flow early and can improve the accuracy of ultrasound. 17 Therefore, CE-TRUS is widely applicable to prostate-targeted biopsy. The positive rate of CE-TRUS-targeted biopsy has been shown to be higher than that of systematic biopsy, especially for PCa with a higher Gleason score. 18 Some studies have shown that combining CE-TRUS-targeted biopsy with systematic biopsy can further improve the detection rate. 19 However, some drawbacks of CE-TRUS should be noted, including increased cost, the inability to scan the whole prostate during one administration of the contrast agent, contrast agent allergy, and time consumption. Moreover, CE-TRUS has limitations in detecting extensive tumors with relatively low Gleason scores. 19
Transrectal shear wave elastography (SWE)
The proportion of tumor-associated extracellular matrix proteins increases during PCa formation, contributing to the increased tissue stiffness of the PCa region. 20 Transrectal shear wave elastography (SWE) can be used to qualitatively and quantitatively analyze prostate tissue stiffness. The PCa region is generally stiffer than the surrounding normal tissue; thus, abnormally stiff regions in the prostate can be characterized for targeted biopsy (Fig. 1, E). SWE can display a red-coded, markedly stiff area suspicious for PCa that 2D-TRUS fails to identify, and it can additionally characterize approximately 60% of the clinically significant PCa missed by MRI; combining SWE with MRI can improve the detection rate of PCa by 10%. 21 Compared to systematic biopsy, SWE-targeted biopsy can improve the detection rate of PCa. Moreover, SWE can effectively predict PCa extracapsular extension, although negative findings cannot entirely exclude it. 22 However, the instability of SWE and its dependence on operators limit its potential for wide clinical use. 23 In addition, SWE is usually used to assess the stiffness of the outer gland, and it is difficult to accurately measure the stiffness of the inner gland.
Transrectal real-time strain elastography (TRSE)
Transrectal real-time strain elastography (TRSE) is another technique used to analyze tissue stiffness. Unlike SWE, TRSE is performed by applying additional pressure on tissues and assessing stiffness from the degree of tissue deformation. 24 Similarly, abnormal regions detected by TRSE in the prostate, especially asymmetrically stiffer regions, can be characterized for targeted biopsy (Fig. 1, F). Previous research has shown that TRSE-targeted biopsy can improve the detection rate of PCa by 18.3%-24.8%, and combining TRSE-targeted biopsy with systematic biopsy can improve the negative predictive value for high-risk PCa from 79% to 97%. 25 Moreover, the study by Kamoi et al. proposed a "TRSE 5-point" method, taking 3 points as the cut-off value to differentiate the presence from the absence of PCa; the sensitivity, specificity, and accuracy were 68%, 81%, and 76%, respectively. 26 Similar to SWE, TRSE can also accurately predict the extracapsular extension of PCa. However, TRSE has limitations in detecting small PCa lesions and may miss PCa with a low Gleason score. 27 In addition, TRSE's high dependence on operators limits its ability to identify suspicious lesions consistently. 28
Three-dimensional transrectal ultrasonography (3D-TRUS)
Three-dimensional transrectal ultrasonography (3D-TRUS) has emerged as a novel imaging technique in prostate-targeted biopsy, providing precise anatomic localization (Fig. 3). 3D-TRUS acquires 3D images and compensates for the inability of 2D-TRUS to measure lesion volumes. Moreover, 3D-TRUS can cover large regions of interest, including peripheral blood vessels. 29 A 3D model of the puncture route can be reconstructed in real time to record the position of the puncture needle and to perform a targeted biopsy more visually and accurately. 30 Therefore, 3D-TRUS is often a better choice for repeat biopsies and for identifying lesions, especially small target lesions. Compared with 2D-TRUS, 3D-TRUS offers higher localizing accuracy, which can improve the efficiency of MRI-ultrasound fusion/cognitive targeted biopsy and increase the rate of PCa detection. 29,30 However, the complex reconstruction procedure remains a deficiency of 3D-TRUS and awaits further improvement.
In conclusion, with the development of multimodal ultrasound, various ultrasound techniques are widely used in prostate-targeted biopsies, and many studies have shown that multimodal ultrasound-targeted biopsy can effectively improve the rate of PCa detection. Although MRI-targeted biopsy guided by 2D-TRUS is currently the mainstream technique in prostate-targeted biopsy, multimodal ultrasound is a reliable auxiliary technique that supplements the deficiencies of MRI. Indeed, multimodal ultrasound is not merely an auxiliary technique and, given its positive prospects in clinical application, may well become mainstream in targeted biopsy in the near future, although its role in prostate-targeted biopsy still needs further exploration and confirmation.
Declaration of competing interest
No conflict of interest exists in the submission of this manuscript, and all authors have approved the manuscript for publication.
Unusual histiocytic disease in a Somali cat
An 8-year-old Somali cat presented with a 9-month history of inappetence, vomiting and weight loss. The disease progressed to involve neurological signs associated with a mass lesion at the level of the first lumbar vertebra. Histopathology identified the condition as malignant histiocytosis affecting the lungs, stomach, mesenteric lymph nodes, liver, spleen, brain and spinal cord. However, the presentation of this case differs from previously reported cases of malignant histiocytosis, and may therefore represent a variant form of histiocytic disease.
An 8-year-old female neutered Somali cat was referred to the University of Edinburgh Hospital for Small Animals for investigation of vomiting and weight loss of 9 months' duration. The cat lived indoors, but had regular access outside. She was regularly wormed and routinely vaccinated against feline panleucopenia virus, feline herpesvirus, feline calicivirus and feline leukaemia virus (FeLV). Her previous medical history consisted of seasonal dermatitis and occasional fight wounds.
Investigations prior to referral included routine haematology and serum biochemistry, the results of which were unremarkable. In-house tests for FeLV antigen and feline immunodeficiency virus antibody had been negative. A lateral thoracic radiograph taken 8 months before presentation was reported to show a patchy, mixed lung pattern. An exploratory coeliotomy had been performed. No gross abnormalities were reported, and biopsies were obtained from the stomach, mesenteric lymph nodes and jejunum. Histopathology of the gastric biopsy reported the presence of nodular to coalescing infiltrates, which consisted of histiocytes with neutrophils and variable numbers of lymphocytes and plasma cells and abundant fibrosis. No multinucleated giant cells or infectious agents were detected (acid-fast and periodic acid Schiff staining were performed). The lymph node histopathology was similar to that of the gastric biopsy, and the small intestinal histopathology suggested mild to moderate lymphocytic enteritis.
Treatments prescribed before referral consisted of antibiotics (clavulanic acid potentiated amoxicillin, 17 mg/kg q 12 h; metronidazole, 13 mg/ kg q 12 h), dietary modification (Hill's i/d) and various courses of corticosteroids, namely prednisolone (1.7 mg/kg q 12 h), dexamethasone (0.3 mg/kg q 48 h; Dexadresson; Intervet) and methylprednisolone acetate (4 mg/kg once, 3 weeks before presentation; Depo-medrone V; Pharmacia). Initially, there had been some improvement (increased appetite and decreased frequency of vomiting) following the introduction of the prednisolone. However, the owner had been unable to administer the metronidazole due to difficulty in medicating the cat, and this also accounted for the change to injectable steroids.
At the time of referral, vomiting was occurring 2–3 times per week, although the signs had waxed and waned over the previous 8 months. The nature of the vomit had changed from bilious fluid to undigested food. Defecation had not been observed, but blood had occasionally been noticed at the cat's anus. There had been a mild increase in thirst over this time period, and slow, insidious weight loss.
On physical examination, the cat was considered to be thin, weighing only 2.96 kg (body condition score 3/9). On abdominal palpation, there was suspicion of a mid-abdominal mass, but physical examination was otherwise unremarkable except that the cat tended to sit with her right hind limb extended cranially.
Routine haematology and serum biochemistry were repeated, but again were found to be unremarkable. No abnormalities were detected on urinalysis, including culture. Faecal culture was negative, as was analysis for enteric parasites, Cryptosporidia, Giardia and Coccidia species. Serum levels of cobalamin, feline pancreatic lipase and trypsin-like immunoreactivity were all within the reference ranges. Serum folate level was slightly reduced (7.1 mg/l; reference range: 9.7–21.6 mg/l), consistent with small intestinal disease. Thoracic radiography revealed a generalised bronchointerstitial pattern, but abdominal radiography was unremarkable. Abdominal ultrasonography revealed prominent mesenteric lymph nodes, with one large lymph node (0.8 × 1.7 cm) exhibiting heterogeneous echogenicity. The submucosal layer of the stomach showed generalised thickening.
Endoscopy of the gastrointestinal tract was performed. The oesophagus was unremarkable. The mucosa of the stomach appeared rather irregular and friable. The small intestine was grossly normal, but the colonic mucosa bled very easily. Multiple pinch biopsies were taken from the stomach, small intestine and colon. Low to moderate numbers of Helicobacter-like organisms were identified in the gastric pits, but no organisms were cultured from a gastric biopsy. The histopathology of the small and large intestine was considered normal.
A presumptive diagnosis of deep-seated inflammatory bowel disease was made, and the cat was discharged on prednisolone (1.7 mg/ kg q 48 h), along with an exclusion diet of chicken only. In addition, when the biopsy results revealed the presence of Helicobacter-like organisms, a 10-day course of metronidazole (14 mg/kg q 24 h) and spiramycin (26 mg/kg q 24 h; Stomorgyl; Merial Animal Health) was prescribed. This formulation was chosen to try to facilitate medication of an uncooperative cat.
The cat was re-presented 24 days after the initial referral. Although the gastrointestinal signs had improved, the right hind lameness had progressed. Decreased muscle tone and strength were noticed, along with decreased conscious proprioception of the right hind limb.
Serum was submitted for a coronavirus profile, including albumin:globulin ratio (0.89), coronavirus antibody titre (0) and α1-acid glycoprotein (100 µg/ml; reference range <500 µg/ml), the results of which indicated that the cat was very unlikely to have feline infectious peritonitis.
The cat was anaesthetised for further investigations. Plain radiographs of the spinal column were obtained, before collection of a cerebrospinal fluid sample and performance of a myelogram. The cerebrospinal fluid was unremarkable. The myelogram demonstrated an intra-medullary lesion at the level of the first lumbar vertebra (see Fig 1). Neoplasia was considered a significant differential diagnosis, therefore, the owner elected for euthanasia with consent given for a post-mortem examination to be performed.
The results of the gross and histopathological examinations are reported in Table 1, and led to a pathological diagnosis of malignant histiocytosis.
Histiocytes are tissue macrophages derived from the CD34+ committed stem cell precursor (Moore and Affolter 2005). Histiocytic disorders in dogs may be broadly divided into cutaneous and non-cutaneous diseases. The cutaneous disorders include cutaneous histiocytoma, multiple histiocytomas, metastatic histiocytoma and Langerhans cell histiocytoma (Moore and Affolter 2005). The non-cutaneous diseases include histiocytic sarcoma, disseminated histiocytic sarcoma, systemic histiocytosis and malignant histiocytosis. A review of these histiocytic diseases has been given by Moore and Affolter (2005).
The histiocytes in this case were identified by staining with vimentin, which confirmed mesenchymal origin (Smoliga et al 2005), and Mac 387, which is a human histiocyte–monocyte marker (Smoliga et al 2005), although the ideal marker for feline histiocytes is yet to be established. More recently, immunophenotyping has been used to differentiate lymphocytes which express CD3 or CD79a from histiocytes, which do not, and further classification of the histiocytic lineage may be achieved with immunophenotyping for CD1, CD11b, CD11c, CD11d, CD18, CD90, MHCII and E-cadherin (Moore and Affolter 2005).
In this case, the diagnosis of malignant histiocytosis was based on the finding of populations of atypical and pleomorphic histiocytes, their presence within multiple organs, and their dissemination to organs that do not normally contain large numbers of histiocytes (see Table 1 and Fig 2).
Malignant histiocytosis is a rare disorder, most commonly affecting Bernese Mountain Dogs. The disease is typified by systemic proliferation and invasion of tissues by morphologically atypical histiocytes, which occurs simultaneously within multiple sites. This is typically a very aggressive disease, with affected animals showing a rapid clinical course. Infiltrates are commonly identified within the lymph nodes, spleen and liver early in the course of the disease, and the bone marrow later in the course of the disease (Moore and Rosin 1986). Reports of malignant histiocytosis in the cat amount to only seven cases (Court et al 1993, Freeman et al 1995, Walton et al 1997, Fritz et al 1999, Kraje et al 2001). The ages of the affected cats ranged from 1 to 13 years, with weight loss, inappetence and lethargy being common presenting signs. The course of the disease was rapid, with euthanasia being performed between 2 and 7 weeks after onset of clinical signs. In the cases reported previously, anaemia, splenomegaly and icterus were consistent findings, with thrombocytopenia, hypoproteinaemia and hyperglycaemia being seen frequently. In the three cases where clotting times (prothrombin time and activated partial thromboplastin time) were performed, they were found to be prolonged (Kraje et al 2001).
The case reported here is unusual for a number of reasons. The chronicity of the clinical signs is very different from the previously reported cases. In addition, no haematological or biochemical abnormalities were detected. Clotting times were not performed, but there was no suggestion of a bleeding disorder. The bone marrow was not examined histologically, but the absence of haematological abnormalities suggests that it is unlikely to have been involved. Pulmonary infiltration was previously reported in one cat (Kraje et al 2001), but this is seen much more commonly in dogs (Moore and Rosin 1986). Spinal cord involvement has recently been reported in one cat, but was attributed to extension of a histiocytic sarcoma, rather than malignant histiocytosis (Smoliga et al 2005).
Histiocytic diseases other than malignant histiocytosis were considered in the differential diagnoses of this case. Systemic histiocytosis is unlikely, as skin lesions are a consistent finding in this condition (Moore 1984). In disseminated histiocytic sarcoma, a primary histiocytic sarcoma becomes widely distributed throughout the body. Although histiocytic sarcomas often arise on the limbs, they may occur in cryptic sites such as lung, spleen or bone marrow, which may delay their recognition. Unfortunately, once the condition is well advanced, differentiation from malignant histiocytosis can be extremely difficult (Moore and Affolter 2005). While metastatic spread of a histiocytic sarcoma could explain the more insidious onset in this current case, the location of the primary lesion would remain in doubt. A reactive histiocytic disease was considered less likely due to the lack of identification of a causal agent, and the variety of organ systems involved. However, reactive histiocytic disease is poorly documented in the cat.

Table 1. Gross and histopathological findings.

Lungs. Gross: Diffusely congested, with a cream, mottled appearance. Several raised, white, irregular nodules (5–10 mm), which felt gritty when cut, were diffusely scattered throughout the lung parenchyma. Histopathology: Multifocal to coalescing non-encapsulated masses comprising a mixture of histiocytic cells, neutrophils, and a few plasma cells and small lymphocytes. These cell aggregates were mainly scattered throughout the pulmonary interstitium, but were also present within the lumen of bronchi and bronchioles (admixed with desquamated lining epithelial cells and eosinophilic proteinaceous material) and occasionally within the lumen of blood vessels. The histiocytes were large and polygonal, with large oval vesicular nuclei with single prominent nucleoli and abundant eosinophilic cytoplasm. Mitoses were not observed. There was mild anisokaryosis of the histiocytes.

Liver. Histopathology: Multiple, randomly distributed, irregularly shaped foci were scattered throughout the hepatic parenchyma. These aggregates were composed mainly of large histiocytic cells and neutrophils, with fewer plasma cells. Apoptotic bodies were scattered throughout the parenchyma. Haemosiderin was abundant in the cytoplasm of hepatocytes, within sinusoidal Kupffer cells, and occasionally in the centre of inflammatory foci within the cytoplasm of the histiocytes.

Spleen. Gross: Markedly enlarged and congested. Histopathology: Depleted white pulp, with prominent germinal centres, containing occasional small foci of atypical histiocytes. The red pulp was markedly acellular, with the few cells consisting mainly of erythrocytes and scattered foci of histiocytes in the sinusoids. These cells were histologically similar to those seen in the lungs and liver.

Lymph nodes. Gross: The ileocaecal lymph node was prominent, and the mesentery contained a solitary enlarged (10 × 5 mm) lymph node. Histopathology: The mesenteric lymph node contained a prominent cortical area with lymphoid follicles with distinct germinal centres. The follicles contained a mixed cell population of predominantly small mature lymphocytes, fewer plasma cells and macrophages, and numerous tingible body macrophages.

Spinal cord. Gross: An irregular area of grey discoloration of 5 × 5 mm at the level of the first lumbar vertebra. Histopathology: Multiple foci comprising a mixture of histiocytes, abundant small lymphocytes and occasional plasma cells, randomly distributed throughout grey and white matter. Histiocytic cells were predominant and occasionally had a spindleoid appearance, forming streams and whorls. There was marked evidence of Wallerian degeneration and widespread neuronal chromatolysis.

Brain. Gross: A large, well-circumscribed, non-encapsulated cellular mass was identified in the area of the lateral geniculate body and periaqueductal grey matter of the midbrain, extending to above the third ventricle in the thalamus and the temporal cortex. Histopathology: Cellular masses consisted of whorls and bundles of histiocytes, and occasional neutrophils, plasma cells and clusters of small lymphocytes. Aggregates of lymphocytes were present at the periphery of the two masses present in the thalamus and temporal cortex.

Immunostaining: GFAP*: negative; S-100†: negative; vimentin: positive; Mac 387: positive. (*GFAP = glial fibrillary acidic protein, to rule out a glial cell tumour. †S-100 = polyclonal rabbit S-100 protein, to rule out amelanotic melanoma.)
In conclusion, the spectrum of histiocytic disease is complex, and differentiation of these conditions is complicated by their rarity, particularly in the cat. Improved understanding of this spectrum of disease in this species requires further reports to aid recognition, and the development of immunophenotyping to assist with classification. This may in turn lead to identification of more effective treatments.
Physiotherapy as a Way to Maintain Vaginal Health during Menopause
The majority of women will experience some or most of the menopause symptoms in their life. This time in a woman's life is associated with a reduction in estrogen levels, which leads to physiological changes that affect different organ systems. In the urogenital tract, these changes usually cause vulvar and vaginal atrophy, affecting a woman's vaginal health and decreasing her quality of life. There is also a reduction in vaginal moisture and a loss of tissue elasticity. In addition, other organ systems are involved, and they can also negatively impact normal vaginal physiology. These changes frequently lead to bothersome symptoms that can negatively affect a woman's vaginal health and quality of life. The role of pelvic floor physiotherapy is to improve the tone and strength of the muscle fibres in order to achieve an increase in motor units, improve muscle elasticity and increase muscle mass, which will help to alleviate menopause symptoms.
Introduction
Vaginal health is considered a fundamental aspect of overall female wellbeing and healthcare. It was defined for the first time in Spain in 2014 as "the state of the vagina where all the appropriate physiological conditions that change throughout the woman's life are maintained, a state with absence of local symptoms of dysfunctions that allows a woman to have a satisfactory sexual life without suffering any alteration of genital trophism" [1].
The vagina is covered with polystratified squamous epithelium dependent on estrogenic stimulus, in such a way that when estrogen levels decrease, so does the process of proliferation. As a consequence, the number of vaginal epithelium layers decreases and the epithelium becomes thinner, exposing the nerve endings and increasing sensitivity [2].

Estrogen levels affect humidity levels, pH and the composition of the vaginal discharge. They also regulate blood circulation in the vagina, which is reduced when estrogen levels decrease. All this causes changes in trophism that can affect the vaginal mucosa, causing a deficit and eventual disappearance of lactobacilli, which results in susceptibility to infections affecting sexual life and urinary symptomatology, as well as the pelvic floor support systems [2].
Menopause and Vaginal Health
Menopause (from the Greek meno, month, and pausis, stop) is defined as the definitive cessation of menstruation. This physiological condition takes up to 40% of a woman's life (considering life expectancy to be around 80 years in industrialised countries), and it is very important as it causes profound changes in women's health.

Menopause causes 57%-67% of all the sexual dysfunctions that a woman may suffer during her life; this is reflected in changes in satisfaction, arousal, lubrication, desire, orgasm and dyspareunia [3].
Sexual dysfunctions affecting a woman's sexual health may have more than one cause, and these causes may be interconnected. The main categories of sexual dysfunction are:
• Sexual desire disorder: a lack or absence, for some period of time, of sexual fantasies and of desire or libido for sexual activity.
• Sexual arousal disorder: a persistent or recurrent inability to attain sexual arousal, or to maintain arousal until the completion of a sexual activity.
• Orgasm disorders: a persistent delay or absence of orgasm following a normal sexual excitement phase.
• Sexual pain disorders: dyspareunia (painful intercourse) or vaginismus (an involuntary spasm of the muscles of the vaginal wall that interferes with intercourse) [4].
During menopause, biological, psychological and social changes are provoked by the reduction in the levels of estrogen being generated, which negatively affects the sexual life of 92.1% of women [5]. 75% to 85% of women have menopause symptoms, but these are variable and differ in every woman:
• Short-term or acute symptoms: vasomotor alterations such as hot flashes, and neuro-psychic changes such as fatigue, insomnia and irritability.
• Medium-term or subacute symptoms: mucocutaneous atrophy of the genitourinary apparatus.
• Long-term symptoms: cardiovascular problems and osteoporosis appear.
Changes in Female Body Functioning Related to Menopause
The peri- and post-menopausal years are associated with a decline in estrogen levels and, as a result, a hypoestrogenic condition. This leads to important physiological changes that affect multiple organ systems, particularly the endocrine and genitourinary systems [6].

A series of changes due to hypoestrogenism occurs during menopause, affecting the whole organism [7].
Hormonal and Physiological Changes
There are changes in the levels of testosterone, progesterone, prolactin, oxytocin and endorphins. Reduced oxytocin and endorphin levels are responsible for low vascular congestion, reduced sexual motivation and a weak clitoral reaction.

These hormonal variations provoke a series of symptoms, typical of this stage, that affect many women. Hot flashes are the most common symptom, occurring in 79% to 86.4% of cases [9].
Genital and Gynecological Changes
The following anatomophysiological changes occur: thinning of the mucosa, with consequent loss of thickness and elasticity of the vaginal, urethral and vesical epithelium; an increase of the pH level to ≥5; vaginal erosions; and cervicovaginal friability. All these changes are responsible for the vaginal and urinary clinical state and for the quality of a woman's sexual life [2]. The vagina becomes smaller, less elastic, and pale pink in colour. Ulcers may appear in the epithelium, and the clitoris becomes more exposed because of labial regression; the reduction of the normal flora and the more basic pH lead to vaginal infections [8].

The most important symptom is vaginal dryness, which is present in 90% of menopausal women. This dryness is a result of poor lubrication and, in its turn, causes vulvovaginal itching. Pelvic floor support is lost, and the erectile response of the nipples and clitoris is reduced [5]. These genital atrophy problems, unlike hot flashes and night sweats, do not improve with time; they even become progressively more severe, damaging the sexual health and quality of life of the patients [13].
Psychological and Emotional Changes
The underlying cause of these changes is the physical changes that a woman goes through, which affect her adversely. The most common is anxiety, seen in 50.6% of women, followed by depression in 43.8% and irritability in 33.3% [10]; dysphoria, nervousness, bad mood and sadness are reported by 82.5% of women. Memory loss is related to stress and is present in 31%-44% of women. Antidepressants can have adverse effects and cause sexual dysfunctions [5].
Sexual Changes
There is a decrease in sexual desire and satisfaction, and longer stimulation is required.

Orgasms are shorter, and dyspareunia may appear. Sexual dysfunctions are likely to be present in peri- and post-menopausal women, affecting 42% and 88% of them, respectively.
The climacteric syndrome appears between 45 and 55 years of age and becomes more severe in the first 5 years. A decrease in libido affects 63.3% of women; there are also problems affecting sexual satisfaction (53.1%), lack of desire (23%) and dyspareunia (12.5%) [14].

All these symptoms worsen as a woman transitions from the perimenopause to the postmenopause stage. Vaginal dryness is the most common of all. 10.56% of women avoid having sexual relationships; this percentage increases in postmenopausal women, reaching 33.8%.

Despite the high prevalence of these symptoms, only 1 in 4 patients who have them sees a doctor because of them, and more than half of women (70%) have never or almost never been asked about vaginal dryness during a gynecological check-up [15].
During menopause, a positive attitude while adapting to the changes, and seeing a doctor if necessary, improve overall health and wellbeing [16].
Physiotherapy and Vaginal Health in Women with Menopause
The main objective of pelvic floor physiotherapy is the recovery of vaginal health, together with increased knowledge and proprioception, improved muscle relaxation, muscle tonification, increased tissue elasticity and desensitization of painful areas [17] [18].

Physiotherapy treatment is used to achieve the established objectives and may include manual therapy; functional training (coordination, strength, muscle resistance, flexibility, relaxation); and mechanical, physical or electrotherapy agents.

The role of pelvic rehabilitation is to improve the tone and strength of the muscle fibres in order to achieve an increase in motor units, improve the muscle excitation frequency and increase muscle mass.
Physiotherapy can help women at this stage of their lives by alleviating the symptoms and improving quality of life.

It is hard to compare results between studies, and it is difficult to say definitively which training regimen is the most effective, as there are different instruments and scales to measure and evaluate pelvic floor strength. Also, the exercise modality (type of exercise, frequency, duration and intensity) differs significantly between studies [19] [20].
Compared to surgery, pelvic floor muscle training has no known side effects and is relatively inexpensive, and women should be motivated to perform pelvic floor muscle exercise intensively as a first-line treatment [21].

It is often reported that pelvic floor muscle training is more commonly associated with improvement of symptoms rather than a total cure; however, in several cases a cure has been reported [22]. Among the most efficient methods to strengthen the pelvic floor muscles, prevent vulvovaginal atrophy and improve a woman's life, we consider the following.
The Use of Vaginal Dilators
The use of dilators helps women decrease stenosis, the narrowing of the vagina caused by menopause. Moreover, after performing exercises with dilators, tissue elasticity improves and the pain sensation is minimised, which makes sexual relationships more comfortable [23].

Vaginal dilators can be considered a very useful tool to strengthen the pelvic floor muscles. The dilators allow women to have precise control over the size, speed and angle of the insertion. This helps to trigger muscle reactions similar to the ones women have during sex: a woman consistently and consciously contracts and relaxes the vaginal muscles with the dilator inserted. The dilators may be made of plastic or silicone and come in different sizes. The treatment can last from several weeks to several months [19].
Vaginal Cones
Vaginal weights or cones are introduced into the vagina above the levator plate [24]. Exercising with these vaginal cones is considered good pelvic floor muscle training. By contracting the muscles, women strengthen the pelvic floor area while becoming aware of the perineal muscle action. Every cone has the same size but a gradually different weight, and the aim of the exercise is to hold them in the vagina, like a small tampon, for several minutes while standing up or walking.

After a cone is introduced into the vagina, it usually descends and slightly slips down, pushed by its own weight. The feeling of the weight coming out causes a light pelvic floor muscle contraction, which helps to keep the cone inside. This simple contraction, together with the gradual increase of the weight of the cones, strengthens the pelvic floor muscles. Women are usually asked to try to hold the weight inside in a standing position for 1 minute, gradually increasing the duration of the exercise [17]. Patients start to notice improvements in muscle tone after 2 or 3 weeks; on average, a complete course lasts 2 to 3 months.
Chinese Vaginal Geisha Balls
There are scientific studies showing the efficacy of the Chinese vaginal balls method for recovering the pelvic floor muscles. The use of these balls prevents, and could even stop, the problems caused by a lack of pelvic muscle strength. Vaginal balls may be a good proprioceptive method, as the motivation to perform the exercises could increase vaginal lubrication. These balls are placed in the vagina behind the pubococcygeus muscle. There are usually one or two balls joined by a string. Each ball has a smaller ball inside. With movement, while walking, the interior ball moves too, creating a vibration that stimulates vaginal vibration receptors and provokes contraction of the smooth muscles of the vagina. In addition, the weight of the ball stimulates baroreceptors of the perineal muscles, providing good tonification [25].

For example, Boltex inertial balls are designed so that the internal surface of the balls is made of hexagonal plaques, which stimulate the pelvic floor more effectively independently of the position of the ball in the vagina, while at the same time adapting to the morphological and anatomical characteristics of the users [26]. Regarding the time of use, muscle fatigue is the indicator to stop the exercise: a woman should not reach the stage at which she finds it too hard to keep the ball inside. It is not recommended to use the balls for more than three consecutive hours [27].
Kegel Exercises
Pelvic floor rehabilitation techniques became popular in the 1950s, when the gynecologist Arnold Kegel discovered and proved the relationship between pelvic floor dysfunctions and hypotonia or perineal muscle weakness, and reported that women had a noticeable improvement or complete disappearance of symptoms after performing these strengthening exercises. Kegel was the first to create pelvic floor muscle training [17]. His method includes exercises that augment the strength of perineal muscle contractions. The efficacy of these exercises has been clearly demonstrated [19]. The ways to perform these exercises vary, but they are all based on repetitive contraction and relaxation of the muscles, so that muscle strength and resistance can be trained. Kegel exercises improve the symptoms of dyspareunia and prevent or avoid urinary incontinence and other similar problems [28].
Conclusion
Vaginal atrophy is the most common symptom of menopause and negatively affects the sexual health of women at this stage of their life. The whole organism is affected by the changes produced during this time; they include changes on the psychological and sexual levels, but most commonly on the hormonal, genital and gynecological levels. Physiotherapy can help to treat vaginal atrophy by strengthening the muscles, alleviating the symptoms of menopause and improving a woman's vaginal health.
IGHV1-69 B Cell Chronic Lymphocytic Leukemia Antibodies Cross-React with HIV-1 and Hepatitis C Virus Antigens as Well as Intestinal Commensal Bacteria
B-cell chronic lymphocytic leukemia (B-CLL) patients expressing unmutated immunoglobulin heavy variable regions (IGHVs) use the IGHV1-69 B cell receptor (BCR) in 25% of cases. Since HIV-1 envelope gp41 antibodies also frequently use IGHV1-69 gene segments, we hypothesized that IGHV1-69 B-CLL precursors may contribute to the gp41 B cell response during HIV-1 infection. To test this hypothesis, we rescued 5 IGHV1-69 unmutated antibodies as heterohybridoma IgM paraproteins and as recombinant IgG1 antibodies from B-CLL patients, determined their antigenic specificities and analyzed BCR sequences. IGHV1-69 B-CLL antibodies were enriched for reactivity with HIV-1 envelope gp41, influenza, hepatitis C virus E2 protein and intestinal commensal bacteria. These IGHV1-69 B-CLL antibodies preferentially used IGHD3 and IGHJ6 gene segments and had long heavy chain complementary determining region 3s (HCDR3s) (≥21 aa). IGHV1-69 B-CLL BCRs exhibited a phenylalanine at position 54 (F54) of the HCDR2 as do rare HIV-1 gp41 and influenza hemagglutinin stem neutralizing antibodies, while IGHV1-69 gp41 antibodies induced by HIV-1 infection predominantly used leucine (L54) allelic variants. These results demonstrate that the B-CLL cell population is an expansion of members of the innate polyreactive B cell repertoire with reactivity to a number of infectious agent antigens including intestinal commensal bacteria. The B-CLL IGHV1-69 B cell usage of F54 allelic variants strongly suggests that IGHV1-69 B-CLL gp41 antibodies derive from a restricted B cell pool that also produces rare HIV-1 gp41 and influenza hemagglutinin stem antibodies.
Introduction
The initial B cell responses to HIV-1 envelope (Env) gp41 are non-neutralizing [1] and are polyreactive with human intestinal commensal bacterial antigens [2]. Env gp41 antibodies that arise following HIV-1 transmission do not select virus escape mutants and therefore exert no anti-viral immune pressure [1]. We have recently demonstrated that gp41-reactive B cells can be isolated prior to infection in HIV-1-uninfected humans and that HIV-1 activates preexisting B cells that are cross-reactive with gp41 and non-HIV-1 antigens including microbial antigens [2]. However, the pool of B cells from which the initial HIV-1 Env B cell response is derived is not known.
B chronic lymphocytic leukemia (B-CLL) is a clonal expansion of CD5+ B lymphocytes frequently associated with unmutated B cell receptors (BCRs) [3]. B-CLL cells with unmutated immunoglobulin heavy variable regions (IGHVs) (unmutated CLL, U-CLL) show a preferential usage of the IGHV1-69 gene segment (~25%) and frequently have BCRs that are polyreactive and autoreactive despite dramatic structural restrictions [4][5][6][7][8][9][10]. The cellular origin of B-CLL cells has been an area of considerable debate. For example, it has been proposed that B-CLL cells derive from human B-1-like cells, marginal zone (MZ) innate B cells, or transitional B cells, based on cell surface phenotype and molecular and functional characteristics [11]. In this regard, recent studies identified a human equivalent of murine B-1 cells (CD20+, CD27+, CD43+, CD70−) [12] and circulating CD5+ human B cells [13] as the precursors of CLL B cells. It has also been proposed that B-CLL cells with BCR stereotypy could derive from B-1-like progenitor cells adapted to particular antigenic challenges, while B-CLL cells with heterogeneous BCRs could derive from conventional B cells [14]. In addition, anti-viral innate antibodies have been reported to be derived from B-1/MZ B cells [15][16][17].
Cell culture
Epstein-Barr virus (EBV)-stimulation of patient peripheral blood mononuclear cells (PBMCs) and generation of B-CLL heterohybridoma cell lines have been described previously [28]. We stimulated PBMCs from 58 B-CLL patients (33 IGHV1-69 and 25 IGHV2/IGHV3) with EBV in the presence of the Toll-like receptor 9 agonist ODN2006 (12.5 μg/ml; Invivogen) and cyclosporin A (0.5 μg/ml), and cultured the cells in the presence of feeder cells, J774A.1 (50,000 cells per well; American Type Culture Collection, TIB-67), that had been exposed to γ-irradiation (40 Gy) from a Shepherd irradiator. Three weeks after stimulation, culture supernatant was collected from each well, and levels of total IgM were measured using a previously described method [28]. We obtained 39 patient cultures (22 IGHV1-69 and 17 IGHV2/IGHV3) that produced similar levels of IgM. Of the 22 IGHV1-69 samples, 21 were U-CLL and 1 was mutated CLL (M-CLL), while of the 17 IGHV2/IGHV3 samples, 9 were U-CLL and 8 M-CLL (Table S1). As negative controls, EBV-stimulated B cell cultures from PBMCs of 20 normal subjects were studied.
Briefly, ELISA plates (Costar, Cambridge, MA) were coated with 1-5 μg/ml of test antigens in 0.1 M sodium bicarbonate buffer. After incubating overnight at 4°C, plates were blocked with PBS containing 15% goat serum, 4% whey protein, 0.5% Tween-20, and 0.05% NaN3. Then test supernatants or mAbs diluted in the blocking buffer were distributed to wells and incubated for 2 hours at room temperature. After washing with PBS-0.5% Tween-20, bound human IgM or IgG was detected with horseradish peroxidase-conjugated goat anti-human IgM or IgG (μ-chain or γ-chain specific; Jackson ImmunoResearch Laboratories, West Grove, PA) and the peroxidase substrate tetramethylbenzidine (Kirkegaard and Perry Laboratories, Gaithersburg, MD) using a SpectraMax Plus384 plate reader (Molecular Devices, Sunnyvale, CA). The detection limit of IgM in each well was 60 ng/ml; negative wells with undetectable levels of IgM were assigned 10 ng/ml to permit logarithmic transformation of the data.
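The floor-substitution step can be illustrated with a minimal Python sketch (the readings and variable names below are hypothetical, not data from the study):

```python
# Minimal sketch of the floor substitution described above: wells with IgM
# below the 60 ng/ml detection limit are assigned 10 ng/ml so that log10 is
# defined for every well. Example readings are hypothetical.
import math

DETECTION_LIMIT = 60.0  # ng/ml, per-well IgM detection limit
FLOOR_VALUE = 10.0      # ng/ml, value assigned to undetectable wells

igm_ng_ml = [0.0, 45.0, 250.0, 1200.0]  # hypothetical well readings

log_igm = [math.log10(x) if x >= DETECTION_LIMIT else math.log10(FLOOR_VALUE)
           for x in igm_ng_ml]
print(log_igm)  # undetectable wells map to log10(10) = 1.0
```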
Reactivity of mAbs to aerobic and anaerobic bacteria whole cell lysates was tested by binding antibody multiplex (Luminex) assays as previously described [1,2]. Bacterial whole cell lysates were prepared using previously described methods [2,35]. In addition, surface plasmon resonance analysis of mAb reactivity to MN gp41 and HCV E2 proteins was performed on a BIAcore 3000 (BIAcore Inc.) using the methods as previously described [2].
Expression of recombinant IgG1 mAbs

Live EBV-stimulated B cells from selected wells were sorted as single cells using a BD FACS Aria (BD Biosciences, San Jose, CA), and the isolated VH and VL gene pairs were assembled by PCR into linear full-length immunoglobulin heavy- and light-chain gene expression cassettes for production of recombinant IgG1 mAbs by transfection of the human embryonic kidney cell line 293F (American Type Culture Collection), using previously described methods [29].
We next expressed the 5 B-CLL mAb VHDJH and VLJL genes as full-length IgG1 recombinant mAbs [29]. All 5 B-CLL recombinant IgGs bound to MN gp41 (Figure 2B). Of these, CLL698 and CLL821 IgGs bound to the immunodominant region of HIV-1 clade B BAL gp41 (RVLAVERYLRDQQLLGIWGCSGKLICTTAVPWNASWSNKSLNKI) (Figure 2B). However, CLL246 and CLL698 IgGs did not bind to any other linear peptides tested, including DP107, MPR.03, MPER656, and overlapping 15-mer MN gp41 linear peptides (data not shown). These results indicated that multivalent IgM antibodies with high-avidity interactions could enhance low-affinity interactions between the unmutated IgG antibodies and the linear peptides tested.
Gp41 antibodies that arise in HIV-1 infection frequently cross-react with intestinal commensal bacterial antigens and indeed have been postulated to derive from pre-transmission, environmental antigen-reactive antibodies from memory B cells [2]. Therefore, we tested the reactivity of the B-CLL mAbs with aerobic and anaerobic intestinal commensal bacterial whole-cell lysates using binding antibody multiplex assays [2]. We found that all 5 IGHV1-69 unmutated IgMs reacted with aerobic and/or anaerobic intestinal commensal bacterial whole-cell lysates (Figure 1). The recombinant IgGs of CLL526 and CLL1324 also reacted with aerobic and/or anaerobic intestinal commensal bacterial whole-cell lysates (Figure 2B). Similarly, all 5 IGHV1-69 unmutated IgMs and their recombinant IgGs also reacted with HCV E2 protein (Figure 1 and Figure 2B). Two mAbs were chosen for cross-competition studies with HCV E2; recombinant E2 competitively inhibited the binding of CLL821 and CLL1324 IgGs to gp41 (Figure 3).
It has been proposed that B-CLL cells derive from autoreactive B cell precursors [6,37]. In this regard, 2 of 5 recombinant IgG mAbs (CLL698 and CLL1324) bound to double-stranded DNA but not to the other test autoantigens including SSA, SSB, Sm, RNP, Scl-70, Jo-1, centromere B, and histone (data not shown). In our indirect immunofluorescence staining assay, however, none of the IgM paraproteins or the recombinant IgG mAbs reacted with HEp-2 epithelial cells, and none showed rheumatoid factor activity (data not shown). In functional assays, none of the IgM or IgG B-CLL mAbs neutralized HIV-1 strains, SF162 (clade B), BG1168 (clade B), or MN (clade B) ( Table S3) [2]. Similarly, none of the IgM mAbs inhibited syncytium formation by HIV-1 ADA (clade B) and MN nor captured HIV-1 virions, SF162 or BG1168 (Table S4 and Table S5). Moreover, none of the IgMs neutralized a HCV subtype 1a strain, HCVpp-H77 (Table S3) [38].
The HCDR3 sequences are the principal determinants of antibody-binding specificity in most antibodies [41]. Thus, we compared the HCDR3 sequences of the 5 gp41-reactive IGHV1-69 B-CLLs with those of 47 gp41-reactive IGHV1-69 antibodies isolated from HIV-1-infected patients. The analysis revealed similar HCDR3 sequences due to the common usage of IGHJ6 and IGHD3 gene segments, which were preferentially used by the gp41-reactive B-CLL mAbs. For example, the long HCDR3 sequences of mAbs Ab2757 (25 aa) and Ab6064 (23 aa) were remarkably similar (60% and 52% aa identity, respectively) to that of CLL1324 (Figure 4). However, IGHJ4 was the most frequently used gene segment (32%) in the HIV-1 infection-derived IGHV1-69 gp41 antibodies, in contrast to the infrequent use of IGHJ4 by IGHV1-69 B-CLL (~4%) [18]. In addition, IGHD3-3, the most frequent D gene segment used by the gp41-reactive B-CLL mAbs, was found in only 4% (2/47) of the HIV-1 infection-derived IGHV1-69 gp41 antibodies. The mean HCDR3 length of the HIV-1 infection-derived IGHV1-69 gp41 antibodies was significantly shorter than that of the gp41-reactive IGHV1-69 B-CLL antibodies (16.1 aa vs. 22 aa; Mann-Whitney test, p = 0.0041). Moreover, the sequence pattern cluster analysis of HCDR3s indicated that none of the HIV-1 infection-derived IGHV1-69 gp41 antibodies belonged to the known major B-CLL stereotype subsets [14]. These results indicate that the gp41-reactive IGHV1-69 CLL B cells have molecular features distinct from those found in most IGHV1-69 gp41 B cells during HIV-1 infection.
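The length comparison above can be reproduced in outline with a short sketch (scipy is our choice of tool; the two length lists are hypothetical placeholders, not the study's 47 + 5 sequences):

```python
# Sketch of a two-sided Mann-Whitney U test on HCDR3 lengths, as in the text.
# The lists below are hypothetical placeholders with means near the reported
# 16.1 aa (HIV-1 infection-derived) and 22 aa (B-CLL) values.
from scipy.stats import mannwhitneyu

hcdr3_hiv_derived = [14, 15, 16, 17, 18, 16, 15, 17]  # hypothetical lengths (aa)
hcdr3_bcll = [21, 22, 23, 22, 22]                     # hypothetical lengths (aa)

stat, p_value = mannwhitneyu(hcdr3_hiv_derived, hcdr3_bcll,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```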
Virus binding activity of B-CLL and clinical outcomes
When we divided the B-CLL samples based on their binding activity to the test viral antigen preparations (Figure S1), we found that the virus antigen-binding reactivity of B-CLL cultures correlated with the B-CLL clinical course. The Kaplan-Meier plots of the analyses revealed that B-CLL cases with anti-viral reactivity correlated with poor clinical outcomes, measured as time to first treatment (TFT) and overall survival of the patients (Figure 5). The median TFTs for the virus-binding and non-virus-binding groups were 37 mo and 86 mo, respectively (p = 0.011, Mantel-Cox test), and the median overall survival times for the virus-binding and non-virus-binding groups were 131 mo and 177 mo, respectively (p < 0.0001, Mantel-Cox test). This was especially impressive when restricting the analysis to IGHV1-69 samples (Figure 5B and Figure 5D). The median overall survival for the virus-binding and non-virus-binding groups was 117 mo and indefinite, respectively (p = 0.012, Mantel-Cox test). Of note, all but one (CLL1011) of the IGHV1-69 samples were U-CLL and would therefore be expected to have a poor clinical outcome [3]. However, the U-CLL IGHV1-69 samples could be segregated by virus-binding activity, with the non-binders to viral antigens having a good clinical outcome. These findings suggest that certain BCRs with innate anti-viral reactivity may be important factors in determining the outcome of the B-CLL clinical course.
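As a sketch of how such a comparison is set up (the paper does not state its software; lifelines is our assumption, and the durations below are invented placeholders, not patient data):

```python
# Kaplan-Meier fit and log-rank (Mantel-Cox) test for virus-binding vs.
# non-binding groups, mirroring the TFT analysis described above.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# time to first treatment in months (hypothetical); 1 = event observed
tft_binding, evt_binding = [12, 25, 37, 40, 55], [1, 1, 1, 1, 0]
tft_nonbinding, evt_nonbinding = [60, 86, 90, 120, 150], [1, 1, 0, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(tft_binding, event_observed=evt_binding, label="virus-binding")
print(kmf.median_survival_time_)  # compare with the 37 mo reported for binders

result = logrank_test(tft_binding, tft_nonbinding,
                      event_observed_A=evt_binding,
                      event_observed_B=evt_nonbinding)
print(f"log-rank p = {result.p_value:.3f}")
```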
Discussion
In this paper, we have demonstrated that one third of IGHV1-69 B-CLL BCRs are polyreactive for infectious agent or commensal bacterial antigens (Figure S1 and Figure 1). B-CLL IgM reactivity with infectious agent antigens was significantly correlated with poor clinical outcomes (Figure 5). Moreover, there was a striking difference in IGHV1-69 allelic use by B-CLL versus HIV-1 IGHV1-69 antibodies. While IGHV1-69 B-CLL BCRs predominantly used F54 allelic variants, IGHV1-69 HIV-1 Env gp41 antibodies from HIV-1 infected patients predominantly used L54 (Table 1).
Liao et al. [2] have demonstrated that the initial blood plasma cell response to gp41 in acute HIV-1 infection is highly mutated and comprised of polyreactive gp41 antibodies that cross-react with intestinal commensal bacterial antigens. This work led to the hypothesis that the initial gp41 response to HIV-1 may be in part derived from commensal bacteria-activated memory B cells with BCRs that cross-react with Env gp41, and not from naïve B cells [2]. Thus, HIV-1 Env in the context of HIV-1 infection induces a dominant Env gp41 antibody response that is polyreactive with host and intestinal commensal bacterial antigens [2]. The observation that IGHV1-69 B-CLL BCRs are similarly polyreactive and cross-react with intestinal commensal bacteria (Figure 1) raises the hypothesis that the B-CLL cell population is an expansion of members of the innate polyreactive B cell repertoire with reactivity to a number of infectious agent antigens, including intestinal commensal bacteria. Hence, our results suggest that the initial response to gp41 in HIV-1 may derive from the same pool of B cells as B-CLL. However, it is striking that B-CLL B cells predominantly utilize F54 IGHV1-69 allelic variants while HIV-1 Env gp41 B cell BCRs from HIV-1 infection utilize L54 allelic variants (Table 1). Therefore, the B-CLL IGHV1-69 B cell usage of F54 allelic variants suggests that the initial response to gp41 in HIV-1 may not derive from the same pool of B cells as B-CLL. In fact, the B-CLL IGHV1-69 B cells may derive from an F54 allelic variant B cell pool that produces rare gp41 and hemagglutinin stem antibodies. It has been demonstrated that F54 IGHV1-69 allelic variant B cells arise during early human fetal liver development [42]. They were found in a high proportion of B cells in the primary follicles of the fetal spleen [43] and in the mantle zones of the adult tonsil [44]. Thus, B-CLL B cells may derive from this mantle zone pool of polyreactive B cell precursors [18,45,46].
The 5 gp41-reactive unmutated B-CLL mAb clones had similar HCDR3 sequences due to common IGHV-D-J rearrangements and, as well, had long HCDR3s (21-23 aa) (Figure 2A). Three clones (CLL246, CLL526, and CLL698) belong to subset 7 according to the major stereotyped BCR subset numbering based on a sequence pattern cluster analysis of B-CLL HCDR3s (Figure 2A) [14]. Unmutated B-CLL B cells with stereotypy give rise to the hypothesis that they are derived from a subset of B cells selected for the ability to bind to bacterial and viral antigens, characteristic of B-1, transitional and MZ B cells [11]. It has been proposed that a small population of CD20+CD27+CD43+CD70− cells present in human umbilical cord and adult peripheral blood represents a B cell subset analogous to the murine B-1 subset [12], and human transitional and MZ B cells share traits that are similar to murine B-1 B cells and collectively produce pre-formed antibodies to pathogens [47]. For both HIV-1 and HCV, we found no neutralizing antibodies among any of the B-CLL gp41- or HCV E2-reactive antibodies. Similarly, acute HIV-1 infection gp41 antibodies are non-neutralizing [1,2]. In contrast, the influenza-reactive non-mutated IGHV1-69 antibodies F10 and CR6261 neutralized a broad spectrum of influenza strains [21,24]. If IgM antibodies can coat infectious agent virions, they may impede virus migration across mucosal surfaces [48,49]. However, virus capture assays showed that none of the gp41-reactive B-CLL mAbs captured the test HIV-1 virions. Moreover, acute HIV-1 infection gp41 antibodies do not exert immune pressure via selecting escape mutants [1].
Finally, several studies have shown that unmutated B-CLL B cells, similar to natural or innate IgM antibodies, frequently express polyreactive antibodies that bind to autoantigens associated with apoptosis and oxidation, as well as to components of the outer membrane of bacteria [37,50]. Of note, it has been demonstrated that human B-1-like cells (CD20+CD27+CD43+CD70−) display a skewed BCR repertoire, as indicated by preferential expression of anti-phosphorylcholine and anti-DNA specificities [12]. Our findings that unmutated B-CLL cell gp41 reactivity is selective for the F54 IGHV1-69 gene segment and has characteristics of B-1-like, transitional and MZ B cell derived antibodies strongly suggest that B-CLL IGHV1-69 gp41 antibodies derive from a restricted B cell pool that also produces rare HIV-1 gp41 and influenza hemagglutinin stem antibodies.

Figure S1. Binding characteristics of B-CLL B cell cultures. To compare the binding activities of B-CLL IgMs expressing IGHV1-69 vs. IGHV2/IGHV3 gene families, we stimulated PBMCs from B-CLL patients with EBV using previously described methods [28], and the cells were plated at 5,000 cells per well in a total of 20 wells per patient sample. To profile the binding characteristics of the IgMs, we screened the culture supernatants by ELISA. HIV-1 antigens included aldrithiol-2 (AT-2)-inactivated HIV-1 virions ADA (clade B); HIV-1 group M consensus Env, ConS gp140; and deglycosylated JRFL gp140. HIV-1 Env gp41 linear epitope peptides included the HR-1 region peptide DP107 (NNLLRAIEAQQHLLQLTVWGIKQLQARILAVERYLKDQ); the Env clade B HR-2 region peptide MPER656 (NEQELLELDKWASLWNWFNITNWLW); and the Env clade C HR-2 region peptide MPR.03 (KKKNEQELLELDKWASLWNWFDITNWLWYIRKKK). As an initial approach to ensure that the reactive IgMs were of B-CLL origin, rather than IgMs from contaminating B cells, we defined positive samples as those producing 10 or more wells (≥50%) reactive with each test antigen. Of 440 IGHV1-69 B-CLL cultures from 22 patients, 67 wells reacted with DP107, 20 reacted with MPER656, and 37 reacted with MPR.03. The reactivities of 340 IGHV2/IGHV3 B-CLL cultures (17 patients) for these epitopes were 3, 2, and 1 well, respectively (p < 0.0001, p = 0.0007, and p < 0.0001; Fisher's exact test vs. the IGHV1-69 group). Data are expressed as the number of wells positive for each test antigen. NA, not applicable. "−" denotes no binding. 1 IGHV and IGKV/IGLV mutation frequencies (%) were compared with germline according to IMGT. 2 Two B-CLL mAbs were isolated from separate experiments (Hwang et al., 2012), and the results for binding activity were obtained from the purified IgM paraproteins. 3 HCDR3 subset numbers were assigned using previously described methods [14]. (TIF)

Figure S2. Binding characteristics of healthy control B cell cultures. We stimulated PBMCs from 20 healthy control subjects with EBV using previously described methods [28], and the cells were plated at 5,000 cells per well in a total of 20 wells per sample. To profile the binding characteristics of the IgMs, we screened the culture supernatants by ELISA. HIV-1 antigens included aldrithiol-2 (AT-2)-inactivated HIV-1 virions ADA (clade B); HIV-1 group M consensus Env, ConS gp140; and deglycosylated JRFL gp140. HIV-1 Env gp41 linear epitope peptides included the HR-1 region peptide DP107 (NNLLRAIEAQQHLLQLTVWGIKQLQARILAVERYLKDQ); the Env clade B HR-2 region peptide MPER656 (NEQELLELDKWASLWNWFNITNWLW); and the Env clade C HR-2 region peptide MPR.03 (KKKNEQELLELDKWASLWNWFDITNWLWYIRKKK).
The reactivities of 400 cultures from 20 non-CLL control subjects for DP107, MPER656, and MPR.03 were 2, 10, and 4 wells, respectively (p < 0.0001, p = 0.14, and p < 0.0001; Fisher's exact test vs. the IGHV1-69 group). Data are expressed as the number of wells positive for each test antigen. NA, not applicable. (TIF)
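For concreteness, the 2×2 comparison behind the DP107 p-value in Figure S1 can be checked with a short sketch using the well counts quoted in the legend (scipy is our choice of tool; the legend does not specify software):

```python
# Fisher's exact test on DP107-reactive well counts from the Figure S1 legend:
# 67/440 positive wells for IGHV1-69 B-CLL vs. 3/340 for IGHV2/IGHV3 B-CLL.
from scipy.stats import fisher_exact

table = [[67, 440 - 67],   # IGHV1-69: reactive, non-reactive wells
         [3, 340 - 3]]     # IGHV2/IGHV3: reactive, non-reactive wells

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, p = {p_value:.2e}")  # legend reports p < 0.0001
```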
Abdominal tuberculosis: a radiological review with emphasis on computed tomography and magnetic resonance imaging findings
Tuberculosis is a disease whose incidence has increased principally as a consequence of HIV infection and use of immunosuppressive drugs. The abdomen is the most common site of extrapulmonary tuberculosis. It may be confused with several different conditions such as inflammatory bowel disease, cancer and other infectious diseases. Delay in the diagnosis may result in significantly increased morbidity, and therefore an early recognition of the condition is essential for proper treatment. In the present essay, cases with confirmed diagnosis of abdominal tuberculosis were assessed by means of computed tomography and magnetic resonance imaging, demonstrating the involvement of different organs and systems, and presentations which frequently lead radiologists to a diagnostic dilemma. A brief literature review was focused on imaging findings and their respective prevalence.
INTRODUCTION

Tuberculosis is responsible for about 1.7 million deaths annually worldwide, and the number of new cases (more than 9 million) is the greatest in history (1). Tuberculosis is known to be associated with poverty, deprivation and immunodeficiency (2).

Lungs are the primary involved organs, and abdominal involvement occurs in about 11-12% of patients with extrapulmonary tuberculosis (3-5). In Brazil, according to recent data available at Datasus (www.datasus.gov.br), the tuberculosis incidence rate in 2010, including all the disease presentations, corresponded to 37.57:100,000 inhabitants, while the incidence rate for the extrapulmonary presentation of the disease was 5.28. In 2010, 71,658 new cases of tuberculosis, considering all the disease presentations, were recorded in Brazil; among those cases, 10,071 were extrapulmonary tuberculosis.

The abdominal presentation may involve different structures such as the gastrointestinal tract, genitourinary tract, solid organs (liver, spleen, pancreas), gallbladder, aorta and its branches, peritoneum and lymph nodes, frequently with concomitant involvement of those organs (6,7).

The disease may mimic several other conditions such as lymphoma, Crohn's disease, amebiasis and adenocarcinoma, among others. Imaging findings are not pathognomonic, but may be highly suggestive of the disease when considered in conjunction with clinical findings, immunological conditions and the demographic origin of the patient (7).

Abdominal tuberculosis may affect practically any intracavitary organ, presenting quite nonspecific symptoms. In a series with 49 patients with abdominal tuberculosis, Sinan et al. (8) stratified in detail the main symptoms and imaging findings. Fever (75%), abdominal pain (65%) and weight loss (36%) had a higher prevalence than other signs and symptoms. Peritonitis (38%) was the main tomographic finding, followed by lymph node disease (23%) and involvement of the gastrointestinal tract (19%) and solid organs (10%). A diffuse pattern of lymph node commitment was most commonly observed (48%). In the gastrointestinal tract, the terminal ileum and the ileocecal region were most remarkably affected (50%). Among the solid organs, the liver and spleen presented greater involvement (70%).
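As a quick consistency check on the national figures quoted above (our arithmetic, not part of the original paper):

```python
# 71,658 notified cases in 2010 at 37.57 cases per 100,000 inhabitants
# implies the 2010 population of Brazil; the extrapulmonary rate follows.
total_cases = 71_658
rate_per_100k = 37.57

implied_population = total_cases / rate_per_100k * 100_000
print(f"{implied_population:,.0f}")  # ~190.7 million, consistent with the 2010 census

extrapulmonary_cases = 10_071
extrapulmonary_rate = extrapulmonary_cases / implied_population * 100_000
print(f"{extrapulmonary_rate:.2f}")  # ~5.28 per 100,000, matching the quoted rate
```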
The present essay is aimed at reviewing and illustrating the main presentations of abdominal tuberculosis by means of images acquired by computed tomography (CT) and magnetic resonance imaging (MRI).
PERITONEUM
Peritoneal tuberculosis is the most common presentation of abdominal tuberculosis (7,8) and includes the involvement of the peritoneal cavity, mesenterium and omentum (7) . It is believed that its origin is hematogenous, but it may be secondary to lymph node rupture, gastrointestinal dissemination or tubal involvement (9) . Most probably, it results from rupture of mesenteric lymph nodes compromised by hematogenous dissemination of a distant primary focus (usually located in the lungs). Other accepted dissemination pathways include direct extension and the lymphatic chain. It rarely results from genitourinary tract infection (10) .
Despite a certain difficulty in differentiating between the several abdominal tuberculosis presentations, besides a considerable superimposition of presentation patterns, peritoneal tuberculosis is classically classified into three types according to its macroscopic aspects, namely: dry, wet and fibrous types (7,9,11-13). The wet type (Figure 1) presents primarily as either free or loculated ascites, associated or not with diffuse and smooth peritoneal thickening; in the dry type, there is a predominance of peritoneal and mesenteric thickening with caseous nodules, lymph node enlargement and fibrinous adhesions; in turn, the fibrous type (Figure 2) is characterized by remarkable omental thickening and entanglement of bowel loops, clinically resembling a mass, occasionally with loculated ascites, and may be similar to peritoneal carcinomatosis (7,14).

Free or loculated ascites may be present in 30-100% of cases, and the tomographic density is variable, depending on the phase of the disease. Only 3% of patients present with the dry-type tuberculous peritonitis. The presence of a fat-fluid level in association with necrotic lymph nodes is highly specific for tuberculous ascites (6).
The omentum may be altered in up to 80% of cases, appearing as diffuse infiltration, nodule, and omental cake. Diffuse densification is most commonly found (Figure 2), while the omental cake pattern, characterized by omental fat thickening and densification, is less frequently seen, occurring in < 20% of cases, but it is most typically found in peritoneal carcinomatosis, in up to 40% of cases (6,14) .
Mesenteric disease is a common abnormality that may be observed at CT as early as the initial stages of peritoneal tuberculosis, in up to 98% of cases (14), ranging from mild involvement (linear striations, vascular engorgement, star-shaped appearance, fat densification) to more extensive involvement (diffuse infiltration of the mesenteric leaves). Mesenteric abscesses probably result from extension of a caseous process from large lymph node masses (13). Thick striations with vascular engorgement constitute the most common finding (65% of cases), followed by the nodular pattern (29%) (14).
The diagnosis of peritoneal tuberculosis remains a challenge due to the wide range of clinical presentations, nonspecific laboratory findings and superimposition of imaging findings with other diseases, particularly with peritoneal carcinomatosis, whose treatment and prognosis are completely different. Thus, the most common CT findings in peritoneal tuberculosis include: a) ascites (70-90% of cases); b) smooth peritoneal thickening with marked enhancement after intravenous contrast injection; c) densification of the mesenteric root fat planes, which may occur in up to 70% of cases; d) lymph node enlargement with areas of central necrosis or calcification (6,14,15).
On the other hand, the most frequent findings in peritoneal carcinomatosis include: a) multinodular and irregular peritoneal thickening; b) homogeneous retroperitoneal lymph nodes enlargement; c) omental cake, as above mentioned (6,15) .
The most useful tomographic sign to differentiate between peritoneal tuberculosis and peritoneal carcinomatosis is peritoneal thickening that, in the first condition, is smooth and regular, and in the second, is nodular and irregular (6) .
LYMPH NODES
Lymph node involvement is usually associated with gastrointestinal tuberculosis and less commonly with the peritoneal and solid organs presentations, but it may be the only sign of disease, particularly in the periportal region (16) . The most common involvement of lymph node chains (mesenteric, celiac, porta hepatis, and peripancreatic lymph nodes) may be explained by the lymphatic drainage of the ileocecal, jejunal, ileal and right colonic regions after ingestion of infected material (16) .
The lymph node disease pattern is variable at CT, most frequently demonstrating lymph node enlargement (40-60%) with central hypoattenuation and peripheral hyperattenuation after intravenous contrast injection, which is typical, but not pathognomonic, of caseous necrosis (Figure 3) (9). Lymphoma, metastasis, pyogenic infection and Whipple's disease are the main differential diagnoses (11,16). Other lymph node involvement patterns include an increase in the number, but not in the volume, of lymph nodes, and large, localized lymph node clusters and conglomerates (Figure 4) (16).
GASTROINTESTINAL TRACT
The intestinal presentation of abdominal tuberculosis is not uncommon (16). Several mechanisms may lead to bowel involvement by the disease: ingestion of infected material in active pulmonary tuberculosis; reactivation of a quiescent intestinal focus resulting from hematogenous dissemination in childhood; hematogenous dissemination of active tuberculosis; or direct spread from other organs (17).
The main imaging findings of intestinal tuberculosis include symmetrical or asymmetrical parietal thickening and extrinsic compression by enlarged lymph nodes which, in turn, may represent heterogeneous masses when associated with adherent loops and mesenteric thickening (16).
The clinical presentation of rectal tuberculosis is rarer and different from that of tuberculosis in proximal segments. Hematochezia (88%) and constipation (37%) are the most common symptoms. Luminal narrowing is usually significant, with variable length and areas of deep ulceration, most commonly located about 10 cm from the anal border (Figure 5B). Prominent fibrosis associated with rectal inflammation may increase the presacral space (19). A study published by Nagi et al. (18) revealed an incidence of 10.8% of colorectal involvement in a series of 684 patients affected by gastrointestinal tuberculosis over 10 years, in spite of large, previously published series revealing an incidence between 3% and 9%, with scarce reports about cases of isolated rectal involvement (18). Perforation (Figure 6) and fistulas are the most frequent gastrointestinal complications of tuberculosis, with an incidence of 7.6%; the small bowel and the colon are the most common sites (20). Other complications include vascular complications, intussusception and obstruction of the small bowel (7).

The differential diagnosis for intestinal tuberculosis varies with the degree and pattern of involvement, and includes Crohn's disease, lymphoma, amebiasis and adenocarcinoma. The presence of pulmonary compromise at chest radiography may help in the diagnostic rationale, in spite of the fact that it is absent in up to 50-60% of cases (7,16).
LIVER AND BILIARY TRACT
Isolated hepatic tuberculosis is a rare condition and is usually associated with concomitant involvement of other organs (21) . On the other hand, the prevalence of hepatic commitment in autopsies of patients with disseminated tuberculosis is of 80-100% (16,21) .
Manifestations of hepatic tuberculosis may be divided into two types, namely, miliary and macronodular. The miliary form (Figure 7) is associated with hematogenous dissemination, and hence with diffuse involvement of the liver (4,21). There is diffuse enlargement of the liver and, despite the increase in hepatic enzyme levels, biliary dilatation may not be noticeable due to the predominant involvement of small-caliber ducts (21). Most commonly, it is related to miliary pulmonary tuberculosis (4,21).
The macronodular presentation (Figure 8) is rarer, less frequently associated with the pulmonary form of tuberculosis, and is related to dissemination through the portal vein (4,21). Calcifications may arise in the chronic phase of the disease (7,9,16). At CT, lesions measuring between 1 and 3 cm in diameter, or a single mass, may be observed in a diffusely enlarged liver. At MRI, the lesions present low signal intensity and minimal peripheral enhancement, with a honeycombing pattern in the miliary form at T1-weighted sequences (Figure 7). At T2-weighted sequences, the lesions are hypointense, with a less hypointense halo in relation to the surrounding liver parenchyma (12).

The differential diagnosis for the micronodular form of hepatic tuberculosis includes metastasis, fungal infection, sarcoidosis and lymphoma (12); in the macronodular form, it is made primarily with abscess and metastasis (12,16,21).

The involvement of the biliary tree by tuberculosis is even more rarely observed and its annual incidence is estimated to be 0.1% (22). The involvement may be either primary, with small duct stenosis, or secondary to compression by hepatic granulomas, many times making the differentiation from primary sclerosing cholangitis and cholangiocarcinoma more difficult (21).

The gallbladder is very rarely involved. Mural thickening, irregular septa, and lymphadenopathy may be found. There is no typical presentation, and it may be quite variable (23). The diagnosis is usually made on the basis of histopathology (7).

PANCREAS

In spite of the fact that pancreatic compromise by tuberculosis is rarely identified at imaging studies, at least one series has reported a prevalence of 8.3% of pancreatic involvement in 384 patients with a diagnosis of abdominal tuberculosis. The described alterations include increased pancreas dimensions (Figure 10), hypodense intrapancreatic collections or complex masses, besides peripancreatic lymphadenopathy (25).

SPLEEN

Splenic tuberculosis is usually associated with the disseminated form of miliary tuberculosis and, in spite of being reported in up to 80-100% of autopsies of patients with disseminated tuberculosis, it is much less frequently identified by imaging methods (24). However, a recent series reported a rate of splenic involvement diagnosed by imaging methods (ultrasonography, CT or MRI) of 45.8% in cases of disseminated tuberculosis (24).

As well as hepatic tuberculosis, there are two types of presentations of splenic tuberculosis, namely, miliary and macronodular. The first and most common type (Figure 9A) usually manifests as moderate splenomegaly, but minute hypodense lesions may be seen at CT. The macronodular form (Figure 9B) is extremely rare and is seen as either multiple or solitary, rounded or ovoid nodules with variable appearance both at CT and MRI, which may represent different disease stages. At contrast-enhanced T1-weighted sequences, one can observe peripheral enhancement or, less commonly, gradual and progressive enhancement (24).
SUPRARENAL GLANDS
Suprarenal glands are not a rare site of involvement by tuberculosis (26), and tuberculosis is the main cause of suprarenal gland failure (Addison's disease) (26).
KIDNEYS
In spite of the fact that renal tuberculosis is usually a result of hematogenous dissemination originating from the lungs, less than 50% of patients present with radiological evidence of pulmonary tuberculosis and only 10% present with active disease (28). Renal tuberculosis is usually a consequence of a primary pulmonary infection that might have occurred several years before. Tubercle bacilli lodge at the corticomedullary junction, forming granulomas in the papilla which remain stable for many years; if reactivation occurs, the organisms spread into the medulla, causing papillitis (28).
Focal tissue edema and vasoconstriction caused by active inflammation result in local hypoperfusion identifiable at CT and MRI (26) . Calyceal deformity is the initial finding (28) . Multiple or solitary parenchymal nodules ( Figure 12) with no urinary tract involvement are rare manifestations and may mimic neoplasms (26) .
The disease progression might result in extensive papillary necrosis, cavitation and, later, cortical scars, pyeloinfundibular strictures and hydronephrosis. At a terminal stage of the disease, loss of renal function and calcifications are observed (28). Calcifications may present with several patterns, such as amorphous, granular, lobar and diffuse (the latter named autonephrectomy) (28).
URETERS
Dilatation and irregular thickening of the urothelium (Figure 15) represent the first signs of ureteral tuberculosis. The dilatation results primarily from ureterovesical junction stricture secondary to cystitis and tuberculous urethritis.
At advanced stages of the disease, ureteral stenosis, shortening, filling defects and ureteral calcifications may be seen ( Figure 16) (12) .
BLADDER
Initially, tuberculous cystitis produces mucosal ulceration and edema. The disease extension towards the muscle layer leads to fibrosis and, consequently, to mural thickening and decreased contractility. For this reason, the main sign of tuberculous cystitis is a thickened bladder with reduced capacity (Figures 16 and 17). Ureteral reflux may be observed. Calcified tuberculous cystitis is rare and should be differentiated from other conditions such as schistosomiasis, amyloidosis, cyclophosphamide-induced cystitis, actinic cystitis, carcinoma and foreign bodies (28).
FEMALE GENITAL ORGANS
Tuberculous infection of the female genital system may cause menstrual disorders, gestational complications, neonatal tuberculosis, side effects of antituberculosis drugs during pregnancy, increased drug resistance and infertility (29) . Chavhan et al. published a study showing a 7.5% incidence of female genital tuberculosis among 492 patients undergoing hysterosalpingography in the investigation of infertility (30) .
Most women with genital tuberculosis present with infertility secondary to tubal involvement ( Figure 18A), which occurs in up to 94% of such patients. Typically, it is bilateral and causes multifocal strictures and calcifications (12) . Tubo-ovarian abscess extending through the peritoneum and extraperitoneal compartment is also suggestive of tuberculosis (12) .
MALE GENITAL ORGANS
Tuberculosis may affect the whole male genital tract, with lesions in the prostate, seminal vesicles, deferent ducts, epididymis, penis and testes. Genital tuberculosis occurs by hematogenous dissemination to the prostate and epididymides or through the urinary system to the prostate with canalicular dissemination to the seminal vesicles, deferent ducts and epididymides. It may be either associated with renal lesions or present as an isolated condition (31) .
The epididymides are involved in 10-55% of men with urogenital tuberculosis (31) . Involvement generally starts at the epididymal tail and may later spread to the entire structure (26) . Sonographic findings include edema and heterogeneous echotexture of the involved segment. At MRI, increased volume and low signal intensity on T2-weighted sequences are observed, suggesting chronic inflammation and fibrosis (11,26) .
The seminal vesicles and deferent ducts may present wall thickening, stricture, and parietal or intraluminal calcifications at sectional images (26) .
Tuberculous prostatitis is characterized by decreased echogenicity and increased vascularization at Doppler ultrasonography, similarly to prostate cancer (12) . In patients with prostatic abscess (Figure 19), CT and MRI reveal a cystic lesion with peripheral enhancement, indistinguishable from other causes of abscess. Dystrophic, nonspecific calcifications may be observed in the chronic phase of the disease (26) .
OTHERS
Iliopsoas muscle abscess ( Figure 20) was a well-known complication of vertebral tuberculosis until the implementation of modern chemotherapy schemes (32) . It may be classified as primary (30%) or secondary (70%), depending on the presence of an underlying disease such as tuberculosis. In developing countries, vertebral tuberculosis (Pott's disease) is considered the most common cause of psoas muscle abscess; however, few reports are found in the literature of psoas muscle abscess as a primary presentation of tuberculosis (33) . The presence of calcification in the abscess is virtually pathognomonic of tuberculosis (12) .
CONCLUSION
Tuberculosis presents a wide spectrum of clinical and imaging findings and may affect many different organs in different ways. The diagnosis requires a high degree of suspicion and, although it is defined only by means of biopsy and culture of specimens, it is important that the radiologist recognizes the imaging findings, allowing for the establishment of a more effective strategy to confirm the diagnosis and to institute the appropriate treatment as soon as possible.
Young's modulus of [111] germanium nanowires
Copyright © 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
Nanostructured, lithium-alloying electrodes provide a promising pathway towards substantively increasing the energy density in lithium-ion (Li-ion) battery anodes beyond the capabilities of graphite, which has a theoretical capacity of 372 mAh/g and remains the predominant choice in current commercial batteries [2-4]. Interest in silicon nanowires (NWs) [1] has emerged from their ability to reversibly store lithium at gravimetric capacities approaching their theoretical value (of 3579 mAh/g), which is the highest among all known electrode materials. On the other hand, germanium NWs [2-4] have attracted increasing attention due to their superior rate capabilities, even though their gravimetric capacity of 1384 mAh/g is lower than that of Si [4-6]. Another favorable attribute associated with crystalline Ge relates to its enhanced fracture resistance towards electrochemical lithiation as compared to crystalline Si [7]. While nanostructuring of these materials in a one-dimensional form offers directional strain relaxation along their longitudinal axis, an accurate understanding of their mechanical properties is still essential due to the large volume changes during the reversible lithium alloying process. The Young's modulus (YM) is a critical parameter in the development of stress fields and fracture within a battery electrode during lithiation, and hence its quantification is important [8]. For instance, Ge undergoes a 300% volume change upon full lithiation [4], and its YM plays a key role in its mechanical stability during this volume expansion process. In previous studies, YM values of 106 ± 19 GPa [9] and 112 ± 43 GPa [10] were reported for NWs synthesized in the [110] and [112] directions, respectively. Here, we present YM measurements from Ge NWs grown in the [111] direction [11-13]. In addition, the ultimate strength of these nanowires was estimated from an experiment involving brittle fracture in one of the tested NW devices. In this effort, Au-catalyzed Ge NWs, which are shown in Fig. 1, were grown by the vapor-liquid-solid mechanism via low-pressure chemical vapor deposition (LPCVD). Au nanoparticles (NPs) with diameters of 30 nm were dispersed on pre-cleaned Ge (111) substrates. To enhance the adhesion of the Au NPs, (3-aminopropyl)triethoxysilane was coated onto the Ge substrates before the dispersion of the Au NPs. 30% GeH4 diluted in H2 was employed as the gaseous precursor for the NW synthesis step. This Ge NW growth process consisted of two stages: nucleation (2 min) and elongation (10 min). The growth temperature and chamber pressure for the nucleation stage were maintained at 350 °C and 2 Torr, respectively. The elongation stage followed the nucleation stage with a growth temperature of 265 °C and did not involve changes in other growth parameters. The NWs tested in this effort ranged between ∼10 and 20 nm in radius and ∼1 and 5 µm in length.
The synthesized Ge NWs were harvested from the substrate, suspended in ethanol using ultrasonication, and assembled across gold nanoelectrodes by dielectrophoresis (DEP) [15-17]. The gold nanoelectrodes were defined as part of an array of devices on silicon substrates. The DEP parameters, such as voltage, time, and frequency, were optimized to yield either a single or a few non-overlapping nanowires at each device location in the electrode array. From among these devices, the locations containing only a single NW were selected for further testing. Next, the nanowires at these selected locations were clamped at their distal ends from the top side using electron beam induced deposition (EBID) of platinum (Fig. 2(a)). This results in the formation of doubly clamped Ge NW beams, which are conducive to nanomechanical characterization using the AFM [11-13]. This measurement was carried out at room temperature (∼298 K) and in air (atmospheric conditions). In this experiment, the nanowire beams, which are anchored at their terminal ends, are subjected to transverse loads at their mid-lengths using the force exerted by an AFM tip. This is accomplished by pushing the sample upwards against the tip. During this process, signals from the AFM photodetector and from the piezo-actuator controlling the sample stage are monitored to obtain the tip deflection (δ_tip) and stage displacement (δ_stage), respectively. From these variables, the tip force acting on the NW is calculated as F_NW = k_tip × δ_tip, where the spring constant of the AFM cantilever (k_tip) is obtained using Sader's resonance damping method [18], and the NW deflection is calculated as δ_NW = δ_stage − δ_tip. The resulting F-d relationship is used to extract the YM of the NW sample. The F-d curve for the NW sample of Fig. 2(a) is shown in panel "c" of this image.
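As a minimal sketch of this bookkeeping (all numerical values below are illustrative placeholders, not measured data from this work), the conversion of the raw AFM signals into an F-d curve can be written as:

```python
# Sketch of the F-d bookkeeping described above (illustrative values only):
# F_NW = k_tip * delta_tip and delta_NW = delta_stage - delta_tip.
import numpy as np

k_tip = 3.0                                  # cantilever spring constant [N/m], Sader method
delta_stage = np.linspace(0.0, 60e-9, 200)   # piezo stage displacement [m]
delta_tip = 0.4 * delta_stage                # tip deflection from the photodetector [m]

f_nw = k_tip * delta_tip                     # force applied to the NW [N]
delta_nw = delta_stage - delta_tip           # NW midpoint deflection [m]
# The (delta_nw, f_nw) pairs form the F-d curve that is fit to Eq. (1).
```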
As seen in Fig. 2(c), the deflection of the NW increases almost linearly with the applied force in the small-deformation region, which extends up to a characteristic value approaching the radius of the wire. This near-linear relationship indicates that the NW is subjected to bending in this portion of the F-d curve. At larger NW deformations, the F-d curve is non-linear. The non-linearity emerges from the one-dimensional nature of the nanobeam, which results in a combination of bending and tensile stretching at large deformations. 10,19 This non-linear F-d behavior in one-dimensional nano-beams can be expressed using the following model [10,13]:

F(δ_NW) = (192 E I / L³) δ_NW f(ε),    (1)

where f(ε) is a stretching correction factor expressed through a dimensionless deflection parameter ε (the factor involves the term 350 + 3ε), and E, I (= πR_NW⁴/4), L, and R_NW represent the YM, moment of inertia, NW beam length, and NW radius, respectively. In our experiments, the NW beam length and radius were estimated from tapping-mode AFM height plots (inset of Fig. 2(c)) and from SEM images, respectively. By fitting the observed experimental data to this analytical model, the NW YM (i.e., the unknown parameter) is extracted. For the Ge NWs tested in this work, the non-linear relationship given by Eq. (1) was found to accurately describe the observed F-d behavior, which can be clearly seen from the data shown in Fig. 2(c). For this NW device with a radius of 15 nm, the YM was calculated as 89.4 GPa.
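Because the exact correction factor of Eq. (1) is given in refs. [10,13], the sketch below assumes a generic bending-plus-stretching form, F = (192EI/L³)δ + C(EA/L³)δ³ with a mode-shape-dependent coefficient C; it only illustrates how the YM is extracted as the single fit parameter (the radius and length are example values).

```python
# Hedged sketch of the YM extraction: a doubly clamped beam model with the
# linear bending stiffness 192*E*I/L**3 plus an assumed cubic stretching
# term (the exact factor of Eq. (1) is given in refs. [10,13]).
import numpy as np
from scipy.optimize import curve_fit

R = 15e-9                    # NW radius [m] (from SEM; example value)
L = 400e-9                   # suspended beam length [m] (from AFM height trace)
I = np.pi * R**4 / 4         # moment of inertia of a cylindrical beam
A = np.pi * R**2             # cross-sectional area
C = np.pi**4 / 8             # assumed stretching coefficient (mode-shape dependent)

def f_model(d, E):
    """Midpoint force [N] as a function of deflection d [m] and modulus E [Pa]."""
    return (192 * E * I / L**3) * d + C * (E * A / L**3) * d**3

# Synthetic stand-in for the measured (delta_nw, f_nw) data:
d_nw = np.linspace(0, 30e-9, 60)
rng = np.random.default_rng(0)
f_nw = f_model(d_nw, 89.4e9) * (1 + 0.02 * rng.standard_normal(d_nw.size))

(E_fit,), _ = curve_fit(f_model, d_nw, f_nw, p0=[100e9])
print(f"fitted YM: {E_fit / 1e9:.1f} GPa")   # ~89 GPa for this synthetic set
```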
A total of 7 [111] Ge NW samples were tested, and the average value of their YM was obtained as 91.9 GPa (with 95% confidence limits of ±8.2 GPa). Table I lists the NW dimensions (i.e., the radius/length) and the measured YM for these 7 samples. The variations in NW beam length (L) arise from the following factors: (a) the differences in electrode designs within the array, where the inter-electrode design spacing was fixed at 400 nm at one-half of the locations and at 800 nm at the rest of the locations, and (b) the differences in orientation of the assembled NWs with respect to the electrode gaps. The YM values are also plotted as a function of NW radius in Fig. 3. This plot reveals that, within the measured range of radii, the YM is independent of the NW radius and agrees with past measurements involving Ge NWs [9,10]. These past experiments have reported YM values of 106 ± 19 GPa and 112 ± 43 GPa for single-crystal Ge NWs oriented along the [110] and [112] directions, respectively [9,10]. Our results, taken together with previously published data, point to the dependence of YM on the crystal orientation, as would be expected. It is important to note that the theoretical values for YM in Ge crystals are estimated to be 103, 138, and 155 GPa for the [100], [110], and [111] directions, respectively [20,21]. These values have also been consistent with density-functional theory calculations for Ge NWs when their diameters exceeded 2 nm [22]. The measured YM value is lower than the theoretical estimate of 155 GPa [22] for [111] NW crystals. We attribute this difference to the presence of an amorphous and much softer germanium oxide layer on the surface of these wires. The existence of such an oxide is evidenced by the TEM images presented previously (Fig. 1). The nominal thickness of the oxide layer was measured from these micrographs and was determined to be ∼3 nm. To estimate the intrinsic modulus of the Ge core, we employed a core-shell model [23] for these NWs, using an approximate thickness of 3 nm for the oxide shell layer. Using this approach, the YM of the Ge core can be calculated as E_Ge = (EI − E_ox I_ox)/I_Ge [23], where E_ox, I_ox, and I_Ge represent the YM of the oxide shell, the moment of inertia of the oxide shell, and the moment of inertia of the Ge core, respectively. Assuming an oxide thickness of 3 nm and an oxide YM of 53 GPa [24], the intrinsic YM of the [111] Ge NWs is obtained as 147.6 ± 23.4 GPa. It is important to note that the value for the oxide YM, which has been used here and has been cited in the past for the [112] Ge NW system [10], is obtained from cylindrical specimens with millimeter-scale diameters and represents the best available estimate for the surface oxide layer. The YM value, which is obtained after accounting for the oxide layer, is very near to the theoretical estimate of 155 GPa [22] for this crystal direction and hence supports our argument that the observation of a lower YM in our NWs is due to the surface oxide layer.
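The core-shell correction can be checked numerically; the sketch below implements E_Ge = (EI − E_ox I_ox)/I_Ge for a cylindrical core inside a 3 nm oxide shell, using the 53 GPa oxide modulus quoted above (the radius is an example value taken from the 15 nm device).

```python
# Numeric sketch of the core-shell correction described above.
import math

def core_modulus(E_meas, R, t_ox=3.0, E_ox=53.0):
    """Intrinsic core YM [GPa] from the measured beam YM [GPa]; R and t_ox in nm."""
    r_core = R - t_ox
    I_total = math.pi * R**4 / 4        # whole wire (core plus shell)
    I_core = math.pi * r_core**4 / 4    # Ge core
    I_ox = I_total - I_core             # hollow-cylinder oxide shell
    return (E_meas * I_total - E_ox * I_ox) / I_core

# Example: the 15 nm radius device measured at 89.4 GPa
print(f"intrinsic core YM: {core_modulus(89.4, 15.0):.1f} GPa")
# -> about 142 GPa, of the same order as the 147.6 GPa sample average
```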
The fracture behavior of one of the NW samples provides additional insights into the ultimate strength of this material system. The AFM trace-retrace curve (i.e., the raw data from the tip deflection/stage movement signals) representing the loading and unloading behaviors of the NW sample, and the resulting F-d curve prior to fracture, are shown in Fig. 4. From this figure, it is evident that this 17 nm radius NW does not undergo plastic deformation and fractures abruptly at a critical load (F_cr) of 1106 nN. The ultimate strength of the material can then be computed from F_cr using the large-deflection stress analysis of ref. [10]; from this relation, the ultimate strength of the [111] Ge NW was calculated to be 10.9 GPa. This experimental value represents ∼74.5% of the predicted theoretical strength (of E/2π [25], or 14.6 GPa) for this NW material. In prior work, experimental-to-theoretical ultimate strength ratios of ∼50% and 88% have been reported for Si [26,27] and [112] Ge NWs [10], respectively. Our measurement is at the higher end of this reported range for the experimental-to-theoretical strength ratio and indicates a relatively defect-free NW crystal.
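The quoted strength figures can be cross-checked with simple arithmetic; the sketch below reproduces the E/2π theoretical-strength estimate and the ∼74.5% experimental-to-theoretical ratio from the numbers given above.

```python
# Arithmetic check of the strength figures quoted above.
import math

E = 91.9e9                        # average measured YM of the [111] Ge NWs [Pa]
sigma_th = E / (2 * math.pi)      # theoretical strength estimate, E/(2*pi) [25]
sigma_exp = 10.9e9                # measured ultimate strength [Pa]

print(f"theoretical strength: {sigma_th / 1e9:.1f} GPa")          # ~14.6 GPa
print(f"exp./theor. ratio: {100 * sigma_exp / sigma_th:.1f} %")   # ~74.5 %
```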
We have presented the YM measurements from [111] Ge NWs, which were obtained using the AFM-based three-point bending technique. The observed average value of 91.9 GPa is lower than the theoretical value for this crystal due to the presence of an amorphous oxide layer. When the softer oxide surface layer is accounted for using a core-shell model, the average YM of the intrinsic Ge core is calculated to be 147.6 GPa, which approaches the theoretical value for [111] Ge. Our results point to the significance of these relatively thin surface layers on the effective elastic properties of nanostructured material systems. This aspect, which has not been addressed so far within the mathematical models that are being used today, needs to be suitably accounted for while predicting the lithiation-induced stress fields and fracture in these promising new material systems. It is important to note that the surface oxide has previously been shown to have beneficial effects in one-dimensional silicon anodes by preferentially directing the dimensional volume expansion during the lithiation process and thereby stabilizing the solid electrolyte interphase layer [28]. Hence, we would like to emphasize that our observation points only to the importance of accounting for the impact of this surface oxide layer on the elastic properties of the NW and does not necessarily entail a conclusion/recommendation to preferentially eliminate them, which may be possible using chemical etching techniques involving HF or HCl acids. Furthermore, our results point to an exceptionally high experimental-to-theoretical ratio for the ultimate strength of these crystals. This is another important attribute that advances their suitability for use as high-capacity and high-rate alloying anode systems in next-generation batteries.
This work was partly supported by the National Science Foundation under Grant No. 1453966. This work was performed, in part, at the Sandia-Los Alamos Center for Integrated Nanotechnologies (CINT), a U.S. Department of Energy, Office of Basic Energy Sciences user facility. This involved chip nanofabrication activities at CINT, which were performed under the user Proposal No. U2014A0084. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.
FIG. 1. (a) HR-TEM image of a [111] Ge NW with the electron diffraction pattern shown in the inset. (b) An image showing the amorphous oxide layer, with a nominal thickness of ∼3 nm, on the NW surface. The scale bars in panels "a" and "b" measure 5 nm and 20 nm, respectively.
FIG. 2. (a) A 3D, tapping-mode AFM image of a NW device, which is clamped on the top side with EBID Pt. The inset shows a SEM image of the NW (scale bar = 300 nm). (b) A schematic illustration of the AFM-based three-point bending test and the NW deformation in the doubly clamped mode. The arrow indicates the force exerted by the AFM tip. (c) The force vs. displacement plot of the NW. The NW height trace, which was obtained from a separate tapping-mode scan, is shown in the inset.
FIG. 4. (a) The AFM loading and unloading curves of a NW device, which exhibited brittle fracture. The arrow indicates the onset of fracture. (b) The extracted force vs. displacement plot, shown here up to the fracture point. (c) An AFM image of the post-fracture NW device, with the arrow pointing to the fracture location. The ultimate strength of this NW is calculated as 10.9 GPa.
TABLE I. The measured YM values and dimensions for the NW samples.
FIG. 3. YM vs. radius plot for the 7 NW samples. The average value of the YM is shown using the dotted line.
Foliar Application of Sulfur-Containing Compounds—Pros and Cons
Sulfate is taken up from the soil solution by the root system, and inside the plant it is assimilated to hydrogen sulfide, which in turn is converted to cysteine. Sulfate is also taken up by the leaves when foliage is sprayed with solutions containing sulfate fertilizers. Moreover, several other sulfur (S)-containing compounds are provided through foliar application, including the S metabolites hydrogen sulfide, glutathione, cysteine, methionine, S-methylmethionine, and lipoic acid. In addition, S compounds that are not metabolites, such as thiourea and lignosulfonates, along with dimethyl sulfoxide and S-containing adjuvants, are provided by foliar application; these are the S-containing agrochemicals. In this review, we elaborate on the fate of these compounds after spraying foliage and on the rationale and the efficiency of such foliar applications. The foliar application of S-compounds in various combinations is an emerging area of agricultural usefulness. In agricultural practice, the S-containing compounds are not applied alone in spray solutions, and the proper choice of combinations is of prime importance.
Introduction
Plants acquire and use carbon dioxide from the atmosphere, as well as water and inorganic nutrients from the soil, for their growth, development, and reproduction. Water and nutrients enter the plant body through the root system, and the replenishment of depleted nutrients in the soil is a common agricultural practice, i.e., soil fertilization. This approach poses limitations when it comes to the accessibility of nutrients for plants. This is primarily due to the formation of insoluble forms in the soil following fertilizer application, or to leaching of soluble forms through the soil, which can ultimately contaminate ground water sources [1,2].
Leaves have the capacity for water and nutrient uptake when exposed to rain or irrigation. Spraying the above-ground part of the plant with dilute solutions of the needed nutrients is also a common agricultural practice, i.e., foliar fertilization. When the supply of a nutrient by the rhizosphere is inadequate or uncertain, the foliar application of fertilizers is widely used in current crop management for optimal production of the crop. Foliar fertilization provides benefits compared to soil fertilization when the demand for nutrients by the plant exceeds the capacity of its root system for nutrient uptake, and/or when restricted transport within the plant limits nutrient delivery to tissues. It is also advantageous under adverse environmental conditions that may negatively affect crop performance [1]. This practice is used for supplying additional nutrients such as nitrogen (N), phosphorus (P), potassium (K), magnesium (Mg), sulfur (S), and micronutrients. Through foliar fertilization, essential nutrients are supplied to the plant in the proper concentrations, enhancing the plant's nutritional status and ultimately leading to increased yield quality and production [3,4].
Apart from the nutritional significance of foliar application, there is a second important contribution of this agricultural practice: the biofortification of crop production. Crops are biofortified with elements such as zinc (Zn), iron (Fe), manganese (Mn), and selenium (Se), and with compounds such as folate and vitamins [5-7]. Furthermore, leaves can acquire compounds applied as ingredients of an aqueous solution, and foliar application provides a third significant advantage: it is used to alleviate the detrimental effects of various adverse conditions, and combinations thereof, including heat, cold, frost, drought, and salinity, by spraying the crop with a range of compounds such as growth regulators, stimulators, or biostimulants [8]. These substances play a pivotal role in plant growth and development as well as in disease control. In practical terms, foliar application is becoming increasingly vital in agricultural practices.
The nutrients are applied as aqueous solutions via spraying, with the focus on the penetration of ionic, polar solutes through the leaf surface. This is a target-oriented fertilization method, and an environmentally friendly one. The nutrients are delivered directly to the foliage, and in smaller amounts compared with soil fertilization. This practice reduces the environmental impact caused by soil fertilization [1,2].
The response of foliage to spraying is in some cases variable and perhaps not reproducible. This can be attributed to the ever-changing environmental conditions during spraying, coupled with a multitude of factors affecting the penetration of the solution sprayed onto foliage. This complex scenario has been aptly referred to as the "spray and pray" situation by Fernández and Eichert (2009) [9]. The environmental conditions, including temperature, relative humidity, and wind speed, influence the first step of foliar application. Other factors include the molecular weight and the physical and chemical properties of the nutrient, the concentrations of the active ingredients, and the time of application. These factors affect the penetration of the spray solution through the plant surfaces, and in particular the penetration through the cuticle and the stomata [9-11].
Another factor that affects the penetration of the sprayed material is the morphology and anatomy of the leaf. The physiology of the leaf surface, and especially the presence of epicuticular waxes, determines the rate of retention, the wettability and, finally, the penetration of the sprayed material [12-14]. The effectiveness of the application is assessed in relation to its penetration, the reduction or correction of the deficiency, the improvement in yield, and the quality of the produce [15-17]. The foliar application of nutrients is an agricultural tool that supplements soil fertilization under conditions of lower availability of these nutrients in the soil, and especially at the critical times of nutrient demand [15].
The commercial solutions for foliar application of nutrients are generally composed of at least two major components, the active ingredient(s) and the adjuvant(s). Adjuvants are materials that help the spray solution and its ingredients to improve wetting, spreading, and sticking on the surface of foliage, and then support the rate of penetration of the applied nutrients. The challenges associated with the penetration of the applied mineral nutrients during foliar application have prompted the extensive utilization of adjuvants and ongoing research to discover new ones that can improve the effectiveness of spray treatments. The addition of an adjuvant modifies the physical and chemical properties of the spray solution towards an effective wetting of the leaf surface [18].
Sulfur nutrition plays a crucial role in the growth and development of higher plants, and the S demand of agricultural crops ranges from 1 kg S t⁻¹ for sugar beet up to 17 kg S t⁻¹ for Brassica crops [19]. S supply has consequences for crop productivity and nutritional quality, and S fertilization has become an issue because reduced industrial emissions of S to the atmosphere have decreased the deposition of S onto agricultural land in many areas of the world [20]. Sulfur limitation results in decreased yields and quality parameters of crops [21]. Adequate S nutrition is also required for plant health and resistance to pathogens [22]. A series of specific responses aimed at optimizing acquisition and utilization are induced by sulfur limitation in all plant species studied to date [21,23,24]. Factors affecting S supply and the subsequent impacts on crops have been discussed by Haneklaus et al. (2007) [19]. On the other hand, complex interactions occur between S and other nutrients at the level of the whole plant, the individual tissues, and the cellular compartments, as discussed by Courbet et al. (2019) [25]. S deficiency mainly acts by reducing plant growth, which in turn restricts the root uptake of N, K, and Mg, and vice versa. In legumes, the nodules show high requirements for S, and there is a strong interaction between N and S.
It is common for crops to develop under a harsh agricultural environment in the region of cultivation. Climate change, in combination with other factors, has built a cultivation environment that causes reductions in crop yields due to various adverse conditions. The foliar application of natural compounds, including plant metabolites, improves the obtained yield under these circumstances [8]. Applying a range of metabolites with valuable properties and functions, such as glutathione, proline, glycine betaine, citric acid, L-tryptophan, polyols, ascorbic acid, lipoic acid, and tocopherol, contributes to the crop's ability to tolerate the various abiotic stresses it encounters at different developmental stages. The benefits of such applications are assessed and evaluated by measuring their effects on morphological, biochemical, metabolic, and genetic parameters in a variety of crop plants. Such applications usually result in improved yields when applied in the field. Foliar application of plant metabolites has proven to be an effective way to support the tolerance of the crop to various abiotic stresses [8]. Among the various plant metabolites tested so far, several S-containing compounds seem to be of prominent importance and have been incorporated into the foliar fertilization practice.
The target of this review is to examine in depth the foliar application of S-containing compounds. The significance of the foliar application of such compounds for nutritional purposes, along with the biofortification of crop production and the mitigation of the various negative effects of stresses, has driven significant research progress. Several S-containing compounds have been used as biostimulants to support the crop in overcoming stressful conditions. These compounds are the metabolites cysteine (Cys), methionine (Met), glutathione (GSH), S-methylmethionine (SMM), and lipoic acid (LA), as well as the non-metabolites thiourea (TU) and lignosulfonates, sodium sulfite, and sodium hydrogen sulfide; the latter two produce hydrogen sulfide. For each of these compounds, we discuss their role and contribution to crop growth, development, and health after foliar application (Figure 1). For the integrity of the approach, the S-containing adjuvants and agrochemicals have been added as a last group.
Figure 1. The arrows indicate the complex journey of the S-containing solute to the action point within the cell, which includes several structures to cross; the characteristics of each of them are summarized in the text.
Towards understanding and handling the effectiveness of such foliar applications, the nature and mode of action of these compounds, along with some characteristic case studies, are discussed.
The Journey of the Sprayed Compounds from the Aerial Plant Surfaces to the Action Point within the Plant Tissues and Cells
In plants, the root is the organ that absorbs water and inorganic nutrients from the soil, which are then transported to the leaf tissues and cells through the vascular system. In leaves, the inorganic nutrients are incorporated or assimilated into various organic compounds, and the leaves act as sources that send these organic compounds to the other plant organs, which in turn act as sinks. The acquisition of inorganic nutrients by the roots is in mutual relationship with the assimilation activities in leaves. The plant foliage, which comprises leaves, stems, inflorescences, flowers, and fruits, can take up nutrients as well [26].
From the Surface to the Vascular System
The journey of a compound sprayed on the aerial plant surfaces up to the action point within the cells starts with its penetration and transport through the following structures: the various layers of the cuticle, the cuticular pores, the stomata, the epidermal walls and the ectodesmata, as well as the lenticels. It continues through the cell wall and the apoplast, crosses the various membranes to enter cells, then enters the phloem and circulates within the vascular system for long-distance transport, and again crosses membranes to enter cells and finally reach the action point [1,5,9,26]. In this section, we summarize the contribution of these structures to the journey of the sprayed compound, and then we focus on the penetration and transport of the S-containing compounds that are sprayed on the surface of foliage and enter the corresponding epidermis.
A leaf consists of a petiole and a blade with an upper and a lower surface. Both surfaces are covered with epidermis, the outermost material of which is the cuticle, a composite material. This is the first barrier to the penetration of the sprayed solution, i.e., water and the solutes it carries, through the leaf surface. The cuticle consists of various layers; it is composed of lipids that are embedded into the matrix of the biopolymer cutin. Hydrocarbons, along with fatty acids, primary alcohols, and esters, have been identified as major constituents of the cuticle. The cuticular layers are protective layers, mainly composed of cutin, waxes, polysaccharides, and phenolics in various combinations. The existence of these layers makes the leaf surface waterproof. Changes in the quantity and chemistry of the cuticular waxes during the developmental stages of the leaves influence the wettability and the permeability of the leaf cuticles [27-30].
As an ingredient of the spray, sulfate is an anion, and the outer layer of the cuticle is negatively charged due to carbonyl and carboxyl groups existing within the cutin. This layer presents the major resistance to penetration, whilst in the inner layer the mobility of ions is greater than in the outer one. In contrast, the penetration of non-charged, lipophilic solutes through the cutin occurs by dissolution and diffusion. The mechanism of penetration of polar, hydrophilic molecules has been discussed by Fernández and Eichert (2009) [9]. Two parallel pathways in cuticles seem to be responsible for the transport of the lipophilic substances and the hydrophilic ones, with separate diffusion paths for the lipophilic non-electrolytes and the hydrated ionic compounds. The presence of cracks on the cuticular surface, i.e., the cuticular pores, contributes to the penetration of the solutes. Inside the pores, the cations are attracted to the negative charge and diffuse passively. In this way, the electrical charge is progressively balanced. In parallel, the anions start to penetrate through the pores. The rate of diffusion of ions across the layers depends on their concentration gradient. The cuticle is a polyelectrolyte with an isoelectric point at approximately 3.0, and its ion-exchange capacity buffers fluctuations of the pH. Spraying with solutions with pH values higher than 3.0 renders the cuticle negatively charged. On the surface, this favors the diffusion of the cations, whilst it repels the anions, such as sulfate. Alshaal and El-Ramady (2017) [5] have provided an estimate of the time needed for 50% of an applied nutrient to enter the leaf tissue, according to which sulfate needs 8 days. For comparison, Mg²⁺ needs 2-5 h, K⁺ 10-24 h, Ca²⁺, Zn²⁺, and Mn²⁺ 1-2 days, whilst Fe²⁺ or Fe³⁺ need 10-20 days to enter the leaf tissue. The phosphate anion needs 5-10 days, and molybdate 10-20 days.
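As an illustration of what these 50%-entry times imply, the sketch below assumes simple first-order uptake kinetics (an assumption made here for illustration, not a claim of the cited work) and converts the quoted half-times, taking midpoints of the reported ranges, into absorbed fractions after a given number of days.

```python
# Hedged sketch: first-order foliar uptake implied by the 50%-entry times
# quoted above (midpoints of the reported ranges; assumption for illustration).
t50_days = {
    "Mg2+": 0.15,             # 2-5 h
    "K+": 0.7,                # 10-24 h
    "Ca2+/Zn2+/Mn2+": 1.5,    # 1-2 days
    "sulfate": 8.0,
    "phosphate": 7.5,         # 5-10 days
    "Fe2+/Fe3+": 15.0,        # 10-20 days
}
t = 3.0  # days after spraying
for nutrient, t50 in t50_days.items():
    frac = 1 - 0.5 ** (t / t50)   # f(t) = 1 - 2**(-t/t50)
    print(f"{nutrient:16s} t50 = {t50:5.2f} d -> {100 * frac:5.1f} % after {t:.0f} d")
```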
In the ledges of the cuticle, aqueous pores are found preferentially at the basal cells of trichomes, at guard cells, and in anticlinal walls. Such pores are dynamic in nature and are formed only in the presence of water, because hydration of the permanent dipoles and the functional ionic groups is needed. The distribution of polar groups within the lipophilic polymer leads to the formation of either isolated aqueous clusters or a continuous aqueous space where the polar groups are spread out. The radii of these pores range from 0.45 to 1.18 nm. If the diameter of the molecule of the sprayed solute is within this range or less, it is likely that this compound can penetrate through the aqueous pores and reach the epidermal cell walls. The polar characteristics of the plant surface result in more efficient translocation of cations than of anions. An increase in cation valency decreases the rate of cation penetration through the cuticle. A charge-neutral molecule like urea can pass easily and efficiently through the plant surface [26,31].
Once the nutrient penetrates the plant surface, the barriers to nutrient uptake after the cuticle layers are the cell walls and plasma membranes; then the nutrient can take either the apoplastic or the symplastic pathway to reach the vascular tissues for its translocation. The cell wall includes the primary and secondary walls, along with the middle lamella. The primary cell wall is composed of pectin along with the polysaccharides cellulose and hemicelluloses. This results in a negative charge on the surface due to carboxyl and hydroxyl groups. There are structures in the epidermal walls that terminate on the surface of the outer epidermal cell walls, called ectodesmata. These structures are usually present on both the upper and lower epidermal cells, the guard cells, the sides of the larger leaf veins, trichomes, and the epidermal cells surrounding the capitate hairs. Ectodesmata are always covered by the cuticle and do not extend to the outer leaf surface. In this region, the structure is comparatively loose compared with the structure of the cell wall, whilst the interspaces are filled with a coarse reticulum of cellulose extending from the plasmalemma to the cuticle. In this way, ectodesmata serve as a polar pathway in the absorption and excretion of substances. In cuticles, most of these pore-like structures have a diameter of less than 1 nm and are distributed at a density of approximately 1000 pores/cm². These allow ready access to low-molecular-weight solutes but not to larger molecules like synthetic chelates [26,32-34].
Stomata are found on the surface of the leaves; they are dynamic openings formed by guard and subsidiary cells that facilitate the exchange of water vapor, CO₂, and O₂ between the external atmosphere and the internal one in the stomatal cavity. Both the upper and the lower surface of a leaf, the adaxial and the abaxial, respectively, can bear numerous stomata. The stomatal density is usually higher on the lower surface, whilst fewer or no stomata may be present on the upper surface. The solutes penetrate through stomata by diffusion along the wall pores, which are less selective and thus offer less resistance compared to the cuticle. Overall, stomata seem to serve as a major passage for ion penetration during foliar uptake. A higher density of cuticular pores in the cell walls between the guard and subsidiary cells leads to a higher absorption of nutrients [26,35].
The nutrients diffuse across the cell wall along a concentration gradient and reach the apoplast. Apart from the cell walls, the apoplast also includes the intercellular space and the xylem, which consists of dead cells. The nutrients are transported into the xylem sap and allocated to the leaves via the flow of water. Several functions take place in the apoplast: water and nutrient transport, cellulose synthesis, the transport of materials for the construction of the cuticle, and the transport of molecules that are involved in plant development and in plant responses to various adverse conditions. Compounds secreted by pathogens can also be found in the apoplast. The diversity of the molecules found in the apoplast highlights its importance in the survival of plant cells [27]. The characteristics of the apoplast are expected to influence the fate of nutrients applied to foliage by spraying. The pH value ranges from 4.5 to 7.0 depending on plant species and growing conditions. The H⁺-ATPases and the ABC-transporters located at the plasma membrane pump H⁺ into the apoplast, thus lowering its pH. The apoplast has a role in ionic balance and serves as a transient ion reservoir. The functions of the leaf apoplast include the regulation of transient pH fluctuations, signal transduction leading to stomatal responses, and the detoxification of toxic elements. The ion relations in the apoplast vary temporally because of changes in the metabolic activity caused by the day-night transition. Immediately after the onset of light, the process of photosynthesis leads to an alkalization of the apoplastic pH. The apoplastic pH can also be affected by external factors such as drought or flooding. It is therefore expected that the rate of diffusion of ions into the apoplast is determined by the pH [26,36-45].
Xylem and phloem constitute the vascular system of the plant, which carries out long-distance transport. The main function of the vascular system is the effective distribution of nutrients between plant organs. The phloem connects the sources with the sinks of the photosynthetic products. Within the phloem sap, inorganic cations and anions are co-transported. In annual plants, the mature leaves are usually sources of carbon and nutrients that are supplied to sinks, including roots, flowers, and seeds. In perennial plants, the source-sink relationship depends on the season. During spring, buds and developing leaves are sinks for carbon and nutrients. Later in the year, mature deciduous leaves are the source. Stem tissues of bark and wood can be source organs in spring and sink organs during active growth and leaf senescence [46-50].
The importance of the vascular system as a structure for inter-organ communication has also been highlighted [51]; it contributes to the transfer of signals from roots to shoot, and vice versa, for example by signaling nutrient demand. Signaling from and to other plant organs, such as flowers, seeds, and, in the case of perennial plants, bark and wood, is considered too. The compounds that are transported through the vascular system create cycling pools among the various tissues and organs. These pools rely on the transport processes across the plasma membrane of cells. They are also influenced by the short-distance transport processes through the organellar membranes within cells, which in turn result in intracellular cycling, including that of S [39,50,52-59].
The Fate of the Foliar-Applied S-Containing Compounds
Sulfate and the S-containing compounds Cys, GSH, Met, γ-glutamylcysteine (γEC), and SMM have been detected as being transported within the vascular system. All these compounds can cycle within the plant tissues, between roots and the shoot and vice versa, and can be distributed from places of surplus to places of demand. In the xylem sap, S mainly exists as sulfate. Reduced S-containing compounds have also been found, but in lower amounts. S-containing compounds play a critical role in the response of plants to abiotic stress factors. For example, although abscisic acid (ABA) is the key regulator of responses to drought and/or high-salt stress, there seems to be an interaction between S metabolism and ABA biosynthesis. It has been reported [60] that sulfate supply affects the synthesis and steady-state levels of ABA in Arabidopsis, and evidence has been provided for a significant co-regulation of S metabolism and ABA biosynthesis that operates to ensure sufficient Cys for the acting mechanism, a fact that highlights the importance of S for the stress tolerance of plants. In addition, it has been tested [61] whether sulfate is an early xylem-delivered signal communicating drought stress to the shoot. Sulfate and ABA concentration changes in the xylem sap were related to stomatal conductance. It was found that drought affects sulfate transporter expression. Xylem-delivered apoplastic sulfate induces stomatal closure and triggers gene expression in guard cells in an ABA-like manner. The xylem-derived sulfate seems to be a chemical signal of drought that induces stomatal closure via anion channels and/or guard cell ABA synthesis.
The composition and concentrations of the S compounds differ with S nutrition, season, developmental stage, position along the trunk, species, and mycorrhization. These aspects have been discussed in detail for trees [48,50,62-66]. The supply of reduced S from the shoot to the roots is carried out by phloem transport. Together with S allocation in the xylem, the cycling pools of the circulating S compounds provide oxidized and reduced S to the respective sinks [48,50,67]. Thus, the above-mentioned S-containing compounds can cycle between the interconnected structures: cells, tissues, and organs. Cycling occurs among various tissues and includes exchanges between phloem and xylem, both within the shoot and in transit to stems and roots. This process facilitates the distribution of S to the sites of its demand for growth and development, helps signal the S status of the plant, and plays a crucial role in regulating the overall nutrition of the entire plant. Sulfur cycling has been investigated in perennial plants with regard to the annual growth cycle. Perennial plants need to store nutrients during dormancy, and they must mobilize nutrients during spring to supply the newly sprouting shoot with carbohydrates, N, S compounds, and the needed nutrients [39]. As an example of S cycling at the tissue level, cycling of Cys and GSH has been shown in maize plants between mesophyll and bundle sheath cells. Sulfate reduction and assimilation up to Cys occur in the bundle sheath cells, whilst the use of Cys in GSH synthesis takes place in the mesophyll cells [48,50,68].
To reach their site of action, the S-containing compounds need to cross membranes too. Therefore, specific transporters are needed at the whole-plant level. Sulfate uptake is a dynamic biological process that occurs at the cell, tissue, and organ levels. Sulfate transporters (SULTR) are integral membrane proteins controlling the flux of sulfate entering the cells and the subcellular compartments across the membranes. Takahashi (2019) [69] has elaborated on sulfate transport systems. Sulfate is necessary for the synthesis of many metabolites, and Gigolashvili and Kopriva (2014) [70] have discussed the various transporters that contribute to S metabolism. The primary and secondary assimilation, the biosynthesis, the storage, and the final utilization of S-containing compounds all require movement between organs, tissues, cells, and organelles. Therefore, efficient transport systems for S-containing compounds are required across the plasma membrane and the organellar membranes. A detailed understanding of the mechanisms and regulation of transport is needed towards improving the plant yield, biotic interactions, and nutritional condition of crops, along with the successful biofortification of the involved pathways. The known transporters are briefly presented in the discussion of the foliar application of each S-containing compound.
The Impact of the Atmospheric S on Foliage
Atmospheric sulfur comprises a range of gaseous forms of S, including sulfur dioxide (SO₂), hydrogen sulfide (H₂S), carbon disulfide (CS₂), carbonyl sulfide (COS), and dimethyl sulfide (DMS) [71], as well as aerosols, mainly as sulfate from the oxidation of S gases or as organosulfates [72,73], that enter the plant. It has long been recorded that several nutrients, including S and S-compounds, are absorbed by the leaves following rain and transported to other parts. S-compounds are released into the atmosphere by the emission of industrial gases or during ore-smelting activity [34]. When SO₂ is emitted into the atmosphere and transported by wind and air currents, it reacts with water, oxygen, and other chemicals to form sulfuric acid, which in turn mixes with water and other materials and falls onto foliage. Winds can blow SO₂ over long distances, thus making sulfuric acid-containing rain a global problem. Although a portion of the SO₂ in acid rain comes from volcanoes and other natural sources, most of it comes from the burning of fossil fuels. High concentrations of SO₂ in rain and acidic fog negatively affect foliage. In the short term, the plants are less able to absorb sunlight because they become weak and less able to withstand freezing temperatures. In the long term, the foliage contains dead leaves. Moreover, acid rain leaches aluminum, which may be harmful to plants and crops as well as animals, and nutrients from the soil [74]. Global climate change increases the occurrence of wildfires worldwide, and fire smoke has a devastating effect on the entire ecosystem, including plants, over very long distances. The concentrations of SO₂ released into the atmosphere during fires can reach 3000 nL L⁻¹ [75].
SO₂ is a highly hazardous gas, posing severe stress on plants due to sulfite formation in the apoplastic space, which in turn leads to necroses of the leaves. It enters the leaves mostly via the stomata and is converted to sulfite, a toxic substance, and the elevated sulfite must be detoxified. Studies investigating the detoxification mechanism(s) of SO₂ in natural environments and non-model organisms are plentiful. Baillie et al. (2016) [76] studied the detoxification of the S surplus in plants and discussed different strategies of survival. It was found that plants from selected fields reacted differently after exposure to SO₂ at the level of (i) the closure of the stomata and (ii) the overall accumulation of thiols and/or sulfate. In one strategy, the S surplus was channeled into the formation of S-metabolites like thiols, i.e., the strategy of reductive detoxification. Oxidative detoxification into sulfate was also found in another strategy, with or without increased sulfite oxidase activity. Baillie et al. (2019) [77] provided new insights into sulfite detoxification, as they found apoplastic peroxidases that enable additional sulfite detoxification, acting as a first line of defense upon exposure to SO₂. Sulfite detoxification is of utmost importance in plants, and it has been found that the sulfite concentration is strictly controlled. Increased guaiacol peroxidase activity can act as a first line of defense and assist the activity of sulfite oxidase. Weber et al. (2021) [75] examined the detoxification of the S surplus in leaves of beech (Fagus sylvatica) and oak (Quercus robur) exposed to elevated SO₂. In both species, an induced stress reaction was indicated by a 1.5-fold increase in oxidized GSH. In beech leaves, the activities of the sulfite detoxification enzymes were increased 5-fold, and a trend of sulfate accumulation was observed. In contrast, oaks did not regulate sulfite oxidase and apoplastic peroxidases during smoke exposure; however, the constitutive activities were 10-fold and 3-fold higher than in beech. Beeches use an efficient upregulation of oxidative sulfite detoxification enzymes, while oaks hold a constitutively high enzyme pool available.
Plants emit gaseous S compounds. Bloem et al. (2012) [78] focused on the dependence of the H₂S and COS exchange with ambient air on the S status of oilseed rape (Brassica napus L.), as well as on the fungal infection of oilseed rape with Sclerotinia sclerotiorum. S fertilization and fungal infections affect the exchange of H₂S and COS by the crop. Such emissions are related to the plant S status or the fungal infection. H₂S was either released or taken up by the plant depending on the ambient air concentration and the plant demand for S. The emissions of H₂S were closely related to both the pathogen infections and S nutrition. S fertilization caused a shift from H₂S consumption by S-deficient oilseed rape plants to H₂S release after the addition of S. The fungal infection caused a stronger increase in H₂S emissions. COS is normally taken up by plants. Healthy oilseed rape plants acted as a sink for COS, whilst fungal infection caused a shift from COS uptake to COS release. Jing et al. (2019) [79] have elaborated on the exchange of carbonyl sulfide (COS) and carbon disulfide (CS₂) between the atmosphere and cotton fields in an arid area. The vegetation presented a predominant seasonal net uptake of COS during the growing season. This was not the case for CS₂, whose fluxes from the vegetation presented no significant seasonal variations. The exchange rates of CS₂ and COS were found to be stimulated by the addition of urea fertilizer.
S-Containing Mineral Compounds Applied as Foliar Sprays
Inorganic S-containing compounds may contain sulfate as the anion, or the thiol group -SH. A good number of sulfate compounds have been used as fertilizers for foliar application, namely ammonium sulfate, potassium sulfate, potassium-magnesium sulfate, zinc sulfate (monohydrate or heptahydrate), copper sulfate (monohydrate or pentahydrate), ferrous sulfate (monohydrate or heptahydrate), ferric sulfate tetrahydrate, and manganese sulfate (anhydrous or tetrahydrate). Sodium hydrosulfide (NaHS) is used as a donor to produce hydrogen sulfide.
Spraying with Sulfate Salts
The importance of applying sulfate by foliar application. To correct nutrient deficiencies by foliar fertilization, the soluble sources of the various nutrients are more efficient than insoluble or slightly soluble sources, and sulfate salts are soluble. The principal sources of macro- and micronutrient fertilizers and their solubility, including the sulfate-containing ones, have been presented and discussed by Fageria et al. (2009) [2]. The chelated sources of micronutrients are comparatively more efficient than non-chelated ones, but they are also more expensive. The selection of appropriate sources of inorganic fertilizers for foliar sprays is of great importance, considering penetration and transport efficiency, and foliage burning. Considerable differences have been reported among the fertilizer sources as regards burning of foliage following the foliar application of inorganic fertilizers, especially N. The risk of foliage burning is higher when the N source is other than urea, such as ammonium sulfate [80]. Leaf injury and yield depression of soybean by the various NPKS materials were noted when the fertilizer application took place during midday rather than during the early morning or late afternoon hours [81]. This clearly shows that evaporation of the sprayed drops on foliage concentrates the drop, thus altering the pH; therefore, the timing of application is important. Foliar application should be made when the plant is not under water stress, i.e., when the plant is turgid and cool [82]. When the crop is under a given nutrient stress, this is the most critical time to apply the nutrient. Stress periods occur during active growth, especially when the plant is switching from the vegetative to the reproductive stage [2].
Sulfate salts are used for agronomic biofortification, through foliar application of the micronutrient fertilizer directly to the leaves.Rice, wheat, maize, legumes, sorghum, millet, and sweet potato dominate diets worldwide and agronomic biofortification is mainly focused on them [83,84].Foliar fertilization with micronutrients often stimulates more nutrient uptake and efficient allocation in the edible plant parts than soil fertilization [85].Foliar pathways are generally more effective in ensuring uptake into the plant because immobilization in the soil is avoided.However, the combination of soil and foliar application is in several cases the most effective method [86,87].The downside of foliar application is that fertilizers can easily be washed off if rain follows, whilst several fertilizers are more costly and difficult to apply [88].
Zinc-enriched grains are of great importance for crop productivity on potentially Zn-deficient soils, as they secure better seedling vigor, denser stands, and higher stress tolerance. Due to its high solubility and low cost, zinc sulfate (ZnSO₄) is the most widely applied inorganic source of Zn. Foliar application, along with combined soil and foliar application of Zn fertilizers under field conditions, is a highly effective and very practical way to maximize the uptake and accumulation of Zn in whole wheat grain. Such applications can achieve concentrations of up to 60 mg Zn kg⁻¹ [89]. The most effective method for increasing Zn in grain was the combined soil plus foliar application, which provided a 3.5-fold increase in the Zn concentration of grain. The timing of foliar Zn application is an important factor determining the effectiveness of the foliar-applied Zn fertilizers in increasing grain Zn concentration. According to Cakmak (2008) [89], large increases in the loading of Zn into grain can be achieved when foliar Zn fertilizers are applied to plants at a late growth stage.
The main inorganic source of Fe is FeSO 4 , while Fe 2 (SO 4 ) 3 is also used.In a case study provided by Papadakis et al. (2007) [90], three-months-old citrus plants, including two genotypes [sour orange (Citrus aurantium L.) and Carrizo citrange (C.sinensis L. cv.Washington navel × Poncirus trifoliata)], were sprayed with 0.018 M iron sulfate (FeSO 4 •7H 2 O), or 0.018 M manganese sulfate (MnSO 4 •H 2 O).After its foliar application, Mn was found to be relatively mobile within citrus plants, resulting in a significant increase in Mn concentrations in top leaves, basal leaves, stems, and roots of sour orange, and in top leaves, basal leaves, and stems of Carrizo citrange.Transport of Mn from the basal, sprayed leaves to the top, unsprayed ones were found for both genotypes.The results did not provide any evidence for Mn translocation from sprayed tissues to roots.As regards Fe, it was found to be strictly immobile within citrus plants after its foliar application.Spraying with Fe significantly increased the concentrations of Fe in the stems and basal leaves of both genotypes and no transport of Fe from sprayed tissues to unsprayed ones (top leaves, and roots) was found [90].
Iron chlorosis is a very common nutritional disorder in plants and variable results have been reported from Fe source studies related to differences in Fe placement, rate, time of application, weather, and soil conditions [91][92][93].The most effective agronomic practices for the Fe enrichment of crops are through foliar application of mineral Fe.Foliar application has already showed to increase Fe concentrations in wheat grain and rice grain [94].However, there are also contradictory results, as some studies have showed no response of plants upon foliar Fe application, especially under treatment with inorganic and chelated Fe fertilizers [87,88].
Fe and Cd present similar chemical properties and entry route and are closely correlated in crops cultivated in contaminated soils.Many studies have characterized the effects of Fe in crops under Cd stress, and Afzal et al. (2021) [95] highlighted the fact that the underlying mechanisms that reduce the Cd concentration within the plant when different Fe fertilizers are applied are poorly understood.A report by Bashir et al. (2018) [96] suggested that foliar application of Fe complexed with lysine significantly increased plant growth and biomass, biochemical and physiological attributes in O. Sativa grown in Cr stress environment.Wang et al. (2021) [97] supplied foliar Fe fertilizers in the ionic (FeSO 4 •7H 2 O, Fe(NO 3 ) 3 •7H 2 O), and chelated forms, (Na 2 FeEDTA, and FeEDDHA) and it was concluded that foliar application of chelated ferrous Fe provides a promising alternative approach for enhancing growth and controlling Cd accumulation in rice plants.These results indicate that foliar application of chelated ferrous Fe provides a promising alternative approach for enhancing growth and controlling Cd accumulation in rice plants.The above-mentioned references contribute to the understanding of the associations between plant Fe nutrition status and Cd accumulation.
Sulfate transport and transporters.The amount of sulfate ions that enter the cell to be metabolized in the cytosol, chloroplast, or plastid, or to be stored in the vacuole, depends on the expression levels and the functionalities of the existing sulfate transporters (SULTR, see Section 2.2) in the corresponding membranes.The entire system for sulfate transport requires different types of SULTR.When sulfate penetrates the leaf epidermis, the corresponding the tissue and cell type transporters are expressed.The regulation takes place at the transcriptional and post-transcriptional levels and controls the expression levels of the corresponding SULTR, towards optimal internal distribution in response to the availability of sulfate and the demand for the synthesis of various S metabolites [69].Several SULTR have been detected in the vasculature and the xylem parenchyma of leaf [98,99].The occurrence of SULTR types varies depending on the location at the organ, cell, or subcellular compartment levels, the environment, and the genotype [100].There has been research towards answering whether these transporters release sulfate into the xylem, as well as on the transporters that are involved in the efflux of S compounds into the xylem from xylem parenchyma cells and from storage tissues along the trunk [48].Sulfate contents, together with SULTR expression, have been investigated in leaves, bark, and wood of field-grown poplar, and control of sulfate cycling by SULTR expression has been revealed [61,101].Sulfate accumulates in bark and wood during autumn, whilst sulfate can be taken out of the xylem or phloem sap for storage [50].
The Foliar Application of Hydrogen Sulfide
The impact of atmospheric H 2 S. The H 2 S applied to foliage can be used as a S source for growth.Being lipophilic in nature, H 2 S can rapidly cross the membranes without the intervention of channels.When no sulfate is supplied to the root, plants can grow with atmospheric H 2 S as the sole S source.Within the plant, H 2 S is biologically reactive, directly incorporated into Cys.The H 2 S applied to foliage can be potentially phytotoxic.Intensive research through the years showed that the various plant species differ considerably in the susceptibility of the applied H 2 S. Prolonged exposure to 0.03 L H 2 S L −1 air inhibited the biomass production of sensitive dicot species.This concentration is a realistic one for industrially and agriculturally polluted areas.Monocot species can tolerate up to 1.5 L H 2 S L −1 air without negative effects on plant biomass production.A probable explanation of the tolerance of these species to elevated H 2 S is that the meristem is sheltered by leaves and H 2 S can hardly penetrate the meristem [73,[102][103][104][105][106].
Fumigation of plants with H 2 S. The fumigation of plants with H 2 S has proven to be a powerful way to obtain insights into the regulation of sulfate uptake and assimilation.Upon H 2 S fumigation, shoot Cys and GSH contents increase significantly, indicating that absorbed H 2 S is metabolized with high affinity in these thiols.Sulfate-deprived plants may fully alleviate the development of S-deficiency symptoms by receiving foliar H 2 S.Many plant species have been tested and the rate of the foliar H 2 S uptake followed Michaelis-Menten kinetics.Kinetics is controlled by the rate of incorporation of H 2 S into Cys [107,108].
The regulation of sulfate metabolism in barley (Hordeum vulgare) seedlings exposed to atmospheric H 2 S in the presence and absence of a sulfate supply has been studied by Ausma and De Kok (2020) [106].Sulfate deprivation resulted in reduced shoot and root biomass production, decreased shoot and root total S contents, decreased sulfate content, and lower Cys, GSH, and soluble protein levels.On the other hand, it resulted in increased shoot and root molybdenum content, increased APS reductase (APR) activity and increased expression and activity of the root sulfate uptake transporters, and enhanced dry matter, nitrate and free amino acid contents [106].Barley could use the absorbed H 2 S by foliage as a S source for growth.In fact, barley switched S source, from rhizospheric sulfate to atmospheric H 2 S. The development of S-deficiency symptoms was alleviated in sulfate-deprived barley exposed to 0.6 µL L −1 atmospheric H 2 S. Fumigation of both sulfate-deprived and sulfate-sufficient plants with H 2 S downregulated APR activity, as well as the expression and activity of the sulfate uptake transporters.The sulfate utilization in barley seems to be controlled by signals originating in the shoot [106].
Donors of H 2 S. Donors of H 2 S were studied in recent years for their contribution as oxidative stress reducers.Such molecules seem to contribute to cellular signaling, and are post-translational modifiers.The same holds true for several derivative compounds of H 2 S such as polysulfides and polysulfanes.The H 2 S donor compounds are explored in the agricultural field towards possible applications in improving the productivity and quality of crops [73].To date, exogenous applications have been carried out using chemicals capable of delivering H 2 S; the standard chemicals are sodium hydrosulfide (NaHS) and inorganic sodium polysulfides (Na 2 S x ) such as Na 2 S 2 , Na 2 S 3 , and Na 2 S 4 [108].In aqueous solutions, the delivery of H 2 S by the above-mentioned polysulfides depends on the pH and the corresponding pKa.NaHS is a short-lived donor that does not mimic the slow continuous process of H 2 S generation in vivo.NaHS is the cheaper solution and has been sprayed directly on plants in a wide range of concentration [109].The foliar application of H 2 S donors to different plant species at different stages of development can alleviate damage caused by abiotic stress.Physiological features including post-harvest preservation of vegetables are enhanced [109][110][111][112].The positive results that have observed so far suggest that further basic research on the foliar application of H 2 S donors is reasonable and highly required.So far, it has been found that H 2 S mediates in signaling and in the increase in tolerance to different stresses including water deficit, salinity, high temperature, and increased concentrations of heavy elements (Cd, Cr, Cu, Al, As) [73,[113][114][115][116][117][118][119][120].H 2 S in higher plants may be part of a mechanism of response to environmental stress conditions [108].H 2 S and reactive sulfur species (RSS) interact with reactive oxygen (ROS) and reactive nitrogen (RNS) species, and relevant signaling molecules [111,121].The set of the reactive chemical species seems to form a cellular network of redox signals [122].
A possible signaling mechanism where H 2 S contributes is the formation of persulfides or hydrosulfides (RSSH) from the protein cysteine residues, as several enzymes, transcription factors, and channels seem to be involved in the mechanism(s) [123,124].H 2 S auto-oxidises in the presence of O 2 and SO 3 2− , S 2 O 3 2− , SO 4 2− and polysulfanes are formed [70].H 2 S is precursor of biological polysulfides [125].Polysulfanes, polysulfides (with S n > 2), and RSSH present a diversity of oxidation states between the S atoms, a trait that allows the molecules to present a dual character as oxidants and reducers.This diversity probably contributes to a multifunctionality character of the signaling of H 2 S and the derived compounds.H 2 S presents reactivity comparable to that of GSH against H 2 O 2 and free radicals.However, its value as a cellular antioxidant is limited because low concentration of H 2 S in vivo is found [73,126,127].The signaling properties of the endogenously generated H 2 S within the plant cells are mainly observed during persulfidation, a protein post-translational modification (PTM) that affects the redox-sensitive cysteine residues [108].
S-Containing Metabolites Applied to Foliage by Spraying
Several S-containing metabolites have been used as biostimulants, towards supporting the crop to overcome various stressful conditions.The metabolites Cys, Met, GSH), SMM, and LA are discussed in this section.For each of these compounds, we summarize their role(s) and contribution to crop growth, development, and health after foliar application.A summary of the case studies on the foliar applications of S-containing metabolites discussed in this section is given in Table 1.
Spraying with L-Cysteine
The nature and action of L-cysteine.Cys is a reductive amino acid with a thiol side chain, incorporated into proteins as residue with structural function.In parallel, it is precursor for biomolecules, such as GSH, vitamins, and defense compounds (glucosinolates and thionins) [123,124,128].The various oxidative conditions that put plants under stress are characterized by the production of various reactive species: oxygen, nitrogen, and sulfur ones (ROS, RNS, and RSS, respectively).Many of these reactive species either damage different types of macromolecules or serve as messengers.Due to the reactivity of the thiol group, some protein Cys residues are prone to oxidation by these molecules.The modification of Cys thiol groups contributes either to catalytic and regulatory functions, or to protective, redox signaling mechanisms.Reversible redox post-translational modifications (PTMs) of physiological relevance are disulfide bonds, S-nitroso thiols, sulfenic acids, S-glutathione adducts, thiosulfinates, S-persulfides and S-sulfenyl-amides.Coturier et al. (2013) [129] have reviewed the variety and the physiological roles of these PTMs, which are mostly controlled by two oxidoreductase families, the thioredoxins and the glutaredoxins.
Case studies of foliar application of cysteine.Cys is a natural compound that has been used as a component in foliar applications.One aspect of application is towards alleviating the adverse effect of salinity stress on different plant crops.Foliar application of Cys can ameliorate negative effects of salt stress on plants, and the following two examples are provided.
Perveen et al. ( 2018) [130] assessed the effect of Cys, on maize crop (Zea mays L., var.Malka and hybrid DTC) under salt stress, among various S-containing compounds.Maize plants were subjected to salt stress treatment (0 vs. 90 mM NaCl).Two weeks after the salt stress application, plants were sprayed with different levels of the following compounds, i.e., Cys (20 mM), FeSO 4 (10 mM), LiSO 4 (10 mM), and a mixture at a 1:1:2 ratio, against control (non-spray).Foliar application was performed twice at one-week interval after the salt stress treatment.The salinity stress significantly decreased the growth of both maize genotypes.The foliar application of Cys decreased the relative water content (in var.Malka) and the contents of free proline, glycine betaine, and flavonoid.Among the used compounds, a differential response was observed in increasing the growth parameters of both maize genotypes, where FeSO 4 , LiSO 4 and the mixture with a 1:1:2 ratio, were better than the foliar application of Cys alone [130].
In alleviating the effect of salinity stress on soybean crop, Sadak et al. (2020) [128] investigated the contribution of Cys (20 mg L −1 and 40 mg L −1 ) by carrying out experiments during two successive summer seasons at 30 and 45 days from sowing, in soybean plants grew under salinity conditions (3000 mg L −1 , and 6000 mg L −1 ).Salinity caused decreases in soybean growth parameters, the photosynthetic pigments, the N, P, K contents, along with the yield and yield components, and percentage of oil.Cys treatments improved the growth and yield of soybean plant either irrigated with tap water or saline water and presented a beneficial role in alleviating the adverse effect of salinity stress on soybean plant.It was suggested that the spraying solution of 40 mg L −1 of Cys is the most effective treatment [128].
Another line of work on the topic focused on the agronomic biofortification of broccoli, which serves as a functional food because it can accumulate Se, glucosinolates, the well-known bioactive amino-acid-derived secondary metabolites, and polyphenols.The chemical and physical properties of Se are very similar to those of S, and competition between sulfate and selenate for uptake and assimilation has been demonstrated.Towards an efficient agronomic biofortification of broccoli florets, Bouranis et al. 2023 [131] studied the working question whether it is possible to overcome this competition by exogenously applying Cys along with Se application.Broccoli plants were cultivated in a greenhouse; and at the beginning of floret growth, the application of Se (0.2 mM) was coupled with the application of Cys (0.05 mM).Foliar application of this combination and isodecyl alcohol ethoxylate (IAE) as adjuvant provided 298 µg Se per floret (which is an acceptable Se concentration); dry mass, organic S concentration and carotenoids were increased.When silicon ethoxylate (SiE) was used as adjuvant the outcome was different; Se concentration was 313 µg Se per floret and Sorg was decreased, whilst carotenoids, total chlorophylls, and glucosinolates were increased.
Cysteine transporters.Although plants possess many amino acid transporters, many of them capable of transporting Cys, some even with a high specificity [132,133], it is not clear whether Cys transport is less specific through general amino acid permeases.In general, amino acid transporters act on the specificity of substrates, but there is much information about the S-containing amino acid Cys transporters.For instance, Arabidopsis UMAMIT14 is a broad substrate transporter for amino acids, whereas not a specific transporter for Cys [134].In plants, protein synthesis occurs in three organelles and the intracellular transport of amino acids is essential [70].However, this may not be the case for Cys, the synthesis of which is also localized in cytosol, mitochondria, and plastids [135], although it has been found that the synthesis of Cys can be restricted to a single compartment without affecting survival and with only small effects on growth [70,136].Cys also undergoes intercellular transport, although its contribution to a total long-distance flow of S may not be very high [137].Seeds can assimilate sulfate and therefore they do not depend on transport of Cys [138].In C4 plants, S nutrition is dependent on intercellular Cys transport, since sulfate is reduced in the bundle sheath cells only and Cys is the transport metabolite from these cells to mesophyll and other cell types of the leaves [139].Thus, the molecular nature of Cys transport into the cells as well as in mitochondria and plastid membranes remains obscured.
Spraying with Glutathione
The nature and action of glutathione.GSH (γ-L glutamyl-L-cystinyl-glycine) is natural, bioactive compound present in most plant tissues and involved in diverse aspects of plant metabolism.It is a tripeptide consisting of glutamic acid (Glu)-cysteine (Cys)glycine (Gly).GSH is a powerful antioxidant, involved first and foremost in the removal of ROS.ROS increase when plants cope with different abiotic adverse conditions, and the increasing oxidative damage to nucleic acids, proteins, and lipids will affect various metabolic activities.The degree of damage depends on the balance among the production and removal of ROS through the scavenging mechanisms.Towards mitigating the oxidative damages, a complex defensive antioxidant system is in action, which includes antioxidant enzymes such as SOD, POX, APX, GR and non-enzymatic antioxidants such as GSH, ascorbate, and phenolic compounds.Some plants exhibit a variation in GSH, called homoglutathione, with the same biological properties [8,[140][141][142].
GSH is an important pool of reduced S, a central component of the glutathioneascorbate cycle, and a predominant non-protein thiol present in plant cells.It contributes to the regulation of many cell functions, including the synthesis and repair of DNA, the synthesis of proteins, activation, and the regulation of enzymes in plants.GSH is the precursor of phytochelatins (PCs), which are GSH-containing oligomers able to chelate heavy metals, thus contributing to the sequestration of heavy metal and transport to the vacuole.PCs are involved in flower development and plant defense signaling [143][144][145][146][147]. Thus, it is clear that GSH is crucial for biotic and abiotic stress management [146].
Case studies of foliar application of GSH.Various treatments with GSH have been performed as foliar applications, with target to increase plant tolerance to different abiotic adverse conditions.According to Akram et al. (2017) [148], when GSH was applied to the leaves of various varieties of salt-stressed soybean (Glycine max) plants, significant increments in plant growth and production parameters were observed in comparison to plants under salt-stressed conditions only as control.Compared with control, the number of seeds per plant increased depending on the genotype examined, whilst pods per plant and yield per plant, parameters that impact on crop yield, were significantly improved.
Genotypes categorized as susceptible to salt stress presented better responses when also treated with GSH [148].
According to Nakamura et al. (2019), foliar application of GSH to oilseed rape plants (Brassica napus) cultured hydroponically, affected the distribution and behavior of Zn [147].The treatment significantly increased the Zn content in shoots, the root-to-shoot Zn translocation ratio, the Zn concentration in the cytosol of root cells, and enhanced xylem loading with Zn.Following the foliar GSH treatment, the gene encoding pectin methylesterase was upregulated in roots and signals triggered in response to foliar-applied GSH increased Zn availability in roots and mobilized Zn from the root cell wall.Root-to-shoot translocation of Zn was activated and increased Zn accumulation in the shoot was found.These findings suggest that the foliar application of the reduced form of GSH improved the Zn mobilization and transport within the plant [147].
According to Ghoname et al. [146], foliar application with 50 or 100 mg L −1 of GSH vs. arginine, or tryptophan on hot pepper (Capsicum annuum L. cv.Albasso) resulted in significant increases in plant growth parameters and yield, an increase in the level of IAA, GA3, decrease in ABA content, accompanying by increases in ascorbic acid, anthocyanins, tannins, phenolic compounds, carbohydrate content, protein content and amino acids composition, all parameters of nutritive value.The authors concluded that, comparatively, the promoting effect of GSH and arginine treatments, especially at 100 mg L −1 , was more pronounced than that of tryptophan treatments [146].
The effect of the combined application of GSH and ascorbic acid, on growth, yield, and yield components of two cultivars (Sakha93 and Giza168) of wheat plant (Triticum aestivum L.) was studied by El-Awadi et al. ( 2014) [149].In this approach, the antioxidants were foliar applied twice at the two concentrations of 50 and 100 ppm.The first spray was applied 30 days after sowing and the second one at 15 days later (45 days after sowing).Applying both antioxidants at 100 ppm improved wheat growth and yield, accompanied by increase in yield and yield components of the two cultivars, along with increases in photosynthetic pigments, carbohydrate, total free amino acids, and protein contents [149].
Sadak et al. (2017) [150] evaluated the role of foliar treatment with GSH in enhancing the antioxidant defense system of chickpea plant under different levels of seawater salinity.Increasing concentrations (50, 100 and 150 mg L −1 ) of GSH were tested.Foliar application of GSH caused significant increases in the contents of osmo-protectants of chickpea plants.Compared with the plants irrigated with tap water, the examined levels of diluted seawater significantly increased H 2 O 2 and lipid peroxidation.The different concentrations of GSH resulted in significant decreases in H 2 O 2 contents and lipid peroxidation levels in the control and salinity-stressed plants.It was concluded that foliar application of GSH was effective in improving chickpea performance in several aspects by reducing H 2 O 2 free radical, enhancing compatible osmolytes and antioxidant enzyme activities, and providing membrane stability.Marked increases in the activities of the antioxidant enzymes ascorbate peroxidase, glutathione reductase, peroxidase, and superoxide dismutase were observed in plants treated with GSH at 100 mg L −1 either under normal irrigation or salinity-stressed conditions [150].
Rehman et al. (2021) [151] investigated the potential of GSH at the level of 1 mM in combination with moringa leaf extract (MLE; 3%) in wheat under salt stress.GSH served as an antioxidant and the extract as an organic biostimulant.The combination applied in sequence as seed priming and foliar application on wheat growth.The sequential application of MLE and GSH improved osmotic stress tolerance.The positive results were due to stabilized membrane integrity, decreased electrolyte leakage, and enhanced endogenous GSH and ascorbate levels [151].
Jung et al. (2019) [152] investigated the effects of exogenously applied GSH to the leaves of B. napus seedlings exposed to 10 µM Cd.Foliar GSH treatments took place at the concentrations of 50 (162.7 µM) or 100 mg kg −1 (325.4 µM) of.In this case study, 2 mL L −1 of a commercial surfactant (20% sodium lignosulfonate and 10% polyoxyethylene alkyl aryl ether) was incorporated in the foliar solution.The foliar application of GSH to Cd-stressed B. napus seedlings reduced Cd-induced ROS levels by increasing seedling AsA, GSH, and NADPH concentrations, thus enhancing the antioxidant-scavenging defenses and the redox regulation.The results demonstrated that GSH improved plant redox status by upregulating the AsA-GSH-NADPH cycle and reestablishing normal hormonal balance.Therefore, GSH can potentially be applied to Cd-polluted soil for phytoremediation purposes [153].
Glutathione transporters.GSH is present in all compartments, and its concentration is high, especially in the mitochondria [154].Moreover, GSH is subject to long-distance transport [126].The presence of GSH transporters in the plasma membrane has long been recognized, although the molecular nature of these carriers is still an open field for research.Some transporters of the oligopeptide transporter family can transport GSH.However, for the high flux of GSH within plant cells, their affinity and specificity were not as high as expected [155][156][157].As an alternative pathway for GSH transport, it has been proposed that the components of GSH are moved across the membrane, through the combination of its degradation, amino acid transport, and synthesis in the new compartment [158].The key enzyme in this scenario is gamma-glutamyl transferase (GGT).It is present in plants, and it is important for the recovery of apoplastic GSH [159].GGT is localized on the apoplast side of plasma membrane or in the tonoplast, and therefore it cannot be responsible for the intracellular GSH transport.
Spraying with Methionine
The nature and action of L-methionine.Met is an S-containing amino acid, which contributes to a diversity of physiological functions.Met is an essential amino acid in humans, and it could be safely added to food, except infant foods [160].It regulates transpiration, the photosynthetic rate, and protein synthesis; it maintains membrane stability and relative water content; it reduces ROS production, H 2 O 2 , and MDA contents; it enhances enzyme activities that protect plant cells from oxidative damage under water-deficit conditions [161,162].Thus, it is an effective regulator of plant growth and development under water deficit [163].
Case studies of foliar application of Met.Met has been used as a component in foliar applications towards alleviating the adverse effects of drought stress and salinity stress, and the following examples are presented.
The effects of Met, among other amino acids, were studied by El-Bauome et al. (2022) [160] on cauliflower plants (cv.Arasya) grown under well-irrigated and droughtstressed conditions.After transplantation, all plants were acclimated for a month by keeping them at 60-70% field capacity.The control group was sprayed with distilled water plus 0.05% (v/v) Tween-20 as a wetting agent, whilst the Met group was sprayed with Met (25 mg L −1 ) plus 0.05% (v/v) Tween-20.Foliar treatments were performed 5 times at 30, 45, 60, 75, and 90 days after transplanting; then the pots were left to grow for additional 15 days.Compared with the untreated plants, foliar application of Met significantly increased height, diameter, freshness, dry matter, leaf area, leaf chlorophyll content, leaf relative water content, vitamin C, proline, total soluble sugar, reducing sugar, and nonreducing sugar.On the other hand, polyphenol oxidase (PPO), peroxidase (POD), and phenylalanine ammonialyase (PAL) were significantly reduced.A similar trend was observed in glucosinolates, abscisic acid (ABA), malondialdehyde (MDA), and total phenols [160].
In another example provided by Maqsood et al. (2022) [164], two wheat genotypes were grown with 100% field capacity (FC), the control treatment, up to the three-leaf stage.The 25-day-old seedlings of two wheat genotypes (Galaxy-13 and Johar-16) were subjected to 40% FC, the water-deficit stress treatment, with and without foliar application of 4 mM Met.The foliar application of Met substantially improved growth, photosynthetic, and gas exchange attributes under water-deficit conditions in both genotypes.Under the stress conditions, the Met application improved K, Ca 2+ , and P contents, whilst the activities of SOD, POD, and CAT were further enhanced [164].
The impact of foliar-applied Met on growth and performance of okra was investigated by Zulqadar et al. (2015) [165].Different levels of Met were applied to okra (5, 10 and 20 mg L −1 ), 15 and 30 days after sowing.For foliar application, 0.1% Tween-20 was used as a wetting agent.It was concluded that foliar application of Met could be effective in inducing more flowering and promoting the growth and yield of okra.Foliar application of Met at the level of 10 mg L −1 had a significant effect on the growth, yield and physiological parameters of okra as compared to untreated control.It increased the number of flowers, number of fruits, root length, shoots fresh and dry weight, the photosynthetic rate, chlorophyll contents and fruit yield up to 77, 96, 71, 64, 65, 71, 60 and 64%, respectively [165].
The responses of four maize genotypes (FH1275, FH 936, FH 1231, and FH 1227) to the foliar application of two Met levels (5 and 10 mg L −1 ) under 80 mM NaCl stress were studied by Shahid et al. (2021) [166].Salinity was applied at four leaves stage and maintained at 80 mM gradually.After 7 days of salinity application, spray with Met was applied with four days difference.Yield parameters were examined at plant maturity, and the Met level of 10 mg L −1 showed better results as compared to 5 mg L −1 , both under saline and non-saline environments [166].The use of Met seems to be a cost-effective treatment.
Such foliar applications may include other (natural) compounds too as in the following examples.Almas et al. (2021) [167] studied physiological and yield attributes of tomato by foliar spray of Met, and/or L-phenylalanine (Phe) under saline stress (4, and 6 dS m −1 ).The tomato plants were sprayed with Met (0.01% and 0.02%), Phe (0.01% and 0.02%), and their combination at the vegetative growth stage.The foliar application of Met induced salt resistance and improved all the growth and yield parameters.The combination of Met plus Phe displayed higher carotenoid contents, total carbohydrates, total free amino acids, and proline contents compared with the only salt-treated plants.The combined application of both amino acids reduced the lipid peroxidation rate and electrolyte leakage [167].
Towards an efficient agronomic biofortification of broccoli florets, Bouranis et al. ( 2023) [131] (see Section 5.1 for Cys) studied the working question whether it is possible overcome the competition between sulfate and selenate by exogenously applying Met along with Se application.Broccoli plants were cultivated in a greenhouse and at the beginning of floret growth the application of Se (0.2 mM) was coupled with the application of Met (0.1 mM).Foliar application of this combination and IAE as adjuvant provided 156 µg Se per floret (which is an acceptable Se concentration); organic S concentration and carotenoids were increased.When SiE was used as adjuvant the outcome was different; Se concentration was 156 µg Se per floret and Sorg was decreased, whilst carotenoids, total chlorophylls, and glucosinolates were increased.The combinations of Met (0.1 mM) with Cys (0.05 mM), or with phenylalanine (0.25 mM) and tryptophane (0.05 mM) along with Se (0.2 mM), were also studied [131], as Met, phenylalanine and tryptophane are all precursors for the biosynthesis of glucosinolates.Again, the nature of the adjuvant differentiated the responses.When IAE was used, FM was decreased (a negative result from the commercial point of view), whilst Se was found within acceptable contents.Car and total chlorophylls were increased but the total glucosinolates.In the case of SiE as adjuvant, the organic S decreased, fresh mass was not affected, whilst Car, total chlorophylls and total glucosinolates increased.
Methionine transporters.Very little is known about methionine transporters.A complex demand for inter-and intracellular transfer is needed for Met, particularly when its derivatives are considered [70], and Met must be transported to all compartments with protein synthesis.Cytosolic Met synthase is involved in the regeneration of Met in the SAM cycle [61,159].As mentioned, the amino acid transporters act on the specificity of substrates, and the UMAMIT14 of Arabidopsis is a broad substrate transporter for amino acids, whereas not specifically for the transport of Met, as in the case of Cys [134].
Spraying with Alpha Lipoic Acid
The nature and action of lipoic acid.LA (6,8-dithiooctanoic acid) is a S-containing compound with powerful oxidizing properties in both of its forms, the reduced dihydrolipoic acid (DHLA) and the oxidized one (LA) [168][169][170][171].It is a coenzyme of several key enzymes involved in the regulation of the redox status of plants, and the energy metabolism in eukaryotes [170,171].It plays a significant role as a cofactor for pyruvate dehydrogenase and glycine decarboxylase, components of certain mitochondrial enzyme complexes [172].LA application provided salinity tolerance by stimulating antioxidant enzyme activities in plants [173,174], whilst under abiotic stresses, including salt stress, the oxidized and reduced forms of LA decreased in shoots of wheat and barley [175,176].
Case studies of foliar application of LA.LA has been used as component in foliar applications towards alleviating the adverse effects of various stresses on different crop plants.Yildiz et al. (2015) [174] investigated the effects of LA on NaCl toxicity, proteomic, biochemical, and physiological changes in the leaves of canola (Brassica napus L.) seedlings.The applied concentration of LA alleviated the toxic effects of salinity stress by decreasing MDA content and increasing growth parameters, cysteine content, and activities of CAT and POD.Out of 28 proteins that were differentially expressed, 21 proteins were successfully identified that significantly upregulated or downregulated.These proteins were related to photosynthesis, energy metabolism, protein folding and stabilization, signal transduction, and stress defense.The authors concluded that foliar application with LA is an effective application for improving growth of canola under salinity stress [174].Elkelish et al. (2021) [177] studied a combination of LA plus Cys in wheat, i.e., the influence of LA in a grain dipping pre-cultivation treatment, in combination with Cys as a foliar application, under well-watered or deficit irrigation.The authors concluded that applied LA at 0.02 mM as seed soaking treatment, combined with Cys at 50 ppm as a foliar application could be considered as a successful application in wheat cultivation under water-deficit conditions, by providing physiological tolerance and restoring yield attributes in wheat [177].
Transporters of LA.As regards the transport of LA and corresponding transporters, to our knowledge, there is no such information available.
Spraying with S-Methyl Methionine
The nature and action of S-methylmethionine.SMM is a non-proteinogenic amino acid synthesized from Met and S-adenosylmethionine (SAM or AdoMet), and the reaction is catalyzed by Met S-methyltransferase (MMT).SMM participates in methylation processes within the cell and plays an important role in the transportation and storage of sulfur [61].It serves as methyl donor for Met synthesis from homocysteine, catalyzed by the homocysteine S-methyltransferase (HMT).MMT and HMT together have been proposed to constitute an SMM cycle that protects the free Met pool from depletion by an overshoot in SAM synthesis.During the Met cycle, SMM can revert to Met through a transmethylation reaction involving homocysteine [178].The SMM cycle operates throughout the plant [179].SMM is produced by all angiosperms and is involved in their S metabolism [180,181].It contributes to the regulation of the levels of both Met and SAM.Plants lack the negative feedback loops that regulate SAM pool size in other eukaryotes.The SMM cycle may be the main mechanism whereby plants achieve short-term control of the SAM level [179].
Apart from having an important role in the S metabolism, SMM is involved directly or indirectly in the stress and disease tolerance of plants.The role of SMM in the biosynthesis of sulfopropionates (that serve as osmoprotectants) and polyamines is valuable for plant resistance [180].It can moderate the damaging effects of various stressors by enhancing the production of dimethyl sulfopropionate, which acts as an osmo-and cryoprotectant [182], by increasing the biosynthesis of polyamines, and by regulating ethylene production [178,183,184].SMM is highly effective in protecting against cold stress, it stimulates the phenylpropanoid pathway, it increases the content of phenol derivatives and anthocyanins, and protects the photosynthetic apparatus [185,186].
Case studies of foliar application of SMM.Trials with foliar application of SMM are rare.As an example, provided by Fodorpataki et al. (2021) [187], the effect of foliar application of SMM has been investigated in canola (Brassica napus L. cv.Cindi) plants exposed to moderate or severe salt stress for different periods.Canola is a moderately salt tolerant plant, but high salinity inhibits germination of seeds, vegetative growth of young plantlets, and reduces biomass production.After two weeks of development, plants were sprayed in leaves and stem with an aqueous solution of 1 mM SMM.The applied level of SMM alleviated the reduction in the net photosynthetic rate, enhanced the water use efficiency and contributed to the reduction in oxidative membrane damage in fully developed young leaves [188].
SMM transporters.The importance of specific transporters for SMM cycling has been highlighted [50].The consequences of enhanced phloem loading capacity of SMM has been described by Tan et al. (2010) [65].Enhanced phloem loading of SMM was mediated by the yeast MMP1 gene that produces an SMM transporter when it was targeted to the phloem and the seeds in pea plants.Over-expression of this gene increased SMM content in phloem exudates; however, SMM did not accumulate in roots.Instead, the expressions of SULTR and APR increased.The downregulation of APR and other genes of the sulfate reduction pathway in leaves corresponded to higher SMM contents.This might function as a signal to reduce sulfate assimilation.Shoot biomass of transgenic pea plants increased, along with the soluble and total seed N content.This study shows that manipulation of long-distance transport can influence whole plant physiology.It was suggested that enhanced xylem loading could be responsible for no accumulation of SMM in roots of these mutants [65].
A summary of the case studies on the foliar applications of S-containing metabolites discussed in this section is given in Table 1.
S-Containing Non-Metabolites Applied as Foliar Sprays
In this section, we discuss the foliar application of thiourea, the lignosulfonates, the usefulness of dimethyl sulfoxide, and the S-containing adjuvants.For the reader to receive an integrative picture, the S-containing agrochemicals that have been used so far are mentioned.
Spraying with Thiourea
The nature and the action of thiourea.The plant growth regulators (PGRs) are chemical compounds that modulate the responses of plants under biotic and abiotic stresses at the cellular, tissue, and/or organ levels.Thiourea (or thiocarbamide; TU) is a synthetic PGR-containing nitrogen as -NH2 (36%,) and sulfur as -SH (42%).TU has three functional groups, the amino, imino, and thiol ones, each with biological roles.It is characterized by high-water solubility and quick absorption in living tissues and has gained wide attention for its role in plant stress tolerance, where it seems that it modulates several of the involved mechanisms [188,189].The application of TU modulates various physiological responses and mechanisms during development.It is involved in leaf gas exchange, plant water relations, photosynthesis, nutrient assimilation, and enhances the source-to-sink relationship resulting in increased crop yield.TU acts as a thiol-based scavenger of ROS.At the biochemical level, it is involved in antioxidant defense systems, nitrogen and proline metabolism, improves the metabolism of sugars, and protein biosynthesis.At the molecular level and regardless of the applied stress, the application of TU modulates the pattern of gene expression.It upregulates the expression of genes involved in encoding antioxidant enzymes, ROS-activated ion channels, and the regulation of redox state.Also, genes involved in calcium signaling, aquaporins and osmotic adjustment, metabolite biosynthesis, and hormonal regulation.Moreover, it is involved in the post-transcriptional regulation to enhance the expression of defense-related genes by the synchronization of microRNAs and hormones.Signaling of gene expression is a likely mechanism induced by TU, especially in ABA and calcium signaling events [26,[190][191][192][193][194][195][196][197][198][199].Therefore, TU has been increasingly used to improve plant growth and productivity under normal and stressful conditions.
Case studies of thiourea in foliar applications.Foliar application of TU seems to be more effective under environmental stress than under normal conditions and is more effective in the tissues where it is applied, in improving plant growth and development under heat stress, drought, salinity, and heavy metal toxicity, to a differential extent [190].
Wheat under drought stress.The application of TU resulted in a significant improvement in the growth and photosynthetic efficiency of wheat crop, by increasing vegetative growth, protein content, and yield in wheat under drought stress.TU seems to be an efficient osmo-protectant towards shielding the plants from different abiotic stresses, including drought and heat stress.Tolerance induced by TU is attributed to greater nutrient uptake coupled with the production of osmolytes, improved metabolic processes, and antioxidant defense mechanisms [200][201][202][203].
Heat tolerance in canola.The improvement of heat tolerance in canola with the foliar application of S has been studied by Waraich et al. (2022) [204] with TU as a S source.The design of the experiment included two varieties of canola (Hyola-401 and 45S42), two levels of foliar TU treatments (0 ppm vs. 500 ppm, and two temperature levels (18 C vs. 28 C).Heat stress was imposed at the stage of anthesis, and TU was applied at the same stage.The photosynthetic rate, stomatal conductance and intercellular CO 2 concentration were improved in TU treatments under the situation, while transpiration rate was decreased with the foliar application of TU.Yield and yield components increased with the foliar application of TU at the level of 500 ppm.Among the genotypes, Hyola 401 performed better following the foliar application of TU under heat stress conditions [204].
Nutritional-quality-related traits of bread wheat.The effect of foliar-applied TU on the growth, yield, and nutritional-quality-related traits of bread wheat has been investigated by Sher et al. (2021) [205] on sandy loam soils in semiarid regions.The treatments of TU levels (500 mg L −1 , and 1000 mg L −1 ) were applied on two diverse wheat cultivars (Gandam-1 and Galaxy-2013) at tillering, booting, and heading.TU treatments significantly affected the growth, nutritional quality, and morphological traits, and the interaction of the two factors was significant.The application of TU improved the productivity and nutritional quality in both cultivars.Galaxy-2013 performed best at 1000 mg L −1 TU application at the heading stage for both productivity and nutritional-quality related traits.
Late sowing of wheat.The potential of TU for enhancing the performance of late sown wheat has been studied by Zain et al. (2017) [202].Wheat (cv.Galaxy-2013) was sown in mid-December, and two foliar treatments of TU (300 and 600 mg L −1 ) were applied at tillering, jointing, and booting, under water spray and no spray as double control.The foliar application of TU at the tillering stage and level of 300 mg L −1 significantly enhanced wheat growth, the number of productive tillers, number of grains per spike, 1000-grain weight and grain yield.The foliar application of TU reduced the harmful effects on late sown wheat [202].
TU and boron toxicity.The relation of TU and nitric oxide (NO) in mitigating the boron toxicity (BT) has been assessed by Kaya et al. (2019) [199] in bread wheat (Triticum aestivum L. cv.Pandas) and durum wheat (Triticum durum cv.Altıntoprak 98) plants.Plants were grown under 0.05 mM B (control) and 0.2 mM B (BT treatment) supplied to nutrient solution for 4 weeks after germination.Then, foliar application of TU at the concentrations of 200 mg L −1 or 400 mg L −1 was applied once a week during the period of stress.TU, on the one hand, improved the plant growth, led to a further increase in NO in the leaves, and enhanced enzyme activities but, on the other hand, reduced the contents of soluble sugars, soluble protein, and phenols [199].
Spraying with Lignosulfonates
The nature and action of lignosulfonates.Lignosulfonates (LS) are by-products of the pulp and paper industry, generated by breaking the lignin network during the sulfite pulping process of wood.LS are randomly branched polyelectrolytes, and the watersolubility of them is ensured by the abundance of sulfonate and carboxylic acid groups, the content of which is variable.The properties of LS can be tailored by controlling the production parameters, fractionation, and subsequent modification.Agriculture is among the fields of application of this technical lignin material.The use of LS in agriculture and strategies for the implementation of LS in soil have been discussed by Wurzer et al. (2022) [206,207].As regards the foliar application of LS, the following are documenting the usefulness of LS in this strategy.
Case studies for foliar application of LS: (i) Iron-lignosulfonates (Fe-LS).The environmental concerns regarding the use of synthetic chelates to overcome iron chlorosis have increased and enforced the search for new and environmentally friendly ligands, including the LS.The LS complexes are less costly per unit of micronutrient but usually less effective than the synthetic chelates.However, the efficacy of the LS products is variable [91,93,208,209].The formation of a complex between Fe and LS involves different coordination sites.The target here is the efficiency with which the complex will provide iron under various agronomic conditions.Fe coordination environment and speciation have been studied in various LS complexes, in relation to the Fe-complexing capacities, and the chemical characteristics of the different products.According to Carrasco et al. (2012) [93], when Fe(II) is used to prepare the Fe-LS product, the complexes form weak adducts, and are sensitive to oxidation at neutral or alkaline pH.In contrast, both Fe(II) and Fe(III) are found when Fe(III) is used to form the complexes.Reductive sugars are normally present in LS and the content of these sugars favors a higher content of Fe(II), even in the case where these complexes prepared using Fe(III).It seems that the strong Fe(III)-LS complexes are preferred for application to the leaf [93].
Rodriguez-Lucena et al. (2009) [92] tested the ability of Fe-LS complexes to support plants with Fe through foliar application.Spraying with Fe(III)-LS vs. Fe(III)-EDTA to Fe-deficient cucumber plants showed that uptake and reduction rates of Fe between these complexes were similar.In the case of Fe-deficient tomato leaves, when Fe(III)-LS was used, a similar reduction rate compared with Fe(III)-EDTA was observed, along with a lower uptake rate.Therefore, foliar-applied Fe-LS can be used as an alternative to synthetic chelates, as a valid, cheap, and eco-compatible in dealing with Fe chlorosis.Focusing on the physico-chemical characteristics and the efficacy of different LS, Rodriquez-Lucena et al. (2011) [209] compared eucalyptus LS against spruce LS, a hardwood LS against a softwood one, respectively.All tested LS presented a good ability to complex Fe, whilst the spruce LS was the only one capable to maintain significant amounts of soluble Fe above pH 8.The efficacy of foliar-applied Fe-LS in chlorotic cucumber (Cucumis sativus L. cv Ashley) plants was tested in comparison with FeSO 4 and Fe(III)-EDTA.The Fe content of plants sprayed with Fe-LS was very low compared with the EDTA treatment, but not the biomass and the rates of re-greening.Modifications in the eucalyptus LS improved the efficacy for Fe chlorosis recovery to levels like those found for the spruce LS.The two applications of the LS were recommended [209].
(ii) Foliar application of Zn-lignosulfonates.Zn uptake and localization at the leaf cellular level have been studied by Minnoci et al. (2018) [210], in green bean plants (Phaseolus vulgaris L., cv.Linera) after foliar application of a Zn-lignosulfonate (Zn-LS) complex on the oldest leaves at 6 h, 4 days and 30 days, in comparison with a Zn-EDTA chelate.Significant differences in Zn penetration inside the leaves were observed.The Zn-LS complex showed the fastest absorption after 6 h, along with significant differences in Zn localization inside the leaf tissues, and the mesophyll presented the highest absorption of Zn-LS.Zn was detected at the highest concentration in the mesophyll of leaves treated with Zn-LS also at the day 4 and day 30, whereas in those treated with Zn-EDTA it was in the lower epidermis.The treatment with Zn-LS caused an increase in the total thickness of the leaf and of the spongy mesophyll [210].
The Contribution of Dimethyl Sulfoxide as Additive in Spraying Solutions
The nature and action of DMSO.Dimethyl sulfoxide (DMSO) seems to be of specific interest for agronomical use, discussed by Kumar et al. (1976) [211].DMSO is an excellent solvent, and it can easily solvate the cations because of the negatively charged oxygen atom in its molecule.It easily breaks the hydrophobic non-covalent bonds in the membranes and increases cell permeability.This property contributes to the increase in ion penetration into plant tissues.DMSO forms hydrogen bonds and it can affect (enhance or suppress) enzymatic activities [212,213].The following case studies highlight the effects of DMSO in various foliar applications.
Case studies of foliar application of DMSO: (i) DMSO and Zn.Kumar et al. (1976) [211] studied the effect of foliar application of DMSO on rice plants (Oryza sativa L., variety Jaya) grown in a Zn-deficient soil.Application of ZnCl 2 took place in the soil at the rates of 10 and 20 ppm, whilst foliar applications of DMSO took place at the rates of 0.001%, 0.01%, and 0.1%.Zn availability of the soil was increased by all DMSO treatments.Control plants showed very low dry matter.Applications of 0.001% and 0.01% of DMSO slightly increased dry weights of all plant parts at 45 days after transplanting.Grain yield was significantly increased by all DMSO treatments.Control plants showed very low chlorophyll contents.Chlorophyll content was stimulated by these doses of foliar application of DMSO, whilst the chlorophyll:carotenoid ratio was increased by all DMSO and Zn treatments.Control plants showed significantly lower activities of carbonic anhydrase and tryptophan synthetase.Carbonic anhydrase activity was significantly increased by the 0.001%, and 0.01% of the DMSO treatments, whilst tryptophan synthetase activity was stimulated only by the lowest dose of foliar (0.001%) applications [211].
(ii) DMSO and Fe.Foliar applications of various Fe-containing compounds-in most cases, inorganic Fe(II) and Fe(III) compounds-have been tested, usually with limited success.Criteria for assessing treatment success were regreening of yellow leaves, increases in Fe concentrations, as well as in chlorophyll and/or in leaves.This literature has been reviewed by Fernández and Ebert (2005) [11].In an example provided by Shoenherr et al. (2005) [214], the penetration of FeSO 4 at pH 3.9, 4.3, and 4.7 into maize leaves at 48-60% humidity has been investigated, where 0.5%, or 1% DMSO was included as an adjuvant and Tween 20 (0.02%) was used as the wetting agent.Very slow penetration was observed, and DMSO increased rates of the penetration of FeSO 4 .According to Singh and Kahn 2012 [215], the application of various water-soluble sources of Fe combined with DMSO markedly improved the Fe content, presenting rapid leaf greening and higher leaf chlorophyll contents of Fe chlorotic orange and grapefruit trees than without DMSO.
(iii) DMSO and phytohormones.The application of formulations with biostimulant action boost vegetable yields under typical and different biotic stress circumstances, whilst they are ecologically and user friendly.The production of highly stable emulsifiable concentrate (EC) formulations is a bottleneck of the uses of plant growth regulators, due to their hydrophobic character and huge molecular volumes.Ruidas et al. (2022) [216] studied phytohormone formulations of gibberellic acid with 0.25% EC, and brassinolide with 0.15% EC, using a variety of solvents, including DMSO and surfactants (calcium alkylbenzene sulfonate, or nonylphenol ethoxylate-13).Gibberellic acid boosted brinjal yields by 37.5%, while brassinolide raised onion yields by 33.9%.Brassinolide and gibberellin were both interdependent in action [216].
(iv) DMSO and agrochemicals.DMSO has been used as a systemic carrier of growth regulators, herbicides, and pesticides in plant tissues, by enhancing of the penetration of the applied substances into the tissues [217].In a trial of foliar application of low concentrations of Cycocel [(2-chloroethyl) trimethyl ammonium chloride] to pea plants at the 5th node stage, the internode length, plant height, fresh plant weight, fresh and dry pea weight and total dry matter were increased.In combination with DMSO, Maurer et al. (1969) [218] found that pea plants could be sprayed safely with a 5% DMSO solution, whilst a 10% solution caused plant injury.In separate experiments at two locations with different climates and soils, DMSO was tested in field trials as a carrier for Cycocel.It was applied at 3 concentrations when pea plants were in the 5-6 node stage, with and without a 5% solution of DMSO.The effects of DMSO and Cycocel were additive.A surfactant [polyoxyethylene (20) sorbitan monolaurate] was included in all treatments [217].
A summary of the case studies on the foliar applications of S-containing non-metabolites discussed in this section is given in Table 2.
S-Containing Spray Adjuvants
Function of adjuvants.The foliar-applied compounds must penetrate through the epidermis.The term adjuvant or surfactant (meaning surface-active agent) characterizes any compound added into the spray solution, towards improving its performance and effectiveness for enhanced penetration.Such compounds reduce surface tension, alter the energy relationships at interfaces, and adjust themselves as interfaces as they contain both hydrophobic and hydrophilic group within their molecule [12,219].
Classification of S-containing adjuvants.The classification of the surfactants includes non-ionic, anionic, cationic, or molecules with ampholytic part, based on the presence and the nature of the electrical charge, or the absence of ionization on the hydrophilic portion of the molecule.The non-ionic surfactants are active depressants of surface-tension.Chemically they are inert due to lack of ionization and possess no charge groups in their heads.The hydrophobic group is associated with non-ionized hydrophilic groups as polymerized esters of polyether alcohols ethylene oxide, or polyhydric alcohols [220,221].The heads of ionic surfactants carry net charge.The hydrophilic portion of an ionic surfactant can be either anionic (if the charge is negative) or cationic (if the charge is positive).Such surfactants are of limited relevance in agriculture since most nutrients are delivered as ionized compounds, which may interact and bind to the ionic surfactant molecules, thus altering their surface-active performance [1].
S-containing surfactants are anionic surfactants containing one or more functional groups.These groups become ionized in solution and generate the negatively charged organic ions that are responsible for lowering the surface tension.This class of surfactants includes alkyl-sulphates, alkyl-polyether sulfates, as well as paraffin-, olefin-and alkylbenzene-sulfonates and sulfate esters.The anionic S-containing surfactants may be sulfonates or sulfates [215].The surfactants of the sulfonate class include docusates (dioctyl sodium sulfosuccinate), sulfonate fluorosurfactants (perfluorooctanesulfonate; perfluorobutanesulfonate), and alkyl benzene sulphonates.The surfactants of the sulfate class include alkyl sulfates (ammonium lauryl sulfate; sodium lauryl sulfate; sodium dodecyl sulfate), and alkyl ether sulfates (sodium laureth sulfate; sodium myreth sulfate).The sulfate ester groups (C-O-S) attaching the hydrophilic head to the surfactant are easily hydrolyzed to the corresponding alcohol and sulfate ion by dilute acids, whilst the stronger C-S bond of sulfonate groups is much more stable and will be broken only under extreme chemical conditions [221].
The surfactant that contains a head with two oppositely charged groups is the zwitterionic one.Such surfactants are compounds with a hydrophobic part, consisting of alkyl-substituted benzene, naphthalene or paraffinic chain ring and a hydrophilic group with a negatively charged carboxyl, sulfate, sulfonate, or phosphate group.This class of surfactants mainly include alcohols and/or fatty acids, which improve spreading, sticking and uptake of the sprayed materials due to lower surface tension.Cationic quaternary ammonium, arsonium, iodonium, phosphonium or sulfonium compounds have similar hydrophobic groups as in anionic surfactants.They can also link with positively charged hydrophilic group.The zwitterionic S-containing surfactants are either (i) sulfonates [CHAPS The ampholytic (or amphiphilic) surfactants present similar molecular arrangement as hydrophilic groups, with the capacity to become cationic in an acidic medium and anionic in a basic medium.The lack of ionization renders the amphiphilic surfactants inert, and proper for application in biological systems since they work as surface-tension depressants [222].
Lignosulfonates are bio-based surfactants.It is very interesting that LS did not require surfactants for their application, because LS are themselves bio-based surfactants.LS present amphiphilic nature.In foliar applications, they did not burn the leaves, and they present a stimulating effect on the vegetative growth of the plants [208].The physicochemical behavior of lignosulfonates is affected by the monolignol composition, the distribution of the molecular weight, as well as the chemical modification.On the other hand, hydrophobicity is the indicator that relates composition and behavior of LS.The function and performance of LS are determined by their behavior in aqueous solution at surfaces and interfaces.In aqueous solution, several parameters can affect LS conformation, the colloidal state, and adsorption at surfaces or interfaces.These parameters include pH, temperature, concentration of other electrolytes, and the presence of organic solvents.These parameters may also affect the adsorption behavior of LS [207].
S-Containing Agrochemicals
Sulfur and plant health.Nutrient-induced resistance (NIR) is the contribution of the targeted nutrition in the protection of plants against pests and diseases (Bloem et al. 2005) [223].Because of its complexity and the availability of effective pesticides, research in the field of NIR mechanisms has been poor.However, the practical significance of NIR is not of secondary importance.Mechanisms of disease control with nutrition have been discussed by Huber and Haneklaus (2007) [224].During the 1990s, when clean air legislation came into force, the S deficiency developed into a widespread nutrient disorder.Since then, S was investigated with respect to various aspects, plant nutrition and plant health included.Understanding the mechanisms of NIR contributes to maintaining plant health, and to minimizing the input of pesticides in the conventional systems.S-containing metabolites that contribute to effective pathogen resistance are volatile S compounds, GSH, glucosinolates, phytoalexins, S-rich proteins, and the formation of elemental S. Bloem et al. (2005) [223] summarized the knowledge up to that point of time as regards the relationship of these metabolites to pathogenesis and the influence of S nutritional status on them.The concept of S-induced-resistance (SIR) is still developing, and the target of this research is to identifying metabolites, enzymes, and reactions, potentially activated by the S metabolism to combat pathogens.The S status of the crop affects various plant features, including the release of gaseous S compounds.These features influence the desirability of a crop for a variety of different crop-related organisms.Bloem et al. (2015) [225] summarized the progress of this knowledge that connects the effect of the S nutritional status of agricultural crops to their health status.
S-containing agrochemicals.On the other hand, S-containing xenobiotics play important roles in the control of weeds, insects, and plant diseases.Lamberth (2004) [226] has provided an overview of the significance of S-containing compounds in crop protection with chemicals and presented the main classes of organic agrochemical S-compounds.These xenobiotics present a broad range of modes of action.In some of them, the S atom plays an important role in the transformation of propesticides into active substances.On the other hand, several natural products bearing S atoms display distinctive pesticidal properties, with special role of S in propesticide action.The S-carrying pesticides are mainly in fungicides, herbicides, and insecticides [226].
The introduction of S into a biologically active molecule can dramatically modify its biological activity.Parameters that are affected include (i) blocking metabolic deactivation, (ii) binding to a target receptor or enzyme, and/or (iii) transporting the bioactive molecule from the point of application to the target site.Thus, the introduction of S atoms into an active ingredient is an important tool towards the modulation of the properties of novel chemical compounds with new modes of action, tailored for crop protection.Most S-containing pesticides undergo metabolic activation by reactions involving or initiated by oxidation.The introduction of a S-containing moiety may enhance the selectivity.Metabolic conversion of sulfides to sulfoxides and sulfones alters the reactivity, solubility, and ease of translocation of systemic pesticides.Recently, Devendar and Yang (2017) [227] highlighted the interest in active S-containing compounds, providing a comprehensive overview of selected leading S-containing pesticidal chemical families, including sulfonylureas, sulfonamides, sulfur-containing heterocyclics, thioureas, sulfides, sulfones, sulfoxides, and sulfoximines.
Potential Contribution of ABC-Transporters and Glutathione S-Transferases to the Transport of the S-Containing Compounds
It seems that ABC-transporters and glutathione S-transferases do have roles to the transport of the S-containing compounds, as both contribute to handling the penetration of xenobiotics within the various plant tissues.ABC transporters are involved in plant responses to different types of stress, as they function as transporters of xenobiotics, secondary metabolites, stress hormones or regulators of stress response genes [228].ABC proteins were originally identified as transporters involved in the vacuolar deposition, the final detoxification process.Since then, it has been shown that the functions of this class of transporters extend far beyond detoxification.They are involved in diverse processes including surface lipid deposition, pathogen response, phytate accumulation in seeds, and transport of the phytohormones auxin and abscisic acid and plant hormones that regulate the overall development of plants, as well as transport of secondary metabolites, coating materials, and supportive materials.Therefore, ABC transporters contribute to organ growth, plant nutrition, plant development, response to abiotic stress, and the interaction of the plant with its environment [229,230].Of these processes, we also highlight the contribution to the delivery of the required materials that construct the foliage surface.
On the other hand, glutathione S-transferases (GSTs; E.C.2.5.1.18)belong to a superfamily of multifunctional proteins acting as detoxifying enzymes, among many other functions.GSTs are versatile enzymes catalyzing a wide range of reactions involving the conjugation of GSH to electrophilic compounds, that is to an electrophilic center contained within a small molecule acceptor, to form more soluble peptide derivatives.Hernandez Estevez and Rodríguez Hernández (2020) [231] have summarized cases showing that GSTs are involved in diverse aspects of biotic and abiotic stresses, as well as regulatory functions, and intracellular events such as, herbicide detoxification, signal transduction, plant protection against ozone damages, heavy metals, xenobiotics, transporting anthocyanins, hydroperoxide detoxification, auxin homeostasis, tyrosine metabolism, the regulation of apoptosis, primary, and secondary metabolisms, stress metabolism, herbicide detoxification and plant protection against ozone damages, and microbes' infections.Among the functions of GSTs, is the removal of ROS, including superoxide radicals, hydroxyl radicals, alkoxy radicals, hydrogen peroxide and singlet oxygen.The above-mentioned list of actions supports the idea that perhaps GSTs are involved in the performance of foliar applications of S-containing compounds, thus rendering GSH as an important player and its foliar application of high importance.
Conclusions and Prospects
The S-containing compounds hold a distinguished place in the area of foliar applications due to the various mechanisms they contribute and affect within the plant.The array of the examined case studies clearly shows that the diversity in the way foliar application has been used makes it difficult to compare between the various experiments.The timing of application and the frequency of application are crucial factors.In the agricultural practice, the less the better; therefore, the success of the application is judged by the combination of (i) application once at the proper time, (ii) better yield, and (iii) better production, and (iv) affordable cost of the spray product, given the weathering at the time of application.A detailed understanding of the mechanisms and the regulation of transport is needed, especially in the interactions between the penetrating S-compounds and the existing compounds in the apoplastic space; also, on the transporters in action and their variety given the tissue.The combination of the various components of the spray solution and the dose are of critical importance in such applications.Foliar application of S-compounds in various combinations is an emerging area of agricultural usefulness.The S-containing compounds are not applied alone in spray solutions; and in the agricultural practice, the need for proper combination is of prime importance.Last but not least is the dose.The tables clearly show that the applied dose was based on preliminary experiments under the circumstances, but more work is needed on this aspect.The agronomic situation the product is designed to alleviate broadens the area of potential applications of the products enriched with S-containing compounds.
Figure 1 .
Figure 1.Foliar application of S-containing compounds.These compounds are S-gases [sulfur dioxide (SO2), hydrogen sulfide (H2S), carbon disulfide (CS2), carbonyl sulfide (COS), dimethyl sulfide (DMS)], fertilizers containing sulfate, S-containing metabolites, and S-containing non-metabolites.For the integrity of the approach, in the last group, the S-containing adjuvants and agrochemicals have been added.The arrows indicate the complex journey of the S-containing solute to the action point within the cell which includes several structures to cross, and the characteristics of each one of them is summarized in the text.Towards understanding and handling the effectiveness of such foliar applications, the nature and mode of action of these compounds, along with some characteristic case studies, are discussed.
Figure 1 .
Figure 1.Foliar application of S-containing compounds.These compounds are S-gases [sulfur dioxide (SO 2 ), hydrogen sulfide (H 2 S), carbon disulfide (CS 2 ), carbonyl sulfide (COS), dimethyl sulfide (DMS)], fertilizers containing sulfate, S-containing metabolites, and S-containing non-metabolites.For the integrity of the approach, in the last group, the S-containing adjuvants and agrochemicals have been added.The arrows indicate the complex journey of the S-containing solute to the action point within the cell which includes several structures to cross, and the characteristics of each one of them is summarized in the text.Towards understanding and handling the effectiveness of such foliar applications, the nature and mode of action of these compounds, along with some characteristic case studies, are discussed.
Table 2 .
Summary of the case studies on the foliar applications of S-containing non-metabolites discussed in Section 6. TU: thiourea and DMSO: dimethyl sulfoxide.
|
2023-11-10T16:20:53.211Z
|
2023-11-01T00:00:00.000
|
{
"year": 2023,
"sha1": "54e901d7f024f6885ce337a3a49ee4d9b91cde20",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2223-7747/12/22/3794/pdf?version=1699348539",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c7b7b0a59946aa95d590257a2e13877035f1be8",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14525696
|
pes2o/s2orc
|
v3-fos-license
|
Annals of Clinical Microbiology and Antimicrobials Open Access Infection with Anaplasma Phagocytophilum in a Seronegative Patient in Sicily, Italy: Case Report
Human granulocytic anaplasmosisAnaplasma phagocytophilum16S rDNA sequence Abstract Background: Anaplasma phagocytophilum causes human granulocytic anaplasmosis (HGA) in humans, which has been recognized as an emerging tick-borne disease in the United States and Europe. Although about 65 cases of HGA have been reported in Europe, some of them do not fulfill the criteria for confirmed HGA. Confirmation of HGA requires A. phagocytophilum isolation from blood, and/or identification of morulae in granulocytes and/or positive PCR results with subsequent sequencing of the amplicons to demonstrate specific rickettsial DNA. Seroconversion or at least fourfold increase in antibody titers to A. phagocytophilum has been used as criteria for confirmed HGA also.
Background
Anaplasma phagocytophilum (Rickettsiales: Anaplasmata-ceae) causes human granulocytic anaplasmosis (HGA) in humans, which has been recognized as an emerging tick-borne disease in the United States and Europe [1,2]. The disease was first described in the United States in 1994 [3,4] and in Slovenia in 1997 [5]. About 65 cases have been reported in Europe although some of them do not fulfill the criteria for confirmed or probable HGA [2]. Human infection with A. phagocytophilum could be confirmed by identification of morulae in granulocytes, positive PCR results using whole blood as a substrate, and/or isolation of A. phagocytophilum from the blood [2]. Serological tests, particularly indirect immunofluorescence assay (IFA), are commonly used but are often negative during the initial phase of the disease. However, high and stable antibody titers to A. phagocytophilum are usually found in patients with probable HGA who show clinical signs and symptoms that most likely are not the result of a recent infection with A. phagocytophilum [2].
Although the seroprevalence of HGA in central and southern Italy is high in workers at risk for tick bites [6], only two cases of HGA have been reported in Italy [7]. However, the analysis of A. phagocytophilum DNA was not performed in these cases. The objective of this study was to characterize the 16S rDNA sequence of a strain of A. phagocytophilum obtained from a seronegative patient with confirmed HGA in Sicily, Italy.
Case Presentation
Patient, materials and methods On May 8, 2004, a 51-year-old man living in Barcellona Pozzo di Gotto, Messina, Sicily, Italy was admitted to the hospital with a 3-month fever (37.2°C), arthralgia, myalgia, malaise, pallor, stiffnes of hands and knee, nausea, low appetite and weight loss. The patient did not recall a tick bite but he visited hunting areas 15 days before the fever started and kept exotic birds (golden pheasants, partridges and guinea fowls) at his house for two years. The patient showed abnormal liver (462 mU/ml alkaline phosphatase, normal range (nr) = 98-280; 1.20 mg/dl total bilirubin, nr = 0.16-1.10; 0.35 mg/dl direct bilirubin, nr = 0.0-0.2) and renal (1.5 mg/dl creatinine, nr = 0.5-1.2) functions and low hemoglobin (9.9 g/dl; nr = 12-18). Pulmonary, cardiac and immunological involvement was absent. Cell counts and serum C-reactive protein levels were normal.
On September 13, 2004 the patient was admitted to the hospital for a second time. In addition to previous findings, the patient showed a mild splenomegaly, hemorrhoids, osteopenia, hiatal hernia, gastritis and inflammation of duodenum. Systemic inflammatory pathology and neoplasia were ruled out. Doctors suspected a depressive syndrome and the patient was treated with antidepressants. [8][9][10][11]. PCRs were conducted with 1 µl (0.1-10 ng) DNA using 10 pmol of each primer in a 50-µl volume (1.5 mM MgSO 4 , 0.2 mM dNTP, 1 × AMV/Tfl 5 × reaction buffer, 5u Tfl DNA polymerase) employing the Access RT-PCR system (Promega, Madison, WI, USA). Reactions were performed in an automated DNA thermal cycler for 35 cycles. PCR products were electrophoresed on 1% agarose gels to check the size of amplified fragments by comparison to a DNA molecular weight marker (1 Kb DNA Ladder, Promega). Control reactions were done without the addition of DNA to rule out contaminations during PCR.
Amplified Anaplasma 16S rDNA fragments were resin purified (Wizard, Promega) and cloned into pGEM-T vector (Promega) for sequencing both strands by doublestranded dye-termination cycle sequencing (Core Sequencing Facility, Department of Biochemistry and Molecular Biology, Noble Research Center, Oklahoma State University). Two independent clones were sequenced from each PCR. Multiple sequence alignment was performed with the program AlignX (Vector NTI Suite, version 5.5; InforMax, North Bethesda, MD, USA).
Results and discussion
The analysis of the clinical symptoms and laboratory results of the patient described herein suggested the possibility of an infectious agent as the cause of the illness. Viral infections were ruled out and the investigation was directed towards the identification of tick-borne pathogens. [12]).
The patient described herein fulfilled the criteria for confirmed HGA, including prolonged fever, arthralgia, myalgia, malaise, nausea, abnormal liver function and positive PCR and sequence analysis for A. phagocytophilum DNA. However, the patient was seronegative for up to six months after detection of pathogen infection by PCR. These results suggested that the patient presented a chronic stage of infection with symptoms produced by a secondary illness related or not to A. phagocytophilum and/ or could has been infected with a strain of the pathogen that could not be detected by existing serological tests or induces a low antibody response. Strain genetic differences have been reported in A. phagocytophilum and may be associated with variations in pathogenicity and host tropism [10,[12][13][14], although the exact relationship between these factors is presently unknown.
Conclusion
The results reported herein demonstrated a prolonged A. phagocytophilum infection in a patient without a detectable antibody response against the pathogen. These results documented the first case of prolonged A. phagocytophilum infection in Sicily, Italy and suggest the possibility of human infections with A. phagocytophilum strains that result in clinical symptoms and laboratory findings confirmatory of HGA but without detectable antibodies against the pathogen.
Authors' contributions
JF carried out the sequence analysis, designed the study and drafted the manuscript. AT and SC conceived the study, and participated in its design and coordination and helped to draft the manuscript. VN and AA carried out the molecular genetic studies. VDM and MR collected and analyzed the clinical data. ARM carried out the immunoassays. KMK helped to draft the manuscript. All authors read and approved the final manuscript.
|
2018-05-08T17:42:21.602Z
|
0001-01-01T00:00:00.000
|
{
"year": 2005,
"sha1": "be4ed034a59d8a57d811e6ea69bca0d15e277e5d",
"oa_license": "CCBY",
"oa_url": "https://ann-clinmicrob.biomedcentral.com/track/pdf/10.1186/1476-0711-4-15",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "be4ed034a59d8a57d811e6ea69bca0d15e277e5d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
91185231
|
pes2o/s2orc
|
v3-fos-license
|
Preparation and Characterization of Esterified Bamboo Flour by an In Situ Solid Phase Method
Bamboo plastic composites have become a hot research topic and a key focus of research. However, many strong, polar, hydrophilic hydroxyl groups in bamboo flour (BF) results in poor interfacial compatibility between BF and hydrophobic polymers. Maleic anhydride-esterified (MAH-e-BF) and lactic acid-esterified bamboo flour (LA-e-BF) were prepared while using an in situ solid-phase esterification method with BF as the raw material and maleic anhydride or lactic acid as the esterifying agent. Fourier transform infrared spectroscopy results confirmed that BF esterification with maleic anhydride and lactic acid was successful, with the esterification degrees of MAH-e-BF and LA-e-BF at 21.04 ± 0.23% and 14.28 ± 0.17%, respectively. Esterified BF was characterized by scanning electron microscopy, contact angle testing, X-ray diffractometry, and thermogravimetric analysis. The results demonstrated that esterified BF surfaces were covered with graft polymer and the surface roughness and bonding degree of MAH-e-BF clearly larger than those of LA-e-BF. The hydrophobicity of esterified BF was significantly higher than BF and the hydrophobicity of MAH-e-BF was better than LA-e-BF. The crystalline structure of esterified BF showed some damage, while MAH-e-BF exhibited a greater decrease in crystallinity than LA-e-BF. Overall, the esterification reaction improved BF thermoplasticity, with the thermoplasticity of MAH-e-BF appearing to be better than LA-e-BF.
Introduction
In China, bamboo is a rich resource, having the benefit of a short growth cycle, and the research and utilization of bamboo have attracted much attention [1,2]. Various kinds of bamboo wood-based panels, active carbon, and fuel have been successfully developed [3,4]. However, these traditional processing methods have low utilization rates, high energy consumption, serious pollution issues, lack of effective utilization, single performance uses, and low added value. If bamboo leftovers from these processes and bamboo plastic composite polymer mixed preparations can be used for packaging, construction, and even used in cars, high-speed rail, aircraft, floors, and interiors, it would not only effectively improve the utilization rate of bamboo timber and produce added value, but it would also promote the development of a circular economy and help to maintain an ecological balance [5,6].
However, there are many strong, polar, hydrophilic hydroxyl groups in bamboo, which are incompatible with polymer interfaces and directly affect the thickness, morphology, structure, and dispersion uniformity of the material, leading to the deterioration of material properties. Therefore, replacing hydrophilic -OH by hydrophobic groups is an effective way to improve the hydrophobic properties of bamboo. The methods for modifying bamboo hydrophobicity, while using existing technology, include bamboo fiber steam explosion [7,8] and electron beam irradiation. These physical methods can improve the mechanical interlocking forces of bamboo flour (BF) and polymer composites to a certain extent, but they cannot form a strong chemical bond between the fiber and resin matrix. For this reason, researchers have used chemical methods to modify these plant fibers. Bledzki has pointed out that a fiber-coupling agent treatment can change fiber wettability and improve the interfacial compatibility of composite materials [9]. However, this method generally requires modification by a coupling agent in an organic solvent, which increases production costs and also has potential to damage the environment. Also, Lee has studied maleic anhydride (MAH)-esterified bamboo fiber compounded into a composite material, which improved the interfacial compatibility [10]. In addition, many studies have used esterification reactions to modify BF and achieve good results [11,12].
Chemical modification of the plastic matrix has been shown to form a molecular/structural bridge between the fibers and plastic, which is conducive to enhancing the bonding between interfaces. At present, the most commonly used esterification methods include an aqueous phase, organic solvent, and reactive extrusion methods. The water phase method is a uniform reaction without organic solvent, because the reaction is a heterogeneous reaction between the solid particles of BF and the liquid, the affinity and forces between the two reactants are not high and the hydrolysis of carboxylic acid is a side reaction. The organic solvent reaction is uniform, but the degree of substitution is not high and the reaction has environmental pollution problems. The reaction extrusion method needs plasticizer, which destroys the BF particle structure and restricts the range of BF applications. In view of this, a solid-phase esterification method in situ was used for modifying bamboo. Lactic acid (LA) and MAH belong to two carboxylic acids, and their use to modify BF can improve its crosslinked structure. In experiments here, BF was mixed with LA or MAH in an airtight reactor and the solid phase esterification reaction carried out under certain pressure and temperature conditions ( Figure 1). When compared with the solution method and melting method, solid phase esterification exhibited the following advantages: (1) the reaction temperature was significantly reduced and by-products and degradation reactions also reduced; (2) the monomer concentration was large and fully reacted, the reaction was favored, and the reaction efficiency was high; (3) the condensation reaction and ring-opening reaction was stable and high pressure was not required; and, (4) the reaction system did not require organic solvents. This was overall an environmentally-friendly condensation process. This in situ solid-phase esterification of BF was found to be a highly efficient and environmentally-friendly process.
Materials
The BF that was used in this study was supplied by Hunan Taohuajiang Industrial Co., Ltd. (Yiyang, China) and was used as received. LA (AR) was obtained from Chongqing East Sichuan Chemical Co., Ltd. (Chongqing, China). MAH (AR) was purchased from Tianjin Kemiou Chemical Reagent Co., Ltd. (Tianjin, China). Acetone (AR) was obtained from Hunan Normal University Chemical Reagent Factory (Changsha, China). Sodium hydroxide (AR) was obtained from Xilong Chemical Co., Ltd. (Guangzhou, China). Ethanol (99.7%, AR) was obtained from Anhui Ante Food Co., Ltd. (Suzhou, China). Ultra pure water was obtained in the laboratory.
Preparation of Esterified BF
A select quantity of BF was treated with 1 wt % NaOH solution for 24 h, washed repeatedly with tap water till the wash water pH was neutral and then dried in an oven of 60 • C to a constant weight ( Figure 2). Next, 30 g of the alkali-treated BF (dry base) was mixed with 4.5 g of LA or with 4.5 g of MAH and then placed in a hydrothermal reaction kettle at 80 • C for 2 h. The reaction products were cooled to room temperature, a select quantity of acetone added, stirred for a while, and then the solvents removed by rotary evaporation. Finally, the product was washed three times with acetone and then placed into a 50 • C oven and dried till constant weight was achieved.
Fourier-Transform Infrared Spectroscopic Analysis
Chemical changes in esterified BF after esterification were characterized while using Fourier-transform infrared spectroscopy (FT-IR) of samples that were tabletted with KBr and (IRAffinity-1, Shimadzu Corp., Kyoto, Japan). To remove moisture completely, native and esterified BF were further dried in a muffle oven at 50 • C for 48 h. Samples for testing were obtained by grinding material fully with a weight ratio of sample/KBr of 1/100. FT-IR curves of samples were obtained in a range of 400-4000 cm −1 .
Determination of Esterification Degree
Esterification degree of esterified bamboo flour was tested by the saponification principle. The grafting efficiency was calculated while using a previous published procedure [13,14]. First, 1.00 g of dry esterified BF was weighed and placed in a 250 mL conical flask. Next, 10 mL of 75% ethanol solution in deionized water was added, followed by the addition of 10 mL of 0.5 M aqueous sodium hydroxide solution. The stoppered conical flask was agitated, warmed to 30 • C, and stirred for 1 h. Excess alkali was then neutralized with a standard 0.5 M aqueous hydrochloric acid solution. A blank titration was performed using native BF and the degree of esterification (DS) being calculated as follows.
Where W is the substituent group content, %; M the esterifying agent molecular weight, g; c the aqueous hydrochloric acid solution concentration, M; V 0 the aqueous hydrochloric acid solution volume consumed by the blank sample, mL; V 1 is the aqueous hydrochloric acid solution volume consumed by the esterified BF sample, mL; n the number of hydrophobic groups from the grafted monomer; and, m the sample mass, g.
Determination of Water Absorption
To determine the effects of esterification on BF hydrophobicity, 2.0 g of native BF and esterified BF (dry base) were separately placed in glass dishes that contained a set amount of water. Over the test period, the sample weights were measured every 24 h and water absorption was calculated, as follows.
where W t is the sample weight after water absorption for t h and W 0 the sample weight when it reached a constant dry weight.
Contact Angle Measurements
A set amount of native and esterified BF were weighed and pressed into pie-shaped samples 1.5 cm in diameter using a press machine with a pressure of 20 MPa. An optical contact angle measurement instrument (OCA20, DataPhysics Instruments GmbH, Filderstadt, Germany) was used to measure the sample contact angle, while using distilled water as the test solution. For each measurement, a 4-uL drop of water was placed on a test sample using a microsyringe and the contact angle values measured to within 1 • .
Scanning Electron Microscopic Analysis
The morphologies of the native and esterified BF were determined with a scanning electron microscope (SEM; Quanta 200, FEI Co., Hillsboro, OR, USA), operating at an acceleration voltage of 20 kV. BF samples were mounted on circular aluminum stubs with double-sided adhesive tape and coated with gold before testing.
X-ray Diffraction Analysis
Native and esterified BF were further dried in a vacuum oven at 50 • C for 48 h to remove the remaining moisture. Sample crystallinity indices were measured while using an X-ray diffractometer (XRD; XD-2, Beijing Purkinje General Instrument Co., Ltd., Beijing, China) with a Cu target at 36 kV and 20 mA. Samples were tested in the angular range of 2θ = 5-40 • with a scanning rate of 4 • /min. The empirical crystallization index Crl, proposed by Segal (Segal et al., 1959), is a measure of natural cellulose crystallinity [15]. The calculation for this is where I 002 is the maximum diffraction peak intensity of the main crystallization peak 002 and I amorph the diffraction intensity of 2θ angles to 18 • .
Thermogravimetric Analysis
Thermogravimetric analytical (TGA) measurements of native and esterified BF were performed using a 209 F3 TGA instrument (Netzsch Instruments Inc., Burlington, MA, USA). About 5 mg of dried sample powders were placed in a platinum crucible and heated from 30 to 600 • C at the rate of 10 • C/min. Nitrogen dynamic carrier gas was applied at 30 mL/min.
In Situ Solid Phase Polymerization Confirmation
Esterification reactions between the esterifying agents and BF involved the hydrophilic hydroxyl group (-OH) in BF being replaced by the hydrophobic modifying groups (Figure 1). After esterification with MAH, BF molecules were connected to C=O and C=C, and after the esterification with LA, BF molecules were connected to C=O. FT-IR analyses of native and esterified BF were performed to verify that esterification had occurred and to investigate the resulting chemical changes (Figure 3). In unmodified BF, its basic compositional unit is D-anhydroglucose, of which the main characteristic functional groups were C 2 and C 3 -linked secondary hydroxyls and C 6 -linked primary hydroxyls and D-pyranose ring structures. The absorption peaks for these main structures are shown in Figure 3. The characteristic peak centered at 3310 cm −1 corresponded to O-H stretching and vibration of hydrogen bond associations, 2930 cm −1 to C-H asymmetrical stretching and vibration, 1635 cm −1 from the water tightly bound to the starch, 1152 cm −1 from C-O-C asymmetrical stretching and vibration, 1080 cm −1 assigned to D-glucopyranose and hydroxyl-linked C-O stretching and vibration, and 925 cm −1 due to glucosidic bond vibration. In the infrared spectrum of MAH-e-BF, in addition to all of the characteristic absorption peaks of the BF, the C=O absorption peak appeared at 1720 cm −1 [16,17] and the C=C absorption peak appeared at 1585 cm −1 [18]. LA-e-BF also showed a C=O absorption peak at 1720 cm −1 . Following esterification with a BF-esterifying agent, unreacted MAH and LA and oligomer were removed after washing with acetone. This result confirmed that MAH and LA molecular chains were detected in the BF skeleton, thus verifying that esterification had occurred between the BF and MAH or LA. The solution titration results established that the degrees of substitution of MAH-e-BF and LA-e-BF were 21.04 ± 0.23% and 14.28 ± 0.17%, respectively.
Morphology Change of Esterified BF
SEM, in principle, uses a very fine focused high-energy electron beam to scan a sample and to stimulate and collect a variety of physical information, by accepting, amplifying, and displaying this information, and the surface morphology of a test specimen is observed. The surface morphology changes of native BF, MAH-e-BF, and LA-e-BF were observed by SEM to study the extent of changes in BF surface morphology as a result of the esterification reactions ( Figure 4). The surface of native BF was smooth with few trenches, few edges, and few corners (Figure 4). When compared with BF, esterified BF surfaces were fragmentary, rough, angular, and convex. Moreover, the covering material produced by these reactions were clearly observed on these surfaces, with the surface roughness and bonding degree of MAH-e-BF clearly greater than that of LA-e-BF. These results also indicated that BF surface roughness was positively correlated with the DS. SEM test results further showed that MAH and LA were successfully reacted with BF by this in situ solid-phase esterification method, with the resulting effects from MAH-modification appearing to be better than with LA.
Water Resistance of Esterified BF
The BF molecular chain contains hydrophilic hydroxyl groups, which exhibit hydrophilic properties [19]. In this experiment, in situ solid-phase esterification of MAH and LA replaced hydroxyl groups on BF with hydrophobic groups, resulted in a reduction of the number of hydrophilic hydroxyl groups and concomitant increase in the number of hydrophobic groups, thus enhancing BF hydrophobicity. This phenomenon was verified by analyzing MAH-e-BF and LA-e-BF hydrophobicity using contact angle measurements. The water contact angle on a surface is the angle formed by a tangent line from the water droplet to a solid surface, which is an indication of the relative sample-surface hydrophobic character. The larger the contact angle, the higher the material's hydrophobicity [20]. Native BF, MAH-e-BF, and LA-e-BF were tested while using a contact angle tester ( Figure 5).
The initial contact angle of the BF was only 43 • and absorbing of water droplets completely required only 0.820 s. After in situ modification by solid phase esterification, the initial contact angle increased and the full absorption time of water droplets was prolonged for both MAH-e-BF and LA-e-BF ( Figure 5). The results suggested improved hydrophobicity in the esterified BF when compared to native BF. The reasons for this phenomenon included the replacement of hydrophilic hydroxyl groups on BF with hydrophobic groups, resulting the hydrophobic properties of modified BF clearly improving. SEM analysis shown that, after BF esterification with MAH or LA, BF surfaces were coated to a certain extent, reducing their water absorption capacity. MAH-e-BF exhibited a greater contact angle and longer absorption time when compared to those of LA-e-BF, indicating that MAH-e-BF hydrophobicity was better than LA-e-BF. The hydrophobicity of esterified BF was directly related to the number of hydrophobic groups that were grafted to BF and more hydrophobic groups resulted in better hydrophobicity. Thus, the measured contact angles were consistent with the DS. This was in line with the results form SEM analyses. Improvement in BF hydrophobic properties of BF by ester-modification was confirmed by determining the water absorption of BF, MAH-e-BF, and LA-e-BF based on their relative weight change after exposure to water ( Figure 6). Native BF water absorption gradually increased over time, but water absorption by MAH-e-BF and LA-e-BF were both lower than that of BF during soaking for 120 h, which further demonstrated that the esterification significantly improved the water resistance in modified BF. Comparison of water absorption for the two esterified BF revealed that MAH-e-BF was lower than LA-e-BF. Thus, the hydrophobicity of MAH-e-BF was found to be better than that of LA-e-BF, which was in agreement with contact angle test results.
Crystalline Structural Changes of Esterified BF
As BF crystal structure was easily affected by high temperatures and reactions with an esterifying agent, the crystal structure of BF, MAH-e-BF, and LA-e-BF were analyzed by XRD (Figure 7). XRD diffraction peaks for native BF were a typical of Iβ type crystalline structures, whose 2θ values were 16.25, 22.47, and 33.85 • , which corresponded to diffraction peaks of the 101, 002, and 040 crystal faces, respectively [21]. After esterification modification, the diffraction peaks of the main crystal faces 101, 002, and 040 of MAH-e-BF and LA-e-BF were similar to those of BF. This result showed that esterification mainly occurred in noncrystalline regions of BF, because the changes in the materials' crystalline region were very small. However, XRD diffraction peak intensities from MAH-e-BF and LA-e-BF were clearly weaker than those of native BF. Due to the effect of heat and pressure in the hydrothermal reactor, a part of the modifier will cause swelling in the crystalline region. Therefore, a part of the reaction occurs in the crystalline region, resulting in a decrease in crystallinity. The BF crystallinity degree was calculated to be 56.78%, with the crystallinity of MAH-e-BF and LA-e-BF found to be 47.45 and 51.07%, respectively, which showed that BF crystallinity decreased after in situ solid phase esterification with MAH or LA. Because of this treatment, MAH and LA infiltrated into the crystalline area and destroyed the hydrogen bonding between molecules in the crystalline region. At the same time, BF hydroxyl groups chain-reacted with MAH or LA molecules and molecular chains gradually grew and crosslinked, which further destroyed BF crystallinity. Conversely, the destruction of the crystalline zone facilitated these reactions, which led to a further decrease in crystallinity. As BF crystallinity decreased, the forces between BF molecules were weakened [22], such that the thermal plasticity of MAH-e-BF and LA-e-BF improved. MAH-e-BF crystallinity was less than that of LA-e-BF, owing to higher DS in MAH-e-BF and more reacted BF chain hydroxyl groups, producing more serious destruction of hydrogen bonds.
Thermal Performance Analysis
Based on XRD analysis of esterified BF, esterification were observed to decrease BF crystallinity by changing the crystalline structure, which inevitably affected the BF thermal properties. Therefore, TGA of the material was used to determine thermal property changes in BF when it was altered to form MAH-e-BF and LA-e-BF (Figure 8). Thermal degradation of native and esterified BF was divided into three stages, with temperature ranges of 50-120, 120-400, and 400-600 • C (Figure 8). The first stage represented water evaporation from BF. In the second stage, hemicellulose, cellulose, and some xylem in BF thermally decomposed and the fastest decomposition rate was observed. In the third stage (> 400 • C), the remaining material decomposed to carbon through broken chain pyrolysis. The initiation temperatures of thermal decomposition in MAH-e-BF and LA-e-BF were clearly lower than BF and the residual ratio also lower than BF. These results were attributed to decreased crystallinity of MAH-e-BF and LA-e-BF by esterification, so that the molecular chain arrangement of cellulose in bamboo flour was reduced. This result also suggested an increase in BF plasticity as a result of esterification. When compared to MAH-e-BF and LA-e-BF, the thermal decomposition temperature and residual ratio of MAH-e-BF were lower than those of LA-e-BF, which was due to their crystallinity degrees. This also indicated that MAH-e-BF molecules were more loosely arranged and thus had better thermal plasticity.
Conclusions
This study demonstrated the successful preparation of MAH-e-BF and LA-e-BF while using an in situ solid phase esterification method, achieving DS values at 21.04 ± 23% and 14.28 ± 0.17%, respectively. Esterification resulted in the replacement of hydroxyl groups on BF D-anhydroglucose moieties with hydrophobic groups from MAH or LA, which improved BF hydrophobic characteristics. Esterified BF hydrophobicity was significantly higher than native BF and MAH-e-BF hydrophobicity was better than LA-e-BF. Esterified BF surfaces were covered with graft polymer, with the surface roughness and bonding degree of MAH-e-BF being clearly greater than those of LA-e-BF. The crystalline structure the esterified BF showed some damage, with the MAH-e-BF exhibiting a greater decrease in crystallinity than LA-e-BF. Overall, esterification improved BF thermoplasticity and MAH-e-BF thermoplasticity was better than LA-e-BF.
Esterified BF produced by in situ solid phase esterification exhibited increased overall hydrophobicity with concurrent increased interface compatibility, allowing for an expanded range of applications in bamboo plastic composites. This work compared the influence of two esterifying agents on BF esterification degree and hydrophobic character, providing reference data for the preparation of blended composites of BF and other polymers.
|
2019-04-05T01:05:29.963Z
|
2018-07-30T00:00:00.000
|
{
"year": 2018,
"sha1": "fecbd85184aeedc9680727aa763efd6779ae93a0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/10/8/920/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fecbd85184aeedc9680727aa763efd6779ae93a0",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
237599334
|
pes2o/s2orc
|
v3-fos-license
|
Copper Large-scale Grain Growth by UV Nanosecond Pulsed Laser Annealing
UV nanosecond pulsed laser annealing (UV NLA) enables both surface-localized heating and short timescale high temperature processing, which can be advantageous to reduce metal line resistance by enlarging metal grains in lines or in thin films, while maintaining the integrity and performance of surrounding structures. In this work UV NLA is applied on a typical Cu thin film, demonstrating a mean grain size of over 1 {\mu}m and 400 nm in a melt and sub-melt regime, respectively. Along with such grain enlargement, film resistivity is also reduced.
I. INTRODUCTION
In advanced BEOL interconnects, reducing the trench geometry limits metal grain growth with the consequence of increasing electron scattering at grain boundaries. It results in an exponentially increasing line resistivity while scaling down the lines, and degradation of RC delay. In such scaling era, alternative metals such as Ru, Co, and Mo are introduced because of their potential benefits in line resistivity, which come from a complex combination of bulk resistivity, line width, mean-free-path of electrons, electro-migration reliability (i.e., melting point), integration compatibility, and especially the use of a specific set of barrier and liner [1][2][3][4]. However, even if the narrowest interconnects are formed with alternative metals, copper will not be completely replaced, and it will remain the reference to beat and the preferred candidate for larger interconnects. Thus, it is important to explore new paths to boost his performances and extend his utilization. Extending Cu technology is still possible, especially by engineering the barrier/liner part [5][6][7]. On the other hand, nanosecond laser annealing (NLA) demonstrated a benefit on BEOL interconnects by enlarging the mean size of grains in both Cu [8,9] and Ru [10] lines. In fact, NLA allows to reach a much higher surface temperature than that of conventional BEOL limit (i.e., 400 ℃ for minutes), while conserving the functionality of surrounding devices thanks to its short timescale and shallow irradiation absorption. Such opportunity to reduce the interconnect resistance became particularly critical now, when the number of interconnect layers is continuously increasing [11].
In this paper we study high temperature processing realized by UV NLA to enable large-scale grain growth in thin films and lines (e.g., roughly 50-nm-thick). Specifically, we present the formation of large grains, which is, to our knowledge, a record for such a thin film (typically around 100 nm after annealing at 350 to 600 ℃ for minutes [12,13]). This opens a potential path to boost performances of future Cu interconnects.
II. EXPERIMENTAL
A 50-nm-thick sputtered-Cu was deposited on 8 nm-thick Ta/5 nm-thick TaN/100 nm-thick SiO2/Si without any capping layer on top. A UV NLA was performed at room temperature in air. Both laser fluence (LF) and process time (t) were varied to control the heat generated in the Cu film. The evolution of the material as a function of annealing condition was captured by in-plane XRD. Then, some selected conditions were analyzed by TEM and Electron Diffraction Mapping (EDM). Finally, the correlation between the film resistance and grain size was deduced. Figure 1 shows the XRD patterns taken at LF1 for different t (t1 < t2 < t3 < t4 < t5), where the reference (i.e., non-annealed) data is also compared. In the as-deposited Cu film, the peaks of Cu (111), (200), and (220) planes are clearly observed, and a slight surface oxidation is also implied by the peak of Cu2O (111). For t1 and t2, the peaks of Cu (111) and (200) disappear, while those of Cu (220) grows. For t3, t4, and t5, the disappeared peaks emerge again, while the peaks of Cu (220) show a significant drop of intensity. This suggests that the Cu SE-80-6031-L1 SCREEN Semiconductor Solutions, Co. Ltd. Accepted Paper for the IEEE International Interconnect Technology Conference (IITC) 2021 Virtual Symposium film is melting (i.e., homogeneous nucleation) at the latter conditions (t3, t4, t5), but not at the former ones (t1, t2). A similar XRD pattern evolution was observed at a smaller laser fluence (LF2) as well (data not shown).
III. RESULTS AND DISCUSSION
To get a more in-depth comparison of sub-melt and melt regimes, two UV NLA conditions (i.e., t2 and t4 at LF1) were selected to pursue the analysis of microstructure in the Cu thin film. Figure 2 shows the plane-view TEM images of the nonannealed and annealed samples. In the as-deposited film, only small grains are observed everywhere. In the annealed films, a significant grain growth is observed. On these TEM specimens, electron diffraction patterns were obtained and fitted with those theoretically predicted for a face-centered-cubic (FCC) crystal lattice of Cu. Also, a mean grain size (Av.) was calculated as a weighted average of area ratio within each observed area, considering Σ3CSL facets as grain boundaries. In the asdeposited film (Fig. 3(a)(b)(c)), the small grains (Av. 53.9 nm) are randomly distributed. After annealing at LF1 for t2 (Fig. 3(d)(e)(f)), a significant grain growth (Av. 414 nm) is observed and interestingly the Cu (111) plane is preferably oriented on the top surface (i.e., ND). This is consistent with the fact that the (111) plane has the lowest surface energy in Cu FCC system [14]. In the in-plane directions (i.e., TD and RD), all the observed planes are on a typical tensile-strain-induced slip line (i.e., between [101] and [011] on an FCC-type stereogram) of Cu single crystal [15]. After annealing at LF1 for t4 ( Fig. 3(g)(h)(i)), very large grains (Av. 1000 nm) are randomly distributed. Then, the grain morphology seems drastically changed, and certain grains grow up to several μm scale in a preferred direction.
At the end, film resistivity was characterized for the same three samples. According to the literature [12], the Mayadas-Shatzkes (MS) model, which involves the scattering at grain boundaries, may give a better fit with our experimental data obtained in the 50-nm-thick Cu film, than the Fuchs-Sondheimer (FS) model. To start, the MS model was fitted with the film resistivity of the non-annealed sample by changing only the reflection coefficient, R. Then, R = 0.45 was obtained, which is close to the reported value (R = 0.47 [12]). The same R was applied to the other samples. The effective thickness of the Cu and Cu2O layers before and after UV NLA were confirmed by cross-sectional TEM images. Figure 4 shows the fitting result as a function of the mean grain size (Av.). For the annealed samples, the film resistivity shows a deviation from the theoretical values. A possible origin of this deviation is associated to oxygen (O) contamination in our Cu thin films (before and) during the annealing. According to SIMS, the averaged O concentration level became higher when the mean grown grain size did larger (data not shown). In addition, in the melting regime, the Cu surface roughness may be significantly increased during UV NLA, resulting in more pronounced surface scattering of electrons compared to the non-annealed case. These two aspects will be improved in future works by covering the Cu surface with a capping layer and by better controlling the UV NLA conditions.
IV. CONCLUSION
We investigated the effect of UV NLA on a thin Cu film in order to control the grain growth and to find a path to mitigate the exponential growth of the metal resistance in advanced metal interconnects. In the sub-melt condition, the mean grain size (Av.) was increased up to 414 nm, almost 8-times-larger than that of the as-deposited film (Av. 53.9 nm), with a controlled distribution of grain orientations. In the melting condition, the grain growth was extended further (Av. 1000 nm), but the control of the grain orientation was not maintained. The observed grain growth led to a consistent reduction of the film resistivity. Although these results are promising, it is only the first fundamental study on thin films that motivates a further investigation. Particularly, the grain growth control needs to be confirmed in real interconnect structures. Also, it must be assured that the applied thermal budget does not degrade surrounding materials and structures. Fig. 1. XRD patterns obtained for the non-annealed and annealed Cu thin films. UV NLA was at LF1 for different t (t1 < t2 < t3 < t4 < t5).
Fig. 2.
Plain-view TEM images taken for the nonannealed and annealed Cu thin films. UV NLA was at LF1 for t2 and t4. Fig. 3. EDM images taken for the non-annealed (i.e., (a), (b), and (c)) and the annealed Cu thin films. UV NLA was at LF1 for t2 ((d), (e), and (f)) or t4 ((g), (h), and (i)). As depicted in (j), ND, TD, and RD stand for Normal Direction, Transverse Direction, and Reference Direction, respectively. Also, a standard triangle of grain orientations is also shown in (k).
|
2021-09-23T13:08:40.348Z
|
2021-07-06T00:00:00.000
|
{
"year": 2021,
"sha1": "24ea88352f0a685650aa126cd64cf4d6f1dc1072",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2111.07580",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1449b68764d81513a00ad6a92c514cb68d80ea74",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
13059157
|
pes2o/s2orc
|
v3-fos-license
|
Under Consideration for Publication in J. Functional Programming Correctness of Compiling Polymorphism to Dynamic Typing
The connection between polymorphic and dynamic typing was originally considered by Curry, et al. in the form of " polymorphic type assignment " for untyped λ-terms. Types are assigned after the fact to what is, in modern terminology, a dynamic language. Interest in type assignment was revitalized by the proposals of Bracha, et al. and Bank, et al. to enrich Java with polymorphism (generics), which in turn sparked the development of other languages, such as Scala, with similar combinations of features. In that setting it is desirable to compile polymorphism to dynamic typing in such a way that as much static typing as possible is preserved, relying on dynamics only insofar as genericity is actually required. The basic approach is to compile polymorphism using embeddings from each type into a universal 'top' type, dyn, and partial projections that go in the other direction. This scheme is intuitively reasonable, and, indeed, has been used in practice many times. Proving its correctness, however, is non-trivial. This paper studies the compilation of System F to an extension of Moggi's computational metalanguage with a dynamic type and shows how the compilation may be proved correct using a logical relation.
Abstract
The connection between polymorphic and dynamic typing was originally considered by Curry et al. (1972, Combinatory Logic, vol. ii) in the form of "polymorphic type assignment" for untyped λ-terms. Types are assigned after the fact to what is, in modern terminology, a dynamic language. Interest in type assignment was revitalized by the proposals of Bracha et al. (1998, OOPSLA) and Bank et al. (1997, POPL) to enrich Java with polymorphism (generics), which in turn sparked the development of other languages, such as Scala, with similar combinations of features. In such a setting, where the target language already has a monomorphic type system, it is desirable to compile polymorphism to dynamic typing in such a way that as much static typing as possible is preserved, relying on dynamics only insofar as genericity is actually required. The basic approach is to compile polymorphism using embeddings from each type into a universal "top" type, D, and partial projections that go in the other direction. This scheme is intuitively reasonable, and, indeed, has been used in practice many times. Proving its correctness, however, is non-trivial. This paper studies the compilation of System F to an extension of Moggi's computational meta-language with a dynamic type and shows how the compilation may be proved correct using a logical relation.
This research is sponsored in part by the National Science Foundation under Grant Number 1116703. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Interest in polymorphic type assignment was revitalized by the proposals of Bracha et al. (1998) and Bank et al. (1997) to enrich Java with polymorphism (generics), which then inspired similar treatment of generics in languages such as C and Scala. Such languages are statically typed, but feature a universal type (Object in Java, but herein called D) of dynamically typed values. The question arose as to how to compile these extensions, given that little or no change could be made to the language's established monomorphic run-time structure. Abstracting from the language-specific details, the question may be re-phrased as: How to compile System F to a simply typed language D with a type of dynamically typed values while preserving static type information as much as possible?
Classical type assignment effectively erases static types, mapping everything to the universal type D, which is unsatisfactory. We would rather, for example, translate the monomorphic doubling function λx:nat.x + x to essentially the same typed code λx: N.x + x in the target, reserving dynamic typing, and its associated run-time costs, to the translation of code that actually uses polymorphism. The ideal translation of a System F type A into a D type A † will preserve the structure of A, except that source language type variables, X, will be mapped to the target type D. This immediately raises the question of how to relate (A[B/X]) † to A † , which is to say how to manage polymorphic instantiation. Indeed, this is the heart of the translation given by the aforementioned authors.
The translation relies on the existence of an embedding, i, of each D type into the type D, equipped with a corresponding projection, j, which recovers the embedded object, which is to say that j is post-inverse (left-inverse) to i, up to observational equivalence, j • i ∼ = id. In the case of compiling to the JVM, i would be realized by an upcast to Object and j by a (possibly failing) downcast from Object. Notice that j is not also pre-inverse (right-inverse) to i, that is, i • j ∼ = id, because there is no reason to expect that an arbitrary value of type D lies in the image of the embedding i. In order-theoretic terms, every D type is a retract of D, with retraction given by the idempotent composition i • j : D → D.
The embedding of every D type into D lifts functorially into an embedding, I, from (A[B/X]) † to A † , accompanied by a corresponding projection, J, going in the other direction. These lifted embeddings and projections are used to mediate polymorphic instantiation. Consider the polymorphic identity function ΛX.λx:X.x of type ∀X.X → X in System F which is then translated to λx:D.x of type D → D in the target language. An instantiation of this polymorphic function at the type N is translated to the following function of type N → N: where i nat and j nat are, respectively, the embedding and the projection for N. The pre-and post-compositions with the embeddings and projections arise from the functorial action of the type constructor X → X, thought of as a function of X. The projection to type N → N requires that we embed the argument into D, execute the translation of the polymorphic identity, and project the result back to N. This function is observationally equivalent to the identity on N, because the context may only provide natural numbers as arguments, and expect natural numbers as results.
Our contribution: correctness proof
At a very high level, the form of our proof is that of an adequacy theorem for a paradigmatic denotational metalanguage with dynamic typing (which we call D) with respect to an operational semantics (represented by conversion rules) of a paradigmatic polymorphic calculus (which is System F).
Using embeddings and projections, as sketched above, we can give a straightforward translation of System F into D. The goal of the correctness proof is to show that an expression and its compilation are appropriately related. The contribution of this work is in the method of proof. In the literature, we identified two relevant results, neither of which are readily applicable to the present problem: • Meyer & Wand (1985) give a logical relation argument for correctness of continuation-passing style translation for the simply typed lambda calculus, but our projection j is not an pre-inverse (right-inverse) of i as in their work. • Igarashi et al. (2001) show the correctness of compiling generics in (core) Java, but their treatment seems inextricable from the source language, Featherweight Java, which involves a number of object-oriented concepts such as a class table.
Here, we present a carefully formulated parametric logical relation that directly relates terms with their translations, together with a key lemma that captures the way in which the relation respects the embeddings and projections. This is, to our knowledge, the first correctness proof of this method of compiling polymorphism in System F to dynamic typing. Compared to the bisimulation theorem by Igarashi et al. (2001), our logical relation (cf. Lemma 7) additionally permits foreign functions as long as they follow the embedding and projection invariants specified in the logical relation, which may be seen as a technical advantage.
Languages
Throughout the paper, we work modulo α-conversion. So bound variables are assumed to be renamed if collision would happen in substitutions, and variables appearing in contexts are always distinct.
Source language
Our source language is System F, the Girard-Reynolds polymorphic lambda calculus. As in Girard's original formulation, we include a base type of natural numbers. In our case, they provide the observable outcomes that are used to distinguish programs. The syntax, the typing rules, and the conversion rules are shown in Figure 2. We say a type A is closed if · A. The calculus is presented with β-conversions (≡ β ), a thin abstraction over reductions or the operational semantics. It is compatible with both the call-by-value and call-by-name reductions, or any reasonable operational semantics because the calculus is strongly normalizing.
Target language
Our source language is pure and strongly normalizing, but the target language has to have some effects. First, the presence of a universal type that will essentially be a model of an untyped lambda calculus means, because one can express fixed point combinators, that the target has to include non-terminating expressions. Second, there needs to be some error mechanism in the target, which we can invoke when projections from the universal type should fail.
Once the target has side-effects, we have to decide how eager to make the translation. For System F, any sensible translation will have a corresponding correctness theorem that shows, among other things, that the translation of a closed source program actually never exhibits any effects, but different translations can nevertheless translate a source term into ones that can be distinguished in the target. Here, we choose to work with a call-by-value translation, as that is what one would want to use in the most common real-world situation, in which the source language also has some effects, and has a call-by-value semantics.
We could take the target to be a monomorphic ML-like language with a particular universal type and notion of error. That would work out perfectly well, but we instead translate directly into a slightly more explicit metalanguage for the semantics of such a language, namely a version of Moggi's (1991) computational metalanguage, λML T . The computational metalanguage is a simply typed lambda calculus with a type constructor, T(·), corresponding to a strong monad with an injective unit; we further add a universal type and errors. See Figure 3 for the relevant fragment of its syntax, typing rules, and equations.
The equations for D should be understood as real denotational equalities. The source language System F is treated more syntactically: The translation is defined structurally on actual terms (just modulo α-conversion) and we only later show that it respects β-conversions in the source language as a separate lemma. (We say a relation is admissible if it respects β-conversions (Equation (1) on page 12), and the admissibility of the translation relation is stated as Lemma 4.) Taking D to be the computational meta-language is largely a matter of taste, but has some advantages. We will be doing a great deal of equational reasoning and, unlike a call-by-value lambda calculus, λML T satisfies unrestricted β and η laws. The fastidious distinction between value types, D, and computation types, T(D), means that the type system makes it clear where there is a possibility of an error or divergence and where there is not, so various erroneous definitions one might make simply will not typecheck. And finally, we can be generic in exactly what the monad is-the proof will work for any T(·) that satisfies the equations we use.
As mentioned above, the monad is used to account for the possibility of divergence that is forced by the presence of a universal type, and also for the runtime errors that should arise, for example, when one injects a function value into the universal type and then attempts to project (cast) it back out as a natural number. We will require that T(·) comes equipped with a polymorphic constant err : T(D) for any D, but there are many concrete examples of monads that will suit our purposes. For example, in the category of ω-cpos (predomains) 1. take T(D) = D ⊥ , the lifting monad, and err = ⊥, so dynamic errors are just modeled by divergence; 2. take T(D) = (1 + D) ⊥ , the lifted error (maybe, option) monad, and err = [inl( * )], so dynamic errors are modeled by a terminating, failing computation.
But everything that follows works for any monad that satisfies our conditions. We write [·] for the unit of the monad and also abbreviate the usual monadic bind construct let x ⇐ d in e by just x ← d; e. The target language D has a base type N for the natural numbers. If n : N is a numeral, we write n for the corresponding System F normal form suc(. . . (suc(z)) . . .). subject to the equations listed in Figure 3. We know these requirements are consistent, as they can be canonically satisfied by taking D to be the least solution to the where roll(·) and unroll(·) are the components of the isomorphism in the solution of the equation for D. However, nothing that follows relies on any domain theory: We just need the equations. It is interesting to observe that the correctness of the translation does not actually require any interesting properties of errors. In particular, we do not need to specify that err is natural, that the monad is strict in errors (i.e. that x ← err; d = err), or even that errors are disjoint from values (∀d, err = [d]), though these properties do hold for our examples of concrete monads. Indeed, one could remove errors entirely, replacing them with arbitrary default values, without materially affecting what follows. The reason for this is that the correctness theorem only talks about error-free behavior-if everything in the context is error-free then the translated term is also error-free-so the precise nature of errors is not very important. But in a more practical setting, one would want to use a well-structured error mechanism.
Translation
The translation and interpretation of types follows Moggi's (1991) call-by-value translation, with type variables interpreted as the universal type, D. This is shown in Figure 4.
Embeddings and projections
Before we can define the translation of terms, we need some auxiliary definitions on the target side. First, we have an embedding, i, and a projection, j, mapping between A † and D for each source type A. The definitions are shown in Figure 5. Note that the embedding is total (any value of type A † can be mapped into the universal domain), although the projection is partial, which is why the monad appears in the return type. Only well-behaved elements of D may be mapped back to A † ; projecting an ill-behaved value may fail immediately or, in the case of function types, when the projected value is later actually applied.
An embedding followed by the corresponding projection is always morally the identity (actually, the unit of the monad):
Lemma 1
For any System F type A and D term x : A † ,
Proof
Induction on the structure of A.
Lifted embeddings and projections
The translation A † of a type A with a free type variable X has D in positions corresponding to the occurrences of X in A. Type application in the source involves substitution of a type B for those occurrences of X; translating the application requires the use of functions J B X.A from A † to T(A[B/X] † ), the monad applied to the translation of the substituted type. The result is wrapped in the monad because Dvalues produced by the argument in places corresponding to positive occurrences of X in A are not necessarily well-behaved. Just as with the embeddings and projections of the previous section, the definition of J B X.A is not only inductive on A, but mutually inductive with that of a function going in the other direction, The definitions are shown in Figure 6.
The following is a lifted version of Lemma 1.
Lemma 2
If Δ, X A and Δ B, then for any D term d : .
Proof
Induction on the structure of A.
Term translation
Just as was the case for types, the translation (·) * of terms in context is Moggi's usual call-by-value translation, extended to use J B X.A to translate type application. The formal definition is shown in Figure 7.
Note that uniqueness of typing in the source language ensures that the type A appearing on the right-hand side of the application case is uniquely determined, so this is indeed a good definition. It is also appropriately typed:
Logical relation
If B is a closed source type, write CT(B) = {M | ·; · M : B} for the set of closed terms of type B. Given R ⊆ CT(B) × D, a relation between closed source terms of type B and elements of D, then we say R is admissible if it respects the equivalence of the source language; that is We write Δ w to mean that the type environment w is a map from the finite set of type variables Δ to pairs comprising a closed type and an admissible relation on that type. Formally, If Δ = ·, X 1 , . . . , X n and w(X j ) = (B j , R j ) for each 1 6 j 6 n, then we define the for each A such that Δ A, by mutual induction on A as shown in Figure 8. The relation T R w A is a particular choice of "monadic lifting" of the relation R w A . Figure 8 also defines Fig. 8. Logical relations the shorthand R w A , which relates source terms to target values of type D. We will have (A[B j /B j ], R w A ) ∈ F . Observe that, as in previous work on relationally parametric models of polymorphism (Reynolds, 1983), the clause for polymorphic types involves quantification over all relations from a pre-defined set. This enforces parametricity and also avoids the potential circularity due to impredicativity, which would arise were one to consider instantiating just with R w B for each B; an instance of ∀X.A, say A[B/X], could be "larger" than ∀X.A and break the naive induction ordering. A more detailed discussion about the potential impredicativity issues can be found in Chapter 48 of the third author's text (Harper, 2012).
Lemma 4 (Admissibility)
For all w and A, R w A and T R w A are admissible.
Lemma 5 (Weakening) If Δ A and Δ w, then for any B and R, The crucial lemma is the following, which connects the logical relation at a substituted type, A[B/X], with the relation at the type A in an extended type environment, mediated by the lifted embeddings and projections. The statement involves instantiating a type variable with a particular, well chosen, relation.
Lemma 6 (Type substitution) Let Δ B, Δ w, and w(X j ) = (B j , R j ) for each j. Define the extended type environment w = w, X → (B[B j /X j ], R w B ). Then, for any A and M, with Δ, X A, ·; · M : (A[B/X])[B j /X j ], the following hold: A natural first attempt at a logical relations proof would replace "implies" by "iff" in the above, strengthening the lemma significantly. Our proof of Lemma 6 almost works for this stronger version, except for the second case of function types. That is, it is unclear how to show the following statement: Ignoring monads for the moment, the problem is that at some point, we want I (J (d)) = d, which is false in general. Lemma 6 is carefully formulated so that we no longer need this false statement, and yet is still strong enough to derive the correctness theorem for the translation. Here is a failed proof attempt of the strengthened version of Lemma 6.
Proof Attempt
From the assumption (M, Moreover, by inductive hypothesis applied to (M 2 , d 2 ) ∈ R w A 1 , we know (M 2 , J B Comparing this to the goal, we wish to show the following equation: which would be true if I B X.A 1 were a post-inverse (left-inverse) of J B X.A 1 , or that I B X.A 1 (d 2 ) = d 2 , which does not hold.
We now present the proof of the correct version of Lemma 6, which evades the difficulty and yet is sufficient for our main result.
Proof
The two parts are proved by simultaneous induction on A. Note that the type environment w remains free (universally quantified) in the induction hypothesis because in the case A = ∀Y .A , the environment will be extended.
• Case X: Therefore, by the definition of R w B , (M, i B (d)) ∈ R w B . Then, by the construction of w , R w B = R w X , and also I B X.X = i B , and thus 2. By the construction of w , R w X = R w B , and thus (M, d) ∈ R w B .
By the definition of R w B , and also the fact that J B X.
• Case Y (a variable different from X): • Case nat: )) ∈ T R w A 2 . By the equations in D, this is the same as Then, we can simplify the D expression further: A 1 (a)); J B X.A 2 (r)] and we will show this obvious choice of d works: By inductive hypothesis applied to (M 2 , d 2 ) ∈ R w A 1 [B/X] , which means there exists r such that d(I B X.A 1 (d 2 )) = [r ] and (M [M 2 /x], r ) ∈ R w A 2 . Therefore, and thus it suffices to show which is the inductive hypothesis applied to In either part ,M ≡ β ΛY .M for some M .
1. Expanding the definition, we know it is sufficient to show for any (C, R C ) ∈ F . Fix a pair (C, R C ). By the definition of R w A [B/X] , and by induction By Lemma 5 (weakening), and by exchange (implicit in the treatment of type environments as maps), we have the goal We claim that we can swap the universal quantifier of (C, R C ) and the existential quantifier of d . The reason is that [·] is injective and so d is uniquely determined by J B X.∀Y .A (d). After the swapping, the goal is then for which is exactly the definition of Note that J B X.∀Y .A = J B X.A and thus this is also equivalent to Fix the (C, R C ) ∈ F . From the assumption (M, d) ∈ R w ∀Y .A and the definition of the extended type environment w , we have By Lemma 5 (weakening), and therefore, together with exchange, Applying the inductive hypothesis, we have the desired statement . Armed with Lemma 6, we are now in a position to show the "Fundamental Property": that each (open) source term is logically related to its translation. The relation is defined on closed terms, so the statement of the lemma involves substituting arbitrary types and relations for free type variables, and arbitrary-but related-closed source and target terms for free term variables.
Lemma 7 (Fundamental property) Suppose Δ; Γ M : A, where Δ = ·, X 1 , . . . , X m and Γ = ·, x 1 :A 1 , . . . , x n :A n . Let w be such that Δ w and w(X j ) = (B j , R j ) for each 1 6 j 6 m. Then, for any list of source terms V i : A i [B j /X j ] and target terms t i : A † i , 1 6 i 6 n, such that
Proof
Induction on the derivation of Δ; Γ M : A. We first define some abbreviations, writing w for the type substitution [B j /X j ], V for the source term substitution [V i /x i ], andt for the target term substitution [t i /x i ].
-If n = n + 1, then V ( w(ifz(M; N 0 ; x.N 1 ))) ≡ β V ( w(N 1 ))[n /x]. Since (n , n ) ∈ R w nat , induction gives ( w(N)), e) ∈ R w A . Thus, we know V ( w(M)) ≡ β λx:A.M for some M such that Unfolding the logical relation for quantified types and instantiating with By the second part of Lemma 6, the key type substitution property, this implies An immediate consequence of Lemma 7 is that the behavior of a program (closed term of ground type) and its translation agree:
Discussion
Using logical relations, it is possible to prove the correctness of the compilation of polymorphic types to dynamic types in such a way that overhead is imposed only insofar as polymorphism is actually used. This compilation method lies at the heart of the implementation of generic extensions to Java, and of polymorphic languages such as Scala, on the Java Virtual Machine, with the type Object playing the role of our D. As far as we are aware, this is the first correctness proof of this compilation strategy for System F, and is novel insofar as it only relies on an embedding into D, rather than a stronger condition such as isomorphism. In this respect, the proof may be useful in other situations where the correctness of a compilation method is required.
Semantically, the underlying idea of interpreting types as retracts of a universal domain is an old one, going back to work of Scott (1976) and McCracken (1979). It has been adapted and used for various purposes in programming, including by Benton (2005) and Ramsey (2011) for interfacing typed languages with untyped ones, and by many authors studying run-time enforcement of contracts (Findler & Felleisen, 2002) in dynamic languages, and the correct assignment of blame should violations occur (Ahmed et al., 2011).
The broad shape of the proof presented here is that of adequacy: Showing agreement between an operational and a denotational (translational) semantics via a logical relation (Plotkin, 1977;Amadio, 1993). Similar logical relations have also been used for the closely related task of establishing the correctness of compilers (Minamide et al., 1996;Benton & Hur, 2010;Hur & Dreyer, 2011).
One possible extension to this work is to consider the extension of System F with general recursion at the expression level, or, more generally, with recursive types. It appears that handling general recursion is straightforward, following directly the strategy outlined in Chapter 48 of the third author's text (Harper, 2012), which requires that admissible relations be closed under limits of suitable chains, and which employs fixed point induction in establishing the main theorem. The extension to product and sum types is entirely straightforward. Recursive types require more sophisticated techniques pioneered by Pitts (1996), and adapted to the operational setting by Crary & Harper (2007).
Step-indexed methods, such as those introduced by Appel & McAllester (2001), Ahmed (2006) may also be useful in this respect.
Another possible extension is to consider System F with higher order polymorphism, namely System F ω , which enables programmers to abstract over even type constructors, such as lists or trees which themselves are polymorphic in their element type. Such higher order polymorphism has been materialized in dynamic typing, for example, in Scala, by Moors (2008), and it is conceivable, for studying the correctness, to migrate the method to System F ω as we did to System F. Together with the work by Rossberg et al. (2010) which compiles ML modules to System F ω , an alternative account for the dynamics of ML modules, in terms of dynamic typing, can possibly be made.
|
2018-05-08T18:12:01.947Z
|
0001-01-01T00:00:00.000
|
{
"year": 2016,
"sha1": "6066c6a0d358373a5d8a46fadfefe5528f8d1012",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Correctness_of_Compiling_Polymorphism_to_Dynamic_Typing/6604526/files/12094931.pdf",
"oa_status": "GREEN",
"pdf_src": "Cambridge",
"pdf_hash": "a15d782d2938626a603fdf6b9e00329ad0c418c5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
}
|
13303570
|
pes2o/s2orc
|
v3-fos-license
|
The accuracy of transcranial Doppler in excluding intracranial hypertension following acute brain injury: a multicenter prospective pilot study
Background Untimely diagnosis of intracranial hypertension may lead to delays in therapy and worsening of outcome. Transcranial Doppler (TCD) detects variations in cerebral blood flow velocity which may correlate with intracranial pressure (ICP). We investigated if intracranial hypertension can be accurately excluded through use of TCD. Method This was a multicenter prospective pilot study in patients with acute brain injury requiring invasive ICP (ICPi) monitoring. ICP estimated with TCD (ICPtcd) was compared with ICPi in three separate time frames: immediately before ICPi placement, immediately after ICPi placement, and 3 hours following ICPi positioning. Sensitivity and specificity, and concordance correlation coefficient between ICPi and ICPtcd were calculated. Receiver operating curve (ROC) and the area under the curve (AUC) analyses were estimated after measurement averaging over time. Results A total of 38 patients were enrolled, and of these 12 (31.6%) had at least one episode of intracranial hypertension. One hundred fourteen paired measurements of ICPi and ICPtcd were gathered for analysis. With dichotomized ICPi (≤20 mmHg vs >20 mmHg), the sensitivity of ICPtcd was 100%; all measurements with high ICPi (>20 mmHg) also had a high ICPtcd values. Bland-Altman plot showed an overestimation of 6.2 mmHg (95% CI 5.08–7.30 mmHg) for ICPtcd compared to ICPi. AUC was 96.0% (95% CI 89.8–100%) and the estimated best threshold was at ICPi of 24.8 mmHg corresponding to a sensitivity 100% and a specificity of 91.2%. Conclusions This study provides preliminary evidence that ICPtcd may accurately exclude intracranial hypertension in patients with acute brain injury. Future studies with adequate power are needed to confirm this result.
Background
Brain injury is frequently accompanied by episodes of intracranial hypertension, which is a potentially fatal condition [1][2][3]. Timely diagnosis through intracranial pressure (ICP) monitoring becomes fundamental in order to guarantee prompt diagnosis and appropriate therapeutic decision-making. Presently, the gold standard for continuous ICP monitoring is invasive measurement through insertion of a catheter within the brain ventricles (EVD) connected to an external pressure transducer [4,5]. However, this method may be cumbersome, not always available, and accompanied by an elevated complication rate due mostly to infection, hemorrhage, and catheter obstruction [6][7][8][9]. Brain intraparenchimal catheters, despite being safer, still require an invasive procedure and cannot be recalibrated once inserted, rendering the measurements prone to imprecision due to zero drift [10][11][12].
Numerous alternatives to invasive ICP measurement have been proposed in the literature. Although some techniques have potential as screening methods for intracranial hypertension, none have found a valid place within daily clinical practice [13][14][15][16]. Among these, methods which use transcranial Doppler (TCD) provide valuable information, as cerebral blood flow velocity has been shown to correlate with ICP [17][18][19][20][21][22]. In this study, we investigated if ICP estimated by means of TCD (ICPtcd) accurately identifies intracranial hypertension in patients with acute severe brain injury.
Project setting and design
This was a prospective multicenter pilot study and took place between November 2013 and August 2014 in six neurocritical care units (Brescia Spedali Civili University Hospital; Brescia Fondazione Poliambulanza; Pisa Azienda Ospedaliera Cisanello; Lecco Azienda Ospedaliera A. Manzoni; Varese Ospedale di Circolo Fondazione Macchi; Genova Ospedale Galliera). The Brescia University Hospital served as the coordinating center for the study. Ethics approval for all participating sites was obtained from the appropriate regulatory committees. Detailed written information was provided to the family members regarding the study protocol, the scope of research, and the safety of TCD examination. Since all patients had altered consciousness, the ethics committees waived the requirement for consent, as in Italy relatives are not regarded as legal representatives of the patient in the absence of a formal designation [23]. Written informed consent was requested from all surviving patients as soon as they regained their mental competency (NP 1892 -EudraCT: 2014-005482-71).
Patients were included if they were 18 years or older, had sustained acute brain injury and required invasive ICP monitoring within the first 24 hours of ICU admission. They were excluded if they had any one of the following: inaccessible or poor acoustic ultrasound window, a cardiovascular disease causing hemodynamic variations affecting the TCD reading (severe arrhythmia, cardiac valvular stenosis, severe vascular sclerosis), decompressive craniectomy, or any treatment for intracranial hypertension intervening between the invasive ICP (ICPi) and ICPtcd measurements. Patient sedation for ICP bolt placement consisted of bolus followed by continuous infusion of propofol or midazolam, fentanyl, and when necessary, neuromuscular blockade through bolus infusions of atracurium besylate. Mechanical ventilation was targeted to maintain adequate oxygenation (SaO 2 > 90%) and normocapnia (PaCO2 36-40 mmHg). Intravenous fluids and inotropic support (norepinephrine and/or epinephrine) were provided as appropriate in order to achieve and maintain a sufficient cerebral perfusion pressure (CPP >60 mmHg). General management of the various types of brain injury (traumatic, hemorrhagic, or ischemic), as well as the definition of intracranial hypertension, were in accordance to international guidelines [24][25][26][27][28][29]. Treatment of intracranial hypertension was based on a protocol-driven strategy which included optimization of arterial blood pressure and volemia, sedation, mild hyperventilation, and infusion of hyperosmolar fluids [30].
Patient monitoring
Systemic hemodynamic monitoring consisted of invasive arterial blood pressure (ABP) from the radial artery, continuous electrocardiography and pulse oximetry. ICPi was performed either by means of an intraparenchymal fiberoptic transducer (Camino Laboratories, Integra NeuroSciences, San Diego, CA, USA), or a catheter inserted into the brain ventricles and connected to an external pressure transducer and drainage system (Codman, Johnson & Johnson Medical Ltd., Raynham, MA, USA). Cerebral blood flow velocity was assessed using TCD sonography (DWL 2000 Multidop X2, Compumedics DWL, Singen, Germany), and was performed by a selected group of experienced operators in order to reduce inter-operator variability. The insonation technique was standard: a lowfrequency pulsed 2 MHz ultrasound probe was placed over the acoustic temporal window for insonation of the M1/M2 section of the middle cerebral artery (MCA) at a depth ranging from 45 to 55 mm [31][32][33]. The MCAs were insonated bilaterally; however, for ICPtcd measurement the acoustic window ipsilateral to the side of ICP bolt placement was used.
ICPtcd was calculated using the following equations (1 and 2) [21,22]: where MAP represented the mean arterial pressure, CPPe the estimated CPP, FVdia and FVm were, respectively, the diastolic and mean flow velocities, as measured by TCD. The ICPi and MAP readings used for calculations were recorded simultaneously in order to standardize measurements.
Study design
For each patient enrolled into the study, a total of three ICPtcd measurements were performed, each of which was compared to the corresponding ICPi for concordance. The first ICPtcd measurement (TIME 1) was performed immediately before ICPi placement and was compared with the first ICPi reading once the probe was positioned. The need to reduce the time gap as much as possible between the two readings was motivated by the fact that ICP may be subjected to variations caused by ABP manipulation, cerebrospinal fluid (CSF) leakage during catheter placement and pharmacological treatment or fluctuations due to the evolving underlying brain injury. The second ICPtcd measurement (TIME 2) was performed immediately after insertion of the ICPi probe and compared with the post-insertion ICPi reading. The third ICPtcd measurement (TIME 3) was performed between 2 and 3 hours following the second reading. The reason for this was to avoid any possible variations in systemic and cerebral hemodynamics caused by the ICPi device insertion itself, despite sedation. Therefore, performing the examination more than 2 hours post insertion should reduce the influence of the positioning maneuver on the readings. In accordance with the guidelines present during the study period, intracranial hypertension was defined as an ICP above 20 mmHg, which remained so for at least 10 minutes and was not related to procedural pain [12].
Statistical analysis
Continuous variables were expressed as means standard deviation (SD) or as medians (interquartile range, IQR) as appropriate, and discrete variables as counts (percentage). Concordance correlation coefficients were calculated for ICPi and ICPtcd both in patients receiving EVD and intraparenchimal ICP monitoring. These two correlation coefficients were compared by mean of Fisher transformation test, for TIME 1, in order to account for differences due to the possibility of CSF leakage during EVD placement.
Agreement between ICPtcd and ICPi measurements was evaluated both on the continuous raw scale and after categorization based on common usage threshold for ICP (20 mmHg) [34,35]. Concordance correlation coefficient between ICPi and ICPtcd for repeated measurements was calculated using variance components estimated through linear mixed model, adjusting for ICPi in the three separate time frames described above [36]. A Bland-Altman plot was computed for agreement, assuming constant bias and accounting for linked repeated measures. Linked repeated measures (TIME) were also accounted for variance components and were estimated using Markov chain Monte Carlo [37].
Receiver operating curve (ROC) and the area under the curve (AUC) were estimated after measurement averaging over time. Values of ICPi were dichotomized using a standard reference value of 20 mmHg [34,35]. Confidence interval for AUC, sensitivity and specificity were computed using bootstrapping (B = 10000) [37]. Youden statistics criterion was used to evaluate the performance of ICPtcd (best combination of sensitivity and specificity) [38]. The sensitivity was expressed as the probability that a patient with high ICPi (>20 mmHg) would also have a high ICPtcd value, and the specificity as the probability that a patient with normal ICPi (≤20 mmHg) would also have a normal ICPtcd value. Best threshold for marker was computed using Youden criterion [38][39][40].
Sample size for a future study was estimated using the procedure proposed by Flahault et al. and Chu et al., assuming a sensitivity of the test (ICPtcd) of 90%, a prevalence of the disease (intracranial hypertension) equal to 30%, statistical power of 95% and a minimal acceptable lower confidence limit of 10% [41,42].
R software was used for statistical analysis (version 3.2.5, Free Software Foundation, Inc., Boston, MA, USA).
Results
From November 2013 to August 2014, a total of 38 patients with acute brain injury were enrolled. Patient demographics, causes of brain injury requiring ICPi monitoring, and types of monitoring techniques are described in Table 1.
ICP monitoring was initiated in all patients within 24 hours following acute brain injury. EVD was placed in 10 patients, the other 28 received intraparenchimal catheter monitoring. The Fisher transformation analysis of the correlation coefficient for TIME 1 between ICPtcd-IP and ICPtcd-EVD showed no differences (p = 0.35), therefore the subsequent analyses were performed without dividing the invasive measurements from IP or EVD. A total of 114 ICPtcd examinations in three separate time frames were performed in 38 patients, 105 ipsilateral to the ICPi placement and 9 contralateral. The most common reasons for not being able to access the ipsilateral sides were due to an inaccessible acoustic window (60%) and a poor signal (40%). Due to a temporary unavailability of the TCD machine, a transcranial colorcoded duplex Doppler was used for ICPtcd measurements in one patient. However, we did not exclude this patient since both ICPi and ICPtcd readings corresponded to values <20 mmHg, and therefore their exclusion would not have modified the results.
As for protocol, during the measurements the PaCO2 in all patients remained within the 36-40 mmHg target range.
With a ROC curve analysis for ICPtcd averaged over times (TIME 1, TIME 2, and TIME 3), the AUC was 96.0% (95% CI 89.8-100%) and the estimated best threshold was at ICPi of 24.8 mmHg corresponding to a sensitivity 100% and a specificity of 91.2% (Fig. 4).
The estimated AUC and bootstrapped 95% CI for ROC were estimated at three separate time points (TIME 1, TIME 2, and TIME 3) and the AUC averaged over time. Pairwise comparisons between different AUC did not show any statistically significant difference (1 vs 2, p = 0.80; 1 vs 3, p = 0.99, 2 vs 3, p = 0.78), indicating that ICPtcd estimation of ICPi was time independent.
Discussion
This is the first prospective multicenter pilot study performed in a cohort of brain-injured patients which showed that ICPtcd had a 100% sensitivity in excluding intracranial hypertension when compared to ICPi. Although the patients were exposed to few episodes of high ICP, this result held true for all values of ICPi above 20 mmHg. The best threshold was at ICPi of 24.8 mmHg corresponding to an ICPtcd sensitivity of 100% and a specificity of 91.2%. ICPtcd was higher than ICPi in the large majority of measurements, which was reflected in the Bland-Altman analysis yielding a mean bias of + 6.2 mmHg. This emphasizes the finding that in patients with acute brain injury recruited in our study, if the ICPtcd was normal, ICPi was certainly normal.
The main goal of our study was to evaluate if ICPtcd could represent a noninvasive screening method to exclude patients without intracranial hypertension and therefore not requiring invasive measurement. There is extensive literature proposing TCD as a tool for noninvasive assessment of ICP [15][16][17][18][19][20][21][22]. In fact, published studies have shown a good concordance between overall ICPi and ICPtcd values. Yet none of these studies have specifically sought to demonstrate that normal ICPtcd can accurately exclude intracranial hypertension. Our results are consistent with a recent multicenter study in 356 traumatic brain injury patients which showed that TCD had a negative predictive value of 98% in excluding neurologic worsening. However, comparison with ICPi was not possible due to the fact that the study enrolled patients with mild to moderate traumatic brain injury. Moreover, the study used the pulsatility index (PI) and TIME1 ICPtcd immediately before ICPi insertion, TIME 2 ICPtcd immediately after ICPi insertion, TIME 3 from 2 to 3 h following ICPi insertion diastolic blood flow velocity as TCD parameters to predict neurologic worsening [43]. TCD-derived PI methods are based on observation that ICP and PI are positively correlated during increases of ICP. However, increase in PI is not specific for increase in ICP. In certain situations, such as a drop in CPP, PI presents an increasing trend, which can be related to either increases in ICP or decreases in arterial blood pressure. The same behavior occurs during decrease in PaCO2 or increase in pulsatility of arterial blood pressure waveform. We used the equation for noninvasive measurement of CPP (CPPe = MAP · FVdia/FVm -1 + 14) proposed by Czosnyka and colleagues based on the fact that specific patterns of TCD waveform, such as a decrease in diastolic flow velocity, reflect impaired cerebral perfusion caused by a decrease in CPP. This formula provides a quantitative assessment of CPP from which ICP can be derived [19][20][21][22]44].
In a recent study, Cardim and colleagues evaluated four methods for noninvasive measurement of ICP; a "black-box" model based on interaction between TCD and arterial blood pressure (nICP_BB); a model based on diastolic flow velocity (nICP_FVd); one based on critical closing pressure (nICP_CrCP); and one on TCDderived pulsatility index (nICP_PI). The first three methods proved to be the best estimators of measured ICP. We believe these findings strengthen our results, since the method we used was indeed the FVd model. Despite that nICP_FVd had a greater 95% CI for prediction of ICP compared to the other two estimators, it was associated with only a marginally better AUC [45].
Although at present ICPtcd cannot replace ICPi as the gold standard for ICP measurement, this simple and cost-effective method incurs no harm to the patient and provides a method of quickly excluding intracranial hypertension in brain-injured patients in the early phase of hospital admission, when other means are unavailable or contraindicated and when saving time is of paramount importance. In fact, following acute brain injury precious time is frequently lost before adequate cerebral monitoring can be initiated. During triage of polytrauma patients within the emergency department, ICPtcd may be helpful in prioritizing treatment when extracerebral lesions are also involved. Also, on admittance to the emergency department, comatose patients can benefit from early TCD evaluation, which provides valuable information regarding ICP and cerebral perfusion [43]. Admission diastolic flow velocity <25 cm/s and pulsatility index >1.3 in adults and children with head injury have been associated with a poor outcome [43][44][45][46][47][48][49]. In a series of 28 severe traumatic brain injury (TBI) patients the authors performed a TCD examination before ICPi monitoring was initiated and identified cerebral hypoperfusion in 46% of patients, which prompted the clinicians to optimize CPP management [49]. Even in the prehospital setting, TCD is feasible and can assist in optimizing early goal-directed therapy [50]. The ICPtcd measurements in our study were performed within 24 hours from brain injury and as soon as possible following ICU admission.
We used a cutoff value of 20 mmHg to define intracranial hypertension, according to the recommendations available at the time the study took place. Current guidelines of the Brain Trauma Foundation indicate that higher ICP values (22 mmHg) should be considered as a threshold [51][52][53][54]. We found that ICPtcd sensitivity remained 100% also for all explored values of ICPi above 20 mmHg. Indeed, ROC analyses estimated the best cutoff for sensitivity (100%) and specificity (91.2%) to be at 24.8 mmHg. However, this analysis was based on few measurements of intracranial hypertension episodes, and requires further investigation.
As a pilot study, we also aimed at calculating a sample size for a future larger and more definite trial. In the literature, the prevalence of intracranial hypertension in the categories of acute brain injury taken into consideration in this study [TBI, subarachnoid hemorrhage (SAH), and intracerebral hemorrhage (ICH)], ranged from 36% to 77% [3,[55][56][57]. Using the calculation method mentioned previously, we estimate a sample size of 490 patients [41,42].
Limitations
Some study limitations are worth considering. First, TCD readings were intermittent and not continuous. However, TCD is a noninvasive method which can be rapidly performed and repeated as many times as needed. Second, we enrolled patients with different types of brain injury, including subarachnoid hemorrhage, intracerebral hemorrhage and stroke, for whom ICP thresholds are not well defined. Small sample size precluded us from comparing the diagnostic accuracy of TCD in different diagnostic categories. Third, the study was unblinded; ICPi and MAP were recorded simultaneously in order to reduce the possibility of value selection during the readings. Finally, most of the 114 measurements had ICP <20 mmHg (94/114), therefore the cohort of brain-injured patients was exposed to few episodes of high ICP.
Conclusions
This prospective multicenter pilot study provides preliminary evidence that ICP estimated with TCD was in line with true ICP in excluding intracranial hypertension. Since the brain-injured patients in our study were exposed to few episodes of high ICP, a study including a greater amount of brain-injured patients with high ICP is warranted.
A large study aiming at enrolling 490 patients is under way (https://www.clinicaltrials.gov -Invasive vs noninvasive Measurement of intracranial PRESSure in brain Injury Trial [IMPRESSIT]). The estimated best threshold value was at an ICPi of 24.8 mmHg corresponding to a sensitivity of 100% and a specificity of 91.2%
|
2018-04-03T05:24:16.963Z
|
2017-02-27T00:00:00.000
|
{
"year": 2017,
"sha1": "1c25f553fdf6525b4da7a97928dd9a3a05a3483e",
"oa_license": "CCBY",
"oa_url": "https://ccforum.biomedcentral.com/track/pdf/10.1186/s13054-017-1632-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c25f553fdf6525b4da7a97928dd9a3a05a3483e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
221778946
|
pes2o/s2orc
|
v3-fos-license
|
Vicarious Reinforcement and Punishment among the Children of the Incarcerated: Using Social Learning Theory to Understand Differential Effects of Parental Incarceration
In this literature synthesis, research concerning the effects of parental incarceration on children is reviewed. Literature from across disciplines is synthesized to advance the understanding of how parental incarceration affect children, as well as to propose vicarious reinforcement and punishment as a potential mechanism to explain positive outcomes of this type of separation. It has been a predominant view that this population is at risk for serious negative outcomes, like behavioral issues, even before parental incarceration. It is obvious that children with parents in prison or jail do constitute an especially fragile population group needing urgent attention for social, educational, and psychological services. However, research findings are mixed and several problems with research on this population have been identified, such as issues with identification, access, as well as research quality. The purpose of this review is to summarize recent research findings on the differential effects of parental incarceration on educational outcomes, as well as introduce vicarious reinforcement and punishment from Bandura’s social learning theory as possible mechanisms that safeguard these children from negative outcomes. Implications for future research and intervention development are offered.
INTRODUCTION
Given that the United States leads the world in per capita rates of incarceration, it is natural that the number of children affected by parental incarceration is also high. Currently, an estimate of 6 million U. S. children have at some point lived without one or both parents due to incarceration (The Annie E. Casey Foundation, 2017). Parental incarceration is a term used across disciplines to describe the experience surrounding the initial arrest, detention, and imprisonment, as well as probation and parole status, of a parent. The experience of parental incarceration involves more than the detainment and removal of the parent from the home; parental reentry also presents challenging interactions. These effects 'beyond the prison walls' are just some of the pain caused by incarceration (Haggerty & Bucerius, 2020). However, with children of the incarcerated do not experience these 'pains'in a homogenous manner (Haggerty & Bacerius, 2020). A proportion of these children succeed academically and do not exhibit anti-social behavior (Author, 2016;Joy et al., 2020;Wakefield & Powell, 2016).
Given that this population is estimated to be 1 out of 9 students in U.S. public schools (Peterson, et al., 2015), educators, psychologists, and counselors are very likely to serve them (Turney, 2019). Despite the array of theoretical frameworks within educational psychology, only social learning theory has been called upon to explain why the children of the incarcerated are more likely to commit crime. This explanation, as well as current research which focuses on parental individual characteristics (e.g., gender), does not account for the children that do graduate high school, attend college, and resist criminal, antisocial behavior. In addition, the effect of parental incarceration has not been examined in terms of learning outcomes other than high school graduation. In this paper, we offer vicarious reinforcement and punishment as a possible explanation for why some children of the incarcerated engage in prosocial behavior. In light of this explanation, practitioners need not treat the children of prisoners in a one-size-fitsall fashion (Johnson et al., 2018).
To date, the academic success of the children of the incarcerated has not extensively studied. Although the captive audience of the imprisoned parents have been studied in the past, these parents rarely have an understanding of their children's experiences (Haskins & Jacobsen, 2017). In fact, a majority of the state prison population reported never getting to see their children for visitation (Glaze & Maruscak, 2010;Rabuy & Kopf, 2015). In terms of academic challenges, Turney (2014) found high rates of learning disabilities, communication problems, and developmental delays among these children. In order to understand the supports and barriers to success for this population, longitudinal educational research must be conducted. In this synthesis, we review the existing literature on resilent children of the incarcerated and offer vicarious reinforcement and punishment as possible safeguards for these children. Areas for future research include the effects of which parent is in prison, other role models, peer groups, environmental factors, and intervention programs. Such research could better focus future resources for targeted early intervention to promote high school and college graduation as well as prosocial behavior.
LITERATURE REVIEW
We have conducted a review of the literature to advance the understanding of how parental incarceration affects children, some of whom develop resiliency. This research area has been the subject of a number of published and unpublished works in such diverse disciplines as criminology, family science, law studies, psychology, social work, and sociology. Based on recent evidence, we propose that vicarious punishment and negatively reinforced behaviors can explain children's behavioral reactions to parental incarceration.
PREVALANCE AND POLICY
Recently, there has been an upsurge in interest in the well-being of the children of the incarcerated from researchers, policymakers, and human service providers. The first type of research that has been conducted involves the prevalence of this population. The United States as compared to other industrialized countries currently has one of the highest number of children (about six million) with incarcerated parents (Peterson et al., 2015; The Annie E. Casey Foundation, 2017). Although the risk of maternal incarceration has risen over the last 40 years, paternal incarceration is still more prevalent. Children of less-educated mothers and minority groups are significantly more at-risk to experience parental incarceration (Turney & Adams, 2016;Wildeman, 2009).
The extent of the problem had triggered serious concern on the part of the federal government including the President of the United States. In 2013, President Obama called for an urgent inter-agency collaboration to address the problem. As a result, the Children of Incarcerated Parents Working Group was created that consisted of representatives of the U.S. Departments of Health and Human Services, Justice, Housing and Urban Development, Agriculture, as well as the Social Security Administration (see Garcia, 2013). This working group created a toolkit for child welfare agencies working with the children. However, only a portion of the children end up in foster care. This is more likely when the mother is incarcerated (Jones et al., 2019).
Researchers are compiling important information to understand the various facets of the impact of parental incarceration on the children, their families, and society. Not only do the circumstances of incarceration vary, the extent of contact with children also varies between state facilities as compared to federal facilities (Glaze & Maruscak, 2008). For example, many federal facilities are located far from the prisoner's home, often out of their home state, making it difficult for visitation to occur. Furthermore, visitation procedures can be "intrusive and traumatic" with the security put downs and presence of weapons from correctional staff (Turney, 2019, p. 26). Even phone calls can be prohibitively expensive to make. Contact with the incarcerated parent is just one individual factor that may contribute to a child's life outcomes (Rabuy & Kopf, 2015).
EDUCATIONAL OUTCOMES
What is the real impact on students as they experience parental incarceration? There is scant research available to answer this question. The evidence points to an indirect relationship. General reviews have also been conducted on the children of the incarcerated (Adams, 2018). In 2014, the American Psychological Association released a cross-cultural collection of studies on the effects of incarceration, which highlighted the increased risk for future criminal behavior among the children of the incarcerated. Murray, Farrington, and Sekol (2012) conducted a meta-analytic review of studies on parental incarceration examining various child outcomes and found the same result regarding antisocial behavior. In terms of educational outcomes, the results vary across the samples (Cox, 2009;Dannerback, 2005;Geller, Cooper, Garfinkel, Schwartz-Soicher, & Mincy, 2012;Gordon, 2009;Hagan & Foster, 2012a;Murray & Farrington, 2008;Murray, Loeber, & Pardini, 2012;Neal, 2009;Ng, Sarri, & Stoffregen, 2013;Stanton, 1980;Stroble, 1997;Trice & Brewster, 2004). A significant association between parental incarceration and poor educational outcomes was found across samples. Specifically, children affected by parental incarceration were 1.4 times more likely to perform poorly in school with a slightly higher chance (OR = 1.5) among children in community-based samples. The relationship between parental incarceration and educational outcomes was significantly weaker across studies.
School readiness in terms of behavioral expectations have been found weaker among children of the incarcerated accounting for the high prevalence of special education placement (Haskins, 2014). Moreover, students with incarcerated fathers are significantly more likely to be held back in elementary school retention, controlling for behavioral reports and test scores. Teachers' perceptions of the child's academic ability were found to moderate this relationship (Turney & Haskins, 2014). Research also cites stigma as the most detrimental direct effect of parental incarceration affecting educational performance. Teachers were found to have significantly lower expectations for students whose mothers were incarcerated compared to a group of students whose mothers were absent from home for other reasons (Dalliare, Ciccone, & Wilson, 2010). Parke and Clarke-Stewart (2002) identify school problems as long-term effects of parental incarceration on school-age children. Problems such as "learning disabilities, attention deficit disorder and attention deficit hyperactivity disorder, behavioral or conduct problems, developmental delays, and speech or language problems" are very predominant in this population (Turney, 2014, p. 302). Also, paternal incarceration has been found to be associated with social exclusion (Foster & Hagan, 2007) and lower GPA (Hagan & Foster, 2012a). Further analyses using propensity scores revealed that the likelihood of paternal incarceration was more predictive of lower GPA, than actual incarceration (Foster & Hagan, 2009). Similar associations were found when maternal incarceration was examined (Hagan & Foster, 2012b). More recent research, however has found that an association between low grades in school and parental incarceration may be chiefly due to selection effects (McCauley, 2020). This adds to the list of problems with research on this population, including issues with identification, access, as well as research quality (Billings, 2017).
INTERVENTIONS
What is being done to prevent negative academic outcomes? Outside of prisonbased parenting programs (Henson, 2020), few school-based interventions have been developed due to the many issues innate in the implementation of such programs (Vacca, 2008). Part of the issue is identification; authorities are not required to contact public schools upon incarceration of a parent. Another part of the issue is stigma (Miller & Crain, 2020); once identified, for children of incarcerated parents, being singled out would be problematic. Although access is often cited as a barrier for research in this area (Easterling & Johnson, 2015), the children of the incarcerated parents are students in public schools where teachers, counselors, and administrators can make a difference.
DIFFERENTIAL EFFECTS
There are differential effects of a parental incarceration depending on individual environmental factors (Johnson et al., 2018). Environmental factors, like a role model can positively impact academic outcomes for the children of the incarcerated (Joy et al, 2020). Likewise, there are different trajectories of developing internalized problms (e.g., depression and anxiety) or externalized behaviors (e.g., aggression and vandalism) among the children of the incarcerated (Kjellstrand et al., 2018;Kjellstrand et al., 2020;Sullivan 2019). Social contexts of children's lives, including demographics, behavioral characteristics, and socioeconomic status, are important because they determine the consequences of parental incarceration Turney (2017). These factors allow for heterogeneous consequences based on what type of exposure or risk they have for parental incarceration. It determines the impact something like parent incarceration will have on a child. For instance, African-American boys have a higher risk of experiencing paternal incarceration (Haskins et al., 2018;Turney & Adams, 2016).
However, Turney (2017) argues that the children who have the lowest chance of parental incarceration are impacted more than children who have a moderate or high risk of parental incarceration. Turney studied the associations between parental incarceration, externalizing and internalizing behaviors, juvenile delinquency, reading and math comprehension, and verbal ability. Children were divided into three groups based on risk for parental incarceration. The children in the first group, who had the lowest risk of parental incarceration, were significantly more impaired by both externalizing and internalizing behaviors, as well as lower comprehension and verbal ability. Those with only a moderate risk of parental incarceration showed the same trend without the effect on juvenile delinquency, math comprehension, and verbal ability. Despite the highest risk of parental incarceration, children in the third group only had significantly higher rates of juvenile delinquency and externalizing behaviors.
For all children, parental incarceration is a stressor. Those children with low risks of parental incarceration perceive parental incarceration as an event stressor. An event stressor is any unanticipated life changing event that is especially detrimental one's well-being. These children are impacted the most because of the social disruption and family instability the incarceration causes. Children who have prior experience with parental incarceration perceive it as a chronic stressor. Chronic stressors are a product of the social environment and have harmful effects on the people's well-being. Parental incarceration for high-risk children adds to the disadvantages that they are already facing. However, some children learn to cope with this stress, become resilient, and eventually succeed (Author, 2016;Joy et al., 2020).
Positive consequences of parental incarceration and positive attributes of the children affected by parental incarceration are not normally studied (Johnson et al., 2018). Wakefield and Powell (2016) argue that some parents would not contribute positively and do less harm when incarcerated. This is especially true in cases where a harmful father (as opposed to a helpful father) is incarcerated. For example, when harmful fathers wo are violent are removed from the home, children tend to benefit. Despite this finding, alternatives to incarceration, such as substance abuse treatment are suggested. More research in this area is needed to uncover the exact beneficial means of parental incarceration (Billlings, 2017). Based on that knowledge, policies could be made that will be more advantageous to the children.
Instead, there is a trend in the literature that focuses on exploring negative impacts. Billings (2017) further explored the idea that not only negative consequences occur as a product of parental incarceration. Positive impacts most likely appear when the negative role model is removed from the situation. It is also possible that positive consequences occur when the negative role models are removed and an abusive relationship ends or is escaped. Billings extends the work of Wakefield and Powell (2016) by examining the effect of maternal incarceration on female children. In discussing this relationship, Billings explains that an abusive mother is highly influential and if removed can allow more positive effects to transpire. Billings (2017) attempted to tease out the long-term effects of parental incarceration from the short-term effects of parental arrests concerning academic achievement and behaviors (as measured by a behavior index and school crimes). The more times a child experienced a parent being arrested, the lower the average test scores, reading scores, and math scores, as well as the chances of graduating high school. However, parental arrests were positively related behavioral problems and school crimes. The exact opposite was true for the associations with parental incarceration. The longer a child experienced a parent being incarcerated, the higher their average test scores, reading scores, and math scores, as well as the likelihood that they would graduate from high school. Parental incarceration was also associated with fewer behavioral issues and school crimes. In sum, arrests tend to have a negative short-term effect on student educational outcomes and behavior. Incarceration, however, may have a positive long-term effect on the same outcomes. Hence, it is possible that the separation of the parent involved in incarceration served a protective function as compared to the reoccurring trauma associated with parental arrest (Johnson et al., 2018;Wakefield & Powell, 2016). But through what mechanism could parental incarceration have the possibility of positively impacting children's lives?
VICARIOUS REINFORCEMENT AND PUNISHMENT
We propose one possible mechanism in terms of children learning from their parent's incarceration through vicarious reinforcement and punishment. Albert Bandura's social learning theory emphasizes the importance of observational learning or vicarious learning and modeling that affects the cognitive and behavioral processes of a person (1977). Observational learning occurs when observing people, situations, and events in an environment (Bandura, 1977). Modeling refers to the actors engaging in the observed behaviors. When observing the behaviors of models, behaviors may be reinforced based on their outcomes (Bandura, 1977). This seminal work is responsible for our understanding of how both aggression and moral disengagement is developed over time (Bandura, 1978;1999).
Observational learning follows the logic of operational conditioning in which certain behaviors are more likely to reoccur or less likely to occur depending on the consequences. Behaviors can be positively or negative reinforced. Positive reinforcement is the result of a behavior being followed by favorable outcomes. Negative reinforcement relates to the strengthening of behaviors by avoiding an aversive stimulus. Vicarious punishment, an original concept by Bandura, is similar to operant conditioning with observations of consequences to others' behavior setting learning in motion. Since social learning operates under the basic assumption that people learn from other peoples' experiences, when the model is seen being punished for certain behaviors observers are more likely to inhibit the same type of behaviors to avoid undesired consequences (Bandura, 1977). In essence, the onlooker's behavior can be modified prospectively without engaging in the undesired behavior.
Applying Bandura's theory to parental incarceration, behaviors are negatively reinforced or vicariously punished. Parents are punished for undesirable behavior. Children whose parents are incarcerated observe the undesired consequences of criminal behaviors. In order to not follow their parent's footsteps, they change their own behaviors, including avoiding antisocial behavior. Instead, children of the incarcerated may engage in more socially positive behaviors, such as going to school and getting better grades to avoid failure, negative attention, trouble with the law, etc. (Joy et al., 2020). In other words, while socially positive behaviors increase, socially negative behaviors decrease. This translates to negative reinforcement of prosocial behavior. Bandura's idea of vicarious punishment can also be applied to the long-term effects of parental incarceration on children's test scores, behavior, and likelihood to graduate from high school (Billings, 2017).
Research revealed that children who have parents who have been, or are, incarcerated are more positively affected than children who have parents who have been arrested (Billings, 2017;Wakefield & Powell, 2016). In fact, these children were less likely to misbehave-in school or in general, have higher test scores, and exhibit lower high school drop-out rates. Vicarious punishment can be applied to this situation in the sense that children have experienced or seen the effects of criminal behavior on their parents; therefore, positive behaviors are reinforced. The model, the parent in this case, experiences the negative consequences of their actions. The child sees the consequences of the model's actions; thus, making it more likely that the child will inhibit similar behaviors in order to avoid experiencing the negative consequences observed (Bandura, 1977). Likewise, the child is likely to engage in socially acceptable behavior in contexts, like school, in order to reduce the likelihood of negative attention altogether. This may be especially prudent for children avoiding the well-documented stigma associated with a parent being incarcerated. By abiding by rules, norms, and regulations one could fly under the radar, avoiding being labeled an "at-risk" child through the mechanism of vicarious reinforcement and punishment.
More recent research documents the vicarious mechanism in adult children of the incarcerated as they reflect on the effects. Young et al. (2020) documented how parental incarceration is perceived as a turning point for many children, a time to start taking school seriously. As a result, children of the incarcerated develop adaptive coping strategies and resilience against later challenges in life. Joy et al. (2020) found that through coping skills, like finding a positive role model, being involved in group activities at school, and embracing spirituality adult college students who experienced parental incarceration are very successful in college. For instance, college students who experience parental incarceration report more selfregulated learning strategies, like monitoring their comprehension and seeling help when needed, compared to their peers (Author, 2016). By learning from successful adult children of the incarcerated, effective intervention could be developed to promote resiliency and coping skill development. Although the research is just starting to emerge, more information is need to test the theory of vicarious reinforcement and punishment in the case of parental incarceration.
RECOMMENDATIONS FOR FUTURE RESEARCH
The purpose of this article has been to summarize and apply the theories of vicarious reinforcement and punishment to recent research findings on the effects of parental incarceration on educational outcomes, as well as to underscore the need for effective interventions. To date, positive attributes of the children of the incarcerated have rarely been studied. Social workers and criminologists have primarily studied this population with the aims of providing immediate assistance or predicting future criminal behavior. Given the prevalence of parental incarceration, these children are likely served by psychologists, counselors, as well as educators (Turney, 2019). Even if it is a small percentage, many of these students do go on to postsecondary institutions (Author, 2016;Joy et al., 2020. Social learning theory has been used in past research to explain why the children of the incarcerated are more likely to commit crime. However, until now this explanation did not account for the children that do graduate high school, attend college, and resist criminal behavior. In addition, the effect of parental incarceration had not been examined in terms of learning outcomes other than high school graduation until recently.
Besides the negative outcomes associated with children of the incarcerated, not much is known about the children that are successful despite this possibly traumatic separation. Future research should contribute to the literature on parental incarceration by offering a psycho-educational perspective on possible safeguards and persistence in this population. For example, using anonymous surveys researchers could examine individual characteristics, like academic motivation, persistence, and self-regulation, as well as any existing differences between college students that experienced parental incarceration and those that did not. In order to provide depth to any findings from the survey, qualitative data could help further investigate possible safeguards of parental incarceration among college students. Findings from such students could help institutions of higher learning better serve this population of future students.
In conclusion, even though parental incarceration presents certain challenges for children, such adversity may lead to resilience. Preliminary results from suggested future research will tell us how children of prisoners differ from their peers in terms of academic and motivational factors. Themes from potential interviews with the students may reveal psycho-educational safeguards in this population. Such research can help scholars and practitioners develop the interventions/programs necessary for college students that experience(d) parental incarceration. Our advice to practioners is to be patient and support the children in a nonjudgemental manner (Turney, 2019). We highly recommend treating these children optimistically and discourage using labels that could potentially harm the child. Since they deal with more stress and strife than their peers, they may need additional help in school, therefore we urge you to be understanding of their situation (Turney, 2019
|
2020-09-03T09:04:27.841Z
|
2020-08-31T00:00:00.000
|
{
"year": 2020,
"sha1": "5d5b9572bb119901fccc3604c23ec8c02fd225c5",
"oa_license": null,
"oa_url": "https://digitalcommons.georgiasouthern.edu/cgi/viewcontent.cgi?article=1091&context=nyar",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "781c73dccc8f62d8094893a3d692a71dade3df2f",
"s2fieldsofstudy": [
"Sociology",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
245897950
|
pes2o/s2orc
|
v3-fos-license
|
Does Cultural Hofstede dimension "Indulgence versus Restraint" impact the Corporate Performance
Jordan, as an Arab country located in the Middle East, began to pay attention to change. Jordanian society began to move from an old-style state to a contemporary state. Where globalization had a signi�icant impact on culture, either in terms of individual culture and organizational culture. On the other hand, many studies emphasized that culture through its dimensions has an important role in in�luencing the corporate atmosphere or corporate social responsibility, as well as affecting the �inancial performance of companies. Moreover, the literature indicates that cultural dimensions in�luence the conduct of individuals and the performance of �irms. The literature indicated that there are different dimensions of culture, including Hofstede's dimensions of culture, which are Power Distance Index; Individualism versus collectivism; Masculinity versus Femininity; Uncertainty Avoidance index; Long-versus ShortTerm Orientation; Indulgence versus Restraint. This study focused on the Indulgence versus Restraint dimension, as a recent dimension, where through the literature it can be predicted the existence of a critical role for the indulgence dimension on performance. After reviewing the literature related to the Indulgence dimension, it was found that there is a shortage and almost no local studies conducted in Jordan regarding the Indulgence versus Restraint dimension. Since this dimension is new, this paper stresses the importance of more research on this subject to expand the local literature on Indulgence and its impact on the performance in the Jordan context and compare it with other countries in order to provide important and useful results for the policymakers. Key word: Corporate Performance, Indulgence Versus Restraint Dimension, Jordanian Society. Does Cultural Hofstede dimension "Indulgence versus Restraint" impact the Corporate Performance Houda Qasim Aleqedat Doctoral School of Management and Business Administration Hungarian University of Agriculture and Life Sciences Institute of Economics, Hungary Author E-mail: hudaeqedat@yahoo.com INTRODUCTION The Jordanian society has begun to enter the circle of modernity and undergoes transitional processes from traditional culture to a culture of openness. Traditional life is usually characterized by harmony and relative harmony in addition to preserving cultural origin. A set of public values is usually transmitted through generations in
INTRODUCTION
The Jordanian society has begun to enter the circle of modernity and undergoes transitional processes from traditional culture to a culture of openness. Traditional life is usually characterized by harmony and relative harmony in addition to preserving cultural origin. A set of public values is usually transmitted through generations in a formal manner through various educational institutions. Which the community has set up for this purpose, such as the school, in informal ways such as the family (Abu Asbah, 2012). This transition from a traditional to a state of modernity and globalization has had a great impact on the individual culture and organizational culture of organizations, and some have considered globalization as a protective shield against many of the challenges faced by organizations and their workers.
The organizational culture (Schein, 1992) refers to concepts, philosophies, concepts, values, assumptions, beliefs, expectations, attitudes and standards. These concepts form the framework that illustrates the way in which the work in the organization should be taught to the new members of the organization as correct ways of perception and thinking. Organizational culture forms the standard patterns of individual behavioral practices and interpersonal relationships that arise between them.
It is clear from a review of literature on the subject of cultural dimensions that there is an impact of cultures on the behavior of individuals, in turn, it has an impact on the performance of companies. This study is a literature review of the previous studies related to the sixth dimension, indulgence versus restraint. It is found that there are few studies in the literature on this subject, and there are no related studies in Jordan. The current study highlights the impact of indulgence on performance, future studies should examine the impact of this dimension on the performance in the Jordan context and compare it with other countries in order to provide important and useful results for the policymakers.
Research Questions (Problem statement)
The research will answer the key question which is: Does the Cultural Hofstede dimension "Indulgence versus Restraint" in�luence Corporate Performance?
Research Objectives
The main purpose of the research is to highlight the impact of Indulgence versus Restraint as a Cultural Hofstede dimension on Corporate Performance.
Research Signi�icance
The study uses the sixth Hofstede cultural dimension Indulgence versus Restraint to measure the culture. Using this dimension the indulgence and its impact on the corporate performance would contribute to the literature due this topic was not conducted widely in the literature particularly in the Jordan context since it is considered as a new dimension. This study will highlight the important role of indulgence on the corporate performance.
METHODOLOGY
This paper is a review paper based on published articles and review the present literature related to the paper topic, particularly in Jordan to endeavour to determine the gap in that literature.
Culture
Culture has become an important topic to most researchers these days. it is considered one of the concepts that were used in the past in agricultural topics, and recently appeared in recent Europe in the seventeenth, eighteenth, and nineteenth centuries, and began to be practiced in non-agricultural subjects such as the education of persons, global human capability, national standards, and ambitions (Yaşar, 2014). Culture expresses the different behaviors of people across countries. This difference explains the difference in views on some topics.
Culture, in the narrow de�inition, in Europe means "civilization", education, art, and literature. Whereas the broader de�inition meaning of 'culture' means perceptual software. Culture shows social patterns such as thinking, sense, and behaving, as well as other behaviors such as greeting, eating, and viewing feelings. Culture expresses the patterns of a group of people such (Hofstede and Minkov, 2010). Heroes are considered role models and appreciated in every society. In Jordanian culture, there is more than one hero and a heritage �igure known over time, including Mustafa Wahbi al-Tal, one of the most famous poets in Jordan at all, and one of the stallions of contemporary Arab poetry, he was called the poet of Jordan, and Arar. In his poetry, quality and sobriety, anti-injustice, and anti-colonialism.
With regard to Rituals, they represent habits within the culture of each country, such as greeting, farewell, and celebration of occasions. Jordan has distinctive and inherited social habits and traditions. It is practiced by individuals and groups frequently, and which society imposes on its members, so they cannot deviate from it and violate it, such as the customs of hospitality, and the habit of giving gifts on various social occasions. Social habits should be congruent with positive social values, that is, values that people desire and love. There are acceptable social habits practiced by individuals and groups in our Jordanian society, such as the habit of cooperation, and among the good social habits in Jordan is the habit of hospitality, and this habit is represented in receiving the guest with cheerfulness and openness, and the host offering his guest whatever food is available. There is also the habit of kissing the cheeks and shaking hands.
Furthermore, Generosity and hospitality in Jordanian society are some of the most prominent social and intrinsic values.
The base of building this model values, which are located in the centre, these values which individuals learn from childhood. Where values are built at the age of ten, and after that these beliefs cannot be changed. These social values, are the preferred thing, or any judgment we give about something, and value is something desirable at the individual and collective levels.
Values direct human behavior, and regulate his relationships with others and with reality itself.
The current study tries to highlight the impact of Indulgence versus Restraint as a Cultural Hofstede dimension on Corporate Performance.
Differences between cultures
Globalization is considered one of the most important events that have been at the forefront of the world for a long time until this moment, as countries have become open to each other, as globalization has led to companies' interest in internationalization and growth, and this has led to an increased interest in conducting research related to intercultural communication. Since most of the major companies have branches in different parts of the world, so besides paying attention to �inancial prosperity, it is necessary to take into account the importance of intercultural communication especially that they have business partners whose culture differs from their own.
Hofstede cultural Dimensions
Hofstede is the founder of the Hofstede cultural dimensions model. Where he indicated that culture has a role in impact on the values of society. Furthermore, culture is scope for intercultural connection. Thus, the Hofstede Model is a structure that helps organizations crosswise cultures work together more effectively. Hofstede developed the �ive cultural dimensions, namely: power distance, individualism-collectivism, masculinity-femininity, uncertainty avoidance. Long-term orientation vs. short-term orientation (LTO), later the sixth cultural dimension has been developed recently that known of Indulgence vs. restraint (IND). The latter dimension was added by Hofstede and Minkov in 2010 (Hofstede & Minkov, 2010).
Indulgence vs. restraint (IND) de�inition
The indulgence vs. restraint (IND) represents the independent preferences that distinguish countries from each other. Indulgence society allows grati�ication human desires related to the enjoyment of life (caring about the satisfaction of needs). While the restraint society restricted and controlling the satisfaction of needs (rescinds meeting needs and cares about strict social standards) (Hofstede, 2010). In other words, the Indulgent cultures focus on happiness, positive feelings, easily expressed, and well-being, care about free time and individual freedom. These things do not exist in restricted societies.
It is a new dimension, it was derived from the literature of "happiness". Which is essentially linked to the national levels of self-happiness and control of life. Created by Michael Minkov to cover social variances shown by the World survey of values (WVS) for representative samples of 93 societies, and it is unsolved by Hofstede's other �ive dimensions.
It's somewhat matching with the long-term versus the short-term but in a poor way. It's covered the other sides which are not covered by the other dimensions. This dimension scores in 93 countries (Hofstede, 2011). Figure (1) indicates that the indulgent culture in South and North America, Western Europe, and some regions in Sub-Sahara Africa. On the contrary, Eastern Europe, Asia, and the Muslim world are considered a restraint culture. The following �igure (2) indicates the differences between Indulgent and Restrained Societies.
Empirical Studies (The relationship between the Indulgence and Corporate Performance)
There are rare studies conducted on the sixth dimension, due to it is considered a new dimension, and in particular, there were no studies linked this dimension to the performance of the company directly. (Sun and et al., 2018) investigated the effect of indulgence versus restraint (IND) as a moderating variable on the relationship between corporate social performance (CSP) and corporate �inancial performance (CFP). The study found that in indulgent countries the social performance effect positive on �inancial performance but in a weak way. According to (Shi and Veenstra, 2015) in high indulgent countries if the companies have high CSP the governing legality would lower levels. Therefore, the CSP interacted with indulgence will negatively in�luence on CFP. Conversely, in high indulgent countries, a high CSP leads to an absence of prospects of shareholders, so the interaction between CSP and indulgence will positively in�luence on CFP. (Oliveira, 2016) indicated a higher dividend payout in countries with a high level of indulgence. impact the Corporate In emerging countries, the dividend payout is fewer affected by national culture comparing with advanced countries. Furthermore, she indicated that good corporate governance decreases the in�luence of the culture. Furthermore, the study con�irmed that this in�luence varies within emerging and advanced countries. The previous results were con�irmed by (Halkos and Skouloudis, 2017) who investigated the relationship between corporate social responsibility (CSR) and the Hofstede cultural dimensions. The study concluded that culture forms the national and the characteristics of corporate social responsibility, and indulgence vs restraint positively in�luences on the CSR index. Thus, as the literature shows that the CSP interacted with indulgence and this interaction will in�luence on corporate �inancial performance. (hatmanu and et al., 2014) indicated that the business environment affected by practices of national culture, as a result, will impact on societal variables. The study also indicated that the acceptance of e-government in Romania may be affected by indulgence versus restraint.
Depend on the above studies it is noticed the important role of indulgence versus restraint as one of the cultural dimensions that play an effect on the business environment or on CSR. Later it will affect corporate �inancial performance.
Regarding Jordan's context as mentioned above that there are no studies conducted especially about this dimension. However, there are studies that addressed the cultural dimensions in general such as (Waq�i, 2004) who concluded that the culture of management is not the same in the Jordanian banks. As well as the managers are not recognizing the role of culture on performance. (Waq�i, 2004) recommended that if the managers understanding the culture that will be enhanced the bank performance. (Afaneh and et al., 2014) explored Hofstede's cultural dimensions in Jordanian private universities and their in�luence on the commitment of the organization. The researchers concluded that the managers are committed to the universities due to the managers tend to be collectivist. As well as the study found that Jordanian universities are categorized by the mutual of fourth cultural dimensions which in�luence on organizational commitment. The study stimulated the Jordanian universities to reconsider the culture. Such as teamwork in order to improve the performance. (Al-Harsh, 2008) examined the in�luence of Hofstede's cultural dimensions on the individuals in Jordanian commercial banks. The study concluded that individuals in Jordanian banks be likely to be collective and masculine in their actions. And they don prefer the Long-term orientation in the future. This study advises the Jordanian banks conducting Research & Development activities and embracing selfgoverning applications to eliminate the factors that discourage innovation.
Indulgence vs. restraint in Jordan
Based on mentioned above the Muslim world is considered a restraint culture. Therefore, Jordan scores (43) on the indulgence dimension that means that Jordan is not indulgent culture. People are bound by norms and society restricts the satisfaction of their needs (Hofstede website). Further, based on the �indings of studies of the impact of indulgence on the corporate performance and according to classi�ications of Hofstede of the Arab world that indicated Jordan is restraint culture. It can be predicted that the indulgence dimension could be in�luenced negatively on corporate performance. This can be proved or denied when conducting empirical studies in Jordan.
RESULTS AND DISCUSSION
The Jordanian society has begun to enter the circle of modernity and undergoes transitional processes from traditional culture to a culture of openness. This transition from a traditional to a state of modernity and globalization has had a great impact on the individual culture and organizational culture of organizations. As the literature indicates that cultural dimensions in�luences on the behavior of individuals and later, as a result, may in�luence on the performance of companies. Therefore, this study reviews the literature related to the sixth dimension, indulgence versus restraint. Thus, after a survey of the literature of the Indulgence dimension and based on the discussion above we believe that there is a critical role of indulgence dimension on the performance. However, there are no conducted studies in Jordan regarding the Indulgence vs. restraint dimension.
CONCLUSION AND RECOMMENDATION
This paper contributes to the reference to the importance of this dimension as any other, and we believe based on research related to happiness, which showed that the indulgence societies feel free and full happiness and this re�lected on their values and behaviour in a positive way, which positively affects the performance of the company. Where (Martins, M.M. and Lopes, I.T., 2016) indicated that a Higher Indulgence culture has a higher pro�itability. Therefore, due to this dimension is new, this paper emphasizes the importance of further research on this subject to enrich the literature of the Indulgence. Future research can improve or denied these results, since Hofstede and many studies have shown that the indulgence dimension in�luences behaviour.
Recommendation
Depending on the �indings of studies that show the indulgence dimension impact on the performance of the company, the researcher recommends further research in Jordan to examine the impact of this dimension on the performance of Jordanian companies.
Furthermore, since the score of Hofstede's cultural dimensions vary across countries. So future comparison studies should be conducted in addition to the Jordan context.
The importance of Future Research
The current study could be the basis of further future researches regarding the Hofstede cultural dimensions of indulgence and its impact on performance in the Jordan context. Future research about this sixth cultural dimension and its impact on performance would be an opportunity for future studies as individual behaviour has an impact on the companies. Furthermore, further comparing studies would be valuable with other countries from the developing and developed countries. As Hofstede pointed out that the scores of Hofstede's dimensions are comparative, and it is useful to compare these societies. So conducting this research in Jordan's context and make comparisons with the other countries that differ in culture, would be more important. As the current study highlights the impact of indulgence on the performance, future studies should examine the impact of this dimension on the performance in the Jordan context and compare it with other countries (across countries). Future studies could be conducted on one of the big sectors in Jordan such as �inancial sectors as this sector form a large percentage
|
2022-01-13T16:23:30.959Z
|
2021-11-30T00:00:00.000
|
{
"year": 2021,
"sha1": "492d63f5a6816623e9193bda8e646a858673def1",
"oa_license": "CCBY",
"oa_url": "https://jscd.ipmi.ac.id/index.php/jscd/article/download/49/32",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "186633fda4ac5ac1442ce1eb473bc1a673724c6f",
"s2fieldsofstudy": [
"Business",
"Sociology"
],
"extfieldsofstudy": []
}
|
244642070
|
pes2o/s2orc
|
v3-fos-license
|
Sub Theme: Disaster Mitigation in the Society 5.0 The Complexity of Abrasion Disaster Mitigation on Rupat Island
Abrasion disasters and environmental issues are strategic issues that are widely studied from various study perspectives, this article is the result of research conducted by researchers from the perspective of public administration. The Abrasion Disaster on Rupat Island is a disaster that can cause various social and economic impacts on the community. In 2019, the abrasion rate on Rupat Island reached 6-8 meters. Rupat Island is also one of the outer islands of Indonesia which borders with neighbouring countries, namely Malaysia and is included in the National Tourism Strategic Area (KSPN). Therefore, the abrasion disaster that occurred on Rupat Island, Bengkalis Regency must be managed in order to minimize the impact of the abrasion disaster that occurred. Abrasion disaster management can be done one of them by means of abrasion disaster management or efforts made to regulate the reduction of abrasion disaster risk. The purpose research is to know abrasion disaster management actors in Rupat Bengkalis and determine what course the limitations in disaster management abrasion in Rupat Bengkalis. This type of research is qualitative research using data collection methods through interviews and documentation. The findings in this study is that the management effort abrasion disaster in Rupat actor countermeasures abrasion in Rupat not maximized This is caused by things still are limitations in disaster management is done. The value in this study is that disaster management actors at the regional and central levels cooperate with each other in order to maximize disaster management efforts
Introduction
Rupat Island is the outermost island in Riau Province, with a strategic coastal area because it is directly opposite neighboring Malaysia. On the other hand, Rupat Island forces the local government and the central government to have extra attention, this is because, it is a critical coastal area because it is eroded by abrasion every year. (Rahmat Hidayat, 2014).
The uniqueness of Rupat Island is the National Coastal Strategic Area (KSPN) which
Proceedings IAPA Annual Conference 2021: "Governance and Public Policy in The Society 5.0" 109 experiences severe abrasion every year, but quantitatively in 2019 the length of abrasion on Rupat Island reached 6 -8 meters. (BWSS III Pekanbaru, 2021).
Abrasion is a natural disaster that requires special, serious and appropriate attention by the local government and also the central government because the abrasion disaster on Rupat Island has a socio-economic impact on the community. (Rahmat Hidayat, 2014). The social and economic impact of abrasion, ideally, requires special attention from the local government by seeking appropriate abrasion disaster management according to the characteristics of the abrasion that occurred on Rupat Island.
Disaster management is an effort to minimize the impact of a disaster that is supported by planning before the disaster occurs, when a disaster occurs or after a disaster (Soehatman, Ramli 2010). This impact reduction is carried out by disaster management actors starting before the disaster occurs, is happening and after the disaster occurs. Disaster management is carried out with a pattern of structural development and non-structural development. The construction of structures and non-structural developments is the result of the coordination process between actors in the management of abrasion disasters. (Soehatman, Ramli 2010).
Rupat Island is included in the Strategic Development Area (WPS). Ensuring regional-
Roles of coordination of actors for Abrasion in Rupat Island
Based on Figure I, it can be seen that the ideal coordination pattern must be carried out in the abrasion disaster coordination pattern. however, the complexity of planning and realization as well as the focus of development needs are limitations. collaboration at the district government level, namely the Bengkalis Regency BAPPEDA, the Bengkalis Regency Environmental Service and the PUPR Office. At the district government level, it is also acknowledged that currently the Bengkalis Regency, especially on Rupat Island, Bengkalis Island and several sub-districts located in mainland Riau, are under construction, and it is impossible to focus solely on abrasion. high-cost abrasion management, while the district is still fixing its infrastructure, personnel expenditures and other development sectors This article will show the complexity of abrasion disaster management on Rupat Island.
Abrasion disasters are different from disasters in Riau Province such as the Haze. The complexity that arises between development priorities, communication between actors and classic problems in the budget becomes the grassroots in the abrasion disaster. on the other hand, the higher the abrasion especially the peat soil structure.
Methods
Through qualitative research methods, by examining the condition of objects naturally and emphasizing research results on the meaning of the actual data (Sugiyono, 2014).
Primary and secondary data were obtained through interviews, observation and documentation. The primary and secondary data were analyzed using the Interactive Analysis Model system. This analysis system starts from data collection, data reduction, data presentation and conclusion drawing.
Districts Level
Village Level The author involves all the actors in Figure
Results and Discussion
The complexity of abrasion disaster management is of particular concern, in this article. This article is the result of research related to abrasion disaster management in Bengkalis Regency, namely Rupat Island. begins by looking at the abrasion disaster mitigation that has been carried out. Usually, if a disaster occurs, it will be the responsibility of the regional disaster management agency (BPBD Bengkalis Regency). Abrasion disaster, is not a disaster with human victims, but human life in the future will be very influential. and BPBD is not the central actor for Abrasion Disasters like disasters in general.
There are three stages in disaster management that should be fulfilled, such as the first stage, namely before a disaster occurs in the form of preparedness, mitigation and early warning. The second stage is when a disaster occurs and the third stage is post -disaster by conducting rehabilitation and reconstruction (Soehatman, Ramli 2010). The ideal disaster management is to prepare a layout for these three stages, but as in the previous paragraph, abrasion disaster management is different from other disasters. regardless of the stage of a disaster, interpreting an abrasion event is certainly very difficult to interpret because it occurs naturally. Horizontal communication is carried out with local government to village levels.
Budget-based planning is very important and has the power to be carried out with a pattern of vertical coordination between actors. Coordination is carried out vertically between the central and regional governments, this is because it is realized that in carrying out abrasion disaster mitigation it requires large costs. for the last eight years for the prevention or mitigation of abrasion disasters that have been carried out have spent Rp.
326,575,506,736,. Rupat Island administratively has two sub-districts, from these two subdistricts there are 9 coasts with critical categories for abrasion achievement.
The construction that has been built in an effort to mitigate the disaster, which also indirectly becomes the rehabilitation stage for the abrasion disaster in Rupat Island and Bengkalis Regency as a whole. the following data obtained from BWSSS III Pekanbaru City.
Source: Bengkalis Regency Government, 2021
Proceedings IAPA Annual Conference 2021: "Governance and Public Policy in The Society 5.0"
113
The first construction, namely the breakwater, is very expensive. especially abrasion occurs almost along the coast.
Horizontal coordination carried out by local governments is realized in the form of preventive activities only. Because, abrasion disaster mitigation requires action with careful planning and preparation in line with regional development. The limitations of the regional budget and indeed the situation in the area in Bengkalis Regency is in dire need of attention.
Horizontal coordination carried out by local governments is realized in the form of preventive activities only. Because, abrasion disaster mitigation requires action with careful planning and preparation in line with regional development. The limitations of the regional budget and indeed the situation in the area in Bengkalis Regency is in dire need of attention.
The abrasion disaster mitigation pattern that is carried out is by bit by bit. A number of proposals submitted cannot be carried out directly, so they are carried out partially. the effect is that the abrasion continues, but of course it is not in accordance with the current planning and abrasion conditions.
Conclusion
Abrasion Disaster Mitigation, in Bengkalis Island is very complex in terms of the need for critical land that has been eroded very widely. The dependence of the vertical coordination pattern is very high, but of course Rupat Island is not the main focus of the central government. The local government does not make abrasion disaster mitigation the main focus, because the district government's development priority is in infrastructure development. the mitigation efforts carried out require high costs, and have not presented the right mitigation pattern because they are carried out partially while the rate of abrasion increases every year.
|
2021-11-26T16:46:46.098Z
|
2021-11-19T00:00:00.000
|
{
"year": 2021,
"sha1": "060ced3787df85c6539add5205612ac8420c543e",
"oa_license": null,
"oa_url": "https://journal.iapa.or.id/proceedings/article/download/518/293",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f939f7cd0bb77f138e3600f6c19c7b4286fa528b",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
}
|
250669478
|
pes2o/s2orc
|
v3-fos-license
|
Unitarity with Closed Timelike Curves
We conjecture that, in certain cases, quantum dynamics is consistent in the presence of closed timelike curves. We consider time dependent orbifolds of three dimensional Minkowski space describing, in the limit of large AdS radius, BTZ black holes inside the horizon. Although perturbative unitarity fails, we show that, for discrete values of the gravitational coupling, particle propagation is consistent with unitarity. This quantization corresponds to the quantization of the black hole angular momentum. We perform the computation at very low energies, where string effects are irrelevant and interactions are dominated by graviton exchange in the eikonal regime.
One of the outstanding difficulties of the AdS/CFT correspondence [1] is to understand physics in the bulk of the AdS space in terms of CFT data. In particular, understanding the space-time causal structure of black holes is still a fundamental problem from the view point of the duality. The AdS 3 /CFT 2 case is one of the best studied examples of the duality, with extremal black hole geometries given by the BTZ metric [2,3] where N = 1 r r 2 − r 2 + , N φ = 1 r 2 + r 2 . The AdS 3 radius is given by , and r + is the position of the horizon determining the mass and the angular momentum of the black hole M bh = πM 4 2r 2 + 2 + 1 , J= πM 2 in terms of the three-dimensional Planck mass 1 M . In the dual CFT 2 description, these black holes correspond to states with [4] where L 0 ,L 0 are the Virasoro zero modes. For a supersymmetric theory the spin eigenvalue J is naturally quantized in half integral units 1 For notational convenience, we normalize the Planck mass in terms of the Newton constant G as M −1 = 2πG.
On the other hand, from a purely gravitational view point, the quantization of the angular momentum is rather mysterious. Classically, J is a continuous parameter, and the usual arguments leading to (2) rely on the asymptotic symmetries of quantum gravity on AdS 3 [5], and therefore implicitly on the existence of a dual CFT 2 . A basic property of the BTZ black holes is the existence of closed causal curves (CCC's) in the geometry. Therefore, if we ignore the dual CFT description, we naively expect that quantum gravity in the BTZ geometry violates unitarity. By studying quantum field theory in the flat space limit → ∞ of the BTZ geometry, we shall show that the quantization condition (2) can be obtained by demanding that quantum propagation of fields is consistent with unitarity, even in the presence of CCC's. More specifically, we will consider corrections to free propagation of scalar fields due to interactions with particles winding around the closed timelike direction, as shown in Figure 1. Let us recall the Penrose diagram of the extremal BTZ black hole given in Figure 2a. In the flat space → ∞ limit, keeping the energy scale fixed, the region inside the black hole horizon becomes an orbifold of flat Minkowski space Choosing coordinates x ± , x on M 3 , such that the metric is the orbifold generator κ is the Killing vector where L ab , K a are, respectively, the generators of Lorentz transformations and translations, and where E parameterizes inequivalent orbifolds. Under a change of coordinates the metric can be written as and the Killing vector as The direction y − is therefore compact with period This geometry focuses on the region inside the horizon of the extremal BTZ black hole as seen in figure 2b. The quantization of the BTZ black hole angular momentum (2) becomes, in the flat space limit, the condition In this case, on the other hand, one cannot justify this quantization condition with arguments relying on asymptotic symmetries and on the existence of a dual CFT. In fact, the Minkowski space orbifold just described focuses on the region inside the horizons, and the asymptotic AdS boundary is no longer part of the geometry. We will derive the quantization condition (4) purely within the framework of quantum field theory in the presence of gravitational interactions. From this perspective, (4) is obtained by requiring unitarity in the space M 3 /e κ , which possess CCC's. Hence we see that unitarity in the presence of CCC's is related to charge quantization in dual descriptions of the system. The mechanics that protects chronology is rather different than that proposed by Hawking [8], which is based on a large backreaction due to UV effects.
To investigate the possible restoration of unitarity, we shall study the two-point function Γ p + ,p − (λ, λ ) of a scalar field as represented in Figure 3. The external states V λ,p + ,p − , which are invariant under the action of the orbifold group Ω = e κ , are labelled by the mass squared λ and by the conserved momenta p ± , conjugate to the Killing vectors ∂ y ± . They are given explicitly by the wave functions where k − = (k 2 + λ)/(2p + ). The full propagator becomes Ω Ω −i q 2 +m 2 −i e i E wq − + w 3 24 q + Figure 4. Scalar propagator for a particle winding w times the compact y − direction. The incoming and outgoing momenta are related by the action of the orbifold generator and the usual propagator is multiplied by a momentum dependent phase.
Stable particle states will exist provided the reality condition is satisfied. In particular we shall consider the specific kinematics with p − ∈ 2πEZ fixed, λ = λ = 0 and p + → 0. The computation that follows is largely independent on the details of the underlying theory because gravitational interactions will be dominant in the specific kinematic regime just described [9].
A key ingredient in the computation is the propagator, which is simply given by the method of images. Denoting the Feynman propagator in the covering space by ∆ (x, x ), we can write the full propagator as a sum The summand ∆ (Ω w x, x ) can then be written symmetrically as so that, in Fourier space, a scalar propagator is labeled by a momentum q and a winding number w. The propagator itself is given by Moreover, as we move along the propagator, the momentum gets transformed under the action of the orbifold group element Ω −w . Therefore, the incoming momentum along the line is Ω w/2 q and the outgoing one is Ω −w/2 q, as shown in Figure 4. We will compute the first non-trivial contribution to Γ p + ,p − (λ , λ) arising from the graph 5. The only propagator with non-vanishing winding number w is the loop propagator, which probes the non-causal structure of space-time. The bubble in the graph represents the fourpoint interaction in the parent theory on M 3 to all orders in the couplings. In the limit of p + → 0, we will only need control over the parent four-point amplitude in the eikonal kinematical regime, where resummation techniques are known and where general arguments indicate that interactions are dominated by graviton exchange. The full eikonal amplitude for spin-2 exchange reads [10,11,12] 1 + iA −4iM with poles in the physical region placed at We shall assume that the amplitude A has poles given by (9) also in the off-shell regime needed for the computation.
In the limit considered the two-point amplitude can be written as where c w is a constant and where The constant p is given by p = √ 2p + p − . The reality condition (8) is satisfied if holds for each value of w. The amplitude Γ ± w is given by the contribution of the poles of the integrand in the upper (lower) complex s-plane, as shown in Figure 6. The reality condition (11) becomes where F ± are real and are related to the residues of A at the eikonal poles, and where A on−shell is the amplitude at the winding propagator pole. In order for (12) to be satisfied for all values of w, we must have that e i M 2E = 1 and therefore that which is the quantization condition (4). In this case, we have the additional requirement on the residues F − − F + = A on−shell . We have shown that, using limited information regarding the behavior of the gravitational interaction, we can recover the quantisation condition of the BTZ black hole angular momentum. Only for these discrete values is the theory unitary, even in the presence of closed causal curves.
|
2022-06-28T03:40:50.051Z
|
2006-01-01T00:00:00.000
|
{
"year": 2006,
"sha1": "61c2ba7c093e372b4a3439a25c51b1f1ef752ebc",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/33/1/042",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "61c2ba7c093e372b4a3439a25c51b1f1ef752ebc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
245025955
|
pes2o/s2orc
|
v3-fos-license
|
Seasonal Reproduction Shift among Three Murine Rodents in a Mediterranean Area of North-Western Africa
Samira Merabet1, Nora Khammes-El Homsi1,*, Lydia Aftisse1 and Stéphane Aulagnier2 1Laboratoire Ecologie et Biologie des Organismes Terrestres, Université Mouloud Mammeri, Bastos Tizi-Ouzou 15000, Algérie 2Comportement et Ecologie de la Faune sauvage, INRAE, Université de Toulouse, CS 52627, 31326 Castanet-Tolosan cedex, France Article Information Received 14 November 2020 Revised 12 April 2021 Accepted 10 June 2021 Available online 07 December 2021 (early access)
INTRODUCTION
A ll latitudes find some mammals reproducing seasonally, even in the deep tropics (Bronson, 2009). In the temperate regions the reproductive process of wild rodents is adaptively timed with the season contrary to synanthropic species which can have litters along the year (Bronson and Perrigo, 1987). Winter is a period of high energetic demand and many rodents show a break in reproductive activity (Gockel and Ruf, 2001), under the main influence of poorer foraging conditions and shorter photoperiod. Ovulation more than spermatogenesis is commonly sensitive to both low temperature and food restriction in small mammals (Bronson, 2009). Hence the end of winter usually initiates the reproductive period in response to lengthening of days, rising of temperature and vegetation growth (e.g. Martinet and Spitz, 1971). Within the reproductive period extending from spring to the beginning of autumn many species show a peak of reproductive activity depending mostly on their feeding ecology, herbivores being earlier than granivores, as they face an enormous energetic drain during lactation (Bronson and Perrigo, 1987). However, this reproductive pattern can be softened by winter reproduction for some members of the population (e.g. Gockel and Ruf, 2001) or even be inverted for the whole population. So, for example, populations of wood mouse (Apodemus sylvaticus) breed from March to November in Britanny (north-western France), and from November to April in Doñana (southern Spain), Pyrénées-Orientales (southern France) and Corsica (Moreno and Kufner, 1988;Fons and Saint Girons, 1993).
In North Africa, rodent females usually give birth during winter and spring, after autumn rainfalls, either in Dipodidae, Gerbillinae, Murinae or Ctenodactylidae (Bernard, 1969;Osborn and Helmy, 1980). For some species, the reproductive period may vary geographically, according to the wet season in the most arid zones, to the temperature in mountains. So, for the lesser Egyptian gerbil (Gerbillus gerbillus), it extends from January to May in Egypt (Osborn and Helmy, 1980) and from July to September in Mauritania (Klein et al., 1975). Pregnant wood mouse females have been recorded from September to February or April in Tunisia, Algeria and Moroccan lowlands (Bernard, 1969;Kowalski, 1985;Harich and Benazzou, 1990;Khidas, 1993;Hamdine and Poitevin, 1994), while the reproductive period extends to May in Moroccan high mountains where there is also a winter break (Saint Girons, 1972). The Algerian mouse (Mus spretus) exhibits a similar last pattern with an approximately three-month winter rest period (Bernard, 1969;Palomo et al., 1985;Kowalski and Rzebik-Kowalska, 1991), like in southern Spain (Vargas et al., 1984(Vargas et al., , 1991Antuñez et al., 1990). However, Orsini et al. (1982) reported an absence of reproduction in summer in southern France. The Barbary striped grass mouse (Lemniscomys barbarus) is the third non-commensal murine species widely distributed in north-western Africa (Happold, 2013). Zaime (1985) trapped juveniles mainly during winter and spring periods in central Morocco, suggesting births from September-October or December-January depending on the year. However, pregnant females were reported in May, June and September in Tunisia (Bernard, 1969), in spring/summer months in northern Morocco (Lahmam et al., 2008). Investigating such variations was the starting point of a study in a Mediterranean area where the three smallsized murine species live sympatrically. We aimed to identify the specific responses to the same physical and ecological conditions in the wild. We hypothesized that all three species may adapt similarly to photoperiod and temperature variations for adjusting their reproductive activity to vegetation growth.
Study plan
Rodents were trapped along a 150 m linear transect, a low disturbing but efficient method for sampling rodent populations in the temperate zone (Spitz et al., 1974). Three-day captures were conducted each month from January to December 2017. Baited traps with bread and pilchard were 3-meter spaced, giving a total of 1620 night-trappings. Animals were euthanized, sexed, aged, measured and any sexual activity was recorded. According to Kowalski (1985), males were considered sexually active when the diameter of testes was over 10 mm and seminal vesicles were developed. For females, reproductive state was estimated by appearance of open vagina, developed mammary glands and uterus (including embryos). They were considered sexually active when gestating and/or lactating (Birkan, 1968).
Age classes were identified according to upper molar wear after cleaning the skulls by boiling heads and then soaking them in bleaching water during 5-10 min. For Apodemus sylvaticus, we followed Saint-Girons (1972) who considered three age classes: juveniles, sub-adults and adults. For Mus spretus, we identified the same three age classes according to Palomo et al. (1983). For Lemniscomys barbarus, we adapted the table published by Van der Straeten (1980) for L. linulus to our trapped specimens for identifying the same three age classes.
RESULTS
The 1620
O n l i n e F i r s t A r t i c l e
along the year with larger numbers in winter (December to February) and lower numbers in summer (June to September) for A. sylvaticus and M. spretus, which no specimen was trapped in June, July and August (Fig. 1A).
The monthly low number of L. barbarus (maximum 6 specimens in November) was quite similar along the year. The gross sex-ratio was balanced for A. sylvaticus (33:32) and in favour of males for M. spretus (32:15) and L. barbarus (28:10), including deep monthly variations for the two first species and not for the third one, males being always more numerous than females (Fig. 2). No female of L. barbarus was trapped from March to May, in August and October. Few juveniles were trapped for any species except for M. spretus with 11 juveniles over 26 specimens. Juveniles of L. barbarus were trapped in June and July (Fig. 1B) and sub-adults mainly from August to December (Fig. 1C), suggesting a winter break of reproduction. At the opposite, juveniles of M. spretus were trapped from September to May, and sub-adults from November to May. A quite similar pattern was observed for A. sylvaticus with juveniles from October to May and sub-adults from December to March. None of sub-adults was sexually active as well as none adult male of A. sylvaticus trapped from June to September, the first gestating female being recorded in December. Adult males of L. barbarus were sexually active from April to October.
DISCUSSION
The vegetation of our study site is relatively dense, so it is not surprising that Apodemus sylvaticus was the most trapped species among small mammals. This species which prefers forest or forest edges is found in a wide variety of habitats (Denys, 2017a), including mountain grasslands, shrubs and undergrowth cover (lower woody vegetation) in Kabylia (Hamdine and Poitevin, 1994;Khidas et al., 2002). Mus spretus is associated with Mediterranean scrub, bush, grasslands and cultivated fields (Denys, 2017b), and was found syntopic with wood mouse where high woody vegetation is sparse (Khidas et al., 2002) despite some competition for food (Fons et al., 1988). Lemniscomys barbarus prefers bushes and grasses habitats with dense ground cover (Happold, 2013) whereas M. spretus includes a high percentage of bare ground in its home range in Kabylia (Khidas et al., 2002). Such occurrence of the three murine species has been rarely reported, for example in cultivated fields of Esperada (Morocco), among 17 sampled sites, and Lansarine region (Tunisia) with a low number of A. sylvaticus each time Ben Ibrahim et al., 2019). In Kabylia, these three species were previously trapped in four sites: Bouberak (Khidas, 1993), Azazga (Khammes, 1998), Cap Djinet andBoukhalfa (Amrouche-Larabi et al., 2015), with a lower number of L. barbarus each time, like in our study site.
Sex-ratio was balanced for A. sylvaticus, contrary to Hamdine and Poitevin (1994) who reported a larger percentage of males, as we recorded for M. spretus and L.
barbarus. An unbalanced sex-ratio is most often observed in trapped small murine species, particularly during the breeding period for A. sylvaticus (Butet and Paillat, 1997) and after the breeding period for M. spretus (Vargas et al., 1984;Cassaing and Croset, 1985). In L. barbarus males are more often trapped than females after the age of five months when the later become more sedentary (Zaime, 1985). Juvenile dispersal and exploratory behaviour are widely reported in small mammals to explain sex ratios skewed in favour of males in trapping sessions (Stenseth and Lidicker, 1992).
Despite the small sample size our results clearly suggest a different reproductive period for A. sylvaticus and M. spretus with a summer break (June to August), and L. barbarus with a winter break (November to January).
S. Merabet et al.
In the Mediterranean region, the reproductive period of A. sylvaticus was linked to the availability of food, usually reduced in summer as a consequence of drought, and abundant in autumn (fruits and berries) and winter after vegetation growth following autumn rainfalls (Soriguer and Amat, 1979;Torre et al., 2002;Díaz and Alonso, 2003;Rosário and Mathias, 2004). Similarly, the reproductive period of M. spretus is linked to water availability and vegetation growth, mainly mast production and grass seeds in southern France (Orsini et al., 1982). Similarly, according to Zaime (1985) food availability synchronized the reproductive period of L. barbarus and the two syntopic gerbilline species Meriones grandis and Gerbillus campestris. But this period is spread over winter and spring in this sub-arid study site of central Morocco, before the spring and summer period estimated from our data which are supported by previous studies in Tunisia (Bernard, 1969) and northern Morocco (Lahmam et al., 2008), where the climate is sub-humid. These results confirm that, in the Mediterranean area, small mammals are irresponsive to variation in photoperiod (Bronson, 2009). For a species which reproduces along the whole year in captivity (Lenkiewicz and Saint-Girons, 1964), the reproduction shift agrees with the hypothesis of an adjustment of the breeding activity to vegetation growth. Moreover, Lenkiewicz and Saint-Girons (1964) showed that activity declined in cold weather, which can contributes to delay the reproductive period to hotter days in some parts of its range.
The influence of temperature on the shift of reproductive period among the three murine species is hardly supported by our data. First, the mean January temperature is lower in central Morocco (Zaime, 1985) than in Boudjima. Second, L. barbarus, a diurnal species, should be less affected by winter cold nights than A. sylvaticus and M. spretus, which are mainly nocturnal. Then, feeding ecology remains the main likely cause of this seasonal reproduction shift. A. sylvaticus is omnivorous, but predominant food is seeds and acorns, and M. spretus eats mainly fruits, seeds and green parts of plants (Denys, 2017b). In Kabylia, both species are primarily granivorous (Khammes, 1998;Khammes and Aulagnier, 2007). L. barbarus is "probably herbivorous" (Happold, 2013), like most Lemniscomys species (Taylor, 2017) and its reproductive activity seems to respond to herbaceous vegetation growth starting at the end of winter, whereas the two granivorous species get an energetic source from seed and acorn production in autumn and winter.
CONCLUSION
The occurrence of the three small-sized non commensal murine species in one area is a quite rare event that is worth to be reported, mainly when populations seem to be almost balanced, contrary to previous trapping sites reported in the literature. As some competition between Apodemus sylvaticus and Mus spretus was recorded by Fons and Saint Girons, (1988), our study site provides a good opportunity for investigating their interactions with a third phylogenetically related species that could influence their population dynamics.
Our main result is the seasonal reproduction shift between A. sylvaticus and M. spretus on one side, and Lemniscomys barbarus on the other. Roughly, the summer break of the formers stands in opposition to the winter break of the later. This pattern cannot be related to physical factors widely reported to influence reproductive period, such as photoperiod or temperature. Ecological factors are also poorly relevant since, even if they exhibit different daily activity pattern, the three species are living in syntopy. However, a more comprehensive study could reveal some different use of the study area. More likely, the reproduction shift among the three species is linked to their feeding ecology that should be investigated, our study site becoming a higher research spot.
Statement of conflict of interest
Authors have declared no conflict of interest.
|
2021-12-12T17:31:57.331Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "6d1c6429596ccb3d3a9f2cc2ef6a93f06ca1edd3",
"oa_license": "CCBY",
"oa_url": "http://researcherslinks.com/uploads/articles/1638886806PJZ_MH20190328080332-R3_Khalil%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4889362f0edd6fd5acbdc0fcde3296168afb0b40",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
}
|
244354499
|
pes2o/s2orc
|
v3-fos-license
|
A Practical and Economical Route to ( S )-Glycidyl Pivalate
An efficient method to prepare enantiopure ( S )-glycidyl pivalate from ( R )-epichlorohydrin and pivalic acid is reported. This work provides an alternative to the synthesis of this important building block from readily available and inexpensive materials.
Tuberculosis is one of the leading global causes of mortality, and it is believed that one third of the population has a latent case of the disease. 1 Pretomanid ® is a therapy for treatment of tuberculosis that was recently approved by the US FDA under the Limited Population Pathway (LPAD Pathway) for treatment of pulmonary extensively drug resistant (XDR) tuberculosis in combination with Bedaquiline ® and Linezolid ® . It works as a respiratory poison against bacteria by releasing nitric oxide under anaerobic conditions.
Given the large quantities of drug substance that would be required to treat tuberculosis throughout the world, cost-effective syntheses are needed. A key structural feature of Pretomanid ® is the dihydro-1,3-oxazine, containing an oxygen-substituted asymmetric center on the C 3 unit (Figure 1). One could foresee installation of this fragment from an (S)-glycidol derivative, and, not surprisingly, many of the current Pretomanid ® routes make use of functionalized glycidols. 2
Figure 1 Pretomanid ® retrosynthesis from glycidol and derivatives
Glycidyl pivalate appears to be a particularly important variant. 3 However, optical enantiomers of glycidol are of considerable expense and construction from less expensive precursors would be desirable. Epichlorohydrin is a feedstock chemical, and its pure enantiomers are more readily available in comparison to those of glycidol. As a result, (R)epichlorohydrin is approximately 5-6% of the cost 4 of (S)glycidol and could thus form the basis of a more cost-effective route to this intermediate.
Numerous reports describe reaction of epichlorohydrin with carboxylates, particularly hindered carboxylates, as the ensuing glycidyl esters are used in alkyd resins, paints, coatings, and acrylate monomer compositions. 5 Fewer reports detail the reaction of enantiopure epichlorohydrin with carboxylic acid derivatives. 5i-k This work describes the development of a practical route to (S)-glycidyl pivalate from low-cost and readily available (R)-epichlorohydrin and pivalic acid.
Our investigation began by screening typical conditions used to couple acids with racemic epichlorohydrin (Table 1). Variations in the numbers of equivalents of starting ma-
Paper SynOpen
terial, preformation of the carboxylate, solvent, temperature, and time were explored. Introducing an excess of epichlorohydrin was advantageous (entries 4-6). Furthermore, removal of exogeneous solvent led to the best results, giving glycidyl ester 3 in greater than 95% yield by NMR assay. While a high stoichiometry of epichlorohydrin was employed, we were encouraged that these conditions could be rendered economical if excess starting material were to be recovered.
We subsequently shifted our focus to isolation of the desired compound from the reaction mixture, and the reaction scale was increased to 20 g of pivalic acid and 182 g (10 equiv) of (S)-epichlorohydrin (Scheme 1). The reaction of sodium pivalate with epichlorohydrin produced one equivalent of sodium chloride that was easily removed by filtration because of its low solubility. Next, the epichlorohydrin (bp 118 °C) was evaporated and collected and a high proportion of the excess epichlorohydrin was recovered, an important consideration in rendering an economically viable synthesis (143 g, 87%). The residual crude glycidyl pivalate (33 g, contaminated with ca. 6% epichlorohydrin) was distilled twice at 50-70 °C under high vacuum (ca. 6-10 Torr), resulting in 74% isolated yield of the pure glycidyl pivalate. The compound appeared to be temperature sensitive at high concentration, and thus short distillation times were optimal. The product showed good specific activity (-21.9, CHCl 3 , 25 °C), as compared to literature values for (S)-glycidyl pivalate (+20.7); 9 however, the sign of rotation was inverse, indicating that the undesired (R)-enantiomer had been made. Therefore, starting from (R)-epichlorohydrin led to (S)-glycidyl pivalate samples with [] D values of 18.8 and 18.9. Attack of the pivalate anion on the epoxide rather than the primary chloride rationalizes this observation. De-spite these highly encouraging results, analysis of the recovered epichlorohydrin revealed that the epichlorohydrin racemized over the course of the reaction.
Scheme 1 20 g scale-up transposed from initial conditions with isolation of optically pure glycidol pivalate Further reaction screening was required to identify a cost-effective system. Our approach was that either epichlorohydrin epimerization would need to be fully suppressed or that consumption of epichlorohydrin would need to be decreased in order to negate the requirement of starting material recycling. We first explored suppression of epimerization with the thought that, at lower temperatures, the rate of substrate racemization might be significantly slower. The esterification was carried out at 60 °C, which gave 98% yield of product by NMR analysis. At this temperature, the enantiomeric ratio increased from 50:50 to 90:10 ( Table 2, entries 1-2). While this was a positive development, further improvements were still required. The high assay yield (AY) was maintained at 50 °C, and the enantiomeric ratio was increased to 95:5 (entry 3). This moved the conditions toward economic viability; however, even the slight erosion of optical activity limits the ability to recycle epichlorohydrin.
Paper SynOpen
Removing the need to recycle the epichlorohydrin would be preferable as it would simplify the procedure. If the epichlorohydrin equivalence could be reduced, the economic driver to recycle the starting material would be eliminated. However, simply reducing the equivalents of epichlorohydrin led to much lower yields, and a large amount of decomposition was observed (Table 2, entries 4 and 5). The root cause was believed to be heat sensitivity, where bimolecular degradation of the product was most likely accelerated at elevated concentrations. To evaluate this hypothesis, the reaction was investigated with 3 and 6 equivalents of epichlorohydrin, but diluted with inert chlorobenzene to a volume equivalent to that of 10 equivalents of epichlorohydrin. This did indeed provide a significant increase in yield up to and above 80% (entries 6 and 7). Decreasing temperature to 60 °C was found to be the best solution as it further increased yield, avoided the need for exogenous solvent, greatly increased throughput of material, and rendered the system highly economical as compared to glycidol. With the optimized preparative procedure, we then investigated glycidyl pivalate isolation methods. Two feasible solutions were identified as (1) in situ solution of glycidyl pivalate, or (2) distillation to access a higher purity of product.
Each approach has benefits and drawbacks. The first option is desirable, in that it avoids distillation of epoxide 3. Some temperature sensitivity was noted for epoxide 3, and production of a reactive solution could maximize yield by limiting the heat history and concentration of the epoxide. However, this approach does not provide a means of purifying the glycidyl pivalate, and the excess epichlorohydrin must still be removed. If successful, the second option provides a means of removing byproducts from the glycidyl pivalate to obtain a more highly controlled and pure product.
Production of an in situ solution was explored first (Figure 2). Changing the reaction solvent to toluene was considered desirable as toluene could be used in the subsequent steps. In first attempts toward this goal, epichlorohydrin was directly distilled from the reaction mixture under vacuum, and then toluene was added intermittently to compensate for the volume lost from epichlorohydrin evaporation. Volatiles were then fully removed to give a glycidyl pivalate residue. The process was repeated three times. This led to a loss of active glycidyl pivalate in solution, as observed by decrease in the NMR assay (10-15%) and the observation of unidentified by-products (Figure 2). Again, heat and concentration sensitivities were suspected to cause the loss in yield. If the solvent replacement could be conducted while maintaining constant volume, the decomposition would be expected to be mitigated by the maintained concentration and less direct heat application. This was accomplished by adding toluene continuously to a stirred solution of the glycidyl pivalate reaction mixture whilst under vacuum (Figure 3). Performing the solvent exchange in this manner largely prevented the loss of active glycidyl pivalate to decomposition products. The reaction mixture had a 94% NMR yield at the end of reaction and a
Paper SynOpen
90% NMR yield after removal of epichlorohydrin after toluene solvent exchange. This yielded a solution of epoxide 3 that could be used for the subsequent alkylation step. 2b Figure 3 Solvent exchange of epichlorohydrin for toluene with continuous addition of toluene to maintain constant volume Next, we attempted to isolate glycidyl pivalate in good purity by direct distillation 10 (Table 3). Firstly, the sodium chloride was removed from the reaction mixture by filtration, and then the excess epichlorohydrin was removed from the filtrate by evaporation under reduced pressure. Care was taken to remove the epichlorohydrin at low temperature (<60 °C) under high vacuum (<10 torr). After evaporation, the NMR assay yield of the crude glycidyl pivalate residue was 87%. The product was then distilled. Again, it was important to carry this out under reduced pressure so that the temperature of the glycidyl pivalate did not exceed 70 °C. At higher temperature, lower yields were observed as a result of product decomposition. Optimal conditions used in this work were to distil at 50 °C and 6 Torr. This is likely a function of system configuration, which can be further optimized upon subsequent implementation, and might benefit from a continuous distillation system such as a thin-film evaporator so as to minimize thermal exposure of the heatsensitive compound. In this way, the isolated yield of 3 reached 76% with material of 95% purity.
The optical purity of the epoxide samples was confirmed through derivatization with 4-nitro-2-bromoimidazole. The derivatives synthesized from optically active epichlorohydrin were compared against those of racemic epichlorohydrin by HPLC. Supercritical fluid chromatography traces indicated an enantiomeric ratio of 97:3, 11 which was consistent with the high optical purity observed from the specific rotation (see the Supporting Information for the derivatization procedure).
In conclusion, we have developed an efficient method to prepare enantiopure (S)-glycidyl pivalate from (R)-epichlorohydrin and pivalic acid. We believe this work provides an alternative to the synthesis of this important building block from readily available and inexpensive materials.
Starting materials, reagents, and solvents were purchased from commercial sources and were used as received unless otherwise noted. The reactions were monitored with a Bruker Avance III 600 MHz NMR spectrometer, using 1,3,5-trimethoxybenzene as internal standard. Solvents were removed under reduced pressure using a rotary evaporator. Purification was performed using vacuum distillation (see the Supporting Information for more information).
Paper SynOpen Funding Information
We thank the Bill and Melinda Gates Foundation for continuous support of our research.B i l l a n d M e l i n d a G a t e s F o u n d a t i o n
|
2021-11-19T16:27:21.763Z
|
2021-11-17T00:00:00.000
|
{
"year": 2022,
"sha1": "614566ff806fbcc8f0787548d2c80c349466aab9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1055/s-0042-1751375",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "87666f6c76a8a332262ca42e2db8579a1b70063c",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
}
|
201156773
|
pes2o/s2orc
|
v3-fos-license
|
Increased Use of Noninvasive Ventilation Associated With Decreased Use of Invasive Devices in Children With Bronchiolitis
Objective: To assess how a change in practice to more frequent use of high-flow nasal cannula for the treatment of bronchiolitis would affect the use of invasive devices in children. Design: Retrospective cohort study of children under 2 years old admitted to the ICU with respiratory failure secondary to bronchiolitis. Outcomes and invasive device use were compared between two time periods, before and after the practice change. Setting: Eighteen bed tertiary care PICU. Patients: A total of 325 children: 146 from 2010 to 2012 and 179 from 2015 to 2016. Interventions: None. Measurements and Main Results: There were no significant differences between the two time periods regarding gender, race/ethnicity, medical history, and viral profile, although children were younger in the earlier cohort (median age of 1.9 mo [interquartile range, 1.2–3.5] vs 3.3 mo [1.7–8.6]; p < 0.001). There was an increased use of noninvasive ventilation in the second time period (94% from 69%; p < 0.001), as well as a decreased frequency of intubation (13% from 42%; p < 0.001) and reduced central venous catheter placement (7% from 37%; p < 0.001). There was no significant difference in mortality between the two groups. A logistic regression analysis was conducted, which found that time period, intubation, and hospital length of stay were all independently associated with central venous catheter placement. Conclusions: A practice change toward managing patients with bronchiolitis in respiratory failure with less invasive means was associated with a reduction in the use of other invasive devices. In our cohort, minimizing the use of invasive ventilation and devices was not associated with an increase in mortality and could potentially have additional benefits.
Objective: To assess how a change in practice to more frequent use of high-flow nasal cannula for the treatment of bronchiolitis would affect the use of invasive devices in children. Design: Retrospective cohort study of children under 2 years old admitted to the ICU with respiratory failure secondary to bronchiolitis. Outcomes and invasive device use were compared between two time periods, before and after the practice change. Setting: Eighteen bed tertiary care PICU. Measurements and Main Results: There were no significant differences between the two time periods regarding gender, race/ethnicity, medical history, and viral profile, although children were younger in the earlier cohort (median age of 1.9 mo [interquartile range, 1.2-3.5] vs 3.3 mo [1.7-8.6]; p < 0.001). There was an increased use of noninvasive ventilation in the second time period (94% from 69%; p < 0.001), as well as a decreased frequency of intubation (13% from 42%; p < 0.001) and reduced central venous catheter placement (7% from 37%; p < 0.001). There was no significant difference in mortality between the two groups. A logistic regression analysis was conducted, which found that time period, intubation, and hospital length of stay were all independently associated with central venous catheter placement. Conclusions: A practice change toward managing patients with bronchiolitis in respiratory failure with less invasive means was associated with a reduction in the use of other invasive devices. In our cohort, minimizing the use of invasive ventilation and devices was not associated with an increase in mortality and could potentially have additional benefits. Key Words: bronchiolitis; noninvasive ventilation; pediatrics; resource utilization B ronchiolitis is the most common lower respiratory tract infection in infants and children worldwide (1). Respiratory syncytial virus is responsible for approximately 60% of bronchiolitis cases in children, but many different viruses can cause the same clinical presentation (1,2). The majority of patients can be managed supportively in an outpatient setting or general inpatient unit (3), but approximately 8-13% of children with bronchiolitis progress to respiratory failure and require ICU management (4)(5)(6)(7)(8)(9).
What Is Known on This Subject
Bronchiolitis is a common lower respiratory tract illness in young children, and approximately 10% of children with bronchiolitis progress to respiratory failure. These children are at risk for needing noninvasive positive-pressure ventilation, intubation, and placement of other invasive devices.
What This Study Adds
This study supports that in children with respiratory failure secondary to bronchiolitis, minimizing the use of invasive ventilation can be associated with decreased invasive device use and is not associated with an increase in mortality.
Respiratory failure in this population commonly requires intubation and may also require placement of invasive devices. These devices have both inherent risks and associated complications that can lead to prolonged hospitalizations (10,11). There is evidence that in general, patients who have longer stays in the ICU frequently need more stable IV access in the form of indwelling central venous catheters (CVCs) (10,11). They also can have arterial catheters and Foley catheters placed to allow for close and accurate monitoring. Patients with viral bronchiolitis are increasingly being treated with noninvasive ventilatory support via nasal/ facial interfaces.
In 2012-2013, we conducted a study with other children's hospitals in the Northeast United States and compared treatment practices and outcomes in children with bronchiolitis admitted to four regional PICUs between July 2009 and July 2011 (12). Despite similar mortality risk scores among ICUs, there was considerable variation in the management of children with acute respiratory failure, including a greater than 3.5-fold increased risk of intubation between two children's hospitals (12). This may have been due to a different patient population but may also have been due to different practice patterns.
As a result of this trial, we changed our practice at Connecticut Children's Medical Center (CCMC) between 2013 and 2014. The institution invested in more devices capable of delivering noninvasive positive pressure via high-flow nasal cannula (HFNC), and we started implementing it as a type of first-line respiratory support in patients with bronchiolitis. Others have found the use of HFNC feasible in children with respiratory failure from various causes including viral bronchiolitis (9,13), and that HFNC can decrease inspiratory effort and respiratory rate (14,15), and may potentially decrease intubation rates (16)(17)(18).
With this practice change, we sought to assess whether the increased use of noninvasive positive pressure was associated with decreased use of other invasive devices. Specifically, we sought to compare treatment practices between two study periods (before and after our practice change), including the use of noninvasive positive pressure and the frequency of placement of certain invasive devices such as CVCs, endotracheal tubes, arterial catheters, and Foley catheters.
MATERIALS AND METHODS
A retrospective cohort study was conducted that included all children under 2 years old admitted to the CCMC ICU with acute respiratory failure secondary to viral bronchiolitis. Infants who required more than 2 L/min of oxygen and children who required over 6 L/min of oxygen were considered to be in respiratory failure. Outcomes, invasive device use, and ICU and hospital length of stay were compared between two time periods (January 1, 2010, to December 31, 2012, and January 1, 2015, to December 31, 2016), before and after a practice change toward using more HFNC respiratory support. A gap between the two time periods was chosen to exclude the timeframe of the previous study and practice change. Acute respiratory failure was diagnosed by the admitting critical care attending at admission to the ICU. Children were excluded if they were admitted outside the two study time periods. Institutional Review Board approval was granted by CCMC in Hartford, CT, and the need for informed consent was waived. Initially, we queried the Virtual PICU Systems database (Virtual PICU Systems; LLC, Los Angeles, CA), a national quality database in which Connecticut Children's critical care division participates, to identify subjects who met our inclusion criteria. Subjects then underwent subsequent electronic medical record chart review.
Noninvasive ventilation was delivered by the Fisher & Paykel Healthcare Optiflow junior HFNC (Auckland, New Zealand) and B&B Medical Technologies bubble bottle nasal continuous positive airway pressure (NCPAP) setup (Carlsbad, CA). Per hospital protocol, all patients who receive a form of noninvasive ventilation were admitted to the PICU.
Data were analyzed using JMP statistical software (version 12.0.0; SAS Institute, Cary, NC) and using consultation from statisticians at CCMC. The relationships between time period and the baseline characteristics and clinical outcomes were compared using appropriate parametric and nonparametric tests and statistics including chi-square for dichotomous variables and Wilcoxon rank sum for continuous variables. The Kolmogorov-Smirnov statistic and the Shapiro-Wilk test were used to assess the normality of continuous variables. Data are presented as median and 25-75% interquartile range (IQR) for nonparametric continuous variables and as frequencies (%) for categorical variables. Frequencies are rounded to the nearest whole number and p values of less than 0.05 are considered statistically significant. Odds ratios and 95% CIs are reported where appropriate. A stepwise multiple logistic regression was performed to determine the factors associated with CVC placement using factors found significant (p < 0.1) on univariate analysis and reported using American College of Chest Physicians guidelines (19).
RESULTS
In 2015-2016, there were 190 children admitted to the ICU with bronchiolitis, 179 who met the study criteria. In 2010-2012, there were initially 182 children admitted to the CCMC ICU with bronchiolitis, 146 of whom met inclusion criteria. We chose a longer time period for the earlier time frame to have comparable sample sizes. Our total cohort consisted of 325 subjects. Demographic data were compared between the two time periods ( Table 1). There was no significant difference between groups regarding gender, race/ethnicity, medical history, and viral profile. There was a significant difference in age, with the earlier cohort being younger (median age of 1.9 mo [IQR, 1.2-3.5] vs 3.3 mo [1.7-8.6]; p < 0.001).
Frequency of adjunctive interventions was compared between the two time periods ( Table 2). There was an increased use of noninvasive ventilation in the second group (94% from 69%; p < 0.001). The predominant modality of noninvasive ventilation in the first group was NCPAP, which was used in 66% of patients. Its use decreased significantly to 29% of patients in the later cohort (p < 0.001). However, as NCPAP use decreased, HFNC use increased almost 10-fold from the earlier to later cohort (89% from 9%; p < 0.001). There was no change in biphasic positive airway pressure use between cohorts (2% from 3%; p = 1.00). There was a significantly decreased usage of intubation and mechanical ventilation (13% from 42%; p < 0.001) and highfrequency oscillatory ventilation (1% from 5%; p < 0.02). No patients in either cohort required extracorporeal membrane oxygenation support or tracheostomy. There was also a significantly decreased rate of CVC placement (7% from 37%; p < 0.001) and arterial line placement (1% from 7%; p = 0.01). Foley catheter placement decreased from 9% to 4%, but this was not statistically significant (p = 0.07).
Clinical course was compared between the two time periods ( Table 2). The median hospital length of stay in the later cohort was 7 days from 9.5 days in the earlier cohort (p < 0.001). The ICU length of stay was also significantly shorter in the latter time period (median of 3.4 vs 4.6 d; p = 0.01). There was no significant difference in mortality between the two groups regardless of the noninvasive treatment modality (p = 1.00). In the earlier cohort, all patients survived, whereas one patient succumbed to their illness in the second group.
In order to assess factors associated with CVC use, a univariate analysis was conducted ( Table 3). Children who had CVC placement were significantly more likely to have a history of gastroesophageal reflux disease and chronic respiratory failure. These children were also more likely to have received NCPAP, been intubated, have an arterial catheter placed, and have a Foley catheter placed. Of those who did not have a CVC, most were previously healthy and received support with HFNC. These patients also had shorter hospital and ICU length of stays compared with those who did not have a CVC placed.
Factors associated with CVC placement were assessed using a multiple logistic regression model constructed of the aforementioned variables associated with CVC placement on univariate analysis. The best fitting model was one that contained investigational time period, intubation, and hospital length of stay (R 2 = 0.67; Table 4). This suggests that these variables are all independently associated with CVC placement.
A univariate analysis was performed to evaluate factors associated with intubation finding, no association with race, gender, or chronic medical conditions. There was, however, an association between patient age and intubation, with intubated children being younger (median, 0.17 yr; IQR, 0.09-0.32 yr) compared with nonintubated children (median, 0.23 yr; IQR, 0.12-0.62 yr; p = 0.002). This suggests that these variables are all independently associated with CVC placement.
DISCUSSION
In this cohort of children with acute viral bronchiolitis, we found that respiratory failure can safely be managed with noninvasive ventilation and decreased CVC use compared with a historic control. A practice change that encouraged increased use of noninvasive ventilation was also associated with fewer CVCs, arterial catheters, and endotracheal tubes in patients. The length of stay was also significantly shortened with the increased use of noninvasive ventilation. In 2010, McKiernan et al (20) conducted a retrospective chart review of 115 patients under 24 months with bronchiolitis who were treated with HFNC therapy at a single-center PICU. After the introduction of HFNC to their unit, they reported a 68% decrease in intubation rate in children with bronchiolitis, as well as a decrease in respiratory rate within 1 hour of initiation of HFNC therapy. Those with the largest decrease in respiratory rate were less likely to be intubated, likely reflective of the efficacy of HFNC to treat acute respiratory failure. They also found a decrease in overall PICU length of stay of the entire cohort when HFNC was used (20). Our cohort incorporated many more patients than the McKiernan et al (20) cohort and yielded very similar outcomes.
More recently, Lin et al (21) published a meta-analysis regarding the use of HFNC for children with bronchiolitis. They included nine randomized controlled trials that evaluated 2121 children with bronchiolitis and found no significant difference in hospital length of stay, length of oxygen supplementation, frequency of intubation, and adverse events in the HFNC group compared with those who received standard oxygen therapy and NCPAP (21). Although our study did not compare the difference in outcomes between noninvasive devices, nor overall hospital length of stay or total duration of oxygen supplementation, as opposed to need for PICU care, our data do support similar results that the use of HFNC is safe as a first-line modality of respiratory support and may have other benefits.
To our knowledge, this is the first study that has evaluated the association of invasive catheters and the use of noninvasive ventilation. Although invasive devices provide many benefits for the patient, they can be a nidus for bacteria and greatly increase the chance of a serious infection (11). Central line-associated bloodstream infections typically require prolonged parenteral antibiotic therapy and are tracked by both the Joint Commission and the Department of Public Health (11). CVCs and arterial catheters also carry significant thrombotic risks that could have prolonged consequences even postdischarge (22), as well as carry the risk of death. Therefore, avoidance of using these devices can potentially help limit complications of acute illness. Prolonged ICU stays are associated with patient physical, psychologic, cognitive, and functional deconditioning, decreased perceived quality of life, and social hardships for the patient and their family (22,23). De-intensifying patients, by using less-invasive devices, may promote earlier discharge from both the ICU and the hospital. Additionally, an increased number of invasive procedures performed is known to be a hindrance to patient mobility and is a risk factor for development of ICU-related morbidities (23). Others have reported that early rehabilitation may improve outcomes and shorten ICU length of stay (24,25). It may be that limiting invasive procedures may significantly decrease both the short-and long-term complications in critically ill patients, as well as shorten their length of stay.
This study has several limitations. All subjects were identified from a single center, and thus, our data may not be generalizable to the entirety of patients in this age group with viral bronchiolitis. Additionally, no explicit guidelines regarding the rationale of when to implement each method of respiratory support are present, and therefore, there may be cognitive bias due to variability of individual provider preference and medical decision-making. Our cohorts were matched by chronology of admission rather than severity of illness, but our large sample size, and well-matched study groups, should account for variability in illness severity. The only statistically significant demographic difference between the two groups analyzed was regarding age, with the earlier cohort being younger. Younger children are at risk for a more severe illness course, which could contribute to the increased rate of intubations in this cohort.
CONCLUSIONS
Our data support that a practice change toward managing patients with acute respiratory failure secondary to bronchiolitis with lessinvasive means is possible without compromising patient outcome. The use of HFNC was associated with a reduction in the use of other invasive devices, specifically endotracheal tubes, CVCs, arterial catheters, and there was a significantly decreased hospital length of stay following the implementation of a practice change toward increased noninvasive devices. Additionally, it may be that minimizing the use of invasive ventilation and thereby reducing the use of other invasive devices could potentially have additional benefits such as reduced hospital-acquired complications.
|
2019-08-23T02:03:40.233Z
|
2019-08-01T00:00:00.000
|
{
"year": 2019,
"sha1": "9f0fd4dbff3f9e9034d22b518ef3c567bd8bf920",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/cce.0000000000000026",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "012280f5239a2179a3067659616ea59a0116d078",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
13001506
|
pes2o/s2orc
|
v3-fos-license
|
Analytic and asymptotic properties of multivariate generalized Linnik's probability densities
This paper studies the properties of the probability density function $p_{\alpha,\nu, n}(\mathbf{x})$ of the $n$-variate generalized Linnik distribution whose characteristic function $\varphi_{\alpha,\nu,n}(\boldsymbol{t})$ is given by \varphi_{\alpha,\nu,n}(\boldsymbol{t})=\frac{1} {(1+\Vert\boldsymbol{t}\Vert^{\alpha})^{\nu}}, \alpha\in (0,2], \nu>0, \boldsymbol{t}\in \mathbb{R}^n, where $\Vert\boldsymbol{t}\Vert$ is the Euclidean norm of $\boldsymbol{t}\in\mathbb{R}^n$. Integral representations of $p_{\alpha,\nu, n}(\mathbf{x})$ are obtained and used to derive the asymptotic expansions of $p_{\alpha,\nu, n}(\mathbf{x})$ when $\Vert\mathbf{x}\Vert\to 0$ and $\Vert\mathbf{x}\Vert\to \infty$ respectively. It is shown that under certain conditions which are arithmetic in nature, $p_{\alpha,\nu, n}(\mathbf{x})$ can be represented in terms of entire functions.
As the case of stable distributions, in general the probability density functions of the univariate and multivariate Linnik distributions do not have closed forms. In [28], Kotz, Ostrovskii and Hayfavi studied the analytic and asymptotic behaviors of the probability density function of the univariate symmetric Linnik distribution whose characteristic function is given by (1.1). This work was generalized to symmetric homogeneous multivariate Linnik distribution with characteristic function (1.4) with Σ = id by Ostrovskii [62]; and to asymmetric univariate Linnik distribution with characteristic function (1.2) by Erdogan [13], and finally to asymmetric univariate Linnik distribution with characteristic function (1.3) by Erdogan and Ostrovskii [11,12]. In the present work, we consider the generalization of the multivariate Linnik distribution whose characteristic function is given by (1.5) ϕ α,ν,n,Σ (t) = 1 1 + [t ′ Σt] α 2 ν , α ∈ (0, 2], ν > 0, t ∈ R n . One can expect that this multivariate generalized Linnik distribution also plays an important role in some characterization problems in multivariate statistics as in the univariate case [64]. The main goal of this paper is to investigate the asymptotic and analytic properties of the probability density function of the multivariate generalized Linnik distribution. It suffices to restrict to the case where Σ = id, i.e., we only consider the class of multivariate symmetric generalized Linnik distribution with characteristic function given by (1.6) ϕ α,ν,n (t) = 1 (1 + t α ) ν , α ∈ (0, 2], ν > 0, t ∈ R n , where t = n i=1 t 2 i is the Euclidean norm of t. We denote by p α,ν,n (x) the corresponding probability density function.
There are also other motivations to the problem we want to study here. Recall that the dual distributionp of a probability distribution p is a probability distribution whose characteristic function (resp. probability density function) is up to a constant, the probability density function (resp. characteristic function) of p [15]. When α ∈ (0, 2] and ν > n/α, the dual distribution of the generalized Linnik distribution is called the generalized Cauchy distribution [25] whose probability density function is up to a constant, given by (1.6). Therefore, p α,ν,n (x)/p α,ν,n (0) is the characteristic function of the generalized Cauchy distribution. For the special case where α = 2 and ν > n/2, the generalized Cauchy distribution is the well-known Student's t distribution which has a lots of applications (see e.g. [31] and references therein).
From the perspective of stochastic processes, Gneiting and Schlater [18] introduced a new class of stationary stochastic processes called Gaussian field with generalized Cauchy covariance whose covariance function is given by (1.6). This stochastic model has two parameters α and ν which can give separate characterizations of the fractal dimension and Hurst effect. It has found applications in many modeling problems [19,46,47,48,58,59,67,70]. In this context, the function under investigation p α,ν,n (x) appears as the spectral density function of the stochastic model [49]. On the other hand, when α = 2 and ν > n/2, up to a constant, (1.6) appears as the spectral density of the Whittle-Matérn random field [54,73,74]. This class of random fields has found wide applications especially in geostatistics [15,20,22,23,55,56,57,71,72]. In [50], we generalized the Whittle-Matérn random field to a random field whose spectral density is given by (1.6) with α ∈ (0, 2], ν > n/α and showed that it can provide a more flexible model for wind speed data. The function p α,ν,n (x) we want to study in this paper then becomes the covariance function of the generalized Whittle-Matérn random field.
From the brief discussion above, one notes that the function p α,ν,n (x) appears in various places and plays different roles, stretching from the probability density function of generalized Linnik distribution and spectral density function of Gaussian field with generalized Cauchy covariance to the characteristic function of the generalized Cauchy distribution and the covariance function of generalized Whittle-Matérn field. Thus, a detailed study of the analytic and asymptotic properties of p α,ν,n (x) is of strong interest and significant importance, particularly in view of its potential applications in physics, internet traffic and financial time series analysis and modeling. This is the task undertaken in the present work.
Multivariate generalized Linnik distribution
Gneiting and Schlather [18] asserted that using similar arguments as of [16] and references therein, one can show that the function ϕ α,ν,n (t) (1.6) is a covariance function of a stationary Gaussian random field if and only if α ∈ (0, 2] and ν > 0. This implies that when α ∈ (0, 2] and ν > 0, p α,ν,n (x) ≥ 0 and therefore is also the probability density function of a distribution. In this section, we provide another argument to show that ϕ α,ν,n (t) (1.6) is the characteristic function of a probability distribution. The following is a generalization of the result of Devroye [8] to multivariate case. Proposition 2.1. Given α ∈ (0, 2], ν > 0 and Σ a positive definite matrix of rank n, let S α,n,Σ be a symmetric multivariate stable random variable with characteristic and let U ν be an independent univariate gamma random variable with probability density function Then the characteristic function of the random vector X α,ν,n,Σ = U 1 α ν S α,n,Σ is given by (1.5). Proof.
We call the random vector X α,ν,n whose characteristic function is given by (1.6) an (α, ν, n) Linnik random vector. As in the introduction, denote by p α,ν,n (x) its probability density function. In other words, p α,ν,n (x) is the unique function such that Since ϕ α,ν,n (t) is a radial function, p α,ν,n (x) is also a radial function. Denoting by q α,ν,n (r) the function q α,ν,n (r) = p α,ν,n (x)| x =r .
Then Schoenberg's formula gives where J µ (z) is the Bessel function of the first kind. In this paper, we want to study the properties of the function q α,ν,n (r). Proposition 2.1 gives a relation between the probability density functions of the generalized Linnik distribution and the symmetric stable distribution. More precisely, let S α,n (x) be the probability density function of the symmetric stable distribution with characteristic function exp(− t α ), i.e.
Since K µ (z) is an infinitely differentiable function and (see e.g. [21]), Proposition 2.4 shows that q α,ν,n (r) is an infinitely differentiable function of r when r ∈ (0, ∞). When ν = 1, (2.10) has been proved by Ostrovskii [62]. When n = 1, it was proved in [53,28,11,12]. Both (2.10) and (2.11) can be proved by showing that they satisfy (2.2). However, to motivate these formulas, one can start from (2.9) when αν > n−1 2 . In this case, the formula (2.9) can be evaluated explicitly when α = 2 which gives (2.11). When α ∈ (0, 2), we use the fact that (see e.g. [21]) is the Hankel's function of the first kind and N µ (z) is the Bessel function of the second kind or called the Neumann function. A change of contour of integration from the positive real axis to the positive imaginary axis gives (2.10). For details, see [49].
3.1. α = 2. When α = 2, the asymptotic behavior of K µ (z) as z → ∞ (see e.g. [21]) gives Proposition 3.1. When α = 2 and ν > 0, the asymptotic expansion of the function q α,ν,n (r) when r → ∞ is given by In particular, the large-r leading term of q 2,ν,n (r) is given by For r → 0, the explicit series representation of K µ (z) about z = 0 (see e.g. [21]) gives: Proposition 3.2. When α = 2 and ν > 0, I. If n 2 − ν is not an integer, then the series expansion of q α,ν,n (r) about r = 0 is given by II. If n 2 − ν = l is an integer, then Notice that the behavior of q 2,ν,n (r) as r → 0 depends on whether n 2 − ν is an integer. If n 2 − ν is not an integer, q 2,ν,n (r) can be represented by the sum of two series, one is an absolutely convergent power series, and the other is the multiplication of r 2ν−n with an absolutely convergent power series. When n 2 − ν is an integer, q 2,ν,n (r) can be represented by the sum of three terms, one is an absolutely convergent power series, one is the multiplication of r min{2ν−n,0} with a polynomial, and the third one is the multiplication of ln r with an absolutely convergent power series. Corollary 3.3. When α = 2 and ν > 0, the r → 0 leading term of the function q α,ν,n (r) depends on the value of ν: II. If ν > n 2 , then q 2,ν,n ∼ Γ ν − n 2 2 n π n 2 Γ(ν) .
Proposition 3.4. When α ∈ (0, 2) and ν > 0, the following formula is valid: For r → ∞, In particular, the large-r leading term of q α,ν,n (r) is given by Proof. Making a change of variable in (2.10), we have Using integration by parts, one can check easily that for an infinitely differentiable function g(y), we have This shows that Notice that The assertion (3.1) follows from the formula (see e.g. [21]) An immediate corollary of Proposition 3.4 is Corollary 3.5. When α ∈ (0, 2) and ν > 0, the asymptotic expansion of q α,ν,n (r) as r → ∞ is given by For the special cases where ν = 1 or n = 1, (3.7) was obtained in [28,62,11,12,13] by different methods. Notice that the large r-asymptotic behavior of q α,ν,n (r) is very different for the case of α ∈ (0, 2) and α = 2. q α,ν,n (r) decays polynomially when α ∈ (0, 2) and exponentially when α = 2. Also notice that the large-r leading term of q α,ν,n (r) is of order r −α−n , which is the same as the large-r leading term ofs α,n (r) (2.6).
Now we turn to the asymptotic behavior of q α,ν,n (r) when r → 0. For this purpose, we need to derive an integral representation of q α,ν,n (r) which extends the results of [28,62,11,12,13]. We begin with the following lemmas.
Lemma 3.6. Let z be a complex number such that |arg (z)| < π, then Proof. Using the fact that when u ≫ 1 (see e.g. [3]), we find that if u ≫ 1, Here and the followings, C or C 1 , C 2 represent constants whose values can be different in different lines. (3.10) shows that the right hand side of (3.8) indeed defines an analytic function of z when |arg (z)| < π. Now using (3.11) For u ≫ 1, (3.9) implies that Therefore we can interchange the order of integrations in (3.11) when |arg (z)| < π/2 to obtain (3.8). The general case where |arg (z)| < π follows by analytic continuation.
Proof. From Lemma 3.6, we have Therefore, it suffices to show that Notice that the functions z → sin z, z → cos z, z → u z , u ∈ R and z → Γ(z) are all real on the real axis. Reflection principle implies that they satisfy f (z) = f (z). On the other hand, (3.14) By taking the complex conjugate of (3.14), it is easy to see that (3.14) is real. Therefore (3.12) holds. (3.13) follows analogously.
Now we can prove the following.
For 0 ≤ u ≤ 1 and y ≫ 1, (2.12) and (3.9) imply that (3.18) For u ≥ 1 and y ≫ 1, (2.12) and (3.9) give (3.19) Since c < 1/α, (3.18) and (3.19) show that we can interchange the order of integrations in (3.17) and use (3.6) to obtain (3.15) follows by using the identity By making a change of variable, it is easy to see that (3.15) coincides with the results of [28,62,11,12,13] in the special cases where ν = 1 or n = 1. However, instead of proving that the right hand side of (3.15) satisfies the equation (2.2), we choose to give a direct derivation of (3.15) here.
These are simple poles except if n + 2j = α(ν + l) for some (j, l) ∈N ×N. Herê N = N ∪ {0} is the set of nonnegative integers. Define the set Λ n by Λ n = (α, ν) ∈ (0, 2) × R + | n + 2j = α(ν + l) for some (j, l) ∈N ×N , so that (α, ν) ∈ Λ n if and only if f α,ν,n (w; r) has double poles on the right half plane Re w > 0. The detail characterization of the set Λ n is deferred to the end of this section. In the following, we give the asymptotic expansion of q α,ν,n (r) when r → 0.
One notice that putting α = 2 in the results of Corollary 3.10, one obtains the results of Corollary 3.3. We also remark that the large-r asymptotic expansion of q α,ν,n (r) when α ∈ (0, 2) (3.1) can also be obtained from (3.15) by considering the poles on the left of the line Re w = c. Now we analyze in more detail the conditions for which (α, ν) ∈ Λ n . Observe that it is necessary that both α and ν are rational numbers or both are irrational numbers. If α and ν are both irrational, then n + 2j 1 = α(ν + l 1 ) and n + 2j 2 = α(ν + l 2 ) for (j 1 , l 1 ), (j 2 , l 2 ) ∈N ×N if and only if 2(j 1 − j 2 ) = α(l 1 − l 2 ), if and only if j 1 = j 2 and l 1 = l 2 . In other words, if α and ν are both irrational numbers, there is at most one pair of nonnegative integers (j, l) such that n + 2j = α(ν + l). For the case where both α and ν are rational numbers, we write α = a/b and ν = e/f , where a, b, e, f ∈ N, gcd (a, b) = 1 and gcd (e, f ) = 1. Then (3.28) n + 2j = α(ν + l) =⇒ n + 2j = a b Since gcd (a, b) = 1, gcd (e, f ) = 1 and the left hand side is an integer, it is necessary that f divides a and b divides e + lf . Let a = mf for some m ∈ N. Since gcd (a, b) = 1, this implies that gcd (b, f ) = 1. Therefore, it is always possible to find positive integers k and l such that (3.29) e + lf = kb.
We have then a b However, there does not necessary exist a nonnegative integer j such that n + 2j = km. We need to discuss the parity of n. If n is odd, then n + 2j is odd for any j ∈N. Therefore for n + 2j = km, it is necessary that m and k are both odd. m is odd implies that both a and f are even or both are odd. In the case where a and f are both even, b and e are both odd. Then for any l ∈N, e + lf is always odd. Therefore if (k, l) ∈ N ×N is a solution to (3.29), k is necessary odd. This implies that there are always solutions (j, l) ∈N ×N to (3.28). Moreover, if (j 0 , l 0 ) is the smallest nonnegative solution, then all the other solutions are given by (j q , l q ) = (j 0 + aq/2, l 0 + bq), q ∈N. In the case where a and f are both odd, if (k, l) is a solution to (3.29), then so is (k + f, l + b), but one of k and k + f must be odd. In this situation, we also find that there always exist solutions to (3.28). Moreover, if (j 0 , l 0 ) is the smallest nonnegative solution, then all the other solutions are given by (j q , l q ) = (j 0 + aq, l 0 + 2bq), q ∈N.
For the case where n is even, n + 2j is even. In this case, m can be either odd or even. If m is odd, k has to be even. The same reasoning above shows that a and f has to be both odd and there is always solutions to (3.28). Moreover, if (j 0 , l 0 ) is the smallest nonnegative solution, then all other solutions are given by (j q , l q ) = (j 0 + aq, l 0 + 2bq), q ∈N. If m is even, then a is even and there is no restriction on k. In this case, there is always solutions to (3.28) and all the solutions can be expressed by the smallest nonnegative solution (j 0 , l 0 ) via (j q , l q ) = (j 0 + aq/2, l 0 + bq), q ∈N.
We summarize the results as follows: If a is even, all solutions are given by (j q , l q ) = (j 0 + aq/2, l 0 + bq), q ∈N.
4.
Representation of q α,ν,n (r) in terms of entire functions In this section, we are going to analyze under what conditions the asymptotic expansion of q α,ν,n (r) given in Proposition 3.9 will give rise to representation of q α,ν,n (r) in terms of absolutely convergent power series. First we have Proposition 4.1. Let α ∈ (0, 2) and ν > 0. Then q α,ν,n (r) = lim N →∞ S α,ν,n;N (r) uniformly for r in any compact subsets of (0, ∞). Here S α,ν,n;N (r) is given by (3.21).
Now we want to investigate the analyticity of the series representation of q α,ν,n (r). First we have Proposition 4.2. Let α = a/b ∈ (0, 2) and ν = e/f > 0 be rational numbers with gcd (a, b) = gcd (e, f ) = 1. If one of the conditions (i) a is not divisible by f , (ii) n is an odd integer and a/f is an even integer, (iii) n is an even integer, a and f are both even integers and a/f is an odd integer, holds, then q α,ν,n (r) can be represented by: q α,ν,n (r) = 1 2 n π n 2 Γ(ν) r αν−n 2 αν−n A 1;α,ν,n (r α ) + A 2;α,ν,n r 2 , where A 1 (z) and A 2 (z) are entire functions given by
(4.4)
Proof. If the conditions (i) or (ii) holds, Proposition 3.11 implies that (α, ν) / ∈ Λ n . Using Propositions 3.9 and 4.1, we only need to show that the right hand sides of the equations (4.4) define entire functions. Let Given l ∈N, there exists a unique j l such that Since α(ν+l)−n 2 − j l = 0, .
Similarly, given j ∈N, there exists a unique l j such that Together with Stirling's formula (4.2), we find that for l and j large enough, Therefore, the two infinite series on the right hand sides of the equations (4.4) converge absolutely and uniformly on any compact subsets of C. Proposition 4.3. Let α = a/b ∈ (0, 2) and ν = e/f > 0 be rational numbers with a, b, e, f ∈ N and gcd (a, b) = gcd (e, f ) = 1. If one of the conditions (i) n is an odd integer and a/f is an odd integer, (ii) n is an even integer and a/f is an even integer, (iii) n is an even integer, a/f is an odd integer and a and f are both odd integers, holds, let (j 0 , l 0 ) be the smallest nonnegative integers satisfying n + 2j = α(ν + l), (j, l) ∈N ×N. Then q α,ν,n (r) can be represented by: q α,ν,n (r) = 1 2 n π n 2 Γ(ν) r αν−n 2 αν−n A 1;α,ν,n (r α ) + A 2;α,ν,n r 2 + r 2j0 2 2j0 A 3;α,ν,n (r a ) + r 2j0 2 2j0 A 4;α,ν,n (r a ) log r 2 , where if a is odd, A 1 (z), A 2 (z), A 3 (z) and A 4 (z) are entire functions given by whereas if a is even, Proof. The fact that q α,ν,n (r) can be represented in the form (4.5) follows from Propositions 3.9, 3.11 and 4.1. The proof of the absolute convergence of the series A 1;α,ν,n (z) and A 2;α,ν,n (z), A 3;α,ν,n (z) and A 4;α,ν,n (z) follows the same as the proof of Proposition 4.2 and the formula (see e.g. [3]): Propositions 4.2 and 4.3 show that when α and ν are both rational numbers, q α,ν,n (r) can be represented in terms of entire functions. In the case where α is a rational number and ν is an irrational number, we have Proposition 4.4. Let α = a/b ∈ (0, 2) be a rational number with a, b ∈ N and (a, b) = 1. If ν > 0 is an irrational number, then q α,ν,n (r) can be written as the sum of two absolutely convergent series as in Proposition 4.2. Proof. Notice that the conditions α is rational and ν is irrational imply that (α, ν) / ∈ Λ n . Given ν > 0 an irrational number, let q ν be the unique positive integer such that This implies that for all q ∈ Z Given l ∈N, there exists a unique j l such that (4.8) implies that Similarly, given j ∈N, there exists a unique l j such that (4.8) implies that The rest of the proof is the same as Proposition 4.2.
When ν is a rational number but α is irrational, the situation is more complicated. We have to introduce the concept of Liouville numbers as in [28,62,11,12,13]. Recall that an irrational number β is called a Liouville number if for all m = 2, 3, . . ., there exists a rational number p/q, p, q ∈ Z such that By Liouville theorem (see e.g. [63]), all Liouville numbers are transcendental and the set of Liouville numbers, denoted by L, has Lebesgue measure zero. If α is an irrational number but not a Liouville number, then q α,ν,n (r) can be written as the sum of two absolutely convergent series as in Proposition 4.2.
Recall that an entire function E(z) = ∞ k=0 E k z k is said to be of order ρ if and if It is easy to deduce from the proof of Proposition 4.5 that when α, ν are both rational numbers, or if one of α or ν is a rational number, and the other is a non-Liouville irrational number, then the entire functions A 1;α,ν,n (z) and A 2;α,ν,n (z) have order 1/α and 1/2 respectively. In the case where α = a/b and ν = e/f satisfy the conditions (i), (ii) or (iii) of Proposition 4.3, the entire functions A 3;α,ν,n (z) and A 4;α,ν,n (z) have order 1/a. Although Propositions 4.2, 4.3 and 4.4 are proved for α ∈ (0, 2), one can show that by naively extending the results to α = 2, one obtains exactly the results for α = 2 given in Proposition 3.2.
At this point, one may wonder whether there exist any values of α and ν where both the series A 1;α,ν,n (z) and A 2;α,ν,n (z) are divergent. As was proved in [62], when ν = 1, there is a dense subset of α ∈ (0, 2) where both the series A 1;α,ν,n (z) and A 2;α,ν,n (z) are divergent. Now we extend the result to general ν > 0 which can also be regarded as the generalization of a result in [12] for n = 1.
Define Ω to be the set of real numbers of the form Finally define E to be the set E = α ∈ (0, 2) : αν = eα f = x + y, x ∈ Λ, y ∈ Ω .
|
2009-03-30T23:44:54.000Z
|
2009-03-30T00:00:00.000
|
{
"year": 2009,
"sha1": "ee765fdbfb0d69284222e2b93969026cbac402a1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0903.5344",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ee765fdbfb0d69284222e2b93969026cbac402a1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
3047066
|
pes2o/s2orc
|
v3-fos-license
|
How Good Are Statistical Models at Approximating Complex Fitness Landscapes?
Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations.
which defines for sequences of n loci, with 4 alleles per locus the following coe cients, • 1 intercept, I.
Note that the coe cients in equation 3 are not determined uniquely without the regularization parameter .
The generalized kernel ridge-regression (GKRR) implementation we use (Hinkley et al., 2011), is applied directly to the binary encoding of each sequence, without any information about the number of loci per allele. Thus, it defines 4n 2 pairwise coe cients, resulting in 6n extra "nonsense" coe cients. These additional coe cients represent the pairwise e↵ects of simultaneously observing 2 di↵erent alleles at the same locus, which is not allowed to happen in our situation. (However, in a hypothetical scenario this could be used when di↵erent alleles are present at known frequencies). These "nonsense" coe cients are not explicitly set to 0, but are in practice close to 0 (personal communication, Trevor Hinkley). Thus, for the sequences used here, where n is 217, the model has 377,147 coe cients, of which 1,302 are superfluous.
In the sequence encoding used here sequences are encoded independent of any reference sequence, as opposed to other representations where sequences are encoded as mutations made to a wild type sequence (see for instance Otwinowski and Plotkin (2014)). Our representation results in sequences that are longer and a model with more parameters, but it allows for a more general treatment of the sequence landscape in that the sequence representation is not tied to an arbitrary sequence. Deviance is a standard measure for generalized models and is analogous to the coe cient of determination, R 2 , of linear models with normal error structures. The deviance of a model is defined as the di↵erence between the log-likelihoods of the model and a complete model (a model with a parameter for every observation such that it fits the data perfectly), multiplied by 2 (Nelder and Wedderburn, 1972).
GKRR assumes a Poisson error structure (Supplement, Section 1.2, Hinkley et al. (2011)), which results in the following formula for the deviance, where N is the number of data points.
Predictive power is measured as the fraction of the deviance explained. This is measured as the improvement over a null model, which is equivalent to fitting only the mean of the data points (i.e. y i =ȳ 8i) and thus maximizes the deviance. If the deviance of the null model is D N then the fraction of the deviance explained by a model is given by DN D DN .
Extrapolating from the independent e↵ects of single and pairwise mutations The quadratic model that we use assumes that only main e↵ects and pairwise epistatic interactions contribute to the fitness of a sequence. In this section we investigate if it is possible to extrapolate from the independent fitness e↵ects of single and pairwise mutations and thereby assess the contribution of higher-order interactions to sequence fitness. We train a quadratic model on the dataset containing all sequences reachable from the focal genotype within its 2-mutational neighbourhood (defined as D 2 in Methods). This dataset contains 211,576 sequences representing the independent fitness e↵ects of all possible single and double mutants of the focal genotype. Although these are less sequences than the number of coe cients in the model (see above), assuming that the fitness landscape contains only main e↵ects and pairwise epistatic interactions, we would expect that it is possible to extrapolate to sequences with higher numbers of mutations using a model trained on this dataset. On the other hand, if higherorder interactions play a significant role, we expect to observe a decrease in the quality of the prediction as more mutations are added to the test sequences. indicating a tendency to underestimate high fitnesses and overestimate low fitnesses. For datasets sampled at more than 5 mutations from the focal genotype the model cannot explain any of the variance. Thus, it is apparent that higher-order interactions are present within the fitness landscape and that it is impossible to represent the fitness of sequences with higher numbers of mutations as a combination of independent fitness e↵ects of single and pairwise mutations. Ability of a quadratic model to extrapolate when trained on the independent fitness e↵ects of all single and pairwise mutations of the focal genotype. The model was trained on all one and two step mutants of the focal genotype (211,576 sequences) and tested on 6 datasets of 5,000 sequences each, randomly sampled at Hamming distances of 3, 4 and 5 from the focal genotype. Data points are the mean predictive power (blue) and correlation between true fitnesses and residuals (orange) among replicates. Error bars indicate the standard error of the mean. For comparison the sixfold cross-validated results are shown for a Hamming distance of 2, where the training set was randomly split into training and test sets of 206,576 and 5,000 sequences respectively. Because of the cross validation the prediction is not perfect and there is a substantially higher correlation between the true fitness and the residuals. Correlation between the true fitness value and the residuals under di↵erent sampling regimes. Four datasets were sampled from the same quasi-empirical RNA fitness landscape using di↵erent sampling regimes. Each dataset contains 65,000 sequences (65,536 for Complete subset) and was randomly split into training and test sets of 60,000 and 5,000 sequences (5,536 for Complete Subset) respectively. Sixfold cross-validation was used to assess the predictive power and biases of the linear and quadratic models on the simulated datasets. Error bars indicate the standard error of the mean. FIG. S17. E↵ect of the sampling density on the ability of a linear model to approximate a fitness landscape from randomly sampled sequences. Datasets are composed of 65,000 sequences randomly sampled within successively higher Hamming distances from the focal genotype. 
Datasets were randomly split into training and test sets of 60,000 and 5,000 sequences respectively and sixfold cross-validation was used to assess the predictive power and biases of the linear model on the simulated datasets. Data points are the mean predictive power (blue) and correlation between true fitnesses and residuals (orange) among replicates. Error bars indicate the standard error of the mean. For comparison Complete Subset and Evolved (shown in Fig. 3 and S9) are also shown.
|
2018-04-03T03:42:29.955Z
|
2016-05-14T00:00:00.000
|
{
"year": 2016,
"sha1": "c4e44b665bd8c77b00387c3ea0f8b50c8b2a555f",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/mbe/article-pdf/33/9/2454/17473460/msw097.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "97c446928ca1a70ed0da7952ce1b496eb9f004a3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
254043073
|
pes2o/s2orc
|
v3-fos-license
|
Metal‐Free Intermolecular C−H Borylation of N‐Heterocycles at B−B Multiple Bonds
Abstract Carbene‐stabilized diborynes of the form LBBL (L=N‐heterocyclic carbene (NHC) or cyclic alkyl(amino)carbene (CAAC)) induce rapid, high yielding, intermolecular ortho‐C−H borylation at N‐heterocycles at room temperature. A simple pyridyldiborene is formed when an NHC‐stabilized diboryne is combined with pyridine, while a CAAC‐stabilized diboryne leads to activation of two pyridine molecules to give a tricyclic alkylideneborane, which can be forced to undergo a further H‐shift resulting in a zwitterionic, doubly benzo‐fused 1,3,2,5‐diazadiborinine by heating. Use of the extended N‐heteroaromatic quinoline leads to a borylmethyleneborane under mild conditions via an unprecedented boron‐carbon exchange process.
The now-ubiquitous Suzuki-Miyaura cross-coupling reaction, [1], [2] and the growing realization that C-B bonds can act as nearuniversal placeholders for the functionalization of organic molecules, [3], [4] has spurred enormous interest in the efficient synthesis of borylated organics. Transition-metal catalyzed C-H borylation has emerged as a promising, direct, and often selective route to borylated precursors for Suzuki-Miyaura cross-coupling reactions. [5], [6] However, the toxicity and environmental impact of the transition metals used in catalysis, and the expense related to their removal from the products, has caused concern in the chemical industry. Consequently, the search for metal-free C-H borylation protocols has become a hotly-contested area of research, [7]- [13] however, this chemistry is hampered by the relative inertness of most C-H bonds and chemo-/regioselectivity issues arising from the multiple C-H sites present in most target molecules. In particular, protocols for selective C-H borylation of heterocyclic compounds with relatively reactive auxiliary sites, such as the N atoms of pyridines, present further synthetic challenges, and are exceedingly rare even with the assistance of transition-metal catalysts. [14], [15] The recent development of highly reactive molecules containing B-B multiple bonding [16]- [18] provides interesting opportunities for novel bond activation reactions. Indeed, doubly Lewis-base-stabilized diborynes, of the form [LBºBL] (L = Lewis base such as N-heterocyclic carbenes or cyclic alkyl(amino)carbenes), have already been shown to undertake a number of interesting intermolecular bond activation reactions, leading to 1,2-additions across their BºB triple bonds. These include the H-H bond of dihydrogen, [19] the C-O bonds of CO and CO2, [20], [21] B-H [22] and B-B [23] bonds, S-S and Se-Se bonds, [24] and even the activated C-H bonds of acetone and alkynes. [25], [26] The demonstrably high reactivity of diborynes makes them good candidates for the highly challenging task of activating the C-H bonds of (hetero)arenes, prompting us to combine these two classes of reagents in this work.
Herein we report three different modes of regioselective, intermolecular C-H borylation of N-heterocycles with carbenestabilized diborynes, compounds with varying degrees of boronboron multiple bonding. [16], [17] All of these reactions occur at ambient temperature and in the absence of catalysts or additives. Depending on the diboryne precursor, either one or two molecules of pyridine can be activated, leading either to a simple pyridyldiborene or a tricyclic alkylideneborane, respectively. Use of the larger heteroaromatic quinoline leads initially to the simple C-H borylation product, which spontaneously undergoes a highly unusual B/C exchange, leading to a borylmethyleneborane.
Thereby, treatment of 1 with an equimolar amount or an excess of pyridine led to an immediate color change from red to blue and a new set of 11 B NMR signals at 35 and 25 ppm (1: δ( 11 B) = 56 ppm). After evaporation of all volatiles under high vacuum and washing with hexane, the blue solid 3 was isolated in 82% yield (Scheme 1). A single-crystal X-ray diffraction (SCXRD) study unequivocally revealed 3 to be not a simple pyridine adduct of 1 but a doubly base-stabilized 1-hydro-2pyridyldiborene, suggesting ortho-C-H borylation of pyridine COMMUNICATION ( Figure 1). [30] A signal corresponding to the boron-bound hydrogen Scheme 1. Single and double C-H borylation of diborynes.
atom was detected in the 11 B-decoupled 1 H NMR spectrum of 3 at 3.35 ppm as a broad singlet. Apart from those corresponding to the carbene carbon nuclei, the most low-field 13 C NMR resonance can be assigned to the boron-bound carbon atom of the pyridyl substituent (180.3 ppm), identified by a 2D 13 C, 1 H HMBC NMR experiment. The solid-state structure of diborene 3 ( Figure 1) shows a B1-B2 distance of 1.591(5) Å, lying in the expected range for doubly NHC stabilized diborenes. [19], [23], [31] The nearly identical B1-C1 (1.546(5) Å) and B2-C2 (1.563(5) Å) distances, the distinct B2-C3 single bond (1.589(5) Å), as well as the ca. 50° twist of the pyridyl ring from the central diborene plane, suggest negligible π-delocalization between the B=B and pyridyl groups. This is supported by DFT-calculated molecular orbitals (MOs) of 3, with both the HOMO and LUMO resembling those of conventional doubly NHC-stabilized diborenes ( Figure 2). The HOMO displays delocalization of π electron density over the C NHC -B=B-C NHC axis, while the LUMO shows π* antibonding character at the B-B bond and π bonding character at the B-C NHC bonds.
While combining CAAC-stabilized diboryne 2 with one equivalent of pyridine led to roughly half of the precursor remaining unreacted, adding two equivalents of pyridine to a benzene solution of 2 (Scheme 1) at room temperature led to a color change from purple to pink within one hour. The 11 B NMR spectrum of the reaction mixture displayed new resonances at 32 and 22 ppm, upfield of those of the starting material (1: δ( 11 B) = 80 ppm.) After workup, a purple solid was obtained in 82% yield. A SCXRD study revealed the compound to be the tricyclic diazadiborinine derivative 4, resulting from activation of two pyridine molecules. The 1 H NMR spectrum of 4 shows, in addition to expected aromatic signals of the CAAC ligands, only four additional protons in this region, with an additional set of signals found in the alkene region (5.89-5.54 ppm), confirming the loss of aromaticity of one pyridyl group and consequent formation of a butadiene-type structural motif. A signal at 4.26 ppm can be assigned to the hydrogen atom now bound to a former carbene carbon atom (H2 in Figure 1, middle), in line with previous observations of H-shifts onto CAAC ligands, [32] as well as results observed during element-hydrogen bond activations induced by CAAC itself. [33] A broad 1 H NMR spectroscopic signal at 3.33 ppm (H1 in Figure 1, middle) shows a cross-signal to a resonance in the 13 C, 1 H HSQC NMR spectrum at 65.2 ppm, corresponding to the hydropyridyl carbon atom bound to boron (C2 in Figure 1, middle). The solid-state structure of 4 ( Figure 1, left) shows a distinct butadiene-like structure of the hydropyridyl unit, with alternating C-C bond distances (C5-C6: 1.342(4); C6-C7: 1.450(4); C7-C8 1.328(4) Å), while the aromatic pyridyl unit shows typical bond equilibration (1.425(4)-1.346(4) Å), comparable to those of a recently published CAACstabilized diboraanthracene diradical. [34] These differences are confirmed by the calculated zz components of the shielding tensor nucleus-independent chemical shift (NICSzz(1)) elements ( Figure 3). Also notable are the B-C distances: while the B1-C2 distance (1.611(4) Å) suggests a single bond, the B1-C3 distance (1.507(4) Å) indicates double bond character and the presence of an alkylideneborane unit. [35] The B1-C1 distance (1.521(4) Å) is in the expected range for a dative CAAC-B interaction with significant π-bonding character, whereas the B2-C4 (1.615(4) Å) distance suggests a single covalent bond.
Heating a C6D6 solution of 4 at 80 °C for 14 h led to an additional color change of the reaction mixture from light to dark pink, the 11 B NMR spectrum of which showed only a single resonance at 28 ppm (4: δ( 11 B) = 32, 22 ppm). The purple solid 5 was obtained after workup in nearly quantitative yield, the 1 H NMR spectrum of which showed the absence of signals in the alkene region and no hydropyridyl signal comparable to that of 3 (δ( 1 H) = 3.33 ppm). Instead, two signals corresponding to protonated CAAC substituents were observed (4.96 and 4.87 ppm). These data suggested rearomatization of the hydropyridyl unit and a concomitant H-shift to the remaining CAAC unit (Scheme 1).
A SCXRD study of 5 using crystals obtained from a saturated pentane solution indicated the presence of an essentially planar tricyclic central unit, with the aryl substituents of both CAACH substituents oriented on opposite sides of the tricyclic core. Alternatively, a solid-state structure derived from crystals prepared using benzene as crystallization medium shows both aryl substituents to be oriented on the same side of the tricyclic core, leading to a butterfly-like structure with an angle of 21° (Figure 1, right).
Together, these spectroscopic and structural data indicate that 5 is a very rare example of a 1,3,2,5-diazadiborinine. The high symmetry of 5 results in disorder in the molecular structure shown in Figure 1, whereby a molecule with swapped C and N atoms is superimposed on the first. This disorder led to reduced precision in the structure, prompting us to turn to DFT calculations to gain a better idea of the structure and energetics of 5. All distances of the central core, both experimental and calculated (1.422-1.532 Å), lie in the range of elongated double bonds, suggesting extended delocalization, similar to results reported by Kinjo et al. for their 1,3,2,5-diazadiborinines. [36], [37] The calculated NICSzz(1) values (Figure 3) of the outer rings of 5 suggest greater aromaticity than those of 4, with the zwitterionic inner B2N2C2 core being relatively aromatic. These NICS values underscore the similarity of 5 to its purely hydrocarbon analogue anthracene, which is known to exhibit a higher NICS(0) for its central ring relative to the outer rings. [38] Accordingly, the transformation of 4 to 5 is exergonic by -25.9 kcal mol -1 based on DFT calculations at the SMD(benzene):ωB97X-D/6-311++G(d,p)//ωB97X-D/6-31G(d,p) level.
In order to test if diborene 3 also undergoes thermally-induced reactivity, benzene solutions of 3 were heated independently to 60 °C and 80 °C. However, in both cases this led only to decomposition.
Given the intriguing reactivity of diborynes with monocyclic Nheterocycle pyridine, we sought to expand our scope to bicyclic N-heterocycle quinoline. The reaction of 2 with quinoline gave an inseparable mixture of products, however, treatment of 1 with quinoline resulted in an immediate color change from red to blue, similar to the above reaction of 2 with pyridine. 11 B NMR spectroscopic resonances at 25 and 30 ppm were observed after a few minutes, suggesting the presence of diborene 6, analogous to 3. However, the resonance at 25 ppm had disappeared after 10 minutes, while the signal at ca. 30 ppm had broadened significantly. An additional color change to green occured within one hour, and a near-complete decoloration took place overnight, the remaining light yellow solution suggesting the absence of diborene in the mixture. After workup by washing the dried reaction mixture with hexane and crystallization from a saturated hexane solution, the product was identified by SCXRD as the borylalkylideneborane 7 (Scheme 2), a constitutional isomer of the presumed intermediate 6 in which one of the carbene carbon atoms has exchanged with one boron atom. The unexpected and highly unusual structure of 7 is confirmed by its NMR spectra. A singlet resonance corresponding to the alkylideneborane C-H proton was found in the 1 H NMR spectrum at 3.79 ppm, presenting a cross-signal to a 13 C NMR resonance at 104.3 ppm in the 13 C, 1 H HSQC NMR spectrum. This resonance is downfield of typical alkene resonances, but is in the same range as that of a cyclic borylalkylideneborane reported by Berndt et al. (115.2 ppm). [39] The broad resonance observed in the 11 B NMR spectrum of 7 (30 ppm) can be rationalized by the superposition of two signals.
Because of the unexpected swapping of B and C atoms in the reaction furnishing 7, we carried out DFT calculations (SMD(benzene):wB97X-D/6-311++G(d,p) level, see ESI for further details) in order to establish a plausible reaction mechanism for the rearrangement. Our calculations suggest that the reaction starts with the coordination of quinoline at one boron atom of diboryne 2 (Figure 4). Electronically similar to CAAC, the SIDep ligand has enhanced p acidity that allows adoption of a cumulenic structure so that one of the boron atoms can accommodate the lone pair of the quinoline in 2ad
|
2022-11-29T06:16:48.911Z
|
2022-11-28T00:00:00.000
|
{
"year": 2022,
"sha1": "7c3c633d1509fe64aa86b4e6724c29af1fc1ee84",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "83c2c71954ee06ea94eb3f6fbb828cff96aba22a",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
59337775
|
pes2o/s2orc
|
v3-fos-license
|
Therapeutic efficacy and safety of radiofrequency ablation for the treatment of trigeminal neuralgia: a systematic review and meta-analysis
Objective The objective of this study was to summarize the effectiveness and safety of trigeminal neuralgia (TN) treatment via different radiofrequency approaches such as continuous radiofrequency (CRF), pulsed radiofrequency (PRF), and combined CRF and pulsed radiofrequency (CCPRF) treatments, thus providing high-quality clinical evidence for TN treatment. Methods A series of databases were searched for relevant articles published between January 1998 and April 2018. The modified Jadad scale was referred to evaluate the methodological quality of the included studies. Data were extracted independently, and the outcome and safety of different routes, temperatures, and guidance used in CRF, PRF, and CCPRF were compared. Meta-analysis and publication bias were calculated using Review Manager software. Results In total, 34 studies involving 3,558 participants were included. With regard to TN treatment, PRF had no difference in cured rate in comparison with CRF, while CRF was more effective than CCPRF (P<0.05). The comparison of complication rates showed that PRF and CCPRF were safer. For puncture guidance via CRF, three-dimensional-printed template was more accurate in success rate at first puncture than computed tomography guidance (P<0.05). For puncture route, foramen rotundum (FR) or pterygopalatine fossa (PPF) route had no significance in efficiency rate via CRF in comparison with foramen oval (FO) route, but PPF and FR routes were safer. For CRF treatment, low temperature (68°C–70°C) compared with high temperature (71°C–75°C) had no effect. Moreover, higher temperature (66°C–80°C) had a greater effect compared with lower temperature (55°C–65°C) on TN treatment (P<0.05), while the safety of which was decreasing. Conclusion CCPRF could achieve a greater effect and safety on TN treatment. FR and FO routes in TN puncture treatment via CRF are safer. Medium temperature range is better for CRF therapy, and higher temperature is recommended in PRF, especially for the elders. Further international multicenter trials are needed to confirm the evidence.
Introduction
Neuropathic pain is a kind of painful experience, which is often associated with a specific condition. There are several types of neuropathic pain, such as postherpetic neuralgia, trigeminal neuralgia (TN), painful diabetic peripheral neuropathy, and glossopharyngeal neuralgia. Among them, TN is a clinically common painful disease and was estimated to account for 12.6-28.9/100,000 in the general population per year. 1 It is a recurrent intense paroxysmal pain in the facial trigeminal area which is like knife-cutting, acupuncture, electric shock, or cauterization. Its frequency and degree of pain increase with the duration of the disease. TN can be effectively treated by minimally invasive techniques such as microvascular decompression, 2 balloon compression, 3 continuous radiofrequency (CRF) thermocoagulation, and radiosurgery. 4 In these approaches, CRF is widely used for the clinical treatment of TN with a pain relief rate of 90-100%. 5 CRF could produce heat by vibration and friction, which further leads to thermocoagulation, denaturation, and necrosis of the target tissue. However, patients accepting CRF may develop various complications, such as facial numbness, mouth penetration, eyelash hypoesthesia, forehead numbness, corneal hypoesthesia, and dysacousis. 6,7 These complications are presumably due to neuronal injury mainly produced by surgical puncture and heat from radiofrequency temperature during the thermocoagulation. 8 Moreover, the higher the temperature is, the more those complications are. Another method, pulsed radiofrequency (PRF), an ideal technique for the treatment of chronic pain, has been proved as a minimally invasive, safe, and effective interventional treatment choice for TN patients. [9][10][11][12] PRF uses a lower temperature application of energy generated by the radiofrequency generator at the tip of the needle and transmits the energy to the nerve in a pulse manner. However, its efficacy for pain relief in TN remains controversial. Besides, different temperatures used in those radiofrequency therapy might lead to different outcomes. [13][14][15] Meanwhile, the combined CRF and pulsed radiofrequency (CCPRF) has been proposed and verified in clinical trials with pain relief efficiency compared with PRF or CRF treatment alone in patients with chronic pain. 16 In order to resolve the abovementioned controversies, the aim of this study was to summarize the effectiveness and safety of the treatment of TN via different radiofrequency approaches such as CRF, PRF, and CCPRF treatments. Besides, we compared the outcome and safety of different routes, temperatures, and different kinds of guidance used in above approaches, thus providing high-quality clinical evidence for TN treatment.
Search strategy
We searched the China National Knowledge Infrastructure (CNKI), Chinese VIP Information (VIP), Wanfang, Web of Science, and PubMed databases. Searches that are limited to human studies published between January 1998 and January 2018 were carried out in parallel by YWG and JZ. We used the following search terms: "idiopathic trigeminal neuralgia," "TN," "semilunar ganglion," "gasserian ganglion," "oval foramen," "pulsed," "continuous," "percutaneous," "conventional," "radiofrequency thermocoagulation," "radiofrequency ablation," "CRF," "PRF," "RFT," "CCPRF," "radiofrequency temperature," "treatment," "therapy," "pain relief," "complication*," "recurrence rate," and "satisfact*." The terms were combined with logical connector AND for each component such as patient's condition, intervention, control, and outcome, while OR was used for all candidate terms inside each component. In this way, a subset of citations that address the objective of our research study were generated. The reference lists of relevant articles were hand-searched to get potential eligible studies beyond the electronic searches.
inclusion criteria
All clinical studies involving radiofrequency interventions used for the treatment of classical TN were included. The diagnostic criteria for TN were based on the principle of International Classification of Headache Disorders (International Headache Society [IHS])-II (2004) 17 and IHS-III (2013). 18 The treatments described were CRF, PRF, and PRF + CRF/CCPRF. Outcome indices were as follows: 1) efficiency of treatment: cured rate, determined as the proportions of patients with >50% pain relief, and effective rate, determined as the proportions of patients with >25% pain relief; 2) the severity of pain, defined using visual analog scale (VAS) score (0: no pain; 1-3: mild pain; 4-6: moderate pain; and 7-10: severe pain) or numerical rating scale (NRS; 0: no pain; 1-3: mild pain; 4-6: moderate pain; 7-9: severe pain, 10: the most painful); 3) satisfactory, determined as the assessment of life quality from 0 (lowest) to 22 (highest) using Life Satisfaction Index B; 4) success rate at the first puncture, determined as first-quantified puncture of effective nerve puncture; and 5) safety: complication rate, determined as the incidence of complications including facial numbness, masticatory muscle weakness, neurological disorder, hematoma, nausea, and vomiting.
exclusion criteria
The exclusion criteria were as follows: 1) The subjects of the study were patients accompanied with other diseases affecting TN; 2) studies without the control group; 3) studies about secondary TN caused by tumors, intracranial lesions, multiple sclerosis, herpes zoster, or other severe organ diseases; 4) insufficient data in outcome index; and 5) the literature with a reporting language other than English and Chinese.
selection of studies and data extraction
Three authors (JZ, JLC, and YWG) completed study selection, data extraction, and cross-checking independently according to the inclusion and exclusion criteria. We read the titles and abstracts (when available) of all the articles to identify whether it was related to the theme through the searches. Then, we obtained the full articles of these studies and independently judged whether they met the inclusion criteria or not. We resolved any disagreement and difficulties by discussion or consulting another reviewer. The extracted data included demographic data of patients (sample size, sex, age, preoperational pain duration, pain side, and preoperational drug dosage), treatment protocols (methods, temperature, lesion time, stimulating voltage, stereotaxis, and surgery duration), and the efficacy of the treatment including pain score (VAS and NRS), the quality of life (QoL) as well as complications.
Evaluation of methodological quality for the included studies
The modified Jadad scale 19 was referred to evaluate the methodological quality of the included studies according to the following four items: 1) the generation of random sequences: i) adequate: random numbers or similar methods generated by computers or random number table (two points); ii) unclear: random test without random distribution method (one point); and iii) inadequate: alternate distribution method, such as odd and even numbers (zero point); 2) randomization: i) adequate: center or pharmacy control allocation scheme, or containers with consistent sequence numbers, onsite computer control, sealed opaque envelopes, or other methods so that clinicians and participants cannot predict the allocation sequence (two points); ii) unclear: only using a random number table or other random allocation scheme (one point); iii) inadequate: alternate distribution, case number, and any other measures cannot prevent predictability packets; and iv) unused (zero point); 3) blindness: i) adequate: using completely consistent placebo tablets or similar methods (two points); ii) unclear: just having the statement of blindness, but without description (one point); and iii) inadequate: not by double-blindness or way of blindness is not appropriate, such as the comparison of tablets and injections (zero point); and 4) follow-up: describing the number and reasons for withdrawal (one point), without the number or reasons for withdrawing (zero point). In a word, 1-3 points were considered as low quality, and 4-7 points were considered to be of high quality.
Statistical analysis
In this study, meta-analysis and publication bias were calculated using Review Manager (RevMan5.3; The Nordic Cochrane Center, The Cochrane Collaboration, Copenhagen, Denmark). ORs were used for evaluation, and 95% CIs were calculated for each estimate. Heterogeneity was considered low, moderate, or high for I 2 values <25%, 25-50%, and >50%, respectively. The analyses were performed using a random-effects model on those studies with high heterogeneity (I 2 >50%) and with fixed-effects model on those studies with less heterogeneity (I 2 ≤50%). P≤0.05 was considered statistically significant.
Results
The relevant article search yielded 2,142 references from PubMed, Web of Science, CNKI, VIP, and Wanfang databases, of which 34 articles 14,15, were qualified for this study finally according to the flowchart. Figure 1 presents the study selection process.
Summary characteristics of the included studies
All the included studies met inclusion and exclusion criteria. A total of 34 studies involving 3,558 subjects from different regions including Turkey, Egypt, Korea, and People's Republic of China were finally included in this meta-analysis. The age of the included population mainly ranged from 55 to 75 years, and TN is commonly seen in the middle-aged and elderly people. Each study recruited both men and women in the case and control groups. TN occurred at the left and right sides, and the most suffered areas were on the right side. Meanwhile, most studies suggested that the pain at the two sides were often involved two (V2 + V3) branches, while the pain at one side most commonly involved only one (V2) branch. For the treatment, all studies reported the way of radiofrequency ablation (eg, CRF, PRF, and CCPRF), temperature, surgery time, and guidance (eg, X-ray, computed tomography [CT], type-B ultrasonography, and MRI). In these studies, 19 articles investigated the association between different kinds of guidance and the efficiency of CRF, and six articles reported the association between different routes of thermocoagulation and CRF. Six articles compared the efficiency of CRF by different temperatures, while three articles analyzed the effect of different temperatures in PRF treatment. Preoperational pain duration was partially inconsistent among the involved studies. The outcome index for efficiency and safety involved in different articles were pain score (VAS and NRS), QoL, and complications. The most common complications were facial hematoma, facial numbness, nausea and vomiting, headache, masticatory muscle weakness, hearing loss, facial swelling and congestion, corneal paralysis, and so on. Table 1
Quality of the included studies
Most included studies cannot be completely double-blind; as a result, their Jadad scale scores were less than four points. Table 2 shows the detailed scores of each study. In total, 79.4% (27 of 34) of the included studies were of low quality; the scale of two studies was zero point, nine studies one point, eight studies two points, and eight studies three points. Only seven clinically randomized trials were of high quality. Six studies 22,24,27,28,32,33 in the PRF group compared with the CRF group patients in cured rate showed great heterogeneity (χ 2 =20.66, P=0.0009<0.05, I²=76%); the total effect size OR in this study was 0.60 (95% CI: 0.19, 1.92), and the Z value was 0.86 (P=0.39>0.05), indicating that there was no statistically significant difference in the cured rate between PRF and CRF in TN treatment. Seven studies [22][23][24]27,28,32,33 in the PRF group compared with the CRF group patients in complications of mortality showed great heterogeneity of the studies (χ 2 =28.13, P<0.0001, I²=79%); the total effect size OR in this study was 0.04 (95% CI: 0.01, 0.23), and the Z value was 3.70 (P=0.0002<0.05), indicating that PRF was safer than CRF in TN treatment. comparison of the cured rate between three-dimensional (3D) CT guidance vs manual puncture in cRF treatment Figure 4 shows the comparison of the cured rate between different kinds of guidance for RF (between 3D CT vs manual puncture) in TN treatment. Two studies 40,41 in the 3D CT group compared with the manual puncture group patients showed great heterogeneity of the studies (χ 2 =2.58, P=0.11<0.05, I²=61%); the total effect size OR in this study was 0.85 (95% CI: 0.49, 1.48), and the Z value was 0.57 (P=0.57>0.05), suggesting that there was no significance of the cure rate between 3D CT guidance vs manual puncture in TN treatment.
Comparison of efficiency and safety between PRF and cRF
Comparison of the success rate at first puncture between 3D-printed guide plate vs cT guidance in cRF treatment Figure 5 shows the comparison of the success rate at first puncture between different kinds of guidance for RF (3D-printed guide plate vs CT guide) in TN treatment. Two studies 34,35 in the 3D-printed guide group compared with the CT guide group patients showed no heterogeneity of the studies (χ 2 =0.63, P=0.43>0.05, I²=0%); the total effect size OR in this study was 55.47 (95% CI: 12.47, 246.69), and the Z value was 5.27 (P=0.00001<0.05), indicating that 3D-printed guide has a great effect on treatment in success rate at first puncture.
Comparison of the success rate at first puncture between combined guidance and simple image guidance in cRF treatment Figure 6 shows the comparison of the success rate at first puncture between different kinds of guidance for RF (combined with other means vs simple image-guided like CT or C-arm) in TN treatment. Three studies 36 Comparison of efficiency and safety between the pterygopalatine fossa (PPF) vs foramen oval (FO) route in TN treatment via cRF Figure 7 shows the comparison of the effective rate and complication of different routes (the PPF group vs the FO group) in treating TN. Three studies 43,46,47 Comparison of efficiency and safety between the foramen rotundum (FR) route and FO route in Tn treatment via cRF Comparison of efficiency between different temperatures in Tn treatment via PRF Figure 9 shows the effect of different temperatures of PRF thermocoagulation on TN treatment. Three studies 14,15,50 in the low-temperature group (38°C-44°C) compared with the high-temperature group (45°C-50°C) patients showed less heterogeneity of the studies (χ 2 =3.14, P=0.21>0.05, I²=36%); the total effect size OR in this study was 0.32 (95% CI: 0.14, 0.73), and the Z value was 2.71 (P=0.007<0.05), indicating that the higher temperature group (45°C-50°C) has a greater effect on the treatment. Figure 11 shows the effect and complications of different temperatures (66°C-80°C vs 55°C-65°C). Two studies 22,26 in the higher temperature group (66°C-80°C) compared with lower temperature group (55°C-65°C) patients showed no heterogeneity of the studies (χ 2 =0.64, P=0.42>0.05, I²=0 %). The total effect size OR in this study was 2.77 (95% CI: 2.02, 3.80), and the Z value was 6.33 (P<0.00001). Two studies 22,26 (66°C-80°C vs 55°C-65°C) showed no heterogeneity (χ 2 =0.17, P=0.68>0.05, I²=0%); the total effect size OR in this study was 4.58 (95% CI: 3.21, 6.54), and the Z value was 8.40 (P<0.00001), indicating that relatively higher temperature group (66°C-80°C) has a greater effect on TN treatment, while the safety of which is decreasing. Figure 12 shows the effect and complications of different temperatures (80°C-85°C vs 86°C-90°C). Two studies 21,51 in the lower temperature group (80°C-85°C) compared with higher temperature group (86°C-90°C) patients showed no heterogeneity of the studies (χ 2 =0.01, P=0.94>0.05, I²=0%); the total effect size OR in this study was 0.91 (95% CI: 0.43, 1.93), and the Z value was 0.26 (P=0.80>0.05). Two studies 21,51 (80°C-85°C vs 86°C-90°C) showed no heterogeneity (χ 2 =0.49, P=0.48>0.05, I²=0%); the total effect size OR in this study was 0.66 (95% CI: 0.34, 1.28), and the Z value was 1.23 (P=0.22>0.05), indicating that the change in temperature >80°C had no significant difference in the efficiency and safety during the TN treatment.
Discussion
TN is a clinically common painful disease, and a variety of drugs or surgical procedures are available for its treatment. Medications such as carbamazepine oxcarbazepine, baclofen, lamotrigine, phenytoin, and topiramate could be administered to control pain. Intravenous infusion of a combination of magnesium and lidocaine can be very effective in some patients. However, different extents of side effects could occur after drug treatment. Procedures such as radiosurgery, percutaneous balloon compression, glycerol rhizotomy, radiofrequency thermocoagulation, peripheral nerve dissection, partial sensory nerve root dissection, and microvascular decompression could be utilized. Similarly, partial sensory nerve root dissection and microvascular decompression may not have persistent curative effect, while peripheral nerve dissection could lead to a loss of partial facial sensation. 52 CRF, as a less invasive and effective treatment, has been widely applied to the treatment of TN patients who are refractory to medical therapy since 1974. 53 The heat produced by the radiofrequency needle is thought to selectively destroy the Aδ and C pain fibers by thermocoagulation at temperatures >65°C, while the medullary fibers (Aδ fibers) that conduct the tactile sensation can tolerate higher temperature. Some results showed that there was a significant difference in the efficacy of PRF with less postoperative complications, but the recurrence time was shorter than that of CRF. 22 PRF achieves an analgesic purpose by stimulating the nerve instead of damaging it; therefore, the effect duration turns to be shorter. 33,54 The idea of combined application of CRF and PRF, sometimes named as CCPRF, could not only reduce the excessive damage of the CRF to the nerve tissue but also decrease the occurrence of the complications to some extent. Although PRF and CRF had no statistical difference in cured rate, CCPRF had a greater effect on treatment than CRF, while PRF and CCPRF had no difference. 27 Furthermore, some studies suggested that its long-term efficacy was not as good as the simple CRF. 31 The advantage of CCPRF is still in need of further investigation.
As our results showed, there is a big lift in the success rate at first puncture when combining extra guidance techniques such as semiconductor laser locator and stimulation potential guidance other than simple image guidance approach. Interestingly, the recent 3D-printed template guidance achieved a greater success rate at first puncture than that of simple image guidance approach, indicating that 3D-printed template might be a potential guidance for TN treatment. We thought that 3D-printed template might provide benefits for making an evaluation and the best puncture plan, including puncture point, angle, and depth. Traditional puncturing methods rely on the experience of the surgeon and the recognition of the anatomical structure near the FO mostly, and it is sometimes in need of repetitive punctures due to the individual differences, while 3D-printed template can avoid this. At present, the selection of CRF temperatures in TN treatment has no specific standards. Various studies used temperatures ranging from 55°C to 90°C. Temperature >65°C was known to destroy nerve fibers, which could further result in severe complications such as blindness, deafness, ptosis, and permanent facioplegia. Hence, how to ensure pain relief and reduce complications is the focus of clinical attention of temperature selection. Therefore, a lower temperature was recommended not only to ensure the therapeutic effects but also to reduce complications. In this study, we analyzed all studies on the selection of radiofrequency treatments at different temperatures and studied the patients' satisfaction degree of postoperative efficiency. We found that the temperature range of 68°C-70°C was better in patients' satisfaction
437
Radiofrequency ablation for trigeminal neuralgia treatment degree, and the efficiency was better at 66°C-80°C. When the temperatures were lower between two groups, the effect was better, which was associated with the previous reports. Yao et al 30 15 showed that different temperatures (38°C, 42°C, 45°C, and 48°C) via PRF were all effective in the patients, but increasing the temperature of PRF did not improve the analgesic effect and maintenance time. Moreover, Jiang et al 50 showed that 45°C-50°C was a suggested temperature range for PRF, especially for the elders. Different anatomic routes might have different impacts on RF efficiency. In this study, the RF efficiency between PPF and FO routes had no significant difference, and they both had the advantages of simple positioning, operation, and low recurrence rate in RF treatment. A number of clinical studies 55,56 showed that RF thermocoagulation targeting on V2 branch could affect the ophthalmic and mandibular nerve function, while the PPF or FR route is safer with less postoperative complications and the possible reason is that the PPF or FR route is turning the target of RF thermocoagulation from the semilunar ganglion to the cranial branch of V2, from intracranial operation to extracranial operation, which obviously reduces the damage to the arteria meningea media, optic nerve, and other branches of the trigeminal nerve during the process of RF puncturing and thermocoagulation. 43, 49 Ding 57 showed that the FO route through the mandibular angle approach was reasonable too, with a higher target selectivity and a lower long-term pain recurrence rate. Furthermore, Chen et al 58 indicated that the best approach of percutaneous puncturing of RF thermocoagulation for treating V2 in TN was the upper side against zygomatic and the inner side against the wall of maxillary sinus. Despite of the main findings, there are some limitations in this study. First, the number of data samples was small, and some of the included articles were reported from the same institutes, which will inevitably result in a repetition and publication bias. Second, the long-term follow-up data of therapeutic effect in different studies were inconsistent. Therefore, we cannot get a uniform standard in the subgroup analysis. Third, most of the clinical trials included in this study were conducted in People's Republic of China, which may restrict the generalization of our conclusion. Fourth, the quality of the included studies was relatively low, which might limit the accountability of our results. Moreover, the evaluation criteria in response to the treatment effect are heterogeneous among different studies, some of them adopt patients' satisfaction degree, and others used an effective rate. In addition, some data are missing during the process of data extraction, limiting the robustness of this analysis.
Conclusion
CCPRF could achieve a greater efficacy and safety on TN treatment compared with PRF and CRF. Although there was no remarkable difference among PRF, FR, and FO routes in TN puncture treatment via CRF, the first two routes are safer. With regard to the guidance, 3D-printed template guide was more accurate for RF puncture than for imageonly guidance even with skin stimulation potential and semiconductor laser locator. Medium temperature range was better for CRF therapy, and higher temperature was recommended for PRF, especially for the elders. Further international multicenter RCTs comparing the effect and safety between PRF and CCPRF in terms of temperature, guidance, and routes are needed to confirm the evidence.
|
2019-01-31T14:03:03.812Z
|
2019-01-18T00:00:00.000
|
{
"year": 2019,
"sha1": "6528c16a22a9d48fc53fbaebf38bf0b89edc54b5",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=47609",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "abb51d2dd3c67a674fcc878d5d7c41b1a78b47cd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
135353106
|
pes2o/s2orc
|
v3-fos-license
|
Modeling the Communications of Dayak Meratus Ethnic in Producing Forest Myths in Hulu Sungai Tengah and Balangan District
Dayak Meratus ethnic interacts with forests in their daily life to create a distinctive culture. This distinctive culture is forest itself, in which the community life relies on forests. The forest is classified into three parts based on customary agreements. They are managed forest, customary forests and sacred forests. This classification is developed by forming symbols and myths about forest. These symbols and myths create communication model between Dayak Meratus ethnic communities and forests. The focus of this article is to know how the communication model of Dayak Meratus ethnic in conserving forest is. The research of this article uses qualitative approach using ethnographic communication method developed by Dell Hymes with speaking model. The communication model of Dayak Meratus ethnic in producing myths and symbols about the forest is in speech communication, myth and symbol of forests are generated from generation to generation. The myths and symbols are applied in agricultural tradition and Kaharingan religious tradition of Dayak Meratus Ethnic. Keywords—communication model; Dayak Meratus ethnic; forest
I. INTRODUCTION
Dayak is a tribe living in the forest area with their all social activities. They have their own understanding and meaning to the forest according to their experience on it. As Dayak Tepian Sungai (Ut danum/river bank) and those Dayak Bukit (mountain). They cannot be separated from the myths and symbols inherent in their daily life. These myths and symbols are able to guide their system of interaction with the forest.
For Dayak Bukit known as Dayak Meratus have sumbayang ceremony (Prayer) to rice. They pray to gods who maintain the rice, land dan trees. They also pray to ancestors who have guided them about good farming practices [1, p. 246]. For Dayak Bukit tribe believes that forest and land are the source of livelihood for them. For this tribe, forest utilization is always associated with customary law and authority. Syahruji [2, p. 121] states that Dayak Kiyu community in particular and Dayak Meratus generally, forest and customary land is their life. Forests are pharmacies, food barns, kitchens, learning sites and banks. Forest is also like a mother who must be respected for giving life. Whereas, the tradition is the root of life. So the terms created by Dayak Meratus community are forestry base. For example, akar kehidupan (root of life); this suggests that tradition is the lifeline.
Traditionally, the Dayak Meratus indigenous community manages customary forests and their natural resources inseparably from customary law. Farming, for instance, may take place only in the cultivation area, and cutting timber in the sacred territory is forbidden. It is believed that if the forest is destroyed, the tradition will also become extinct. Violations are subject to customary sanctions as stated in current customary law.
Referring to the explanation above, this article aims to model the communication of the Dayak Meratus ethnic group in producing forest symbols and to analyze the Dayak Meratus communication model with the forest.
II. METHODOLOGY
This research is qualitative, using the ethnographic communication method. As proposed by Hymes in [3, pp. 208-209], it applies the model abbreviated as SPEAKING, which stands for setting/scene, participants, ends, act sequence, keys, instrumentalities, norms of interaction, and genre. Each of these components becomes a unit of analysis in studying the communicative reality of the Dayak Meratus ethnic community.
The subjects of this research are the indigenous Dayak Meratus communities located in the Hulu Sungai Tengah and Balangan districts of South Kalimantan Province. In both areas, the Dayak Meratus communities occupy the area along the Meratus Mountains.
Data analysis is an important part of conducting research: it is the process of organizing and sorting data into patterns, categories, and basic descriptions so that conclusions can be drawn. As stated by Creswell [4, pp. 152-153], the techniques in ethnographic research are description, analysis, and interpretation.
III. RESULT AND DISCUSSION

A. The Rise of the Myths in the Dayak Meratus Ethnic Group
The traditional life of the Dayak Meratus ethnic group always adheres to the cultural traditions of their ancestors, passed down for generations. Proof of this cultural continuity is reflected in the farming tradition applied in various traditional ceremonies. Farming is embedded in Dayak Meratus culture, both in Hulu Sungai Tengah district at Balai Adat Kiyu and in Balangan district at Balai Adat Halong. Farming is identified with the forests and is limited by laws that must be obeyed; these laws can be seen in the myths and symbols that pervade the various farming activities.
Makurban, the chief of Balai Adat Kiyu, states: "The myths in Dayak Meratus culture were derived from our ancestors; what we do now is what has been inherited from Abah Uma (father, mother). We are their descendants; if Abah Uma lied to us, we too would lie with these myths. So far, however, they have served us well. These myths are stories, bans, or customary restrictions regulated by culture, particularly during ceremonies. The myths and symbols can be stories about the origin of our datu nini, mamangan (magic spells), and the offerings among the ceremonial accessories." (Interview, August 20, 2017) Most traditional Dayak Meratus make their livelihood from farming and gardening. They live on the edge of the forest because they lived in that area before establishing villages. Previously, they lived in the forest separately, one umbun (family) apart from another, a way of life caused by their nomadic agriculture on a cycle of 4 to 5 years.
The myths found among the Dayak Meratus, both at Balai Adat Kiyu in Batang Alai Timur sub-district and at Balai Adat Halong, share the same elements. Makurban states that the myths that evolved in these two traditional halls were derived from earlier ancestors and were told orally to succeeding generations of the Dayak Meratus.
Makurban (interview, 22 August 2017) narrated that the indigenous Dayak Meratus ancestors originally lived in the lowlands near the coast. After many migrants came (Banjar traders and others), the indigenous Dayak Meratus community gradually migrated to the upstream reaches of the rivers and to the Meratus mountains. The migration was caused by cultural differences and self-defense amid prolonged social conflicts over economic and religious issues, especially the spread of religion.
B. The Function of Myth for the Dayak Meratus Ethnic Group
Myths developed among the Dayak Meratus at both Balai Adat Kiyu and Balai Adat Halong. The explanation above relates to a divine conception grounded in their belief, which rests on two fundamentals: help and threat. Help relates to the benefits they obtain, particularly in farming and gardening. Suliman, 41 (chief of Padang at Balai Adat Kiyu), stated: "For us Dayaks, the yearly cultivation activity is part of worship. The cultivation of fields inherited from the ancestors requires us to make offerings to their spirits. Besides the offerings to the ancestors, we also offer gifts to the guardians of nature: rivers, forests, wind, fire, and land. If we do not bahuma (farm), there is no aruh (ceremony), and therefore no worship." (Interview, August 23, 2017) As for threats, the ceremony is related to salvation: protection from disasters caused by spirits who feel disrespected and unserved.
The myth thus has two fundamental functions: myths that create fear and threat for the Dayak Meratus, and myths related to help. Both functions are embodied in Dayak Meratus ceremonies, in the form of the ganal aruh (big ceremony) full of offerings. The aruh (ceremony) is closely related to their life in the forest, especially the yearly farming activities.
C. Discussion
The myths of the Dayak Meratus do not exist without cause. The data show that they arise from the communication that takes place in Dayak Meratus social life. The myths were derived from the ancestors and inherited from generation to generation through storytelling. In Dayak Meratus social practice, these myths are believed and maintained. As suggested by Ralph LaRossa and Donald C. Reitzes (1993) in [5, p. 96], symbolic interactionism essentially provides a frame of reference for understanding how humans, together with others, create a symbolic world and how that world shapes human behavior.
In terms of symbolic interaction theory, the myths that arise and develop among the Dayak Meratus originate from the family as a reference group or community. Passed down from generation to generation, they have shaped behavior. Within the limits of their understanding of the reality of the forest, the Dayak Meratus acknowledge the existence of the myths communicated by the ancestors, for their life is part of the forest itself. The myths are also communicated in the traditional ceremonies that take place in the Dayak Meratus community.
The reference of the myths to stories is surely related to subjective human experience, as proposed by Schutz in [6, p. 94]. He states that in this intersubjective world, people create social reality, compelled both by existing social reality and by the cultural structures created by their ancestors. In the life-world there are many collective aspects, but there are personal aspects as well. Schutz distinguishes between close relationships ("we-relations") and impersonal, distant relationships ("they-relations"), and it is the intimate relationships that are essential in the life-world.
In addition, the myths in Dayak Meratus life have two fundamental functions, relating to threat and to help. One example of a threatening myth is that of the pangeran (a frightening spirit) that likes to disturb humans. This myth is related to the human desire for security, so in everyday life the figure of the pangeran is honored with offerings made at every full moon.
A myth associated with help is one that can bring benefits to the Dayak Meratus. For example, the putir (a comforting spirit) can nourish the soil, the roots of trees, and the fruits. The figure of the putir is therefore highly respected and is always given offerings in every traditional ceremony.
Both are related to myths considered to have benefits and functions for humans. Vegeer [7, p. 171] proposes viewing social reality as social behavior that has subjective meaning; hence behavior has purpose and motivation. Weber adds that behavior becomes social when its subjective meaning leads the individual to orient toward and take account of the behavior of others, and that behavior has certainty when it shows uniformity with behavior generally prevailing in society. The function and benefit of a myth, whether threatening and frightening or believed to help, is thus tied to subjective human meaning, because human meanings have purpose and motivation. Safety and freedom from threats are desires of the Dayak Meratus, and the meaning of welfare is likewise contained in the myth of the putir (good spirit), which can fertilize the soil and guard the fruits that grow in the forest area.
Whether from the background of the myths as products of the Dayak Meratus ancestors or from their functions, the myths form the communication of the Dayak Meratus with the forest. This communication is seen in Dayak Meratus behavior toward the forest: they consider that the respected spirits live in the trees growing in the Meratus mountain forests. The myths of the Dayak Meratus can thus be classified into two kinds: frightening myths and helping myths.
Both the frightening myths and the helping myths concern spirits that must be respected by the Dayak Meratus. The form of honor is offerings such as cakes or foods.
The Regulation of Steroid Action by Sulfation and Desulfation
Steroid sulfation and desulfation are fundamental pathways vital for a functional vertebrate endocrine system. After biosynthesis, hydrophobic steroids are sulfated to expedite circulatory transit. Target cells express transmembrane organic anion-transporting polypeptides that facilitate cellular uptake of sulfated steroids. Once intracellular, sulfatases hydrolyze these steroid sulfate esters to their unconjugated, and usually active, forms. Because most steroids can be sulfated, including cholesterol, pregnenolone, dehydroepiandrosterone, and estrone, understanding the function, tissue distribution, and regulation of sulfation and desulfation processes provides significant insights into normal endocrine function. Not surprisingly, dysregulation of these pathways is associated with numerous pathologies, including steroid-dependent cancers, polycystic ovary syndrome, and X-linked ichthyosis. Here we provide a comprehensive examination of our current knowledge of endocrine-related sulfation and desulfation pathways. We describe the interplay between sulfatases and sulfotransferases, showing how their expression and regulation influences steroid action. Furthermore, we address the role that organic anion-transporting polypeptides play in regulating intracellular steroid concentrations and how their expression patterns influence many pathologies, especially cancer. Finally, the recent advances in pharmacologically targeting steroidogenic pathways will be examined. (Endocrine Reviews 36: 526-563, 2015)
I. Introduction
Sulfation and desulfation are vital biological processes that regulate steroidogenesis and thus steroid hormone action in a variety of tissues (Figure 1). Controlled by two distinct enzyme families, the sulfatases and the sulfotransferases (SULTs), these processes are intimately involved in the hydrolysis and esterification of sulfate groups on alkyl (eg, dehydroepiandrosterone [DHEA]) and aryl (eg, estrone [E1]) steroids. As early as the 1940s, steroids were identified as one of the major classes of biomolecules that could be sulfated (1-3). Chemically, it is possible to attach a sulfate to each and every hydroxyl group of a steroid, and taking into account the astonishing substrate promiscuity of the various sulfotransferase enzymes, many different sulfated steroids are detected analytically in biological samples (4). Historically, sulfated steroids were considered to be metabolic end products because their increased water solubility expedites excretion. However, over the past 20 years, a wealth of research has demonstrated that sulfated steroids, such as DHEA sulfate (DHEAS) and E1 sulfate (E1S), can act as circulating reservoirs for the peripheral formation of bioactive hormones. Therefore, an understanding of how sulfation and desulfation processes are regulated and dysregulated provides key insights into physiological and pathophysiological endocrine control. This review examines our current understanding of steroid sulfation and desulfation pathways, including the intracellular influx and efflux of sulfated steroids via the organic anion transporter proteins (see Section IV), the role of these pathways in disease (see Sections V and VI), and the potential to pharmacologically target these pathways for therapeutic gain (see Section VII).

[Figure 1 legend: Predominance of steroid sulfation or desulfation in endocrine and selected nonendocrine human tissues. Sulfation pathways dominate in the healthy brain, colon, adrenal, and kidney. The colon and kidney sulfate steroids to expedite excretion. The adrenal synthesizes DHEA, which is subsequently sulfated to increase water solubility and allow circulatory transport. The brain favors sulfation, primarily owing to the role of pregnenolone sulfate as a neurosteroid. In the liver, a so-called "futile loop" of DHEA/DHEAS, E1/E1S, and E2/E2S occurs, as for other steroids; because the sulfated forms persist longer in the circulation due to greater half-lives, this accounts for their higher circulating concentrations compared with the nonsulfated forms. Desulfation, via STS, dominates in the breast, ovary, prostate, testis, placenta (not shown), and uterus (not shown). In breast and ovarian tissue, E1S uptake occurs through OATPs (see Section IV), where it is desulfated by STS to form E1 and subsequently E2 via 17βHSDs. In the prostate and testis, circulating DHEAS can also be transported into the cell via OATPs, desulfated by STS, and then metabolized to androgens such as T and DHT, which can then enter the circulation.]
A. Steroid analysis
The era of steroid analysis via immunoassay is drawing to a close as these nonspecific assays are replaced by high-throughput, specific, and sensitive mass spectrometry (MS) analyses (5). The inherent problem of immunoassays is their poor specificity due to antibody cross-reactivity, which hampers both enzyme immunoassay and RIA approaches. With regard to the measurement of estradiol (E2), this problem was identified over 25 years ago (6) and more recently in human plasma samples (7). However, with the increasing clinical and laboratory demand for steroid measurements, cheap RIA kits emerged as popular one-step kits and multiplex assays in the 1980s and 1990s. These "direct" immunoassay kits sacrificed accuracy for speed and economy (8).

Gas chromatography (GC)-MS, coupled with either electron impact ionization or chemical ionization, is sensitive and specific, but it requires extensive sample cleanup as well as multistep deconjugation and derivatization procedures. Thus, it is liquid chromatography (LC)-MS or LC-tandem MS (LC-MS/MS) that, after pioneering work in the 1990s (9), is becoming the reference method for the analysis of both sulfated and nonsulfated steroids in clinical laboratories, owing to its fast turnaround time and high accuracy. Indeed, The Endocrine Society recently attempted to implement a policy of introducing LC-MS as the diagnostic standard for publication of steroid measurements (8), although this position was later relaxed because many laboratories do not have the technology to achieve such accurate analysis (10). Table 1 shows plasma reference ranges for nonsulfated and sulfated steroids in adult men, premenopausal adult women, and postmenopausal women.

The measurement of sulfated steroids can be straightforward, as conjugated steroids ionize easily, resulting in greater LC-MS sensitivity. RIAs that can measure sulfated steroids do exist, but, as mentioned above, cross-reactivity and the lability of the sulfate group make these methods unreliable. Advances employing ultrahigh-pressure LC quadrupole time-of-flight MS can now detect a range of sulfated and glucuronidated steroids simultaneously in human urine with sensitivity similar to GC-MS (11). With regard to plasma, a rapid LC-MS/MS procedure has recently been designed involving diethyl ether extraction from plasma and purification by immunosorbents containing specific antibodies against E1S, followed by LC-MS/MS using electrospray ionization; this sample preparation markedly improved the sensitivity of LC-MS/MS for E1S (12). Others have utilized LC-MS/MS with electrospray ionization to detect other sulfated steroids, such as dihydrotestosterone sulfate (DHTS) and 3-hydroxy-5α-androstane-17-sulfate, simultaneously (13). However, the main difficulty with measuring most sulfated steroids lies in the lack of availability of appropriate reference standards, making accurate quantification impossible.
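Since quantification in LC-MS/MS rests on calibration against reference standards, the shortage of standards noted above is a practical bottleneck. Below is a minimal sketch, in Python, of how such a calibration curve is fitted and applied; all calibrator concentrations and peak-area ratios are invented for illustration.

```python
import numpy as np

# Hypothetical calibrators: concentration (nmol/L) vs analyte/internal-
# standard peak-area ratio. All numbers are invented for illustration.
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
area_ratio = np.array([0.021, 0.102, 0.198, 1.010, 2.030])

# Ordinary least-squares line; weighted fits (eg, 1/x) are also common.
slope, intercept = np.polyfit(conc, area_ratio, 1)

def quantify(sample_ratio):
    """Back-calculate a sample concentration from its peak-area ratio."""
    return (sample_ratio - intercept) / slope

print(f"{quantify(0.55):.1f} nmol/L")  # ~27 nmol/L with these calibrators
```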
II. The Sulfatases

A. Molecular overview and functionality
The sulfatase enzyme family catalyzes the hydrolysis of sulfate ester bonds from a wide range of substrates. Within this family, 17 genes have been identified in humans, many associated with genetic disorders (14). Of these, three have had their crystal structures determined: arylsulfatases A, B, and C (the latter also known as steroid sulfatase [STS]). Arylsulfatases A and B are both water soluble and involved in the hydrolysis of cerebroside-3-sulfate and the breakdown of glycosaminoglycans (GAGs), respectively; thus, neither is involved in steroid pathways. In contrast, STS has been shown to be the primary enzyme involved in steroid desulfation (15) and therefore is the main focus of this review. The principal hormone substrates for STS are E1S, DHEAS, pregnenolone sulfate, and cholesterol sulfate, and this enzyme therefore represents one of the major pathways for regenerating biologically active steroids in both steroidogenic and nonsteroidogenic tissues. DHEA and E1 circulate predominantly in their inactive sulfated forms, DHEAS and E1S, respectively. Cells can take up circulating hydrophilic sulfated steroids, such as DHEAS and E1S, via organic anion-transporting polypeptides (OATPs; see Section IV) for intracellular desulfation by STS and subsequent generation of androgenic and estrogenic steroids.
Structurally, STS has a hydrophobic domain and is a membrane-bound microsomal enzyme, mainly localized in the rough endoplasmic reticulum (16,17). The STS gene, spanning 10 exons, is located on the short arm of chromosome X, mapped to Xp22.3-Xpter (17-19). It escapes X-inactivation (20), with a nonexpressed Y-linked homolog in man (18). STS is thought to be glycosylated, and its three-dimensional crystal structure shows a monomer of "mushroom-like" shape, with two hydrophobic antiparallel α-helices protruding from a spherical molecule (21,22). This 40 Å-long hydrophobic stem is most likely embedded in the luminal membrane of the endoplasmic reticulum. Opening beside it is a long, narrow pocket with the enzyme reaction site lying at its base, suggesting that the product has to travel through the endoplasmic reticulum membrane (23).
STS is expressed as a membrane-associated precursor with a molecular mass of 63 kDa and asparagine-linked oligosaccharide chains. These chains are cleaved by endoglucosaminidase H, creating a final size of 61 kDa with a half-life of 4 days (24). STS can undergo various post-translational modifications; it has four potential N-glycosylation sites, although digestion by endoglycosidase H and endoglucosaminidase H showed that only two (Asn47 and Asn259) are used (25,26). Supporting this, Stengel et al (27) found that although all four of the N-linked sites are glycosylated to some extent, only mutations in the two major glycosylation sites, again at asparagines 47 and 259, decreased activity. Another modification is the conversion of C75 to formylglycine (FGly) (see Section II.A.1); further hydration forms the gem-diol hydroxyl-formylglycine with a bound sulfate in the resting state (28).
Disease resulting from impaired STS activity, such as X-linked ichthyosis (XLI), is most often due to large deletions of the gene (80-90%). Alternatively, in some XLI patients, six point mutations have been identified, all abolishing STS activity (29-31). Five of the point mutations lead to nonconservative amino acid changes, and the sixth is a frameshift mutation. Interestingly, these mutations all lie within 105 residues of each other in the C-terminal half. Two are even on the same amino acid, 372, changing tryptophan to either arginine or proline. The others are an arginine for a tryptophan at amino acid 444, a tryptophan for a cysteine at 446, a cysteine substituted for a leucine at 341, and an arginine for a serine at 419. This close accumulation of mutations suggests that this is an area crucial for STS activity (32,33). Furthermore, artificially truncating the N or C termini of the STS enzyme has no effect on protein synthesis or degradation when transfected into COS-1 cells, although activity is reduced (34). Thus, when coexpressed with wild-type STS, C-terminal STS mutants have a dominant negative effect.
1. Sulfatase-modifying factors
The molecular mechanisms underlying STS catalytic activity are highly conserved among the different human sulfatase enzymes (16,35). A cysteine residue residing in the catalytic center of all sulfatases is post-translationally modified to form an FGly residue (Figure 2). FGly is catalytically active and "attacks" the sulfate moiety of substrates; it is essential both to bind the substrate and to hydrolyze the sulfate ester bond (36,37).
Modification of the cysteine to form FGly is mediated by the coenzyme FGly-generating enzyme (FGE), which is encoded by the sulfatase-modifying factor 1 (SUMF1) gene. FGE, a glycosylated enzyme that, like STS, resides in the endoplasmic reticulum, can be secreted by cells (38). Intriguingly, FGE can thus act in a paracrine fashion because it can be taken up by neighboring cells as a functional protein and increase intracellular sulfatase activity (39). The importance of this process in regulating STS activity and steroid output is currently unknown.
Mutations in SUMF1 cause multiple sulfatase deficiency, a rare and fatal autosomal recessive disorder characterized by absent activity of all sulfatase enzymes (see Section V.A) (40,41). A paralog of SUMF1, SUMF2, has been cloned in vertebrates due to its sequence homology to SUMF1 (42,43). SUMF2 lacks the crucial catalytic domain present and highly conserved in SUMF1, and the role of SUMF2 in the process of post-translational modification of sulfatases is, at present, unresolved.
B. STS cellular and tissue distribution
STS is a membrane-bound protein primarily localized in the lumen of the endoplasmic reticulum (23), although it has also been found in Golgi cisternae, the trans-Golgi reticulum, plasma membranes, and elements of the endocytic pathway (44). In 1965, Warren and French (45) examined STS tissue distribution and found virtually ubiquitous expression in human tissues, with placenta demonstrating the greatest mRNA expression and activity. These findings have been substantiated by many research groups using various techniques, such as immunohistochemistry, biochemical analysis, and real-time PCR, analyzing a multitude of tissues including testis, ovary, adrenals, prostate, skin, brain, endometrium, kidney, thyroid, pancreas, colon, aorta, bone, and lymphocytes (19,35,46), which all show STS activity.
From gestation and throughout life, STS activity remains imperative in both sexes for tissue-specific steroid hormone production and regulation. In premenopausal women, the main source of active E2 is the ovaries, whereas E1 is formed mostly in peripheral tissues, eg, fat. In postmenopausal women and in men, however, E2 is metabolized from adrenal steroid precursors at extragonadal sites such as breast and fat. Active estrogens can be generated by two enzymes, aromatase and sulfatase. STS desulfates E1S to E1, followed by reduction to E2 via reductive 17β-hydroxysteroid dehydrogenase (17βHSD) activity. Aromatase converts androstenedione and T to E1 and E2, respectively. Of note, androstenedione is synthesized from the precursors DHEA and DHEAS, the latter of which circulates at very high concentrations compared with other steroids (see Table 1). STS desulfates DHEAS, and thus STS also plays a role in liberating androgens for aromatization (47).
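The two routes to active estrogens described here, the sulfatase route (STS plus 17βHSD) and the aromatase route, can be pictured as a small reaction map. The sketch below is purely illustrative: the 3βHSD step from DHEA to androstenedione is standard steroid biochemistry rather than something stated in the text, and the data structure is hypothetical.

```python
# Toy reaction map: precursor -> [(product, enzyme), ...]. Illustrative only.
PATHWAY = {
    "DHEAS": [("DHEA", "STS")],
    "E1S": [("E1", "STS")],
    "DHEA": [("androstenedione", "3bHSD")],  # standard step, not named in the text
    "androstenedione": [("E1", "aromatase"), ("T", "17bHSD")],
    "T": [("E2", "aromatase")],
    "E1": [("E2", "17bHSD")],
}

def routes(start, target="E2", trail=()):
    """Enumerate enzyme sequences leading from a precursor to the target."""
    if start == target:
        yield trail
    for product, enzyme in PATHWAY.get(start, []):
        yield from routes(product, target, trail + (enzyme,))

for r in routes("DHEAS"):
    print(" -> ".join(r))
# STS -> 3bHSD -> aromatase -> 17bHSD   (via E1)
# STS -> 3bHSD -> 17bHSD -> aromatase   (via T)
```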
C. The regulation of STS
STS tissue activity fluctuates depending on physiological conditions, but exactly which factors regulate these changes remains unknown. For example, STS activity is higher in leukocytes in the third trimester of human pregnancy compared to nonpregnant females and adult males (48), an effect possibly regulated by elevated FSH concentrations (49). Furthermore, and again as measured in leukocytes, STS activity changes throughout puberty, differing between males and females and being at its highest in prepubertal females (50). STS is also frequently increased in various malignant tissues, such as in breast cancer (see Section VI.A.1). However, very little is known about the underlying regulation of this expression or activity, although circulating estrogen concentration most likely plays a role.
The promoter region and 5′ upstream regulatory elements of the STS gene were first characterized in human placenta (51); however, this promoter was noted to lack basal activity, suggesting additional regulatory elements. Subsequently, tissue-specific STS isozymes with different kinetic parameters for DHEAS and E1S were discovered (52-54). Zaichuk et al (52) characterized the 5′ heterogeneity of the human STS gene in MCF7 cells. The STS gene exhibits alternative splicing and promoter usage, which is likely to be the basis for tissue-specific regulation. 5′-Rapid amplification of cDNA ends analysis has identified eight splice variants used in STS transcription based on the first six exons. First reported was exon 1a from placenta, which utilizes DHEAS, the major steroid produced by the fetal adrenal glands, as the main source of active estrogens (55). All splice variants encode the same active protein, and all except exon 1d, which is found only in peripheral mononuclear leukocytes, vary in length with multiple transcription start sites, with tissues generally expressing one or more of these variants. Heterogeneity in signal peptide sequences is thought to facilitate folding and localization of proteins to the correct intracellular compartment (19,46). STS mRNA and activity are higher in many cancerous tissues than in normal tissues, implying an important role in hormone-dependent tumor growth (see Section VI). Although STS is ubiquitously expressed, the regulation of its expression does appear to be tissue specific and is subject to various feedback mechanisms, such as that shown by the positive correlation between STS and estrogen receptor (ER) isotype mRNAs (52). In MCF7 cells, STS transcription may be up-regulated by E2 via direct binding to ER and activation of estrogen response elements in the STS promoter regions. Furthermore, MCF7 cells treated with the antiestrogen ICI182780 displayed reduced basal and E2-stimulated expression of all STS mRNAs. E2 also induced ERα degradation in an autoregulatory feedback loop, whereas pretreatment with the proteasomal inhibitor MG132 prevented this. Exposure to E2 and MG132 resulted in an STS mRNA increase, whereas MG132 alone reduced STS mRNA (52,56). Thus, to control estrogenic tissue, STS expression may be regulated by local estrogen concentrations in an ER-dependent manner. However, as yet, this pathway for STS regulation has not been demonstrated in other cell lines, suggesting that it may be unique to MCF7 cells.
In addition to the potential for estrogens to regulate STS activity, the proinflammatory cytokines IL-6 and TNFα alter STS enzyme kinetics. MCF7 cells increase STS activity in response to IL-6 and TNFα without alteration in STS mRNA levels (57,58), a trait also noted in other cancer cell lines (59). This suggests that post-translational modifications, possibly via STS glycosylation, are involved in regulating STS activity (17,60,61). However, it cannot currently be ruled out that these cytokines alter membrane permeability and therefore increase substrate availability, which is then perceived as an increase in STS activity (62).
Regulation of STS by inflammatory mediators is of interest, considering that sex steroids play a role in immune function, inflammatory processes (63,64), and cancer, where STS activity is frequently dysregulated and often associated with inflammation (65). Both epidemiological and immunological evidence implies that steroids can influence the pathogenesis of many chronic inflammatory diseases (66). For example, in vascular smooth muscle cells of atherosclerosis patients, STS was found to be higher in females with mild atherosclerotic changes than in severe disease and in male aortas. Additionally, the counterpart of STS, estrogen sulfotransferase (SULT1E1), was lower in females with severe disease (67), suggesting the importance of the STS/SULT ratio in the local regulation of estrogen formation in inflammatory disease states. How this alteration in ratio affects inflammatory disease progression remains ill-defined.
III. The Sulfotransferases

A. Molecular overview and functionality
Endocrine sulfation pathways include sulfate uptake, conversion of this inert anion to active sulfate in the form of 3′-phospho-adenosine-5′-phosphosulfate (PAPS), and transfer to steroid hydroxyl groups by sulfotransferases. Sulfate is an obligate nutrient provided mainly by food and drinking water, taken up from the gut by several sulfate transporters of the solute-linked carrier (SLC) 13 and 26 gene families (68), and generated to a minor extent by oxidation of the amino acids cysteine and methionine (69).
Enzymatic sulfate activation by PAPS synthase is essential due to the inert nature of the sulfate ion; this activation occurs via consecutive enzymatic steps (Figure 3) (70,71). First, the AMP moiety of ATP is transferred to sulfate, catalyzed by the ATP sulfurylase activity of PAPS synthase, yielding adenosine-5′-phosphosulfate (APS). Formation of this unusual phospho-sulfo bond is highly endergonic, so that subsequent cleavage of the released pyrophosphate by ubiquitous pyrophosphatases and an additional phosphorylation step are needed to draw the reaction to completion. This phosphorylation of APS at its ribose 3′-hydroxyl group is carried out by the APS kinase domain of PAPS synthase, resulting in 3′-phospho-APS (PAPS) (70). PAPS is the universal sulfate donor required by all human sulfotransferases, and in humans and most vertebrates it is exclusively produced by two bifunctional PAPS synthases, PAPSS1 and PAPSS2 (72). Active sulfate in the form of PAPS is used by sulfotransferases for sulfation of a multitude of hydroxyl and amino groups in a diverse array of biomolecules, including steroids. The by-product of this reaction, the bis-phospho-nucleotide 3′-phospho-adenosine-5′-phosphate (PAP), is then degraded by dedicated phosphatases (73,74) (see Section III.C).
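Summarizing the two activation steps just described, with pyrophosphate hydrolysis pulling the endergonic first step forward:

```latex
\begin{align*}
\mathrm{ATP} + \mathrm{SO_4^{2-}} &\xrightarrow{\text{ATP sulfurylase}} \mathrm{APS} + \mathrm{PP_i}\\
\mathrm{PP_i} + \mathrm{H_2O} &\xrightarrow{\text{pyrophosphatase}} 2\,\mathrm{P_i}\\
\mathrm{APS} + \mathrm{ATP} &\xrightarrow{\text{APS kinase}} \mathrm{PAPS} + \mathrm{ADP}
\end{align*}
```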
Sulfotransferases are a large gene family traditionally classified into membrane-bound, Golgi-residing enzymes (75) and soluble, cytoplasmic sulfotransferases (76). Golgi-residing sulfotransferases are responsible for sulfation of proteins, carbohydrates, and proteoglycans, whereas cytoplasmic sulfotransferases modify mainly hydrophobic, low-molecular-weight substances such as phenols, xenobiotics, and steroids. Recent research has provided an increasing number of structural studies on cytosolic sulfotransferases, but also on Golgi sulfotransferases, eg, the carbohydrate 2-O (77) and 3-O-sulfotransferases (78), as well as the first structure of a protein sulfotransferase, the human TPST2 protein (79). Sequence conservation is rather low between these different sulfotransferases, but their fold and catalytic features, including binding of the PAPS cofactor, are highly conserved. Central to all sulfotransferases are an α/β-motif consisting of a five-stranded parallel β-sheet; the 5′-phosphosulfate loop, a strand-loop-helix structure involved in binding the phosphosulfate moiety of the PAPS cofactor; and an additional conserved α-helix (80).

[Figure 3 legend, recovered in part: ...sulfate uptake (see refs. 59 and 62), followed by the two-step enzymatic sulfate activation by bifunctional PAPS synthases. PAPS is then either used directly by cytoplasmic and nuclear sulfotransferases or shuttled to the Golgi apparatus to serve a multitude of Golgi-residing carbohydrate and protein sulfotransferases. In contrast to nonsulfated biomolecules, sulfated xenobiotics or steroids need designated organic anion transporters to enter or exit cells. Many different sulfatases exist to cleave sulfate esters again. The otherwise toxic sulfation by-product PAP needs to be removed by dedicated phosphatases (reviewed in Ref. 65). In this review, we focus on sulfate activation, steroid sulfation and desulfation, and the transport of steroid sulfates via organic anion transporters; for all other steps, the reader may refer to the reviews given above.]

[...] (83). Interestingly, specification at the human 16p11.2 locus does not stop here because, for the SULT1A1 gene, interindividual differences in gene copy number have been described, with some individuals carrying up to five SULT1A1 gene copies, correlating with elevated SULT1A1 activity (84). Cytosolic SULTs generally show broad substrate specificity; taking the metabolic capacity of the microbiota additionally into account (85), virtually unlimited numbers of substrates may be sulfated. Traditionally, certain sulfotransferases were named according to their presumably preferred substrate, eg, estrogen sulfotransferase (SULT1E1) and DHEA sulfotransferase (SULT2A1). In light of the greatly overlapping affinities of different steroids for different SULTs (Ref. 86 and Table 2), the most likely sulfotransferase for E2 sulfation may still be SULT1E1 (because SULT1A1 and SULT1A3 have much lower affinities for estrogens, with maximal activity in the micromolar range). DHEA, however, may also be sulfated by SULT1E1 or the SULT2Bs, in addition to SULT2A1. On the other hand, SULT2A1 sulfates several other steroids as well as many xenobiotics. A comprehensive study compared ligand-binding profiles for eight human SULTs (87): of SULT1C1 to -3, SULT1B1, SULT1A1, SULT1A3, SULT2A1, and SULT1E1, E1 bound only to SULT1E1; 2-hydroxyestradiol bound only to SULT1C3, 4A1, 2A1, and 1E1; DHEAS bound only to SULT2A1 and 1E1; and the bile acid lithocholic acid bound only to SULT2A1 and 1E1.
The broad substrate specificity of the sulfotransferase enzymes may be linked to three highly flexible loops flanking the catalytic binding site that can adapt to various ligands. These loops are the least conserved parts between different sulfotransferases. One of them, Asn226-Gln244 in SULT2A1, is referred to as a "cap that closes in" once the PAPS cofactor is bound, with Arg247 (conserved in all SULTs) making direct contact with this nucleotide (88). This gating mechanism confers substrate specificity (89), and the equilibrium between open and closed conformations may restrict access to the catalytic core for larger ligands, whereas sulfation of smaller substrates is unaffected (88). Active-site plasticity may be a general feature of SULT enzymes (90), and it has two direct consequences for the interaction of SULT2A1 with steroid molecules. First, the steroid molecule may bind in a nonproductive way, causing substrate inhibition (91). Second, for some pseudosymmetric steroids with two hydroxyl groups, the substrate plasticity of SULTs allows sulfation at hydroxyl groups other than the normally targeted 3-hydroxyl group of the steroid A-ring. Interestingly, this change in stereoselectivity may happen in SULT2A1 upon allosteric binding of certain drugs, eg, celecoxib, a cyclooxygenase-2 inhibitor (92). Furthermore, bis-sulfated steroids may be created in this way that represent poorer substrates for STS (93). Given this substrate promiscuity of sulfotransferases, it is essential to understand the regulation of tissue-specific expression of the different SULT genes.
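Substrate inhibition of the kind described for SULT2A1 is often modeled with the classic uncompetitive substrate-inhibition rate law v = Vmax·S/(Km + S(1 + S/Ki)). A minimal numerical sketch, with invented constants chosen only to show the rise-then-fall velocity profile:

```python
def rate(s, vmax=1.0, km=2.0, ki=50.0):
    """Uncompetitive substrate inhibition: v = Vmax*S / (Km + S*(1 + S/Ki)).

    vmax, km, and ki are hypothetical constants in arbitrary units,
    chosen only to show the rise-then-fall velocity profile produced
    by nonproductive binding at high substrate concentrations.
    """
    return vmax * s / (km + s * (1.0 + s / ki))

for s in (0.5, 2, 10, 50, 200, 1000):
    print(f"S = {s:>6}: v = {rate(s):.3f}")
# Velocity peaks near S = sqrt(Km*Ki) (= 10 here) and declines beyond it.
```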
B. Tissue and cellular distribution
Sulfotransferase enzymes are broadly expressed in the human body. The tissues that putatively have the highest sulfation activities are those affected most severely by loss of the ubiquitously expressed 3′,5′-bisphosphate nucleotidase (BPNT1) phosphatase, the enzyme that removes cytoplasmic PAP, the otherwise toxic by-product of sulfation, by degrading it into AMP and phosphate. In the BPNT1 knockout mouse model, the tissues mainly affected are hepatocytes as well as enterocytes of the early small intestine and proximal tubule epithelial cells of the kidney (94); however, it should be noted that adrenal steroid synthesis in these knockout animals was not investigated.
The expression of five sulfotransferases (SULT1A1, SULT1A3, SULT1B1, SULT1E1, and SULT2A1) was recently compared in four human tissues (liver, intestine, kidney, and lung) by quantitative Western blotting (95). Consistent with the above, the highest concentrations of sulfotransferases were found in liver and intestine, with SULT1A1/SULT2A1 and SULT1B1/SULT1A3+A1 the most and second-most prevalent SULTs in these tissues, respectively (95). SULT1E1 has been identified as the major sulfotransferase in lung tissue, whereas its expression is lower in liver and intestine and absent in the kidney (95). SULT1E1 may play a more important role during fetal development, being highly expressed in fetal liver and lung (96,97). SULT1A1 and SULT1B1 were found in all four tissues tested; SULT1A3 was found in kidney, lung, and intestine, but not in liver (95). Therefore, SULT2A1 may exclusively carry out hepatic sulfation of orally administered and externally absorbed DHEA. Within the human adrenal cortex, SULT2A1 is specifically expressed in the zona reticularis (98,99), and hence this sulfotransferase is responsible for the massive DHEAS production in this tissue. Strong adrenal expression of SULT2A1, compared to SULT2B1a and SULT2B1b, was also reported by Javitt et al (100). Thus, one may regard SULT2A1 as a gene with dual functionality, detoxifying xenobiotics in the liver and maintaining steroid homeostasis in the adrenal; its secondary adrenal function may have been gained only during primate evolution (101).
All of these sulfotransferases need to be provided with active sulfate in the form of PAPS, and hence the coexpression of at least one of the two PAPS synthase genes is crucial for their functionality. The PAPSS1 gene is thought to be expressed ubiquitously (82,102), whereas PAPSS2 seems to be expressed in a tissue-specific manner, with particularly high expression in the adrenal glands, colon, lung, and liver. PAPSS2 gene expression also seems to be more dynamically regulated (103-105).
C. Regulation of sulfotransferases and PAPS synthase activity
Sulfotransferase genes are part of the phase II biotransformation machinery targeting drugs and xenobiotics, and as such their transcriptional regulation (mainly of SULT1A1 and SULT2A1) is highly complex, involving several nuclear receptors such as the pregnane X receptor (PXR) and the constitutive androstane receptor (CAR) (106). These receptors are activated by xeno- and endobiotics, and they also regulate the expression of many other detoxification genes such as the cytochromes P450 and uridine 5′-diphospho-glucuronosyltransferases (107). What makes sulfotransferases special in this regard is that the ligands activating those nuclear receptors are substrates for sulfation, and this sulfation usually decreases ligand binding to the respective nuclear receptor, representing a crucial feedback regulation loop. Notably, sulfation may convert some nuclear receptor ligands into effective receptor antagonists. This phenomenon, well described for oxysterols and their involvement in the regulation of bile acid detoxification and ultimately lipid metabolism, is further described in Section V.B.2.
The transcriptional regulation of SULT gene expression by nuclear receptors may even result in cross-talk between different steroid hormones. In this regard, induction of the cholesterol-preferring sulfotransferase SULT2B1b by the vitamin D receptor was recently shown (108). Furthermore, glucocorticoids may antagonize estrogen function by glucocorticoid receptor-mediated transcriptional up-regulation of estrogen sulfotransferase SULT1E1 (109,110), resulting in inactivating sulfation of E 2 .
Many studies on transcriptional regulation of SULTs have focused on the SULT2A1 gene (111,112). In fact, in a mouse model for hyposulfatemia due to disruption of the NaS1 sodium sulfate cotransporter, SULT2A1 is the only sulfotransferase that shows significant changes in expression (113). Interestingly, transcriptional coregulation of the genes for SULT2A1 and the producer of active sulfate, PAPSS2, has been shown in some cases (103,104). The murine Sult2a1 gene may also be coregulated with the DHEAS efflux transporter Mrp4 through the nuclear receptor CAR, with Mrp4 knockdown reducing Sult2a1 expression and CAR activation increasing both Sult2a1 and Mrp4 (114).
Most studies on xenobiotic-induced transcriptional upregulation of SULTs focus on hepatic detoxification pathways, mainly in rodent models. In human adrenal cells, SULT2A1 gene expression is increased upon stimulation by CRH or ACTH (115) and is regulated by the nuclear receptor steroidogenic factor 1, the transcription factor GATA-6 (116), and ERα (98). Although binding of all these transcription factors to the human SULT2A1 promoter has clearly been demonstrated, this still does not explain the striking specificity of SULT2A1 expression within the human zona reticularis or the remarkable changes in SULT2A1 expression directly after birth, during adrenarche, and in human aging.
At the protein level, SULTs are subject to substrate inhibition (eg, DHEA binding to SULT2A1). SULTs are usually exposed to different substrates at the same time, and some xenobiotics are able to bind to the mostly hydrophobic ligand-binding sites of SULTs, thereby blocking enzyme activity. This mechanism may explain the hormone-like, estrogenic action of endocrine disruptors that otherwise do not bind and activate the ER (117). Estrogen action can be enhanced by potent inhibition of SULT1E1, resulting in reduced estrogen inactivation by sulfation, mediated by hydroxylated metabolites of polyhalogenated aromatic hydrocarbons (118). As an example, tetrabromobisphenol A, a commonly used flame retardant, mimics E2 binding to SULT1E1, making use of the versatile substrate-binding pocket and inhibiting the activity of the enzyme (119). These findings highlight the potential of xenobiotics to cause endocrine disruption by interfering with steroid sulfation without needing to bind hormone receptors directly.
It is well established that product inhibition of SULTs by the side-product of sulfation reactions, PAP, can occur via the formation of a dead-end enzyme-PAP-substrate complex (120). Because PAP binds to SULT1E1 with an affinity (Kd) of 30 nM (121), this inhibition may be physiologically relevant; it can be counteracted by the above-mentioned nucleotide phosphatases that specifically degrade PAP to AMP and phosphate: BPNT1 phosphatase and its Golgi-resident paralog, gPAPP (74). Loss of the BPNT1 gene leads to impaired protein synthesis, resulting in impaired hepatic function and low serum albumin levels in mice (73).
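Using the reported Kd of 30 nM, simple single-site binding gives a feel for how quickly PAP occupies SULT1E1 as it accumulates; the PAP concentrations below are arbitrary illustrative values.

```python
KD_PAP_NM = 30.0  # Kd of PAP for SULT1E1 quoted in the text (nM)

def fraction_bound(pap_nm, kd=KD_PAP_NM):
    """Single-site occupancy: theta = [PAP] / (Kd + [PAP])."""
    return pap_nm / (kd + pap_nm)

for pap in (10, 30, 100, 300):  # arbitrary illustrative concentrations (nM)
    print(f"[PAP] = {pap:>3} nM -> {fraction_bound(pap):.0%} of SULT1E1 PAP-bound")
# At [PAP] = Kd, half the enzyme already carries PAP, illustrating why
# PAP-degrading phosphatases (BPNT1, gPAPP) matter for sustained sulfation.
```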
On the other hand, SULT activity is generally regulated by the availability of active sulfate in the form of PAPS (122). PAPS tissue concentrations tend to be in the lower micromolar range (4-80 nmol/g tissue), yet sulfation rates can be relatively high, resulting in depletion of the entire hepatic PAPS pool in less than 1 minute (123) and requiring rapid and constant dynamic delivery of PAPS. Biosynthesis of PAPS, on the other hand, is energetically very costly (the three phospho-phospho bonds that need to be cleaved are equivalent to more than 90 kJ/mol), and hence this pathway and the PAPS synthases involved are subject to tight regulation on various levels, including regulated nucleo-cytoplasmic shuttling (124), dimerization (125), and stabilization by ligand binding (70).
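A back-of-envelope check of what these turnover figures imply, using only the pool sizes and depletion time quoted above:

```python
# Pool sizes from the text (nmol PAPS per g tissue) and the stated
# observation that the hepatic pool can be emptied in under 1 minute.
pool_nmol_per_g = (4.0, 80.0)
depletion_time_min = 1.0  # upper bound quoted in the text

lo, hi = (p / depletion_time_min for p in pool_nmol_per_g)
print(f"Implied minimum sulfation flux: {lo:.0f}-{hi:.0f} nmol/g/min")
# The whole pool's worth of PAPS must therefore be resynthesized roughly
# every minute, consistent with the tight regulation described above.
```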
IV. Cellular Influx and Efflux of Sulfated Steroids
Hydrophilic sulfated steroids require active transmembrane transport for cellular uptake. Because these endobiotics are generally organic anions, cellular influx and efflux are regulated by numerous transporter proteins belonging to two major superfamilies: the solute carrier (SLC) transporters and the ATP-binding cassette (ABC) transporters. Evidence suggests that most transporters are bidirectional; however, ABC transporters generally mediate efflux, and SLC transporters mediate influx (126). Two of the 52 gene families within the SLC transporters, the SLCO and SLC22A superfamilies, contain transporters involved in sulfated steroid transport. The SLCO superfamily contains the OATPs (127), and the SLC22A superfamily contains the organic cation transporters and the organic anion transporters (OATs) (128). The OATPs are the primary transporters for sulfated steroid influx, with each OATP possessing distinct uptake kinetics and substrate specificity for different conjugated steroids (Table 3). However, it should be noted that some OATs (OAT1, OAT3, OAT4, and OAT5) can transport sulfated steroids, particularly E1S in human placenta (129) and kidney (130).
Conversely, cellular efflux of conjugated steroids occurs through the ABC transporters multidrug-resistant protein (MRP) and in certain instances through breast cancer-resistant protein (BCRP) (131). Usually associated with cancer drug resistance, ABC transporters are transporting polypeptides that utilize ATP-binding and hydrolysis to transport various substrates across membranes. Thirteen MRPs have so far been identified within the human genome, although MRP1 (also known as ABCC1) and MRP4 are considered most efficient in mediating efflux of sulfated steroids.
Taken together, the relative extent of OATP, MRP, and BCRP tissue expression directly relates to total intracellular steroid concentration, and therefore these transport mechanisms are likely to play key roles in regulating steroid action (Figure 4).
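The balance summarized in Figure 4 can be made concrete with a toy two-pool steady-state model in which OATP influx, STS desulfation, SULT resulfation, and MRP efflux compete. All rate constants below are invented; the only point is that the STS/SULT ratio sets the free-to-sulfated steroid ratio at steady state.

```python
import numpy as np

def steady_state(influx=1.0, k_sts=0.5, k_sult=0.1, k_mrp=0.2, k_out=0.3):
    """Toy two-pool model (all rate constants invented, arbitrary units).

    S_s = intracellular sulfated steroid, S_f = free (active) steroid:
      dS_s/dt = influx - (k_sts + k_mrp) * S_s + k_sult * S_f
      dS_f/dt = k_sts * S_s - (k_sult + k_out) * S_f
    k_out lumps diffusion out of the cell / consumption of free steroid.
    """
    A = np.array([[-(k_sts + k_mrp), k_sult],
                  [k_sts, -(k_sult + k_out)]])
    return np.linalg.solve(A, np.array([-influx, 0.0]))  # (S_s, S_f)

for ratio in (0.2, 1.0, 5.0):  # STS/SULT expression ratio
    s_s, s_f = steady_state(k_sts=0.1 * ratio, k_sult=0.1)
    print(f"STS/SULT = {ratio:>3}: free/sulfated = {s_f / s_s:.2f}")
# At steady state the free:sulfated ratio equals k_sts / (k_sult + k_out),
# ie, it tracks the STS/SULT balance, as Figure 4 suggests.
```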
A. OATP-regulated influx
There are numerous OATPs expressed in almost all epithelia throughout the human body. In addition to conjugated steroids, they are involved in the cellular uptake of a large range of substrates, including bile acids and xenobiotics. The mechanism of OATP-mediated transport remains controversial, although all agree that transport is ATP- and sodium-independent (126). However, what drives uptake is still ill-defined. OATPs can transport bidirectionally, and evidence suggests that they may act as electroneutral exchangers. For example, some OATPs exchange substrates for intracellular bicarbonate (132), glutathione (133), or glutathione conjugates (134). However, transport mechanisms may differ between OATPs because glutathione does not mediate OATP1B1 and OATP1B3 uptake (135). Furthermore, although acidic pH (pH 5.5-6.5) generally elevates OATP2B1-mediated transport (136-139), this is not the case for E1S transported by OATP1B1 and OATP1B3 (135). Recent evidence suggests that these two transporters are altered in different ways by both cell membrane potential and local pH conditions (140).
B. MRP-regulated efflux
The ABC transporter MRP1 was first identified in H69AR cells, a human small cell lung cancer cell line that exhibits resistance to a broad range of natural product-type drugs (141). Along with its role in drug resistance, MRP1 also facilitates efflux of the antioxidant glutathione and the proinflammatory leukotriene C4 (142), as well as E1S (143) and DHEAS (144), and it is expressed in a range of cancerous tissues including hormone-dependent breast (145), prostate (146), and colorectal cancer (147). Transport of E1S and DHEAS is distinguished by a dependence on glutathione (148,149), but the physicochemical properties that determine whether or not a sulfated steroid requires glutathione for MRP1-mediated efflux remain unresolved.
However, other MRPs should not be overlooked with regard to sulfated-steroid transport. Along with bile acids, MRP8 facilitates the efflux of E2 17β-glucuronide and E1S (150,151), and it has also been shown to transport DHEAS in the canine kidney cell line MDCK (152). MRP4 has also shown high-affinity transport (at 2-10 μM) of DHEAS (149) and therefore may be involved in the regulation of adrenal DHEAS secretion. Intriguingly, Morgan et al (153) demonstrated that MRP4 knockout mice have decreased plasma T concentrations, a process reported to be caused by impaired cAMP-response element-binding protein in Leydig cells. Although these authors measured circulating androstenedione concentrations, they did not report circulating DHEAS concentrations in these animals, an experiment that could demonstrate the importance of MRP4 in adrenal DHEAS secretion.
C. Estrone sulfate influx and efflux
Most OATP/MRP transport studies have utilized E1S because it represents a major substrate for many transport proteins (Table 3). Because estrogens can drive many hormone-dependent cancers, it is not surprising to find that most studies of E1S transport are oncologically focused, and little is known about the importance of OATP-mediated uptake in normal physiology. However, studies have shown that many cancerous tissues and cell lines have altered OATP expression compared to healthy tissue. For example, the normally liver-exclusive OATP1B3 is also expressed in gastric, colon, pancreatic, prostate, and breast cancers (154-157).
[Figure 4 legend: Once intracellular, steroids can be desulfated by STS and then resulfated by SULTs. The expression ratio between these competing pathways will, most likely, define the ultimate sulfation/desulfation outcome. Sulfated steroids can be removed from the cell via MRP1 and MRP4. Nonsulfated steroids act intracellularly or, because they are lipid soluble, diffuse across the cell membrane and potentially act in a paracrine fashion.]

Structural investigations of OATP proteins and E1S transport are still at an early stage. Transmembrane domains (TMs), essential structural features of membrane proteins critically involved in the proper function of other transporters such as OATs, confer substrate specificity across the OATP family. Thus, it has been shown that TM8 and TM9 in OATP1B1 are critical for its substrate recognition and E1S transport (158). More recently, phylogenetic analysis of OATP sequences has revealed that TM2 is also among the TMs with high amino acid identity between different family members (159). Subsequently, Asp70, Phe73, Glu74, and Gly76 were found to be essential for E1S uptake by OATP1B1 (159), although whether this holds across other OATPs remains to be determined. Initial studies pinpointed hepatic OATP1B1 as the major E1S transporter (160), and recent evidence suggests that OATP1B1 is overexpressed in hormone-dependent breast cancer cell lines such as MCF-7 compared to noncancerous epithelial MCF-10A cells (161). Following these early studies, evidence emerged that OATP1B1 (162,163), OATP1B3 (162,164,165), OATP2B1 (162,166), and OATP1A2 (167) also transport E1S. The expression of these "sulfated-hormone transporters" (OATP1B1, OATP1B3, OATP2B1, and OATP1A2) is low, if not completely absent, in many normal endocrine tissues (166,168) but is elevated in hormone-dependent cancers arising in these same tissues (168). Indeed, with regard to OATP1B3, there is now strong evidence suggesting that this transport polypeptide becomes a specific cancer-variant isoform localized to colon, lung, and pancreatic cancer (169,170). This suggests that OATP overexpression, and the subsequently increased cellular influx of sulfated hormones along with other substrates, is important in cancer progression, and therefore these proteins represent novel therapeutic targets against estrogen-driven carcinomas. Indeed, inhibiting E1S uptake using organic anions such as bromosulfophthalein, which competes as a substrate for all OATPs, blocks E1S-driven MCF-7 cell proliferation (171). Some evidence suggests that it is primarily OATP1B3 that transports E1S in breast cancer (156), making it an attractive specific target for inhibitor studies. However, it is evident that many OATPs can transport E1S, and thus the jury remains out on whether selectively targeting just one OATP to block E1S uptake is a viable therapeutic strategy.
The kinetics of E1S uptake can be influenced by various factors, notably local pH and solute conditions. For example, E1S uptake by OATP1B3 is Na+-independent (126). Intriguingly, OATP2B1-mediated uptake of E1S is enhanced in the presence of progesterone (172,173). This finding is of special relevance for the formation of estrogens in tissues like placenta and mammary gland, which depend on the uptake of precursor molecules for steroid hormone synthesis such as E1S and DHEAS, and it provides an indication of the importance of OATP transport in normal physiology. With regard to efflux transport, MRP1 and BCRP both influence total E1S uptake. By preloading Caco-2 cells with tritium-labeled E1S and then inhibiting BCRP and MRP1 activity, Grandvuinet et al (174) demonstrated that these efflux transporters are actively involved in intracellular E1S availability, suggesting that the relative expression of OATP, MRP1, and BCRP will ultimately determine intracellular estrogen concentrations. However, definitive studies investigating the relative importance of all these transporters in E1S uptake have not yet been performed.
D. DHEAS influx and efflux
DHEAS transport was first demonstrated in Xenopus laevis oocytes overexpressing human OATP1A2 (175). As with most studies on E1S, research into DHEAS transport is sparse and again focuses mainly on uptake in cancerous cells. Understandably, interest has focused on the prostate, because prostate cancer cells are known to possess STS activity (176) to desulfate DHEAS, with downstream conversion of DHEA to androstenedione (177) resulting in androgen receptor (AR) activation. More pertinently, the OATPs involved in DHEAS influx are elevated in human castration-resistant metastatic prostate cancer (178). Indeed, under androgen deprivation, LNCaP cells elevate OATP1A2 expression, and knockdown of this transporter significantly attenuates DHEAS-driven proliferation (179).
In the placenta, DHEAS uptake seems to be regulated by OATP2B1 transport (180). Placental DHEAS uptake correlates with OATP2B1 and BCRP expression, suggesting an interaction of these two proteins in regulating transport of DHEAS (181).
E. Genetic variation and regulation of OATP expression
The genetic variation in various OATPs (OATP1B3, OATP1B1, OATP1A2) has also been shown to affect overall steroid uptake in a variety of cell lines (182). For example, transfection of SLCO1B1 single nucleotide polypeptide rs4149056 (37041TϾC) into HEK293 cells results in lower cell surface expression and thus lower E 1 S uptake compared to wild-type transfections (183). This was also seen with SLCO2B1 SNP rs2306168 (1457CϾT) transfection, where E 1 S uptake was less than half that of the wild-type variant (184). Further studies are required to determine whether these SNPs are important in sulfated steroid uptake in cancerous cells.
However, support on the importance of genetic variation in OATPs and DHEAS uptake comes from various clinical studies examining these transporters and prostate cancer outcomes. For example, in a cohort of 538 patients suffering metastatic hormone-sensitive prostate cancer, men with each of three OATP2B1 alleles (rs12422149 [935GϾA; Arg312Gln], rs1789693, and rs1077858) had a shorter median time to progression of 10, 7, and 12 months, respectively; and this effect was additive (185). Patients with multiple "at-risk" OATP2B1 variants (including OATP2B1 allele rs12422149 935G, which has a high-transport efficiency for DHEAS), who also had the high T transport OATP1B3 SNPs, had the shortest time to progression. These data have been supported by a study examining 532 Japanese men, where homozygosity for the OATP2B1 rs12422149 935G variant was associated with shorter median time to progression (186).
Little is known regarding OATP regulation, and we will only focus on the OATPs with substrate affinity with conjugated steroids. Generally, OATP expression is controlled by transcriptional regulation (126) and is most likely tissue specific. OATP1B1 expression is dependent on Hepatic Nuclear Factor ␣1 (187,188) and may also involve Signal Transducer and Activator of Transcription 5 (189), Interferon-␥ (190), and IL-1 (191). In contrast, it is bile acids that can up-regulate OATP1A2 expression in intestinal and liver tissue (192), although in breast tissue OATP1A2 regulation is significantly associated with PXR expression (193). Meyer zu Schwabedissen et al (167) have also demonstrated that OATP1A2 is upregulated in malignant breast tissue, with this elevation directly related to E 1 S uptake. Furthermore, OATP1A2 expression is regulated by activation of the nuclear receptor PXR, whose primary function is to sense foreign toxins and in response up-regulate OATPs for detoxification and clearance purposes.
V. Disease-Causing Mutations Affecting Steroid Sulfation and Desulfation
A. Pathogenic mutations in steroid sulfatases and SUMF1
X-linked ichthyosis (STS deficiency)
Mutations or deletions of the STS gene result in a skin condition called "X-linked ichthyosis" (XLI), which in approximately 80% of cases is due to complete deletions of the STS gene (31,194,195). XLI is also termed STS deficiency and represents one of the common inherited metabolic disorders, with 1:6000 live births and no geographical or ethnical variation (196 -198).
Generally, ichthyosis refers to genetically and acquired disorders of the skin characterized by abnormal keratinization; the skin often resembles "fish scales," explaining the origin of the term ichthyosis from Greek ichthys, translated as fish. XLI was first recognized in the 1960s as a distinct form of ichthyosis due to a distinct clinical ap-pearance and the mode of inheritance (196,199). It is characterized by large, dark-brown, and tightly adherent scales found at most areas of the skin, but predominantly symmetrically located on the trunk, the neck, and the extensor surfaces. The scalp is nearly always affected; however, plantar and palmar surfaces are spared. The scaling starts a few months after birth, and generally tends to improve during the summer months.
The underlying pathophysiology of the excessive scaling/hyperkeratosis results from impaired cholesterol metabolism. STS catalyzes the breakdown of cholesterol sulfate in the outer layers of the skin (stratum granulosum and stratum corneum) (200). In patients with XLI, where there is no STS activity, this breakdown is impeded and cholesterol sulfate, which physiologically stabilizes cell membranes and adds cohesion (201), accumulates in the stratum corneum causing partial retention hyperkeratosis with visible scaling (194,200,202).
Cryptorchidism has been reported in up to 20% of patients with XLI (203)(204)(205)(206)(207). Because the patients from these reported case series were not genetically characterized, it is unclear whether the testicular maldescent is a direct consequence of STS deficiency or secondary to deletions of adjacent genes to the STS locus. Indeed, complex syndromes including XLI due to contiguous gene deletions of the X chromosome are reported, including Conradi-Hunermann syndrome (OMIM 302960; limb shortening, epiphyseal stippling, craniofacial defects, short stature) and Rud syndrome (OMIM 308200; cryptorchidism, retinitis pigmentosa, epilepsy, and mental retardation). Lynch et al (208) reported an X-linked recessive pattern of concomitant XLI with hypogonadism in one family with five males affected. Although anosmia has not been reported in this kindred, it seems likely that a contiguous gene syndrome affected both the STS and KAL1 loci. Recent investigations in a fully genetically characterized cohort of XLI patients and genetic abnormalities confined to the STS gene indicate that testicular maldescent is rare. Of 30 males with XLI, only one boy had unilateral cryptorchidism (unpublished data), which is within the range of the general population risk in Western countries (209).
An association between STS deficiency and testicular cancer independent of testicular maldescent has been hypothesized and reported in two patients with XLI (210); however, this report is the only one published to date. The very first clinical presentation of XLI may occur at birth because efficient desulfation of DHEAS and consequent conversion of DHEA to estrogens is important for cervical softening (211), which would be disrupted in STS deficiency. Thus, women carrying children affected by XLI have reported prolonged labor due to insufficient cervix dilatation (cervical dystocia) (204, 212, 213)-a severe and unexpected birth complication where perinatal death has been reported (214). Prenatal diagnosis of STS deficiency is possible because maternal estrogen excretion is decreased, and hence characteristically low estriol is found. GC-MS analysis of maternal urine can help to distinguish fetal STS deficiency from other conditions associated with low estriol, such as aromatase deficiency or congenital adrenal hyperplasia due to P450 oxidoreductase deficiency, because sex steroid precursor metabolite excretion in maternal urine during a pregnancy affected by XLI is normal (215)(216)(217).
Androgen metabolism has been studied in several cohorts of male XLI patients (218 -222). Interestingly, increased serum DHEAS was not consistently found in XLI/STS-deficiency patients. Lykkesfeldt et al (221) investigated 20 adult males with XLI and found decreased downstream androgens with a trend toward higher serum DHEAS and lower serum androstenedione levels. An in vivo study in healthy young men investigating DHEA-DHEAS interconversion suggests that DHEA sulfation is the predominant direction, whereas desulfation by STS does not seem to play a role in normal adult physiology, with no increase in circulating levels of DHEA or sex steroids after iv DHEAS administration (223). This is confirmed for adult males from our cohort of 30 mixed adult and pediatric patients with STS deficiency and agematched controls; however, the ratio of serum DHEA/ DHEAS, reflecting in vivo STS activity, is increased in the prepubertal healthy boys, suggesting that STS is active before puberty, contributing toward peripheral androgen activation. In addition, the global 5␣-reductase activity, determined by urinary steroid profiling, is increased in STS-deficient males, indicative of a compensatory mechanism counteracting a relatively reduced rate of tissuespecific androgen activation (unpublished data).
Although STS may not contribute to peripheral androgen activation in healthy male adults, ample placental STS activity during pregnancy substantially increases circulating DHEA and sex steroid levels; accordingly, increased levels after iv DHEAS challenge have been described (224).
Multiple sulfatase deficiency
Multiple sulfatase deficiency (MSD; OMIM 272200) is a rare and severe autosomal recessive disease that affects the function of all sulfatase enzymes, leading to a rather complex phenotype, which essentially incorporates the features of each single known sulfatase deficiency. The elucidation of the underlying pathology in patients with MSD has led to the discovery of a unique post-translational event, which is shared by all human sulfatase enzymes: the activation of a cysteine residue to form an ac- (226,227).
To further understand the pathology of SUMF1 deficiency, various groups have identified eight other disorders genetically and clinically linked to deficiencies of distinct human sulfatase enzymes. Six of them represent lysosomal storage disorders, where the sulfatase enzyme fails to exert its catabolic function such as the desulfation of sulfated glycolipids (via arylsulfatase A), leading to the accumulation of sulfatides and the progressive demyelinization observed in metachromatic leukodystrophy (OMIM 250100); or the accumulation of GAGs, including heparin sulfate, dermatan sulfate, keratin sulfate, and chondroitin sulfate, as observed in the various types of mucopolysaccharidosis (see Ref. 14 for excellent review and Section V.B.1). Patients with MSD therefore show severe neurodegeneration with mental retardation, hepatosplenomegaly, short stature (resembling mucopolysaccharidosis), combined with XLI-type skin and skeletal changes as observed in chondrodysplasia punctata (OMIM 302950) (227).
Autism and ADHD
Recent studies have shown an association of XLI with behavioral disorders, including autism, attention deficithyperactivity disorder (ADHD), and social communication deficits; however, in the affected subjects, large gene deletions in the proximity of the STS locus have been found that included the NLGN4 gene encoding neuroligin 4, a synaptic peptide that has been previously implicated in X-linked autism and mental retardation (228). However, the STS gene in 384 patients with ADHD identified two SNPs of the STS gene that were significantly associated with ADHD (229). The authors hypothesized that disturbed neuronal DHEA-DHEAS metabolism might result in altered neurotransmitter function contributing to the observed behavioral abnormalities. This has been supported in STS knockout mice that develop attention disorders consistent with ADHD (230), which can be alleviated with the administration of DHEAS (231).
Bone and cartilage malformations
Inborn defects in various genes involved in sulfate uptake, activation, and utilization have been linked to developmental defects in cartilage and bone (232). Diminished sulfate uptake is caused by mutations in the diastrophic dysplasia sulfate transporter gene (SLC26A2) and causes diastrophic dysplasia, achondrogenesis type IB, atelosteogenesis type II, and a recessive form of multiple epiphyseal dysplasia (68).
A missense mutation in the gene encoding the sulfateactivating enzyme PAPSS2 has been described as associated with a brachymorphic phenotype in mice (233), with normal levels of GAGs that are, however, severely undersulfated (234). Human PAPSS2 mutations were first described in the context of a severely affected consanguineous Pakistani kindred (235,236). Mutations in PAPSS2 can cause varying forms of bone malformation in humans, ranging from subclinical brachyolmia with only mild radiological spinal changes (237), via overt brachyolmia with dysplasia confined to the spine (15 reported cases so far) or with additional minimal epimetaphyseal changes only visible on x-ray (four cases), to overt spondyloepimetaphyseal dysplasia with both vertebrae and long bones affected (23 reported cases), as summarized recently (238).
Undersulfation of the GAG chondroitin sulfate may also be caused by inactivating mutations of the chondroitin 6-O-sulfotransferase gene, CHST3, resulting in severe chondrodysplasia with progressive spinal involvement (239) and congenital joint dislocations in humans (240). It has been assumed previously that undersulfation of GAGs directly leads to changes in the biomechanical properties of cartilage (105). However, more likely, morphogen signaling involving hedgehog proteins, wingless-related proteins, and fibroblast growth factors may be compromised by changed chondroitin sulfate proteoglycans because all of these growth factors interact with the extracellular matrix (241).
Bone and cartilage malformation caused by sulfation defects contrasts with bone and cartilage phenotypes due to sulfatase defects. The sulfate group transferred to Nacetylgalactosamine of chondroitin sulfate by CHST3 is the same as that removed in the lysosomes by Gal-NAc-6-sulfatase, the enzyme deficient in mucopolysaccharidosis type IV A (also known as Morquio syndrome; OMIM 253000). This highlights the importance of the correct balance of sulfation and desulfation for bone and joint development in humans.
Furthermore, the side-product of sulfation reactions, the bis-phospho-nucleotide PAP, also has an impact on bone development. The phosphatase gene BPNT1, responsible for removal of cytoplasmic PAP, has a paralog localized to the Golgi compartment, gPAPP (74), and this gene has been associated with impairment of skeletal development (242). More recently, patients were described with homozygous missense (243) and homozygous truncation mutations (244) in the gene encoding gPAPP. Affected patients presented with short stature, joint dislocations, brachydactyly, and cleft palate; these phenotypes highlight the importance of fully functional sulfation pathways in the development of skeletal elements and joints.
Androgen excess, PCOS, and metabolic disease
Androgen excess is one of three hallmarks of polycystic ovary syndrome (PCOS), the most common female endocrine disorder, affecting about 6 -9% of women worldwide (245). Furthermore, increased androgen levels are associated with an adverse metabolic phenotype, increasing the risk of insulin resistance, type 2 diabetes, obesity, and cardiovascular disease (246). Many molecular causes for androgen excess exist, with one possibility a failure in the sulfation pathway that converts DHEA to DHEAS, the most abundant steroid in the human circulation. The obvious candidate gene for such a disorder, SULT2A1, has indeed been suggested to play a role in inherited androgen excess in PCOS (247). Two recent studies looked at the association of common genetic variants (minor allele frequency Ͼ 5%) in SULT2A1 and PAPSS2 with androgen status without an obvious link between inherited genetic variation and androgen excess (248,249). However, rare inactivating genetic variants of the PAPSS2 gene result in apparent SULT2A1 deficiency associated with androgen excess. This results from decreased conversion of DHEA to DHEAS, consequently increasing the DHEA pool available for downstream conversion to active androgens. The resulting clinical androgen excess manifests with premature pubarche and early-onset PCOS, and of note, in both families that were characterized in detail (237,238,250), the heterozygous mothers carrying a major loss-of-function mutation on only one allele clinically presented with PCOS. An association of circulating DHEAS levels with common variants in the SULT2A1 and PAPSS2 genes has been recently excluded in a population-based study (249). Additionally, in a large PCOS cohort study (248), common SULT2A1 and PAPSS2 variants did not present as risk alleles, although a common SULT2A1 allele variant was associated with the serum DHEA/DHEAS ratio. Further studies in PCOS cohorts including analysis of rarer genetic variants are warranted.
Obesity is an important risk factor for PCOS because it contributes further to the characteristically decreased in-sulin sensitivity. Circulating estrogen levels may be increased in obese patients due to enhanced aromatization within adipose tissues (251), and estrogens can regulate fat mass distribution and glucose metabolism. Thus, estrogen action in obesity will be regulated by steroid sulfation because the estrogen sulfotransferase SULT1E1 is highly expressed in adipose tissue of male mice and induced by T in female mice (252). Overexpression of SULT1E1 in a murine transgenic model results in reduced parametrial and sc inguinal adipose mass and reduced adipocyte size, but normal retroperitoneal and brown adipose deposits (253); SULT1E1 overexpression also prevents adipocyte differentiation (254). In humans, however, SULT1E1 is a proadipogenic factor (252). Its expression is reported to be low in preadipocytes but increases upon differentiation to mature adipocytes. Overexpression and knockdown of SULT1E1 in human primary adipose-derived stem cells promotes and inhibits differentiation, respectively (252). If this holds true, SULT1E1 could represent a drugable target, and adipose-specific SULT1E1 inhibitors could be used to inhibit the turnover of adipocytes in obese patients.
Steroid sulfation and desulfation pathways have both been implicated in improving and/or worsening metabolic outcomes associated with obesity and type-2 diabetes. Estrogen and androgen concentrations have been implicated in regulating energy and glucose homeostasis. For example, mice lacking the aromatase enzyme become obese due to attenuated physical activity and decreased lean body mass (255), and ER␣-deficient mice exhibit reduced energy expenditure leading to an obese phenotype (256). Estrogen deficiencies also result in impaired insulin sensitivity in both aromatase knockout (255) and ER␣ knockout mice (257). Conversely, estrogen administration improves insulin sensitivity in high-fat-diet female mice (258) and ob/ob obese mice (259).
This evidence suggests an importance in both STS and SULT1E1 activity in improving metabolic outcomes associated with obesity. Recent studies have examined the effect of both enzymes on metabolic function in obesity and diet-induced type 2 diabetes in mice. Hepatic SULT1E1 expression, although normally low, is elevated in type 2 diabetic mice, and loss of SULT1E1 improved metabolic function in these same animals (260). Furthermore, SULT1E1 ablation increased energy expenditure and insulin sensitivity and decreased hepatic gluconeogenesis and lipogenesis. This metabolic benefit resulted from decreased estrogen sulfation, and therefore an increased estrogenic activity in the liver; this effect was not seen in ovariectomized mice (260). The same group then developed a liver-specific STS knock-in mouse model and demonstrated that increased hepatic active estrogen con- press.endocrine.org/journal/edrv centrations are associated with an improved metabolic function when compared to obese and type 2 diabetic animals. Furthermore, they show that hepatic STS activity is increased in mice given high-fat diets and in ob/ob obese animals (261). This suggests that SULT1E1 and STS activities are important in energy homeostasis and that upregulation of STS and thus an increased synthesis of estrogens may be a hepatic defensive response against the metabolic syndrome. Intracellular accumulation of lipids, inflammatory responses, and subsequent apoptosis are major pathogenic events of metabolic disorders. Sulfated oxysterols also play a role in lipid metabolism and obesity. For a long time, it has been known that oxysterols, derivatives of cholesterol, bind to LXR nuclear receptors and up-regulate hepatic de novo lipogenesis (262). LXR activation also prevents bile acid toxicity (263). On the other hand, LXR expression correlates with intrahepatic inflammation and fibrosis in patients with nonalcoholic fatty liver disease (264). Recently, it became apparent that these nuclear receptor ligands, when sulfated, are not merely blocked from binding, but are actively inhibiting nuclear receptor signaling by yet unknown mechanisms (265), putting steroid sulfotransferases into the context of energy metabolism and regulation. Furthermore, sulfated sterol signaling is not limited to LXR receptors, but it affects several other members of the nuclear receptor family acting then as metabolic sensors of intracellular lipid, bile acids, and cholesterol levels: CAR, farnesoid X receptor, peroxisome proliferation activator receptors, and retinoid X receptor (266). Sulfation of bile acids and oxysterols is catalyzed exclusively by the SULT2A and SULT2B enzymes (100,267). Hence, sulfated oxysterols may represent candidates for the development of novel therapeutic approaches to nonalcoholic fatty liver disease (268), a metabolic complication of obesity that continues to increase in prevalence, now representing the second most common cause of liver transplantation.
A. Cancer
Steroid metabolism is significantly altered in many endocrine-related cancers (269). Evidence suggests that sulfation pathways are down-regulated, whereas STS activity increases in many tumors, thus favoring desulfation and therefore downstream conversion of steroids into more active metabolites ( Figure 5).
Breast
Most breast cancers are initially estrogen responsive and exhibit increased intratumoral estrogen concentrations compared to adjacent normal breast tissue (270). Hence, it is of interest that the highest incidence of breast cancer is observed in postmenopausal women despite cessation of ovarian estrogen synthesis and the consequent drop in circulating estrogen concentrations. Estrogens can still be produced in postmenopausal women by tissuespecific local conversion of androstenedione to E 1 , and to a lesser extent T to E 2 , by aromatase (271). However, estrogens are sulfated by E 1 sulfotransferase (SULT1E1) and phenol sulfotransferase (SULT1A1), and this accounts for the high circulating E 1 S concentrations observed in postmenopausal women, with this E 1 S pool acting as a reservoir for peripheral conversion to E 1 by STS (35).
Significant scientific discussion surrounds the relative importance of the two primary pathways for active estrogen generation, E 1 S desulfation, and androgen aromatization in hormone-dependent breast cancer. Whereas increased aromatase protein expression parallels increased intratumoral E 2 concentrations (272), there is currently limited support for STS expression directly correlating with locally increased E 2 concentrations. However, STS activity can be 50 -200 times higher than aromatase activity in breast cancer tissue (273), and STS mRNA is frequently detected in breast tumors, whereas aromatase levels are relatively low (274). This suggests that STS, rather than aromatase, may be the primary driver for local E 1 production in hormone-dependent breast cancer (275,276). Enzyme kinetic studies show that STS activity is higher than aromatase not only in cancerous tissue but also in normal breast (270). In addition to local estrogen metabolism via STS and aromatase, serum estrogen levels for E 1 , E 1 S, E 2 , and E 2 sulfate (E 2 S) have been reported to fall after surgical removal of STS-positive breast cancer in postmenopausal women, implying an additional systemic effect and indicative of the importance of STS activity in forming active estrogens (17,277).
In breast cancer, STS mRNA expression (278) and activity (275) are higher in cancerous compared to normal breast tissue, with elevated STS mRNA expression being significantly associated with lymph node metastasis, histological tumor grade (279), and poor prognosis (280). Soft tissue breast cancer metastasis expresses higher STS mRNA compared to primary tumors (281). Furthermore, SULT1E1 expression, responsible for E 1 sulfation, is decreased in breast cancer, with an inverse correlation between tumor histological grade and levels of intratumoral SULT1E1 immunoreactivity (17,282,283). Thus, it is possible that breast cancers favor local desulfation path-ways to increase E 1 availability from high circulating E 1 S. Subsequent E 1 conversion, by 17HSDs (17HSD-1), potentially results in E 2 concentrations that are considerably higher in breast cancer tissue compared to circulating levels (284). Intriguingly, patients treated with the aromatase inhibitor exemestane have elevated breast tumor STS and 17HSD-1 immunoreactivity, which both correlate neg-atively with tumor Ki67 proliferation index (285). This suggests a compensatory mechanism via E 1 S desulfation in response to local E 2 depletion caused by aromatase inhibition.
Surprisingly, however, there are no definitive studies correlating breast intratumoral E 1 and E 2 concentration and STS activity and expression. Haynes et al (286) have Figure 5. A, The balance between sulfation and desulfation strongly influences steroid hormone action. The nonsulfated steroid may exert its biological effect by binding to its cognate nuclear receptor or may be downstream converted to more active steroids. Once sulfation occurs by one of various sulfotransferases, solubility of the steroid is dramatically increased, facilitating renal excretion, but also circulatory transit fueling peripheral desulfation and local steroidogenesis. Sulfation may also suppress or modify downstream conversion by masking one of several functional groups; further sulfation steps may occur or sulfated steroids may exert biological effects directly. B, Dysregulation of sulfation and desulfation pathways dramatically alters available active steroids. In disease, especially in cancer, SULT enzymes expression and thus activity are decreased, whereas STS activity is elevated. This situation favors desulfation and therefore results in an elevated local synthesis of active steroids. Furthermore, OATP expression is also elevated in many cancers, increasing the intracellular availability of sulfated steroids to STS action. doi: 10.1210/er.2015-1036 press.endocrine.org/journal/edrv 543 shown that STS mRNA may be down-regulated in breast cancer from both premenopausal and postmenopausal women compared to matched controls. Furthermore, they suggest that no correlation was observed between intratumoral E 2 and STS mRNA expression, and there is limited evidence to support a role for STS in establishing intratumoral E 2 levels in these patients. However, they failed to examine STS activity in these tissue samples, and it is thought that post-translational modification of the STS enzyme is more important in determining STS activity than measuring STS mRNA expression levels alone (61). Furthermore, these results are in sharp contrast to other findings that show breast cancer patients have a significantly longer disease-free survival if their STS mRNA levels are low (278) and that STS protein expression correlates with ER␣ expression (287). Also, STS activity has consistently been shown to be elevated in breast cancer tissue (57,269,278,288). The regulatory mechanisms underlying increased STS expression in breast cancer are not fully understood. Current evidence suggests that inflammatory cytokines, TNF␣ and IL-6, increase STS activity (57,61), although this has been disputed by a study showing negative correlation between TNF␣/IL-6 expression and STS expression in soft tissue breast cancer metastases and primary tumors (281). Expression of tissue-specific transcripts of STS may also be controlled by ER␣ signaling in normal and cancerous breast tissue (52); these studies also demonstrated that ER␣-positive human breast cancer tissue expresses more active STS isoforms that are up-regulated by local E 2 concentrations, thus promoting cancer progression (52). Supporting this, a recent study investigating 45 primary breast tumors showed that STS and 17HSD-1 expression correlates with ER activity, as measured by transfection using adenovirus vectors carrying an ERE-tk-GFP reporter gene (287). Thus, ER␣ activation is important in regulating STS activity and subsequent E 1 and E 2 synthesis, although a full understanding of what regulates STS and SULT1E1 expression and activity in breast cancer remains to be elucidated.
But what of DHEAS desulfation and the subsequent synthesis of T and dihydrotestosterone (DHT) in breast cancer? Before aromatase action, desulfation of DHEAS by STS generates androgens, and although androgens can act as estrogen prohormones, they themselves may have a role in breast cancer incidence, risk, and proliferation (282,289). Historically, androgens were given therapeutically to breast cancer patients (290,291) to improve survival outcomes. However, patients suffered undesirable side effects such as hirsutism and amenorrhea.
Currently, controversy exists as to the significance of androgenic effects in breast cancer, and therefore, by ex-tension, the importance of local DHEA sulfation and desulfation. Unlike estrogens, in normal breast androgens inhibit proliferation (292,293). However, in breast cancer, androgenic effects are complex and most likely depend on the differing intracrinology of different breast carcinomas (see Ref. 294 for excellent review). A recent systematic review exploring 19 studies with a total of 7693 women found AR expression in 60.5% of breast cancers. AR expression was more common in ER␣-positive tumors (74.8%) compared to ER-negative (31.8%), and patients expressing AR had improved overall survival (295). This would support the rationale for selective AR activation as a potentially attractive therapeutic approach for breast cancer.
Although circulating DHEAS concentration correlates positively with breast cancer incidence in premenopausal (296,297) and postmenopausal women (298,299), the importance of androgen synthesis through DHEAS desulfation via STS in breast cancer has not yet been fully explored. Early studies showed that DHEAS caused proliferation in T47D breast cancer cells, known to have STS activity (300), even when cotreated with tamoxifen, implying that androgens influence breast cancer proliferation through AR activation (301) and not just through estrogenic metabolites (302). However, other studies contest these facts, with some showing DHEA as antiproliferative in MCF-7 (303) but not in MDA-MB-231 or Hs578T cells (304).
In vitro (305) and in vivo (306) studies using STS inhibitors imply that the dominant effect of increased STS activity in breast cancer is not inhibition of growth by androgens, but rather estrogen-driven proliferation. However, phase I clinical trials of Irosustat (STX64, 667Coumate), a potent STS inhibitor (307), in breast cancer patients demonstrated that blocking STS activity not only significantly reduced circulating E 1 S, but also lowered plasma DHEA and androstenedione concentrations, and if DHEA is indeed antiproliferative in breast cancer, this may have unwanted consequences for this treatment approach.
Prostate
In men, the prostate is the major peripheral tissue where STS activity contributes to the local synthesis of biologically active androgens. Unlike breast cancer, where a higher exposure to estrogens is associated with increased malignancy risk, prostate cancer incidence is not associated with high circulating androgen concentrations (308). Men with prostate cancer, who have been treated by castration, can be successfully treated further by adrenalectomy (309). Although outdated, this approach works because the adrenals secrete DHEAS, which can be activated to the active androgens T and DHT in prostate tissue (310).
Similar to breast cancer, STS activity has been detected in normal (311) and cancerous (312) prostate tissues. Furthermore, SULT1E1 (17) and SULT2B1 (313) mRNA are also detected. The expression patterns of these enzymes will therefore influence local estrogen and androgen synthesis. The prostate cancer cell line LNCaP exhibits higher STS activity than some breast cancer cell lines (176). STS activity is also present in DU-145 and PC-3 prostate cancer cells and in human prostate cancer biopsies (312). DHEAS can be metabolized to DHEA in these cells, with this hydrolysis being blocked by the STS inhibitor oestrone-3-O-sulphamate (314). DHEA inhibits, whereas T induces, apoptosis in LNCaP cells under serum-deprived conditions (315); this effect may be due to differing binding affinities to the AR of these two steroids, leading to different coactivator/corepressor recruitment. With regard to proliferation, administration of DHEAS to castrated male rats increases ventral prostate and seminal vesicle weights and increases circulating DHEA and DHT concentrations, with this effect abolished by STS inhibition (316). However, DHEA alone has little effect on LNCaP or LAPC-4 growth, unless they are cocultured with prostate stromal cells (317,318), suggesting that downstream androgen biosynthesis from DHEA requires both prostate stromal and epithelial components. Intriguingly, in prostate cancer patients treated with the nonspecific P450c17 inhibitor, ketoconazole, or the specific P450c17 inhibitor, abiraterone acetate, significant (ϳ20 g/dL) circulating DHEAS concentrations were still present, suggesting that this could act as a depot for further downstream androgen formation via desulfation and AKR1C3 action (319). Furthermore, a reasonably substantial (2.0 -2.5 ng/mL) concentration of DHTS circulates in men (320) and, similarly to E 1 S in women, could act as a reservoir for peripheral DHT synthesis. Indeed, prostate cancer patients exhibit significantly elevated circulating DHT and DHTS concentrations compared to aged-matched controls (321), suggesting their importance in this malignancy's development and a potential further role for STS in active androgen formation.
Recently, a role for estrogen signaling in prostate cancer development, particularly through ER splice variants, has also been postulated (322), and evidence is growing that ER may modulate androgen action and therefore prostate cancer development (323). Men have significant E 1 S concentrations in circulation (see Table 1). STS activity is present in healthy and malignant prostate tissue (312), and prostatic E 1 S uptake may increase during aging (324). Furthermore, circulating E 2 concentrations are elevated in patients with prostate cancer (325), suggesting estrogenic influences on the incidence and development of this malignancy.
Interestingly, SULT1B1, a sulfotransferase that can sulfate DHEA, is down-regulated in prostate cancer compared to normal prostatic tissue (108). Knockdown of SULT1B1 in LNCaP cells increases DHEA-induced proliferation (326), implying that the STS/SULT1B1 ratio in the prostate regulates DHEAS/DHEA-induced proliferation. This ratio is likely to be influenced by local inflammatory conditions, as shown by Suh et al (59) who assessed whether TNF␣ can induce STS expression; LNCap and PC-3 cells up-regulated STS expression in a TNF␣ concentration and time-dependent manner. They further demonstrated that at least part of this effect was via the phosphatidylinositol 3 (PI3)-kinase/Akt pathway because PI3-kinase inhibitors and AKT inhibitors suppressed STS mRNA up-regulation induced by TNF␣. The same group later examined PC-3 cells and found that IGF-2 increased STS expression via the same PI3-kinase/Akt pathway (327).
The fact that inflammation and cancer are often seen together (328), with evidence linking prostatitis with prostate cancer risk (329) and high TNF␣ associated with poorer prognosis with earlier onset of castration-resistant prostate cancer (330), it is interesting to surmise that local inflammatory conditions may impact on the balance of sulfation and desulfation in prostate tissue to drive proliferation. Intraprostatic hormonal dysregulation occurs in benign prostate hyperplasia (BPH) with an increase in active sex steroids. STS activity and tissue concentrations of DHEA and E 1 were found to be higher in BPH tissue compared to circulating concentrations (331,332). However, clinical evidence of an association between TNF␣, DHEAS, and DHEA concentrations, and BPH and prostate cancer progression is currently lacking.
Endometrium
Endometriosis is a common gynecological condition defined as proliferation of ectopic endometrial tissue and stroma, ie, in locations other than the uterus. It is associated with pelvic pain, dyspareunia, dysmenorrhea, and infertility. Endometriosis is estrogen-dependent and therefore occurs in women of reproductive years (333). The premenopausal endometrium undergoes a regular and predictable sequence of proliferation and secretion followed by menstruation. STS has been shown to have a cyclical change in activity during the menstrual cycle, suggesting that, in this tissue at least, it is regulated by hormonal factors as well as regulating local estrogen and androgen synthesis (334). In human endometrial tissue, STS activity peaks at the early secretory stage and declines thereafter (335). IL-1, known to increase at the secretory doi: 10.1210/er.2015-1036 press.endocrine.org/journal/edrv phase of menstruation, suppresses STS mRNA and activity in human endometrial stromal cells (336). STS activity is also elevated in ovarian and rectovaginal endometriosis compared to disease-free endometrium with enzyme ratios (STS/SULT1E1 and HSD17B1/HSD17B2), favoring E 2 production (337). Indeed, SULT1E1 protein has been shown to be down-regulated in human endometriosis tissue (338), and increasing STS activity correlates with disease severity (339). Not all studies have shown this correlation, but STS activity is consistently high in eutopic and ectopic endometrial tissue (340). STS inhibitors reduce STS activity in endometriotic implants (341), and inhibition of STS in murine models of endometriosis decreases disease severity (342). Interestingly, randomized, double-blind, placebo-controlled trials examining combining E2MATE, an STS inhibitor, with norethindrone acetate, a synthetic progestin, demonstrated a synergistic effect on STS inhibition, suggesting this approach as a potential treatment option for endometriosis patients (343). Increased STS activity and expression are also associated with endometrial cancer. Both nuclear ERs are expressed in the endometrium, with ER␣ more highly expressed than ER. Data on ER expression alterations in both endometriosis and endometrial cancer are conflicting (344 -346). However, as with breast and colorectal cancer, estrogen levels have been shown to be higher in endometrial tumor tissue compared to normal, with E 2 tissue levels correlating positively with disease stage and tumor invasion (347). Prolonged lifetime estrogen exposure and reproductive factors such as early menarche, nulliparity, and late menopause increases the risk of endometrial cancer (348 -350). Hormone replacement therapy (HRT) can increase the risk of endometrial cancer because estrogens stimulate proliferation in the endometrium, unless it is combined with progesterone therapy, as this hormone differentiates endometrial cells.
Despite endometrial cancer being estrogen driven, paradoxically and similar to breast cancer, the greatest incidence is in postmenopausal women (351), again indicating peripheral estrogen synthesis. Although aromatase activity is not present in endometrial tissue (352), STS activity is increased up to 12-fold in human endometrial cancer tissue (353,354). Utsunomiya et al (355) found 86% of endometrial tumors immunoreactive for STS and 29% for SULT1E1. The STS/SULT1E1 ratio correlated with poorer prognosis, with a higher ratio associated with high circulating E 2 levels. Of note, Lukanova et al (349) showed that elevated circulating estrogens and androgens were associated with endometrial cancer risk. They hypothesized that although serum androstenedione and T positively correlated with endometrial cancer risk, it can-not be concluded whether this is mediated primarily through estrogen conversion or by AR activation. Thus, attenuating both estrogenic and androgenic sex steroids through STS inhibition appears to be a feasible therapeutic strategy in endometrial cancer.
The endometrial cancer cell lines Ishikawa, HEC-1A, HEC-1B, and RL-95 do not metabolize androstenedione to E 1 or E 2 , suggesting that aromatase is not important in these cells (356). However, E 1 S is hydrolyzed in these cells, albeit at a low rate, and an in vivo Ishikawa xenograft model in mice has demonstrated that endometrial cancer proliferation can be driven by E 1 S and inhibited by STS inhibitors Irosustat and STX213 (357). Unfortunately, phase II trials of Irosustat as a monotherapy in endometrial cancer patients were discontinued in 2011 after data indicated no beneficial effect of STS inhibition when compared to megestrol acetate. However, future studies will examine the effects of combining STS inhibition with standard treatment options for endometrial cancer patients.
Colorectal
Colorectal cancer (CRC) is not routinely referred to as hormone sensitive; however, estrogens and androgens are implicated in both normal gastrointestinal physiology and carcinogenesis (358). Evidence supports a role for estrogens not only in CRC pathogenesis, but also in protection. This dual role of active estrogens was first postulated from the Women's Health Initiative (WHI) study that demonstrated combination (equine E 1 S plus progestins) HRT resulted in 40% CRC risk reduction, suggesting that estrogen or progestins may have protective roles. The combined oral contraceptive pill also reduced CRC risk by 20% (359). However, women diagnosed with CRC while using HRT had higher tumor grades, suggesting either that HRT delayed clinical diagnosis or that estrogens also play a role in tumor progression (360). A large study by Zervoudakis et al (361) explored the association between lifetime endogenous estrogen and CRC, finding that higher exposure increased risk in postmenopausal women. Contradictorily, as a population, males are at an increased risk of CRC, in particular compared to premenopausal women. Younger women also have an improved survival (362), suggesting that the relationship between estrogens and CRC incidence is complex.
Estrogen concentrations, as measured by LC-MS, are higher in human CRC tissue compared to normal colonic mucosa, and when separated into E 1 and E 2 , E 1 concentrations predominated (363), suggesting high CRC intratumoral E 1 S desulfation. High local total estrogen (E 1 and E 2 ) concentrations are associated with reduced CRC survival (337). Interestingly, estrogen concentrations are concordant with high STS and low SULT1E1 expression rather than aromatase, and the STS/SULT1E1 ratio correlates with prognosis; ie, patients with tumors negative for STS and positive for SULT1E1 had an improved outlook, whereas those positive for STS and negative for SULT1E1 were associated with unfavorable clinical outcome. Thus, estrogens generated through STS appear to contribute to CRC progression and poor survival. English et al (364,365) also found STS activity to be increased in CRC tumors, and additionally 17HSD-2 protein expression was frequently reduced with no alteration in aromatase activity; thus, increased E 1 generated via STS, together with a fall in 17HSD-2, should drive production of biologically active E 2 .
The evidence for DHEAS and DHEA in CRC incidence and proliferation is more obscure. Debate exists on whether there is any significant aromatase activity in the colon (358,364) and, if it is present, whether it affects clinical outcomes (366). Therefore, local DHEAS desulfation would mostly be utilized for androgen production, and functional membrane ARs are present in colonic tumors (367). However, the effect of androgens in CRC is unclear. In vitro T induced apoptosis in CRC cell lines (315,368), whereas DHEA enhances survival (315). In contrast, Tutton and Barkla (369) found that in vivo administration of T accelerated cell proliferation in the small intestine and induced colon cancer in rats, with CRC growth reduced after castration. This early study has been strongly supported recently by elegant studies demonstrating that T and DHT promote the development and proliferation of colon adenomas in rats and mice, whereas castration markedly protected colon adenoma formation (370).
In humans, studies exploring the effect of androgen treatment on the colon have been inconsistent. Alberg et al (371), examining serum from CRC patients, found that higher circulating DHEAS concentrations in men were slightly associated with a decreased risk of CRC. Supporting this finding, a large study of 107 859 prostate cancer patients explored CRC incidence and androgen deprivation therapy (372). Initial results showed orchiectomy caused the highest incidence of CRC, followed by GnRH agonist therapy and men with no androgen deprivation therapy. CRC risk increased with the length of time a patient was subjected to androgen deprivation therapy (373), and thus androgens may act like estrogens with both protective and cancer-promoting effects in the context of CRC.
B. Aging
Serum DHEA and DHEAS decline with age, and at 70 years of age, circulating DHEAS concentrations have diminished by 90% compared with the peak levels achieved at ages 20 -30 (374). Thus, there is widespread speculation about a causative role of DHEAS in age-related disease development and human longevity. In cross-sectional studies, low DHEA and DHEAS concentrations have been associated with geriatric syndromes, such as sarcopenia (375,376), poor cognitive function (377), depression (378), cardiovascular disease (379), erectile dysfunction (380), and low sexual drive (381). Little is known about what triggers the gradual decline of DHEA and DHEAS, but because it accounts for 50% of androgens in men and 75% of estrogens in premenopausal women (382), delineating this effect is of significant importance in age-related research.
It is most likely that declining DHEA and DHEAS concentrations are associated with decreased adrenal production, rather than an alteration in DHEA metabolism (383). However, some evidence suggests that a relationship between DHEAS and DHEA is defined by activity of SULT2A1, the enzyme that converts DHEA to DHEAS (223), and that impairment of DHEA sulfation causes low DHEAS and concurrent androgen excess with high DHEA and androstenedione concentrations (237). Genetic variants of SULT2A1 do not appear to have an effect on individual DHEA and DHEAS concentrations or the DHEA/DHEAS ratio as a marker of DHEA sulfonation capacity (249). However, to date, no other research has been published on aged-induced alterations in SULT2A1 and STS activity, particularly in the adrenal gland; thus, conclusions on potential mechanisms behind the age-associated decline in DHEA and DHEAS are lacking.
A. STS inhibitors
Clearly, the ability to pharmacologically target STS has significant potential in a number of disease states. In cancer, where the desulfation of E 1 and DHEA may play important roles in breast and prostate cancer, STS inhibitors may show significant promise (269). Furthermore, steroid dynamic studies reveal that DHEA and DHEAS can act as precursors for the formation of other steroids with estrogenic and androgenic properties, such as 5-androstenediol (Adiol). Evidence suggests that DHEAS (301) and Adiol (384) stimulate breast cancer cell proliferation in vitro, although other contradictory evidence suggests that DHEA may play a protective role against the disease (304,385). Interestingly, DHEAS concentrations in plasma are very high (Table 1); it is the most abundant steroid secreted by the adrenal cortex. Similar to E 1 S, it has a long plasma half-life (10 -20 h), significantly longer than the unconjugated DHEA (386,387). After hydrolysis via doi: 10.1210/er.2015-1036 press.endocrine.org/journal/edrv STS, DHEA undergoes further reduction to Adiol, an androgen steroid able to bind to the ER and cause mitogenesis (388). Therefore, due to the large plasma concentrations of the precursors of Adiol, this STS-affected pathway may play an important role in cancer tumorigenesis. Thus, inhibiting STS should not only block E 1 synthesis, but also significantly limit androgen precursors. Indeed, in the first phase I clinical trial of an STS inhibitor, circulating androstenedione and T were significantly decreased in postmenopausal women with refractory hormone-dependent breast cancer (307). There have been several recent comprehensive and excellent reviews covering the development of STS inhibitors for various hormone-dependent malignancies (269, 389 -391); thus, this section will only briefly examine and summarize the current status of STS inhibitor development.
The first STS inhibitor to demonstrate hepatic in vivo activity in a rat model was 667Coumate (STX64, Irosustat), a potent tricyclic coumarin-based sulfamate that irreversibly inhibits STS (392). This compound has shown excellent in vivo efficacy against E 2 S-driven breast cancer (393,394) and endometrial cancer (357) and has shown promise in phase I clinical trials in female patients with hormone-dependent breast cancer (307). Currently, 667Coumate undergoes evaluation in hormone-dependent breast cancer patients in combination with aromatase inhibitors in phase I/II trials, and results should be published in late 2015.
B. Modulation of sulfation
All human sulfotransferases need the atypical nucleotide PAPS as an active sulfate donor, and PAPS binding is highly conserved between distantly related members of this large gene family. Because PAPS has an adenosine moiety, kinase-directed and purine-based compound libraries have been used in the past to discover sulfotransferase inhibitors (400 -402). To develop SULT isoformspecific inhibitors, bisubstrate analog-based approaches have been applied to various sulfotransferases (403,404).
The rate-limiting step for all sulfation reactions is provision of active sulfate in the form of PAPS, and the responsible PAPS synthases are recognized as fragile enzymes stabilized by the APS intermediate of PAPS biosynthesis (70,72). APS interacts both with the sulfurylase and APS kinase domain and effectively suppresses PAPSS2 aggregation at low micromolar concentrations (72). Exploiting this principle of action for compound development may result in PAPS synthase-stabilizing compounds that may increase overall sulfation capacity.
VIII. Future Directions
Historically, steroid sulfation was regarded as a mechanism to facilitate steroid circulatory transit and renal excretion. Research over the past few decades challenged this view because it became clear that circulating steroid sulfates (ie, DHEAS) are desulfated and thus act as a systemic reservoir for peripheral metabolism. This is especially important because peripheral or local steroidogenesis can thus occur in otherwise nonsteroidogenic tissues (ie, devoid of the P450 side chain-cleaving enzyme P450scc), such as the brain or in prostate cancer (408). Sulfation and desulfation represent a dynamic way of balancing the availability of free steroid hormones near target sites; however, these processes need to be tightly controlled in cells where steroid sulfotransferase and sulfatase are coexpressed to avoid a vicious cycle.
This review has made clear that steroid hormone action strongly relies on the intricate interplay of sulfation and desulfation processes as well as membrane transport of sulfated steroids. Studies simultaneously looking at all three of these processes are still lacking; there are no clear data on the factors that regulate these pathways, and subsequently their importance in many pathologies has most likely been overlooked. It is clear that the ratios between STS and SULTs will have profound consequences on local steroid metabolism, but research into how these ratios impact upon normal and diseased tissue remains to be done.
It would be of great interest to map the relative concentrations of sulfated and desulfated steroids in a tissuespecific manner under various physiological states. Whether MS imaging (409) may turn out to be useful in this regard depends on when it will reach spatial resolution on a single cell level. Furthermore, the accurate measurement of the intracellular fluctuations of both sulfated and nonsulfated steroids in both normal and pathological states would provide significant insights into STS, SULT, and OATP biology.
Furthermore, the direct biological effects of steroid sulfates are the subject of lively scientific debate. E 1 S may elicit biological effects in uterine endometrium that are not seen with E 2 (15). As a neurosteroid, pregnenolone sulfate clearly exerts different effects than its nonsulfated counterpart, pregnenolone. Although unconjugated pregnenolone is a barbiturate-like agonist, pregnenolone sulfate can bind to and suppress the gamma-aminobutyric acid receptor acting as a picrotoxin-like antagonist (410). It is difficult to dissect the molecular roles of DHEA and its sulfate ester, DHEAS. Experimentally, it is challenging to discriminate between direct DHEAS effects and those caused by desulfation and downstream conversion to more potent androgens and estrogens. DHEAS has been reported to induce transcription of the abundant miR-21 in liver cell lines; however, this effect is clearly linked to both desulfation and conversion to more potent androgens and estrogens (411). Evidence accumulates that DHEAS may have physiological roles of its own-as a neurosteroid acting antagonistically to DHEA (408); it has a hormone-like activity on the spermatogenic GC-2 cell line by activating a G␣11-receptor (412) and has been shown to directly activate protein kinase C in human neutrophils (413).
Pharmacological intervention on sulfation and desulfation pathways remains in its infancy. Although promising progress has been made with regard to STS inhibition, few pharmacological tools exist to selectively target individual SULTs or OATPs. The development of these inhibitors would not only be a boon for basic researchers but also would allow for the potential development of future drugs targeting sulfation/sulfate transportation, many of which are up-regulated in various pathologies.
|
2017-10-17T06:53:33.983Z
|
2015-07-27T00:00:00.000
|
{
"year": 2015,
"sha1": "5ec4339ff658fa780fe99f9e88db479906767b4b",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/edrv/article-pdf/36/5/526/20218160/edrv0526.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "47b072139696383a6424bd3e327e8c69fd9695cc",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
}
|
27823307
|
pes2o/s2orc
|
v3-fos-license
|
Genetic disruption of uncoupling protein 1 in mice renders brown adipose tissue a significant source of FGF21 secretion
Objective Circulating fibroblast growth factor 21 (FGF21) is an important auto- and endocrine player with beneficial metabolic effects on obesity and diabetes. In humans, thermogenic brown adipose tissue (BAT) was recently suggested as a source of FGF21 secretion during cold exposure. Here, we aim to clarify the role of UCP1 and ambient temperature in the regulation of FGF21 in mice. Methods Wildtype (WT) and UCP1-knockout (UCP1 KO) mice, the latter being devoid of BAT-derived non-shivering thermogenesis, were exposed to different housing temperatures. Plasma metabolites and FGF21 levels were determined, gene expression was analyzed by qPCR, and tissue histology was performed with adipose tissue. Results At thermoneutrality, FGF21 gene expression and serum levels were not different between WT and UCP1 KO mice. Cold exposure led to highly increased FGF21 serum levels in UCP1 KO mice, which were reflected in increased FGF21 gene expression in adipose tissues but not in liver and skeletal muscle. Ex vivo secretion assays revealed FGF21 release only from BAT, progressively increasing with decreasing ambient temperatures. In association with increased FGF21 serum levels in the UCP1 KO mouse, typical FGF21-related serum metabolites and inguinal white adipose tissue morphology and thermogenic gene expression were altered. Conclusions Here we show that the genetic ablation of UCP1 increases FGF21 gene expression in adipose tissue. The removal of adaptive nonshivering thermogenesis renders BAT a significant source of endogenous FGF21 under thermal stress. Thus, the thermogenic competence of BAT is not a requirement for FGF21 secretion. Notably, high endogenous FGF21 levels in UCP1-deficient models and subjects may confound pharmacological FGF21 treatments.
INTRODUCTION
Brown adipose tissue (BAT), the main site of non-shivering thermoregulation (NST), defends body temperature in small mammals and human infants [1]. In adult humans, the presence of active BAT inversely correlates with body mass index [2e5], suggesting that BAT is a natural defense mechanism against obesity. Thermogenic functionality of BAT is regulated at multiple levels, but the mitochondrial uncoupling protein 1 (UCP1) is crucial. UCP1 converts nutrient energy directly to heat by uncoupling substrate oxidation from ATP synthesis [6]. UCP1 KO mice are cold sensitive but can survive after stepwise acclimatization to the cold [7]. Canonical activation of BAT requires sympathetic noradrenaline release, but the search for peripheral hormones that increase UCP1 gene expression revealed several serum proteins, including fibroblast growth factor 21 (FGF21). FGF21 is a pleiotropic regulator of glucose homeostasis, lipid and energy metabolism [8e10], that is mainly released from the liver. Recently, other tissues such as the stressed muscle and adipose tissue were identified as sources of FGF21 [11,12]. In white adipose tissue (WAT), FGF21 appears to act solely in an autocrine or paracrine manner [8,13]. In BAT of rodents, FGF21 expression is induced by cold exposure and beta3-adrenergic stimulation [14e16]. So far, only one study showed secretion of FGF21 from BAT, suggesting an endocrine role of activated BAT [16]. Increased systemic FGF21 serum levels, either endogenously induced or exogenously administered, lead to the appearance of brown fat likestructures in WAT depots by the recruitment of beige fat cells e generally termed the "browning" of WAT [8,11,15,17]. The beige adipocytes possess thermogenic potential through expression of functional UCP1 [18] but whether they contribute to systemic adaptive thermogenesis is under debate [19]. In adult humans, the major proportion of UCP1-positive cells is classified as beige adipocytes 1 [20,21] while the minor proportion is classical neonatal BAT that is found in the neck region [22]. Cold exposure of humans increased thermogenic gene expression including UCP1 and circulating FGF21, suggesting augmentation of BAT thermogenesis in concert with FGF21 [23]. Furthermore, the human data suggested that FGF21 secretion during cold exposure may require functionally active BAT [24]. Given the effects of exogenous FGF21 on "browning", it is possible that elevated serum FGF21 levels increase thermogenic gene programming in beige adipose tissue of humans. Accumulating evidence from rodents and man suggests that the BAT-FGF21 axis plays a key role in the regulation of energy expenditure and thermogenesis. In the UCP1 KO mouse model, which lacks the ability to recruit adrenergic thermogenesis in BAT [25,26], we discovered that circulating FGF21 levels are highly elevated in response to cold exposure and specifically released from non-functional BAT. The release of full-length FGF21 suggests endocrine crosstalk with other tissues, presumably altering serum metabolites and inducing thermogenic programs in WAT.
Animals
The experiments were performed in homozygous WT and UCP1 KO littermates (genetic background C57BL/6J e originally from Jackson Laboratory -Strain Name: B6.129-Ucp1tm1Kz/J). Mice were bred, born and weaned at 30 C. They were housed in groups with ad libitum access to food and water and a 12:12-h darkelight cycle (lights on: 7:00 CET). At the age of 10e12 weeks, mice were single housed and randomly assigned to warm (30 C) or cold (2e3 wks at 18 C followed by 4 wks at 5 C) acclimation. At 16e18 weeks of age, mice were euthanized 3e4 h after lights went on, and serum and tissue samples were collected. Another cohort of mice was acclimated to either 23 C or 18 C for 2 wks. The animal welfare authorities approved animal maintenance and experimental procedures.
2.2. Gene expression analysis RNA was extracted from BAT, liver and inguinal white adipose tissue (iWAT) using Qiazol according to the manufacturer's instructions (Qiazol Lysis Reagent, Qiagen). Synthesis of cDNA and DNAse treatment were performed from 1 mg of total RNA using the QuantiTect Reverse Transcription Kit (Qiagen). Quantitative real-time PCR (qRT-PCR) was performed on the ViiAÔ 7 Real-Time PCR System (Applied Biosystems). The PCR mix contained (5 ml) SybrGreen Master Mix, (Applied Biosystems), a cDNA amount corresponding to 5 ng of RNA used for cDNA synthesis and gene specific primer pairs. Gene expression was calculated as ddCT, using HPRT or B2 microglobulin (B2M) for normalization and relative to the WT 30 C, which was normalized to a value of 1. The oligonucleotide primer sequences are available on request.
Serum analysis
For all serum analyses, commercially available assay kits were used according to the manufacturer's recommendations. Serum triglycerides (Triglyceride Colorimetric Assay Kit e Cayman Chemical), glycerol (Glycerol Colorimetric Assay Kit, Cayman) and intact/active FGF21 (Intact FGF-21 ELISA Kit, Eagle Biosciences) were measured undiluted. Sera for FGF21 (Mouse/Rat FGF-21 Quantikine ELISA Kit -R&D Systems) and NEFA (NEFA-HR2 Wako Chemicals) detection were diluted 1:2. Serum samples were stored at À80 C until use.
Ex vivo analysis of FGF21 secretion in BAT and WAT
BAT and iWAT tissues were collected and washed several times with phosphate-buffered saline (PBS, Life technologies). Tissues were cut in pieces (10e15 mg), washed again 3x in PBS and incubated in Dulbecco's modified Eagle's medium (DMEM/F-12, Life technologies) containing 1% essential fatty acid-free bovine serum albumin (Sigma) for 4 h at 37 C in a humidified incubator containing 5% CO 2 . Thereafter, the supernatants were removed and analyzed for FGF21 (R&D Systems). BAT and iWAT tissue pieces were washed in PBS, weighted, snap frozen in liquid nitrogen and stored at À80 C for protein detection.
2.5. Ex vivo analysis of FGF21 secretion in soleus and EDL muscle Ex vivo analysis of FGF21 secreted from EDL and soleus muscles was measured as described before [11].
Histology
Fat tissue specimens were fixed in 4% paraformaldehyde (Roth chemicals) for 24 h and embedded in low melting paraffin (Paraplast PlusÒ, Sigma Aldrich) for histological examination. Four mm-thick sections were cut using a rotary microtome (HSM55, Microm). Sections were mounted on superfrost glass slides (Menzel glass), dehydrated in increasing ethanol series and stained with hematoxylin and eosin (H&E) (Merck). Bright field images were obtained with the Keyence Microscope BZ-9000.
Statistics
Statistical analyses were performed using Stat Graph Prism (6.0) (Graphpad). All data are reported as mean AE SEM. After testing for normal distribution of the data and equal variances within the data sets, a Student's t-test (unpaired, two-tailed) was used to determine differences between the genotypes under different temperatures, whereby asterisks indicate the degree of statistical significance which was assumed P < 0.05 (*P < 0.05, **P < 0.01, ***P < 0.001). Statistical analyses testing differences in tissue-specific levels and genotype differences were performed by one-way ANOVA followed by Bonferroni's multiple comparison adjustment. Statistical significance was assumed at P < 0.05. Statistical differences between groups are indicated by superscript letters, whereby different letters indicate significant different at P < 0.05.
RESULTS
3.1. Chronic cold exposure induces serum levels of intact FGF21 in UCP1 KO mice UCP1 KO mice kept at chronic cold (5 C) showed significantly increased FGF21 serum levels as compared to WT mice, whereas no genotype difference was detectable under thermoneutral conditions ( Figure 1A). To investigate whether the blood serum FGF21 levels can potentially mediate endocrine crosstalk between tissues, we assessed full-length FGF21 protein levels using a Sandwich ELISA assay, detecting the full-length protein by binding to the N-terminal and the Cterminal of FGF21. We found highly increased intact FGF21 in the blood serum of cold exposed UCP1 KO mice ( Figure 1B).
FGF21 gene expression is elevated in BAT and iWAT of UCP1 KO mice
We aimed to identify tissues that contribute to increased FGF21 serum levels of UCP1 KO mice under cold conditions and determined FGF21 mRNA levels in multiple tissues. FGF21 mRNA was highly induced in BAT and iWAT, but not in the liver, of cold-acclimated UCP1 KO mice ( Figure 1C). At thermoneutrality, FGF21 mRNA levels in BAT and iWAT were low in both genotypes. The cold-induced increase of FGF21 gene expression in functional BAT and iWAT was expected in WT mice, given the concurrent literature [14e16]. However, the genetic ablation of UCP1 resulted in a dramatic increase of FGF21 mRNA levels in BAT and iWAT but not in the liver ( Figure 1C), suggesting that BAT or iWAT is a relevant source of endocrine FGF21. To comparatively estimate the relative contribution of the tissues to increasing FGF21 serum levels, we also determined FGF21 mRNA levels only considering ct-values without normalizing to tissue-specific housekeeping genes in liver, skeletal muscle, heart, white and brown adipose tissue (Fig. S1), thus allowing for multiple tissue comparison. Among those tissues, we found the highest FGF21 mRNA concentrations in BAT and iWAT of cold acclimated UCP1 KO mice. As expected from the literature, liver shows the highest gene expression under thermoneutral conditions while FGF21 mRNA was nearly not expressed in muscle and heart.
FGF21 serum levels in UCP1 KO mice respond to mild temperatures below thermoneutrality
We asked whether high FGF21 serum levels are a result of prolonged cold exposure and adaptive non-shivering thermogenesis or whether temperatures below thermoneutrality directly control them. In order to distinguish between these two scenarios we performed mild cold challenges of WT and UCP1 KO mice raised and bred at 30 C. Exposure to room temperature (23 C, 11 days) significantly increased serum FGF21 levels in UCP1 KO mice, while no change was observed in WT mice. Acclimatization to 18 C, considered the critical temperature for survival of UCP1 KO mice prior to cold exposure [7], further elevated FGF21 serum levels in UCP1 KO mice to the magnitude observed in response to prolonged cold of 5 C (Figure 2A). The results demonstrate ambient temperature-dependent increase of FGF21 serum levels in UCP1 KO mice.
BAT is the source of cold-induced circulating FGF21 in UCP1 KO mice
To explore the source of increased FGF21 serum levels in UCP1 KO mice, we analyzed secretion of FGF21 from excised BAT, iWAT tissue samples and skeletal muscle (extensor digitorum longus (EDL) and soleus muscle). The ex vivo secretion analyses confirmed BAT but not iWAT or muscle of UCP1 KO mice as the source of circulating FGF21 by showing significant release of FGF21 in an acclimatization temperature-dependent manner (Figure 2BeE). In contrast, levels of secreted FGF21 from WT mouse BAT samples were close to the detection limit ( Figure 2B). The liver is a known source of FGF21; however, we found no evidence for differential up-regulation of liver FGF21 mRNA in response to cold and due to genotype ( Figure 1C). The secretion assays of this soft tissue are prone to error, due to unregulated protein leaking and therefore, were not performed in this study. Thus, although there is no evidence for regulation of liver FGF21, we cannot fully exclude the contribution of the liver to circulating FGF21 levels.
3.5. Cold-induction changes serum metabolites, iWAT morphology and gene expression The typical effects of FGF21 action, such as decreased serum triglycerides, free fatty acids and glycerol levels (Figure 3AeC), were observed in UCP1 KO mice kept at chronic cold (5 C). Further indication for enhanced endogenous FGF21 signaling is supported by increased FGF21-cofactor beta-klotho (Klb) in iWAT ( Figure 3D; BAT/ liver: Fig. S2). As expected, the iWAT of UCP1 KO mice displayed enhanced multilocularity, thermogenic and lipolytic gene expression (Figure 3EeG). The direct link between increased FGF21 of coldexposed UCP1 KO mice and iWAT remodeling requires further experimentation. Interestingly, the remodeling of iWAT in UCP1 KO mice appears associated with metabolic futile cycles but not mitochondrial thermogenesis. Mitochondrial oxidative capacity (measured as COX activity) and the protein content of respiratory chain complexes in iWAT were not changed between genotypes (Fig. S3). Searching for alternative metabolic (thermogenic) pathways revealed evidence for increased futile cycling of lipids in cold exposed UCP1 KO mice, reflected in trends toward higher lipogenic (ACC, FASN) (Fig. S4) and Figure 3: Serum levels of metabolites and iWAT morphology and gene expression of UCP1 KO mice and WT littermates. Mice were maintained at 30 C or exposed to 5 C for 3 weeks (upon acclimation to 18 C for 2 weeks). Serum levels of (A) Triglycerides, (B) NEFAs, (C) Glycerol; iWAT Klb gene expression (D), morphology (E), thermogenic (F) and lipolytic (G) gene expression. *P < 0.05, **P < 0.01, and ***P < 0.0001, significant differences between the genotypes or statistical differences of the genotypes are indicated by superscript letters, whereby means annotated with different letters are significantly different. Statistical significance was assumed at P < 0.05. Data are means AE SEM (n ¼ 6e8/ group). significantly increased lipolytic (ATGL) gene expression (Fig. 3G). The futile cycling of triglyceride hydrolysis and re-synthesis is promoted by glycerol-kinase in WAT [27]. Cold-induction of glycerolkinase (Gyk) and adipocyte glycerol transporter (aquaporin 7; Aqp7) were more pronounced in iWAT of UCP1 KO compared to WT mice (Fig. S4). Altogether, UCP1-independent browning of iWAT appears to be associated with futile cycling of triglycerides and thus, higher ATP turnover (heat) of beige adipocytes.
DISCUSSION AND CONCLUSION
Obesity and diabetes research is geared toward the increase of UCP1 in brown and beige adipocytes to combust surplus energy. Similarly, FGF21 has been suggested as a therapeutic target to lower body weight by increasing energy expenditure. As both, BAT and FGF21, are associated in cold exposed humans, a positive functional relationship of between FGF21 and thermogenic BAT has been suggested [23]. Data on cold exposed humans suggested the augmentation of BAT thermogenesis in concert with FGF21 [23] and the requirement of functionally active BAT for FGF21 secretion [24]. The idea that FGF21 is released as an adipokine from thermogenicallycompetent BAT remained controversial [14e16], and was demonstrated by a single study in mice [16]. Here, not only do we confirm the release of FGF21 from BAT but also we demonstrate that classical UCP1-mediated BAT thermogenesis in mice is not required for cold-induced secretion of full-length FGF21 protein. Unexpectedly, the lack of UCP1 potentiated FGF21 expression in BAT and iWAT, rendering BAT the major source of circulating FGF21 serum levels at temperatures below thermoneutrality. The control for increased FGF21 release is presumably extrinsic (e.g. sympathetic nervous over-activation) as primary brown adipocytes of WT and UCP1 KO mice show no cellautonomous differences in agonist-mediated FGF21 induction (CL316,243) (Fig. S5). Increased FGF21 plasma levels have pleiotropic metabolic effects but several studies established 'browning' of iWAT as a typical FGF21 target. Morphological remodeling such as multilocularity in iWAT [28e30], the induction of the b-Klotho/FGFR receptor complex, thermogenic and lipid metabolism gene programs (except UCP1) support the effects of increased FGF21 levels in coordinating adaptive responses in the absence of UCP1. In the absence of UCP1, thermogenesis may be supported by increased ATP turnover which can be enhanced by ATP consuming futile cycles. Whether these effects are solely mediated by FGF21 and assist to rescue metabolic homeostasis during cold exposure by mobilization of energy storage and futile metabolic cycles has to be determined in future studies, possibly utilizing UCP1-FGF21 double knockout mice.
CONFLICT OF INTERESTS
The authors declare no competing financial interests.
AUTHOR CONTRIBUTIONS
SK and MK performed all experiments except histology, which was performed by LB and FN. SK, MK, and MJ analyzed the data. MJ and SK drafted, wrote and edited the manuscript, SK, RO, CWM, AK, and MJ conceptualized and designed the study.
FUNDING
This work was supported by the German Center for Diabetes Research (DZD).
|
2016-08-09T08:50:54.084Z
|
2015-05-14T00:00:00.000
|
{
"year": 2015,
"sha1": "d923782f89ee100f47152349bdc0ed360446558a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.molmet.2015.04.006",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d923782f89ee100f47152349bdc0ed360446558a",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
266947257
|
pes2o/s2orc
|
v3-fos-license
|
Association between body weight perception and intuitive eating among undergraduate students in China: the mediating role of body image
Background The association between body weight perception and intuitive eating among undergraduate students in China remains insufficiently understood. In the present study, we were aimed to examine the correlation between body weight perception, body image, and intuitive eating and determine whether the link between body weight perception and intuitive eating was influenced by body image. Methods A total of 1,050 undergraduate students completed the survey. Participants provided self-reported demographic details and completed two structured scales. The Body Esteem Scale for Adolescents and Adults (BESAA) and the Intuitive Eating Scale-2 (IES-2) were employed to assess body image and intuitive eating. Analysis of the mediation model was conducted using version 4.1 of the PROCESS Macro. Results with a value of p less than 0.05 were deemed statistically significant. Results The average age of the participants was 20.08 years (SD = 1.64). Among the students, 837 (79.7%) were female, and 212 (20.3%) were male. Body image (r = −0.429, p < 0.001) and intuitive eating (r = −0.313, p < 0.001) exhibited significant negative associations with body weight perception. Furthermore, body image showed a significant positive correlation with intuitive eating (r = 0.318, p < 0.001). Significant mediating effects of body image were identified concerning intuitive eating and body weight perception in the right weight (95% bootstrap CI = 0.007, 0.040) and overweight groups (95% bootstrap CI = −0.048, −0.009). The indirect effects of body image constituted 12.19% and 15.33% of the total effects of intuitive eating in these two groups. Conclusion Although the indirect effects were not substantial, these outcomes shed light on the partial understanding of how body weight perception impacted intuitive eating via body image. Importantly, our findings emphasized the significance of body image and body weight perception, offering a novel insight for prospective interventions targeting undergraduate students.
Introduction
Dieting is the mainstream method to control or lose weight.Although dieting has advantages in weight reduction in the short term, its associations with eating disorders and psychological problems should not be ignored in the long term.Restrictions on eating frequency, amount, and variety can raise the risk of anxiety, depression, metabolic diseases (e.g., diabetes, hypertension), and even mortality (1).Consequently, there is a burgeoning interest in investigating adaptive eating behaviors like intuitive eating to mitigate these potential adverse consequences.Intuitive eating denotes an adaptive eating approach strongly tied to internal physiological signals such as hunger and satiety, rather than external cues (2).Tylka previously outlined three defining characteristics of intuitive eating."Unconditional Permission to Eat" entails individuals not resisting or feeling no guilt about hunger, eating when hungry without labeling food as good or bad."Eating for Physical Rather than Emotional Reasons" signifies consuming food due to physical hunger instead of negative emotions."Reliance on Hunger and Satiety Cues" denotes individuals' confidence in their sense of hunger and fullness, guiding their eating patterns."Body-Food Choice Congruence" was subsequently identified as the fourth feature of intuitive eating, encompassing the selection of foods that fulfill both physical needs and psychological satisfaction, thereby promoting good health and functionality (3).Furthermore, a meta-analysis has demonstrated a negative correlation between intuitive eating and eating disorders, alongside positive associations with body image, self-esteem, and wellbeing (4).Therefore, paying attention to the status of intuitive eating can be an optional approach to promote healthy eating and mental health.
When considering intuitive eating, it becomes easier to establish a connection with body weight.Recently, several studies have detected the relationship between body weight and intuitive eating.For example, eating more intuitively is correlated with better weight status in young adults and the elderly (5)(6)(7).These findings underscore the substantial influence of body weight on dietary behavior.Simultaneously, body weight perception plays a crucial role regarding body weight.Investigations have demonstrated that body weight perception significantly impacts individuals' lifestyles, including their exercise and dietary behaviors (8,9).Body mass index (BMI) serves as a widely used objective measure of body weight, reflecting actual weight status.Nevertheless, body weight perception contrasts as a subjective evaluation.It encompasses an individual's awareness of their body weight, shape, and weight status (10).Generally, individuals perceive themselves as underweight, having the right weight, or being overweight (9).While some accurately assess their weight, others experience discrepancies between body weight perception and actual body weight, with overestimation and underestimation frequently occurring (11).Those who self-perceive as underweight or overweight, or exhibit misperceptions about their body weight, seem more prone to adopting unhealthy eating behaviors and face an elevated risk of developing eating disorders (8,12,13).An American study showed that body weight perception surpasses BMI as a predictor for managing body weight and dietary habits (14).However, existing research fails to distinctly elucidate the relationship between body weight perception and intuitive eating.Moreover, given population disparities, undergraduate students represent a pivotal cohort in shaping appropriate body weight perception and fostering healthy eating habits (15).Hence, there exists a necessity to investigate the association between body weight perception and intuitive eating among undergraduate students.Culture should not be neglected in forming body weight perception.Numerous studies have highlighted differences in body weight perception among individuals from diverse countries (16,17).Therefore, the present study undertook a survey in China to explore the body weight perception of Chinese undergraduate students.
The role of body image emerges as a critical factor in the investigation of intuitive eating.Body image encompasses individuals' assessments and attitudes towards their body or appearance, including elements of perception, attitude, cognition, and behavior (18).A previous correlation analysis with Spanish adolescents has revealed that body weight perception was correlated with body image, and misperception of being overweight was associated with a poor body image (19).A systemic review comprising 97 studies has underscored a positive link between body appreciation and intuitive eating, providing substantial evidence for the relationship between body image and intuitive eating (4).In addition to the direct relationship, the mediating impact of weight and shape concerns on the association between BMI and intuitive eating exhibited statistical significance, particularly among older women.It suggests that body image serves as a mediator in the relationship between BMI and intuitive eating (7).Given the distinction between BMI and body weight perception, there is still a significant gap in research concerning the validity of the mediating model whether body weight perception influences intuitive eating via body image.Consequently, to delve into the mechanisms of intuitive eating, we investigated the connections among body weight perception, body image, and intuitive eating, and explored whether the relationship of body weight perception with intuitive eating was mediated by body image.We formulated two hypotheses designed to elucidate the pivotal processes underlying the advancement of intuitive eating among undergraduate students: (a) body weight perception exhibits a negative correlation with intuitive eating; and (b) body image mediates the associations between body weight perception and intuitive eating among undergraduate students.
Participants and procedures
All participants were undergraduate students from the Nanjing University of Chinese Medicine.This survey was carried out utilizing a convenience sampling method from September 4, 2021, to May 23, 2022.An anonymous electronic questionnaire was edited by Questionnaire Star.The study was introduced to participants, after they signed the informed consent, they filled in the questionnaire.A total of 1,159 students voluntarily participated in the survey, with 1,050 qualified questionnaires included for statistical analysis after quality checks.Notably, 109 incomplete questionnaires were excluded from the final analysis.The authors affirm that all procedures contributing to this study complied with the Helsinki Declaration and ethical standards of relevant national and institutional committees on human experimentation.The study was approved by the Ethics Committee of Nanjing Jiangning Hospital.
Measurements
The questionnaire comprised two parts.The initial segment gathered self-reported demographic data from participants, including gender, age, BMI (calculated as weight in kilograms divided by the square of height in centimeters), major, living area, body weight perception, and family income.The second part incorporated two structured scales: the Body Esteem Scale for Adolescents and Adults (BESAA) alongside the Intuitive Eating Scale-2 (IES-2).
Developed by Mendelson et al. in 2001, the BESAA aims to assess body image (20) and consists of 23 items across three subscales (private general feelings about appearance, weight satisfaction, and evaluations attributed to others about one's body and appearance), utilizing a 5-point Likert scale.Items were scored from 0 (never agree) to 4 (always agree), summing up to a total score ranging from 0 to 92.Notably, nine negative items were reverse-scored.The total score was computed by adding up the scores of each item.A higher total score indicates a more positive body image.The Cronbach's coefficient alpha for the total scale was calculated at 0.94 (21).
Tylka and Kroon developed the original IES in 2006 and revised it in 2013 to form the IES-2 ( 22).This 23-item scale aims to gauge an individual's intuitive eating tendencies.Comprising four subscales rated on a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree), the total score was computed by summing the items and dividing by the number of items.The Chinese version of the IES-2 yielded a Cronbach's coefficient alpha of 0.933, with a test-retest correlation of 0.831 among undergraduate students (23).
Data analysis
The statistical analyzes were conducted using the Statistical Package for Social Sciences (SPSS) version 22 (SPSS, Inc., Chicago, IL, United States).Notably, no missing data was reported within the electronic questionnaire.Numerical variables were expressed as means ± standard deviations, while categorical variables were presented in quantities (proportions).The χ 2 -and t-tests were used to compare BESAA and IES-2 scores in different populations.Correlations were explored among the three key variables.For mediation analysis, Model 4 in version 4.1 of the PROCESS Macro plug-in authored by Hayes (24) was utilized.In this analysis, body weight perception acted as the independent variable, classified, with underweight set as the reference (25).Body image served as the mediator, intuitive eating as the dependent variable, and age, gender, and BMI functioned as covariates.To ascertain the significance of the mediation effect, a 95% bootstrap confidence interval (CI) based on 5,000 bootstrapped samples was applied.The mediation effect was significant if zero was not included in the 95% CI.A p-value <0.05 was considered statistically significant.
Demographic characteristics
A total of 1,050 undergraduate students completed the crosssectional study.The average age of the participants was 20.08 years (SD = 1.64).Notably, the majority of the participants were female (79.7%).Furthermore, 497 students (47.3%) were pursuing a medical specialty, 408 (38.9%) resided in cities, and 490 (46.8%) reported monthly family incomes exceeding 8,000 yuan.The mean BMI stood at 20.87 kg/m 2 (SD = 3.44).697 (66.4%) participants reported normal BMI, while only 479 (45.6%) participants considered their body weight were right.Overweight and obese students accounted for 12.4%, but 41.1% of students thought they were overweight (Table 1).
Examination through χ 2 -and t-tests revealed that individuals with a BMI less than 18.5 kg/m 2 or those with an accurate body weight perception presented higher scores in both BESAA and IES-2.Furthermore, older individuals (aged 20 years or more) and males exhibited a better degree of intuitive eating (Table 1).
Preliminary correlation analyzes
The mean scores for total body image and intuitive eating were 50.05 ± 13.12 and 3.37 ± 0.44, respectively.Body image (r = −0.429,p < 0.001) and intuitive eating (r = −0.313,p < 0.001) were significantly negatively related to body weight perception.Additionally, a significant positive correlation was observed between body image and intuitive eating (r = 0.318, p < 0.001).
Mediating effect
The mediating effect of body image between body weight perception and intuitive eating was analyzed after controlling the age, gender, and BMI among undergraduate students (Table 2).Comparatively, the underweight group revealed that body weight perception exhibited a significantly positive association with both intuitive eating (β = 0.398, p < 0.001) and body image (β = 0.455, p < 0.001) within the right weight group.Both body weight perception and body image were included in the mediation analysis.Body weight perception (β = 0.350, p < 0.001) and body image (β = 0.107, p = 0.002) had positive predictive effects on intuitive eating in the right weight group.Compared to the underweight group, body weight perception was significantly negatively related to intuitive eating (β = −0.395,p < 0.001) and body image (β = −0.567,p < 0.001) within the overweight group.Subsequently, both body weight perception (β = −0.335,p = 0.001) and body image (β = 0.107, p = 0.002) were included in the mediation model, ultimately exhibiting predictive effects on intuitive eating within the overweight group.
Furthermore, the results of the bootstrapping method confirmed the significance of the indirect effects of body weight perception on intuitive eating through body image (95% bootstrap CI = 0.007, 0.040 and − 0.048, −0.009) within both the right weight and overweight groups.Since none of the 95% bootstrap CIs included zero between their lower and upper limits, the mediating effects were statistically significant for both groups.The direct effects of body weight perception on intuitive eating were measured at 0.154 and 0.148, with 95% CIs ranging from 0.080 to 0.225 and − 0.225 to −0.074 within the right and overweight groups, respectively.It was observed that the indirect effects of body image accounted for 12.19% and 15.33% of the total effects of intuitive eating within the two groups (Table 3).
Discussion
This cross-sectional study aimed to examine the relationship between body weight perception, body image, and intuitive eating, as well as explore the indirect effect of body image on these parameters among undergraduate students in China.The current findings demonstrated that in addition to the direct effect on intuitive eating, body weight perception also affected intuitive eating via body image.These results provided practical significance for deepening the relationship between body weight conception, body image, and intuitive eating.Furthermore, they can serve as a guide for nurturing positive body image and promoting better intuitive eating among undergraduate students with varying body weight perceptions.
The mean total intuitive eating score was 3.37, indicating a moderate level, akin to levels reported in studies involving college students from both American and Chinese settings (26,27).Although there has been limited research focusing on this aspect among undergraduate students, it can be seen that they have a neutral attitude toward intuitive eating.This study did identify a statistically significant difference in intuitive eating scores across two age groups, but the magnitude of this difference was minimal, possibly due to a small effect size.Combined with previous studies on different ages (e.g., adults and elderly), intuitive eating did not greatly change with age, suggesting that its overall trend is relatively stable (7,28,29).The body image of the students enrolled here was at a moderate level.Arslan et al. presented a lower body image score among 4th-grade children (30).This variance might be attributed to the age gap between the participants, as our current cohort comprised undergraduate students, thus likely exhibiting more mature and objective judgments regarding body image.It's worth noting that a Romanian study reported higher body image scores compared to our investigation (21).Additionally, our study indicated a positive correlation between body weight perception and body image.This trend could possibly be explained by the fact that the Romanian study involved 427 medical students, with 65.6% perceiving themselves as having an accurate weight, whereas in our study, only 45.6% described their weight as suitable.Those who regarded themselves as having an appropriate weight tended to demonstrate higher scores on body image assessment.Two-thirds of participants had normal BMI, whereas only 45.6% considered their body weight right.Overweight and obese participants accounted for 12.4%, but 41.1% of participants described themselves as overweight.Notably, there was a difference between the actual body weight and the body weight perception, and students were prone to perceive themselves as overweight, consistent with a previous study in which one-third of Chinese adolescents with normal BMI reported themselves as overweight (31).This result might be explained by the popular aesthetic trend that being thinner is better (31).Undergraduate students are in a sensitive period of shaping their aesthetic and judgment abilities and are easily influenced by social media.Even if they have a normal weight, they still think they are not thin enough and pursue a thinner body shape to cater to the popular aesthetic (9).
An important demonstration of the present study was that body weight perception and intuitive eating were correlated (r = −0.313,p < 0.001), and the heavier the body weight perceived, the lower level of intuitive eating.To our knowledge, this investigation marks the first attempt to explore the link between body weight perception and intuitive eating.Among students who viewed themselves as overweight or obese, there was a heightened inclination towards pursuing weight loss compared to those who regarded themselves as just the right weight, including instances of misperception regarding body weight (32).Those perceiving themselves as overweight tended to take various actions to achieve the desired weight loss.Food intake restriction is the most common way for students to manage their weight, contrary to the concept of intuitive eating.However, students not only limit the intake of highly processed and high-calorie foods but also reduce the intake of essential nutrients, leading to malnutrition and even eating disorders (33,34).Therefore, cultivating the habit of students to establish a correct body weight perception has significance in reducing unhealthy eating and improving students' level of intuitive eating.
Herein, we corroborated a significant indirect relationship via body image between body weight perception and intuitive eating in both the right and overweight groups, indicating that their association was partly mediated by body image.We also revealed that body weight perception was associated with body image, consistent with previous studies (19,35,36).Students who perceived themselves as having an appropriate weight exhibited greater contentment with their body image.Regardless of whether students accurately estimated their weight, as long as students subjectively considered that they were heavier, the more negative they were with their body image.A Brazilian study has found that a majority of women experienced dissatisfaction with their body image, a factor closely associated with their body weight perception (36).Despite the differences in population and aesthetic culture, the findings were consistent, providing more reliable evidence for the relationship between body weight perception and body image.
According to the acceptance model of intuitive eating, it is not necessarily the body weight that predicts body appreciation but the views of body acceptance by others and society (37).People are both independent individuals and members of society, and the views of others influence their perceptions and judgments.This underscores the critical importance of developing the right body weight perception.Those with higher body image scores tended to exhibit a more favorable inclination towards intuitive eating practices (7, 28). Lee et al. proposed that intuitive eating contributes to diminishing negative body image among women (38).Body image is a psychological construction.People who are satisfied with their body image will use an adaptive approach to reply to the physical cues and respect physical signals, such as hunger and satiety, rather than ignoring or restraining these internal feelings (39).Consequently, body image is an intrinsic factor that influences intuitive eating.Given the psychological and physiological benefits of intuitive eating for individuals, more research focusing on improving intuitive eating are required.Most interventions have followed 10 intuitive eating principles, including recognizing hunger and satiety cues, and awareness of emotional and stress eating (40).However, previous interventions have not included teaching participants how to treat their body weight and establish positive body image.Considering the direct and indirect impacts of body weight perception and body image on intuitive eating, integrating both variables into interventions may represent a practical and effective strategy.
However, our current study also has some limitations.Firstly, due to the utilization of a cross-sectional design, the establishment of causal relationships among body weight perception, body image, and intuitive eating was not feasible.In future, longitudinal studies will be imperative to scrutinize these causal connections in the future.Secondly, the data were obtained from self-reported measures, which may lead to response bias.Finally, although the indirect effect of body image on the relationship between body weight perception and intuitive eating was verified, its proportion was not large, suggesting that additional mediators should be explored in future studies.
Despite the above limitations, the present study also has several strengths.To our knowledge, this study was the first to examine the relationships between body weight perception, body image, and intuitive eating, providing novel and significant results to expand the existing recognition of intuitive eating among undergraduate students in China.Furthermore, our results may significantly contribute to an enhanced understanding of the underlying mechanisms shaping body weight perception and intuitive eating.
Conclusion
In summary, we demonstrated that body weight perception and body image influence intuitive eating and that body image mediate body weight perception and intuitive eating.The findings indicated that individuals perceiving themselves as overweight and expressing dissatisfaction with their body image were inclined to exhibit reduced participation in intuitive eating behaviors.These findings partially explain how body weight perception influenced intuitive eating through its association with body image.Significantly, these results highlighted the role of body image and weight perception, presenting a novel insight for prospective interventions.Interventions aimed at promoting intuitive eating among undergraduate students ought to concentrate on core aspects while integrating pertinent factors to facilitate the development of accurate body weight perceptions and foster positive body images, ultimately enhancing intuitive eating practices.
TABLE 1
Demographic characteristics and univariate analysis of body image, and intuitive eating of participants with different characteristics (N = 1,050).
TABLE 2
Mediating effect of body image between body weight perception and intuitive eating (N = 1,050).
|
2024-01-12T16:09:10.440Z
|
2024-01-10T00:00:00.000
|
{
"year": 2024,
"sha1": "4d2183bc1adce9839e1831dd5f190a4993dbc4e0",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnut.2023.1288257/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8cbada2d4dcdf63090d543b2e5190683470f9e5b",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
}
|
49583018
|
pes2o/s2orc
|
v3-fos-license
|
An integrative approach to develop computational pipeline for drug-target interaction network analysis
Understanding the general principles governing the functioning of biological networks is a major challenge of the current era. Functionality of biological networks can be observed from drug and target interaction perspective. All possible modes of operations of biological networks are confined by the interaction analysis. Several of the existing approaches in this direction, however, are data-driven and thus lack potential to be generalized and extrapolated to different species. In this paper, we demonstrate a systems pharmacology pipeline and discuss how the network theory, along with gene ontology (GO) analysis, co-expression analysis, module re-construction, pathway mapping and structure level analysis can be used to decipher important properties of biological networks with the aim to propose lead molecule for the therapeutic interventions of various diseases.
I/R injury 26,27 . Although the anticancerous activity of picrorhiza has been exploited, its exact molecular mechanisms of actions and related pathways and targets remains poorly understood 28 .
To achieve the desired therapeutic effect while reducing the risk of unpropitious conditions, with a known drug, it is imperative to identify the neighborhood of these targets within which they have their action 29 . Consequently, using information from known drug target and creating networks of associated target proteins; we can understand how drugs can have beneficial as well as pernicious consequences 10 . Based upon these observations, relevant drugs for specific disease could be filtered out to provide only the beneficial population of drugs to the patients.
To decipher the regulatory interactions and underlying mechanistic behavior of picrorhiza, a target-pathway network re-construction was performed to discover the relationship between the drug and its relevant targets and pathways. Construction and analysis of such intricate network not only requires the basic concepts of network biology but also an understanding of how the interaction between drug and its relevant target determines regulation of various phenotypic characters in a diseased state. Besides the direct consequences of the interaction between drug and its target, drug action also depends on the consequences within the physiological system. Therefore a holistic approach is required to deal with drug-target interaction network for the selection of putative drug candidates.
As stated earlier, integration of concepts from various fields can help to reach the best solution for a given problem. Hence, we integrated advanced application of computational and experimental information through literature based support in our work to build networks for analyzing drug action and to develop poly-pharmacology for complex diseases and predict therapeutic efficacy and adverse event risk for individuals prior to commencement of therapy. In this study, we demonstrate a systems pharmacology pipeline and discuss how the network theory in combination with gene ontology (GO) analysis, co-expression analysis, module re-construction, pathway mapping and structural analysis can be used to decipher important properties of biological networks.
Results and Discussion
To acquire holistic view through empirical data, literature mining was performed to identify known targets for Picroside I, II, III, and IV. Unlike P-I and P-II, no target was identified for P-III and P-IV which led us to drop them for further analysis. The reason for such outcome can be imparted to its inability to cross the blood-brain-barrier 30 . Further, to uncover unknown drug targets those are yet to be verified experimentally, mapping of P-I and P-II structures was done against protein/receptor library through PharmMapper with threshold limited to 30 31 . Combined results from literature mining and PharmMapper, showed the presence of targets common to both and hence were categorized as primary targets while the ones found only in PharmMapper were categorized as secondary targets for downstream analysis.
To further address the question that whether targets taken as secondary are appropriate or not, we retrieved top ten co-expressed genes by considering primary targets as query dataset based on confidence score. Selected nodes were then considered for degree distribution with betweenness centrality, which would state the importance of genes with respect to their association with involved pathways. In this analysis, prioritization of node was done based on k and B c correlation. The node size reflects the association score, i.e., more the association score bigger will be the node and vice-versa.
Genes selected through co-expression analysis were combined together for P-I and II separately and gene ontology analysis was performed using GORILLA to identify the role of target association genes in Biological processes, Molecular functions and Cellular components 32 . Based on p-value, association type weak or strong was indicated, which further referred based on well-compiled GO databases. To further understand the role of primary targets we performed the docking analysis using PatchDock server, which checks various conformations and suggests the best one (Table 1). Additionally, we performed the PatchDock analysis with other FDA-approved drugs available in the market to find out the best drug based on molecular interactions in selected targets against P-I, P-II and other drugs 33 . Prioritized targets were cross-checked with literature and found to be key player in carcinogenesis and therefore their role in various malignancies was found to be crucial.
Primary targets were considered for module definition and tried to converge on pathways on the basis of co-expression based association score. Genes were highlighted using different colors and size, where red color represents the association between degree and betweenness centrality of nodes and node size shows the co-expression association between the interacting nodes ( Fig. 1). Mapping of these modules on pathways was performed through pathway reconstruction (Fig. 2).
Pathway Analysis. For drug-target interaction network we have considered literature mining techniques, scoring functions on the basis of co-expression modules derived from cancer and KEGG database as reference for giving a support factor for holistic network visualization by using Reactome Pathway Database and Pathway Interaction Database. The pipeline presented is based upon working modules where we have compiled the information through the stepwise procedures and outcome of one step can be used as an input for the next step. At few points cross validation is also being applied to present the refined information to the further steps. Pipeline is being verified through all the available data sets for the analysis and finally the robust one is being proposed. We have broadly explored the possible routes and diversion points on the basis of node involvement in networks and data is being generated from standard pathways available. Pathway analysis revealed that genes are distributed in pathways associated with various diseases such as, Cancer associated signaling, Hepatitis-B, Human T-cell leukemia virus type 1 (HTLV-I) Infection, Tuberculosis, Influenza A, Thyroid Hormonal Signaling Pathway and many more. But careful evaluation and mapping on the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways using combined score resulted association of maximum number of genes with cancer associated signaling, viz. receptor based Death-Associated Protein Kinase 1 (DAPK1) activation, Transforming Growth Factor Death-Associated Protein Kinase mediated signaling. Calcium/calmodulin-dependent serine/threonine kinases (CDK) play a major role in activation of various signaling via regulating apoptotic pathways 34 . Such kind of activation helps in stimulation of induction of autophagy through JNK regulation as shown in Fig. 2. BRAF1 targeted by both picroside derivatives for inhibiting pathway module as shown in Table 1.
Transforming growth factor beta mediated signaling. TGFβ signaling acts as crucial regulator in various apoptotic and proliferative pathways. The signaling includes binding to Transforming Growth Factor Beta Receptor TGFBβ-II which further initiates formation of SMAD complex and its phosphorylation ultimately resulting in transcriptional activation of various genes in nucleus 35 . FKB1A, CASP1, CASP3 and TGFB2 targeted by P-I and P-II for blocking TGFβ mediated signaling (Table 1).
Interleukin Mediated Signaling. IL-2 and IL-4 one of the types of promotes differentiation and proliferation of T helper 2 (TH2) cells and the synthesis of immunoglobulin E (IgE) 36 . Generally, it activates mitogen-activated protein kinase (MAPK), phosphoinositide 3-kinase (PI3K), signal transducers and activators of transcription (STAT), and mammalian target of rapamycin (mTOR) signaling modules, leading to both mitogenic and anti-apoptotic signals 37 . IL2 considered as target for inhibition for inhibiting interleukin mediated singaling (Table 1).
Cytokine Mediated Signaling. Cytokines plays critical role in the regulation of a various normal functions ranging from cellular proliferation, differentiation and survival to specialized cellular functions enabling host resistance against pathogens. Also, release of cytokines in response to inflammation, immunity or infection can supress cancer development and progression 38 . The JAK-STAT pathway trigerred by cytokines to achieve their ultimate goal can be thought of promising way for cancer therapy in humans 38 . MAPKs acts as central points for target inhibition due to hyperphosphorylation events. Hence, considered as inhibition of proliferative pathways. On the basis of reconstructed pathway, key nodes were selected to perform structural study using PatchDock server. Docking of P-I and P-II was performed and found that picrosides can be used as active inhibitory molecule for cancer treatment as it targets at multiple level which is evident from Fig. 3 and data presented in Table 1. But, there is need to prioritize contender on the basis of personalized gene expression of candidate targets in patients. Picroside derivatives combinely plays a crucial role in inhibition of BRAF, FKB1A, CASP1, CASP3, TGFB2, IL2 and MAPKs through various signaling routes and therefore, can be considered as potent inhibitory molecule for further experimental analysis.
Conclusively, our study presents a novel path to trace down the potential targets and propose them either for treating multiple diseases or for combinatorial therapy by identifying the exact course of disease transmission. It is anticipated that our network based drug-target interaction analysis protocol will assist computational biologists to look for similar patterns in other disease targets and biomedical scientists to design new therapeutic interventions based upon these findings.
Conclusions
Understanding of regulatory mechanism and subsequent effect on phenotypic level can not be dechiphered through individual genes only, but needs to include coordination of set of genes or gene groups. Hence, there is a need to study combinatorial effects of drugs by targeting multiple triggring points at same instance. With the aid of presented pipeline, biologists can infer key points involved in dysregulation of a particular mechanism given a medicinally important molecule using network-based perspective. For instance Picroside derivatives thought as medicinally important yet have not been broadly investigated for cancer treatment. Our study reveals key markers targeted by picroside derivatives through integration of data mining and network based approaches. The same revealation was found through computational molecular interactions and selected targets can act as potential markers for experimental validation.
Methods
Complex chemical composition of the metabolic compounds found in medicinal herbs makes the understanding of therapeutic mechanism of action arduous. However, to clarify its mechanism of action at molecular level with an aim to know its usefulness in treating disorders, one has to have not only a deep insight into the molecular mechanism but also should opt a systematic approach to aid precise identification of therapeutic target. To achieve the same in this pipeline, literature mining of metabolites along with target network analysis was performed under systems pharmacology framework. Schematic workflow is shown in Fig. 4.
Literature Mining.
With the advancement in scientific era, the information generated in the form of research articles being published in number of journals, is increasing at rapid rate and hence becomes a cumbersome task for a researcher to keep track of relevant literatures from MEDLINE manually. To make the task of information retrieval (IR) much easier, PubMed search engine was used to find all the hits with the query keywords like Picrorhiza (224 hits), Picroside (103 hits) Picroside-I (45 hits) and Picroside-II (78 hits). Screening for Picroside-III and Picroside-IV was also performed in similar fashion. Compiled scoping document of literature mining is available in Supplementary File 1.
Target Prediction.
With the purpose of cross checking the P-I, P-II, P-III, and P-IV interaction with Homo sapiens known targets, we downloaded the 2D structure of picroside derivatives from the PubChem library. Further, the downloaded structures were given as an input for PharmMapper 31 . PharmMapper is a web server to predict therapeutic candidate drug targets for small molecule provided as query. To dug out the possible picroside interaction, score for candidate targets was performed by setting the parametric values of 2241 for human targets and a maximum number of 300 reserved matched targets were considered and all other parameters with default values.
Common Target Identification. A comparative analysis was performed between the targets retrieved from
literature and PharmMapper in order to predict the verified target for further consideration of the same as a potential biomarker for various diseases. Targets common to both analysis were considered to have direct interaction for inhibition and therefore are called Primary Targets (PT) in our study. However, the targets that were present in literature and found to be affected but not present in the PharmMapper analysis were considered as Secondary Targets (ST) since no direct interaction was found at in-silico level.
where, TS is Target Screening, LM represents Literature Mining, PMR denotes PharmMapper Results, PT stands for Primary Targets and ST for Secondary Targets.
Gene Ontology and Co-Expression Network.
To capture comprehensive view of how targets form signaling cascade to inhibit or enhance the disease response and their role in various domains like biological process, molecular function and cellular component; we performed gene ontology analysis. The analysis also gave an idea about the inter-connecting component in which biomarker association with neighboring genes can be identified. Besides, BLAST2GO software was used to identify various interactions between predicted targets on the basis of node score.
where • desc(g) represents all the descendant terms for a given GO term g • dist(g, ga) represents the number of edges between the GO term g and the GO term ga • g represents the element of GO, where GO is the whole set of all GO terms • gp(g) represents the number of gene outcomes given to a given GO term g Score is calculated in terms of Biological Processes Score (BPS), Molecular Functions Score (MFS), and Cellular Components Score (CCS). Overall Gene Ontology Score (GOS) is represented as: To elaborate the network and to gain comprehensive knowledge about the targets and their associated partners we downloaded the interacting partners from Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database 39 , which contains information from several sources, like in-silico prediction methods, experimental data and scientific literatures. For network construction analysis both parametric (Pearson Correlation Coefficient (PCC)) and non-parametric test (Spearman Correlation Coefficient (SCC)). But, SCC has not shown significant correlation but PCC showed significant correlation and overlapping results with available literature. Following the collection from STRING database, data was weighted, integrated and a confidence score was calculated for all protein interactions using Pearson Correlation Coefficient (PCC) which measures the linear correlation between two variables. Table 1A and B) Picroside-II and targets listed in Table 1B. The structural information given as output from PatchDock helps in deciphering the binding site of our ligand with the targets. With the help of parameters listed out in Table 1 we can filter out the best targets and infer their structural interactions. where, CES stands for Co-Expression Score, CLC for Clustering Coefficient, BWC for Betweenness Centrality, and DON for Degree of Nodes. Information of selected parameters is given below: A co-expression network is an undirected graph, with every node representing a gene and every edge representing the connection between these nodes. In this study, we used an in-house Perl script to calculate gene co-expression; we calculated various scores, assigned weights to each score, and finally generated a combined score. Methodology was adopted from our previous study on miRNA regulatory network analysis 3 .
Gene Ontology Score deals with three components, namely Biological Processes (BPs), Molecular Functions (MFs), and Cellular Components (CCs). BLAST2GO was used to link selected genes to map with the GO database in terms of BPs, MFs, and CCs. The genes that belonged to the same category were clustered. A node score function was defined for all targeted genes. Genes that had the same score were clustered in the same cluster category. Interconnection from one cluster to another cluster was performed on the basis of their respective association based on the node score.
The degree of a node in an undirected graph is the number of connexions or edges a node has with other nodes, and it is defined as deg(i) = k(i) = |N(i)| where N(i) is the number of the neighbours of node i. The degree distribution p(k) reveals the fraction of vertices with degree k. DON gives the idea of association of nodes with node of interest.
Clustering Coefficient is the measurement that shows the tendency of a graph to be divided into clusters. A cluster is a subset of vertices that contains lots of edges connecting these vertices to each other. Assuming that i is a vertex with degree deg(i) = k in an undirected graph G and that there are e edges between the k neighbors of i in G, then the Clustering Coefficient of i in G is given by: i Thus, C i measures the ratio of the number of edges between the neighbors of i to the total possible number of such edges, which is k(k − 1)/2. It takes values as 0 ≤ C i ≤ 1. Betweenness Centrality shows that nodes which are intermediate between neighbors rank higher. Without these nodes, there would be no way for two neighbors to communicate with each other. Thus, betweenness centrality shows important nodes that lie on a high proportion of paths between other nodes in the network. For distinct nodes i, j, w ∈ V(G), let σ ij be the total number of shortest paths between i and j and σ ij (w) be the number of shortest paths from i to j that pass through w. Moreover, for w ∈ V(G), let V (i) denote the set of all ordered pairs, (i, j) in V(G) × V(G) such that i, j, w are all distinct. Then, the Betweenness Centrality is calculated as:
Pathway Mapping of Co-Expressed Modules.
After identifying co-expressed gene modules, a mapping of associated partners with designated pathway was performed by manual literature survey followed by constructing static map using KEGG, REACTOME and Pathway Interaction Database (PID) as a reference pathway maps, to aid proper understanding of molecular mechanism of action and target implication 40-42 . Patch Dock Analysis. Also, to understand the inhibitory role of selected targets with small molecules (P-I and P-II), molecular docking was performed based on shape complimentary principles using PatchDock web server 33 ; as we are interested to observe variations in target binding energy of picroside derivates with already known drug targets.
|
2018-07-07T13:45:35.565Z
|
2018-07-06T00:00:00.000
|
{
"year": 2018,
"sha1": "ca152028d684420ac3d7dd0ecfd66b92daea2f87",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-28577-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ca152028d684420ac3d7dd0ecfd66b92daea2f87",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|
55340712
|
pes2o/s2orc
|
v3-fos-license
|
Engineering Research for Economic Advancement in Namibia
The aim of this special issue of the Open Journal of Applied Sciences on Engineering Research for Economic Advancement is to articulate the relevance of applied engineering research in driving the economic development and sustainability, and addressing national and regional needs of the “small economies”. The research focus areas covered in this issue include renewable energy, water resources management, manufacturing systems, and sustainable mining practices. The focus areas are also relevant in addressing the global change challenges relating to sustainable production and consumption of natural resources for provision of basic services at national and local level.
Introduction
Developing countries typically show poorly developed industrial sectors and, hence, are reliant on imports of basic manufactured goods; while developed countries and newly industrialized nations have managed to develop their industries within the broader spectrum of their respective development agendas.Transforming Namibia into an industrialized country of equal opportunities, which is globally competitive, is a basic goal of the country's Vision 2030.The National Planning Commission [1] identified the country's enabling sectors (i.e.health, education, social services, water and electricity and transportation), manufacturing, agriculture and tourism as sectors expected to sustain the growth of the Namibian economy.Namibia is thus compelled to develop its industries as part of the development agenda [2].
Namibia is a sparsely populated country of approximately 2.3 million people in sub-Saharan Africa [3].It is categorized as an upper middle-income country but has one of the highest levels of income inequality in the N. Kgabi world [4] with a Gini coefficient estimated at 0.58 by the 2009/10 household survey, which is one of the highest figures of any country in the world [3].The country has estimated annual GDP per capita of USD 5293.It was worth noting however, that from 1980-1990 Namibia had a GDP per capita which was higher than that of China and Thailand but comparable to that of Mauritius.In 2011, GDP per capita in both China and Thailand escalated much higher than that of Namibia, while Mauritius' GDP per capita rose to twice higher than that of Namibia.The country's per capita growth has been very slow compared to China, Mauritius, Thailand, Malaysia and Botswana [5].
The economy of Namibia is closely linked to that of South Africa [4] with the Namibian Dollar being pegged at a ratio of 1:1 to the South African Rand [6].South Africa plays an important role for logistics in Namibia because it has the most developed infrastructure and logistics skills in Africa as well as functioning as a gateway for southern Africa [7].Approximately 80% of Namibia's total imports are from or through South Africa [4], which is claimed to exercise a great deal of pressure on Namibia through monopoly control, restrictive purchasing, over-pricing and dumping [8].
The decline in Namibia's economic competitiveness cannot be overestimated.[9].The country intends to reverse this trend by pursuing long and short term goals as articulated in Vision 2030 and recently through NDP-4 and other sectoral interventions.Some of the long term interventions include increased investment in education and training, health sector, infrastructure, and Broad based incentive strategy [5].
The role of engineering research focused at economic advancement is thus a necessity for the country, and should without fail affect the technological imports and exports.The World Bank [10] confirmed that Namibia's technology-intensive exports were very small as a proportion of total merchandise exports.Low-, medium-, and high-technology exports together accounted for just 9 percent of total merchandise exports in 2008.Namibia's technology imports are dominated by medium-technology (mainly transport equipment, machinery, and chemical products) and high-technology goods (mainly machinery and pharmaceutical products), which account for 29 percent and 15 percent, respectively, of Namibia's total merchandise [10].The need for technological change, which raises the relative marginal productivity of capital through education and training of the labor force, investments in research and development and the creation of new managerial structures and work organization, is thus evident.Analytical work on long-term economic growth shows that in the 20th century the factor of production growing most rapidly has been human capital, but there are no signs that this has reduced the rate of return to investment in education and training [11].
The state of research and development in Namibia has been described by Kgabi [12] as more subject specific rather than multidisciplinary, while the current emphasis worldwide is on interdisciplinary, multidisciplinary, and trans-disciplinary approaches with increasing focus on problems, rather than techniques; and more emphasis on collaborative work and communication.The National Planning Commission [1] however, purports that the ability of the country to perform applied research in critical areas such as agriculture, fisheries, geology, information technology and manufacturing is severely hampered by the lack of qualified graduates in engineering, biology, chemistry, mathematics and information technology [1].This in a way keeps the knowledge economy and hence the pace of technological advancement of the country at its lowest.Powell and Snellman [13] define knowledge economy as production and services based on knowledge-intensive activities that contribute to an accelerated pace of technical and scientific advance, as well as rapid obsolescence.The key component of a knowledge economy is thus a greater reliance on intellectual capabilities than on physical inputs or natural resources.The concept of knowledge economy, which can only be developed by strengthening applied research, is crucial to a country like Namibia, which is aspiring expansion and emergence of new industries.
Contribution by Namibian institutions of higher learning in the form of development of engineering and applied sciences research is increasingly being recognized.Despite the fact that the higher education sector faces the challenges of recruiting and retaining Namibians who hold post graduate level qualifications; which is particularly true for the sciences, ICTs and engineering where most of the research and innovation output is ex-pected [1]; this special issue covers some of the research activities carried out by researchers in the School of Engineering, Polytechnic of Namibia.According to John [14], the research focus areas of the School of Engineering (SoE) were carefully chosen to respond to Namibian national imperatives as presented in the National Development Plans (NDPs), and the National Vision 2030.Global trends and in-house capacity, coupled with support from numerous national and international partners inform the SoE broad research fields, which are: Renewable energy which focuses on the development, analysis, design and implementation of renewable energy systems and technologies; Water resources management aimed at developing efficient ways of generating, distributing and re-using water resources; Manufacturing systems aimed at supporting the manufacturing sector of the nation through research activities in the fields of mechatronics, control systems and appropriate technology developments; and Sustainable mining practices with risk and safety management and environmental issues as a focus area for research activities in the mining sector [14].
It might be advisable though, to "pick" and implement President Barack Obama's ideas captured by the National Science Board of the United States of America, i.e. "We need to build a future in which our factories and workers are busy manufacturing the high-tech products that will define the century… Doing that starts with continuing investment in the basic science and engineering research and technology development from which new products, new businesses, and even new industries are formed"-President Barack Obama, February 2012 [15].
In 2012, the Rand Merchant Bank's (RMB) ladder for the best African countries to invest in, ranked Namibia the 20th most attractive investment destination out of 53 countries, dropping one spot from RMB's 2011 overall index.Between 2007/ 2008 and 2010/2011, the country exhibited an upward trend in competitiveness rating, from 89 in 2007/2008 (out of 131), to 80 in 2008/2009 (out of 134), remaining constant in 2009/2010 and 2010/2011 (out of 133 and 139, respectively).The rating has since gone down by nine places to 83 (out of 142) in 2011/2012 and yet again by the same number to 92 in 2012/2013 (out of 144).The 2012/2013 ranking is the lowest the country has been ranked over the years
|
2018-12-06T20:00:26.703Z
|
2014-12-24T00:00:00.000
|
{
"year": 2014,
"sha1": "ab504149ef0f5f2f776e4496257b55cefa9da59b",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=52572",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "4721441291b596b335a5fc2bcab4ea732b42a15c",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
119655452
|
pes2o/s2orc
|
v3-fos-license
|
Finite-time thermodynamics of port-Hamiltonian systems
In this paper, we identify a class of time-varying port-Hamiltonian systems that is suitable for studying problems at the intersection of statistical mechanics and control of physical systems. Those port-Hamiltonian systems are able to modify their internal structure as well as their interconnection with the environment over time. The framework allows us to prove the First and Second laws of thermodynamics, but also lets us apply results from optimal and stochastic control theory to physical systems. In particular, we show how to use linear control theory to optimally extract work from a single heat source over a finite time interval in the manner of Maxwell's demon. Furthermore, the optimal controller is a time-varying port-Hamiltonian system, which can be physically implemented as a variable linear capacitor and transformer. We also use the theory to design a heat engine operating between two heat sources in finite-time Carnot-like cycles of maximum power, and we compare those two heat engines.
Introduction
Thermodynamics was developed axiomatically in the XIXth century by showing that the empirical observations regarding heat and mechanical work are exchanged between systems could be explained by only four fundamental laws. Making thermodynamics compatible with-or even deducible from-the physical laws of movement of microscopic particles, whether described by newtonian mechanics or quantum mechanics has been one chief goal of statistical physics, developed by Boltzmann, Maxwell, Gibbs, etc. The tools of statistical physics allow to model a macroscopical system by a large-dimensional stochastic dynamical system, from which are derived under some assumptions to phenomenological laws for global statistical quantities such as temperature or expected total energy. Thermodynamics being aimed at studying the exchange of energy between systems and their environment, it is natural that the underlying microscopic system is modelled by an open system, i.e., an equatioṅ where x is the state of the system and u is an input provided by the environment, such as an external force or a voltage. When no input is present, the system is said to be autonomous, or isolated: the evolution of the system is not influenced by the environment. One can consider that the environment is itself another open dynamical system, and one may interconnect them in order to get one global autonomous system. A large part of statistical physics assumes no or infinitesimal exchange of heat with the environment, with probability distributions on the state that are almost constant, avoiding explicit use of open systems theory. However, there are fruitful interactions between open systems theory-also called control theory-and statistical physics, e.g., the dissipation-fluctuation theorem [1], dissipativity as a control-theory tool [2,3,4], control of stochastic ensembles [5,6], control thermodynamics [7], physical interpretation of Kalman filtering [8], port-Hamiltonian systems for irreversible thermodynamics [9], compartmental systems theory for thermodynamics [10], interconnected thermodynamic control systems [11], thermodynamic-based Lyapunov functions for distributed dynamical systems [12,13], model reduction of Hamiltonian systems [14], observer's effect in classical statistical physics [15,16], etc.
Physical autonomous systems are conveniently expressed in Hamiltonian formẋ = J(x) ∂H ∂x , where J(x) is a skew-symmetric matrix and H(x) is the Hamiltonian, often equal to the total energy of the system. One way to account for the influence of the environment is to assume that the Hamiltonian H(x, v) is dependent on parameters v determined by the environment, e.g., by the presence of an 'interaction Hamiltonian' that summarizes the influence of the environment. Another way, essentially equivalent to the first, is the addition to the right-hand-side of a term g(x)u that describes a force field modulated by the environment through the force intensity u.
In this paper, we consider both ways alongside, leading to a class of equationṡ which are called (time-varying) port-Hamiltonian systems. Port-Hamiltonian systems have been introduced in the 1990s for solving deterministic control problems of physical systems, see, for example [17,18,19] and the book [20]. The first message of this paper states that port-Hamiltonian systems (2) are a right class of systems for a system-theoretic study of statistical physics, large enough to easily model interesting physical systems and small enough to obey the laws of thermodynamics and be analyzed fruitfully by the techniques of control theory.
The second message illustrates the first and states that the classical problem of the conversion of heat to work can be succesfully formulated and solved as a control problem of a stochastic port-Hamiltonian system in contact with one or several heat baths. We first find the optimal work extraction from a single-temperature heat bath in a finite time and show to embody the optimal controller, which we call the linear heat engine, in an electric circuit with resistances, capacitance and transformer. We find that the corresponding optimal controller can be interpreted as implementing a zero-temperature low resistance, or in other words a Maxwellian demon [21,22] that extracts as much work as desired from a single source of heat. Explicit expressions for the finite-time work and heat are derived. As this optimal linear heat engine is not cyclic, we look for Carnot-like cyclic solutions operating between several temperatures and see the impact of the dynamical time constants of the system on efficiency and power of the cycle. This offers a dynamic interpretation of some classical results of endoreversible and finite-time thermodynamics [23,24,7,25].
Among the closest work in the literature are [5], from which we borrow the illuminating example of time-varying capacitance, [6], in which bilinear systems are studied from a stochastic control point of view with results for finite-time Carnot cycles illustrated on a piston-and-gas nonlinear example, [26], which formalizes a dissipativity and work extraction theory for the time-varying linear systems, and [27], which analyses some examples of linear heat engines. Other recent proposals for a physical implementation of Maxwell's demon, although of a very different nature than ours, are [28,29].
The paper is organized as follows. In Sections 2-3 the class of port-Hamiltonian systems is discussed, with a special focus on linear (Section 4) and scalar linear (Section 5) systems. How they obey the First and Second Laws of thermodynamics, thus Carnot's theorem, is explained in Section 6. A study of non-cyclic and cyclic optimal linear heat engines is proposed in Section 7.
Lossless port-Hamiltonian systems
This paper seeks to analyze work extraction of thermodynamic systems as a stochastic optimal control problem on fundamental physical systems. We must therefore choose a class of open systems that is large enough to allow the convenient modelling of any open system from first principles of classical physics, and small enough so that bounds can be derived on the achievable performance of the relevant control tasks on such systems, e.g., Carnot's theorem.
We propose the so-called port-Hamiltonian systems as such a suitable class. Port-Hamiltonian systems have been developed since the 90s by the open systems community as a mean to formulate control problems on mechanical and electrical systems. In particular, an intrinsic formulation has been initiated by [30], although in this paper we shall express the dynamics in coordinate form. The following formal definition is followed by some explanatory comments and reminders.
Definition 1. In this paper, we call a system lossless port-Hamiltonian if it obeys equations of the forṁ • g(x, v) derives locally from a gradient in x, i.e. can be expressed as • u(t) (called the linear input) and v(t) (called the nonlinear input) are time-varying parameters representing the influence of the environment on the dynamics of the system The linear output y is defined as The matrix J being skew-symmetric, invertible and closed ensures that in the absence of any external interference, i.e. when u = 0 and v is constant, the system is a mere Hamiltonian system. By Darboux's theorem, those are exactly the systems that can be expressed by the familiar Hamilton's equationsq = ∂H/∂p,ṗ = −∂H/∂q by a local change of variables (q(x), p(x)). We can then interpret the coordinates q i as (generalised) positions, associated to (generalised) momenta p i . In particular, the state space is always even-dimensional.
An input u i may for instance be a force, and the corresponding output y i the speed of the point on which the input force is exerted, or the other way around. The input u i may as well be a current through a two-terminal port, and the output y i a voltage across the port, or the other way around. If the nonlinear input v is a constant, then it is easy to see that we can express the energy balance asḢ = i u i y i , so that u i y i may be interpreted as the power flowing through the port (u i , y i ) (see [31] for more careful conditions to which this interpretation is physically unambiguous). If v is time-varying,Ḣ = i u i y i + jv j z j , thus justifying the form of the nonlinear output z j as ∂H ∂vj , corresponding to the nonlinear input v j .
We highlight, to avoid any confusion, that the definition above of lossless port-Hamiltonian systems slightly differs from the most common definition found in the control literature, see, for example [17,18,19,32], where in particular only linear inputs are considered, J is not required to be invertible and closed, and any force field g(x)u is allowed, making it possible to model phenomenological non conservative forces.
We may interconnect two port-Hamiltonian systems (x i , u i , y i ) (i = 1, 2) on the port (u i , y i ) in feedback, making one system's input the other system's output. More precisely, we may have for instance the interconnection equations u 1 + y 2 = 0, if u 1 and y 2 represent equal and opposite forces, and u 2 = y 1 represent a same speed. In an electrical context, the currents check u 1 + y 2 = 0 by Kirchhoff's Law and the voltage between two same points are denoted y 1 = u 2 . In all cases, such a lossless interconnection ensures thatḢ 1 (x 1 ) = −Ḣ 2 (x) = y T 1 u 1 , while the total energy remains constant. It is easy to see that the total dynamics is described by a Hamiltonian system with energy H 1 + H 2 and the This symplectic form is closed indeed, due to the fact that g i both derive from a gradient. Linear and nonlinear controls, representing apparently two different ways to act on the system, are essentially equivalent, as we now argue.
We first observe that a linear input can be seen as a particular nonlinear input. Since the force g(x, v)u is a gradient J ∂G ∂x , Equation (3) can be rewritten as a Hamiltonian system of Hamiltonian H(x, v) + uG(x, v). Although it is obviously equivalent in terms of evolution of the state space, it must be underlined that it leads to a different balance of energy. Indeed, u is now seen as a 'nonlinear' input with corresponding output z = ∂(H + uG)/∂u = G, and the Hamiltonian H + uG varies at the rateuG. By contrast, the linear input u and the corresponding linear output y = g T ∂H/∂x in Equation (3) leads to a rate uĠ for the Hamiltonian H. A concrete example of this difference would be a mass m of height x in a gravitational field of strength u. One may either consider the mass as a port-Hamiltonian system of Hamiltonian mẋ 2 /2 subject to an external force u, or a Hamiltonian system of Hamiltonian mẋ 2 /2 + mux, which amounts to including the gravitational field inside the system. Although these two views are equivalent in terms of equation of motions, they interpret the 'internal energy' of the system and its variation differently. The choice is a matter of convenience. Of course if at some initial and final times 0 and T the system is isolated from its environment, with u(0) = u(T ) = 0, the two interpretations coincide, the Hamiltonian is defined unambiguously in these moments, and it is important that the variation H(T ) − H(0) is predicted equally by the two models. It is indeed the case as shown by an integration by part. Similarly, any cyclic boundary conditions ensuring that u(0)G(0) = u(T )G(T ) will give a same energy balance over a cycle. Therefore any optimal energy extraction problem with these kinds of boundary conditions, as considered later in this paper, are well-posed. Restrictive conditions under which the instantaneous power transfered to a system is defined at all times unambiguously are given in [31]; those conditions are satisfied in the case of usual electric circuits such as those we use as examples in this paper.
Conversely, one may for every nonlinear input v add two states v and mv, and create a corresponding linear input u with the state space equations and an augmented Hamiltonian H ′ = H + mv 2 /2, for any constant m. We suppose, for consistency, that the rest of the system is also in canonical coordinates, or with constant J. The corresponding output is y =v. Thus for every trajectory v(t) we can find a corresponding linear input trajectory u(t) = ∂H/∂v +mv with the same effect on the state variables x. As m vanishes, the work yu exerted on the system is arbitrarily close tov∂H/∂v. For instance, a time-varying capacitance C in an electric circuit contributes to the Hamiltonian with a term q 2 2C associated with the electric charge q. The capacitance C is a nonlinear input that can extract energy from the system through its time variation. On the other side we may consider the detailed mechanism through which the capacitance is varied. For example, a capacitance may consist of two plates of mass m, the distance between which can be varied. In this case, the nonlinear input is the distance v, which is practically modified through the application of a force u, following Equation 4. But we may also create a time-varying capacitance by other means, such as changing the dielectric between the plates. Therefore, not only is the nonlinear input a simpler choice as it leads to more compact equations, but it ensures that any bound on the control performance obtainable from that system will be independent from the particular physical implementation of the nonlinear control.
In summary, the class of lossless port-Hamiltonian systems contains the Hamiltonian systems, is invariant under interconnection and allows two channels of interaction with the environment, through linear and nonlinear inputs/outputs. While those channels are largely redundant in principle, they offer a flexibility to model various situations easily. It is a reasonable stance to believe that the fundamental control of an environment over a system is through a linear control u, e.g. a force field, while the general nonlinear control v influencing for instance the Hamiltonian is only a phenomenological expression allowing to model an interaction at a higher level than by the detailed description of the interaction mediating the influence of the environment.
Dissipative port-Hamiltonian systems
Sources of energy dissipation, such as friction or electric resistance, are indispensable components for a convenient modelling of macroscopic situations. The most common model of dissipation involves a linear direct relationship between input and output (between force and speed, or between current and voltage): u = ry. It is known that this simple relationship can be implemented arbitrarily well by a many-dimensional fundamental linear system [33,34,15]. The initial state of the many-dimensional internal state can only be described by a probability distribution. The effect of this random initial distribution over a great many degrees of freedom translates into an additive noise. There are fairly good theoretical and empirical reasons to shape the effect of this noise as where n(t) is a Gaussian white noise of unit intensity (the 'derivative' of a Brownian motion) and T is a constant called temperature, that scales the amplitude of the noise. The noise is often called Johnson-Nyquist noise. In this paper, we limit ourselves to linear resistors because a general noise model for nonlinear resistors is not known and seems out of reach (see [35] and references within for partial results and a discussion). We do however allow a resistance r(x, v) that depends on the state or external environment.
Assume a port-Hamiltonian system with a resistance r as in Equation (5) connected between a scalar input u and a scalar output y = g T ∂H ∂x . Then the global equation readsẋ As we may add several resistances to several ports, one arrives at the following general form, which could also be called an 'open Langevin equation'.
Definition 2. A dissipative port-Hamiltonian system is of the forṁ
where v is a nonlinear input vector, u is a linear input vector, J and g satisfy the conditions of a lossless port-Hamiltonian system, It should be said here whether we understand the above stochastic differential equation in the Itō's or Stratonovich's sense. While Itō's calculus is popular in the control community because of its causality properties, Stratonovich calculus is often more adapted for physical situations. The two intepretations coincide whenever R i (v) is independent on x.
Linear port-Hamiltonian systems
It is customary to linearise an autonomous dynamical system in the vicinity of a stable fixed point in order to understand its behaviour. By (marginal) stability, the linear approximation remains approximately valid at all times in a certain neighbourhood of the origin.
It is common for an open system to have stable fixed point x = 0, which corresponds to a local minimum of the Hamiltonian, when the system is isolated from the environment (i.e., whenever u = 0 and v is constant). We may linearize all trajectories around this equilibrium, replacing x. We can assume H(0, v) = 0 without loss of generality. As x = 0 is a local minimum of H(x, v) for all v, ∂H(0,v) ∂x is zero. Therefore, one can consider a Hamiltonian of the form 1 2 x T Σ(v)x, for a positive definite Σ(v). As we shall see, this particular family of linear systems covers a large number of practically interesting cases.
The global dynamics is therefore of the forṁ with linear output y = g T Σx.
It has not been stressed so far that one system has a whole family of representations by differential equations, related to one another by change of coordinates. It is sometimes convenient to consider time-varying change of coordinates x = P (t)x, leading to a new equation in thex coordinates: whereJ andM are defined as the skew-symmetric and symmetric parts of P JP T +Ṗ Σ −1 P T , whileg = P g andR = P RP T . The linear output is y = g TΣx . The termM (v) plays the same role as the dissipation term −R(v), except that it is not necessarily negative definite and it is not matched by a random fluctuation term. In other terms it acts as a positive or negative zero-temperature resistance, that represents a loss or gain of energy by the system. As we shall see later on, this is associated to work (mechanical or other) performed on the environment. Energy-normalizing coordinates, which makes the energy form Σ(v) equal the identity at all times by choosing P T (t)P (t) = Σ(v(t)), are particularly convenient if they exist, as seen in the later sections. In caseΣ(v) is only nonnegative definite then we normalize it to a diagonal zero-one matrix D by choosing P T (t)DP (t) = Σ(v(t)). In this paper we usually consider the positive definite case.
Scalar linear systems
Most of the examples will be drawn from the linear systems detailed above, hence their importance. Let us consider in more detail the time-varying capacitor in Figure 1-(a), whose capacitance C can be modified at will by, e.g., moving the plates of the capacitance and acts as a nonlinear input v 1 = C. The linear input is the current, u = i, and the linear output is the voltage, y = v C . We can choose the state to be the charge q, the voltage v C , or x = q/ √ C = q/ √ v 1 , with corresponding equationṡ Note that these equations involve only one state, with a non-invertible J, since here J = 0. One may artificially add a dummy state variable, e.g., q, which has no influence in the Hamiltonian or in the input-output relationship. This would allow a proper Hamiltonian structure with an even-dimensional state space. For simplicity we keep the above one-dimensional systems. We will in the following choose to work with the third energy-normalizing state-space representation satisfying the energy balancė The mechanical work extraction rate from the moving plates of the capacitor is M (v 1 )x 2 , while the product yu is the electrical power into the capacitor. We find it useful to connect the above time-varying capacitor to a timevarying ideal lossless transformer [36] with varying turns ratio N > 0. The turns ratio is our second nonlinear input, v 2 = N . The system is illustrated in Figure 1-(b) and the model becomeṡ where u = i, y = v N , v 1 > 0, and v 2 > 0. This time-varying circuit can be used to implement a large class of first-order linear time-varying systems, as stated in the following proposition, proved in the Appendix.
Proposition 1. The input-output map of the first-order linear time-varying systemṗ
where a ∈ C 0 , b, c ∈ C 1 , can be exactly implemented using the port-Hamiltonian system (12) with for arbitrary v 1 (0) > 0.
To show how the circuit can be used, let us use it to implement an ideal time-invariant RC-circuit, see Figure 1-(c), where the resistor has temperature zero and exhibits no Johnson-Nyquist noise. Assuming the resistance is R 1 > 0, u = i, and y = v C , the model (in energy-normalizing coordinates) becomeṡ Thus a = −1/(R 1 C 1 ) and b = c = 1/ √ C 1 , and let us denote the time-constant of the circuit by τ 1 = R 1 C 1 . Then we should according to Proposition 1 choose the following nonlinear inputs for the port-Hamiltonian implementation ( Physically this corresponds to a compression of the capacitor plates and that mechanical energy is extracted in a way to resemble the dissipation of the RCcircuit. To further illustrate the flexibility of the time-varying circuit, note that we can easily also implement an active filter with a negative resistor −R 1 < 0 by just changing the nonlinear inputs to Physically this corresponds to pulling the capacitor plates apart which requires the mechanical work injection rate |w| = x 2 /τ 1 . Remark that not only the input-output map of the RC-circuit is replicated by the time-varying circuit, but also the amount of energy stored in the capacitor, since x(t) = p(t) for all t. Hence, in this sense the time-varying port-Hamiltonian system is both externally and internally equivalent to the RC-circuit.
That work extraction or injection can be interpreted as an equivalent RC circuit will be useful in later sections to analyze heat engines, and is further formalized for general systems in the following section.
The First and Second Law of thermodynamics
The class of port-Hamiltonian systems obeys the First and Second Law of thermodynamics, as we now detail.
Consider a port-Hamiltonian system with a random state x. The internal energy is defined as the expected Hamiltonian U = E x H(x, v). For a lossless system steered by inputs u, v, the variation of internal energyU is interpreted as the power or work rate w performed by the system on the environment: For a dissipative system as described by Equation 7, the energy balance is written, following the rules of (Itō's) stochastic calculus: Note that using Stratonovich's calculus would add a term T ijk ∂H/∂x i ∂S ij /∂x k S kj with S being a square root of R = SS T . This would modify the expression for heat q in the subsequent developments of this section. This term vanishes if R does not depend on x, as is the case in all our examples. For simplicity we keep the Itō form below. The term −E x ∂H ∂x T R ∂H ∂x represents the dissipation through the resistive parts, while T E x trace R ∂ 2 H ∂x 2 is due to the fluctuation. Together they form the heat rate: , so that the energy balance, or First Law of thermodynamics, is written simply asU = q − w.
A linear port-Hamiltonian system under the energy-normalizing coordinates x, where X := E x xx T is the second moment of x.
As we observed in the previous section for the scalar case, we see that the extraction of work is formally undistinguishable from positive or negative dissipation elements, thus can be represented e.g. by time-varying zero-temperature resistances in an electric circuit. This key intuition will be used in the next section for the convenient design and analysis of heat engines.
Kelvin's statement of the Second Law states that one cannot extract work from a unique source of heat through a cyclic process. This can be proved rigorously for port-Hamiltonian systems with classical arguments, which we overview briefly. A way to prove it is to introduce a quantity called entropy, denoted S, which is the differential Shannon entropy of the probability distribution of the state relative to the measure µ defined by the symplectic structure associated with the J matrix. The quantity is finite only when the state probability is characterized by a probability distribution ρ(x, t) with respect to µ, in which case it is equal to S = −E x (ln ρ). Using the fact that the Kullback-Leibler divergence of two initial probability measures for the same Markovian process is nonincreasing [37] and the fact that the Gibbs distribution ρ(x) ∝ exp(−H(x, v)/T ) is a stationary probability measure for one heat bath T , it is classic to derive the celebrated Clausius inequalityṠ where q i is the heat rate exchange with the heat bath of temperature T i . From there it is quite elementary to derive Carnot's theorem, which states that the efficiency η = w/ q hot of a system having access to heat baths T hot > T cold is bounded by 1 − T cold /T hot . Moreover this efficiency can be attained arbitrarily close by cycles that access at most one bath at a time and for which sup t |q(t)| is arbitrarily small, i.e, the cycles are infinitely slow. Matrix-theoretic proofs of this for the linear port-Hamiltonian systems can be found in [26], generalizing [5]. In [6], a proof for bilinear systems is provided along with interesting efficiency bounds on finite-time cycles with the tools from stochastic control theory. Our aim now is to proceed further into this direction, although our starting point is different in that we use the port-Hamiltonian framework and optimal linear-quadratic control theory.
Finite-time transformations
In this section, we first describe a simple non-cyclic optimal linear heat engine that extracts work from a single heat source. Then we will discuss its finite-time implementation using physical components, and its relation to Carnot heat engines, which are known to achieve the optimal thermodynamic efficiency.
An Optimal Linear Heat Engine
Let us consider a resistor R 2 of temperature T , whose effect can be modelled by a parallel source of random white noise current 2T /R 2 n, see (5). Looking at the frequency domain, it is well known that every frequency band ∆f carries a power 4R 2 T ∆f . By connecting a zero-temperature resistance R 1 = R 2 in parallel, one can dissipate a power T ∆f , therefore an infinite power over all frequencies, through the resistance R 1 . Remember that a zero-temperature resistance can be implemented by a capacitor with moving plates, see Section 5, so that this 'dissipated' energy is actually work extracted from the system. This apparent ability for a hot resistor to exchange an infinite amount of thermal energy with the environment is sometimes called the ultraviolet catastrophe. Of course, this diverging power only betrays the limit of the Johnson-Nyquist white noise model. In reality, high frequencies power vanish due to fundamental reasons (quantum cut-off [33]) or engineering constraints (limited heat conductivity). We assume therefore that there is a given capacitance C in parallel with R 2 , which filters out the high frequencies of the noise, with cut-off frequency 1/τ 2 , where τ 2 = R 2 C is the time constant of the R 2 C circuit, see Figure 2-(a). Intuition may therefore suggest that we can retrieve at best a mechanical power of the order of T /τ 2 . In this section, we show under what condition it is true, and how to extract a maximum amount of useful work from the hot R 2 C circuit within a time t f .
Using energy-normalizing coordinates across the capacitor, we have the model where H = 1 2 Cv 2 C = 1 2 x 2 is the Hamiltonian. The input is the injected current i and the output y is the voltage across the circuit, see Figure 2-(a). The resistor R ǫ of temperature zero represents inefficiencies in the work extraction mechanism, and we will let it tend to zero later. Alternatively one can interpret R ǫ as losses in the interconnecting wires. We assume that R ǫ has zero temperature, because a Johnson-Nyquist noise in R ǫ would again provoke an ultraviolet catastrophe and an infinite power, thus deteriorating the accuracy of the model rather than improving it.
Maximizing the amount of extracted work from the hot R 2 C circuit within a finite time t f (which is minimizing the work given to the same circuit) is an optimal control problem with the criterium which we solve in the Appendix for all values of t f , R ǫ , R 2 and C. Strikingly, the optimal controller is of the form u = −y/R 1 (t). In other words, the optimal heat engine is a zero-temperature time-varying resistance R 1 (t). A circuit representation of the optimal heat engine is given in Figure 2-(b). For large time horizon, R 1 (t) assumes a nearly constant value R 2 R ǫ + R 2 ǫ until roughly t f − τ 2 /(2 1 + R 2 /R ǫ ) where it converges exponentially fast to R ǫ , as illustrated in Figure 3. In case of an infinite horizon t f → ∞, R 1 takes the constant value R 2 R ǫ + R 2 ǫ . The total work extracted by the optimal linear heat engine is The power for large times is therefore (1 − 2 R ǫ /R 2 )T /τ 2 . Hence, a hot but small resistor R 2 has the potential to be a good source of work in the circuit Figure 2. Remark that the answer would be very different for a constant current source instead of random, where the maximum power transfer to R 1 is reached by an impedance matching R 1 ≈ R 2 .
It has been assumed all the power leaving the circuit, −y(t)u(t), is equal to the work extraction rate of the engine. One may wonder if there is a physical device that can generate the optimal current u = −y/R 1 (t) while converting this power into useful mechanical work (for example) without any losses. This requires a device that emulates a time-varying resistor R 1 (t) of zero temperature. As the capacitor C and R 1 (t)+R ǫ in Figure 2-(b) is simply an RC-circuit, it can be implemented with a lossless time-varying circuit, see Section 5 and Figure 2-(c). The details of this and the corresponding energy balance is the topic of the next subsection.
Energy balance of the linear heat engine
We focus here on the analysis of the constant R 1 case, which occurs for long or infinite time horizon t f . The model of the circuit in Figure 2- where the global time constant of the system, we can rewrite the model as where T ′ = T τ /τ 2 acts as an 'effective' temperature for which the model is identical to a simple Langevin's equation for an RC circuit in contact with one heat bath T ′ and no work extraction. This reformulation allows a simple analysis of the energy and work balance of the system, as we now show. The balance of internal energy x(t) √ C(R1+Rǫ) = 2U (t)/(τ 1 + τ ǫ ), the fraction α := τ 1 /(τ 1 + τ ǫ ) of which is useful work (i.e., dissipated into R 1 ): Assume the capacitor is initially in thermal equilibrium with R 2 , i.e., U (0) = T /2, and is then also connected to R 1 + R ǫ of temperature zero. The capacitor will then exponentially fast reach a new thermal equilibrium with internal energy U (t f ) ≈ T ′ /2 (for large enough t f ). The total work extracted during this relaxation becomes This expression can readily be maximized for τ 1 , which confirms that τ 1 = τ ǫ τ 2 + τ 2 ǫ (i.e., R 1 = R 2 R ǫ + R 2 ǫ ) is optimal and for small κ = τ ǫ /τ 2 recovers W ⋆ as in (17).
The linear heat engine can be understood as a physical implementation of Maxwell's demon, in that it acts on a system in feedback control with the intent to extract work from the random fluctuations of a single heat source. Unlike the original Maxwell's demon [21,22] portrayed as an intelligent being of some sort, our demon has an explicit implementation as a physical system, which is also the case in [28,29]. While most demons explored in the literature act in discrete time with a finite set of actions (e.g. open or close a trap door), our demon acts in continuous time with a continuous set of actions. Although it can extract any desired amount of work from a single heat source, the demon does not formally contradict the Second Law because it does so in a non-cyclic way, as it includes in particular a time-varying capacitor with exponentially increasing capacitance (see Equation (14) and around).
This analysis also reveals a perhaps troubling property of this linear heat engine. The temperature of the capacitor in steady state converges to T ′ = T τ /τ 2 . Since τ should be small to extract a large amount of work according to the above analysis, it indicates the optimal linear heat engine creates a large temperature gradient. This may seem to contradict an important message of thermodynamics: The most efficient heat engine (the Carnot heat engine) operates in quasi steady state avoiding finite temperature gradients and unnecessary entropy generation. This issue is further discussed in the next subsection.
Finite-Time Carnot Heat Engine
So far we have addressed the problem of extracting work from a hot heat bath of temperature T using a zero-temperature resistor R 1 (t). This device can be implemented by a time-varying capacitor with exponentially increasing capacitance, see Section 5 and in particular Equation (14). From a practical perspective, it is clear we cannot let this increase go on forever, and next we find a method to reset the capacitor to the initial state while extracting net work. Hence, we are interested in constructing a cyclic operation of the capacitor. The classical way to operate heat engines is to introduce two heat baths of temperatures T cold < T hot . This idea we will pursue next, based on the circuit in Figure 4, which is based upon Figure 2-(c). One should not confuse the timevarying capacitance C(t) with the emulated constant capacitance C, part of the R 1 C emulated circuit.
To reset the engine, we will construct a cycle resembling the Carnot cycle, see for example [5], and compute its efficiency. The cycle and its four legs are shown in Figure 5. The time-varying capacitor first goes through an isothermal phase (a→b) emulating a constant positive resistor R hot 1 and a constant capacitance (temperature T ′ hot < T hot , time constant τ hot 1 ) of duration t hot . The third leg is also an isothermal phase (c→d) but implementing a negative resistor −R cold 1 with a constant capacitance (temperature T ′ cold > T cold , time constant τ cold 1 > 0) of duration t cold . In these legs, the time-varying capacitance (the nonlinear input v 1 ) satisfies The temperatures depend on the time constants as which follow from models identical to (18) using τ 1 = τ hot 1 and τ 1 = −τ cold 1 .
To close the cycle, two adiabatic legs (b→c, d→a) are also introduced. These can be understood as stepwise instantaneous changes of the time-varying capacitance where the charge in the capacitor remains constant and we have in order to have a closed cycle. From the above expressions, it is clear we must satisfy the constraints to form stable closed cycles which can be repeated indefinitely. In particular, in the cold phase when the resistance is negative, we must have τ cold 1 > τ 2 to have a finite temperature. If we decrease the capacitance at a too high rate, the temperature goes unbounded.
We can now compute the work and heat flows in the isothermal legs as (assuming R ǫ = 0 and noting that U is constant) using the work extraction formula (20). The work in the adiabatic legs are ±W ad = ± 1 2 (T ′ hot − T ′ cold ) and the heat flow is zero. Hence, work is extracted in the hot phase, and work is put back in to reset the capacitor in the cold phase. The efficiency of heat engines is typically defined as the net work over the cycle divided by the heat input. Here it becomes where we have used the cycle condition (21). It is interesting that η has the same form as the Carnot heat engine efficiency, except that one should use the effective temperatures T ′ hot and T ′ cold instead of T hot and T cold . In particular, net work is only obtained if T ′ cold < T ′ hot which puts constraints on how to operate the engine.
The larger the difference between T ′ hot and T ′ cold , the higher the efficiency. This is obtained by making the ratios τ 2 /τ hot 1 and τ 2 /τ cold 1 small. In fact, we can come arbitrarily close to the Carnot heat engine efficiency by making R hot 1 and R cold 1 large relative to R 2 . It is also interesting to note that the efficiency does not depend on the period time t hot + t cold and the emulated capacitance C.
Another quantity of interest is the mechanical power defined and given bȳ w := W hot + W ad + W cold − W ad t hot + t cold The power is the net work averaged over a cycle, and can be made arbitrarily large by choosing the emulated capacitance C small. The efficiency η can be made large by choosing R hot 1 and R cold 1 large. Therefore power and efficiency can be simultaneously high. Note, however, that if there is a lower bound on the time constant τ 2 = R 2 C, for instance, then there is a trade-off between efficiency and power. It is a simple calculation to find that the optimal power is reached for A topic of finite-time thermodynamics is to characterise the maximum power cycle, often in terms of the thermal conductivity k between the bath and the system. Here we may identify the thermal conductivity to q/(T −T ′ ) = 1/τ 2 , for both baths. With this identification, we recover the maximum power ( √ T hot − √ T cold ) 2 /4τ 2 , as predicted by the classical Orlov-Berry formula [24] and the corresponding Chambadal-Novikov efficiency 1 − T cold /T hot [38,39].
Let us conclude by discussing the relation to the optimal linear heat engine and the issue raised in the end of Section 7.2. For this engine it was optimal to choose R hot 1 = R 2 ǫ + R 2 R ǫ → 0 as R ǫ → 0. This indeed gives the maximum possible power W hot /t hot during the hot phase since T ′ hot → 0. But if we take the resetting of the capacitor using a cold heat source into account, the net efficiency is very bad, and even negative since τ cold 1 > τ 2 . A negative efficiency means that it requires more work to reset the engine than was extracted in the hot phase. Finally, note that the optimal linear engine assumed a fixed capacitance C and was only optimized for the hot phase.
Conclusion
We have shown in this paper how even the most classical linear control theory can help us explore the fine performance of finite-time heat engines. We believe that this is only an example on how a better integration of existing controltheoretic tools, e.g., Kalman filtering, port-Hamiltonian theory, passivity theory, information-theoretic techniques in control, etc. may be better integrated with statistical physics, in order to explore the fundamental limits to work extraction, actuation, measurement, or computation-even in a nonlinear context, which is conveniently formalised in the port-Hamiltonian framework.
|
2013-08-06T09:19:56.000Z
|
2013-08-06T00:00:00.000
|
{
"year": 2014,
"sha1": "64dde5162602992fd5e6a0bf5de955bfc8f5ca6f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1308.1213",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "64dde5162602992fd5e6a0bf5de955bfc8f5ca6f",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
}
|
212625843
|
pes2o/s2orc
|
v3-fos-license
|
Chinese Outward Direct Investment in Australia: Current Condition Analysis and Prospect
Australia has become a crucial destination for Chinese foreign investment. Based on statistical data of Chinese outward direct investment (ODI) in Australia throughout recent ten years, by practicallyanalyzing the development and current condition of Chinese ODI in Australia, this paper discusses problemsof Chinese ODI in Australia, brings forward specific countermeasures and recommendations, and makes a prospect for its future development. Findings suggest thatChinese enterprises invest in Australia basically because of Australian resources’ reservation, Chinese domestic demand for various kinds of goods, and the support of relevant favorable policies.The outcomes also suggest that Chinese ODI in Australia has main characteristics of metal and energy industry dominance, imbalanced industrial and regional investment allocations as well as the tendency of industrial diversification. To improve imperfections, Chinese enterprises shall promote the industrial diversification further and to get ready knowledge about Australian foreign policies and local regulations, and Chinese government are supposed to bring forward more encouraging policies for private enterprisesto invest abroad.In general, there still exist a great potential anda prosperous prospect for Chinese enterprises to invest in Australia.
Introduction
Ever since 1970s, under the orientation of a series of national policies including "reform and opening up", "attracting investments", "bringing in" and "going global" strategies,the development of both Chinese industrial technologies and enterpriseshas fundamentallyexperienced changes from nonexistence to existence, changes from being backward to advancement and changes from the dominance of "bringing in" to the coordination between "bringing in" and "going global", and the industrial fund has got accumulated and expanded. From the perspective of operating scale,a great many Chinese enterprises have already aimed at overseas market instead of solely Chinese inbound market and have aggressively kept investing outbound and running globalized operation; from the perspective of foreign economic trade,China has stepped into the rank of great power in terms of foreign investment. As a developed country withample natural resources, Australia occupies a significant status of the foreign investment of China. In accordance to statistical data of Chinese outward direct investment (ODI) in Australia during recent ten years, on the fundament of practically analyzing development and current condition of Chinese ODI in Australia, this paper is going to discuss and summarize problems existing in Chinese ODI in Australia currently, to bring forward specific countermeasures and recommendations, and to make a prospect of development ofChinese ODI in Australia.
2The Origin and Development
Chinese foreign investment started to occur at the end of 1990s and the beginning of the 21 st century. Around the end of 1990s,Chinese capital flows began to be revealed as capital outflowat a relatively low level, and after entering 21 st century, the total volume of Chinese capital outflow started to climb up. Till now, Chinese capital outflows have still been fluctuating around the level of USD 400 HML (USD 40 billion) annually on average. It was not until 2005 that Chinese enterprises started to invest in Australia. 1 In 2005, ChineseODI in Australia amounted to USD 320 million, and afterward Chinese enterprises kept investing in Australia incredibly actively with annual investment amounts no lower than USD 2 billion, creating records of USD 15.99 billion in 2008 and USD 11.98 billion in 2015 respectively. Generally, Chinese ODI in Australia experienced large fluctuation, which was convergent to Chinese capital outflows fluctuation during the same period. Observed from the scale of investment volume, Chinese investing enterprises generally hold an optimistic attitude towards investing in Australia.
Australian Resource Advantage and Its Characteristics
Australia is abundant in natural mineral and energy resources, including bauxite, coal, iron ore, copper, tin, gold, silver, uranium, nickel, tungsten, rare earth elements, mineral sands, lead, zinc, diamonds, natural gas, petroleum [1], and Australia thus has a comparative advantage in relevant resources trading. Australian net export volume of coal accounts for 29% of the global total volume of that, becoming the largest coal net export country in the world [1]. From 2005 to 2015, in terms of energy trade, the export volume always far exceeded the import volume in each year with the annual average net export of AUD 7950.02 million at annual average growth rate of5.39%. In terms of the geographical distribution of natural resources, the majority distributes among the states 2 and regions that are relatively remote. The exploration volume of WA took up 59.38% out of the total, followed by that of QLD (16.90%) and that of SA (12.12%), which accounted for nearly 90% of total mineral and petroleum exploration in Australia. Remote as those regions are, those regions contain more ample mineral and energy resources compared with other regions. As for the non-natural resources such as human resources, financial resources, those tend to gather in the regions where populations are relatively intensive. As for proportions of households in each state in 2015, NSW (31.39%), VIC (24.98%) and QLD (20.16%) ranked top 3, which altogether took up over three-fourth (76.53% in practice) of total households within Australia. In addition, economy and economic development of each state are highly related to its foreign trade and that to what extent a state can attract foreign investment. The export volume and import volume of NSW, VIC, QLD and WA all transcended that of other states in 2015, which means that international trades in those regions were comparatively active. NSW and VIC for particular have relatively long history of economic development and economic fundament, so a large quantity of Australian companies and multi-national companies (MNCs) set their branches or even headquarters there, which leads to relatively efficient information flows and comparatively high capital liquidity. Accordingly, although NSW and VIC are poor of natural resources in comparison with other regions, the service trading is generally more active comparably.
The Need for Chinese National Economic Development
Chinese ODI in Australia is carried out to satisfy the need for Chinese industrial development and national economic development. For long, the development of heavy industries with high consumption within China in areas of steel production, concrete production, etc. has stimulated the demand for resources like metal and energy resources. As researched by World Steel Association and World Energy Consumption & Statistics, China has always been the nation with largest steel-production and energy consumption volume in the world which have continuously been ascending. Under this circumstance, Chinese large mineral and energy industrial enterprises transfer their investments to Australia to expand their economic scales. Chinese ODI also results from the incentive to satisfy the demand of Chinese domestic consumption. China is currently at the stage of transforming from a middle-income country into a high-income country, and in the developed regions such as Pearl River Delta and Yangtze River Delta, the income levels have already approximately been equivalent to that of upper-middle countries [2].The enhancement of income level enables Chinese consumption structure to step into the stage of developing new type of consumption 3 from the stage of material, surviving, conventional consumption with major goal of solving the subsistence problem (clothing and food) [3],leading to mounting demand for energy as well as a variety of products with high-quality, which effectively spurs and expands Chinese ODI in Australia in multiple industries.
Relevant Favorable Policies Released
Ever since December 21, 1971 when Sino-Australiandiplomatic relationship was established, through the leaders' frequent contacts and mutual visits, bilateral political, economic and trade relationship has been well developed, and communications and collaborations have been developed sustainably and stably. In order to expand Chinese enterprises' overseas investments, Chinese government has released a series of favorable policies for investing overseas in succession. The document of<Opinions about Encourage and Regulate Foreign Investment Cooperation> improved and released in 2007, for example, indicates Chinese government's will to encourage and promote Chinese enterprises to invest overseas, providing policies and institutional conditions to motivate mutually respectful and reciprocitarian investments and trades between China and other countries. In addition, Australian government has also taken a series of political actions favorable for FDI and its foreign investment, therefore promoting Sino-Chinese trades and investments. Especially China-Australia Free Trade Agreement (ChAFTA, negotiated on June 17, 2015)is expected not only to promote Chinese ODI in Australia but also tostimulate Australian economy. M&A of Australian local enterprises by Chinese enterprises may provide additional funds or human resources and more local employment opportunities. Also, with the investment volume and Chinese investing enterprises increasing, Australian market competition can possibly be intensified, which will develop and complete local market regime further.
Dominance of Metal and Energy Industry
The investments are basically dominated by metal and energy industry, albeit large amounts of Chinese ODI in Australia in recent years annually. Among the accumulated investments from 2005 to June 2016, metal industry (40.10%) took the lead, and energy industry (36.40%) ranked second only to it, which accounted for 76.5% altogether, remaining portion shared by other industries.However, due to the global economic downturn in recent several years, the growth rate of Chinese economy slowing down 4 and the transformation of Chinese economy development from being "production motivated" into being "consumption motivated", 5 the expansion ofAustralian metal and energy industry has been seriously affected and constrained. According to the statistical data of Australian Securities Exchanges (ASX), the performance indicators of metal sector and energy sector have generally revealed a downward tendency, and companies of those two sectors have suffered depression comparably [4].Also, influenced by the prices of staple commodities falling down, the Chinese investment influx to Australian mineral industry slumped (down 85% year-on-year), with Chinese foreign investment in Australia dropping by nearly 20% [5].
Imbalanced Distribution of Regional and Industrial Investments
Influenced by Australian demographic and resource distribution, Chinese ODI in Australia is characterized by large regional differences in the aspect of investing regions and industries and imbalanced regional distribution.Chinese ODI is extremely intensive in NSW and VIC, relatively intensive in QLD, NT, SA and WA and rare in ACT and TAS.NSW attracted 49.3% of Chinese ODI in 2015, above that of other states. VIC attracted 33.58% ODI from China, ranking second only to NSW, followed by QLD (9.05%), NT (3.74%), SA (3.58%) and WA (0.76%). By contrast, ACT 6 and TAS attracted zero ODI from China, i.e. these two states did not attract or attracted a very tiny amount of investment from China in 2015 (see Fig.1).
From the perspective of regional distribution of industries, Chinese ODI in real estate distributed in NSW, VIC, QLD and WA, but intensively in NSW (42.54% of total ODI). Investment in agribusiness spread most widely, distributing in all states except WA, but with the lowest proportion; the volume of infrastructure investment was only higher than that of agribusiness, intensively distributing in NSW and NT; the investments in energy industry and mining industry intensively distributed in VIC and SA.
Tendency of Industries and Investing Principals Diversifying
During the period between 2005 and June 2016, Chinese ODI in Australia development has implied a trend of diversification (see Fig.2). Between 2005 and 2007, Chinese ODI was occupied mainly by metal industry and energy industry, but dominated by metal industry. The investment in energy industry started to occur in 2007 and kept climbing up, and its investment volume exceeded that of metal industry after 2009 and dominated Chinese ODI in Australia. Originating from 2009 when 4 Up until the second quarter of 2016, Chinese GDP growth rate has decreased to 6.7%, the lowest point since 2009. 5 When the productivity level is high, in this scenario, ample products are produced and supplied to the society, and the majority of products are in the position of surplus instead of shortage, together with productivity surplus, so that consumers hold the option, where the market acts as the buyer's market. With the ratio of the third industry increasing, consumption structure accelerates, and the consumption patterns reveal diversification tendency. In this case, in the conflict between production and consumption, consumption dominates this conflict, and productivity as a result cannot be further developed without consumption expansion.See F. Chi in: Understand the "New Normal" of Chinese Economy, edited by China Workers Publishing House, Beijing (2015), p. 302. 6 ACT is geographically located inside NSW, where Canberra (the Australian capital) is located, therefore the political central area. Because of political and economic separation, ACT usually has less frequent trading activities.
industries of real estate, finance began to be invested in, Chinese ODI in Australia presented its trend of diversification, which was even manifest in 2014 when industries including real estate, finance, transport, agriculture, tourism, entertainments were covered.
It shall be noticed that the foreign investments in Australian real estates from China rose sharply since 2014, and its investment volume was higher than that of any other industry in 2015. On the contrary, Chinese foreign investments in Australian metal industry and Australian energy industry tended to go down since 2013 and 2015 individually. At present, metal industry, energy industry and real estate principally share the majority of volume and ratio of Chinese ODI in Australia.
In addition to the industrial diversification, the investing principals have also shown a tendency of diversification.In 2015, the investment value of state-owned enterprises(SOEs) was slightly lower than that of the non-SOEs, but the latter occupied almost 80% of total number of deals completed far beyond that of SOEs.In spite of the investing principals diversifying tendency, nonetheless,Chinese SOEs still take the majority of the accumulated investment. Some Australian have worried about it that excessive Chinese ODI in Australia and the control of Australian resources resulting from it are likely to let Chinese government indirectly interfere Australian politics. Factors including issues of populism like this and the particular characteristic of Chinese foreign investment ownership (like SOEs' dominance) have constituted "a source of political confusion in Australian policy development and in the Chinese perception of Australian policy" [6].Also, Lou (2014) pointed similar obstacles of trades out [7]. As a consequence, it becomes a realistic issue faced by Chinese government andChinese foreign tradehow to stimulate Chinese non-SOEs to invest abroad in order to balance the ratio of SOEs' participation with that of non-SOEs' participation in foreign trades.
Promote the Diversification
To improve the imbalanced industrial allocation of Chinese ODI in Australia, thediversification development shall be promoted further, with the target of investment moved towards other industries with investing potential.In recent years, seen from the perspective of Australian industry as well as demand and supply, Australian agricultural products (such as milk, wine, fruits) have been increasingly popularized among Chinese consumers. Australia needs FDI from countries like China to make up for its lack of deposits and capital to develop Australian economy [8]. Besides, with the tourists visiting Australia increasing year by year, Australian government has got down to the focus on and the input in the relevant industries surrounding tourism industry including tourism, transport, human resources. Additionally, natural gas resource has also long been an abundant natural resource in Australia while remaining undeveloped, providing potential investing opportunities. Thus, in the current situation where global economy is weak and the development of metal and energy industry is constrained, targeting at other various industries to make diversified investments can not only satisfy the Australian demand for foreign investment input andexpand Chinese ODI in Australia, but it can also promote both countries' economy development.
5.2Strengthen the Knowledge about Australian Foreign Policies and Regulations
In order to reduce the managerial difficulties caused by differences between Chinese regime and Australian regime, Chinese investing enterprises shall get a ready knowledge of multiple Australian local regulations as well as motivations and constrains of Australian foreign policies. According to <Australia's Foreign Investment Policy>, although foreign investments are welcomed by Australian Government, but in sensitive areas like aviation, shipping, media, telecommunications 7 Australian government has imposed some limitations on the investments [9].In relation to other industries like real estate, prior to investing in these areas, related constrains as well as the political and economic environment shall be taken into consideration as well. Because of the high return generated from investing in real estates in Sydney, Melbourne, etc. in recent years, a flock of foreign investors have turned to invest in real estates and then bid up the local house prices. Solely in Sydney, as can be surveyed, nearly one third of the total real estate assets are owned by foreigners. Consequently, Australian government has been releasing increasingly strict policies on foreign ownership of Australian real estates.When considering the preparation for the investments in these areas, Chinese enterprises are suggested to notice constrains ruled in relevant Australian policies.
Encourage PrivateEnterprises to "Go Global"
To improve the situation where Chinese SOEs dominate ODI and ease the negative influence of populism resulting from it, Chinese government shall release more favorable policies for private enterprises (as well as JVs) investing abroad [10] to actively motivate those enterprises to "go globally". Because private enterprises are characterized by high operating flexibility and adaptability, by contrast, these enterprises are more likely to adapt to the local market regime after going abroad, of which the characteristics and flexibility can possibly reduce or eliminate the negative influence of political uncertainties brought by huge amounts of SOEs' ODI. As for the realistic issues surrounding the rise of Australian populism caused by Chinese SOEs' ODI dominance, Australian government meanwhile shall set an examining framework for foreign investments to fairly treat Chinese SOEs' ODI in Australia, and official consultations and meetings of Chinese and Australian governments are expected to be arranged more frequently on discussing issues of "scrutiny facilitation of competition, corporate governance and financial transparency" surrounding Chinese SOEs [6].
Conclusion and Prospects
For long, Australia has been famous for its abundant natural resources reservation around the world. As Chinese national economy blooming and income level being enhanced, Chinese consumers' demands for Australian resources and commodities have been expanding. Under the support of related favorable policies of Chinese and Australian governments, in recent years, those demands have got satisfied through investments and trades. Sino-Australian mutual investments and trades have effectively promoted Australian economic development, in which Chinese ODI in Australia has played a vital role.
However, with Chinese ODI in Australia developing, some realistic issues have also been exposed.Up till now, the dominance of Chinese ODI in Australia belongs to metal industry and energy industry, yet the potential of other industries as well as the regions where these industries are distributed still have not been aware of and remain undeveloped, thus extremely imbalanced allocations in industries and geographical regions. Moreover, Chinese SOEs are still the major investing principals, so that Australian somewhat resist activities of Chinese ODI in Australia, which affects Chinese ODI in Australia negatively. As a solution to these problems, Chinese enterprises shall target on industries with great investing potential such as agriculture, infrastructure, transport, tourism as well as the areas where those industries are, instead of the only concentration on metal industry and energy industry. Additionally, during preparation prior to the investments, to reduce or avoid investing risks that are caused by the differences of Chinese and Australian regimes and regulations, Chinese enterprises shall be aware of Australian policies of foreign investments, political and economic environment and the changes of them. Also, Chinese government should bring forward more encouraging policies for non-SOEs investing abroad. Seen from the need of economic development of both countries and the long-term interest, the prospect of Chinese ODI in Australia is expected to be continuously affirmative. On the one hand, Chinese government will continue encouraging the Chinese enterprises to invest overseas. With farther Chinese domestic income level enhancement, together with the development of globalization, the demands of Chinese consumers for overseas commodities, especially Australian products, will continue growing up. On another, Australian government will still be willing to keep its door open for FDI including Chinese FDI to stimulate its economy. Besides, Australian government in the future will keep developing other regions other than NSW and VIC further. There exists a great probability that Australian government may consider transforming industrial structure in remote regions to boom the local economy and may even propose more favorable policies for foreigners' immigrations and investing in those regions. Therefore, analyzed from both sides, there still exista great potential and a prosperous prospect for Chinese ODI in Australia.
|
2019-08-19T08:25:35.774Z
|
2017-01-01T00:00:00.000
|
{
"year": 2017,
"sha1": "d3d6e69fd613e58d49e242cc6eaeae6c44a2f7d6",
"oa_license": "CCBYNC",
"oa_url": "https://download.atlantis-press.com/article/25872003.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7dea2d56b8b6d2f6963996ff9f6bd8edb65c7980",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
}
|
198147525
|
pes2o/s2orc
|
v3-fos-license
|
High-precision nonlocal temporal correlation identification of entangled photon pairs for quantum clock synchronization
High-precision nonlocal temporal correlation identification in the entangled photon pairs is critical to measure the time difference between independent remote time scales for many quantum information applications. The first nonlocal correlation identification was reported in 2009, which extracts the time difference via the algorithm of iterative fast Fourier transformations (FFTs) and their inverse. The least resolution cannot be selected arbitrarily, which is restricted by the peak identification threshold of the algorithm, and the calculation precision accordingly. In this paper, an improved algorithm is proposed both in the resolution and time difference precision, which realizes well-performed coincidence identification with a flexible resolution down to 1 ps. With a common time scale reference, the measured time offset between two timing signals has achieved a minimum precision of 0.72 ps. We believe that this algorithm would be a dramatic step towards the field implementation of entanglement-based high-precision clock synchronization, ranging system and quantum communications.
Introduction
The strong temporal correlation (down to a few hundred femtoseconds) in time-energy entangled photon pair sources has found great applications in quantum spectroscopy [1][2][3], quantum communication [4][5][6], and quantum clock synchronization [7][8][9][10], etc. In typical laboratory experiments, the temporal correlation measurements are usually implemented by a local coincidence device, which provides a high-precision measurement for the time-of-arrival difference between entangled photons. However, for the real applications such as quantum clock synchronization and quantum key distribution between remote sites, a nonlocal time coincidence identification is required.
In 2009, Ho et al. proposed a first nonlocal time correlation identification algorithm [11].
In the algorithm, the remotely departed entangled photon pairs are individually detected and their time arrivals are nonlocally recorded by independent event timers. To identify the temporal correlation, the recorded time sequences are firstly binned for subsequent fast Fourier transformations (FFTs). By implementing direct product on the Fourier transformed bins and applying the inverse FFTs on the outputs, the time difference between the remote photon pairs can be extracted. As a pre-processing program is needed in the algorithm, according to different resolutions and the binning of the sequences, the calculated cross correlation array suffered from a deterioration in SNR, which made the algorithm a restricted resolution, and the precision consequently.
Here we present an updated algorithm for the nonlocal coincidence identification.
Instead of iteratively applying time binning onto discrete time sequences and subsequent FFTs, the coincidences are acquired by counting the subtractions of the time sequences that are within a certain time window, which corresponds to the identification resolution. To acquire the coincidence distribution of correlated photon pairs, a coarse extraction is firstly launched onto a segment of the time sequences to determine the rough peak position of the time difference. Referenced by this identified coarse time difference, the coincidence distribution is then constructed by dealing with the time differences distributed within a much finer time window than the correlation width of the photon pairs after propagation. Based on our algorithm, a much higher time-offset precision is achieved with a flexible resolution down to 1 ps. An analysis of the attainable precision of the algorithm has also been stated by using two commercial event timers that are referenced to a common time scale, which shows a standard deviation of 0.72 ps in a measurement time of 5 s and sampling rate of 12 kcps. In the following, the algorithm is firstly addressed.
Algorithm description
According to quantum theory [12], the joint detection probability of the time-correlated photon pair that are separately distributed to the space-time points ( 1 , 1 ) and ( 2 , 2 ) is proportional to the Glauber second-order correlation function [13,14], where E (-) and E (+) are the negative-and positive-frequency part of the electric field operators at space-time point ( , ), j=1, 2. In the stationary case, the second-order correlation function G (2) only depends on t1 -t2. Without loss of generality, a pair of ideally frequency-anticorrelated photons generated from spontaneous parametric down-conversion (SPDC) process can be considered. The two-photon joint spectral amplitude for degenerate type-II SPDC is given by F(Ω), where Ω is the deviation from the center frequency 0 of the SPDC photons. Assume that the photons propagate through dispersive media of length l1 and l2 to the space points 1 and 2 , the wave number of the photon can then be expressed as . Here α and β are the first-order and the second-order dispersion which are responsible for the wave packet delay and the wave packet broadening, respectively. In the far-field approximation, G (2) (tA-tB) can be approximated as [15] ( where ̄= 2 2 − 1 1 is the overall time delay between the signal and idler photons. Thus by determining the peak position of the (2) function, the time difference between the two space-time points is obtained. Assume the FWHM width of F(Ω) is ∆Ω, the FWHM width of (2) ( 1 − 2 ) in the far-field approximation will be Δ( 1 − 2 ) = ( 1 1 + 2 2 )∆Ω.
In practical experiment, such joint distribution of time detections cannot be directly measured, instead the coincidence counting rate Rc within a certain time window is measured, which can be expressed as [15] (2) 1 2 where T represents the data acquisition time. S( 1 − 2 − 0 ) is the coincidence window function centered at t0, and can be given by a rectangular function where BW denotes the resolution of the coincidence measurement. When BW is chosen to be so small that S( 1 − 2 − 0 ) is equivalent to a delta function, the probability of G (2) (t1-t2) at t0 is obtained. As the detected time events are actually discrete, the coincidence counting rate Rc should be rewritten as
Experimental implementation
To demonstrate this nonlocal temporal correlation identification algorithm, a frequency entangled photon source was utilized. The photon pairs with orthogonal polarizations were departed by a fiber polarization beam splitter [16]. The signal photons were detected by a single photon detector [17,18] and the timing information were recorded by an event timer (A033ET, Eventech Ltd) consequently. While the idler photons were detected after propagation in 10 km fibers. Due to the dispersion in the fibers, the coincidence width of the entangled photons is broadened to 460 ps. At the input of the two event timers, the single photon rates of both paths were 11 kcps, and the pair rate was 620 cps. Fig. 1 shows a coarse coincidence distribution of the entangled photon events acquired in
Results and analysis
Ta=0.05 s. The coarse resolution is set to be 1.25 ns, and the coarse time offset 0 , is 434409100 ps. The peak height approximates 20, in contrast to that of the average background correlations of the uncorrelated photon noise, which is less than 2. Fig. 3 (b). A best time offset precision of 0.72 ps can be realized by our algorithm. One can see that, within a certain acquisition time of photon events, the precision is approximately independent of the calculating resolution applied to the fine time offset identification when the fine resolution is smaller than the intrinsic coincidence width of the entangled photons, and the precision tends to be worse once the resolution is set to be larger than the coincidence width. In contrast to the precision variations, the measured time offsets tend to linearly shift with the debasement of the resolution, where a worse resolution leads to the birth of the obtained time offset deviations. Therefore, a fine resolution smaller than the coincidence width will benefit both the coincidence distribution and the time offset identifications. Furthermore, for the application of high-precision quantum metrologies, a more accurate time offset measurement determines the ultimate performance of the system, and in order to improve the time-offset accuracy, a better coincidence resolution is demanded.
Discussion
In this paper, we demonstrated a nonlocal and high-precision time offset identification algorithm of entangled photon pairs with a tunable resolution down to 1 ps based on two event timers. A precision of 0.72 ps has been achieved in time offsets acquisition. To verify the superiority of our method, we also introduced the algorithm reported by Ho et al. [11] to the nonlocal time offset identification. The initial time sequences were the same with our method stated in experimental setup. A size of N=2 24 time bins are applied to the time offset calculation with a resolution of 18 ps. The precision was calculated to be 58.78 ps denoted by standard deviation of 50 data sets. Moreover, when the resolution smaller than 10 ps, the time offset cannot be distinguished in Ho' s algorithm, which can still be mapped by our algorithm.
Even when the resolution is set to be 1 ps, the time offset information can be extracted accordingly by our method.
In contrast to dedicated coincidence hardware, there is no restriction on the locations of the two correlated photons, and a remote coincidence measurement between the photons can be achieved without deteriorating the resolution. Therefore, it can be used either in local or nonlocal coincidence identifications accordingly. For example, in high-precision quantum clock synchronization, femtosecond-level quantum clock synchronization (QCS) has been demonstrated in laboratory [20]. To implement field clock synchronization using entangled photon sources, high-precision and nonlocal measurement of the clock offset is a prerequisite technique. In our algorithm, by setting the external references of the two event timers both locked to local clocks that are needed to be synchronized, according to the QCS protocols, the clock offset can be deduced from the time offsets of the correlated photons. In case of high-precision ranging protocol utilizing quantum entangled photons, there is no such ranging ambiguity encountered in ultra-pulsed schemes [21]. One way is that by directly comparing the arriving times with and without passing through the path, the distance between two sites can be calculated via the product of the time offsets results and the refractive index of the path.
A minimum single short precision of the ranging protocol is expected to micrometers scale. In case of quantum spectroscopy [22], by means of chromatic group velocity dispersion (GVD) in large-dispersion fibers to resolve the spectrum in time domain, single photon spectrum can be quickly measured. In such application, a nonlocal coincidence protocol with a high resolution is a key factor due to the presence of long fibers, which can be further improved by a coincidence program with 1 ps resolution.
Conclusion
In summary, we have presented a nonlocal time offset identification algorithm of entangled photons based on nonlocal coincidence measurement by using event timers. A sub-picosecond precision of the time offsets measurement has been demonstrated with a tunable coincidence resolution of 1 ps. Note that the single shot RMS resolutions of both event timers are typically at 3 ps, and an optimization of the event timers to less than 1 ps will bring a remarkable upgrade of the coincidence and time offset identification by using our algorithm. Benefitted from the algorithm, high-precision quantum metrology can be greatly enhanced in practical and field applications.
|
2019-11-25T10:05:38.407Z
|
2019-11-04T00:00:00.000
|
{
"year": 2020,
"sha1": "2158946905739b64c1a61f646b14868a97949bb4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1907.08925",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "854c3f383d4ecf954843f57c3ade2b363d6e92b6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine",
"Physics"
]
}
|
256845442
|
pes2o/s2orc
|
v3-fos-license
|
Beyond BMI: cardiometabolic measures as predictors of impulsivity and white matter changes in adolescents
Obesity is characterized by cardiometabolic and neurocognitive changes. However, how these two factors relate to each other in this population is unknown. We tested the association that cardiometabolic measures may have with impulse behaviors and white matter microstructure in adolescents with and without an excess weight. One hundred and eight adolescents (43 normal-weight and 65 overweight/obesity; 11–19 years old) were medically and psychologically (Temperament Character Inventory Revised, Three-Factor Eating Questionnaire-R18, Conners’ Continuous Performance Test-II, Stroop Color and Word Test, Wisconsin Card Sorting Test, Kirby Delay Discounting Task) evaluated. A subsample of participants (n = 56) underwent a brain magnetic resonance imaging acquisition. In adolescents, higher triglycerides and having a body mass index indicative of overweight/obesity predicted a more impulsive performance in Conners’ Continuous Performance Test-II (higher commission errors). In addition, higher glucose and diastolic blood pressure values predicted increments in the Three-Factor Eating Questionnaire-R18 emotional eating scale. Neuroanatomically, cingulum fractional anisotropy showed a negative relationship with glycated hemoglobin. The evaluation of the neurocognitive differences associated with obesity, usually based on body mass index, should be complemented with cardiometabolic measures. Supplementary Information The online version contains supplementary material available at 10.1007/s00429-023-02615-0.
Introduction
Overweight and obesity represent a major public health concern. Since 1975, the prevalence of excess weight among children and adolescents has more than quadrupled.
Psychological, biological, and sociocultural factors can contribute to the development of overweight/obesity (World Health Organization 2020). Impulsivity is a multidimensional construct that can correlate with the expression of excess weight, since it may lead to a rapid and unplanned reaction towards food. Impulsivity involves urgency, lack of perseverance and premeditation, and sensation seeking (Mobbs et al. 2010). Importantly, it is a broad concept that has diverse traits and manifestations. A systematic review (Liang et al. 2014) highlighted that, although the literature had mixed results, most studies found more pronounced impulsive behaviors in children and adolescents with overweight/obesity.
Overweight/obesity is also associated with neuroanatomic changes. White matter (WM) differences in this population have been studied with diffusion tensor imaging (DTI). DTI research evidenced WM alterations related to excess weight. Two common measures of WM microstructure are fractional anisotropy (FA) and mean diffusivity (MD). Lower FA and higher MD may reflect disturbances in WM microstructure. In adults, many studies found a negative association between body mass index (BMI) and FA in several WM tracts (Verstynen et al. 2012;Xu et al. 2013;Papageorgiou et al. 2017;Rodrigue et al. 2019). A positive association between BMI and MD was also described (Xu et al. 2013); although another study did not find associations between BMI and MD (Papageorgiou et al. 2017). In children and adolescents, there are mixed results regarding BMI and WM microstructure. There was described a positive (Ou et al. 2015), negative (Alarcón et al. 2016), and no relationship (Alosco et al. 2014) between BMI and FA. Concerning BMI and MD, no significant relationship was found in two studies (Ou et al. 2015;Alarcón et al. 2016), while another observed higher MD values in participants with excess weight (Carbine et al. 2019).
Moreover, not only BMI is related to WM microstructure. Obesity is usually accompanied by cardiometabolic changes that might have a potential impact on neural integrity. How WM integrity is negatively related to cardiometabolic measures has been studied under a broader approach: metabolic syndrome. Metabolic syndrome has been related to WM microstructure (Segura et al. 2009) and macrostructure (Morys et al. 2021), and in adolescents (Yau et al. 2012) and adults (Segura et al. 2009). Studies evaluating the independent effect that each cardiometabolic variable may have on WM are sparse and focused on adults (Verstynen et al. 2013;Lou et al. 2014;Cox et al. 2019;Johnson et al. 2019).
While BMI is a commonly used indirect measure of overweight/obesity, it is an incomplete diagnostic tool (Barlow 2007).
Thus, to favor an integrative assessment of overweight/ obesity, we complemented BMI with cardiometabolic variables. Despite the emerging interest in the study of cardiometabolic profile as a possible biomarker of impulsivity-especially focused on mental health (Conklin and Stanford 2008;Kavoor et al. 2017;Messaoud et al. 2017)-, the relationship of cardiometabolic measures with impulsivity and neuroanatomical differences in adolescents with overweight/obesity remains unknown.
The present study evaluates, in adolescents with normal weight and overweight/obesity, the association of cardiometabolic risk factors with (1) impulsive behaviors and (2) WM microstructure. We hypothesize that greater cardiometabolic risk factors might be related to (1) more impulsive behaviors, and (2) to WM microstructure differences. Specifically, we expect to find lower FA and higher MD values in association with greater cardiometabolic risk.
Participants
We recruited 108 adolescents (mean age = 15 ± 2.02 years old) from public primary care centers. From this sample, 53 participants were already included in a previous work regarding inflammation and grey matter (Prats-Soteras et al. 2020). Inclusion criteria involved being from 11 to 19 years old and having a BMI indicative of normal weight, overweight or obesity. Participants were classified into two groups: normal-weight (n = 43) and overweight/obesity (n = 65). For that purpose, Cole and Lobstein centile curves (Cole and Lobstein 2012), that provide age and sex-specific cut-off points from 2 to 17 years old, were used to classify underage participants. In participants aged 18 and 19, according to the World Health Organization's classification (World Health Organization 2020), those with a BMI between ≥ 18.5 and < 25 kg/m 2 were classified as normalweight, and those with BMI ≥ 25 kg/m 2 were classified as overweight/obesity. Exclusion criteria were (1) being prepubescent and (2) having a psychiatric, neurological, developmental, or systemic diagnosis. Participants did not take any chronic medication. Participants aged 18 and 19 were excluded if they met metabolic syndrome criteria (Alberti et al. 2009). For underage participants, and given the lack of general consensus to define pediatric metabolic syndrome (Yau et al. 2012), we used the cut-off points available from the Expert Panel on Integrated Guidelines for Cardiovascular Health and Risk Reduction in Children and Adolescents (De Jesus 2011) as exclusion criteria. Finally, participants showing global cognitive impairment (i.e., scalar score < 7 in the Weschler Adults Intelligence Scale-III/Weschler Intelligence Scale for Children-IV vocabulary subtest (WAIS-III/ WISC-IV)), significant anxiety and depression symptoms (i.e., anxiety or depression symptoms total score ≥ 11 in the Hospital Anxiety and Depression Scale), and binge eating behaviors (i.e., score ≥ 20 in the Bulimia Inventory Test of Edinburgh) were also excluded.
This study was approved by the Institutional Ethics Committee. The research was conducted in accordance with the Helsinki Declaration. Written informed consent was obtained from all participants, or their respective legal guardian in underage participants, prior to entry into the study.
Procedure
Participants were randomly contacted by phone and briefly interviewed about general health aspects. The first day, potential candidates were cited to undergo a complete medical evaluation and a fasting blood sample extraction in the Pediatric Endocrinology Unit at a Public Hospital. Subjects not presenting any medical comorbidity were neuropsychologically evaluated in the next days. Participants without claustrophobia or metal prothesis also underwent a brain magnetic resonance imaging (MRI) acquisition on a 3T MAGNETON Trio (Siemens, Germany) at a Public Hospital.
Anthropometric and cardiometabolic measures
All anthropometric measurements were taken by a trained nurse from participants in light clothing without shoes: waist circumference, height (Holtan Limited Harpenden Stadiometer), weight (Seca 704 s) and BMI (kg/m 2 ), which was transformed into BMI z-score. Pubertal stage was determined according to the Tanner scale of sexual maturity. Cardiometabolic measures included total cholesterol, high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein cholesterol (LDL-c), triglycerides, glucose and glycated hemoglobin. Diastolic (DBP) and systolic blood pressure were manually determined twice (Riester Big Ben Round), and the mean of both determinations was used for posterior analyses. A description of cardiometabolic measures can be found in Supplementary Material-1a.
Impulsivity measures
The neuropsychological evaluation included: Temperament Character Inventory Revised (TCI-R), Three-Factor Eating Questionnaire-R18 (TFEQ-R18), Conners' Continuous Performance Test II (CPT-II), Stroop Color and Word Test, Wisconsin Card Sorting Test (WCST) and Kirby Delay Discounting Task (DDT). We used the following scores to characterize impulsivity: higher scores in the TCI-R novelty seeking subscale, TFEQ-R18 uncontrolled and emotional eating scales, CPT-II commissions errors, WCST perseverative errors and DDT geometric mean; and lower scores in Stroop Interference score. A description of the neuropsychological assessment can be found in Supplementary Material-1b.
Image acquisition and diffusion-tensor imaging processing
DTI is a neuroimaging analysis technique that quantifies the directionality of water molecules in the brain (Basser et al. 1994). A common measure used in DTI studies is FA, which reflects the orientation dependence of water movement, and hence gives information about WM microstructure. FA measurement ranges from 0 to 1. Higher FA values may suggest well myelinated and undamaged tracts constraining the directional diffusion of water to be parallel, whereas lower FA values may reflect disturbed WM microstructure (Kullmann et al. 2015). Since FA is unspecific to the source driving the changes in WM microstructure, we tested a complementary diffusivity scalar. MD is the average of all eigenvalues with higher values meaning exacerbated cell permeability and thus WM impairments.
Fifty-six participants (25 normal-weight and 31 overweight/obesity) underwent an MRI. The parameters used to acquire the diffusion-weighted images are detailed in Supplementary Material-1c. Image processing was performed in FMRIB Software Library (FSL) v.6.0.4 and BrainSuite v.18a1. Estimated total intracranial volume was obtained for each participant using Freesurfer v.6.0 recon-all pipeline. An explanation of the imaging processing procedure can be found in Supplementary Material-1d.
Two approaches were considered in the DTI analysis: region of interest (ROI) and whole brain. Using the JHU ICBM-DTI-81 White-Matter Labels Atlas and JHU White Matter Tractography Atlas, five ROIs previously implicated in obesity and impulsivity were defined: cingulum, corona radiata, corpus callosum, inferior fronto-occipital fasciculus (IFOF) and internal capsule (Verstynen et al. 2012;Yau et al. 2012Yau et al. , 2014Kullmann et al. 2015;Ou et To explore additional relationships between the cardiometabolic profile and WM microstructure, whole-brain contrasts were implemented in FSL randomise with 10,000 iterations and a Threshold-Free Cluster Enhancement approach (Smith et al. 2006). Age, sex, BMI z-score and estimated total intracranial volume were modeled as nuisance variables. Due to the exploratory nature of these tests, a Bonferroni correction was applied. This method is more restrictive than the False Discovery Rate (FDR) correction used in the ROI analysis. Thus, for whole-brain analysis, statistical significance was set at P < 0.0015 (8 cardiometabolic variables × 2 WM measures × 2 contrasts).
Data treatment and statistical analyses
Data manipulation and statistical procedures were performed in R statistical package v.4.0.5 and RStudio v.1.2.5033. Normality was determined with Shapiro-Wilk tests. Positively and non-normally distributed variables were transformed into their logarithmic form prior to any analyses. Briefly, we performed three different types of analyses: (a) Mean/ median differences between BMI groups in all variables, (b) multiple regression: impulsivity or DTI measures ~ cardiometabolic variables + covariates, and (c) median differences of impulsivity measures between high/low cardiometabolic groups. Analysis (a) was performed to describe between-group differences. Analysis (b) was performed with all variables of interest, regardless of whether they were or not significantly different between groups in analysis (a). Analysis (c) was performed with only those variables that were significant in analysis (b).
Specifically, (a) independent sample T-tests, Mann-Whitney U tests and Chi-square tests were used to analyze between-group differences. Effect sizes were calculated using R packages effsize and rcompanion. Missing values for every variable were reported in Tables 1 and 2. We repeated these analyses for the subsample of participants that underwent an MRI acquisition (Supplementary Material, Tables S1 and S2). (b) Multiple regression analyses were performed to determine which cardiometabolic variables were the strongest predictors of impulsivity measures (covariates: age, sex, BMI z-score and intelligence quotient estimation (WAIS-III/WISC-IV vocabulary subtest)) and FA/ MD differences (covariates: age, sex, BMI z-score and estimated total intracranial volume) in 5 WM tracts. Variance inflation factor (VIF) was used to assess multicollinearity within the predictors. To avoid a misestimation of the regression coefficients, total cholesterol was removed for having a VIF > 10. Confidence intervals at 95% for the regression coefficients were calculated as follows: [βi − 1.96 × SE(βi), βi + 1.96 × SE(βi)]. Multiple testing was controlled by FDR for 17 models (i.e., 7 impulsivity, 5 ROI-FA and 5 ROI-MD models). Only those with FDR < 0.05 were considered significant. (c) To provide a better visualization of the relationship between cardiometabolic measures with impulsivity, and only for those cardiometabolic regressors that were statistically significant in the multiple regression analysis, we defined two groups: participants with lower (≤ percentile 50th measure of interest) and higher (> percentile 50th measure of interest) cardiometabolic values. Then, the Mann-Whitney U test was used to analyze between-group differences in impulsivity test medians, and their ratio was calculated.
Results
Groups were not significantly different for sex, age, bulimia, anxiety, and depression symptoms (P > 0.05). As expected, the overweight/obesity group had a higher BMI z-score and waist circumference (mean = 1.95 and 95.08, respectively; P < 0.01) than their peers with normal weight (mean = − 0.06 and 69.85, respectively). Significant between-group differences were also found in the lipid profile: the overweight/ obesity group had lower HDL-c values (P < 0.01) and higher triglycerides (P < 0.01) than the normal-weight group. Neither LDL-c (P = 0.7) nor total cholesterol (P = 0.2) were significantly different. Differences in demographic, anthropometric and cardiometabolic measures are detailed in Table 1. Regarding impulsivity measures, participants belonging to the overweight/obesity group performed higher commission errors in the CPT-II test (P < 0.01). No significant differences between groups were found in the other impulsivity measures (P > 0.05). Table 1 provides a summary of impulsivity measures for both groups.
Cardiometabolic and impulsivity measures
After FDR correction, two out of the seven impulsivity models remained statistically significant: CPT-II commission errors [R 2 adj = 0.24; R 2 adj 95% CI = (0.11, 0.36), FDR = 0.0016] and TFEQ-R18 emotional eating [R 2 adj = 0.18; R 2 adj 95% CI = (0.06, 0.29); FDR = 0.012] models. The CPT-II model showed that for a 1% increase in triglycerides there was an increment of 0.04 commission errors in CPT-II (P = 0.018), and that for each unit of BMI z-score there was an increment of 2.1 commission errors (P = 0.004). Also, the TFEQ-R18 emotional eating model indicated that for each unit of glucose (mmol/L) and DBP (mmHg) there was an increment of 1.11 (P = 0.016) and 0.06 (P = 0.02) in this scale score, respectively. Table 2 provides a summary of the significant models. Correlations between cardiometabolic and impulsivity measures are included in Supplementary Material (Table S3).
Discussion
We examined the relationship of cardiometabolic measures with impulsive behaviors and WM differences in adolescents with normal-weight and overweight/obesity. First, we explored the relationship that cardiometabolic measures might have with impulsivity. We found that triglycerides were associated with a more impulsive performance in the CPT-II test (higher commission errors), and that glucose and DBP were associated with higher scores on the TFEQ-R18 emotional eating scale. Second, we assessed whether cardiometabolic variables were related to WM microstructure in five ROIs. We found that FA values in the cingulum were negatively associated with glycated hemoglobin.
Impulsivity
The most common ways to evaluate impulsivity are through rating scales and performance-based tests. Rating scales measure self-reported features of impulsive behavior over time, whereas performance-based tests provide an objective (Emery and Levine 2017). Impulsivity has been conceptualized as a broad trait composed of different phenotypes that manifest in a similar manner (Sharma et al. 2014). Within our sample, higher triglycerides were related to greater commission errors in the CPT-II test, which is also the only impulsivity measure (Shaked et al. 2020) significantly different between groups in the t-tests. Performance-based tests can never assess an isolated cognitive domain. It is possible that, compared to other tests, the CPT-II evaluates more directly impulsivity because it leads to more automatic responses and the capacity of inhibition becomes fundamental. To date, there is a lacking consensus about the relationship between impulsivity and cardiometabolic measures. A study in a large healthy sample (Sutin et al. 2010) evaluated the relationship of personality traits (NEO Personality Inventory) with lipid profile. They found that impulsivity was positively associated with triglycerides, while self-discipline and deliberation were negatively associated with triglycerides and positively with HDL-c. Excitementseeking was not significantly associated with lipid profile. Conversely, another study (Peterfalvi et al. 2019) did not find any relationship between the lipid profile (total cholesterol, HDL-c, LDL-c, triglycerides) and any CPT-II parameter in adults with major depression disorder, whereas lower HDL-c values did predict poorer shifting (WCST) abilities in this population. Although, as mentioned, there is a disparity in the literature, our results agree with previous studies that reported associations between cardiometabolic risk factors and impulsivity (Pozzi et al. 2003;Sutin et al. 2010). However, more research in clinical and healthy populations is needed to assess the nature of this relationship.
In addition, higher glucose and DBP values were related to higher scores on the TFEQ-R18 emotional eating scale. Emotional eating leads to the consumption of highly palatable and energy-dense foods-comfort foods-as a mechanism to cope with negative emotions. Given this, we hypothesize that such an eating pattern is accompanied by immediate glucose spikes and, at a mid/long term, with higher basal glucose levels. Also, negative emotions as a form of stress may be related to higher blood pressure. Particularly, the obesogenic environment and the easy access to palatable foods may be a key factor in this eating pattern, and future research studying its possible mediator effect may help to target specific public health actions.
Overall, our results support our first hypothesis. In our data, cardiometabolic risk factors are associated with impulsivity. Importantly, this association was found with cardiometabolic variables of different nature: blood pressure, glucose, and triglycerides.
White matter microstructure
The present study provides new evidence regarding WM microstructure and cardiometabolic measures in adolescents with and without excess weight. Our results suggested an inverse association between glycated hemoglobin and FA values in the cingulum; a WM tract that has been previously related to obesity (Verstynen et al. 2012;Kullmann et al. 2015;Papageorgiou et al. 2017;Carbine et al. 2019). This finding is consistent with a recent study in healthy adults (Repple et al. 2021) that demonstrated that non-pathological variations in glycated hemoglobin are related to WM microstructure. Regarding our second hypothesis, we expected to find more cardiometabolic components associated with WM microstructure. It is possible that glycated hemoglobin, even at levels much below the prediabetes, works as an early indicator of cardiometabolic risk (Veeranna et al. 2011), whereas more morbid levels may be required for the other cardiometabolic components to show an association with WM microstructure. Also, and since our research targets adolescencea period where individuals undergo several developmental processes, including WM maturation (Barnea-Goraly et al. 2005)-future longitudinal studies are necessary to see if our findings are related to brain maturation and myelination processes that occur in adolescence.
Limitations and future directions
This study has some limitations that should be acknowledged: (1) given our cross-sectional design, we could not assess causality in our results, (2) the smaller sample size used for the neuroimaging analyses limited their statistical power, and (3) the diffusion-weighted images were acquired with only 30 directions. Future studies including larger sample sizes and a longitudinal approach are needed to confirm whether our findings are consistent in different age spectrums and persist over time.
Conclusions
Our findings show that, in adolescents, triglycerides and having a BMI indicative of overweight/obesity predict a more impulsive performance in the CPT-II test (higher commission errors). In addition, glucose and DBP predict increments in the TFEQ-R18 emotional eating scale. Neuroanatomically, the cingulum FA shows a negative association with glycated hemoglobin and BMI. Our study provides a comprehensive overview of the relationship between cardiometabolic risk factors typically related to overweight/obesity and neurocognitive variables; and invites us to look beyond the BMI when evaluating possible behavioral, cognitive, and neuroanatomical differences associated with overweight/ obesity.
|
2023-02-15T06:17:52.633Z
|
2023-02-13T00:00:00.000
|
{
"year": 2023,
"sha1": "9950c389b61c3eb9145d7a6834dbca4d45e9914f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00429-023-02615-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "9e1bc46ed1df21dd92a3a4b09228670d89ab791c",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
10571154
|
pes2o/s2orc
|
v3-fos-license
|
Repetitive Treatment with Diluted Bee Venom Attenuates the Induction of Below-Level Neuropathic Pain Behaviors in a Rat Spinal Cord Injury Model
The administration of diluted bee venom (DBV) into an acupuncture point has been utilized traditionally in Eastern medicine to treat chronic pain. We demonstrated previously that DBV has a potent anti-nociceptive efficacy in several rodent pain models. The present study was designed to examine the potential anti-nociceptive effect of repetitive DBV treatment in the development of below-level neuropathic pain in spinal cord injury (SCI) rats. DBV was applied into the Joksamli acupoint during the induction and maintenance phase following thoracic 13 (T13) spinal hemisection. We examined the effect of repetitive DBV stimulation on SCI-induced bilateral pain behaviors, glia expression and motor function recovery. Repetitive DBV stimulation during the induction period, but not the maintenance, suppressed pain behavior in the ipsilateral hind paw. Moreover, SCI-induced increase in spinal glia expression was also suppressed by repetitive DBV treatment in the ipsilateral dorsal spinal cord. Finally, DBV injection facilitated motor function recovery as indicated by the Basso–Beattie–Bresnahan rating score. These results indicate that the repetitive application of DBV during the induction phase not only decreased neuropathic pain behavior and glia expression, but also enhanced locomotor functional recovery after SCI. This study suggests that DBV acupuncture can be a potential clinical therapy for SCI management.
Introduction
One of the pain therapies is the use of chemical stimulation into an acupuncture point to produce an analgesic effect and to reduce pain severity. In this regard, the injection of diluted bee venom (DBV) into an acupuncture point, termed apipuncture, has been used clinically in traditional Korean medicine to produce a significant analgesic effect in human patients [1,2]. Many experimental studies have demonstrated that injecting DBV into the Joksamli (ST36) acupuncture point produces a robust anti-nociceptive effect in various pain animal models, such as the writhing test, the formalin test, the carrageenan-induced inflammatory pain test and arthritis models [3][4][5][6]. Furthermore, we demonstrated that this DBV-induced anti-nociceptive effect is associated with the activation of descending coeruleospinal noradrenergic pathways, which subsequently activate spinal alpha-2 adrenoceptors [3,7,8]. DBV stimulation of ST36 also inhibits the activation of spinal astrocytes in a mouse formalin test [3]. In particular, we showed that a single injection of DBV (0.25 mg/kg) into ST36 temporarily alleviated thermal hyperalgesia [9] and that repetitive stimulation using DBV significantly alleviated neuropathic pain-induced mechanical and cold allodynia in a sciatic nerve chronic constrictive injury (CCI) model of rats [7,8].
However, the precise roles of repetitive DBV treatment in the induction and maintenance phases of central neuropathic pain have not been examined.
Spinal cord injury (SCI), which is caused by direct traumatic damage to the spinal cord, has been related to many clinical complications, including functional disability, urinary tract problems, autonomic dysreflexia, altered sensations and pain [10][11][12]. These patients often have experiences of several types of pain; central chronic pain syndrome, which exhibits mechanical allodynia and thermal hyperalgesia, is one of the most common causes for a reduced quality of life [13,14]. Especially, below-level pain after SCI represents a clinically-significant symptom of central neuropathic pain that is very difficult to treat effectively [15,16]. Several experimental models of SCI have been developed to determine the detailed mechanisms and therapeutic strategies to treat SCI. The most widely-used models are rat SCI contusion, excitotoxic and hemisection models [14,[17][18][19]. In this study, the rat SCI hemisection model was chosen, because this model is widely used to verify the mechanism behind SCI-induced chronic pain development.
Recently, a number of studies have reported the potential role of spinal astrocytes and microglia in both postoperative pain and neuropathic pain [20][21][22][23][24]. Moreover, intrathecal treatment with glia inhibitors, such as minocycline (a microglia inhibitor) and propentofylline (a glia modulating agent), reduced below-level neuropathic pain behaviors in SCI rats [25][26][27]. However, although there is some evidence that glial cells are activated during the development of SCI-induced neuropathic pain, the precise mechanisms underlying glial activation, particularly in lumbar segments distant from the spinal cord injury site, are poorly understood.
Based on the above-mentioned studies, we hypothesized that repetitive DBV treatment into an acupoint reduces SCI-induced mechanical allodynia and thermal hyperalgesia and that this reduction is mediated by the suppression of spinal astrocyte or microglia activation. Thus, the present study was designed to examine the following: (1) whether repetitive DBV acupuncture point treatment for five days during the induction and maintenance phases following thoracic 13 (T13) spinal cord hemisection would produce a more potent and prolonged analgesic effect compared to controls that received repetitive injections of vehicle; (2) whether the anti-nociceptive effect of DBV is mediated by the modulation of spinal astrocyte and microglia activation; and (3) whether repetitive DBV treatment affects motor functional recovery in SCI rats.
Effect of Repetitive DBV Treatment during the Maintenance Phase of Spinal Cord Injury-Induced Pain
Spinal cord hemisection produced prominent mechanical allodynia and thermal hyperalgesia, as shown in Figure 1. During the maintenance phase, repetitive DBV treatment was administered twice a day from 15 to 20 days post-surgery. Repetitive DBV treatment significantly increased the decrease in the paw withdrawal threshold in the ipsilateral hind paw by SCI surgery at three and five days after DBV treatment (* p < 0.05, compared with saline-treated groups) ( Figure 1A); however, mechanical allodynia of the contralateral paw did not show any change compared to saline-treated animals ( Figure 1B).
As shown in Figure 1C, repetitive DBV treatment during the maintenance phase increased paw withdrawal latency to noxious thermal stimulus only Day 1 after DBV treatment compared with the repetitive saline-treated group (* p < 0.05). In the contralateral paw, repetitive DBV treatment had no effect on SCI-induced thermal hyperalgesia ( Figure 1D).
Effect of Repetitive DBV Treatment during the Induction Phase of Spinal Cord Injury-Induced Pain
The withdrawal response threshold to innocuous mechanical stimuli and withdrawal response latency to noxious thermal stimuli were measured in repetitive DBV and vehicle treatment groups during the induction phase (twice a day from one to five days post-surgery, Figure 2). Groups treated with saline (vehicle, n = 8) in the ipsilateral paw showed an approximately 6 to 7-g threshold for mechanical allodynic behaviors in both paws by five days post-surgery. The peak was reached at Day 14 (10 days after the termination of injection). Repetitive DBV-treated groups (DBV, n = 8) displayed potently suppressed pain induction in the ipsilateral paw as early as five days post-surgery (* p < 0.05, ** p < 0.01 and *** p < 0.001, compared with saline-treated groups; Figure 2A). In the contralateral paw, DBV-treated groups did not display significantly altered SCI-induced mechanical allodynia compared to the saline-treated control group ( Figure 2B). Repetitive DBV injection for five consecutive days during the induction phase significantly increased the SCI-induced decrease in the paw withdrawal latency to noxious thermal stimulus beginning seven days post-SCI surgery compared with the repetitive saline-treated group (* p < 0.05; Figure 2C). In the contralateral paw, groups treated with repetitive DBV for five days showed a tendency toward increasing the paw withdrawal latency after DBV treatment; however, this increase was not significant ( Figure 2D). -surgery, twice a day) increased the paw withdrawal threshold by mechanical stimuli for the period of DBV treatment in the ipsilateral hind paw (* p < 0.05 compared to the vehicle-treated group); (B) whereas the contralateral paw did not display any differences compared to the vehicle-treated group; (C) repetitive DBV treatment reversed the spinal cord injury (SCI)-induced decrease in the paw withdrawal latency (s) to noxious thermal stimuli compared to the vehicle-treated group (* p < 0.05); (D) no significant difference in the paw withdrawal latency was observed in the contralateral paw between the DBV and vehicle-treated groups. Two-way ANOVA followed by Bonferroni's test. PRE; one day before SCI surgery, POST; 15 days after SCI surgery. Tx.: DBV or vehicle treatment. Repetitive daily treatment with DBV (twice a day from one to five days post-surgery) suppressed the induction of SCI-induced mechanical allodynia in the ipsilateral hind paw compared with vehicle-treated rats (* p < 0.05, ** p < 0.01, *** p < 0.001); (B) whereas the contralateral paw did not display any differences; (C) repetitive treatment with DBV reversed the SCI-induced decrease in the paw withdrawal latency (s) to noxious thermal stimuli compared to the vehicle-treated group (* p < 0.05); (D) no significant difference in the paw withdrawal latency was observed in the contralateral paw between the DBV-and vehicle-treated groups. Two-way ANOVA followed by Bonferroni's test. PRE: one day before SCI surgery; Tx.: DBV or vehicle treatment.
Effect of Repetitive DBV Treatment during the Induction Period on Glia Expression after Spinal Cord Injury
To determine how repetitive DBV treatment might affect glia, astrocyte and microglia expression, the Western blot assay was performed on the lumbar spinal cord dorsal horn at 14 days after SCI surgery. Astrocytes can respond quickly to various pathological stimuli, and this response is related to an increase in GFAP. Repetitive saline injection during the induction phase significantly increased GFAP expression in the spinal dorsal horn compared with that of normal animals (** p < 0.01), and repetitive DBV treatment at ST36 suppressed SCI-enhanced GFAP expression (# p < 0.05), suggesting that DBV treatment has a potent anti-nociceptive effect on SCI-induced central neuropathic pain. However, the contralateral spinal cord dorsal horn did not reproduce this suppressive effect of DBV ( Figure 3A). In Figure 3B, repetitive saline injection also showed a significant increase in Iba-1 expression in the ipsilateral spinal cord dorsal horns compared to that of normal animals (*** p < 0.001), and DBV-treated rats displayed significantly decreased Iba-1 expression compared to the vehicle-treated group (## p < 0.01). Vehicle-treated rats displayed significantly increased GFAP expression in the spinal cord compared to normal rats (** p < 0.01), and repetitive DBV-treated rats displayed significantly decreased GFAP expression in the spinal cord compared to the vehicle-treated group (# p < 0.05). (B) In the ipsilateral spinal cord, Iba-1(microglia marker) expression increased following SCI surgery compared to the Iba-1 expression level in normal rats (*** p < 0.001), and repetitive DBV-treated rats displayed significantly decreased Iba-1 expression compared to the vehicle-treated group (## p < 0.01).
Effect of Repetitive DBV Treatment on Motor Function Recovery after Spinal Cord Injury
Before hemisection, the Basso, Beattie and Bresnahan (BBB) scores were 21 in the repetitive DBV and saline groups ( Figure 4). Immediately upon emerging from anesthesia, hemisected animals showed a dramatic loss of ipsilateral hindlimb function as indicated by BBB scores of zero for each group. From Days 1 to 5 after hemisection, rats treated with DBV during the induction phase displayed faster functional recovery rates throughout the four-week period following SCI surgery than those treated with vehicle (** p < 0.01 and *** p < 0.001 compared with saline-treated groups). *** ** ** ** ** ** ** *** Days after SCI surgery BBB rating Figure 4. Graphs illustrating the effects of repetitive DBV or vehicle treatment during the induction phase on the recovery of motor function in SCI rats. The Basso, Beattie and Bresnahan (BBB) scores were 21 in all SCI groups before hemisection. Immediately after recovering from anesthesia, hemisected animals appeared to show a loss of ipsilateral hindlimb function as indicated by BBB scores of zero for each group. Animals treated repeatedly with DBV from Days 1 to 5 after hemisection showed faster functional recovery rates than those treated with vehicle (** p < 0.01, *** p < 0.001). PRE: one day before SCI surgery; Tx.: DBV or vehicle treatment.
Discussion
This study demonstrated that repetitive injections of DBV into the Joksamli acupuncture point during the induction phase (one to five days after SCI) of below-level neuropathic pain significantly produce a more potent and prolonged anti-nociceptive effect compared to repetitive DBV treatment during the maintenance phase (15 to 20 days after SCI) or repetitive injections of the vehicle. Importantly, repetitive treatment with DBV had less of an effect when administered during the maintenance phase. Acupuncture therapy, including manual acupuncture, electro-acupuncture and DBV therapy, produces a gradually increasing anti-nociceptive effect in chronic pain patients when injected repetitively over the course of several days, weeks or months [28]. Huang et al. found that high-frequency electro-acupuncture treatment twice a week for four weeks produced a significant reduction in mechanical hyperalgesia by the third and fourth weeks of treatment, whereas it caused no effect on thermal hyperalgesia in a chronic inflammatory pain model of rats [29]. In general, bee venom (BV) contains a number of potential pain-related substances, including melittin, histamine and phospholipase A2, and this mixture of biologically-active substances is able to induce toxic effects, contributing to certain clinical signs or symptoms of envenomation. Human responses to BV include small edema, redness, extensive local swelling, anaphylaxis, systemic toxic reaction and pain [30]. Thus, the use of BV always requires great care.
By contrast, BV has been also used in Oriental and Korean medicine to reduce pain and inflammation. We demonstrated previously that repetitive DBV injection into the Joksamli acupuncture point twice a day for two weeks could significantly decrease mechanical and cold allodynia and thermal hyperalgesia in CCI-induced neuropathic rats [7,8], whereas single DBV injection into the acupuncture point temporarily suppressed thermal hyperalgesia (up to 45 min after DBV injection), but not mechanical allodynia in CCI rats [9]. Our results demonstrate that repetitive injections of DBV into the acupuncture point play an important role during early induction, but not during maintenance of pain behaviors associated with central neuropathic pain conditions. To exclude a possible influence of the temporary anti-nociceptive effect of DBV (observed immediately after each daily DBV injection) on the long-term effects of repetitive DBV treatment, we performed the pain behavioral tests in the afternoon between 6 and 10 h after DBV injection. During the sixth to 10th hour after DBV injection, the temporary anti-nociceptive effect was no longer shown, thus the collected anti-nociceptive data indicate the net long-term effect produced by repetitive DBV treatment. The present study also demonstrates the significance of the appropriate time point for drug administration in SCI patients. Preemptive or initiatory medication in the spinal cord level has not been widely examined in SCI patients, because administering drug preemptively in these patients is almost impossible because of the unpredictable clinical occurrence of chronic pain [31]. However, it might be important to detect situations where the possibility for the development of below-level neuropathic pain is high, and the ability to alter this situation would be of considerable clinical value. Although predicting which patients suffering from SCI will go on to develop chronic central neuropathic pain is impossible, our results demonstrate that a critical time window exists in which early treatment with DBV would be effective. Recently, Tan et al. suggested that inhibition of early neuroimmune events could have a critical impact on the induction of long-term pain phenomena after SCI [32]. Marchand et al. also demonstrated that early treatment with etanercept, a tumor-necrosis-factor inhibitor, caused the reduction of mechanical allodynia after SCI, whereas delayed treatment of etanercept had no significant effect [27]. These results are consistent with the time-dependent effect of DBV observed in our present study. Collectively, these findings, including the present results, imply that repetitive DBV acupuncture therapy during the induction phase is able to produce a powerful analgesic effect on chronic central neuropathic pain and suggest the clinical use of repetitive DBV treatment as a potential novel strategy in the early management of SCI-induced neuropathic pain.
Moreover, the findings of this study demonstrate that the suppression of glial cell activation in ipsilateral, but not contralateral, spinal cord dorsal horn is closely related to the anti-allodynic and anti-hyperalgesic effects of repetitive DBV treatment during the induction phase in SCI rats. Glial cells, in particular astrocytes and microglia, have been known as important modulators or key factors of nociception. Although glia have been traditionally recognized to have simple functions that are necessary for neuronal communication in normal conditions, they are now recognized as key modulators of plasticity changes in pathophysiological conditions. Furthermore, glia can interact directly with neurons, and then, they also play important neuromodulatory and/or neuroimmune roles in the CNS [33,34]. Recently, several studies demonstrated the involvement of glia activation in chronic pain conditions, including inflammatory pain, peripheral and central neuropathic pain [20,21,23,35]. Direct metabolic inhibitors of glia activation, like minocycline and propentofylline, have been shown to have an anti-nociceptive effect in SCI rats [25][26][27]32]. Moreover, the blockade of astrocyte gap junctions by the intrathecal injection of carbenoxolone during the induction period of SCI-induced neuropathic pain reduced the development of below-level mechanical allodynia and thermal hyperalgesia and suppressed astrocytic activation in spinal cord [36]. Thus, unsurprisingly, astrocyte activation may contribute to the induction of central neuropathic pain in SCI rats. DBV stimulation of the ST36 acupuncture point also suppressed the activation of spinal cord astrocytes and reduced nociceptive behaviors in the mouse formalin test [3]. Our results showed that GFAP and Iba-1 expression in the ipsilateral lumbar 4 (L4) to L6 segments significantly increased 14 days after SCI in vehicle-treated SCI rats. However, interestingly, the increase in GFAP and Iba-1 expression was significantly decreased by repetitive DBV treatment during the induction phase. In the contralateral L4 to L6 segments, GFAP and Iba-1 expression in vehicle-treated SCI rats did not differ from that in normal animals, and repetitive DBV treatment during the induction phase did not modify GFAP and Iba-1 expression in contralateral dorsal horn examined in the present study. This result indicates that glia activation in the ipsilateral lumbar spinal dorsal horn could be caused by the damage in a spinal cord injured segment (T13) and that the mechanism underlying this remote activation of glia could ultimately lead to the development of below-level neuropathic pain. Thus, these findings suggest that the early activation of astrocytes and microglia can initiate the induction of below-level neuropathic pain.
Finally, the BBB open field locomotor test was used to examine the functional recovery by repetitive DBV acupuncture point treatment during the induction phase in SCI rats. Because the ipsilateral hindlimb was operated on in the hemisection model, we only recorded the motility of the ipsilateral hindlimb based on the locomotor rating set by Basso et al. [37]. Repetitive DBV treatment during the induction phase evoked a significant and rapid recovery of motor function. Thus, early repetitive DBV treatment presents the advantage of motor function recovery, because DBV-treated rats appeared to present facilitated motor functional recovery from Day 3 after SCI. Significant functional recovery was observed after repetitive DBV treatment during the induction period; however, hindlimb deficits in the saline control group were relatively prolonged for 28 days. The demyelination of axons in the injured spinal cord is a known cause of motor function deficits, and remyelination or regeneration by natural formation are extremely limited due to glial scarring and growth inhibitors contained in the environment [38]. Glia scarring is the prominent factor that inhibits axonal regeneration in the central nervous system. By limiting the formation of astrocyte scars, we can facilitate axonal regeneration physically and biochemically [39]. Another possibility is the rerouting or plasticity of injured spinal cord. Iwashita et al. reported that a partial recovery is available due to a rerouting mechanism in untreated SCI animals [40]. Moreover, neuroplastic changes of the CNS in response to injury have been shown to be highly susceptible to intervention during the post-injury phase [41]. The observed functional recovery here might have also been partially evoked by the reintroduction of afferent feedback signals into the injured spinal cord by the rerouted nerve. One report indicated that animals under tactile stimulation, such as direct mechanical disturbance or electrical stimulation, resulted in greater locomotion restoration [42]. Therefore, the DBV-induced constant chemical stimulation at an acupuncture point may have facilitated locomotor function recovery. Collectively, we suggest that the repetitive DBV treatment during the induction phase can facilitate motor function recovery by enhancing sensory stimulation and by suppressing secondary injury development.
In conclusion, the present study demonstrates that repetitive DBV acupuncture therapy in spinal cord-injured rats can reduce the development of below-level mechanical allodynia and thermal hyperalgesia and can prevent glia activation in the ipsilateral spinal cord dorsal horn. In contrast, DBV treatment during the maintenance period after SCI did not modify glia expression in the spinal dorsal horn, nor below-level mechanical allodynia and thermal hyperalgesia previously established following SCI. In addition, the facilitation of motor function recovery occurred by repetitive DBV treatment.
These results suggest that the repetitive application of DBV acupuncture therapy suppressed SCI-induced central neuropathic pain syndrome development and might be a potential clinical therapy for the management of SCI.
Animals
All experiments were performed on Sprague-Dawley male rats weighing 180 to 200 g. Animals were obtained from Orient Bio (Sungnam, Korea). The rats were housed in cages with free access to food and water. For 1 week before the study, they were also maintained in temperature-and light-controlled rooms (24 ± 2 °C, 12/12 h light/dark cycle with lights on at 07:00 h). All experimental procedures used in this study were reviewed and approved by the Animal Care and Use Committee at Korea Institute of Oriental Medicine and performed as in the NIH guidelines (NIH Publication No. 86-23, revised 1985). We made an effort to minimize animal distress and to reduce the number of animals used in this study.
Spinal Cord Hemisection Surgery
Spinal cord hemisection surgery was performed according to the method described by Christensen et al. [13]. Briefly, rats were transiently anesthetized with a combination of 2.5 mg of Zoletil 50 (Virbac Laboratories) and 0.47 mg of Rompun (Bayer Korea) in saline to reduce handling-induced stress and then mounted on the surgical field. Then, the dorsal surface was palpated to locate the cranial borders of the sacrum and the spinous processes of the lower thoracic and lumbar vertebrae. The thoracic 11 to 12 (T11 to T12) vertebrae were recognized by counting spinous processes from the sacrum. A laminectomy was performed between the T11 to T12 vertebral segments, and the lumbar enlargement region was identified with the accompanying dorsal vessel; then, the spinal cord was hemisected directly cranial to the lumbar 1 (L1) dorsal root entry zone with a No. 15 scalpel blade. We tried not to damage the major dorsal vessel or its vascular branches. All surgical procedures were performed under visual guidance using an operation microscope. Then, the musculature and the fascia were sutured, and the skin was finally apposed. After the hemisected animals recovered in a temperature-controlled incubation chamber, they were housed individually in a cage with a thick layer of sawdust and were monitored.
Bee Venom Treatment and Experimental Groups
First, whole bee venom (Sigma, St. Louis, MO, USA; 0.25 mg/kg) was dissolved in a 50-µL volume of saline. The apposed solution was subcutaneously administered into the Joksamli (ST36) acupuncture point on the same side as the SCI surgery (ipsilateral side). The Joksamli point was located 5 mm below and lateral to the anterior tubercle of the tibia. Previously, we reported that this dose was effective in producing anti-nociception when injected into an acupuncture point, and thus, we chose the dose for evaluating the possible anti-nociceptive effects of peripheral injection [9]. Repetitive DBV or saline injections during the induction phase were initiated on the first day post-SCI surgery and were then applied twice a day (at 8 a.m. and 8 p.m., respectively) for 5 consecutive days. During the maintenance phase, repetitive DBV or saline was administered to SCI rats from Days 15 to 20 after surgery. Although previous data suggest that repetitive DBV injection does not induce pathological changes at the site of injection [8], we examined all animals receiving DBV injections into the ST 36 acupuncture point for the appearance of edema and possible infection. In addition, we massaged the injection site area daily immediately after DBV treatment in order to prevent the accumulation of DBV in the tissues.
Mechanical Allodynia Test
All behavioral assessments were performed under the ethical guidelines set forth by the International Association for the Study of Pain (IASP). Pain behavior assessments were performed one day before hemisection surgery to obtain baseline values of withdrawal responses to mechanical and heat stimuli. Then, rats were assigned randomly to each treatment group, and behavioral testing was subsequently performed blindly. During the experimental period, all behavioral tests were performed at the following time points after surgery: 3, 5, 7, 10, 14, 21 and 28 days. These tests were conducted at the same time of the day to reduce errors in relation to diurnal rhythm. Animals were placed on a metal mesh grid under a plastic chamber, and the tactile threshold was measured by applying a von Frey filament (North Coast Medical) to the mid-plantar surface of the hind paw until a positive response for withdrawal behavior was elicited. Nine calibrated fine von Frey filaments (0.40, 0.70, 1.20, 2.00, 3.63, 5.50, 8.50, 15.1 and 21.0 g) were used. They were presented serially to the hind paw in ascending order of strength with sufficient force to evoke slight bending against the paw. A brisk paw withdrawal response was considered as a positive response, for which the next filament was tested. If there was no response, the next filament was the next greater force. When animals did not respond at 21 g of pressure, the animal was recognized as being at the cut-off value. The 50% withdrawal response threshold was determined using the up-down method.
Thermal Hyperalgesia Test
To determine nociceptive responses to heat stimuli, paw withdrawal response latency (WRL) was measured using a previously described procedure [43]. Briefly, animals were placed in a plastic chamber (15 cm in diameter and 20 cm in length) on a glass floor and allowed to acclimatize for 10 min before thermal hyperalgesia testing. A radiant heat source was positioned under the glass floor beneath each hind paw, and paw withdrawal latency was measured to the nearest 0.1 s using a plantar analgesia meter (IITC Life Science Inc., Woodland Hills, CA, USA). The intensity of the light source was calibrated to produce a paw withdrawal response between 10 and 12 s in naive animals. The test was examined twice on both the ipsilateral and contralateral hind paws, and the mean withdrawal latency in each hind paw was calculated. The cutoff time was set at 20 s.
Motor Function Recovery
After the rats underwent spinal cord hemisection surgery, they were tested for motor function or coordination in an open-field test space using the BBB locomotor rating scale [36]. Briefly, the BBB scale ranges from 0 (no discernible hindlimb movement) to 21 (normal movement, including coordinated gait with parallel paw placement of the hindlimb and consistent trunk stability). Scores from 0 to 7 showed the recovery of isolated movements in the three joints (hip, knee and ankle). Scores from 8 to 13 indicated the intermediate recovery phase showing stepping, paw placement and forelimb-hindlimb coordination. In addition, scores from 14 to 21 mainly showed the late phase of recovery with toe clearance during every step phase. Only the scores of the ipsilateral hind limb on the hemisected side were examined, because there was no significant difference in locomotor function of the contralateral hind limb.
Western Blot Assay
All procedures for the Western blot assay were followed as described in our previous report [44]. After the mice were anesthetized by injecting a combination of 2.5 mg of Zoletil 50 with 0.47 mg of Rompun in saline, the spinal cord was obtained using the pressure expulsion method into a cooled saline-filled glass dish and was frozen quickly in liquid nitrogen. To investigate the functional changes of the L4-6 spinal cord segments, we verified the attachment site of spinal nerves in anesthetized rats. In addition, the extracted spinal segments were divided into ipsilateral and contralateral halves under a neurosurgical microscope. Subsequently, the ipsilateral and contralateral spinal dorsal horns were used for Western blot analysis. The spinal cords were homogenized with RIPA buffer (cell signaling, Beverly, MA, USA) containing protease inhibitor, phosphatase inhibitor and 0.1% SDS (sodium dodecyl sulfate). In addition, insoluble materials were removed by centrifugation at 12,000 g for 20 min at 4 °C. The sample protein concentrations were determined using Bradford reagents (Bio-Rad Laboratories, Hercules, CA, USA), and spinal cord lysates were separated by 10% or 15% SDS-PAGE (SDS-polyacrylamide gel electrophoresis). Subsequently, lysates were transferred to a nitrocellulose membrane. Non-specific binding was pre-blocked with 5% non-fat milk (Becton, Dickinson & Company, Franklin Lakes, NY, USA) in T-TBS and 8% bovine serum albumin (MP Biomedical) for 30 min at room temperature. Then, the membrane was incubated overnight at 4 °C with mouse anti-β-actin (1:1000, Sigma, St. Louis, MO, USA), mouse anti-GFAP antibody (1:1000, Millipore, Billerica, MA, USA) or rabbit anti-Iba1 antibody (1:1000, Abcam, Cambridge, UK) in 5% non-fat milk solution. The membrane was washed three times with T-TBS for 10 min each time and incubated with goat anti-mouse IgG horseradish peroxidase (1:5000; Calbiochem, Darmstadt, Germany) or goat anti-rabbit IgG horseradish peroxidase (1:5000; Calbiochem, Darmstadt, Germany) for 1 h at room temperature. After the membrane was washed three times with T-TBS, antibody reactive expressions were visualized using a chemiluminescence assay kit (Pharmacia-Amersham, Freiburg, Germany). The intensity of protein bands was analyzed by Image J software (Graph Pad Software, Stapleton, NY, USA, 2010).
Statistical Analysis
All data were expressed as the mean ± standard error of the mean (SEM) and analyzed statistically using the Prism 5.0 program (Graph Pad Software). Data from behavior studies were tested using two-way analysis of variance (ANOVA) in order to determine the significant effect of the repetitive DBV treatment. Bonferroni's multiple comparison test as post hoc analysis was also performed to determine the p-value among experimental groups. For Western blotting analysis, column analysis was examined by Student's t-test for comparisons between two mean values. p < 0.05 was considered statistically significant.
|
2015-09-18T23:22:04.000Z
|
2015-07-01T00:00:00.000
|
{
"year": 2015,
"sha1": "b53734bb01c102a8eca4681744dfdaa0433736b6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6651/7/7/2571/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b53734bb01c102a8eca4681744dfdaa0433736b6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
244769677
|
pes2o/s2orc
|
v3-fos-license
|
Did Iranians change their eating behavior following COVID-19 outbreak?
Background: Significant lifestyle changes have been reported after COVID-19 outbreak. The present study aimed at investigating changes in dietary habits in response to the COVID-19 outbreak in an Iranian population sample. Materials and Methods: In this cross-sectional study, the dietary habits of Iranian adults were assessed before and during the COVID-19 outbreak. Consumption of different food groups such as meats, dairy, fruits, vegetables, seeds, and nuts was assessed using a digital questionnaire which was shared on social media platforms. For the statistical analysis, the Wilcoxon signed-rank test was used. Results: In this online survey, 1553 questionnaires were completed. The results showed that the reported consumption of protein-rich foods increased (P < 0.05), but fish and dairy consumption showed a significant reduction (P = 0.006 and <0.001, respectively). There was a significant reduction in reported fast-food consumption (P < 0.001). Fruits and vegetables (P < 0.001), natural fruit juices (P < 0.001), and water (P < 0.001) were consumed more frequently. Individuals also consumed more vitamin and mineral supplements (P < 0.001) including those containing Vitamin D. Conclusion: During the COVID-19 pandemic, participants reported a significant change in their dietary habits and intake of supplements. Higher intakes of meats, protein-rich foods, fruits, vegetables, and nutritional supplements and lower intakes of fish, dairy, and fast foods were reported.
INTRODUCTION
In 2019, a new coronavirus causing a flu-like syndrome emerged in the Chinese province of Hubei, and an outbreak occurred in Wuhan in December 2019. [1,2] Due to the nature of the pulmonary symptoms, the virus was renamed severe acute respiratory syndrome associated with coronavirus-2 (SARS-COV-2) and is the cause of COVID-19 (1). A public health emergency was declared by the World Health Organization (WHO) on January 30, 2020. [3] The number of infected and deceased individuals affected by this disease has rapidly increased globally. In Iran, the first confirmed case was identified on February 19, been shown to be a contributory factor in the emergence of the disease. [2] However, the WHO has noted that a healthy diet can help prevention and treatment of the COVID-19, and nutritional recommendations have been provided during the pandemic. [6] Currently, there is limited research on dietary patterns and their impact on the incidence or mortality of COVID-19, but it has been proposed that the entry of SARS-COV-2 is facilitated by transmembrane angiotensin-converting enzyme (ACE2) and dietary patterns are associated with ACE levels. Thus, certain foods may be associated with a reduction in infection and mortality. [7] Large-scale studies analyzing the eating habits of people during this pandemic are important to support and encourage healthy eating patterns in the population. It may be possible to implement healthy eating patterns during a pandemic, and this may be important for future population health impact. [6] Therefore, the present study was designed to investigate changes in eating behavior following COVID-19 pandemic in Iranian adults.
Study design
A cross-sectional study was conducted from March 27, 2020, to April 29, 2020, to investigate changes in eating behaviors of Iranian adults following COVID-19 outbreak. This survey was performed during the period of COVID-19 epidemic among Iranians from all over the country using an online platform that could be accessed on any device connected to the Internet. The survey included an initial page describing the study objectives and details of the ethics of the survey. Participants could unsubscribe from the survey at any stage before submitting the survey. Incomplete answers were not saved. Online research of this type is recommended as a method for rapid access to individuals and ensures their safety in pandemic situations. [8] The research protocol of the study was approved by the Research Ethics Committee of Shiraz University of Medical sciences (IR. SUMS. REC.1399.271).
Participants
Individuals who had access to digital devices and the Internet and were also able to answer our questionnaire were included if they were (1) Iranian adults residing in Iran and (2) aged ≥ 18 years. Pregnant or lactating women and also those who have been hospitalized for 7 days during the past month were excluded from the study. These criteria were verified from the answers given to the survey questions.
Questionnaire
A digital questionnaire containing two major sections was used. In the first section, demographic and anthropometric information was sought. Data on age, gender, and residing area were recorded and also, self-reported weight and height were gathered. Body mass index (BMI) was calculated using reported data ([weight (kg)]/[height (m)] 2 ). In the following section, a modified food frequency questionnaire (FFQ) was used to assess eating behavior. Reproducibility and validity of the FFQ were assessed previously. [9] Due to the fact that the study was conducted during the difficult period of pandemic-related lockdown, we slightly simplified the questionnaire to prevent the negative effects of the length of the questionnaire on the response rate. [10] Two answers were asked for each question, one about the timeframe before the outbreak and the other about the period during the COVID-19 pandemic. Intakes of each food item were recorded on the scale of servings per day, week, or month. Furthermore, information on the frequency of consumption of nutritional supplements was recorded.
The digital questionnaire was constructed on the "Porsline" platform (www.porsline.ir) and was distributed via the Internet. The link for the e-form was forwarded throughout social media platforms, and the recipients responded if they were willing to do so. To avoid repeated responses, the domain limited the access using IP addresses.
Statistical analysis
Statistical analysis was done using SPSS version 21 (SPSS Inc. Chicago, IL, USA). Data were checked for normality using Kolmogorov-Smirnov test. Skewed data were analyzed using Wilcoxon signed-rank test. Results were reported as mean ± standard deviation. Median and interquartile range were also reported for skewed data. P < 0.05 was considered significant.
RESULTS
During a period that the link to the digital questionnaire was active, 4434 individuals visited the link. These individuals read the aims of questionnaire but did not necessarily start the questionnaire; 1553 completed questionnaires were submitted. The response rate was 49%. Completing the questionnaire took approximately 12 min on average, and the majority of responders used cellphones (97%).
The demographic and anthropometric characteristics of the participants are shown in Table 1. The mean age of responders was 36.24 ± 10.77 years and 71.2% of them were female. The participants' mean BMI was 25.89 ± 4.78 kg/m 2 . The distribution of participants, on the basis of stated residence, showed that most participants were from southern and central regions of Iran (39.4% and 38.4%, respectively).
Consumption of protein-rich foods before and during the COVID-19 outbreak is shown in Table 2. During the COVID-19 pandemic, the intake of milk, yogurt, and cheese decreased significantly (P < 0.05). However, the consumption of red meat and poultry increased (P < 0.001). The intake of fish and canned fish fell significantly compared to the time before the COVID-19 outbreak (P = 0.008). The intake of nuts, grains and beans, and seeds increased significantly, as did dietary egg consumption (P < 0.001). The consumption of both types of fast food (homemade and take out) declined significantly (P < 0.001). Table 3. In comparison to the period before COVID-19 outbreak, high Vitamin C fruits and vegetables, yellow and green fruits and vegetables, and onion or garlic consumption showed a significant increase (P < 0.001). The intake of whole bread also increased (P < 0.001).
The intake of vegetables and fruits is shown in
The amount of water (P < 0.001) and the intake of natural fruit juices increased (P < 0.001), and the consumption of commercial fruit juice and carbonated drinks fell significantly (P < 0.001) [ Table 4].
DISCUSSION
The present study assessed the changes in eating behavior during COVID-19 outbreak in Iran. To the best of our knowledge, it is the first study to investigate the pandemic's impact on eating patterns in Iran. Our results revealed that the consumption of protein-rich foods increased except for the dairy products and fish which showed a significant reduction. Fast-food consumption was also reduced significantly. During the outbreak, fruits and vegetables and natural fruit juices and water were consumed more often, and individuals had an increased consumption of vitamin and mineral supplements including Vitamin D supplements.
The consumption of milk, yogurt, and cheese decreased during the pandemic. A study in Italy showed that following COVID-19 pandemic, the consumption of fresh milk decreased, while the consumption of pasteurized milk with a long half-life increased. [11] In addition, in a population-based study in Italy, during the COVID-19 outbreak. [3] 45.7% of participants reported consuming one serving/day of milk and yogurt. In our study, we found that the weekly median of milk and yogurt consumption is one and two servings, respectively. It should be mentioned that in the Italian survey, [3] every 125 g is considered as one serving for milk and yogurt, while our serving consists of 3/4 of a cup for yogurt which is 180 g, and for milk, one cup equals to 250 g. The reduced consumption of dairy products during the outbreak might be due to the lockdown/home confinement at this time. [5] It has previously been shown that consumption of yogurt and dairy products can enhance the immune system, [12,13] but the effects on the immune system may not necessarily translate into a benefit against COVID-19. [2] Future studies are needed to assess the effects of dairy products on COVID-19 infection.
Our participants reported higher intakes of red meat and poultry and lower intakes of both fresh and canned fish in the outbreak period. Italians also reported increased consumption of red and white meat, while increments in the white meat consumption were more pronounced. [3] With respect to dietary fish, Italian individuals reduced their consumption of fresh fish but increased the consumption of frozen fish. [3] Although red meat has been shown to induce inflammation, [14] meats are rich sources of zinc and this may enhance the immune response. [15] Therefore, a higher consumption of meats, especially poultry and fish may be of some benefit in the face of the pandemic, and especially fish, due to its anti-inflammatory properties may be a better choice. [16] Of note, it has been proposed that meat derived from the wild should not be consumed during the SARS-COV-2 outbreak. [17] Furthermore because of the potential exposure to infection in the marketplace, frozen or canned meats may be a better choice.
Consumption of fast food and ultra-processed foods is associated with inflammatory effects. [3,8] It has been suggested that there may be a link between the consumption of such foods and impairment in the coordination of innate and adaptive immunity. Such a problem has been shown to increase the risk of developing COVID-19 or increase the risk of complications. [11,18] Results of the present study revealed a significant reduction in fast-food consumption, both take-out and home-made fast foods. Other studies have also shown a reduction in fast-food intake during the COVID-19 pandemic. [6,19] This reduction in fast-food consumption may be due to several reasons. First, due to lockdown, there was a reduction in people going out to eat. Second, families have more free time to cook. In Italy, during the COVID-19 outbreak, homemade pizza consumption increased, [3] which may be because the main concern for fast food is about preparation procedures. Our results also showed that the reduction in take-out fast food was more pronounced than for home-made fast food.
Nutritional status plays an important role in protecting against the emergence of new viral pathogens. [20] A nutrient-rich diet with antioxidant and anti-inflammatory activities, such as the Mediterranean diet, helps reduce the severity of SARS-COV-2 so that it has been suggested that the Mediterranean diet may demonstrate one of the best dietary models for restoring innate and adaptive immunity and may be an adjunctive treatment choice for COVID-19. [3] Limited consumption of fresh fruits and vegetables leads to borderline status or deficiency of vitamins and minerals, including Vitamin C, Vitamin E and beta-carotene, and deprivation of their antioxidant and anti-inflammatory properties. [21] Deficiency of these micronutrients is associated with an impaired immune response, thus making people more susceptible to viral infections. [3] During the pandemic, due to lockdown, the reduction in agricultural productivity, and rising prices, [3,22] some studies have reported reduced consumption of fruits and vegetables, [8,11] while some other studies have reported an increase in the consumption of these food items. [6,19] This could be due to the recommendations for increasing the consumption of such foods during the pandemic so that the WHO recommends legumes, fruits, and vegetables as the best foods during quarantine. [6] In Iran, the recommendations were made for increasing the consumption of fruits and vegetables during the COVID-19 pandemic, and also, food supply was not affected. Therefore, in the present study, Iranians turned toward eating more fresh fruits and vegetables, while the consumption of dried fruits decreased significantly. This might have two reasons: first, people consumed fresh fruits and vegetables for their high Vitamin C content, so dried fruits were not as rich in this vitamin and second, the difficulty of dried fruits' hygienic may have led the population to choose fresh fruit instead.
During the pandemic, the reported consumption of garlic and onion also increased in our study. This might be related to the fact that in Iranian traditional medicine, garlic and onion are considered to have strong antibacterial and antiviral properties, as well as immune-boosting effects. [23] Compounds such as allicin, methyl-allyl trisulfide, and S-allyl cysteine exist in garlic have shown anti-inflammatory properties. [24] In addition, onion has been reported to boost the immune system, [25] but future studies are required to assess the effects of them on COVID-19 infection.
During the outbreak, Iranians drank more water and natural fruit juices and limited their consumption of commercial fruit juices and carbonated drinks. These changes indicate a change to a healthy drinking pattern which can be a good behavior change that have short-term effects in modulating immune responses [26] and long-term health effects in preventing chronic disease. [27][28][29] We found that Iranians tended to consume more multivitamin, calcium, Vitamin C, zinc, and Vitamin D supplements during the outbreak. The effectiveness of these nutrients has been proposed by published articles. [2] This behavior change could be due to the experts' statements on mass media about using nutritious foods and nutrients that can help preventing COVID-19. From all nutrients and supplements, effects of Vitamin D on the immune system and antiviral immunity have been widely proposed, and published literature during the COVID-19 outbreak states that Vitamin D supplementation can reduce the risk of COVID-19 infection and death. [30] The present study had several limitations. First, research using an anonymous online survey does not make it possible to verify data based on objective measures. However, our web-based survey was undertaken during the pandemic, and it would have been difficult to do this using any other approach. Second, online sampling may lead to the recruitment of a narrow demographic sample, with some groups within the population that are not reachable; therefore, a representative sample cannot be expected. For example, the mean age and BMI of the study respondents, reveals mostly younger and nonobese individuals filled questionnaire. Younger respondents may be overrepresented because of the use of social media and their greater familiarity with digital technologies. Third, data on food intake before the pandemic may be prone to recall bias, and it was inevitable because there were no preplanned research projects before the pandemic. Finally, we did not measure weight and height and used self-reported values because of the virtual identity of our questionnaire and data gathering method. However, in view of the challenges of conducting such studies during quarantine, it would be very difficult to overcome such limitations. However, the present study was the first study that assessed the Iranian eating behavior and supplement intakes during COVID-19 pandemic. Moreover, using the electronic questionnaire, we were able to gather data from several regions within Iran. Further investigations are proposed to assess the effects of these lifestyle changes on nutritional status of the Iranians. Furthermore, it is recommended to assess the long-term effects of this pandemic on eating behavior and health status of the people.
CONCLUSION
During the COVID-19 pandemic, participants reported a significant change in their dietary habits and intake of supplements as assessed using an online survey of dietary habits. Higher intakes of meats, protein-rich foods, fruits, and vegetables and lower intakes of fish, dairy, and fast foods were reported. Furthermore, individuals consumed more nutritional supplements, especially vitamin D supplements. These changes might be driven by access to specific food items, avoidance behavior during lockdown, and received beliefs about diet and health.
|
2021-12-01T16:22:51.158Z
|
2021-11-29T00:00:00.000
|
{
"year": 2021,
"sha1": "c0a75fc779a44744bb6c1b9e5c89f50bdccca979",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/jrms.jrms_1234_20",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e459d8687e30af40e9f93247ef1cea490c142ee",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
9106419
|
pes2o/s2orc
|
v3-fos-license
|
Combining Deictic Gestures and Natural Language for Referent Identification
In virtually all current natural-language dialog systems, users can only refer to objexts by using linguistic descriptions. However, in human face-to-face conversation, participants frequently use various sorts of deictic gestures as well. In this paper, we will present the referent identification component of XTRA, a system for a natural-language access to expert systems. XTRA allows the user to combine NL input together with pointing gestures on the terminal screen in order to refer to objects on the display. Information about the location and type of this deictic gesture, as well as about the linguistic description of the referred object, the case frame, and the dialog memory are utilized for identifying the object. The system is tolerant in respect to impreciseness of both the deictic and the natural language input. The user can thereby refer to objects more easily, avoid referential failures, and employ vague everyday terms instead of precise technical notions.
Introduction
Various aspects of referent identification by hearers have been investigated in the last few years: It has been studied as a process of noun phrase resolution and attribute comparison (Lipkis 1982), as a planned action (Cohen 1981, 84), as a process which depends on focus (Grosz 1981), context (Reichman 1981), the mutual beliefs shared between speaker and hearer (Clark & Marshall 1.981) and the modality of linguistic communication (telephone vs. teletype, cf. Cohen 1984), and as a process which is prone to various sorts of conversational failure (Goodman 1985). In all of these studies, natural language is the only conversational medium. For identifying objects under discussion, the hearer can therefore only utilize the NL descriptions provided by the speaker, and information about the previous dialog and the task domain at hand.
In face-to-face conversation, however, participants also frequently use extralinguistic means for referent identification, in particular, various sorts of deictic gestures (such as pointing at something by ones hand, finger, pencil, head or eyes). One This work is part of the SFB 314 Research Program on AI and Knowledge-Based Systems and has been supported by the German Science Foundation (DFG). The authors would like to thank J. Rink and W. Finkler for their help in preparing the manuscript, E-mail address of the authors: surname%sbsvamuucp~germany.csnet 356 may assume that this is done for simplifying and speeding up the identification process for both the hearer and the speaker, as well as avoiding referential failures. Certain technical innovations in the last few years (e.g., high-resolution graphic displays, window systems, touch-sensitive screens, input via a pointing device such as the mouse or the light-pen) have made it possible for computational linguistics to also experiment with and study a certain class of these deictic gestures, namely, tactile gestures for identifying objects on a terminal screen.
From an application-oriented perspective as well, such an ability is certainly a desirable characteristic for natural language dialog systems. In current systems, referring to visual objects involves the user either to employ unambiguous labels displayed together with the objects (cf. Phillips 1985), or purely linguistic descriptions which sometimes become rather complex (e.g. the "bright pink flat piece of hippopotamus face shape piece of plastic" in Goodman 1985). In Woods et al. (1979), a combination of deictic and natural language input has already been envisaged, but solely with restricted flexibility. Since an analyzer for pointing gestures is independent of a particular language, one might also consider transferring it to other NL dialog systems.
In this paper, we will present the referent identification component of XTRA, a system for natural-language access to expert systems currently under development at the University of Saarbrücken. In its present application domain, XTRA is intended to assist a user in filling out his/her annual withholding tax adjustment form. The system will respond to terminological questions of the user, extract from the user's natural-language input the relevant data that is to be entered in the application form, and verbalize the inferences of the tax expert system. During the dialog, the relevant page of the application form is displayed in one window of the screen (for a simplified example, see Fig. 1; only the tax form is visible to the user).
For referring to single regions in the form, to the entities stored therein, or to larger regions which contain embedded regions, the user can employ linguistic descriptions (which we will call descriptors), pointing gestures with a pointing device (mouse), or both. From now on, the noun 'deictic' will refer to the use of a pointing device, and the term 'deictic expression' to a referential expression accompanied by such a pointing gesture. In Bühler's (1982) terminology, the kind of deixis used in our situation is a demonstratio ad oculos. The objects on the display are visually observable, so that the user and the system share a common visual field. In Clark & Marshall's (1981) terms, they are in a situation of physical copresence. Therefore, objects on the display need not be introduced by the user, but can immediately be referred to by a descriptor, a deictic, or both.
In many cases, however, neither kind of reference will be precise. Referential expressions, on the one hand, will often apply to more than one region in our form (as is the case when the user employs the term 'the deductibles' in order to refer to specific deductible sums such as dues for membership in a professional organization). Deictic gestures, on the other hand, are also often imprecise in that they are not aimed at the region to which the user actually wants to refer. Reasons for this might be inattentiveness, an oversized pointing device, or the user's intention not to hide the data entered in the respective field. Another factor of uncertainty is the pars-pro-toto deictic. In this case, the user points at an embedded region when actually intending to refer to a superordinated region. This is particularly the case when a form region is completely partitioned into a number of embedded sub-regions.
Therefore, in our model, we utilize several sources of information for identifying the region the user probably wants to refer to: the descriptor s/he uses, the location and the type of his/her pointing gesture, intrasentential context (case frames), and the dialog context. The information from each of these sources alone may be ambiguous or imprecise. Combined, however, they almost always allow for a precise identification of a referent.
2. Knowledge sources of the system

2.1. The tax form and the form hierarchy

During the dialog with the user, the system displays the relevant page of the income tax form on the terminal screen. As is illustrated in Fig. 1, such a form consists of a number of rectangular regions, which may themselves contain embedded regions, etc. We will abbreviate these regions by R1, R2, etc. The user can apply deictic operations to all regions.
For representing hierarchical relationships between regions, the system maintains an internal form hierarchy. Every region in the form has a corresponding element in the form hierarchy. Hierarchical relationships between form elements can then be expressed by father-son relationships within the form hierarchy. There are two reasons for introducing such a hierarchical order:

- Geometrical reasons: If region Rj is geometrically embedded in region Ri, then the element in the form hierarchy corresponding to Rj becomes a son of the element corresponding to Ri. An example is given in Fig. 1, where regions R2 and R3 are geometrically embedded in R1. Hence, their corresponding elements in the form hierarchy are subordinated to the element corresponding to R1.
- Semantic reasons: In many cases, there is a semantic coherence between regions in the form not directly expressed by the geometrical hierarchy. For example, see regions R15 and R16, and regions R33 and R34 in Fig. 1, which intuitively form units within the form for which no direct geometrical equivalents exist. Therefore, so-called abstract regions are introduced in the form hierarchy to which conceptually coherent regions can be connected. These regions need not even be geometrically adjacent and can be subordinated to more than one abstract region. In Fig. 1, abstract regions are denoted by the symbol 'AR' (e.g., AR48, the father of R15 and R16). It is not surprising that abstract units in the form hierarchy are often directly related to higher-level representational elements in the conceptual knowledge base of the system (cf. section 2.3.).
Moreover, we discern two types of bottom regions: Label regions contain the official inscriptions on the form (e.g., LR9 for 'Professional Expenses'), value regions contain the space for the user's data (e.g., VR28 for educational expenses). From now on, we will no longer distinguish between the form as displayed on the screen and the form hierarchy stored in the system. Since a close relationship between both structures exists, no problems will arise thereby.
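To make this structure concrete, here is a minimal Python sketch of such a form hierarchy (our own illustration; the class design is hypothetical and not taken from the actual XTRA implementation):

```python
class Region:
    """A node in the form hierarchy. A region has at most one geometrical
    father but may additionally belong to several abstract regions."""
    def __init__(self, name, kind="value"):
        self.name = name      # e.g. "R16", "LR9", or "AR48"
        self.kind = kind      # "label", "value", "geometrical", or "abstract"
        self.sons = []        # embedded or conceptually connected regions
        self.fathers = []     # geometrical and abstract fathers

    def add_son(self, region):
        self.sons.append(region)
        region.fathers.append(self)

    def ancestors(self):
        """All superordinated regions (needed for pars-pro-toto deixis)."""
        found = []
        for father in self.fathers:
            found.append(father)
            found.extend(father.ancestors())
        return found

# A hypothetical fragment of the hierarchy in Fig. 1:
ar48 = Region("AR48", kind="abstract")   # abstract "membership fees" unit
r15, r16 = Region("R15", "label"), Region("R16", "value")
ar48.add_son(r15)
ar48.add_son(r16)
print([r.name for r in r16.ancestors()])  # -> ['AR48']
```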
2.2. The pointing gestures
Following Clark et al. (1983), we will call the region(s) to which the user pointed the demonstratum, and the region to which s/he intended to refer the referent. Three cases can then be discerned:

a) The demonstratum is identical to the referent.

b) The demonstratum is a descendant of the referent (pars-pro-toto deixis). In this case, the referent may be a geometrical or an abstract region.

c) The demonstratum is geometrically adjacent to the referent. This occurs when the user points below the referent, to its right, etc. (e.g., by inattentiveness or because of not wanting to hide the data entered in the respective region).
In most cases, obviously, the location of a deictic does not identify its referent, but only constrains the set of possible referential candidates. Therefore, information about the pointing gesture usually has to be combined with information from other knowledge sources.
Another observation was that most subjects use several types of pointing gestures differing in exactness. Their choice seems to depend on the size of the target region: the larger the referent and the more sub-regions it contains, the vaguer the pointing gesture. Therefore, our system allows the user to choose among several degrees of accuracy in his/her deictic. The user's decision, in turn, is taken into account when the system has to choose between referential candidates differing in size or in degree of embedding (cf. section 3.1.2.).
2.3. The conceptual knowledge base
In our system, conceptual knowledge is represented by a frame-based language that shows a strong resemblance to Brachman's (1978) KL-ONE. The general part of the representation contains concepts and attribute descriptions of concepts. Attribute descriptions mainly consist of roles and value restrictions for possible role fillers. In Fig. 1, concepts are depicted by ovals and roles by small circles (the figure has been somewhat simplified). For object concepts (e.g., 'MEMBERSHIP FEE' and 'ORGANIZATION'), attribute descriptions specify the properties of the objects described by the concept. For action concepts (e.g., 'PHYSICAL TRANSFER', 'ADD', etc.), they specify the case frame.
General concepts can be ordered in a concept hierarchy, allowing the attribute descriptions of concepts to be inherited from the superordinated concepts. In Fig. 1, the bold arrows denote such superconcept relations. More specific concepts can be defined by introducing additional attribute descriptions or by further restraining the value restrictions of role fillers. It is possible for a concept to be subordinated to more than one superconcept, thus inheriting the properties of several superconcepts.

Fig. 1: The knowledge sources of the system (input sentence: "Can I add my annual $15.00 ACL dues to these membership fees?")
Natural-language input of the user containing new facts relevant for tax adjustment, as well as data entered directly into the form, causes structures of the general part to be individualized. Individualized concepts (depicted by ovals with lateral strokes in Fig. 1) and individualized attribute descriptions are thereby created. In Fig. 1, the individualized structures express the facts that the user spent $80 and $40 as professional organization and charitable organization membership fees, respectively.
Concepts and roles can be linked to elements in the form hierarchy if they conceptually correspond to a region in the form.
2.4. The functional-semantic structure
Before individualizations of the conceptual knowledge base are created, the natural-language input of the user is first mapped onto individualizations of the so-called functional-semantic structure (FSS). The task of the FSS (cf. Allgayer & Reddig 1986) is to express the syntactic and semantic relationships between the constituents of the input sentence. It is also represented in a KL-ONE-like scheme. Amongst other things, the word stem entries in the lexicon determine which parts of the FSS are to be individualized. During this process, information about the location and the type of the occurring pointing gestures is assigned to the noun phrases to which they belong. Fig. 1 shows part of the individualized FSS generated by the input sentence.
The FSS forms the starting point for the referential analysis of the natural-language input, i.e., the mapping onto individualized structures of the conceptual knowledge base. This task is performed by an interpreter using appropriate mapping rules.
2.5. The dialog memory
Our current provisional approach is to regard the dialog memory as a structured list containing individualizations of the concepts in the conceptual knowledge base. When a referent is recognized as not having been mentioned before (because it is not contained in the dialog memory), the respective concept is individualized, linked to the referent, and entered as the most relevant element of the dialog memory. In Fig. 1, we assume that regions R16, R34, AR48 and AR51, amongst others, have been addressed before. Thus the concepts PROF.ORG.MEMB.FEE, CHAR.ORG.MEMB.FEE and NUMBER have been individualized and linked to these regions.
3. Referent identification processes
In a user's NL input, a deictic can be used at any position where a noun phrase or a (locative) adverbial phrase is to be expected. From a syntactic point of view, a deictic can serve two functions:

- it supplements a syntactically saturated description, i.e., takes the form of an additional attribute;

- it replaces a syntactically obligatory constituent (e.g., the head of a noun phrase).

The position of a deictic may be before, within, or after a noun phrase. Syntactic vicinity is taken into account if an ambiguity occurs in embedded noun phrases.
In the XTRA system, four sources of information are utilized in order to identify the referent of a deictic expression: the location of the user's pointing gesture, the descriptor s/he uses, case frame restrictions, and the contents of the dialog memory. The three former sources can be found in the functional-semantic structure, the latter source in the individualized part of the conceptual knowledge base. Referent identification, then, is performed in the following order (see the sketch after this list):

a) Generation of potential referents by the most appropriate knowledge source. Source-specific partial plausibility values are thereby assigned to each generated candidate. Only deictic, descriptor and case frame are considered in this step; the dialog memory is only used in step (b).

b) Re-evaluation of each candidate by consecutively considering the information from all other knowledge sources.

c) Overall evaluation by considering all partial plausibility assignments; selection of the candidate with the highest plausibility factor.
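The following Python sketch illustrates this control flow under simplifying assumptions (partial plausibilities in [0, 1], combined by a plain average; the actual XTRA weighting scheme is not specified in the text, so the scoring functions here are placeholders):

```python
def identify_referent(candidates, evaluators):
    """candidates: dict region -> partial plausibility from the generating source.
    evaluators: functions region -> partial plausibility (descriptor, case
    frame, dialog memory, ...). Returns the most plausible region."""
    overall = {}
    for region, generation_score in candidates.items():
        partials = [generation_score] + [ev(region) for ev in evaluators]
        overall[region] = sum(partials) / len(partials)   # overall evaluation
    return max(overall, key=overall.get)                  # highest plausibility

# Toy numbers loosely mimicking the example of Fig. 1:
candidates = {"AR48": 0.8, "R16": 0.6, "R34": 0.2}        # from deixis analysis
by_descriptor = lambda r: {"AR48": 0.9, "R16": 0.9}.get(r, 0.1)
by_dialog     = lambda r: {"AR48": 0.7, "R16": 1.0}.get(r, 0.0)
print(identify_referent(candidates, [by_descriptor, by_dialog]))  # -> R16
```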
In the following section we will describe how the most appropriate knowledge source for referent generation is selected and how referential candidates are generated. Since we are particularly concerned with referent identification through pointing gestures, we will only describe the referent generation strategy of the deixis analyzer (also cf. Allgayer 1986). For generating candidates through descriptors and case frames, we use the "classical" way leading from the lexicon via the FSS over to individualized concepts in the conceptual knowledge base and to the form hierarchy. In section 3.2., we then describe how the deixis analyzer re-evaluates candidates supplied by descriptor and case frame analysis, and how candidates generated by the deixis analyzer are re-evaluated by considering the information of all other knowledge sources. The example depicted in Fig. 1, to which we constantly refer in the upcoming section, was chosen to demonstrate that, in many cases, all, or nearly all, of these knowledge sources are necessary to correctly identify a referent.
3.1. Generating potential referents

3.1.1. Deciding on the most appropriate knowledge source

In order to restrain the computational complexity of the identification process, it must first be decided whether referential candidates should be generated by analyzing the pointing gesture, the descriptor, or the case frame of the user's input. To ensure that only a small number of candidates must be re-evaluated in the subsequent steps, it is certainly advisable to choose the knowledge source which yields the smallest set of plausible candidates that still contains the referent. The evaluation of each knowledge source is performed according to the following criteria (a schematic rendering follows the list):

- Deixis: The quality of a user's deictic for candidate generation is inversely proportional to the number of regions contained in the demonstratum and the number of ancestors of the demonstratum. A deictic to R3 in Fig. 1, for instance, will yield fewer candidates than a deictic to R34.

- Descriptor: If a descriptor does not contain a head, it cannot be used for candidate generation. Otherwise, its quality is inversely proportional to the number of subconcepts of its conceptual representation and the number of regions linked to these concepts. E.g., for the representation in Fig. 1, the descriptor 'number' will yield by far more candidates than the descriptor 'membership fee'.

- Case frame: The quality of a case restriction for referent generation depends on the quality of the selection restriction concept of the corresponding role in the conceptual knowledge base. This quality can be computed in the manner mentioned previously. In Fig. 1, the selection restrictions for the ADD concept do not seem to be profitable for candidate generation.
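Under the stated "inversely proportional" assumption, the selection heuristic could be paraphrased as follows (a sketch only, reusing the Region class from the earlier fragment; the weighting is our own choice, not XTRA's):

```python
def deixis_quality(demonstratum):
    # Fewer contained regions and fewer ancestors -> better generation quality.
    return 1.0 / (1 + len(demonstratum.sons) + len(demonstratum.ancestors()))

def pick_generating_source(qualities):
    """qualities: dict source name -> estimated quality, e.g.
    {"deixis": 0.5, "descriptor": 0.2, "case_frame": 0.05}.
    Choose the source expected to yield the smallest candidate set
    that still contains the referent."""
    return max(qualities, key=qualities.get)
```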
3.1.2. Generating candidates by analyzing the user's pointing gesture

As was mentioned above, our system allows for the use of several types of deictic gestures differing in precision. A so-called deictic field is associated with each type of pointing gesture, its size corresponding to the degree of exactness of the deictic. An example of three different types of pointing gestures is given in Fig. 2.
Fig. 2: Three types of pointing gestures
A deictic field may either be completely contained in a basic region (as is the case for deictic 1 in Fig. 2) or overlap two or more basic regions (deictics 2 and 3, respectively). All basic regions that are overlapped by a deictic field serve as first referential candidates in our system. The ratio of that part of a region covered by a deictic field in relation to the size of the total region yields the plausibility value for the region. Deictic 3, for instance, generates R18, R16, R17 and R15 as first candidates, in order of descending plausibility (cf. Allgayer 1986).

In a second step, the system accounts for the possibility of pars-pro-toto deixis. All regions semantically or geometrically superordinated to any of the current candidates are also considered as candidates. The plausibility assignment of a superordinated region depends on its type, the plausibility of its candidate sub-regions, and the type of pointing gesture employed by the user (the vaguer the pointing gesture, the higher the plausibility of the superordinated regions). In Fig. 2, regions AR49 and AR48 would be added in the case of deictic 3, both with higher plausibility than any of the first candidates. This upward propagation through the hierarchy can be applied iteratively, yielding even more candidates (the valuation function smoothly declines thereby to render high-level regions less plausible). The resulting set of candidates has to be re-evaluated by the processes described below.
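Both steps can be sketched as follows (axis-aligned rectangles for regions and deictic fields; the boost and damping constants are hypothetical, since the paper only states that the valuation function declines smoothly):

```python
def overlap_ratio(region, field):
    """Plausibility of a basic region: covered part / total region size.
    Rectangles are tuples (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    rx1, ry1, rx2, ry2 = region
    fx1, fy1, fx2, fy2 = field
    w = max(0.0, min(rx2, fx2) - max(rx1, fx1))
    h = max(0.0, min(ry2, fy2) - max(ry1, fy1))
    return (w * h) / ((rx2 - rx1) * (ry2 - ry1))

def propagate_pars_pro_toto(scores, fathers, vagueness=1.2, damping=0.7):
    """Add superordinated regions as candidates. For a vague gesture,
    vagueness > 1 makes fathers more plausible than their candidate sons
    (cf. AR48/AR49 above); the factor declines smoothly with each further
    level (damping < 1), rendering high-level regions less plausible."""
    result, frontier = dict(scores), dict(scores)
    factor = vagueness
    while frontier:
        next_frontier = {}
        for region, score in frontier.items():
            for father in fathers.get(region, []):
                boosted = min(1.0, score * factor)
                if boosted > result.get(father, 0.0):
                    result[father] = boosted
                    next_frontier[father] = boosted
        factor *= damping
        frontier = next_frontier
    return result
```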
3.2. Re-evaluating the set of candidates

3.2.1. Re-evaluation by analysis of the pointing gesture

If the optimization process of section 3.1.1. decided that descriptor or case frame analysis was the most appropriate knowledge source for candidate generation, analysis of the deictic is employed in our system for re-evaluating the candidates supplied by these components. This evaluation process is rather similar to the candidate generation described above. For example, see Fig. 1 (we assume that the deictic in this example is the same as deictic 3 in Fig. 2): If the descriptor analyzer generated AR48, AR51, R16 and R34 as potential referents (since the descriptor was 'membership fee', see below), the deixis component would assign high plausibility values to the former and very low ones to the latter.
3.2.2. Re-evaluation by descriptor analysis
This process determines to what extent the conceptual representation of the descriptor applies to the current candidates. Each candidate is tested as to whether the representation of the descriptor, a subconcept of this representation, or (if existent) the restriction concept of the value slot of one of these concepts is linked to the candidate. The more concepts in between the representation of the descriptor and the linked subconcept, the lower the new partial plausibility assignment. Let us assume for our example in Fig. 1 that the deixis analyzer, in order of decreasing plausibility, has generated regions AR49, AR48, R18, R16, R17 and R15 as potential referents. If the descriptor is 'these membership fees', the descriptor analysis will prefer AR48 and R16, since a subconcept of the representation of this descriptor is linked to AR48, and the restriction concept of its value slot is linked to R16.
3.2.3. Re-evaluation by case frame analysis
This process determines to what extent the selection restriction concept of the respective slot in the conceptual representation of the verb applies to the referential candidates under investigation. This evaluation process is performed almost identically to that of the descriptor. In our example, high plausibility would be attributed to regions R16 and R18, since the concept NUMBER (the restriction concept of the relevant slot of the concept ADD) is linked to these regions.
3.2.4. Restriction by dialog memory
This process determines whether a referent has recently been mentioned by checking whether or not an individualized concept connected with it is contained in the dialog memory. The better the position of such an individualized concept in the list, the higher the plausibility of the candidate. In Fig. 1, we assume that both the professional and the charitable society memberships and their values have been addressed just recently. Therefore, in our example, high plausibility values are assigned to regions R16 and AR48. The overall evaluation will then select R16, it having obtained the highest total plausibility.
4. Discussion
Our system demonstrates that spatial deixis is a valuable source of information for identifying referents, one which can also be investigated and utilized in natural-language dialog systems with pictorial displays. Three reasons sum up the advantages of using pointing gestures: they save the speaker the generation, and the hearer the analysis, of complex referential descriptions and thus simplify the natural-language dialog; they often allow for reference in situations in which linguistic reference is simply not possible (think of referring to one out of a dozen similar objects); and they permit the speaker to be vague, imprecise, or ambiguous, and to use everyday terms instead of precise technical terms unknown to him/her.
In natural-language dialog systems, deixis analysis can be combined well with standard methods for referent identification.
Some of the identification processes (e.g., tests with case frame, descriptor and dialog memory) are rather similar to the classical methods used for anaphora and ellipsis resolution. Others, such as the generation and evaluation of candidates by the deixis analyzer, are typical with respect to this particular kind of conversational medium.
It should be pointed out, however, that our environment for spatial deixis is, in several ways, somewhat simpler than those occurring in person-to-person dialogs (cf. Schmauks 1986). The deictic field is only two-dimensional, and the objects that can be pointed at are clearly separated from each other. Compared to real-life situations, the number of possible referents is relatively small. "Left" and "right" mean the same thing for the user and the system (which is not the case, e.g., in face-to-face conversation). However, this relative simplicity need not be a drawback. Instead, one might regard our environment as a study in vitro, eliminating a number of uncertainty factors so that the essential characteristics of spatial deixis become more salient.
Another question is whether the deictic behavior of subjects who use a pointing device is the same as that of subjects who touch the display with their fingers (and thus, whether deixis via a pointing device is a valid simulation of tactile deixis). One might argue, e.g., that people point more precisely with a mouse than with their fingers, or vice versa. We are currently conducting an informal experiment to answer these questions. In any case, only the propagation functions are perhaps affected by a change of the deictic medium, whereas the referent identification processes will remain the same.
Attempts are currently being made to also integrate visual and conceptual salience in our model (cf. Clark et al. 1983). When a pointing gesture is ambiguous, it appears that regions set off by a bold frame or coloring, as well as regions containing important data for the task domain, are preferred. We expect this preference to be taken into account in the evaluation processes of the deixis analyzer. Another possible extension which we would like to investigate is replacing the strategy described in section 3.1.1. by a certain form of incremental referent identification. There is strong empirical evidence (e.g., Goodman 1985) that people begin with referent identification immediately after receiving initial information about it, instead of waiting until the speaker's referential act is terminated. Since all components described above are strictly separated, it appears basically possible to also use them in an incremental identification process. In one-processor systems, however, great care must be taken that the knowledge source first addressed does not block the system by generating too many candidates. Therefore, some process control will be necessary, either by resource limitation or by taking into account the heuristics listed in section 3.1.1.
JABUTICABA PEEL IN THE PRODUCTION OF COOKIES FOR SCHOOL FOOD: TECHNOLOGICAL AND SENSORY ASPECTS
Jabuticaba (Myrciaria cauliflora Berg) is a greatly appreciated fruit with nutritional importance, found throughout most of Brazil. Its peel is a discarded by-product of the Brazilian agroindustry. The objective of this study was to develop cookie formulations with partial replacement of wheat flour (WF) and oat flour (OF) by jabuticaba peel flour (JPF), analyzing the technological aspects of the elaborated cookies and evaluating the acceptance of the selected product. All regression models of the cookies with JPF were significant. Cookies with JPF tended to darken and had smaller thicknesses, greater WSI and WAI, smaller values of breaking strength and decreased color parameters (L*, a* and b*) compared to standard cookies. Cookies made with larger OF fractions had lower values of specific volume. Both the standard cookie and the cookie selected by the desirability test were deemed acceptable among students. This work presents a new possibility to produce cookies based on an agro-industrial co-product, which is interesting for the market for this type of product.
INTRODUCTION
The jabuticaba is native to south-central Brazil, and Myrciaria cauliflora (DC) Berg stands out among the currently known species because the fruits are suitable for fresh consumption and for agroindustry (Aschieri; Silva; Cândido, 2009). In Brazil, the residues of fruits and vegetables are generally wasted at all points of the market chain leading up to the final consumer, including the farmers, industry and customers. The foods and their by-products (peels, seeds and bagasse) that are often intended for animal feed could be used as alternative sources of bioactive compounds in foods for human consumption to meet nutritional needs, reducing waste and environmental impact and adding value to these by-products (Melo et al., 2011).
Among the various alternatives available for preventing the inappropriate disposal and waste of these consumable parts that are not typically consumed, their use in the production of flour stands out (Pelissari et al., 2012; Aziz et al., 2012), because such flours can be applied in formulations such as baked cakes, breads and biscuits (Ajila et al., 2010; Coelho; Wosiacki, 2010; Lopez et al., 2011), increasing their added value.
Cookies are obtained by kneading and baking dough prepared with flour and starches, fermented or not, and other food substances. Their quality is related to flavor, texture, appearance and other factors, and, in recent years, they have emerged as a product of great commercial interest due to the practicality of their production, marketing and consumption, as well as their long shelf life (Perez; Germani, 2007). Therefore, alternatives have emerged for producing nutritionally enriched flours, reflecting the current appeal for healthier eating habits (Fasolin et al., 2007). In this sense, incorporating flour from jabuticaba peels in the preparation of cookies is an alternative contribution to the health of students. Lucero et al. (2010) showed that enhancing school meals is extremely important given the health benefits to students and, consequently, the significant improvements in teaching and learning processes. Thus, the mathematical modeling and optimization of food formulations can be an important contributor to the assessment of the nutritional and sensory quality of food for various purposes, in addition to providing researchers with the necessary tools to develop and optimize food products (Dingstad; Westad; Naes, 2004; Ferguson et al., 2006).
In this sense, this work aims to develop a formulation of cookies enriched with flour elaborated from jabuticaba peel substituted for wheat flour and oatmeal, to analyze the technological aspects of both the flour and the prepared cookies, and to perform a sensory analysis of the acceptance by students from a school in the city of Goiânia, Goiás, Brazil.
Collection of samples
The fruits (50 kg) from different trees were collected for the production of jellies and pulps by the Association of Rural Producers in the Region of Bom Sucesso (APRO-BOM), Nazário, Goiás, Brazil. They were selected with regard to color, mechanical damage and attack by microorganisms and insects, and were then sanitized for 15 min in a solution of sodium hypochlorite (150 mg L-1). The peels were donated by the association and collected after removing the pulp; they were then refrigerated and taken to the Laboratory of Agro-Industrial Waste Utilization of the School of Agronomy, Universidade Federal de Goiás (UFG), to be dehydrated.
Production of jabuticaba peel flour
In the laboratory, jabuticaba peels were dried in an oven (Tecnal-TE 394/3, Piracicaba-SP, Brazil) with forced air circulation at 60 °C until reaching 14% moisture, to obtain microbiological stability. The dried peels were ground in a Wiley mill (Marconi model MA630, Piracicaba, Brazil), with sieve analysis in vibrating equipment (Produtest, MOD.T, São Paulo, Brazil) containing seven stacked sieves with openings varying from 1.41 to 0.053 mm and a pan at the bottom (AOAC Method 965-22 (1997)). The dry flour was vacuum packed in low-density polyethylene bags and stored under refrigeration (4±1 °C) until processing.
Mixture design
For the preparation of the cookies, response surface methodology and the design of mixtures were used (Barros Neto; Scarminio; Bruns, 2001). The components of the mixture used in this study were jabuticaba peel flour, wheat flour and oatmeal flour. The ingredients in this study were expressed as pseudo-components for the jabuticaba peel flour (Equation 1), oatmeal flour (Equation 2) and wheat flour (Equation 3):

X_JPF = (C_JPF - L_JPF) / (1 - (L_JPF + L_OF + L_WF))   (1)

X_OF = (C_OF - L_OF) / (1 - (L_JPF + L_OF + L_WF))   (2)

X_WF = (C_WF - L_WF) / (1 - (L_JPF + L_OF + L_WF))   (3)

where X is the component content in terms of the pseudo-component, C is the actual proportion of the component, L is the lower bound of the component in the design, JPF is jabuticaba peel flour, OF is oatmeal flour, and WF is wheat flour.
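In code, the transformation amounts to one line per component (a sketch; the lower bounds below are only read off the design region reported in the Results and may differ from the authors' exact values):

```python
def to_pseudo(actual, lower):
    """actual, lower: dicts mapping component name -> actual proportion and
    lower bound, with actual proportions summing to 1."""
    slack = 1.0 - sum(lower.values())
    return {k: (actual[k] - lower[k]) / slack for k in actual}

# Hypothetical lower bounds consistent with the reported design region:
lower = {"JPF": 0.30, "OF": 0.15, "WF": 0.20}
print(to_pseudo({"JPF": 0.45, "OF": 0.35, "WF": 0.20}, lower))
# -> {'JPF': 0.4285..., 'OF': 0.5714..., 'WF': 0.0}
```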
The cookies were prepared and manipulated according to the experimental design shown in Table 1. The order of processing of the experiments was randomized.
The representation of the system of mixtures was constructed using ternary contour diagrams. A polynomial equation was adjusted for each response, estimating the respective coefficients using the canonical models of Scheffé for three components: the linear (Equation 4) and quadratic (Equation 5) models. Therefore, regression models with all variables of interest were obtained:

y = β1x1 + β2x2 + β3x3   (4)

y = β1x1 + β2x2 + β3x3 + β12x1x2 + β13x1x3 + β23x2x3   (5)

where y is the dependent variable, β is the regression coefficient for each term of the model, x1 is jabuticaba peel flour, x2 is oatmeal flour, and x3 is wheat flour.
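For illustration, such canonical models can be fitted by ordinary least squares without an intercept, since the constant term is absorbed by the constraint x1 + x2 + x3 = 1. The sketch below uses scikit-learn on made-up design points and responses (not the authors' actual data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def scheffe_quadratic_terms(X):
    """X: (n, 3) array of mixture proportions with rows summing to 1.
    Returns the six Scheffé terms x1, x2, x3, x1x2, x1x3, x2x3."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Hypothetical pseudo-component design points and a hypothetical response:
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
              [1 / 3, 1 / 3, 1 / 3]], dtype=float)
y = np.array([0.42, 0.40, 0.45, 0.39, 0.47, 0.43, 0.41])  # e.g. water activity

model = LinearRegression(fit_intercept=False).fit(scheffe_quadratic_terms(X), y)
print(model.coef_)  # beta_1, beta_2, beta_3, beta_12, beta_13, beta_23
```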
Processing of cookies
The formulation used in this test was defined after preliminary tests, according to Table 2. A standard cookie with no JPF (control) was prepared for comparison with the desired formulation in the sensory evaluation. The ingredients were mixed manually in a container until a homogeneous dough was obtained. The cookies were shaped in a PVC ring (1 cm x 4 cm), placed in a rectangular baking tray (24.5 x 8 x 35.5 cm) smeared with 5 g of soybean oil, and baked in a preheated 180 °C oven for 20 minutes. The procedures were identical for all formulations and were conducted in three replicates.
Technological analyses
The analysis of water activity (Aw) was performed in an Aqua Lab unit (CX-2, Washington, USA). The water absorption index (WAI) and water solubility index (WSI) of the standard cookie and the cookies formulated with JPF were determined according to the methodology of Anderson (1969). Analyses were carried out in triplicate. Instrumental color parameters (L*, a* and b*) were measured with a colorimeter (Color Quest II, Hunter Lab, Reston, Canada) according to Paucar-Menacho et al. (2008). Thirty readings were performed on three standard cookies and on three cookies made with jabuticaba peel flour from each experimental point. The specific volume (SV) of the cookies was determined from the displacement of millet seeds in 15 replicates, according to the method described by Silva, Silva and Chang (1998).
Determination of the rupture force of the cookies was performed in a texture analyzer (Stable Micro Systems, TA.TX Express, Surrey, England) using a rectangular probe with a Warner-Bratzler steel blade and a reversible blade to cut each cookie in half, which was then placed horizontally on a platform. The pre-test and post-test speed was 10 mm s-1, and the test speed was 2 mm s-1. The distance from the product to the probe was 8 mm. A total of 15 determinations of each formulation was performed on the second day after preparation. The analyzed samples were selected randomly.
Table 1: Design of mixtures to study the effect of jabuticaba peel flour (JPF), oatmeal flour (OF) and wheat flour (WF) on the dependent variables, in actual proportions (g (100 g)-1) and in pseudo-components, defined by a simplex design with x1 + x2 + x3 = 1 (or 100%). (Columns: experiment; proportions of ingredients in the ternary mixture. Table body not recovered from extraction.)
Desirability test
From the mathematical models obtained for the Aw, thickness, SV, WAI, WSI, rupture force, L*, a* and b* of the cookies generated in the experimental design, and with the aid of the Response Desirability Profiling function of the Statistica software (Statsoft, Statistica 7.0, Tulsa, USA), an estimate was made to determine the most desirable cookie formulation for the sensory analysis. A cookie was considered to have the most desirable formulation if it had higher values of SV and rupture force and lower values of L* and WAI.
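A common way to approximate such a profiling step is with Derringer-Suich desirability functions; the sketch below (our own simplification, with bounds loosely taken from the response ranges reported in this paper, not the internals of the Statistica routine) maximizes SV and rupture force while minimizing L* and WAI:

```python
import numpy as np

def d_max(y, lo, hi):
    """Desirability of a response to be maximized, clipped to [0, 1]."""
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

def d_min(y, lo, hi):
    """Desirability of a response to be minimized, clipped to [0, 1]."""
    return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0))

def overall_desirability(sv, force, L, wai):
    # Geometric mean of the individual desirabilities (hypothetical bounds).
    d = [d_max(sv, 1.3, 1.8), d_max(force, 30000.0, 70000.0),
         d_min(L, 20.0, 60.0), d_min(wai, 0.5, 1.0)]
    return float(np.prod(d) ** (1.0 / len(d)))

# Score the predicted responses of one candidate formulation:
print(overall_desirability(sv=1.7, force=55000.0, L=30.0, wai=0.7))  # ~0.69
```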
Validation of the Model
Using the values obtained in the technological analysis, validation tests of the models were carried out by comparing the expected with the observed results, using two original replicates and five laboratory replicates. Models with an R2 lower than 0.83 were discarded.
Microbiological tests of the most desirable JPF and cookie samples for sensory analysis
Microbiological analyses were performed at the Laboratory of Hygiene and Sanitary Control of Food (LACHSA), Faculty of Nutrition (FANUT/UFG), and met the microbiological standards established by RDC Resolution n. 12 of the Brazilian National Health Surveillance Agency (ANVISA) of the Ministry of Health. Counts of yeasts and molds and of coagulase-positive staphylococci were also determined for all samples as hygienic-sanitary indicators of the processing conditions and pathogenicity, to complement the evaluation of the proposed microbiological profile. Microbiological analysis followed the procedures described by the American Public Health Association - APHA (2001) and the Food and Drug Administration - FDA (2002).
Sensory analysis of standard and most desirable cookies
The subjects eligible to participate in this study were 100 students aged between 9 and 14 years old, of both sexes, with the interest, availability and parental consent to participate in the analysis. This age range was chosen because these individuals are consumers of school meals, and the Association of Rural Producers in the Region of Bom Sucesso (APRO-BOM) is a supplier of such products to the municipalities in the region. Analyses involving humans were conducted after approval by the Research Ethics Committee of the UFG under opinion number 497.446.
The acceptability test was applied using a 5-point facial hedonic scale in which the appearance, texture, flavor and odor were evaluated according to a randomized block design, in which each panelist represented a block. The third point of the scale was considered the limit of acceptance of the cookie formulations.
Statistical analysis
The results of the physical and sensory analyses (acceptance) were evaluated by analysis of variance (ANOVA) at a significance level of 5%. For the analysis of WSI, a significance level of 10% was used. The technological analyses of the cookies were evaluated using response surface graphs. Statistica software version 7.0 (Statsoft, 2004) was used to obtain the experimental design, perform data analysis and construct the graphics.
Mixture design and technological characteristics
The jabuticaba peel flour obtained in this study, as well as the wheat and oatmeal flours, were fine-grained (80 mesh). According to Hoseney and Rogers (1990), the particle size distribution influences the ability of flour to absorb water, and smaller particles absorb proportionately more water, more rapidly, than larger ones. In this sense, it is possible to infer that the cookies in this study absorb water uniformly, depending on the textural properties of the flour used.
According to Table 3, all the regression models of cookies with jabuticaba peel were significant; the models for Aw, thickness, SV, WAI, rupture force, L*, a* and b* were significant at 5%, and the WSI model was significant at 10%. In addition, the regression models had coefficients of determination (R2) greater than 83%, except for thickness, for which the coefficient of determination (R2) was greater than 67%. Therefore, all models were considered predictive.
Analysis of Figure 1A verifies that the highest values of Aw (close to 0.48) were found in formulations with 0.30 to 0.60 g (100 g)-1 JPF (Table 1), with 0.15 to 0.35 g (100 g)-1 OF and 0.25 to 0.33 g (100 g)-1 WF. Lower Aw values (close to 0.38) were found in cookies with JPF formulations ranging from 0.44 to 0.53 g (100 g)-1, with OF between 0.27 and 0.35 g (100 g)-1 and WF between 0.20 and 0.21 g (100 g)-1. Madrona and Almeida (2010) found mean Aw values of 0.36 in cookies made with okara and oatmeal, which is 6% and 2% lower than the maximum and minimum values, respectively, of Aw found in cookies with jabuticaba peel in this study. Vieira et al. (2010), studying sweet biscuits made with 15% cassava starch as a substitute for wheat flour, found Aw values of 0.31, which is 11% and 7% lower than the maximum and minimum values, respectively, of Aw found in cookies with jabuticaba peel in the present study.
The concept of Aw is highly valued in studies on changes in foods because it is directly related to the growth and metabolic activity of microorganisms and to hydrolytic reactions. According to Ordonez et al. (2005), foods with an Aw lower than 0.60 are microbiologically stable, because this Aw is considered limiting for the growth of microorganisms. Even with Aw values higher than in previous studies on cookies, the Aw of the cookies made with jabuticaba peel flour was below the recommended limit for the development of microorganisms. Aw levels below 0.6 can retard the growth of microorganisms, also reducing the activity of spoilage reactions and favoring a long shelf life of foods (Ordonez et al., 2005).
In Figure 1B, the highest thicknesses of the cookies (close to 1.38 cm) were found in formulations with JPF ranging from 0.30 to 0.34 g (100 g)-1 (Table 1), with OF between 0.30 and 0.35 g (100 g)-1 and WF from 0.31 to 0.40 g (100 g)-1. In contrast, the lowest thicknesses (close to 1.18 cm) were found in formulations with JPF ranging from 0.57 to 0.60 g (100 g)-1, with OF between 0.15 and 0.21 g (100 g)-1 and WF from 0.20 to 0.28 g (100 g)-1.
The cookies prepared with higher percentages of JPF had smaller thicknesses than the formulations with higher proportions of OF and WF, because JPF has a high content of dietary fiber (15.25%) (Ferreira et al., 2012). Perez and Germani (2007) noted that flours with a high fiber content tend to retain more water due to the hydrophilic characteristics of the fiber.
Analysis of Figure 1C verifies that cookies with the highest SV values (close to 1.8) were those with a JPF content ranging from 0.43 to 0.60 g (100 g)-1 (Table 1), with OF from 0.15 to 0.28 g (100 g)-1 and WF between 0.20 and 0.47 g (100 g)-1. In contrast, cookies with the lowest SV values (approximately 1.3) were those with JPF varying between 0.30 and 0.32 g (100 g)-1, with OF from 0.34 to 0.35 g (100 g)-1 and WF between 0.33 and 0.36 g (100 g)-1.
The SV of the cookies is affected by several factors, such as the quality of the ingredients used in the preparation of the dough, especially the flour, and the processing conditions (Moura et al., 2010). Cookies produced with larger fractions of OF had lower values of SV. This can be explained by the higher water retention capacity resulting from the presence of soluble dietary fiber in OF (6.8%) (Ada, 1999). However, the SV values found in cookies with JPF are in agreement with those found by Gutkoski, Nodari and Jacobsen Neto (2003) in their study on sugar-snap cookies elaborated with WF from three cultivars; these authors found SV values ranging from 1.15 to 1.41. Assis et al. (2009) studied cookies prepared with different fractions of pumpkin seed and obtained SV values between 1.00 and 1.36. In Figure 2A, the highest values of WAI (close to 1 g gel g-1) were observed in JPF formulations ranging from 0.30 to 0.60 g (100 g)-1 (Table 1), with OF between 0.15 and 0.35 g (100 g)-1 and WF from 0.24 to 0.35 g (100 g)-1. In contrast, lower values of WAI (0.5 to 1 g gel g-1) were found in formulations with JPF ranging from 0.30 to 0.45 g (100 g)-1, with OF between 0.15 and 0.31 g (100 g)-1 and WF from 0.39 to 0.40 g (100 g)-1. The WAI is related to the availability of hydrophilic groups (-OH) to bind to water molecules and to the gel-forming capacity of starch molecules. Cookies produced with higher percentages of JPF tend to absorb more water because JPF has a high level of dietary fiber (15.25%) (Ferreira et al., 2012). Perez and Germani (2007) noted that flour with a high fiber content tends to retain more water due to the hydrophilic characteristics of the fiber.
Camargo, Leonel and Mischan (2008), in a study of biscuits extruded from sour cassava starch with fiber, obtained WAI values of 4.8 to 11.9 g gel g-1. The WAI values of the cookies with JPF in this study were up to 91.6% lower than the values found in the biscuits extruded from sour cassava starch, because gelatinized starch granules absorb water and swell at room temperature. However, increasing the degree of gelatinization also increases starch fragmentation, thereby decreasing water absorption (Borba; Sarmento; Leonel, 2005; Carvalho et al., 2002).
Analysis of Figure 2B verifies that the highest values of WSI (close to 80%) were found in formulations with JPF ranging from 0.41 to 0.60 g (100 g)-1, with OF between 0.15 and 0.35 g (100 g)-1 and WF from 0.20 to 0.40 g (100 g)-1. The lowest WSI values (close to 30%) were found in formulations with JPF ranging from 0.30 to 0.37 g (100 g)-1, with OF between 0.33 and 0.35 g (100 g)-1 and WF from 0.29 to 0.35 g (100 g)-1.
The WSI is related to the amount of soluble solids present in a dried sample and can be used to verify the intensity of the heat treatment, as a function of the gelatinization, dextrinization and consequent solubilization of starch, among the other components of the raw material, such as protein, fat and fiber (Moura et al., 2010). Camargo, Leonel and Mischan (2008) studied the production of extruded biscuits of cassava starch with fiber and obtained WSI values ranging from 23.17 to 29.23%. The lowest WSI values of the cookies with JPF corresponded to the highest WSI value of the extruded biscuits composed of sour cassava starch with fiber. The water solubility of the cookies in the present study can be explained by the presence of fiber in the JPF and OF (Leite-Legatti et al., 2012; Ada, 1999) and the presence of starch in the WF (Nascimento; Wang, 2013). The concentrations of the different flours in the cookies determine a higher or lower solubility in water. In this study, formulations with higher percentages of JPF and WF tended to be more water-soluble. This increased solubility is attributed to the dispersion of the amylose and amylopectin molecules as a result of gelatinization when conditions are milder, and to the formation of low-molecular-weight compounds when conditions are harsher (Colonna et al., 1984).
From Figure 2C, the highest values of rupture force (close to 70000 N) were found in formulations with JPF ranging from 0.43 to 0.47 g (100 g)-1 (Table 1), with OF from 0.35 to 0.47 g (100 g)-1 and WF from 0.20 to 0.22 g (100 g)-1. The lowest rupture force values (close to 30000 N) were found in formulations with JPF ranging from 0.56 to 0.60 g (100 g)-1, with OF between 0.15 and 0.16 g (100 g)-1 and WF from 0.24 to 0.29 g (100 g)-1. Formulations with higher percentages of JPF had a significantly lower rupture force. In a study on the physical characteristics of biscuits with soy flour and oat bran, Mareti, Grossmann and Benassi (2010) found rupture force values of 3000 N. Differences in rupture force are related to the level of substitution of WF and OF by JPF, and also to the other ingredients and their proportions. McWatter et al. (2003) studied the physical and sensory characteristics of biscuits containing a mixture of fonio (a variant of millet) and cowpea flours and attributed the tougher texture of the cookies to the increased protein content and its interactions during the preparation of the dough and baking. The results of this study can be explained by the high fiber content of JPF (Leite-Legatti et al., 2012). Perez and Germani (2007) noted that flour with a high fiber content tends to retain more water due to its hydrophilic characteristics. This could have generated a more humid and consequently softer cookie. Figure 3 shows the color analysis performed on the JPF cookies of this experiment.
Similar to the L* value, a* and b* also decreased with increasing percentages of JPF in the formulations. This same tendency was also reported by Ajila et al. (2010). This can be explained by the fact that jabuticaba peels have enzymes such as polyphenol oxidase (Daiuto et al., 2010) and are rich in polyphenols (Leite-Legatti et al., 2012), which are a substrate for this enzyme (Ajila; Bhat; Rao, 2007). In this sense, these values decreased due to enzymatic browning.
Desirability test
From the experimental models of the instrumental parameters SV, rupture force, L* and WAI, the desirability test was carried out to define the final formulation of the cookie. The results of the desirability test indicated that the most desirable formulation for the cookie was a mixture of 0.45 g (100 g)-1 JPF, 0.35 g (100 g)-1 OF and 0.20 g (100 g)-1 WF, which corresponds to experimental point 4 (Table 1).
Model validation
The models were validated by comparing the observed with the expected results for Aw, SV, WAI, WSI, rupture force, L*, a* and b* (Table 4), taking into account only values with an R2 above 0.83. The predicted models corroborated the analytically determined values, i.e., a cookie containing jabuticaba peel flour with technological characteristics close to those predicted by the models was obtained, with percent variations of 8.74% for Aw, 3.33% for SV, 9.6% for WAI, 5.68% for WSI, 6.2% for rupture force, 3.9% for L*, 10.9% for a* and 6.9% for b*.
The equated values (Table 2) were calculated based on the equations derived from the statistical coefficients of the mixture design for the attributes listed in Table 4. The differences between the analyzed and calculated values are related to experimental error and to the coefficients of determination of the equations.
Microbiological tests of the most desirable JPF and cookie samples for sensory analysis
According to the results, the jabuticaba peel flour, the standard cookie and the most desirable cookie with JPF presented microbiological characteristics suitable for consumption, because their values were below the maximum levels permitted by Brazilian law.
Sensory analysis of cookies
In the sensory analysis, results were obtained regarding the acceptance of the product in the categories of appearance, texture, flavor and odor. For the standard cookie, the scores obtained for the parameters of appearance, texture, flavor and odor were 3.88, 3.99, 4.14 and 3.52, respectively. For the cookie containing JPF, the scores obtained for the same parameters were 3.62, 3.52, 3.06 and 3.46, respectively. For all parameters, the scores for the standard cookie were higher than those obtained for the cookie with JPF. Adding a larger amount of vanilla essence to the formulation is suggested to improve the sensory scores, because vanilla influences the flavor and odor, which received the lowest scores for the cookie prepared with JPF. However, according to the pre-defined methodology, a product is considered accepted if it obtains scores greater than or equal to 3 (indifferent). In this sense, both cookies were accepted by the children.
The sensory analysis of the cookies containing JPF in the present study corroborates that conducted by Ferreira et al. (2012), who replaced 10% of WF with JPF. These authors obtained a cookie with good physical properties and, based on their acceptance test, obtained scores of 3.68, 2.95 and 4.21 for appearance, flavor and texture, respectively. In this study, appearance and texture were also the main features positively influencing the acceptance of the cookies. The flavor score determined in this study was 3.5% higher than that in the cited work. These data can be considered excellent because the cookies of the present study were prepared with 45% substitution of WF and OF by JPF, whereas the former had only 10% substitution of WF by JPF. Given this difference and the presence of dietary fiber and bioactive compounds in JPF (Leite-Legatti et al., 2012), the importance of eating these biscuits in school meals should be emphasized because of the health benefits and the consequent significant improvements in teaching and learning (Lucero et al., 2010). The major nutritional contributions of the cookies with jabuticaba peels are related to their fiber content, which was 127% higher than that of the standard cookie, and their caloric value, which was 11% lower than that of the standard cookie (unpublished data).
Furthermore, the use of flours elaborated from agro-industrial co-products should continue to be investigated for the development of food formulations in general, because they contribute to the use of regional products, enhance the development of alternative foods and support preservation and sustainable development in the native areas of the Cerrado and the Atlantic Forest.
CONCLUSIONS
The results presented in this study indicate that JPF is a raw material with good technological qualities, thus being an alternative for use in cookies without a loss of the physical and sensory qualities of the product. Moreover, this is a new possibility to produce cookies based on an agro-industrial co-product, which is interesting from a commercial standpoint due to aspects related to the sustainability of the jabuticaba processing industry and the market for this type of product.
Figure 1: Water activity (Aw) (A), thickness (B) and specific volume (SV) (C) of the cookies with JPF. Response surface generated by the experimental model (in terms of pseudo-components). The area delimited between points 1 and 6 shows the region analyzed experimentally.

Figure 2: Water absorption index (WAI) (A), water solubility index (WSI) (B) and rupture force (C) of the cookies with JPF. Response surface generated by the experimental model (in terms of pseudo-components). The area delimited between points 1 and 6 shows the region analyzed experimentally.

Figure 3: Luminosity (A), chroma a* (B) and chroma b* (C) of the cookies with jabuticaba peel. Response surface generated by the experimental model (in terms of pseudo-components). The area delimited between points 1 and 6 shows the region analyzed experimentally.

Table 2: Actual concentrations of the ingredients used in the formulations of cookies with JPF replacing WF and OF.

Table 3: Adjusted polynomial models, significance level (p), lack of fit (LF) and coefficient of determination (R2) for the physical properties of cookies with JPF as a function of the pseudo-components of JPF (x1), OF (x2) and WF (x3).

Table 4: Observed and expected results for Aw, thickness, SV, WAI, WSI, rupture force, L*, a* and b* for the cookie formulation with JPF, according to the predicted models.
SARS-CoV-2 Prediction Strategy Based on Classification Algorithms from a Full Blood Examination
A fast and efficient diagnosis of serious infectious diseases, such as the recent SARS-CoV-2, is necessary in order to curb both the spread of existing variants and the emergence of new ones. In this regard, and recognizing the shortcomings of the reverse transcription-polymerase chain reaction (RT-PCR) and rapid diagnostic test (RDT), strategic planning in the public health system is required. In particular, what is needed is to help researchers develop more accurate diagnostic means to distinguish symptomatic COVID-19 patients from those with other common infections. The aim of this study was to train and optimize the support vector machine (SVM) and K-nearest neighbors (KNN) classifiers to rapidly identify SARS-CoV-2 (positive/negative) patients through a simple complete blood test, without any prior knowledge of the patient's health state or symptoms. After applying both models to a sample of patients at Israelita Albert Einstein in São Paulo, Brazil (solely for two examined groups of patients' data: "regular ward" and "not admitted to the hospital"), it was found that both provided early and accurate detection, based only on a blood profile selected via a statistical test of dependence (ANOVA test). The best performance was achieved by the improved SVM technique on nonhospitalized patients, with precision, recall, accuracy, and AUC values reaching 94%, 96%, 95%, and 99%, respectively, which supports the potential of this innovative strategy to significantly improve initial screening.
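As a minimal, hedged sketch of the strategy just summarized (ANOVA-based feature selection over complete-blood-count variables followed by SVM and KNN classifiers with scikit-learn; the file name, label column, and hyperparameters are illustrative assumptions, not the exact pipeline detailed later in this paper):

```python
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif  # ANOVA F-test
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical CSV of complete blood counts with a 0/1 SARS-CoV-2 label.
df = pd.read_csv("blood_tests.csv")
X, y = df.drop(columns=["sars_cov_2"]), df["sars_cov_2"]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=10),  # assumes >= 10 features
                         clf)
    pipe.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, pipe.predict(X_te)))
```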
Introduction
The World Health Organization designated SARS-CoV-2 (COVID-19) as a pandemic on March 11, 2020 [1]. The rapid propagation of the disease around the world has increased the need to apply health protection measures. These measures were aimed at solving the problem of overburdened intensive care units, as well as at strengthening and preserving the capacity of hospitals. As a result, many countries have adopted new health approaches and diverse perspectives to prevent the excess spread of the virus, in terms of virus vitality within a specific political-economic territory. Examples include the closure of borders and the cancellation of sporting and cultural events. Unfortunately, these decisions have caused economic, social, and environmental disruptions. In addition, they have brought uncertainties and fears to the world economy, education, health, and the fundamental rights of the population.
As of September 30, 2022 [2], more than 622,585,710 cumulative cases of SARS-CoV-2 had been confirmed worldwide, along with more than 6,547,814 deaths in 228 countries and territories. Approximately 40% of cases present with mild disease (cough and fever), 40% with moderate disease (bilateral pneumonia), 15% with severe disease, and 5% with critical disease [3]. The severe consequences of COVID-19 are due to its rapid spread, the inability to make a quick and accurate diagnosis, and the inability to perform large-scale testing of patients. It is therefore crucial to establish rapid and reliable diagnostic methods to detect the disease in real time [4].
Indeed, healthcare is a vast sector that requires the collection, analysis, and processing of medical data, which has recently become challenging due to several factors, such as massive data volumes, the inadequacy of wireless network applications, and security issues [5]. Hence, it is essential to use data mining to find and extract rich information for classification. Medical datasets can be used to precisely detect SARS-CoV-2 infections [6]. However, the primary limiting factor is data processing, which necessitates real-time data collection and the provision of data to researchers for immediate medical response.
In the same vein, artificial intelligence (AI) promises to transform the healthcare sector [7]. Machine learning (ML) and deep-learning algorithms are capable of detecting COVID-19 [8]. In fact, classification is one process by which COVID-19 patients are assigned to their corresponding classes [9]. There are many classification methods, such as the Bayesian method, AdaBoost, random forest, artificial neural networks, and K-nearest neighbors [9].
Much COVID-19 research has focused on how AI can be deployed to detect, confrm, and make forecasts at early stages.As the authors in [10] involved regression models (CUBIST, RF, RIDGE, SVR, and stacked-set learning), the ARIMA statistical model has also been used in some cases to make similar predictions.Te authors in [11] adopted BRNN, KNN, QRF, and SVR as well as the VDM approach coupled with exogenous climate variables to predict confrmed cumulative cases in ten Brazilian states.Tese predictions were made one, three, and six days in advance.As reported in [12], the SVM model detected and discriminated patients with severe COVID-19 from those with mild symptoms using 28 features based on clinical information and blood/urine test data, with an overall accuracy of 0.8148.
Previously, an efcient scheme in [13] was proposed using the available, relevant X-ray images to train an efcient deep neural AI network and use the trained parameters to detect COVID-19 cases even with a very small sample of COVID-19 X-rays.Te proposed method provided a very satisfactory detection performance at 97.4% accuracy.
A case report in [14] emphasized the importance of full autopsy in understanding the disease process and identifying potential targets for therapeutic interventions.Te authors of the aforementioned study conducted a full autopsy on a confrmed COVID-19 patient in Lagos, Nigeria, providing valuable insights into the pathological features of the disease.
Two studies proposed diferent machine-learning approaches for addressing COVID-19 challenges.Ribeiro et al. [15] proposed the use of ensemble-learning models coupled with urban mobility information to predict COVID-19 incidence cases.Tis approach leverages the relationship between human mobility patterns and the spread of the virus to achieve accurate predictions.Another study [16] introduced an equilibrium-based COVID-19 diagnostic method using routine blood tests and a sparse deep convolutional model.Tis method provides a noninvasive, low-cost, and potentially more accurate alternative to existing diagnostic methods.Da Silva et al. [11] focused on using climatic exogenous variables to forecast COVID-19 cases.Tis study proposed a novel approach for forecasting Brazilian and American COVID-19 cases based on artifcial intelligence coupled with climatic exogenous variables, providing a more holistic approach to COVID-19 prediction.
In [17], the researchers applied AI to identify commercially available medicines that may be efective in treating patients with COVID-19.At the core of their proposed model, they implemented the bidirectional encoder representations from transformers (BERTs) framework.
COVID-19 primarily afects the respiratory system.Tus, in [18], the authors presented a fne-tuned model based on a generative adversarial network to detect one of the symptoms of COVID-19 infection from chest X-ray scans.Gunraj et al. [19] applied a convolutional neural network model to detect COVID-19 in patients using chest X-ray images.Tey used pretrained ImageNet and trained the model on an open-source dataset of X-ray images.Aggarwal et al. [20] reviewed and summarized a number of important research papers on deep learningbased classifcation of COVID-19 across CXR and CT images.Using a deep learning-based P-shot N-ways Siamese network as well as prototypical nearest neighbor classifers, classifcation of COVID-19 infection from lung CT slices was proposed by the authors in [21].Another approach for classifcation of COVID-19 chest X-ray images from two diferent datasets (small and large datasets) using a tunable Q-wave transform (TQWT) based on a memristive crossbar array (MCA) was proposed by the authors in [22].Te average accuracy values obtained for the proposed method are 98.82% and 94.64%, respectively.
Together, these studies highlight the potential of different approaches; therefore, it is necessary to construct prediction techniques and innovative applications for frequent diseases, as well as to further expand prediction methodologies.Te objective of the present study is to address this need.Te main contributions of the proposed work that have not been addressed in the prior art are as follows: (i) We showed that it is possible to predict whether a person is positive or negative for COVID-19 infection in the early stage of the disease, using anonymized data from Israelita Albert Einstein Hospital [23].Te data analysis process consists of two stages: statistical analysis followed by data processing with machine-learning algorithms using SVM and KNN.Tis article is structured as follows: Section 2 includes a description of the dataset, the data analysis and data preprocessing for the classifcation algorithms used, and a detailed description of the materials and methodology.Te results of the experiment are presented in Section 3. Section 4 discusses the results, and Section 5 ofers conclusions on the prospects for use of this analysis procedure to detect COVID-19.
Dataset.
Te data used in this study were obtained from the Kaggle website [23].Information was retrieved for patients treated at Israelita Albert Einstein Hospital in São Paulo, Brazil, who had samples collected to perform the SARS-CoV-2 RT-PCR and additional laboratory tests between March 28, 2020, and April 3, 2020 [24].Following international best practices, all data were anonymized.Te normalization process resulted in a mean of 0 and a standard deviation value for all clinical data.
Te hospital data consisted of 5,644 individual patients and 111 variables, as presented in Table 1.Te patients were classifed into four groups: community (not admitted to hospital), regular ward, semi-intensive unit, and intensive care unit (see Table 2).
Data Analysis.
In the dataset provided, we have divided the extracted information into columns and rows.Rows are referred to as observations.Each column in this dataset shows some information about observations, such as hematocrit, hemoglobin, or platelets.Tese columns are labeled features or predictor variables of our dataset.Te "SARS-CoV-2 exam result" column classifes our dataset and predicts whether or not the individual is infected with COVID-19; consequently, it is considered as the target variable.
Te degree of infuence that the variables in the dataset have over the target value can be determined by their correlation with the target.As a result, we were able to pinpoint the features that can distinguish an infected patient from a noninfected patient.
During our data mining and analysis of the "blood/target and hospitalization/blood" visualization graphs, we noticed that the monocyte, platelet, leukocyte, and eosinophil levels for infected and noninfected individuals were signifcantly diferent (see Figure 1).In addition, the relationship between a patient's hospitalization status and their blood characteristics difered for each hospitalization category (community, regular ward, semi-intensive unit, or intensive care unit) (see Figure 2), which presents the possibility that these variables are related to positive COVID-19 infection.Testing this hypothesis through Student's t-test allowed us to verify that the means (averages) between the two distributions (positive versus negative COVID-19 test result) are significantly diferent at the level of these variables.
Student's t-test results (see Table 3) support our hypothesis that the levels of platelets, monocytes, eosinophils, and leukocytes are signifcant for predicting SARS-CoV-2 and, therefore, can assist in decision-making.
Data Preprocessing.
Data preprocessing consists of treating, fxing, and preparing data before inputting it for machine learning.Te goal is to transform the raw data into Te Scientifc World Journal a format conducive for the development of a machinelearning model and to clean the dataset as much as possible to improve the performance of the model.For our data, we followed a simple and efcient approach.Te dataset consists of columns with continuous and categorical variables.Since the machine-learning model requires that all input data be in numeric form, we have coded the target value "SARS-CoV-2 exam result" by assigning 0 for "negative" and 1 for "positive."Te hospital data contain 111 columns with 90% missing values (5,046 of the 5,644 results).Te dataset is also challenging because no information is provided about the patients except their ages, which makes it difcult to fll out the missing data using precise extrapolation methods.Using diferent methods to recover the missing data using the mean value is efective for some cases, but not for a set of medical exam results (sensitive data).For all these reasons, 5,046 of the 5,644 results were excluded from analysis, leaving only 598 cases (517 positives and 81 negatives) containing complete variables for use in the study.
Te analysis was performed based on patients' severity according to their hospitalization status.Blood counts were obtained for the community, regular ward, semi-intensive unit, and intensive care unit cohorts (see Table 2).Only patients with a full blood examination and RT-PCR SARS-CoV-2 outcome were included.
To ensure that our prediction is based on early indicators, patients in the semi-intensive and intensive care units were removed from our analysis.In addition, we excluded pathogenic (viral) factors and age from our study.
In this work, we used feature selection using the SelectKbest transformer and polynomial features in both groups of the dataset to fnd the most important variables.Given the result of our statistical test (Table 3), we will examine only the blood variables.Tese variables will be used to detect the presence of SARS-CoV-2.
Evaluation Metrics.
Te purpose of this study was to accurately predict whether an individual is infected with COVID-19 based on available clinical data.Te main issue in this study is the unbalanced classes.Since this is a very sensitive prediction, accuracy alone is typically not sufcient in the absence of other performance measures.In this case, we used a confusion matrix to evaluate the performance of classifcation models.Four indicators are measured in the confusion matrix: accuracy, recall, precision, and F1 score (see Table 4) [25].Tese indicators are defned as follows.
Te terms used in the equations are a, true positive; d, true negative; b, false positive; c, false negative; r, recall; and p, precision.
Accuracy.
Accuracy is the percentage of all predictions that were accurate.
Te formula is 2.4.5.AUC.AUC is the area beneath the ROC curve.It is calculated using the ROC curve, which is a plot of the true positive rate versus the false positive rate.Te greater the area under the plotted line, the better the algorithm performs due to its higher sensitivity and specifcity.Te commonly used metric known as the "area under the ROC curve," or "AUROC," ofers an easy approach to compare algorithms.[27].SVM involves fnding a hyperplan, whose ideal location is in the center of two classes.Te best hyperplan equation is that which maximizes the margin between the two groups in various classes [28].Te choice of the kernel function is an essential component because a suitable kernel function is imperative for the SVM to acquire learning capability.Terefore, we employ SVM with the radial basis kernel function.Meanwhile, the KNN algorithm is considered a type of lazy learning since it is practical machine learning that does not require preparation or a training cycle.Because of its straightforwardness, the KNN calculation is one of the ten best known data-mining algorithms [29].KNN demonstrates high profciency and a magnifcent capacity to tackle troublesome classifcation problems.As a rule, KNN is a valuable and quick procedure [30], which lends itself to our purpose of saving valuable time for health experts.
Terefore, we implemented and regularized the two models as follows (see Figure 3).First, we created a list of models that included SVM and KNN and then submitted all the models to the same evaluation procedure.We note that the algorithms in the list are introduced through a pipeline that includes steps completed in the preprocessing phase.Ten, we renamed this pipeline that has the polynomial features and SelectKbest transformers as "preprocessors."Tis preprocessor pipeline is appended, upstream, to these two models.In contrast, we created a new pipeline of the SVM model that contains the preprocessor followed by a standardization operation (with the StandardScaler function) and an SVC classifer.In addition, we applied the same process to KNN, which is a pipeline containing the preprocessor, StandardScaler, and KNeighborsClassifer.We trained and evaluated both models on their default hyperparameters using our evaluation procedure.(Te evaluation function provides training and testing of the models, as well as visualization of the confusion matrix and the learning curve.)Our goal was to improve the performance of these models by enhancing these hyperparameters.
2.5.1.Hyperparameter-Tuning Techniques.Te random search technique via the RandomizedSearchCV function enables identifcation of the best hyperparameters by comparing the performance of each combination using the cross-validation technique.We created a dictionary containing the diferent hyperparameters (penalty coefcient C, Gamma, polynomial feature, and SelectKbest) to be regulated.We embedded the SVM model, which is a pipeline, as well as the dictionary of hyperparameters in the function RandomizedSearchCV, followed by a scoring rubric which is the recall with cross-validation (cv � 10).We applied the same process to KNN using the RandomizedSearchCV function.It included the KNN model, a dictionary of hyperparameters ("neighbors classifer weights," "neighbors classifer neighbors," and "polynomial features degree," SelectKbest k), followed by a scoring rubric, which is always the recall with cross-validation equal to 10 and the number of iterations fxed at 100.
Application of SMOTE to the Imbalanced Dataset (Community).
Te community dataset included 39 positive and 431 negative patients.Terefore, the data are characterized by a distribution of the modalities of the class that is very far from a uniform distribution (that is, unbalanced classes), which is a relatively frequent situation in some classifcations.More concretely, unbalanced classes generally refer to a classifcation problem where the classes are not equally distributed.Te difculty of working with unbalanced data classes (defned as positive/negative � 0.09) is that the KNN and SVM models ignore the minority class.A class imbalance increases the difculty of learning via the classifcation algorithm.Indeed, the algorithm has few examples of the minority class to learn from.It is therefore biased and produces potentially less robust predictions than if the data were balanced.
Te imbalance between the two classes in the community dataset is signifcant (positive/negative � 0.09), thereby degrading the performance of the defned ML model.Tus, the SMOTE technique [31] is adapted to balance the two classes in the dataset, a type of data augmentation for the minority class, 1, and designed to make it similar to the majority class, 0. We used the implementations of SMOTE provided by the Python library imbalanced-learn set to their default parameters (k neighbors � 5 . ..); this object is an implementation of [31].Te 10-fold stratifed crossvalidation technique is again applied and repeated recursively for 10 classes.
Next, we divided the dataset, designating 85% of the data points for training and 15% for testing.Finally, we implemented and evaluated the SVM and KNN models, as shown in Figure 3.
Results
In this section, the efectiveness of the proposed SARS-CoV-2 prediction strategy is evaluated.Te proposed SVM and KNN classifers will be evaluated for both regular ward and community groups to accurately detect SARS-CoV-2 patients.Te performance of each implemented model is presented in terms of AUC, accuracy, precision, recall, and F1 score.
Statistical Analysis. Polynomial features and SelectKbest
provide more information about the most important variables.SelectKbest selects the 14 variables with a statistical test score of dependence (ANOVA test) with the target.Tese variables are the most signifcant for predictive purposes.Te dependency test analysis of the main variables according to the SelectKbest transformer corresponding to the patients in the regular ward (see Table 5) and the community ward revealed a recognizable ANOVA test score.It should be noted that the four variables from the regular ward data had the best scores for eosinophils, followed by red blood cells, hemoglobin, and leukocytes (see Table 6).Community patients with SARS-CoV-2 have high scores on leukocyte, monocyte, platelet, and eosinophil parameters.
Results for Patients Admitted to the Regular Ward.
SVM, run using the default settings, yields a precision value of 89% and a recall value of 75% and a precision and F1 score of 1 and 86%, respectively, for class 1 (patients testing positive) on 10-fold stratifed cross-validation, as shown in Table 7. Receiver operating characteristic (ROC) curves were plotted for the 10-fold and area under the curve (ROC) values for all folds (Figure 4).After the model was improved through optimization of its parameters via a random search, we obtained almost the same values for the model metrics.
Te metric evaluations of the model are presented in Figure 5. Te confusion matrix and the learning and validation curve are illustrated in Figures 6 and 7.
Te results of the implementation of KNN with default parameters yield an average AUC of 84% on 10-fold stratifed cross-validation, an accuracy of 78%, a recall of 75%, a precision of 75%, and an F1 score of 75%, respectively, for the class 1 patients.After regularization of the hyperparameters, the AUC improved remarkably, from 84% to 91%.While the other metrics remain almost the same (Figure 8), the AUC score is equal to 0.91 ± 0.11 (see Figure 9).Te results of the two classifers are summarized in Tables 7 and 8.
Implementation Results for the Community Dataset.
SVM defned for the data on patients not admitted to hospital using default parameters yields an average accuracy of 85%, a recall of 81%, a precision of 89%, an F1 score of 84%, and an AUC of 99% for class 1 on 10-fold stratifed cross-validation (see Table 9).Te ROC curves for all 10-folds produce an average AUC of 0.99 ± 0.00.Meanwhile, the optimized SVM results yield an average AUC of 99% (see Figure 10), an accuracy of 95%, a recall of 96%, a precision of 94%, and an F1 score of 95%, respectively (see Table 10 and Figure 11).Te confusion matrix, learning curve, and validation curve are illustrated in Figures 12 and 13.
After tuning parameters and 10-fold stratifed crossvalidation are applied, the mean AUC score is 0.99 ± 0.10 (see Figure 14), accuracy is 90%, recall is 91%, precision is 89%, and the F1 score is 91%, respectively (see Table 10 and Figure 15).Te results of the two classifers are summarized in Tables 9 and 10.
Discussion
Te complete dataset included 5,644 patients tested between March 28, 2020, and April 3, 2020, of which 598 complete blood count results were used for statistical analysis.Te remaining 5,046 results were omitted because of incomplete blood count data.Despite the constraint of the small sample size, our goal of identifying patients with COVID-19 infection was achieved with an accuracy of 95% using an SVM classifer.By promoting and developing diferent classifcation models (SVM and KNN) with 598 patients, we predicted SARS-CoV-2 infection with an AUC of up to 0.99 for nonhospitalized patients and 0.92 for regular ward patients, using only standardized complete blood count data.Te fnding that SARS-CoV-2-positive and negative cases can be classifed using biological features at an early stage of the disease has important implications.Te study was performed on a dataset organized by the patients' hospitalization status.We excluded patients in the semiintensive and intensive care units from our analysis in order to base predictions of COVID-19 test results on indicators of Te symptoms of COVID-19 are often accompanied by an immune response [32].Terefore, hyperactivity of blood parameters is noticed in all stages whenever an infection exists [33].Indeed, several scientifc reports have confrmed this hypothesis.Te researchers in these works used similar predictive models based on blood parameters, suggesting elevated levels of some of these parameters; for example, an elevated level of eosinophilia could be a potential diagnostic indicator [34].Indeed, the value of this marker has been identifed in cerebrovascular pathologies and during coronary bypass surgery.Te previous fnding of an elevated neutrophil/lymphocyte count seems to be a relevant marker in the diagnosis of COVID-19 [35], as is the case in our study (see the tables of the predictive variables for each dataset).
In this study, we examined the evolution of blood parameters in all the patients in a particular unit as exploratory analyses using ratios of diagnostic blood characteristics.However, the fact that these characteristics may be related to other pathogens and viral diseases is a potential limitation of the proposed method.Indeed, previous studies have shown that MERS increases monocytes [36].SARS also directly infects monocytes, which produce cytokines that directly afect neutrophils [37].Both infections, then, produce similar efects on blood activity related to humanitarian reactions.At the same time, this study indicates the immediate relation of the pathogenesis of COVID-19 to monocytes and neutrophils as shown in the dependency test score results (ANOVA test) (see Tables 5 and 6).
However, these parameters are often signifcant depending on the results of the statistical test performed, and we have initiated research to identify feature scores that distinguish SARS-CoV-2 with a preprocessor that embraces both polynomial features and the SelectKbest transformer in both groups of the dataset to fnd the most signifcant variables (see Score Table ).Te variable selection method used shows, on the one hand, the utility of data mining in extracting altered information from key features for classifcation should a future strain of coronavirus emerge, which remains a risk and danger facing humanity.On the other hand, the collection, analysis, and processing of medical data must be of interest to the health sector.
Conclusion
Our model and all artifcial intelligence-based predictive models related to the healthcare sector rely on medical data.Te use of machine learning (ML) is important for processing patient data to guide efective control and treatment strategies for the pandemic.Te main element in constructing an AI-based predictive model is information.Terefore, the availability of and access to such data are crucial for the development of similar studies.Te study will also be further adapted to address the lack of information and collected data in the medical feld to facilitate the task of direct detection of SARS-CoV-2 in hospitals and medical testing laboratories.An automated medical diagnosis that reduces costs for healthcare institutions is very important, especially when quick decisions are necessary to isolate infected patients and provide prompt treatment.Direct contact with infected patients may threaten doctors and caregivers with illness or even death.To overcome this global and dangerous challenge, it is fundamentally essential to analyze patient data at health facilities and detect the disease immediately, with accuracy, and within the shortest possible time frame.Future work will focus on creating a pipeline that combines AI-based predictive models with these types of complete blood counts and healthcare data processing models.Tese models will then be included in applications that will help in the development of mobile healthcare.Terefore, ML can provide a step toward a semiautonomous, expeditious diagnostic system that would be useful in Te Scientifc World Journal 13 combating a future pandemic situation and would ofer tremendous opportunities to harmonize with sustainable development goals.
Figure 1 :
Figure 1: Visualization of blood/target.Te plots show the variation curves of individual parameters categorized according to whether the patient tested positive (blue curve) or negative (yellow curve) on the RT-PCR test for SARS-CoV-2.Tese plots indicate a statistically signifcant diference between the two curves (positive-negative).In particular, leukocytes, eosinophils, monocytes, and platelets seem to have diferent variability across the two classes (negative-positive).
Figure 2 :
Figure 2: Visualization of hospitalization/blood.Monocytes, eosinophils, leukocytes, and platelets seem to have diferent variability between COVID-19-positive and negative patients.In addition, the levels of these parameters vary according to patients' hospitalization status.
Figure 4 :Figure 5 :Figure 6 :
Figure 4: Te ROC curve and AUC values of the SVM model for regular ward patients.Te ROC curve shows the true-positive rate versus the false-positive rate.Comparing AUC values reveals that the ROC curve has greater AUC and thus indicates better overall performance.Generally, the higher the AUC, the better the model performance.
Figure 7 :
Figure 7: Learning and validation curve of the SVM model for regular ward patients.
Figure 8 :Figure 9 :
Figure 8: Te metric evaluations of the KNN model for regular ward patients.
SVM and KNN are very robust in analyzing data with two classes (positive or negative).
Figure 10 :
Figure 10: Te ROC curve and AUC values of the SVM model for community patients.Comparing AUC values for algorithm simulation cases (Figures 6 and 7) shows that the ROC curve for the SVM model with community patients has greater AUC and, thus, indicates better model performance.
Figure 11 :Figure 12 :
Figure 11: Te metrics of the SVM model for community patients.
Figure 13 :
Figure 13: Learning and validation curves of the SVM model for community patients.
Figure 14 :
Figure 14: Te ROC curve and AUC values of the KNN model for community patients.
Figure 15 :
Figure 15: Te metric evaluations of the KNN model for community patients.
Table 2 :
Albert Einstein hospital dataset groups.
Table 3 :
Student t-test results.Te purpose of this work was to predict COVID-19 infections from blood features using SVM and KNN classifers.After preprocessing and feature selection, we applied the two models to the bloodwork results for patients with and without COVID-19 who either were not hospitalized (39 tested positive for SARS-CoV-2 and 431 negative) or were admitted to the regular ward (26 tested positive for SARS-CoV-2 and 31 negative).A common supervised-learning technique used in regression and classifcation is SVM
Table 5 :
Te 14 variables (14 diferent blood counts) in the construction of the predictive model (regular ward data), ranked according to the best dependency test scores (ANOVA test) with the target (SARS-CoV-2).
Table 6 :
Te 14 variables (14 diferent blood counts) in the construction of the predictive model (community data), ranked according to the best dependency test scores (ANOVA test) with the target (SARS-CoV-2).
Table 7 :
Evaluation results for model predictions with the regular ward data group (patients testing positive for SARS-CoV-2) using default parameters.
10Te Scientifc World Journal the disease in its early phase.Terefore, age and pathogenic (viral) variables were excluded from our study.Te strategy we have developed can provide a reliable and rapid SARS-CoV-2 diagnosis.KNN and SVM algorithms on both groups of patient data have shown that the SVM algorithm applied to community patients with optimization and the SMOTE technique ofers the most accurate predictions.Tis enhanced SVM technique provides precision, recall, accuracy, and AUC values that reach 0.94, 0.96, 0.95, and 0.99, respectively.KNN optimized over community patient data after the SMOTE technique is applied produces accurate results.However, for the regular ward data, both classifers retain almost identical metrics regardless of the optimization
Table 8 :
Evaluation results for model predictions with the regular ward data group (patients testing positive for SARS-CoV-2) after tuning parameters.
Table 9 :
Evaluation results for model predictions with the community data group (patients testing positive for SARS-CoV-2) using default parameters.
formation for the signifcant variables of the regular ward patients as well (see scoring table).Overall, this underscores the difculty of interpreting standardized data of low registration.Both classifers can be used as an improved algorithm to perform SARS-CoV-2 prediction for new data.
Table 10 :
Evaluation results for model predictions with the community data group (patients testing positive for SARS-CoV-2) after tuning parameters.
|
2023-08-26T15:39:08.430Z
|
2023-08-22T00:00:00.000
|
{
"year": 2023,
"sha1": "670ac32fd12ce194fd4cb53a6e66dad49f5a71f6",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/tswj/2023/3248192.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6601d711f910fea822046adcd9f562118b72087c",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.