Cosmological model with time varying deceleration parameter in $F(R,G)$ gravity

In this paper, we study the dynamical behaviour of the Universe in the $F(R,G)$ theory of gravity, where $R$ and $G$ respectively denote the Ricci scalar and the Gauss-Bonnet invariant. Our analysis encompasses the energy conditions, cosmographic parameters, $Om(z)$ diagnostic, stability, and the viability of reconstructing the referred model through a scalar field formalism. The model obtained here shows quintessence-like behaviour at late times.

I. INTRODUCTION

According to cosmological observations [1,2], the Universe is currently passing through an accelerated expansion phase. Generally, an entity named dark energy is said to cause such a counterintuitive anti-gravitational feature [3-5]. Most popular dark energy models advocate the presence of the cosmological constant in Einstein's field equations of General Relativity (GR) for a proper explanation of the observational data concerning the late-time cosmic speed-up issue [6-8]. The cosmological constant may be thought of as the vacuum quantum energy in Particle Physics, whose estimated value [9] differs by more than one hundred orders of magnitude from the observed one [6]. This characterizes the so-called cosmological constant problem [9,10]. The cosmological constant problem has been the main reason for which theoretical physicists consider extensions of GR. These extended gravity theories are obtained by substituting the Ricci scalar R in the Einstein-Hilbert action by a generic function of R itself and/or some other scalars. A replacement of R by a more general function F(R) in the Einstein-Hilbert action leads to F(R) gravity [11,12]. The applications of F(R) gravity are numerous. To quote some of them, Nojiri and Odintsov obtained a unified cosmic history in [13], while cosmological perturbations were investigated in [14,15], respectively by Matsumoto, and by Carloni and collaborators. The F(R) theory of gravity has also been used extensively in investigating solutions of wormhole geometry [16-18]. Several cosmological solutions in F(R) gravity have been shown to be stable in the literature [19-21]. However, it is well known that F(R) gravity is plagued by some shortcomings. For instance, Frolov has pointed out a curvature singularity problem appearing at the non-linear level [22]. Kobayashi and Maeda argued that relativistic stars cannot be present in F(R) theories [23]. This has been revisited in [24,25]. Furthermore, Solar System constraints obtained from the classical tests of GR seem to rule out most of the F(R) models proposed so far [26-28]. With the purpose of circumventing these shortcomings, F(R) gravity has been extended through the consideration of further scalars in the referred Einstein-Hilbert action. In this regard, F(R,G) gravity [29-35], with G being the Gauss-Bonnet scalar, arises as a promising alternative. For instance, a double inflationary scenario naturally emerges in F(R,G) gravity according to [29]. The stability of cosmological solutions in F(R,G) gravity is discussed in [36]. A study of linear metric perturbations around a spherically symmetric static space-time in F(R,G) theories can be seen in [37].
In [38] it is shown that F(R,G) cosmology naturally leads to an effective cosmological constant, quintessence or phantom cosmic acceleration, while also describing the transition from a decelerated stage of the Universe's expansion. In the present article we wish to construct a viable cosmological model in the framework of F(R,G) gravity and to analyse the energy conditions of the constructed model. The energy conditions are mathematical inequalities that basically state that the energy density cannot be negative. They are written in terms of the energy-momentum tensor, as we show below in Section IV. For now it is worth mentioning that in [39] a picture of energy condition fulfilment and violation in the light of type Ia supernovae observational data was provided. Seminal applications of energy conditions can be seen in [40-42]. In extended gravity, the energy conditions have also shown great applicability and commendable results, as one can check in [43,44]. In particular, some viability bounds were placed on F(R,G) gravity from the energy conditions in [35]. Here we will also analyse the cosmography, the Om(z) diagnostic and the dynamical stability under linear homogeneous perturbations, and finally the viability of the model through its scalar field reconstruction is presented.

Our article is organized as follows: in Section II we present the F(R,G) gravity and cosmology basics. The material solutions of a particular F(R,G) model are presented in Section III in terms of redshift. The behaviour of the model with respect to the energy conditions, geometrical parameters, Om(z) diagnostic and scalar field reconstruction is analysed in Section IV. The concluding remarks are given in Section V.

II. BASIC FORMALISM OF F(R,G) GRAVITY AND COSMOLOGY

An interesting modification of the Einstein theory of gravity is the F(R,G) gravity [29-35]. The most general action for this gravity is given by

S = (1/2k²) ∫ d⁴x √(−g) F(R,G) + ∫ d⁴x √(−g) L_m, (1)

where g is the metric determinant, L_m describes the matter Lagrangian, k² = 8πG_N, G_N is the Newtonian gravitational constant, and the speed of light c is taken as 1. The Gauss-Bonnet invariant is described as

G = R² − 4R_µν R^µν + R_µναβ R^µναβ, (2)

with R_µν being the Ricci tensor and R_µναβ the Riemann tensor. In differential geometry, the integral of G is related to the Euler characteristic χ(M) of the manifold M in n dimensions. For n = 4, G contributes only a surface (topological) term that does not affect the dynamics. By varying the action (1) with respect to the metric tensor g_µν, the field equations of F(R,G) gravity can be expressed as Eq. (4), where G_µν is the Einstein tensor, ∇_µ is the covariant derivative operator associated with g_µν, □ ≡ g^µν ∇_µ ∇_ν is the covariant d'Alembert operator and T_µν is the energy-momentum tensor. We have also defined the shorthand quantities F_R ≡ ∂F/∂R and F_G ≡ ∂F/∂G, while the energy-momentum tensor is that of a perfect fluid,

T_µν = (ρ + p) u_µ u_ν + p g_µν,

where ρ denotes the matter-energy density and p is the isotropic pressure measured by the observer u^µ. In the following we consider the spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) metric with line element

ds² = −dt² + a²(t)(dx² + dy² + dz²),

where a(t) is the scale factor of the Universe, such that the Hubble parameter is H ≡ ȧ/a, with the overdot indicating a derivative with respect to cosmic time t. Then, R and G become

R = 6(Ḣ + 2H²), (6)
G = 24H²(Ḣ + H²). (7)

By substituting (6) and (7) into the gravitational field equations (4), we obtain the modified Friedmann equations (9) and (10), expressed in terms of the Hubble parameter and the functional F(R,G).
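As a quick cross-check of Eqs. (6)-(7), the following sketch (ours, not part of the paper) evaluates R, G and the deceleration parameter symbolically for the hybrid scale factor a(t) = e^{ηt} t^ν adopted in the next section; the symbol names are illustrative.

```python
import sympy as sp

t, eta, nu = sp.symbols('t eta nu', positive=True)

# Hybrid scale factor a(t) = exp(eta*t) * t**nu, so H(t) = eta + nu/t
H = eta + nu / t
Hdot = sp.diff(H, t)

R = sp.simplify(6 * (Hdot + 2 * H**2))      # Ricci scalar, Eq. (6)
G = sp.simplify(24 * H**2 * (Hdot + H**2))  # Gauss-Bonnet invariant, Eq. (7)
q = sp.simplify(-1 - Hdot / H**2)           # deceleration parameter

print(sp.limit(q, t, 0))      # -1 + 1/nu: positive (deceleration) if 0 < nu < 1
print(sp.limit(q, t, sp.oo))  # -1: late-time acceleration
```

The two limits reproduce the early-deceleration/late-acceleration behaviour of the hybrid scale factor discussed in Section IV B.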
To obtain the evolutionary behaviour of the matter pressure and energy density, we need a functional form for F(R,G) and the Hubble parameter, which we discuss in the following section.

III. MATERIAL SOLUTIONS

In the present section, we extend our analysis by considering a specific form of the F(R,G) function. We take

F(R,G) = R + αR² + βG², (11)

where α and β are constants. This particular functional form was used, for instance, in [29], in which a double inflationary scenario naturally emerged. By using the form of F(R,G) in (11), Eqs. (9) and (10) take the form of Eqs. (12) and (13). We have thus expressed the field equations of F(R,G) gravity in terms of the Hubble parameter. The dynamical parameters can be determined either through a relation among the matter fields or through an assumed form of the Hubble parameter. We have preferred here to frame the cosmological model with an assumed form of the scale factor. Also, to frame a cosmological model of the Universe we need to obtain the pressure and energy density with respect to cosmic time or redshift. In order to handle Eqs. (12) and (13), which are highly non-linear, we assume the hybrid scale factor (HSF), a(t) = e^{ηt} t^ν, such that H = η + ν/t, where η and ν are the model parameters and can be constrained in the ranges η > 0 and 0 < ν < 1 [45-49]. The reason behind the choice of such a scale factor is that it simulates a transition from a decelerated expanding Universe to an accelerated one. This can be substituted into Eqs. (12)-(13) to obtain the expressions for the pressure and energy density in terms of cosmic time. We obtain lengthy expressions for the energy density and pressure and therefore opt to present them in graphical form. Along with the energy density, we also present the equation of state (EoS) parameter, ω = p/ρ, graphically. In addition, we prefer to present the graphs in terms of redshift z, using the relation 1/a = 1 + z (with the present scale factor normalized to unity), for better analysis. The behaviours of the energy density ρ and the EoS parameter ω = p/ρ can be seen in FIG. 1. In a theoretical sense, the behaviour of the EoS parameter is essential to address the late-time cosmic acceleration of the Universe. With the already obtained values of the scale factor parameters, we then analyse the EoS parameter to infer the present and future scenario of the Universe. The evolutionary behaviour of the matter pressure and energy density depends on the F(R,G) parameters α, β and the scale factor parameters η and ν. The scale factor parameters η and ν are chosen similarly to the works [46,73,74] to obtain an appropriate behaviour of the geometrical parameters. Now, the parameters of the assumed form of F(R,G) gravity, α and β, are fixed so as to obtain positive energy density and negative pressure, addressing the late-time cosmic expansion phenomenon. We have examined several possible combinations of the values of α and β and observed that, for lower values of β, the model shows appropriate accelerating behaviour. So, a small contribution from the Gauss-Bonnet invariant suffices for the late-time cosmic acceleration. In Table I, we present the model parameters considered in the present work. FIG. 1 (left panel) shows that the energy density decreases slowly from a high positive value and approaches a small value at late times. Another observation is that, with an increase in the F(R,G) parameter α, the evolution of the energy density starts from a lower value, while for the other parameter, β, no significant changes are noticed.
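Since the closed-form expressions for ρ(t) and p(t) are lengthy, a practical route to the plots described above is to tabulate the background quantities on a time grid and convert to redshift via 1/a = 1 + z. The sketch below is ours; the parameter values are illustrative placeholders, not the entries of Table I.

```python
import numpy as np

# Illustrative HSF parameters (placeholders, not the paper's Table I values)
eta, nu = 0.7, 0.4

t = np.linspace(0.05, 20.0, 2000)   # cosmic time grid
a = np.exp(eta * t) * t**nu         # hybrid scale factor a(t)
z = 1.0 / a - 1.0                   # redshift, using 1/a = 1 + z
H = eta + nu / t                    # Hubble parameter
Hdot = -nu / t**2

q = -1.0 - Hdot / H**2              # deceleration parameter along the grid
# rho(z), p(z) and omega(z) = p/rho follow by inserting H, Hdot and higher
# derivatives into Eqs. (12)-(13); being lengthy, the paper plots them
# rather than writing them out.
print(q[0], q[-1])                  # early-time q > 0, late-time q -> -1
```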
However, throughout the evolution, the energy density remains entirely in the positive region. The EoS parameter, which reveals the behaviour of the accelerating Universe, is found to evolve from an early positive value and to approach ≈ −0.7 at late times. The curves of the EoS parameter are observed to decrease faster for higher values of α; however, at z ≈ 0.3, the curves intersect. We do not observe any substantial change in the evolutionary behaviour for a variation in the values of β. We can conclude that the present model, in the framework of F(R,G) gravity and the hybrid scale factor, favours a quintessence-like evolutionary phase.

IV. ANALYSIS OF THE MODEL

A. Energy conditions in F(R,G) gravity

The energy conditions are essentially boundary conditions meant to keep the energy density positive [50,51]. They need not always correspond to physical reality; the most recent example is the violation of the strong energy condition implied by the observable effects of dark energy. Energy conditions provide additional constraints on cosmological models [52]. The energy conditions encode the fundamental causal and geodesic structure of space-time, and an extended theory of gravity, in this case F(R,G) gravity, needs to be confronted with them. Indeed, any extension of Einstein's gravity can be recast in such a manner that the standard energy conditions apply [53,54]. The energy conditions are: the Null Energy Condition (NEC), ρ + p ≥ 0; the Weak Energy Condition (WEC), ρ + p ≥ 0 and ρ ≥ 0; the Strong Energy Condition (SEC), ρ + 3p ≥ 0; and the Dominant Energy Condition (DEC), ρ − p ≥ 0. With the help of Eqs. (12) and (13), the energy conditions can be evaluated as functions of cosmic time (a numerical check is sketched below). Another reason for fixing the representative values of the model and scale factor parameters above is that we wish to constrain our model in such a manner that the WEC is satisfied at least in the late phase. FIG. 2 shows that the WEC remains positive from an early time (z ≈ 0.35) until the late phase. Since our model shows quintessence behaviour, we may expect the DEC and NEC to be satisfied at least in the late phase of the evolution. At the same time, the SEC is violated from z ≈ 0.2 onward, having been satisfied before that. In fact, once the cosmic dynamics is fixed by a derived or assumed Hubble rate, a detailed analysis of these energy conditions can be carried out.

B. Geometric Parameters and Om(z) Diagnostic

There are two fundamental ways of characterizing the evolution of the Universe: (i) the kinematic approach, extracted directly from the space-time metric, and (ii) the dynamic approach, which depends on the properties of the fields that fill the Universe. The kinematic characteristic is universal and convenient for describing the expansion of the Universe, whereas the dynamic characteristic is model dependent. Here we shall follow the kinematic approach. We have considered the FLRW space-time, which is homogeneous and isotropic, so the evolution of the Universe can be described by the scale factor a(t). The scale factor a(t) and the co-moving coordinate r can be related to the physical Euler coordinate R(t) as a(t) = R(t)/r. On differentiation with respect to time, we obtain the Hubble law in the form V = HR, where V = Ṙ = dR/dt and H = ȧ/a is the Hubble parameter. The expansion of the Universe, as determined by the Hubble parameter, depends on time, and the measure of this dependency is the deceleration parameter.
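Returning to the energy conditions of Sec. IV A flagged above, a minimal numerical check, given sampled ρ(t) and p(t) from Eqs. (12)-(13), might look as follows; the arrays here are synthetic placeholders, not the model's actual solutions.

```python
import numpy as np

rho = np.array([1.00, 0.80, 0.55, 0.30])      # placeholder energy density samples
p   = np.array([-0.50, -0.45, -0.38, -0.25])  # placeholder pressure samples

nec = rho + p >= 0          # Null Energy Condition
wec = nec & (rho >= 0)      # Weak Energy Condition
sec = rho + 3 * p >= 0      # Strong Energy Condition
dec = rho - p >= 0          # Dominant Energy Condition

print(dict(NEC=nec, WEC=wec, SEC=sec, DEC=dec))
```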
We define the Taylor series expansion of the scale factor in the neighbourhood of the current time t₀. This scheme of description of the Universe is known as cosmography [55] and is based on the cosmological principle. As per the cosmological principle, the scale factor acts as the degree of freedom governing the Universe. The parameter set consisting of the Hubble parameter (H), deceleration parameter (q), jerk parameter (j), snap parameter (s), lerk parameter (l), ... represents the alphabet of cosmography. This set is obtained from up to the fifth derivative of the scale factor, where the first derivative gives the Hubble parameter and so on:

H = (1/a)(da/dt), q = −(1/aH²)(d²a/dt²), j = (1/aH³)(d³a/dt³), s = (1/aH⁴)(d⁴a/dt⁴),

and the lerk parameter,

l = (1/aH⁵)(d⁵a/dt⁵).

The graphical behaviour of the cosmographic parameter set is given in FIG. 3. The Hubble parameter, H = η + ν/t, becomes a constant (η) at late times. It remains entirely in the positive domain, since the scale factor parameters are fixed at positive values (red curve). The deceleration parameter shows the signature-flipping behaviour: at t → 0, q → −1 + 1/ν, whereas at t → ∞, q → −1. So the hybrid scale factor gives a deceleration parameter that assumes early positive and late-time negative values. To mimic the present Universe with late-time cosmic acceleration, such signature-flipping behaviour of the deceleration parameter is needed. This behaviour also accords with recent findings related to the H₀ tension, namely that a transitive deceleration parameter fostering early deceleration and late-time acceleration is in accordance with the concordance ΛCDM model [56-58]. One can observe this feature in FIG. 3 (pink curve). The jerk parameter decreases from a higher positive value and remains entirely in the positive region (blue curve), whereas the snap parameter shows a transition from negative to positive values at late times. Since our model favours quintessence behaviour, we have j < 1 and s > 0; similar behaviour is observed for both these geometrical parameters, at least at late times. The lerk parameter decreases very rapidly, then increases slightly and settles at 0.5. Note that, except for the deceleration parameter, all other geometrical parameters lie in the range (0, 1) in the late phase of the evolution. There are two important geometrical diagnostic approaches used in the literature: the determination of the statefinder pair (j, s) in the j-s plane, and the Om(z) diagnostic. These geometrical diagnostic approaches are useful tools to distinguish different dark energy models [59,60]. The behaviour of the statefinder pair can be seen in FIG. 3. The Om(z) diagnostic provides a null test of the ΛCDM model [60], and subsequently more evidence was gathered on its sensitivity to the EoS parameter [61-63]. When Om(z) is constant with respect to redshift, the dark energy model is of the cosmological constant form. Also, the slope of Om(z) distinguishes dark energy models: a positive slope of the evolving Om(z) indicates a phantom phase (ω < −1), and a negative slope indicates the quintessence region (ω > −1). Consistency tests of the ΛCDM model have also been performed on the reconstructed Om(z) by using Gaussian processes with SN Ia and Hubble data sets [64,65]. In this problem, we wish to investigate the behaviour of the model with the Om(z) diagnostic, which can be defined as

Om(z) = [E²(z) − 1] / [(1 + z)³ − 1],

where E(z) = H(z)/H₀ is the dimensionless Hubble parameter and H₀ is the Hubble rate of the present epoch.
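A hedged sketch of the Om(z) diagnostic just defined is given below; the flat-ΛCDM H(z) used to exercise the function is only a placeholder for the model's (or observed) Hubble rate.

```python
import numpy as np

# Om(z) = (E^2(z) - 1) / ((1+z)^3 - 1), with E(z) = H(z)/H0
def om_diagnostic(z, H, H0):
    E2 = (H / H0) ** 2
    return (E2 - 1.0) / ((1.0 + z) ** 3 - 1.0)

z = np.linspace(0.05, 2.0, 50)
H0, Om0 = 70.0, 0.3
H_lcdm = H0 * np.sqrt(Om0 * (1 + z) ** 3 + 1 - Om0)   # placeholder H(z)

# For LCDM, Om(z) is constant and equal to Om0; a negative slope would
# indicate quintessence-like behaviour (omega > -1), a positive slope phantom.
print(om_diagnostic(z, H_lcdm, H0))                    # ~0.3 for all z
```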
The plot of Om(z) with respect to redshift is given in FIG. 4. The Om(z) parameter shows a discrete behaviour.

C. Scalar Field Reconstruction

Apart from the cosmological constant, another important candidate for dark energy is a scalar field with a slowly varying potential. A scalar field can also drive cosmological mechanisms such as inflation, which can be constrained through Cosmic Microwave Background (CMB) observations, and it is compatible with a number of modified theories of gravity. Scalar field models can also be studied in modified theories of gravity, where an effective energy-momentum tensor of geometrical origin can be obtained [66,67]. Here we wish to reconstruct our model by introducing a scalar field. The cosmic acceleration phenomenon can be modelled through a scalar field φ, which can be either quintessence-like or phantom-like, with the EoS parameter ω = p_φ/ρ_φ satisfying ω ≥ −1 or ω ≤ −1, respectively. The action for the scalar field reconstruction is the canonical one, with a kinetic term of sign ε, where ε = +1 for the quintessence field and ε = −1 for the phantom field, and V(φ) is the potential function of the scalar field. In a flat Friedmann background, the energy density and pressure of the field are

ρ_φ = (ε/2) φ̇² + V(φ), (21)
p_φ = (ε/2) φ̇² − V(φ). (22)

From Eqs. (21) and (22), we can derive the scalar field and the potential function. The graphical behaviour of the potential function with respect to redshift can be observed in FIG. 5 (left panel). For the representative values of the parameter α, the potential function rises sharply from a high value, attains its peak at z ≈ 0.2, and then gradually decreases; at late times the curve approaches a small value. Another observation is that the higher the value of α, the steeper the curve. The squared slope of the reconstructed scalar field approaches zero at late times, and its behaviour does not change with varying α [FIG. 5 (right panel)].

D. Stability Analysis

Stability analysis has become necessary for cosmological models in extended gravity, since several assumptions are made in dealing with the field equations, and the degree of generality of these assumptions is difficult to assess. The qualitative properties of the field equations therefore need to be analysed to strengthen the results. One of the approaches to this end is the stability analysis under perturbations [68,69]. Here we wish to study the stability of the cosmological solutions of the F(R,G) theory presented in this work under linear homogeneous and isotropic perturbations. For this purpose, we consider a pressureless dust FRW background whose general solution may be written H(t) = H_b(t). We then consider perturbations of the Hubble parameter and the energy density around the solution H_b(t) as [36,70]

H(t) = H_b(t) [1 + δ(t)], ρ(t) = ρ_b(t) [1 + δ_m(t)],

where δ_m(t) and δ(t) are the respective deviations from the background energy density and Hubble parameter. The functional F(R,G) may be expanded around the solution H(t) = H_b(t) as

F(R,G) = F_b + F_R(R_b)(R − R_b) + F_G(G_b)(G − G_b) + O²,

where R_b and G_b correspond to the background solutions, the first derivatives F_R(R = R_b) and F_G(G = G_b) are evaluated at R_b and G_b, and the term O² includes all terms containing higher powers of R and G.
Using this perturbative approach in the equivalent FRW equation, it is possible to obtain an evolution equation for the linear homogeneous perturbations, which may be written in the form of Eq. (26), where the coefficients χ₁(F_b, F_b^n) and χ₂(F_b, F_b^n) are functions of the functional F(R,G) and its derivatives evaluated at the background. From the continuity equation, we may obtain another evolution equation for the perturbations,

δ̇_m(t) + 3H_b(t) δ(t) = 0.

In this work, we have considered the functional F(R,G) = R + αR² + βG². Assuming that GR should be recovered from the present model in some limit, we may neglect the contributions from the higher derivatives of the functional F(R,G). In such a case, Eq. (26) may be reduced to a simpler equation governed by a coefficient γ(t) built from the background solution. One may note that, in the GR limit, stability of the present model may be achieved provided γ(t) > 0. In the present work, we have chosen the values of the coupling parameters α and β to be positive, so that the condition γ(t) > 0 is satisfied throughout, providing a model that may be stable under linear homogeneous and isotropic perturbations.

V. CONCLUSION

We have studied the late-time cosmic acceleration issue in the F(R,G) theory of gravity in the presence of a time-varying deceleration parameter. The model shows quintessence-like behaviour and is dynamically stable under linear homogeneous and isotropic perturbations. The scale factor chosen here provides a deceleration parameter that is positive at early times and negative at late times. The other two parameters, those of the F(R,G) function, have been fixed in such a manner that the model shows accelerating behaviour. Both parameters associated with the function F(R,G) contribute to the behaviour of the functional as well as of the model. However, as regards the accelerating behaviour, the parameter α, associated with the Ricci scalar R, is more significant than the parameter β, associated with the Gauss-Bonnet invariant G. At the same time, varying β does not produce significant changes in the behaviour, whereas higher values of α produce some changes in the transition behaviour. The behaviour of the EoS parameter at late times becomes insensitive to the choice of α, since all trajectories of the EoS parameter for different choices of α behave alike in the late phase. Another important result concerns the geometrical parameters: the deceleration parameter approaches −1 at late times, while the (j, s) pair merges close to (0.46, −1). Since the model favours quintessence behaviour, this value of the (j, s) pair is expected. The violation of the SEC further validates the accelerating behaviour of the model in a modified theory of gravity. The important results of the present work are summarized in Tables II and III. Finally, we can say that the model presented here is another suitable extension of f(R) gravity that provides quintessence-like behaviour in the late phase. Hence, this model can be another approach in the search for a geometrical alternative to the dark energy phenomenon.
Twenty-Year Reflection on the Impact of World Trade Center Exposure on Pulmonary Outcomes in Fire Department of the City of New York (FDNY) Rescue and Recovery Workers

After the terrorist attacks on September 11, 2001 (9/11), many rescue/recovery workers developed respiratory symptoms and pulmonary diseases due to their extensive World Trade Center (WTC) dust cloud exposure. Nearly all Fire Department of the City of New York (FDNY) workers were present within 48 h of 9/11 and for the next several months. Since the FDNY had a well-established occupational health service for its firefighters and Emergency Medical Services workers prior to 9/11, it was able to immediately start a rigorous monitoring and treatment program for its WTC-exposed workers. As a result, respiratory symptoms and diseases were identified soon after 9/11. This focused review summarizes the WTC-related respiratory diseases that developed in the FDNY cohort after 9/11, including WTC cough syndrome, obstructive airways disease, accelerated lung function decline, airway hyperreactivity, sarcoidosis, and obstructive sleep apnea. Additionally, an extensive array of biomarkers has been identified as associated with WTC-related respiratory disease. Future research efforts will focus not only on further phenotyping/treating WTC-related respiratory disease but also on additional diseases associated with WTC exposure, especially those that take decades to develop, such as cardiovascular disease, cancer, and interstitial lung disease.

Introduction

In the 20 years since September 11, 2001 (9/11), many health problems have burdened the lives of rescue/recovery workers as well as survivors of this tragedy. Over 400,000 people were exposed to the toxins and the physical and emotional trauma in the days and months following the attacks [1]. The Fire Department of the City of New York (FDNY) workforce was nearly all present within the first 48 h after 9/11 and then for several months providing rescue and recovery efforts. They were heavily exposed to World Trade Center (WTC) dust containing inorganic species, metals, pesticides, asbestos, polycyclic aromatic hydrocarbons, and other hydrocarbons [2,3]. The WTC dust consisted of highly alkaline particulate matter of sizes greater than 10 μm [2], which typically undergoes nasopharyngeal filtration [4]. However, the overwhelming extent of dust exposure, coupled with the increased respiratory demand during rescue and recovery work, resulted in both the upper and lower airways being heavily impacted [5]. As a result, medical monitoring of those exposed began shortly after 9/11 [6]. In 2011, the James Zadroga 9/11 Health and Compensation Act of 2010 (Zadroga Act) [7] was signed into law, creating the World Trade Center Health Program (WTCHP), thereby unifying medical monitoring and treatment of WTC-related health conditions into a federally funded program. Under the WTCHP, in addition to the FDNY program covering firefighters and Emergency Medical Services (EMS) providers, there is also the General Responder WTC Health Program following non-FDNY rescue/recovery workers [8], the Bellevue WTC Environmental Health Center (WTC EHC) covering neighborhood residents, local workers, and clean-up workers/volunteers [9], and the Pentagon/Shanksville Responder cohort, which includes rescue/recovery workers at the Pentagon and the plane crash site in Shanksville, Pennsylvania [10].
The WTC Health Registry is an additional cohort of over 70,000 people who were near the WTC; it tracks the health outcomes of 9/11 exposure with a series of questionnaires [10]. Unlike the other cohorts, it does not provide 9/11-related treatment. Because the FDNY had a well-established occupational health service for its firefighters and EMS providers prior to 9/11, it was able to immediately start a rigorous monitoring and treatment program for its nearly 16,000 WTC-exposed rescue/recovery workers [5]. These annual exams include health questionnaires, laboratory data, audiograms, spirometry, and chest radiographs. This cohort has had the unique strength of being able to compare the occurrence of new findings to their pre-9/11 prevalence, strengthening any conclusions as to whether conditions were associated with WTC exposure.

WTC Cough Syndrome

In the days and months after 9/11, the most common self-reported symptoms among FDNY workers were new-onset cough and sore/hoarse throat [24]. What soon became known as "WTC cough syndrome" was defined as a persistent cough that developed after exposure to the site and was accompanied by respiratory symptoms severe enough to require medical leave for at least 4 weeks [24-26]. Nearly all FDNY first responders who developed WTC cough syndrome were present at the WTC site within 48 h of 9/11, most of whom were considered highly exposed as they were present on the morning of 9/11 and exposed to the WTC dust cloud [26]. WTC cough syndrome has been found to be associated with obstructive airways disease, airway hyperreactivity, radiographic evidence of airway inflammation (i.e., air trapping and bronchial wall thickening on chest CT imaging), GERD, chronic rhinosinusitis, and PTSD [18,24,26].

Airway Hyperreactivity

Pulmonary function testing conducted immediately after 9/11 identified airway obstruction and hyperreactivity associated with WTC exposure [27]. Similar to findings in the non-FDNY volunteer/responder [28-30] and survivor cohorts [9,31], this relationship was strongly associated with exposure intensity in FDNY rescue/recovery workers [27]. Even after 6 months, the most highly exposed FDNY workers (those who arrived on the morning of 9/11) were over six times more likely to have airway hyperreactivity based on methacholine challenge testing (MCT) than those with moderate exposure or no exposure in the first 2 weeks after 9/11 [32]. These findings were independent of baseline airway obstruction and smoking status, and in follow-up studies were found to have persisted even 12 months after 9/11 [32,33]. Unlike the typical expected decrease in bronchial hyperreactivity after removal of the noxious stimulus in traditional occupational asthma [34], Aldrich et al. found persistence of airway hyperreactivity in FDNY rescue/recovery workers 10 to 12 years after 9/11 exposure [35].

Decline in FEV1

Although the majority of FDNY rescue workers continued to have a normal forced expiratory volume in one second (FEV1) post-9/11, two clinically relevant patterns of FEV1 loss have been observed. Aldrich et al. reported a significant mean decrease in FEV1 on annual spirometry in the first year after 9/11 of 439 mL in firefighters and 267 mL in EMS workers who never smoked. Interestingly, despite normal lung function prior to 9/11, little to no recovery of FEV1 to pre-9/11 function was observed during years of follow-up [36,37].
Additionally, after several years of medical monitoring, it became clear that although some had partial improvement in lung function post-9/11, there was a subset of FDNY workers who continued to have persistent accelerated FEV1 decline, while others had their expected age-related decline (Fig. 1) [26,36,38,39]. It was also demonstrated that bronchial hyperreactivity, along with respiratory symptoms such as cough, chest tightness, and shortness of breath, was more prevalent in those with a decline in FEV1 post-9/11 [35]. Soon the term "WTC lung injury" (WTC-LI) was coined, defined as newly decreased FEV1 below the lower limit of normal [40].

WTC Metabolic and Inflammatory Biomarkers

Multiple studies have focused on understanding the underlying metabolic and inflammatory pathways associated with post-9/11 decline in lung function and obstructive airways disease (Table 1). Studies of serum biomarkers obtained within 6 months post-9/11 from a sample of 801 FDNY firefighters were conducted to identify biomarkers related to risk of, or protection against, WTC-LI [41-44,49,50]. Granulocyte-macrophage colony-stimulating factor (GM-CSF) and macrophage-derived chemokine (MDC) increased the risk of WTC-LI by 2.5-fold and 2.95-fold, respectively [41]. Elevations of matrix metalloproteinase-1 (MMP-1) [42] and immunoglobulin E (IgE) [43] were also found to be risk factors for WTC-LI. Using a definition of accelerated FEV1 decline (≥ 64 mL/year), Weiden et al. found that interleukin-4, -5, and -13 were associated with greater FEV1 decline when controlling for several important confounders, such as WTC exposure intensity and smoking status [45]. Additionally, others found that lower levels of serum alpha-1 antitrypsin (AAT) and higher serum levels of eosinophils and neutrophils were associated with accelerated FEV1 decline after 9/11 [38,39]. Alternatively, the protective qualities of MMP-3 and MMP-12 were confirmed in a nested case-control study by Kwon et al., which showed that each log increase of MMP-3 and MMP-12 reduced the odds of developing WTC-LI by 73% and 54%, respectively [44]. Additionally, a subset of FDNY workers with elevated chitotriosidase levels, an enzyme vital to the innate host defense against bacterial and fungal infections [46], showed recovery of their forced vital capacity (FVC) and FEV1 to pre-9/11 levels on average 32 months after the attack [43]. Finally, Nolan et al. found that the odds of regaining lung function after WTC exposure were higher in those with higher levels of MMP-2 and tissue inhibitor of matrix metalloproteinases-1 (TIMP-1) [42]. Several studies have explored the relationship between lung function post-9/11 and systemic inflammatory biomarkers found in patients with metabolic syndrome or other cardiovascular abnormalities. A case-control study of FDNY workers found an association between dyslipidemia, elevated heart rate, and elevated leptin levels (a biomarker for metabolic syndrome [47]) and the development of WTC-LI, after adjusting for confounders such as body mass index (BMI). Meanwhile, amylin was found to be protective against WTC-LI [48]. Furthermore, studies have evaluated the relationship between metabolic syndrome risk factors (i.e., abdominal obesity, insulin resistance, hypertriglyceridemia, low HDL levels, and hypertension) near the time of WTC exposure and new WTC-LI or airway hyperreactivity several years later [49-51]. Kwon et al.
reported as much as a 69% increased risk of airway hyperreactivity for those with ≥ 3 metabolic syndrome risk factors, independent of other known risk factors such as smoking status and WTC exposure intensity [49]. They later reported that having metabolic syndrome increased the risk of developing WTC-LI by 56% [50]. The odds of developing WTC-LI years after WTC exposure have also been shown to increase in those with elevations, within 6 months post-9/11, of cardiovascular serum biomarkers such as apolipoprotein A-I (ApoAI) and ApoAII, C-reactive protein (CRP), soluble receptor for advanced glycation end-products (sRAGE), and lysophosphatidic acid (LPA) [40,41,48,50,52,53]. Additionally, high-throughput metabolomics has facilitated the assessment of the metabolome of those with WTC-LI, and bioactive classes of lipid and amino acid metabolites have been identified [54,55]. A multivariate predictive model of firefighters with WTC-LI was developed by integrating the metabolome with clinical, cytokine, chemokine, and environmental characteristics to improve early identification of disease [56]. Increased growth-regulated oncogene protein (GRO) and monocyte chemoattractant protein-1 (MCP-1), and decreased macrophage-derived chemokine (MDC), were protective against WTC-LI. Pigment epithelium-derived factor (PEDF) was found to be a novel predictive biomarker of the negative health effects of particulate matter exposure; decreased levels of PEDF and macrophage inflammatory protein-4 (MIP-4), along with increased ApoAII, were associated with WTC-LI [56]. Many of these risk factors for metabolic syndrome and cardiovascular disease are modifiable, which could aid in reducing pulmonary dysfunction not only in WTC cohorts but also in the general population.

WTC-Related Sarcoidosis

Sarcoidosis is a systemic granulomatous disease that develops after a genetically primed abnormal immune response to an antigen exposure or inflammatory trigger [57]. Thus, soon after the massive antigen exposure that was 9/11, annual radiographs in FDNY workers demonstrated an increase in intrathoracic adenopathy, and later tissue samples confirmed an increased incidence of sarcoidosis (intra- and extrathoracic) above that observed in non-WTC cohorts of similar sex, age, and race [58,59]. The average annual incidence increased initially from 15/100,000 in the 15 years prior to 9/11 to 85/100,000 in the year after 9/11, and then stabilized at 25/100,000 after 2002. Similar findings were reported in both survivor and non-FDNY rescue/recovery cohorts [60-62]. Unlike pre-9/11 sarcoidosis cases within the FDNY cohort, those with newly diagnosed sarcoidosis post-9/11 were more likely to have new asthma symptoms and airway hyperreactivity [59]. Hena et al. extensively characterized the clinical course of post-9/11 sarcoidosis in the FDNY cohort, both at the time of diagnosis and again in 2015 (Fig. 2) [63]. All had pulmonary involvement at diagnosis, and the majority had radiologic findings consistent with mostly stage I and II disease [63,64]. Nearly half had resolution of intrathoracic involvement at the time of follow-up 8 to 10 years later. Pulmonary function metrics were within normal limits for nearly all, changed little over time, and were not related to radiographic disease patterns [64]. Alternatively, extrapulmonary involvement increased from diagnosis to the time of follow-up, with cardiac and bone/joint involvement being the most prevalent.
It remains unclear whether the increased prevalence of cardiac sarcoidosis was due to WTC exposure alone or whether some of the association was due to increased surveillance. An argument in favor of surveillance bias is that everyone enrolled in the study had extensive screening for cardiac sarcoidosis, including cardiac magnetic resonance imaging (MRI). Cardiac MRI was far more sensitive than electrocardiograms and/or echocardiograms, which missed nearly half of those with cardiac sarcoidosis [63]. Regardless of etiology, and because cardiac sarcoidosis can be fatal, these findings suggest a greater need for potentially life-saving advanced cardiac screening in asymptomatic patients, especially those with public safety responsibilities. Additionally, several unique genetic variants were identified in those with post-9/11 sarcoidosis in a nested case-control study matched on degree of WTC exposure, age, sex, and race [65]. Seventeen allele variants of HLA and non-HLA genes were found to be associated with sarcoidosis in the FDNY cohort, all of which were on chromosomes 1 and 6. Although many of the single-nucleotide polymorphisms (SNPs) had never been reported before, one finding consistent with prior studies was an association with the SNP rs20417, which had previously been shown to be associated with sarcoidosis in a northern European, mostly Caucasian cohort [66]. Although the sample size of the FDNY sarcoidosis cohort was too small to identify specific alleles associated with extrapulmonary sarcoidosis phenotypes, several novel genetic variants were found to be associated with extrapulmonary sarcoidosis in general [65]. Larger genetic studies of other WTC cohorts may help further elucidate these genetic relationships.

WTC-Associated Obstructive Sleep Apnea (OSA)

Although obstructive airways disease and hyperreactivity were the most common new pulmonary diseases reported in FDNY workers post-9/11, obstructive sleep apnea was also found to be associated with WTC exposure in survivors, non-FDNY rescue/recovery workers, and FDNY workers [14,15,67]. In 2011, Webber et al. [14] demonstrated that, of 11,701 FDNY workers studied who were WTC-exposed within 2 weeks of 9/11, 44% were considered at high risk for OSA based on a modified Berlin Questionnaire [68]. Interestingly, of those considered high risk, only 13.9% had a physician diagnosis of OSA. This study, like those of other WTC-related airway conditions, demonstrated a WTC exposure dose-response relationship, in that those most highly exposed had the highest odds of being at risk for OSA. Also, similar to the other WTC-related airways diseases summarized above, they found an independent relationship between OSA and other WTC syndromes, such as GERD, chronic rhinosinusitis, and PTSD [14]. A follow-up study by Glaser et al. evaluated those who screened positive on the Berlin screener for OSA and had polysomnography testing; they found that 81% of the study participants were diagnosed with OSA. They also demonstrated that those with the highest level of WTC exposure were more likely to be diagnosed with severe OSA, with an OR of 1.91 (1.15-3.17), independent of BMI [15]. These studies suggest a relationship between WTC exposure and OSA, a likely source of chronic systemic inflammation [69]. Thus, there should be a low threshold for evaluating those with high WTC exposure and other WTC syndromes, such as GERD, PTSD, and chronic rhinosinusitis, for OSA.
Unique Treatment Approaches to WTC-Related Disease

Given the high prevalence of bronchial hyperreactivity, obstructive airways disease, and symptoms such as cough, wheezing, and shortness of breath, the most commonly prescribed treatments have been inhaled corticosteroids (ICS) with or without long-acting beta agonists (LABA). Other treatments have included systemic oral corticosteroids, ipratropium inhalation, and/or leukotriene receptor antagonists. Given an equal or even higher prevalence of GERD and/or chronic rhinosinusitis, treatment often included proton pump inhibitors, acid-free diets, nasal saline rinses, and nasal sprays (antihistamines, decongestants, corticosteroids, and/or ipratropium) [70-72]. Initial studies noted that the use of corticosteroids had no effect on airway reactivity but did slow the rate of FEV1 decline [35]. Later, in a study evaluating 8,530 FDNY firefighters, 19% were prescribed ICS/LABA in the 16-year period after 9/11. When dyspnea was measured using the modified Medical Research Council (mMRC) dyspnea scale, those without improvement in their dyspnea score after initiation of ICS/LABA were more likely to have had delayed treatment, that is, increased time between 9/11 and treatment initiation. Those with improvement in their dyspnea were initially the most symptomatic and, perhaps because of that, were started on treatment the earliest after 9/11 [73]. Further study of this cohort found that those treated earliest with ICS/LABA (prior to the median date of treatment initiation in the group) had the greatest improvement in FEV1 slope post-treatment [74]. Although it is difficult to account for treatment bias, these studies suggest that delays in treatment may contribute to worsening lung injury after WTC exposure [73,74]. Additionally, FDNY firefighters with WTC-related sarcoid arthritis, one of the most common extrapulmonary manifestations, were treated with disease-modifying antirheumatic drugs (DMARDs) for steroid-refractory disease, as is standard in a stepwise escalation of therapy [75,76]. However, eight of the eleven treated with hydroxychloroquine did not experience adequate control of their articular symptoms, while three experienced either a partial or complete response. After a 3-month trial of hydroxychloroquine, they were given methotrexate, but again suboptimal symptom control was achieved. Finally, anti-tumor necrosis factor alpha (anti-TNFα) agents were initiated, with over a 70% improvement in symptoms. Cardiac sarcoidosis has been similarly treated with stepwise escalation from corticosteroids to methotrexate to anti-TNFα agents. Unlike non-WTC-related sarcoidosis, most patients with this unique WTC exposure required a higher escalation of therapy to achieve control. Finally, the therapeutic potential of attenuating metabolic syndrome risk in those with WTC-LI is being explored. To investigate the hypothesis that a low-calorie Mediterranean-type diet will reduce the primary clinical endpoint of body mass index (BMI) and will positively impact secondary endpoints such as FEV1, the Food Intake REstriction for Health OUtcome Support and Education (FIREHOUSE) trial was developed [77-80]. This collaborative randomized clinical trial utilizes a self-monitored diet, physical activity recommendations, and cloud-based self-monitoring with Social Cognitive Theory (SCT)-based behavioral counseling delivered via video sessions [81] for WTC-exposed firefighters with WTC-LI.
The goal of this study is to evaluate whether these novel technology-supported Mediterranean diet and lifestyle modifications can further improve the treatment of WTC pulmonary disease. The treatment needs of this cohort, for WTC cough, FEV1 decline, airway hyperreactivity, sarcoidosis, and WTC-LI, underscore the importance of close longitudinal follow-up so that individualized treatment can be initiated after a unique exposure such as the WTC.

Future Directions in FDNY WTC Research

Cohort studies after non-WTC exposures, such as tobacco or asbestos, raise serious concerns that, twenty years after 9/11, incidence rates for interstitial lung diseases and lung cancer may rise to levels not seen in the general population. The WTC Health Registry has reported an increased incidence rate of self-reported interstitial lung disease in those with the highest WTC exposure history [82]. The FDNY is currently involved in a CT imaging study to determine whether radiologically confirmed interstitial disease is increased compared to the general population. As for lung cancer, three WTC cohorts (FDNY, General Responder Cohort, and WTC Health Registry) have reported lower than expected rates when compared to the general population [16,83-86]. This is likely due to lower rates of tobacco smoking and the fact that solid tumors may take more than 20 years to develop. Now more than ever, lung cancer screening is critical, as early diagnosis predicts treatment success. A study comparing risk-factor-based and model-based lung cancer screening in the FDNY WTC cohort found that several of the diagnosed lung cancers would have been missed if traditional guidelines had been used for lung cancer screening [87]. The findings support the recent expansion by the United States Preventive Services Task Force (USPSTF) of CT lung cancer screening eligibility, lowering the smoking history threshold to ≥ 20 pack-years and the age threshold to 50 years, in this WTC occupational cohort [88]. As is commonly known, dyspnea is a symptom with multiple origins, not only pulmonary but also cardiovascular. A recent study demonstrated that incidence rates of cardiovascular disease, such as myocardial infarction, stroke, unstable angina, coronary artery surgery or angioplasty, or cardiovascular disease-related death, among FDNY WTC-exposed rescue/recovery workers were highest in those with the greatest WTC exposure intensity, even when adjusted for smoking status and age [89]. Given the strong relationship between metabolic syndrome, a known risk factor for cardiovascular disease, and WTC-LI, further studies are needed to explore the relationships between cardiovascular disease, respiratory disease, and WTC exposure. Finally, the extensive longitudinal follow-up of the FDNY WTC-exposed cohort will allow for a better understanding of risk factors for new and emerging conditions, such as the novel coronavirus disease 2019 (COVID-19). For example, a greater rate of FEV1 decline is associated with asthma and chronic obstructive pulmonary disease (COPD) in FDNY WTC rescue/recovery workers [90]. Recently, Weiden et al. found the same risk factors to be associated with severe COVID-19 disease (defined as hospitalization or death) in the currently active FDNY workforce (30% of whom were WTC-exposed) [91]. Further investigation of WTC-related health conditions as independent risk factors for severe COVID-19 is needed in WTC-exposed longitudinal cohorts.
Conclusion

In the days and months after 9/11, the FDNY reported that the most common symptom in its workforce was cough, later termed "WTC cough syndrome," which was characterized by obstructive airways disease, GERD, and chronic rhinosinusitis. The most common pulmonary function abnormalities post-9/11 were accelerated decline in lung function and airway hyperreactivity. A causal link to WTC exposure was established early on by demonstrating a dose-response relationship for many of these findings. Unlike in other acute occupational exposures [34], the majority of FDNY WTC-exposed rescue/recovery workers did not recover their lung function to pre-9/11 levels. Later it became clear that there was a subset of FDNY responders who developed accelerated lung function decline lasting several years after 9/11. Several studies have identified important biomarkers related to WTC lung injury, such as CRP, IgE, MMP-1, GM-CSF, MDC, ApoAII, and ApoAI, while others have been found to be protective, such as MMP-12, MMP-3, GRO, and MCP-1 (Table 1). Although many of these biomarkers are not readily available in most laboratories, some may help predict the course of lung function and identify those who need closer monitoring and earlier intervention. Furthermore, these same studies suggest that metabolic syndrome is a unique risk factor and potential biomarker for WTC-LI and airway hyperreactivity, which could be addressed clinically through lifestyle modifications such as those offered by the FIREHOUSE trial, a unique treatment approach to 9/11-related respiratory dysfunction. Studies have also demonstrated a relationship with systemic inflammatory conditions such as sarcoidosis, with an increased incidence after 9/11 and a unique disease phenotype involving not only pulmonary disease but also cardiac and bone/joint involvement. Obstructive sleep apnea has also been associated with WTC exposure in a dose-response relationship, similar to other WTC-related health conditions. Much of the treatment effort for WTC-related lung disease has focused on those who remain symptomatic and have not recovered their lung function. Early intervention with ICS and ICS/LABA inhalers reduced the risk of prolonged symptoms such as dyspnea, and escalated treatment approaches using anti-TNFα agents have been required for sarcoidosis. Several of the WTC-related pulmonary disease findings suggest unique underlying inflammatory pathways that may explain the prolonged prevalence of these conditions even 20 years after 9/11. Early identification of those with 9/11-related inflammatory conditions could help detect those who need closer monitoring and earlier treatment. As demonstrated with 9/11-related cardiac sarcoidosis, prompt monitoring and intervention could prevent progression of potentially fatal disease. Finally, future research will focus not only on understanding these pathways but also on their interrelationships with co-morbidities, such as aging, cardiovascular disease, cancers, autoimmune diseases, PTSD, and even newly emerging conditions like COVID-19. The long-term, ongoing study of this unique WTC-exposed cohort, beginning even before 9/11, represents the first occupational study of its kind to provide quality longitudinal data similar to that of the original Framingham Heart Study [92,93].

Conflict of interest: The authors declare that they have no conflict of interest.
Learning-based robust speaker counting and separation with the aid of spatial coherence

A three-stage approach is proposed for speaker counting and speech separation in noisy and reverberant environments. In the spatial feature extraction, a spatial coherence matrix (SCM) is computed using whitened relative transfer functions (wRTFs) across time frames. The global activity functions of each speaker are estimated from a simplex constructed using the eigenvectors of the SCM, while the local coherence functions are computed from the coherence between the wRTFs of a time-frequency bin and the global activity function-weighted RTF of the target speaker. In speaker counting, we use the eigenvalues of the SCM and the maximum similarity of the inter-frame global activity distributions between two speakers as the input features to the speaker counting network (SCnet). In speaker separation, a global and local activity-driven network (GLADnet) is used to extract each independent speaker signal, which is particularly useful for highly overlapping speech signals. Experimental results obtained from real meeting recordings show that the proposed system achieves superior speaker counting and speaker separation performance compared to previous publications, without prior knowledge of the array configuration.

Introduction

Blind speech separation (BSS) involves the extraction of individual speech sources from a mixed signal without prior knowledge of the speakers and mixing systems [1]. BSS finds application in smart voice assistants, hands-free teleconferencing, automatic meeting transcription, etc., where only mixed signals from single or multiple microphones are available. Several BSS algorithms have been developed based on different assumptions about the characteristics of the speech sources and the mixing systems [2-9]. Learning-based BSS approaches have recently received increased research attention due to advances in deep learning hardware and software. Promising results have been obtained using single-channel neural networks (NNs) [10-15]. To further improve separation performance, techniques that exploit the spatial information embedded in microphone array signals began to emerge [16-19]. However, most of these BSS techniques assume that the number of speakers is known prior to separation. As a key step prior to speaker separation, speaker counting [20] is examined next. Some studies have assumed a maximum number of speakers during speaker separation [15,21-23]. Another approach is to extract speech signals in a recursive manner [24-26], where the BSS problem has been tackled by a multi-pass source-extraction procedure based on a recurrent neural network (RNN). In contrast to the previous methods that use implicit speaker counting for separation, a multi-decoder DPRNN [27] uses a count-head to infer the number of speakers and multiple decoder heads to separate the signals. A speaker counting technique has been proposed using a scheme that alternates between speech enhancement and speaker separation [28]. Instead of exhaustive separation, one can selectively extract only the target speech signal with the help of auxiliary information such as video images [29,30], pre-enrolled utterances [31-33], or the location of the target speaker [34-37]. Although the target speaker extraction approach leads to significant performance improvements, the auxiliary information may not always be accessible. To overcome this problem, the
speaker activity-driven speech extraction neural network [38] has been proposed to facilitate target speaker extraction by monitoring speaker activity. However, this network is susceptible to adverse acoustic conditions when using speaker activity information alone. In such circumstances, multichannel approaches may be more advantageous than monochannel approaches. For example, deep clustering-based speaker counting and mask estimation have been incorporated into masking-based linear beamforming for speaker separation tasks [39]. Chazan et al. presented the use of a deep neural network (DNN)-based single-microphone concurrent speaker detector for source counting, followed by beamformer coefficient estimation for speaker separation [40,41].

Despite the promising results obtained with DNN-based approaches, most network models require a large amount of data for training. Another limitation is that identical array configurations in the test and training phases are preferred. Therefore, DSP-based approaches may have certain advantages [42]. Laufer-Goldshtein et al. proposed the global and local simplex separation algorithm by exploiting the correlation matrix of relative transfer functions (RTFs) across time frames [43]. The number of speakers is determined from the eigenvalue decay of the correlation matrix. The activity probabilities of each speaker are estimated from the simplex formed by the eigenvectors. In the separation stage, a spectral mask is computed for the identified dominant speakers, followed by spatial beamforming and postfiltering. Although the simplex-based approach is very effective in most cases, it does not work well for low-activity speakers [44].

In general, DNN-based approaches show promise but require extensive training data and do not generalize well to unseen array configurations. DSP-based approaches require no training and often allow low-resource implementation, but their performance depends on the array configuration. While the deep clustering-based speaker counting and mask estimation methods [39-41] are also array-configuration-agnostic, their speaker counting relies on a single-channel input feature, which can degrade counting performance in adverse acoustic conditions. Furthermore, the separation performance of these methods depends on the array configuration used.

The goal of this study is twofold. First, we reformulate a spatial feature that significantly improves the performance and robustness of source counting and separation. Second, we seek to leverage the strengths of DSP-based and learning-based methods for improved speaker counting and speaker separation performance, with robustness to unseen room impulse responses (RIRs) and array configurations. Inspired by the work of Gannot et al.
[43, 45], which is a purely DSP-based approach, we propose a robust speaker counting and activity-driven speaker separation algorithm that combines statistical preprocessing and a neural network back-end. We formulate a modified spatial coherence matrix based on whitened relative transfer functions (wRTFs) as a spatial signature of directional sources. The whitening procedure provides spectrally rich phase information that proves to be a robust spatial signature for dealing with mismatched array configurations. In the speaker counting stage, our approach attempts to reliably estimate the number of active speakers in low-SNR and low-activity scenarios by incorporating the eigenvalues of the spatial coherence matrix and the maximum similarity between the global activity distributions. In the speaker separation stage, the local coherence functions of each speaker are computed using the coherence between the wRTF of each time-frequency (TF) bin and the wRTF of the speaker weighted by the corresponding global activity function. The target masks for each speaker are estimated using a global and local activity-driven network (GLADnet), which remains effective for "mismatched" RIRs and array configurations not included in the training data.

We train our DNN models with RIRs simulated using the image-source method [46], while the trained models are tested using the measured RIRs recorded at Bar-Ilan University [47]. Real-life recordings from the LibriCSS meeting corpus [48] are also used to validate the proposed separation networks. In this study, the proposed speaker counting and speaker separation algorithms are compared with the simplex-based methods developed by Laufer-Goldshtein et al. [43] in terms of F1 scores and confusion matrices. Perceptual evaluation of speech quality (PESQ) [49] and word error rate (WER) are adopted as the performance measures in speaker separation tasks.

While inspired by Ref. [43], this study presents three main contributions that differ from the previous work. First, a learning-based robust speaker counting and activity-driven speaker separation algorithm is developed. Second, a modified spatial coherence matrix is formulated to effectively capture the spatial information of independent speakers. A novel idea based on the maximum similarity between the global activity distributions of two speakers over time frames is explored as an input feature for speaker counting. Third, an array configuration-agnostic GLADnet informed by the global and local speaker activities is proposed.

The remainder of this paper is organized as follows. Section 2 presents the problem formulation and a brief review of the simplex-based approach, which is used as the baseline in this study. Section 3 presents the proposed speaker counting and speaker separation system. In Section 4, we compare the proposed system with several baselines through extensive experiments. Section 5 concludes the paper.
Problem formulation

Consider a scenario in which the utterances of J speakers are captured by M distant microphones in a reverberant room. We assume that there is no prior knowledge of the array configuration. The array signal model is described in the short-time Fourier transform (STFT) domain. The received signal at the mth microphone can be written as

$$X_m(l,f) = \sum_{j=1}^{J} A_m^j(f)\, S_j(l,f) + V_m(l,f), \quad (1)$$

where l and f denote the time frame index and frequency bin index, respectively; $A_m^j(f)$ denotes the acoustic transfer function (ATF) between the mth microphone and the jth speaker; $S_j(l,f)$ denotes the signal of the jth speaker; and $V_m(l,f)$ denotes the additive sensor noise. This study aims to estimate the number of speakers J (speaker counting) and extract the independent speaker signals from the microphone mixture signals without information about the sources and the mixing process.

Baseline method: the simplex-based approach

In this section, we present the baseline by revisiting [43]. The simplex-based approach [43, 44] is based on the global and local simplex representations and relies on the assumption of speech sparsity in the STFT domain [50]. By assuming speech sparsity, each TF bin is dominated by either a single speaker or the noise. The ideal indicator selected in each TF bin can be expressed as

$$I_j(l,f) = \begin{cases} 1, & \text{if the $j$th speaker dominates bin } (l,f),\\ 0, & \text{otherwise.} \end{cases} \quad (2)$$

If a TF bin is not dominated by any speaker, such a TF bin is dominated by noise, i.e., $\sum_{j=1}^{J} I_j(l,f) = 0$. The global activity of speaker j in frame l is defined as the fraction of frequency bins dominated by that speaker,

$$p_j^G(l) = \frac{1}{F}\sum_{f=1}^{F} I_j(l,f), \quad (3)$$

which is the global activity associated with the jth speaker in the lth frame. Note that the global activities depend only on the frame index, not on the frequency index.

Spatial feature extraction

Assuming speech sparsity in the TF domain, the relative transfer function (RTF) [51], which represents the ratio between the ATF of the mth microphone and the ATF of the first (reference) microphone, can be written as

$$R_m(l,f) = \frac{X_m(l,f)}{X_1(l,f)} \approx \frac{A_m^j(f)}{A_1^j(f)}, \quad (4)$$

where j indexes the speaker dominating bin (l, f). In the following, a feature vector r(l) for each frame l is defined to compose D = 2 × (M − 1) × K elements of the real and imaginary parts of the computed ratios (4) for 1 ≤ k ≤ K frequency bins and (M − 1) microphone signals:

$$\mathbf{r}(l) = \big[\operatorname{Re}\{R_2(l,f_1)\}, \operatorname{Im}\{R_2(l,f_1)\}, \ldots, \operatorname{Re}\{R_M(l,f_K)\}, \operatorname{Im}\{R_M(l,f_K)\}\big]^T, \quad (5)$$

where $\{f_k\}_{k=1}^{K}$ are the selected frequencies. The correlation matrix $\mathbf{W} \in \mathbb{R}^{L\times L}$ is computed, where $[\mathbf{W}]_{ln} = \frac{1}{D}\,\mathbf{r}^T(l)\,\mathbf{r}(n)$. W can be approximated as [45]

$$\mathbf{W} \approx \mathbf{P}\mathbf{P}^T, \qquad \mathbf{P} = [\mathbf{p}_1^G, \ldots, \mathbf{p}_J^G] \in \mathbb{R}^{L\times J}, \quad (6)$$

where $\mathbf{p}_j^G \in \mathbb{R}^{L\times 1}$ is the global activity vector associated with the jth speaker.

Speaker counting

For J independent speakers, the matrix P should have rank J. It follows that the number of speakers can be determined by counting the principal eigenvalues of the correlation matrix W. However, selecting an appropriate threshold is not straightforward under complex acoustic conditions. To avoid explicit thresholding, the speaker counting problem has been formulated as a classification problem [43], where each class corresponds to a different number of speakers. A feature vector consisting of the first J′ principal eigenvalues $\lambda_1 \geq \cdots \geq \lambda_{J'}$ of the correlation matrix is used as the input to the classifier,

$$\mathbf{f}_{\text{baseline 1}} = [\lambda_1, \ldots, \lambda_{J'}]^T, \quad (7)$$

where J′ is the maximum possible number of speakers and is set to 4 in this study. A multiclass support vector machine (SVM) is used as the classifier in [43].
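To make the baseline feature pipeline concrete, the following minimal NumPy sketch computes the RTF feature vectors of (4)-(5), the frame correlation matrix W, and the eigenvalue feature of (7). It is an illustration only, not the authors' implementation: the tensor layout, the small regularization constant, and the variable names are our own assumptions.

```python
import numpy as np

def rtf_features(X, freq_bins):
    """Real-valued RTF feature vectors r(l) as in (4)-(5).

    X: complex STFT array of shape (M, L, F) -- mics x frames x bins.
    freq_bins: indices of the K selected frequency bins.
    Returns an (L, D) array with D = 2*(M-1)*K.
    """
    ratios = X[1:, :, freq_bins] / (X[:1, :, freq_bins] + 1e-12)   # (M-1, L, K)
    parts = np.concatenate([ratios.real, ratios.imag], axis=0)     # (2(M-1), L, K)
    return parts.transpose(1, 0, 2).reshape(X.shape[1], -1)        # (L, D)

def correlation_matrix(r):
    """[W]_ln = r(l)^T r(n) / D, with W in R^{L x L}."""
    return (r @ r.T) / r.shape[1]

# Eigenvalue feature of (7): the J' = 4 largest eigenvalues of W.
# W = correlation_matrix(rtf_features(X, freq_bins))
# f_baseline1 = np.linalg.eigvalsh(W)[::-1][:4]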
Speaker separation

Once the number of speakers J is available, the eigenvectors associated with the J largest eigenvalues are used to form the global mapping vector for each frame l,

$$\mathbf{v}^G(l) = [u_1(l), \ldots, u_J(l)]^T, \quad (8)$$

where $u_j(l)$ denotes the lth element of the jth eigenvector. According to [43, 45], the global mapping vector $\mathbf{v}^G(l)$ can be expressed as a linear transformation of the global activity vector $\mathbf{p}^G(l) = [p_1^G(l), \ldots, p_J^G(l)]^T$,

$$\mathbf{v}^G(l) = \mathbf{B}\,\mathbf{p}^G(l), \quad (9)$$

with embedded information on the speaker activities. The successive projection algorithm [52] can be applied to identify the simplex vertices and construct the transformation matrix $\mathbf{B} = [\mathbf{v}^G(l_1), \ldots, \mathbf{v}^G(l_J)]$, where $\{l_j\}_{j=1}^{J}$ represents the frame indices of the simplex vertices. Hence, the global activity can be computed as

$$\mathbf{p}^G(l) = \mathbf{B}^{-1}\,\mathbf{v}^G(l). \quad (10)$$

For the local mapping, each TF bin is assigned to a dominant speaker or noise. The spectral mask is obtained using the weighted nearest-neighbor rule,

$$M_j(l,f) = \frac{1}{\pi_j}\sum_{n=1}^{L} p_j^G(n)\,\omega_{lnf}, \quad (11)$$

where $\pi_j = \sum_{n=1}^{L} p_j^G(n)$ denotes the class normalization factor and $\omega_{lnf}$ is a Gaussian weighting function [33],

$$\omega_{lnf} = \exp\!\big(-\|\mathbf{r}(l,f) - \mathbf{r}(n,f)\|^2/\varepsilon^2\big), \quad (12)$$

that is inversely related to the distance, in the space defined by the local representation $\{\mathbf{r}(l,f)\}_{l=1}^{L}$, between frame n and frame l. The signal of the jth speaker can be estimated by applying the spectral mask in (11) to the reference microphone signal,

$$\hat{S}_j(l,f) = \max\{M_j(l,f), \beta\}\, X_1(l,f), \quad (13)$$

where β is an attenuation factor used to avoid musical noise. In this paper, β is set to 0.2 as in [43].

A linearly constrained minimum variance (LCMV) beamformer can be used to extract each independent speaker signal [43, 44], with the weights

$$\mathbf{w}_j(f) = \mathbf{R}_{nn}^{-1}(f)\,\mathbf{H}(f)\big[\mathbf{H}^H(f)\,\mathbf{R}_{nn}^{-1}(f)\,\mathbf{H}(f)\big]^{-1}\mathbf{g}_j, \quad (14)$$

where $\mathbf{H}(f) = [\mathbf{h}_1(f), \ldots, \mathbf{h}_J(f)]$ collects the RTF vectors $\mathbf{h}_j(f)$ of the J speakers and $\mathbf{R}_{nn}(f)$ is the noise covariance matrix. In this study, only sensor noise is assumed, i.e., $\mathbf{R}_{nn} = \sigma_{nn}\mathbf{I}$. As a result, (14) reduces to

$$\mathbf{w}_j(f) = \mathbf{H}(f)\big[\mathbf{H}^H(f)\,\mathbf{H}(f)\big]^{-1}\mathbf{g}_j, \quad (15)$$

where the RTF of the jth speaker can be estimated by averaging over the frames dominated by that speaker,

$$\hat{\mathbf{h}}_j(f) = \frac{1}{|L_j|}\sum_{l\in L_j} \frac{\mathbf{x}(l,f)}{X_1(l,f)}, \quad (16)$$

where $L_j = \{\,l : p_j^G(l) > \epsilon,\; l \in \{1, \ldots, L\}\,\}$ denotes the set of frames dominated by the jth speaker and ε = 0.2 is an activity threshold.

To further attenuate the residual noise and interference, a single-channel mask is applied to the beamformer output [43, 44],

$$\hat{S}_j(l,f) = \max\{M_j(l,f), \beta\}\;\mathbf{w}_j^H(f)\,\mathbf{x}(l,f), \quad (17)$$

where the vector $\mathbf{x}(l,f) = [X_1(l,f), \ldots, X_M(l,f)]^T$ denotes the microphone signals, $\mathbf{g}_j \in \mathbb{R}^{J\times 1}$ is a one-hot vector with one in the jth entry and zeros elsewhere, and β = 0.2 is a small factor used to prevent musical noise.
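The LCMV weights of (14)-(15) can be computed directly for each frequency bin; the sketch below assumes the RTF matrix H has already been estimated as in (16). It is a minimal illustration under those assumptions, not the authors' code.

```python
import numpy as np

def lcmv_weights(H, j, Rnn=None):
    """LCMV weights of (14) for one frequency bin.

    H: (M, J) complex matrix whose columns are the speakers' RTF vectors.
    j: index of the target speaker (selects the one-hot vector g_j).
    With sensor noise only (Rnn proportional to I), this reduces to (15).
    """
    M, J = H.shape
    g = np.zeros(J)
    g[j] = 1.0
    Ri = np.eye(M) if Rnn is None else np.linalg.inv(Rnn)
    A = H.conj().T @ Ri @ H                 # (J, J) constraint matrix
    return Ri @ H @ np.linalg.solve(A, g)   # (M,) beamformer weights

# Beamformer output for one TF bin: s_hat = w.conj() @ x_lf  (i.e., w^H x).
```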
Proposed method

Inspired by the above simplex-based approach, we develop a robust speaker counting and separation system by exploiting spatial coherence features of the array signals, as illustrated in Fig. 1. The system consists of three modules: the feature extraction module (Section 3.1), the speaker counting module (Section 3.2), and the speaker separation module (Section 3.3), as detailed in the sequel.

Spatial feature extraction

The simplex-based method [43] exploits the spatial information provided by the microphone array. As a result, spatial feature extraction plays a critical role in the subsequent speaker counting and separation algorithms. Instead of the RTF used in [43], in this study we extract spatial information by whitening the RTFs with no change in phase to enhance the spatial signature of the directional source, analogous to generalized cross-correlation with phase transformation (GCC-PHAT) [53]. In light of the uncertainty principle [54], this helps to improve the time-domain resolution for the computation of the spatial coherence matrix. Instead of the real feature vector used in the simplex-based approach, a "whitened" complex feature vector r(l) is defined as

$$\mathbf{r}(l) = \left[\frac{R_2(l,f_1)}{|R_2(l,f_1)|}, \ldots, \frac{R_M(l,f_K)}{|R_M(l,f_K)|}\right]^T \in \mathbb{C}^{(M-1)K},$$

where $R_m(l,f)$ is defined in (4) and $\{f_k\}_{k=1}^{K}$ is the selected frequency band as in (5). Next, we construct a spatial coherence matrix $\widetilde{\mathbf{W}} \in \mathbb{R}^{L\times L}$ with the lnth entry defined as

$$[\widetilde{\mathbf{W}}]_{ln} = \frac{1}{D}\operatorname{Re}\{\mathbf{r}^H(l)\,\mathbf{r}(n)\}, \quad (18)$$

where Re{·} is the real-part operator, ‖·‖ denotes the l2-norm, and $D = \|\mathbf{r}(l)\|\,\|\mathbf{r}(n)\| = (M-1)K$ due to the fact that the feature vectors have been whitened. Note that the complex inner product of r(l) and r(n) is computed, which can also be regarded as a sign-sensitive cosine similarity based on the Euclidean angle [55]. An example of the spatial correlation matrix computed using the method reported in [43-45] and the proposed spatial coherence matrix are compared in Fig. 2, generated using a 12-second clip of a three-speaker mixture captured by an eight-element uniform linear array (ULA). This suggests that the proposed spatial coherence matrix is effective in capturing speaker activity, much like a voice activity detector. In addition, the entries of the proposed coherence matrix lie within [−1, 1], which is a desirable property for network training.

Speaker counting

The flowchart of the proposed speaker counting approach is detailed in Fig. 3. Two features related to the speaker count are extracted from the spatial coherence matrix W̃ and input to the speaker counting network (SCnet), as detailed next.

In this study, we propose to use the eigenvalues $\{\tilde{\lambda}_n\}_{n=1}^{L}$ of the spatial coherence matrix W̃ as the feature for the classifier. An example of the scatter pattern of the eigenvalues used to discriminate between the different speaker count classes, J ∈ {1, 2, 3, 4}, is illustrated in Fig. 4. We generated 2000-sample speech mixtures for 1-4 speakers, with 0%, 10%, 20%, 30%, and 40% overlap ratios. Sensor noise was added with 10 dB SNR. Dry signals were convolved with the measured RIRs selected from the Multi-Channel Impulse Responses Database [47], which was recorded using an eight-element ULA with an interelement spacing of 8 cm and T60 = 0.61 s. Each cross in the figure represents one observation used to specify the number of speakers. Figure 4 shows the ability of the eigenvalues obtained from the correlation matrix and the coherence matrix to discriminate between different numbers of speakers. The eigenvalues of the coherence matrix W̃ discriminate between different numbers of speakers better than those of the correlation matrix W. However, some of the observations cannot be classified into the correct class according to the eigenvalues alone. In this study, we evaluate the similarity between global activities as auxiliary information to address the cases where the principal eigenvalue-based counting method does not work.
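The whitened feature and the coherence matrix of (18) translate almost directly into code. The following NumPy sketch mirrors the baseline snippet above; array shapes and the guard constants are illustrative choices of ours.

```python
import numpy as np

def wrtf_features(X, freq_bins):
    """Whitened RTF vectors: unit-modulus elements that keep only the RTF phase."""
    ratios = X[1:, :, freq_bins] / (X[:1, :, freq_bins] + 1e-12)   # (M-1, L, K)
    w = ratios / (np.abs(ratios) + 1e-12)
    return w.transpose(1, 0, 2).reshape(X.shape[1], -1)            # (L, (M-1)K) complex

def coherence_matrix(r):
    """[W~]_ln = Re{r(l)^H r(n)} / ((M-1)K), entries in [-1, 1]; cf. (18)."""
    D = r.shape[1]   # equals (M-1)K because every element has unit modulus
    return np.real(np.conj(r) @ r.T) / D

# Counting feature of proposal 1: normalized leading eigenvalues of W~.
# lam = np.linalg.eigvalsh(coherence_matrix(r))[::-1]
# f_proposal1 = lam[:4] / lam[0]
```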
Apart from the eigenvalues of the spatial coherence matrix, another feature that can help speaker counting is introduced to deal with meeting scenarios, in which the overlap ratio of conversation is often less than 20% [56]. For such scenarios, we first calculate a similarity matrix $\boldsymbol{\Gamma}_j \in \mathbb{R}^{j\times j}$ of the first j global activities, with the pq-th entry defined as

$$[\boldsymbol{\Gamma}_j]_{pq} = \frac{\tilde{\mathbf{p}}_p^G \cdot \tilde{\mathbf{p}}_q^G}{\|\tilde{\mathbf{p}}_p^G\|\,\|\tilde{\mathbf{p}}_q^G\|},$$

where "·" denotes the inner product, $\tilde{\mathbf{p}}_p^G \in \mathbb{R}^{L\times 1}$ and $\tilde{\mathbf{p}}_q^G \in \mathbb{R}^{L\times 1}$ denote the pth and qth global activities estimated from the spatial coherence matrix W̃, and 1 ≤ p, q ≤ j. Next, we find the maximum similarity value over all entries but the diagonal ones,

$$\tilde{\gamma}_j^{\max} = \max_{p\neq q}\, [\boldsymbol{\Gamma}_j]_{pq}.$$

Similarly, $\bar{\gamma}_j^{\max}$ denotes the maximum similarity calculated using the first j global activities obtained from the spatial correlation matrix W. An example of the scatter pattern of the maximum similarity used to discriminate between the different speaker count classes, J ∈ {1, 2, 3, 4}, is illustrated in Fig. 5. The data generation is identical to that of Fig. 4. To visualize the separability provided by the proposed feature, we plot the observations projected onto a two-dimensional feature space. Figure 5 suggests that the observations are separable by the maximum similarity, which helps to classify the number of speakers. In Fig. 5(a), the single-speaker observations and the two- to four-speaker observations are clearly separable along the $\tilde{\gamma}_2^{\max}$ coordinate. The one- or two-speaker observations and the three- or four-speaker observations are clearly separable along the $\tilde{\gamma}_3^{\max}$ coordinate. In Fig. 5(b), the one- to three-speaker observations and the four-speaker observations are clearly separable along the $\tilde{\gamma}_4^{\max}$ coordinate.

In this study, the speaker counting problem is formulated as a classification problem, as in Ref. [43], with four classes corresponding to 1 to 4 speakers. For each observation (audio clip), the number of speakers is indicated by a one-hot vector $\mathbf{z} \in \mathbb{R}^{4\times 1}$. For inference, the predicted class is the one with the highest probability in the output distribution. Three different input feature vectors are defined for the assessment of speaker counting performance:

$$\mathbf{f}_{\text{baseline 2}} = [\lambda_1, \ldots, \lambda_{J'}]^T, \quad \mathbf{f}_{\text{proposal 1}} = [\tilde{\lambda}_1, \ldots, \tilde{\lambda}_{J'}]^T, \quad \mathbf{f}_{\text{proposal 2}} = [\tilde{\lambda}_1, \ldots, \tilde{\lambda}_{J'}, \tilde{\gamma}_2^{\max}, \ldots, \tilde{\gamma}_{J'}^{\max}]^T, \quad (22)$$

where J′ = 4 is the maximum possible number of speakers, and the eigenvalues are normalized by the maximum eigenvalue to improve convergence. Feature $\mathbf{f}_{\text{baseline 2}}$ is obtained from the spatial correlation matrix W, whereas features $\mathbf{f}_{\text{proposal 1}}$ and $\mathbf{f}_{\text{proposal 2}}$ are obtained from the proposed spatial coherence matrix W̃.

A DNN model termed SCnet is used as the classifier for speaker counting. Figure 6 shows the SCnet, which consists of three dense layers, each followed by a rectified linear unit (ReLU) activation, with a softmax activation in the output layer. The notation (F_size, 64) denotes a dense layer with input size F_size and output size 64. The cross-entropy is used as the loss function in network training.
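A small NumPy sketch of the maximum-similarity feature and the composition of the proposal-2 input vector of (22) follows; the epsilon guard and the exact assembly of the feature vector are our own assumptions for illustration.

```python
import numpy as np

def max_similarity(P):
    """Largest off-diagonal cosine similarity between global activities.

    P: (L, j) matrix whose columns are the first j estimated global activities.
    Returns the scalar gamma_j_max.
    """
    Pn = P / (np.linalg.norm(P, axis=0, keepdims=True) + 1e-12)
    G = Pn.T @ Pn                    # (j, j) cosine-similarity matrix
    np.fill_diagonal(G, -np.inf)     # exclude the diagonal entries
    return G.max()

# Counting feature of proposal 2 in (22), for J' = 4:
# f_proposal2 = np.concatenate([lam[:4] / lam[0],
#                               [max_similarity(P[:, :j]) for j in (2, 3, 4)]])
```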
Speaker separation

The simplex-based method relies solely on the spatial cue to perform the subsequent beamforming, which depends on the specific array configuration. In contrast, our learning-based approach uses global and local spatial activity features to train the model, as shown in Fig. 7. The proposed system consists of two main modules: (1) the local coherence estimation of independent speakers, which monitors the local activity of each speaker according to the global activity of that speaker, and (2) the global and local activity-driven network (GLADnet), which extracts the speaker signal with the aid of auxiliary information about the global and local activities of the speaker.

In the local coherence estimation of a speaker, the local coherence is calculated between the wRTF of the target speaker and the wRTF of each TF bin. The wRTF of the jth speaker is calculated as

$$\mathbf{r}_j(f) = \left[\frac{\hat{A}_2^j(f)}{|\hat{A}_2^j(f)|}, \ldots, \frac{\hat{A}_M^j(f)}{|\hat{A}_M^j(f)|}\right]^T,$$

where $\hat{A}_m^j(f)$ is the estimated RTF. Thus, the local coherence of the jth speaker can be calculated as

$$c_j(l,f) = \frac{\operatorname{Re}\{\mathbf{r}_j^H(f)\,\mathbf{r}(l,f)\}}{\|\mathbf{r}_j(f)\|\,\|\mathbf{r}(l,f)\|},$$

where $\mathbf{r}(l,f) \in \mathbb{C}^{(M-1)\times 1}$ denotes the per-bin wRTF vector with elements $R_m(l,f)/|R_m(l,f)|$. The local coherence serves to inform the DNN about the local activity of a speaker.

GLADnet is based on a convolutional recurrent network [57], as illustrated in Fig. 8. The network has three inputs: the magnitude spectrogram of the reference microphone signal, the global activity of the speaker, and the local activity of the speaker. GLADnet has six symmetric encoder and decoder layers with 8-16-32-128-128-128 filters. The convolutional blocks feature a separable convolution layer, followed by batch normalization and exponential linear unit activation. The output layer terminates with a sigmoid activation. The convolution kernel and stride are set to (3, 2) and (2, 1), respectively. Note that 1 × 1 pathway convolutions (PConv) are used as skip connections, which leads to a considerable parameter reduction with little performance degradation. The global activity is concatenated to the output of the linear layer with 256 nodes in each time frame. The resulting vector is then fed to the following bidirectional long short-term memory layers with 256 nodes to sift out the latent features pertaining to each speaker. The soft mask estimated by the network is multiplied element-wise with the noisy magnitude spectrogram to yield an enhanced spectrogram. The complete complex spectrogram can be obtained by combining the enhanced magnitude spectrogram with the phase of the noisy spectrogram. The network is trained to minimize the compressed mean square error

$$\mathcal{L} = \big\| \,|\hat{\mathbf{S}}|^{c} - |\mathbf{S}|^{c} \,\big\|_F^2,$$

where c = 0.3 is the compression factor and $\|\cdot\|_F$ denotes the Frobenius norm.
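The compressed spectral loss above is straightforward to express in PyTorch. This is a minimal sketch under the stated definition; the clamp that guards the fractional power at zero is our own safeguard, not from the paper.

```python
import torch

def compressed_mse(est_mag, ref_mag, c=0.3):
    """Compressed spectral MSE: || |S_hat|^c - |S|^c ||_F^2.

    est_mag, ref_mag: nonnegative magnitude spectrograms of equal shape.
    """
    est_c = est_mag.clamp_min(1e-8) ** c
    ref_c = ref_mag.clamp_min(1e-8) ** c
    return torch.sum((est_c - ref_c) ** 2)
```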
Experimental study

Experiments were performed to validate the proposed learning-based speaker counting and separation system. The networks were trained on simulated RIRs and tested on measured RIRs with different T60s and array configurations recorded at Bar-Ilan University [47]. For meeting scenarios, we also tested the proposed system on real meeting recordings from the LibriCSS meeting corpus [48].

Training and validation dataset

In total, 50,000 and 5000 samples were used for training and validation, respectively. Dry speech signals selected from the train-clean-360 subset of the LibriSpeech corpus [58] were used for training and validation. Noisy speech mixtures edited into 12-s clips were prepared with different numbers of speakers, J ∈ {1, 2, 3, 4}, under reverberant conditions and signal-to-noise ratios (SNRs) between −5 dB and 5 dB. The overlap ratio of the speech mixtures varied from 0 to 40%. Reverberant microphone signals were simulated by filtering the dry signals with RIRs simulated using the image-source method [46]. The reverberation time was within the range of [0.2, 0.6] s. Sensor noise was added with SNR = 15, 25, and 35 dB. In this study, simulated (Gaussian) noise was used to model the sensor noise. Two microphone array geometries were used for training and validation, as depicted in Fig. 9. The first microphone array is an eight-element ULA with an interelement spacing of 8 cm. The geometry of the second array is similar to that of the seven-element uniform circular array (UCA) used in the LibriCSS dataset [48], which has one microphone at the center and the other six uniformly distributed around a circle with a radius of 4.25 cm. The RIRs of rectangular rooms with randomly generated dimensions (length, width, and height) in the range of [3 × 3 × 2.5, 7 × 7 × 3] m were simulated. The ULA was placed at 0.5 m from the wall, while the UCA was placed at the center of the room. Any two speakers were separated by at least 15°.

Implementation and evaluation metrics

In this study, the signal frame was 128 ms long with a 32 ms stride. A 2048-point fast Fourier transform was used. The sample rate was 16 kHz. The feature vectors in (5) and (18) comprised K = 257 frequency bins in 1-3 kHz. We chose this frequency range because, as in Ref. [43], it performed well in all of the scenarios examined, for different simulated and measured RIRs and array configurations. In the experiments, SCnet and GLADnet were trained using the Adam optimizer with a learning rate of 0.001 and a gradient norm clipping of 3. The learning rate was halved if the validation loss did not improve for three consecutive epochs.

The F1 score and the confusion matrix are used to evaluate the speaker counting performance. The F1 score is a measure of the accuracy of a test in classification problems, defined as the harmonic mean of precision and recall [59]. PESQ [49] is used as a metric for speech quality and is computed only over the periods when speech is present. In addition, we also evaluate the WER achieved by the proposed system compared to the baselines, using a transformer-based pre-trained model from the SpeechBrain toolkit [60]. The pretrained model was trained on the LibriSpeech dataset; the WER obtained with this model when tested on the test-clean subset is 1.9%.

Spatial feature robustness

In this section, we investigate the robustness of the algorithm with respect to the spatial correlation matrix and the spatial coherence matrix for measured RIRs and unseen array geometries. The proposed spatial coherence matrix based on wRTFs is used as a spatial signature for directional sources. The whitening process provides spectrally rich information that better accommodates unseen array configurations and measured RIRs. To see this, we compute the Modal Assurance Criterion (MAC) value on the spatial correlation matrix and the spatial coherence matrix for various unseen array configurations and RIRs. First, we vectorize the spatial matrix as $\boldsymbol{\psi} = \operatorname{vec}(\mathbf{W}) \in \mathbb{R}^{L^2\times 1}$, where $\boldsymbol{\psi}$ and $\boldsymbol{\psi}'$ represent the feature vectors associated with two spatial matrices. The MAC value between $\boldsymbol{\psi}$ and $\boldsymbol{\psi}'$ is defined as

$$\operatorname{MAC}(\boldsymbol{\psi}, \boldsymbol{\psi}') = \frac{|\boldsymbol{\psi}^T\boldsymbol{\psi}'|^2}{(\boldsymbol{\psi}^T\boldsymbol{\psi})(\boldsymbol{\psi}'^T\boldsymbol{\psi}')}.$$
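The MAC value is a one-line computation once the spatial matrices are vectorized; a minimal NumPy sketch:

```python
import numpy as np

def mac(W1, W2):
    """Modal Assurance Criterion between two vectorized spatial matrices.

    Values close to 1 indicate that the two matrices are highly similar.
    """
    p1, p2 = W1.ravel(), W2.ravel()
    return np.abs(p1 @ p2) ** 2 / ((p1 @ p1) * (p2 @ p2))
```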
To evaluate the robustness of the proposed spatial feature extraction method, we generated four different test datasets, each consisting of 500 samples. The first three datasets (G1, G2, and G3) were generated using measured RIRs from the Multi-Channel Impulse Responses Database [47], while the last dataset (sG1) was generated using simulated RIRs. As shown in Fig. 10, the first array configuration (G1) is included in the training set, while the second and third array configurations (G2 and G3) are considered "unseen" to the trained model. Note that sG1 had the same array configuration as G1, but with simulated RIRs. Tables 1 and 2 summarize the MAC values obtained using the spatial correlation matrix and the spatial coherence matrix. The off-diagonal MAC values of the spatial coherence matrix are consistently close to one and larger than those of the spatial correlation matrix. The MAC test demonstrates that the proposed spatial coherence matrix exhibits superior robustness to different array configurations and RIRs compared to the spatial correlation matrix. This property is desirable for the subsequent learning-based speaker counting and speaker separation approaches when dealing with unseen array configurations and measured RIRs.

Speaker counting performance

In the following, we examine several speaker counting methods for different levels of sensor noise and T60s. We generated 2000-sample speech mixtures for 1-4 speakers, with 0%, 10%, 20%, 30%, and 40% overlap ratios and dry speech signals from the test-clean subset of the LibriSpeech corpus. Sensor noise was added with SNR = 10, 20, and 30 dB. The measured RIRs were selected from the Multi-Channel Impulse Responses Database [47], recorded using an eight-element ULA with an interelement spacing of 8 cm and T60 = 0.36, 0.61 s at Bar-Ilan University. The RIRs were measured in 15° intervals from −90 to 90° at distances of 1 and 2 m from the array center. Table 3 summarizes the speaker counting results in F1 scores. We compare the proposed counting approaches with two baselines. Baseline 1 is the method proposed in [43]: the SVM classifier with f_baseline 1 in (7) as the input feature. Baseline 2 is the SCnet trained with f_baseline 2 in (22). For the proposed methods, proposals 1 and 2 represent the SCnet trained with f_proposal 1 and f_proposal 2 in (22), respectively. The speaker counting performance summarized in Table 3 suggests that baseline 1 performs comparably with baseline 2 in high-SNR conditions. However, the speaker counting performance of baseline 1 degrades significantly as the SNR decreases.

The feature using the eigenvalues obtained from the spatial coherence matrix (proposal 1) significantly outperforms that obtained from the spatial correlation matrix (baseline 1), especially when the SNR is low. In addition, the method trained with the maximum similarity (proposal 2) further improves the speaker counting performance over the method trained with eigenvalues only (proposal 1). In this study, speaker counting is highly dependent on the quality of the spatial information extracted from the microphone array. However, it should be noted that spatial features tend to degrade as the SNR decreases. As a result, the counting performance may be relatively lower at SNR = 10 dB.

Next, we investigate speaker counting in low-activity scenarios using four-speaker mixtures, where the first speaker was active for only 5% of the time. In Table 4, we see a significant performance degradation in the SCnet trained on the eigenvalues of the spatial correlation matrix (baseline 1), even in high-SNR conditions. In contrast, the SCnet trained on the eigenvalues and the maximum similarities computed using the proposed spatial coherence matrix (proposal 2) performs quite satisfactorily despite the unbalanced speaker activity.

The speaker count of each audio clip was labeled using the ground-truth information. In addition, the dataset contains 511, 1119, 614, and 154 examples for one, two, three, and four speakers, respectively. The results of speaker counting are summarized in the confusion matrices depicted in Fig. 11. The F1 scores for baselines 1 and 2 and proposals 1 and 2 were 88.37%, 92.44%, 96.48%, and 97.36%, respectively.
From Fig. 11, we can see that the methods trained on the features from the spatial coherence matrix (proposals 1 and 2) outperform the methods trained on the features from the spatial correlation matrix (baselines 1 and 2). Figure 11(c) and (d) show that the method trained on the maximum similarities (proposal 2) yields significantly lower underestimation rates than the method trained on eigenvalues only (proposal 1). For BSS problems, underestimation can undermine the subsequent separation, while overestimation is less critical. In summary, we extract spatial information by whitening the RTFs without changing the phase to enhance the spatial signature of the directional source, analogous to generalized cross-correlation with phase transformation (GCC-PHAT) [53]. In light of the uncertainty principle [54], this helps to improve the time-domain resolution for the computation of the spatial coherence matrix, which in turn leads to a more accurate estimation of the spatial activity, especially in low-SNR cases. This enables a more accurate estimation of the maximum similarity between two global activities as independent activities, without overlooking scenarios with low-activity speakers.

Furthermore, unlike most multichannel source counting methods, which typically require more microphones than sources, the simplex-based and the proposed methods are limited by the total number of frames used to compute the spatial correlation matrix and the spatial coherence matrix, not by the number of microphones. This implies that, in theory, there is virtually no limit to the number of speakers that can be identified. In fact, the only limit on counting accuracy is the degree of time overlap. To see this, we give two examples with different speaker activity patterns to show the maximum number of independent speakers that can be identified using ULAs with 2-5 elements evenly spaced at 8 cm.

Case I represents a scenario where four speakers are active in moderately overlapping time periods, as shown in Fig. 12(a). Note that at 2-4 s, three speakers are active concurrently. Inspection of Fig. 13(a) indicates that the spatial coherence matrices associated with different numbers of microphones remain very similar. In this case, the eigenvalue distribution analysis reveals that the number of sources can be accurately estimated, even when the number of speakers (4) exceeds the number of microphones (5), as shown in Fig. 14(a).

Case II presents a scenario where the proposed source counting method fails: four independent speakers are active with 100% overlap, as shown in Fig. 12(b). In this case, the spatial coherence matrices in Fig. 13(b) show no meaningful patterns of activity, regardless of the number of microphones. The eigenvalue distribution analysis in Fig. 14(b) provides an incorrect estimate of one speaker. In summary, methods based on simplex preprocessing are not limited by the number of microphones, but rather by the overlap percentage of the speaker activity time spans.
Speaker separation performance

In the following, we compare the proposed speaker separation approach (GLADnet) with three baselines. The first baseline (mask) uses only a spectral mask (13). The second baseline (LCMV-mask) is the simplex-based approach [43, 44] with beamforming and spectral masking (17). The third is the GLADnet trained only on the global activity, called the global activity-driven network (GADnet). To evaluate the robustness of the proposed speaker separation approach when applied to unseen RIRs and array configurations, we created three 2000-sample test datasets for three different array configurations (G1, G2, and G3) using the measured RIRs from the Multi-Channel Impulse Responses Database [47]. The array configurations G1, G2, and G3 are shown in Fig. 10.

First, we examine the separation performance using the G1 configuration for different overlap ratios and T60s. The results in Fig. 15 show that the proposed GLADnet outperforms the three baselines in terms of speech quality. The performance of the GADnet, which is not trained with spatial features, degrades drastically as the overlap ratio increases. While the LCMV-mask method achieves a WER comparable to GLADnet at a moderate T60 = 360 ms, its separation performance drops sharply under high reverberation.

Next, the effect of array configurations on separation performance is investigated. Figure 16 reveals that the speech quality (PESQ) and the ASR performance (WER) using the LCMV-mask method degrade as the array spacing and the array aperture decrease, even for moderate T60s. In contrast, the proposed GLADnet performs quite satisfactorily despite the unseen RIRs and array geometries.

We also evaluated the proposed network in speaker separation using the more realistic LibriCSS dataset. The dataset generation for network testing is identical to that for speaker counting. Figure 17 shows that the LCMV-mask method has performance comparable to the proposed GLADnet when the overlap ratio is low. However, the performance of the LCMV-mask drops dramatically at high overlap ratios. In addition, GADnet performs satisfactorily only for non-overlapping speech mixtures. In summary, the separation performance of baselines such as mask and LCMV-mask, which rely solely on spatial information, can be significantly affected by the inter-element spacing and array aperture. On the other hand, the baseline GADnet, which relies solely on spectral information, can suffer performance degradation in adverse acoustic conditions such as large reverberation and high overlap ratios. In contrast to these baselines, the proposed GLADnet exploits both spatial and spectral information to achieve superior performance in terms of the PESQ and WER metrics. In addition, the GLADnet is trained using the global and local activities derived from the wRTFs, making it less sensitive to unseen RIRs and array configurations.

Conclusions

In this paper, a learning-based robust speaker counting and separation system has been implemented by integrating array signal processing and DNNs. In feature extraction, the spatial coherence matrix computed with wRTFs across time frames shows superior robustness to different array configurations and RIRs compared to the spatial correlation matrix. In speaker counting, the SCnet trained on the eigenvalues and the maximum similarities derived from the spatial coherence matrix achieved the best F1 scores, even in low-SNR and low-activity scenarios. In speaker separation, the proposed GLADnet, informed by the global and local speaker activities, achieved superior PESQ and WER performance and remained effective for unseen RIRs and array configurations.
Fig. 1 Block diagram of the proposed speaker counting and separation system
Fig. 3 Flowchart of the proposed speaker counting approach
Fig. 5 Scatter plots of the maximum similarity for observations with J ∈ {1, 2, 3, 4} speakers. Each cross with a different color represents an observation corresponding to a different number of speakers
Fig. 8 The GLADnet
Fig. 9 Settings for network training with different microphone array geometries
Fig. 10 Microphone array settings for the experiments investigating the effects of array configurations
Fig. 11 Confusion matrices for the speaker counting results obtained using (a) baseline 1, (b) baseline 2, (c) proposal 1, and (d) proposal 2
Fig. 12 Ground-truth speaker activities for (a) case I and (b) case II
Fig. 14 Eigenvalue distribution, in descending order, of the spatial coherence matrix for (a) case I and (b) case II
Fig. 15 Comparison of separation performance with array configuration G1 in terms of (a, c) PESQ and (b, d) WER for different overlap ratios
Fig. 16 Comparison of separation performance in terms of (a, c) PESQ and (b, d) WER for different array configurations (G1, G2, and G3)
Fig. 17 Comparison of separation performance in terms of (a) PESQ and (b) WER for the LibriCSS dataset
Table 1 MAC values calculated using the spatial correlation matrix for various array configurations and RIRs
Table 2 MAC values calculated using the spatial coherence matrix for various array configurations and RIRs
Table 3 Comparison of speaker counting performance under different acoustical conditions in terms of F1 score
Table 4 Comparison of low-activity speaker counting performance under different acoustical conditions in terms of F1 score
2023-03-14T01:16:31.189Z
2023-03-13T00:00:00.000
{ "year": 2023, "sha1": "52fe08632c211dfca7c1b79f57aa7c2df867e493", "oa_license": "CCBY", "oa_url": "https://asmp-eurasipjournals.springeropen.com/counter/pdf/10.1186/s13636-023-00298-3", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "17f542e2b40ee6767476810cd3cf49daa7a682c8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Engineering" ] }
219574097
pes2o/s2orc
v3-fos-license
Data on the proliferation and differentiation of C2C12 myoblast treated with branched-chain ketoacid dehydrogenase kinase inhibitor

The catabolism of branched-chain amino acids (BCAAs) is mainly carried out in skeletal muscle myofibers. It is mediated by branched-chain aminotransferase 2 and branched-chain alpha-ketoacid dehydrogenase (BCKDH) in mitochondria for energy supply, especially during exercise. BCKDH kinase (BCKDK) is a negative regulator of BCAA catabolism through its inhibitory phosphorylation of the BCKDH E1a subunit. The data presented in this article are related to the research article that we previously reported, entitled "Energy metabolism profile of the effects of amino acid treatment on skeletal muscle cells: Leucine inhibits glycolysis of myotubes" (Suzuki et al., 2020) [1]. In that report, we demonstrated that a 1-hour treatment with BT2, an inhibitor of BCKDK, decreased the glycolysis of differentiated C2C12 myotubes compared to the control. Although BCAA metabolism is basically assumed to be carried out in differentiated myofibers, BCKDK is expressed in both undifferentiated myoblasts and differentiated myotubes, and the biological and physiological significance of BCAA metabolism in myoblasts is still unclear. The present data demonstrate an in vitro assessment of BT2 on C2C12 myoblast proliferation and differentiation. The data suggest that activation of BCAA catabolism by the BCKDK inhibitor BT2 impairs C2C12 myoblast proliferation and differentiation.

Value of the data

• The data suggest the possibility that activation of branched-chain amino acid catabolism by the BCKDK inhibitor BT2 may suppress C2C12 myoblast proliferation and differentiation.
• Since BCAA metabolism is basically assumed to be carried out in differentiated myofibers, the biological and physiological significance of BCAA metabolism in myoblasts is still unclear.
• The data are valuable for researchers interested in the relationship between branched-chain amino acid metabolism and the physiology of skeletal muscle cells.
• The data help researchers design experiments examining C2C12 myoblast proliferation and differentiation in response to drugs.

Data Description

Recently, we reported that activation of branched-chain amino acid catabolism by BT2, a BCKDK (branched-chain ketoacid dehydrogenase kinase) inhibitor, impaired the glycolysis of C2C12 myotubes [1,2]. BCKDK is expressed in both C2C12 myoblasts and differentiated myotubes. Here, we present data regarding the effect of BT2 on C2C12 myoblast proliferation and myogenic differentiation. The data in Fig. 1 show the comparison of the cell proliferation rate between control and BT2-treated myoblasts, obtained with the Cell Counting Kit-8. The data in Fig. 2 show the comparison of myogenic differentiation between control and BT2-treated myoblasts after induction of differentiation by reducing the serum concentration; these data were obtained by immunoblot, qRT-PCR, and microscopy.

Experimental Design, Materials, and Methods

To investigate the effect of BCAA catabolism on myoblast proliferation and myogenic differentiation, C2C12 myoblasts were treated with BT2, an inhibitor of BCKDK. For the evaluation of myoblast proliferation, myoblasts were cultured for 24 hours and the relative cell proliferation rate was then measured with the Cell Counting Kit-8. For the evaluation of myogenic differentiation, myoblasts were collected at days 0, 2, and 5 after induction of differentiation, and myogenic marker gene and protein expression was then measured by qRT-PCR and immunoblot.

Cell culture and reagents

C2C12 myoblasts were purchased from ATCC (Manassas, VA, USA). C2C12 myoblasts at early passage (3-10) were used for the experiments. Cells were maintained in DMEM supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin mixture at 37 °C with 5% CO2. For myogenic differentiation, myoblasts were cultured in 2% HS-DMEM until myotubes formed (5 days) after the cells reached 80-90% confluency. For gene and protein expression analyses, cells were seeded on 12-well miniplates (n = 6, each group) or 6-well miniplates (n = 3, each group), respectively. BT2 (3,6-dichlorobenzo[b]thiophene-2-carboxylic acid) (Axon Medchem, Groningen, Netherlands) was used to inhibit the BCKDC kinase for the activation of BCAA catabolism [2].

Fig. 2 (legend): The effect of BT2 treatment (40 μM and 100 μM) on total MyHC expression (anti-MF20) of C2C12 myoblasts for 5 days after induction of differentiation. Myoblasts at DM day 0 (cultured in growth media) are used as the negative control. The graph shows the relative intensity of each band after normalization to β-actin. Different superscripts indicate a significant difference between two groups. All assessments of significance were performed with 1-way ANOVA with Tukey post hoc test (p < 0.05) (n = 3). Values are expressed as means ± SEM. (c) Representative images of control and BT2-treated (100 μM) C2C12 myoblasts for 5 days after induction of differentiation. Myoblasts at DM day 0 are shown as the negative control. Bar = 100 μm.

Cell proliferation assay

The cell proliferation assay was performed with the Cell Counting Kit-8 (Dojindo Laboratories, Kumamoto, Japan) according to the manufacturer's protocol with slight modifications [3]. C2C12 myoblasts were seeded in 96-well miniplates at a density of 3000 cells/well in DMEM containing 10% FBS for 24 hours. The culture medium was then removed and replaced with DMEM containing 1% FBS and BT2 (10-100 μM). After 24 hours of culture, cell proliferation was assessed using the Cell Counting Kit-8.

RNA extraction and quantitative real-time polymerase chain reaction

Expression of target and reference genes was measured using quantitative real-time polymerase chain reaction (qRT-PCR) according to a previous report [4]. Gapdh was used as the reference gene. Differences in mRNA expression were calculated by the 2^−ΔΔCt method. Total RNAs were isolated from 6 individual wells of cultured C2C12 myoblasts according to the standard Trizol-chloroform protocol. cDNA was synthesized from 1 μg of total RNA with the reverse transcriptase iScript (Bio-Rad, Hercules, CA, USA), and qRT-PCR was performed using a LightCycler 96 (Roche Diagnostics, Mannheim, Germany). The primer sets were designed with Primer3. The primer sequences are as follows: Gapdh forward, TTGCCATCAACGACCCCTTC; Gapdh reverse, TTGTCATGGATGACCTTGGC; Myog forward, ACCTTCCTGTCCACCTTCAG; Myog reverse, CACCGACACAGACTTCCTCT; Myh3 forward, CAATAAACTGCGGGCAAAGAC; Myh3 reverse, CTTGCTCACTCCTCGCTTTCA.
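For readers unfamiliar with the 2^−ΔΔCt calculation used above, the following minimal Python sketch shows the arithmetic. The Ct values in the example are placeholders for illustration only, not data from this study.

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA expression by the 2^(-ddCt) method, normalized to Gapdh."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-ddct)

# Illustrative Ct values only (not data from this study):
print(fold_change(24.1, 18.0, 26.3, 18.1))  # ~4.3-fold relative to control
```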
Protein extraction and immunoblot analyses

Proteins were extracted from 3 individual wells of cultured C2C12 myoblasts of each group. The samples were homogenized in SDS sample buffer containing 125 mM Tris-HCl pH 6.8, 5% β-mercaptoethanol, 2% SDS, and 10% glycerol. Extracted proteins were separated on acrylamide gels and then transferred onto PVDF membranes (GE Healthcare). A blocking solution of 5% BSA was used. The ChemiDoc XRS Imager (Bio-Rad) was used for evaluating the detected bands. Total myosin heavy chain was measured with the MF20 antibody (eBioscience, 14-6503-82, dilution 1:1000) to determine the differentiation level of C2C12 myoblasts. β-actin was used as the internal standard (Cell Signaling Technology, #4967, dilution 1:1000).

Statistical analysis

All data are presented as means ± SEM. P values less than 0.05 were considered significant, and all assessments of significance were performed with the unpaired 2-tailed Student's t-test or 1-way analysis of variance (ANOVA) with Tukey post hoc test using Prism 6 (GraphPad Software, La Jolla, CA, USA).

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships which have, or could be perceived to have, influenced the work reported in this article.
2020-05-28T09:18:28.750Z
2020-05-26T00:00:00.000
{ "year": 2020, "sha1": "1fb3a6581ad859e3a9fcfd6dba208759093ad28f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.dib.2020.105766", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "122a30e52f927b649a55ee3aa7c05a3546d6f662", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
109845614
pes2o/s2orc
v3-fos-license
Ferrographic analysis of wear particles of various machinery systems of a commercial marine ship

The objective of this paper is to present the ferrographic analysis of wear particles contained in used lubricant oil samples collected from the engines, generators and gearboxes of a commercial marine ship. Flash point and viscosity measurements, ferrography analysis and energy dispersive X-ray analysis (EDX) have been employed to extract the relevant information about the physical aspects of the used oil and the wear condition of the parts from the generators, gearboxes and main engines. The study showed that the application of wear particle analysis, and of ferrography in particular, is an effective means to identify and respond to the maintenance needs of marine ship machineries.

© 2013 The Authors. Published by Elsevier Ltd. Selection and peer-review under responsibility of The Malaysian Tribology Society (MYTRIBOS), Department of Mechanical Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia.

Introduction

The growing importance of predictive maintenance has led to the development of a vast number of machine condition monitoring techniques. Vibration and oil analysis are the two distinct methods for determining mechanical failures in common components of machinery, such as engines, generators and gearboxes. As it is difficult to monitor wear conditions by measuring vibration, because of complex vibration sources, multiphasic interference and low frequency, oil analysis has become the main method for monitoring various machinery parts on board commercial ships [1]. Oil analysis can be categorised into three fluid analysis methods, namely property, fluid contamination and wear debris analysis [2]. Condition monitoring of machinery through analysis of wear debris is now extensively applied as a tool in diagnostic technology. Wear debris analysis, or analytical ferrography, is a method of predicting the health of equipment in a non-intrusive manner by studying wear particles present in the lubricating oil [3]. Previous studies have shown that the analysis of wear debris is important to detect critical stages of accelerated wear that precede costly and dangerous component failures [4]. Its main advantage is that oil samples can be taken from machineries which are still in operation, rather than dismantling them to study the surface damage.

Ferrography is a technique that provides microscopic examination and analysis of wear particles separated from all types of fluids [5]. Developed in the mid-1970s as a predictive maintenance technique, it was initially used to magnetically precipitate ferrous wear particles from lubricant oils [6]. Ferrography is used to quantify the amount of wear debris within a given sample and to conduct microscopic analysis of that debris in order to identify its type in terms of shape, appearance and size [7]. The continuous trending of the wear rate monitors the performance of machine components, and provides early warning and diagnosis of worn parts [8,9]. This technique has been successfully used to monitor the conditions of aircraft engines and gearboxes [1,6]. The reliability of a lubrication system is directly related to the presence of solid particulate matter contained in the fluid [10]. More than 50% of failures of turbine bearing systems are attributed to contamination in lubrication systems [11,12]. Since the 1970s, the detection and analysis of contamination using quantitative and qualitative wear debris analysis have been explored [1,13].
The subjective determination of component wear is based on the morphological and compositional analysis of wear particles extracted from the lubricant oil [2,7,14,15]. It has long been recognised that wear particles are unique and bear individual characteristics, and they provide significant information for obtaining evidence of the conditions in which they were formed and the wear mechanisms which are prominent [1]. The objective of this paper is to present the ferrographic analysis of wear particles contained in used lubricant oil samples collected from the engines, generators and gearboxes of a commercial marine ship. Based on the results obtained, the conditions of the machineries will be determined in order to decide whether they are safe to operate or should undergo maintenance.

Materials & Experimental Procedures

The characteristic properties of the lubricant oil samples, such as kinematic viscosity at 100 °C and flashpoint temperature, were determined using a Stanhope-Seta flashpoint tester and an Anton Paar SVM 3000 viscometer, according to the ASTM D92-05 and ASTM D445-09 standards, respectively. The types and concentrations of the metals present in the used samples were determined using a Shimadzu EDX-720 energy dispersive X-ray fluorescence (EDX) spectrometer. The size and distribution of ferrous particles were determined using ferrography. A Predict FM-III ferrogram maker was used to prepare the ferrogram photomicrographs by drawing the sample across a transparent glass plate in the presence of a strong magnetic field.

Kinematic Viscosity and Flashpoint Temperature

The physical appearance of all the samples is dark and opaque, with no dissolved water detected. Water is one of the important contaminants in lubricant oil systems because it can cause failure via a number of mechanisms. It can displace oil at contacting surfaces, reducing the amount of lubrication and activating surfaces which may then act as catalysts for the degradation of the oil. Kinematic viscosity is the most important property of an oil for providing optimum film strength, with minimal frictional losses, in order to prevent metal-to-metal contact, scuffing, microwelding and wear of sliding surfaces. Viscosity indicates the essential physical properties of oils which determine the suitability of the lubricants for use in engine systems. The flashpoint identifies the minimum vaporisation temperature of the lubricant [16]. Table 1 shows the kinematic viscosities and flashpoint temperatures of the samples. The samples analysed at 100 °C showed consistent kinematic viscosity, close to that of monograde SAE 40 lubricants. The kinematic viscosities of the used oil samples were found to be within the range of acceptable values (12.5-16.3 cSt), while there was no significant drop in the flashpoint temperature readings. Hence, it can be inferred that the samples had not been polluted by fuel dilution or the presence of volatile products.

Metal Concentration

Tables 2-4 show the concentrations of elements detected in the samples collected from the engines, generators and gearboxes of the ship, respectively. The EDX analysis of the samples showed the presence of the elements Cu, Zn, Cr, Ni, Al, P, Pb, Mg, Ca, Na, Fe and Si. The metallurgical information and chemical composition of the lubricated components in the engines, generators and gearboxes indicate that the observed Fe, Cr, Mg and Si were from parts made of steel alloy, whereas the Cu observed for generator No. 2 (71 ppm) and gearbox No. 2 (111 ppm) might have originated from copper-based alloy parts.
Other chemical elements such as Na, Ca, Zn, Mo and P could come from the additives and their degradation products, and from filter materials in the lubrication system. Substantial concentrations of Fe were detected, to a level which could provide an indication of approaching failure, in generators No. 2 (38 ppm) and No. 3 (107 ppm), and in gearbox No. 2 (26.1 ppm). Concentrations of Fe at around 30 ppm are classified as medium wear conditions, while high and abrasive wear conditions are indicated by iron concentrations of 40 ppm or higher [13].

Ferrographic Analysis

The wear metals that generally reflect the conditions of the machineries were examined to determine whether the machineries were wearing at a normal rate. By separating the wear particles suspended in the samples (via magnetic or filtration methods) and subsequently examining any debris found using an optical microscope (100x), tribologists are able to collect information on the health of the machinery from which the sample was taken. Shape characteristics and outline profiles of wear particles are important features used to identify the ongoing wear process [8,17]. Wang and Wang [5] reported that the size of normal wear particles for machineries is less than 15 µm, or less than 25 µm for machineries used in the mining industry.
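The severity thresholds cited above (Fe concentration per [13] and particle size per [5]) can be encoded as a simple screening rule. The sketch below is an illustrative combination of those thresholds; the boundary handling and the way the two criteria are merged are our own choices, not a standard from the referenced works.

```python
def wear_severity(fe_ppm, max_particle_um):
    """Coarse wear rating from the Fe-concentration and particle-size thresholds."""
    if fe_ppm >= 40 or max_particle_um >= 50:
        return "high/abnormal wear"
    if fe_ppm >= 30 or max_particle_um > 15:
        return "medium wear"
    return "normal wear"

print(wear_severity(107, 50))  # generator No. 3 -> high/abnormal wear
print(wear_severity(10, 12))   # -> normal wear
```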
Engines

The ferrographic analysis showed that the majority of the metal particles present in the engines were due to normal wear, with particle sizes of less than 15 µm for engines No. 1 and 3, and less than 10 µm for engine No. 2. The ferrogram photomicrographs for the samples after the ferrography test are shown in Fig. 1. Normal rubbing wear particles were generated as a result of normal sliding wear in the engines and exfoliation of parts of the shear-mixed layer. Rubbing wear particles consist of flat platelets, generally 5 µm or smaller, although they range up to 15 µm depending on the equipment's application. There should be little or no visible texturing of the surface, and the thickness should be 1 µm or less.

Generators

The ferrographic analysis showed that the majority of the metal particles present in the generators were due to normal wear, with particle sizes of less than 15 µm for generators No. 1 and 2. For generator No. 3, wear metals with particle sizes of more than 50 µm were present, which may have occurred as a result of fatigue wear. The ferrogram photomicrograph in Fig. 2c illustrates a large amount of abnormal wear particles and an obviously high wear mode, so much so that the magnetic flux lines are piled up on one another and are individually indistinguishable. Every wear particle size value is above the established value considered "out of limits", indicating that a serious problem may have occurred. When compared with the results for generators No. 1 and No. 2 (where no abnormal wear particles had been detected), it was confirmed that this unit was undergoing a major to catastrophic abnormal wear mode. As a result, based on the combination of the very high wear particle concentration and the results of the ferrographic analysis, this sample was rated as critical and the user was notified for immediate action to be taken.

Gearboxes

The ferrographic analysis showed that the majority of the metal particles present in the gearboxes were due to normal wear. The particle size was less than 15 µm for gearboxes No. 1 and 3, while for gearbox No. 2 wear metals with particle sizes of around 50 µm were present, which may have occurred as a result of severe sliding wear. The ferrogram photomicrograph in Fig. 3b indicates the presence of high concentrations of wear particles. It also shows a small amount of abnormal wear particles along with a moderate amount of normal rubbing wear with clearly distinguishable magnetic flux lines. The particles with scratches on the surface in parallel grooves were generated by severe sliding wear. The presence of this kind of particle indicates abnormal machine conditions and a breakdown of the lubricating film in the gearbox [18].

Conclusion

A tribological investigation was used in this study with the aim of obtaining highly reliable data and planning better maintenance, to avoid catastrophic breakdowns and expensive component replacements in the engines, generators and gearboxes of the commercial marine ship. The EDX analysis showed that moderate contamination levels occurred in the samples from generators No. 2 and 3, and from gearbox No. 2. The chemical composition from the lubrication system confirmed the presence of the elements Fe, Cr, Mg and Si, which can come from steel alloy, whereas the Cu from generator No. 2 and gearbox No. 2 might have originated from copper-based alloy parts. Other elements such as Na, Ca, Zn, Mo and P could come from the additives and their degradation products, and from filter materials in the lubrication system. The ferrographic analysis indicated the presence of wear particles with particle sizes of 50 µm in the samples from generator No. 3 and gearbox No. 2, indicating abnormal wear requiring urgent rectification. The presence of abnormal wear particles will cause the lubrication system to not work efficiently and, at the same time, will destroy parts of the metallic components. The observed morphology of the wear particles from the ferrographic analysis, particularly of the iron-containing debris, indicates the involvement of two types of wear mechanisms, namely normal rubbing wear, which generates very small iron particles in the range of 1-15 µm or less, and abrasive wear, which is caused by particles with sizes of 15-50 µm.
Asymptomatic HIV People Present Different Profiles of sCD14, sRAGE, DNA Damage, and Vitamins, according to the Use of cART and CD4+ T Cell Restoration We aimed to analyze markers of immune activation, inflammation, and oxidative stress in 92 asymptomatic HIV-infected patients according to adequate (AR, >500 cells/mm3) or inadequate (IR, <500 cells/mm3) CD4+ T recovery and the presence or absence of antiretroviral treatment (cART). Those newly diagnosed were divided into two groups, cART-naïve IR (nIR) and cART-naïve AR (nAR). Among those diagnosed more than five years ago, the following division was made: the cART-naïve long-term nonprogressors (LTNP); patients under cART and AR (tAR); and patients under cART and IR (tIR). We investigated the expression of the soluble receptor for advanced glycation end products (sRAGE), high-mobility group-box protein 1 (HMGB1), soluble CD14 (sCD14), IL-8, IL-10, 8-isoprostane, vitamins, and DNA damage. We observed higher levels of sRAGE in tAR as compared to nIR, nAR, and LTNP, and more sCD14 than in nIR and nAR. As for IL-10 levels, we found nIR > nAR > LTNP > tAR > tIR. Higher levels of 8-isoprostane were observed in nIR. LTNP presented a higher retinol dosage than tAR and less genotoxic damage induced by oxidative stress than the other groups. We suggest that the therapy, despite being related to lesser immune activation and inflammation, alters the vitamin profile and consequently increases the oxidative stress of patients. In addition, the lowest genotoxic index, found for LTNP, indicates that both VL and cART could be responsible for the increased DNA damage. More studies are needed to understand the influence of cART on persistent immune activation and inflammation. Introduction HIV infects many cell types, especially CD4+ T lymphocytes, and its activation provokes cytopathic effects through the production of new viral copies [1]. This results in a progressive deterioration of the cellular immune system and a severe immunodepression, which makes the individual more susceptible to opportunistic diseases [2]. Until the 1990s, the most common causes of death were associated with infections caused by Pneumocystis jirovecii, Toxoplasma gondii, cytomegalovirus, and the Mycobacterium avium-intracellulare complex, among others [3]. However, with the advent of combined antiretroviral therapy (cART), the survival of HIV-infected subjects increased significantly, as a consequence of viral replication control and immunological and clinical parameter improvement [4], as well as lower rates of virus transmission [2,5]. Despite therapeutic efficacy, some latently infected cells keep the provirus integrated into their DNA without expressing viral proteins; these cells are known as "reservoirs" [6]. This condition persists until the cell is stimulated and activated, thus delaying the immune system's response against the virus and hampering the action of antiretroviral drugs [1,6], which must be used throughout the patient's life. It is also known that the existence of viral reservoirs contributes to intense immune activation [7], leading to a chronic inflammatory status and triggering a series of non-AIDS-associated comorbidities, such as cardiovascular, hepatic, bone, renal, metabolic, and neoplastic diseases [8][9][10][11]. In fact, these diseases are currently the leading causes of death among people living with HIV/AIDS (PLWHA) [2].
In addition to the persistent inflammation caused by HIV itself, the infection is also related to increased oxidative stress, one of the adverse effects that may be induced by therapy [12]. It is known that PLWHA, especially those under treatment, present a large imbalance between oxidants and antioxidants [13][14][15]. For example, in the presence of cART, decreased levels of vitamins, their precursors, and some antioxidants [16,17], as well as high concentrations of lipid peroxidation products, such as 8-isoprostane, are observed. In addition, mitochondrial toxicity and DNA damage are also observed in this population [18,19]. Another mechanism that participates in immune activation and inflammation is the homeostatic imbalance of the gut-associated lymphoid tissue (GALT), which promotes the rupture of the epithelial barrier and microbial translocation into the circulation, measured by lipopolysaccharide (LPS) and soluble CD14 (sCD14), among other markers [7,20]. This immune activation, which can be triggered via Toll-like receptors (TLRs), activates the transcription factor NFκB, leading to the transcription of cytokines and other inflammatory products, such as IL-6, IL-8, and HMGB1 (high-mobility group-box protein 1), and to increased expression of RAGE (the receptor for advanced glycation end products) and other receptors on the cell membrane [20][21][22]. Studies show increased inflammatory mediators in PLWHA [23], even in those under cART, which is responsible for only a partial decrease in the inflammatory status. Thus, these constant stimuli lead to immunosenescence and to the early aging of PLWHA [8,10,12]. Considering the need to clarify the mechanisms responsible for intense cellular immune activation, persistent inflammation, and oxidative stress, the aim of the present study was to investigate some of these markers in different groups of asymptomatic PLWHA, according to the use of cART and their different CD4+ T lymphocyte counts. Study Design. This cross-sectional study was conducted between 2012 and 2015 at the Specialist Outpatient Service for Infectious Diseases "Domingos Alves Meira," Botucatu Medical School Complex (FMB)-UNESP, in São Paulo state, Brazil. This service assists approximately 600 HIV-infected people from Botucatu and its surrounding area. For this study, 250 consecutive patients were interviewed, but only 94 of them were included after applying the exclusion criteria. They were divided into five groups, according to Figure 1. The intention in studying groups without treatment was to investigate the influence of therapy on the oxidative status and immune activation of PLWHA, in order to open new discussions on the benefits versus harms of the early indication of cART, as is currently practiced in several countries.
Inclusion and Exclusion Criteria. PLWHA inclusion criteria were age between 20 and 50 years and either no previous cART administration, or treatment for more than five years with an undetectable viral load (HIV-1 RNA ≤50 copies/mm3) during that period. For the latter, adherence to cART was confirmed by the patient himself and by the records of medication collection at the service's pharmacy. All subjects included signed an informed consent form. Considering that many habits and comorbidities could be confounding variables and would interfere with our analysis of oxidative stress [22], patients carrying any of the following conditions were excluded: use of vitamin supplements, cancer history (current or previous), anorexia, morbid obesity, diabetes mellitus, cardiovascular, genetic, or autoimmune diseases, organ transplants, use of illicit drugs and alcohol, pregnancy at any stage or breastfeeding, AIDS symptoms (those with opportunistic infections), or coinfections, such as tuberculosis or chronic viral hepatitis. For the following criteria, exclusion occurred when patients concomitantly reported two or more of them: regular performance of intense physical exercise; use of antibiotics, anxiolytics, or antidepressants; and active smoking. People with only one of these conditions were included, because the statistical analysis was adjusted for these variables. 2.3. Sociodemographic and Clinical Data. These data were collected by interviews and from the patients' medical records, taking into account the date of blood collection for this study. 2.4. Analyses of Laboratory Tests. Twelve milliliters of blood was collected into an EDTA-containing tube from each patient included in the study. The material was maintained in a cooled and dark environment for 2-3 hours. After that, 60 μl of total blood was separated for the comet assay procedure and the remaining sample was centrifuged at 1500 rpm for 10 minutes. Six plasma aliquots per individual were stored at −80 °C until the tests were performed. Evaluation of Immune Activation (i) Measurement of plasma HMGB1: a sandwich enzyme-linked immunosorbent assay (ELISA) was performed, using 100 μl of a diluted sample (1:10) and following the manufacturer's specifications for a commercial kit (MyBioSource, item MBS2707497). Plasma HMGB1 concentration was determined by spectrophotometry at 450 nm. Results were expressed as pg/ml, using the optical density (OD) of the curves and samples for this calculation. (ii) Measurement of the soluble receptor for advanced glycation end products (sRAGE) and sCD14: using 100 μl of pure samples, the sandwich ELISA protocol was carried out according to the manufacturer's instructions (R&D Systems, item DRG00 for sRAGE and DC-140 for sCD14). Readings were performed immediately by a spectrophotometer at a wavelength of 450 nm. The OD of the samples and curve was calculated and expressed in pg/ml. For the comet assay, the slides were stained with Sybr Gold (Invitrogen, USA). Using an immunofluorescence microscope connected to an image analysis system (Comet Assay IV, Perceptive Instruments, Suffolk, Haverhill, UK), a total of 50 randomly selected nucleoids were counted for each slide. Results were expressed as "tail intensity" (ti), the percentage of migrated DNA, and "tail moment" (tm), a relative value given by the fraction of migrated DNA multiplied by the length of the tail; an illustrative sketch of these two measures is given below.
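The two comet-assay readouts just defined are simple ratios of fluorescence signals. The following minimal sketch is our own illustration of those definitions, not the Comet Assay IV software itself.

```python
# Minimal sketch (our own illustration, not the Comet Assay IV software):
# ti is the percentage of DNA fluorescence that migrated into the tail,
# tm weights the migrated fraction by the tail length.

def tail_intensity(head_signal: float, tail_signal: float) -> float:
    """Percentage of total DNA fluorescence found in the comet tail."""
    return 100.0 * tail_signal / (head_signal + tail_signal)

def tail_moment(head_signal: float, tail_signal: float, tail_length_um: float) -> float:
    """Fraction of migrated DNA multiplied by the tail length (relative units)."""
    return tail_signal / (head_signal + tail_signal) * tail_length_um

# Example: one nucleoid with 70% of the signal in the head and a 40 um tail.
print(tail_intensity(70.0, 30.0))     # -> 30.0 (% DNA in tail)
print(tail_moment(70.0, 30.0, 40.0))  # -> 12.0
```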
(iii) Analysis of antioxidants by the concentration of fat-soluble vitamins: these were measured from 100 μl of plasma by HPLC (Waters 2996) on a C30 column (150 × 4.6 mm, 3 μm), according to Ferreira and Matsubara [25]. The wavelength used was 455 nm for the carotenoids (lutein, cryptoxanthin, lycopene, and β-carotene), 325 nm for retinol, and 290 nm for α-tocopherol. The values of the standard solutions of the substances were fixed by their molar extinction coefficients and expressed in μmol/ml. Analysis of Results. We used generalized linear models with a Poisson or negative binomial distribution for count variables and a gamma distribution for asymmetric variables, or one-way ANOVA followed by Tukey-Kramer post hoc tests for symmetric data (a brief illustrative sketch of this setup appears below). Pearson correlations were adopted to analyze continuous variables. After fitting the model, confounding variables (age, sex, tobacco use, practice of intense physical activity, and use of anxiolytics and/or antidepressants) were added in order to evaluate their influence on the comparisons made. Differences were considered significant when p values were less than or equal to 0.05. All these procedures were performed with help from professionals at the institution's research support office, using SAS for Windows, version 9.2. This study was approved by the Research Ethics Committee of the Botucatu Medical School, registration number 4101-2011. Results Most of the participants were male (61.9%), aged 37 ± 8 years, white (88.1%), heterosexual (70.6%), and single (66.3%). Approximately 24% were active smokers; almost 20% practiced intense physical activity, and 4.0% used anxiolytics or antidepressants (Table 1). The groups were homogeneous as regards the above-mentioned factors (p > 0.05). The mean time of HIV infection, from the HIV diagnosis, was 7 ± 2 years. The lowest nadir of CD4+ T cells was observed in tIR, followed by tAR, and then by nIR. The means of CD4+ T and CD8+ T cells, VL, time of therapy use, and cART schemes are shown in Table 1. As for the cytokines, IL-8 production showed no differences among the groups. Conversely, IL-10 expression was lower in the cART groups and higher in nIR and nAR. The means of IL-10 were 0.63 ± 0.93 pg/ml in nIR, 0.24 ± 1.15 in nAR, 0.07 ± 0.19 in LTNP, 0.02 ± 0.03 in tAR, and 0.01 ± 0.00 pg/ml in tIR. Differences between the naïve groups and those under cART reached statistical significance, as shown in Figure 3. No differences were observed among the groups for the β-carotene and lycopene dosages. Differences in mean retinol concentration were observed only between LTNP and tAR (0.45 ± 0.10 and 0.30 ± 0.10 μmol/ml, respectively, p = 0.035), while α-tocopherol dosages were comparable among the groups included in the study (Figure 5). In addition, we verified the correlation between the markers of immune activation and some parameters that could also be related to their increase, such as CD4+ T nadir, time since HIV diagnosis, and time of therapy, and no significant correlations were found. Discussion The recent introduction of cART correlates with both the longer life expectancy of PLWHA and the development of non-AIDS comorbidities, which occur earlier in HIV-infected subjects than in the general population [10][11][12]. The "early aging" of these subjects is caused by constant immune activation and chronic inflammation, which lead to the exhaustion of the immune system and the imbalance of cytokines and other immunological and physiological components [8,10,12].
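As a brief aside before the discussion continues: the statistical recipe in "Analysis of Results" above can be sketched in code. The original analysis used SAS 9.2; the Python/statsmodels translation below, together with its toy dataset and made-up values, is our own illustration of the gamma-GLM branch, not the authors' code.

```python
# Hedged sketch of the modelling recipe from "Analysis of Results": a gamma
# GLM with a log link for a positive, right-skewed marker, adjusted for a
# confounder. Column names and values here are invented for illustration.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

toy = pd.DataFrame({
    # a positive, right-skewed marker (think IL-10 in pg/ml)
    "marker": [0.63, 0.51, 0.24, 0.30, 0.07, 0.09, 0.02, 0.03, 0.01, 0.02],
    "group":  ["nIR", "nIR", "nAR", "nAR", "LTNP", "LTNP",
               "tAR", "tAR", "tIR", "tIR"],
    "age":    [35, 30, 40, 38, 42, 37, 33, 45, 29, 41],
})

model = smf.glm("marker ~ C(group) + age", data=toy,
                family=sm.families.Gamma(link=sm.families.links.Log()))
print(model.fit().summary())
```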
Several receptors and proteins promote cellular activation and, consequently, the activation of the signaling cascades that give rise to the inflammatory components. RAGE is a pattern recognition receptor (PRR) that activates the cell through interaction with its ligands, for example, AGEs and HMGB1 [25]. Conversely, its soluble form, sRAGE [26], acts as a suppressor of activation, since it sequesters the RAGE ligands, preventing the interaction between them and the subsequent cellular signaling [22]. We detected higher sRAGE levels in the cART group with higher CD4+ T cell counts as compared to the non-cART groups. There is more than one possible explanation for this observation. First, in the blood circulation of these individuals there could be a greater accumulation of ligands for this receptor [22], such as sCD14, which was increased in the tAR group in the present study. Second, the levels of sRAGE could mirror the possible partial decrease of inflammation presented by cART patients [23]. This last argument could even be supported by the smaller dosages of IL-8 found in the same group of patients, although the differences did not reach statistical significance. High concentrations of sRAGE have been related to fewer occurrences of atherosclerosis in PLWHA [27], and of hypercholesterolemia [28] and arterial hypertension [29] in individuals not infected by HIV, suggesting that its dosage could be a useful tool in the diagnosis of cardiovascular diseases in PLWHA [27]. HMGB1 in its extracellular form is secreted actively following cellular stimulus, or passively by necrotic and apoptotic cells, and performs functions similar to those of proinflammatory cytokines [30]. Due to its binding to RAGE and other PRRs present on CD4+ T cells, HMGB1 may even induce HIV reactivation [31]. We did not find any difference in the levels of this protein among the five groups. However, HMGB1 correlated negatively with CD4+ T counts and positively with VL, in line with the longitudinal study by Trøseid et al. [32], who showed a significant HMGB1 reduction after the introduction of cART. Youn et al. [21] found that monocyte stimulation with the association of LPS and HMGB1 led to higher TNF-α production compared to LPS alone. Taken together, the findings by Trøseid et al. [32] and Youn et al. [21] may indicate that higher levels of HMGB1 might contribute to poorer clinical outcomes in HIV-infected individuals, as there would probably be more cellular activation, reactivation of the latent virus, and increased production of inflammatory cytokines in these individuals. In addition, elevated HMGB1 plasma levels are related to other chronic inflammatory conditions in non-HIV-infected individuals, including diabetes mellitus and cancer [25,33]. Thus, there is a need to investigate the influence of cART on the decrease of this marker, in order to delay the development of HIV infection and the appearance of non-AIDS comorbidities. Among microbial translocation markers, sCD14 is related to monocyte activation, and its concentration is higher in PLWHA compared to uninfected individuals. It is also associated with the characteristic comorbidities of aging in this population [34,35]. Here, the group of individuals receiving cART and presenting high CD4+ T cell counts showed higher levels of sCD14 as compared to the naïve groups and to those with recent infection, which was also demonstrated by Sandler et al.
[36]. This fact could be related to the time of infection and therapy because, despite viral suppression at the plasma level, GALT is one of the viral reservoirs that can sustain HIV replication [37] in a T cell-deficient environment, whereas cART is not able to completely restore Th17 cells in GALT [38]. However, in the present study, the increase in sCD14 did not occur in cART patients presenting CD4+ T cells below 500 cells/mm3, as would be expected, since these individuals would probably have fewer T lymphocytes and an even scarcer immune response in GALT, which would further compromise the balance of this mucosa. In contrast to our results, other authors have shown greater microbial/polymicrobial translocation in immunological nonresponders and have associated it with intestinal flora imbalance [39]. The importance of studying the sCD14 marker in PLWHA is also justified by the observations that there is an association between sCD14 and increased risk of cardiovascular disease [40] and that sCD14 is a predictor of all causes of mortality in HIV patients, even in those with undetectable VL [36]. In HIV infection, there is an imbalance not only of the Th17 profile but also of Th1, Th2, and regulatory T cells (Treg). Additionally, large numbers of inflammatory cytokines are found in HIV-positive patients, which may influence the progression to AIDS and the onset of non-AIDS comorbidities [41]. IL-10 is a regulator of the inflammatory immune response [42]. In the present study, this cytokine showed different concentrations in the different groups. The highest levels were found in naïve patients presenting a recent infection, intermediate levels in naïve patients who had been diagnosed more than five years before, and the lowest in individuals under cART. These results agree with those of Brockman et al. [42], which showed a reduction in both IL-10 plasma levels and its mRNA expression in cART subjects presenting adequate viral suppression as compared to naïve individuals, with the exception of elite controllers, who presented values similar to those of uninfected individuals, suggesting that viral replication may be the main determinant of cytokine concentrations. Likewise, in the present study, a positive correlation was also found between VL and IL-10 levels. As reported by Haissman et al. [43], IL-8 showed a negative correlation with CD4+ T cells, which evidenced higher levels of inflammatory cytokines in individuals with CD4+ T cell counts below 200 cells/mm3. Thus, the monitoring of inflammatory cytokines could be included in the follow-up of patients in order to evaluate the evolution of HIV infection.
It is known that the constant presence of these inflammatory components and the residual replication of HIV induce oxidative stress in PLWHA, which occurs when there is an overproduction of ROS and RNS or a reduction in antioxidant capacity. Such oxidative stress is potentiated by the toxic effects of cART [19]. For example, PIs are known to deregulate ubiquitin-proteasome system (UPS) proteins, which contributes to endoplasmic reticulum stress, as well as to lipid accumulation, the development of insulin resistance and diabetes mellitus, and an increased risk of atherosclerosis [44]. In addition, PIs activate intracellular apoptosis pathways and increase the prooxidant status of the intracellular environment [44]. On the other hand, NRTIs inhibit DNA polymerase, decreasing mitochondrial DNA and causing membrane loss and lower rates of respiration and oxidative phosphorylation, which consequently induces greater production of ROS and oxidative stress [45]. We found that 8-isoprostane, a marker of lipid peroxidation, was higher in naïve individuals with CD4+ T cells below 500 cells/mm3 than in the other groups. This can be explained by the association between high oxidative stress and patients who have a high VL and a poor immune system. Thus, in our study, cART does not appear to have increased the levels of this marker, in contrast to the findings of Redhage et al. [18] and Hulgan et al. [14], in which cART individuals, even those with controlled viral replication, had increased levels of 8-isoprostane. These authors also pointed out that PLWHA under cART without an NNRTI in its composition had higher rates of 8-isoprostane as compared to those who used one or to naïve individuals. High levels of this marker are also found in several non-AIDS comorbidities, including atherosclerosis [46], which highlights the need for further studies aimed at reducing this parameter in PLWHA. Carotenoids have the ability to scavenge free radicals, and many are precursors of retinol, or vitamin A. In the immune system, retinol stimulates phagocytosis, T cell proliferation, activation of cell-mediated cytotoxicity, and antibody production, and it contributes to intestinal mucosal homeostasis [47,48]. However, studies have shown deficiency of β-carotene and retinol in the HIV-infected population [49,50] as compared to uninfected individuals, which was also evident in subjects under cART [49]. As for the carotenoids, the groups showed no differences in the dosages of β-carotene and lycopene. However, the lutein level was lower in the cART group presenting low CD4+ T cell counts, suggesting that both the use of cART and immunodeficiency could influence lutein levels. Cryptoxanthin also appears to be influenced by cART and the immune response. Indeed, its concentration was higher in naïve patients with high levels of CD4+ T cells compared to naïve individuals with an inadequate immune response or the cART groups. Retinol was also higher in naïve patients with high CD4+ T cell counts, underscoring the importance of the natural mechanisms of HIV infection control that these individuals display.
α-Tocopherol is a lipid-soluble antioxidant that acts by blocking the lipid peroxidation of polyunsaturated fatty acids from membranes and lipoproteins. It also blocks the activation of NFκB and the consequent production of proinflammatory cytokines, and it has physiological potential in the reduction of atherosclerosis [51]. There was no difference in the α-tocopherol levels between the groups studied here, but other authors have reported a decrease in vitamin E [52] in individuals with low CD4+ T cell counts. Such differences may be related to the different study designs and characteristics of the populations. However, we did not evaluate the cART schemes used by the participants or their percentage of vitamin E deficiency. In a Brazilian study, deficiency of this vitamin was found in almost 20% of PLWHA, and it occurred more frequently in cART patients who did not use an NNRTI in their regimen [17]. Given the antioxidant deficiencies in this population and the importance of antioxidants in reducing oxidative stress, antioxidant monitoring should be routinely introduced and micronutrient supplements recommended when necessary. In the long term, oxidative stress leads to genotoxic effects, which can either be repaired or lead to mutagenicity [53]. When comparing DNA damage in our groups, there was less oxidative damage in naïve patients with infection for more than five years and CD4+ T cell counts higher than 500 cells/mm3. One possible explanation is that, as these individuals have good control of viral replication and CD4+ T counts without cART administration, they might be able to activate more efficient DNA repair mechanisms [54]. After all, it is known that both HIV VL and antiretrovirals may contribute to the genotoxic increase in these patients [14,55,56], as observed in the other groups studied here. There are few human studies on the frequency of DNA damage and its consequences for HIV infection and the appearance of other comorbidities, and there are few in vivo studies about the interference of cART in this context. Considering that genomic instability may contribute to the development of neoplasias and that such comorbidities are common in the HIV-infected population, it would be interesting to have more studies evaluating the influence of chronic use of cART, as well as of the persistent immune activation and inflammation in PLWHA, on these genotoxic alterations. This study presents some limitations, such as its cross-sectional design, the reduced number of individuals in the LTNP group, and the lack of food surveys or anthropometric measures. However, other factors were carefully considered, such as the strict exclusion criteria, group homogeneity regarding sociodemographic variables, and data analysis adjusted for gender, age, tobacco use, intense physical activity, and use of anxiolytics and/or antidepressants.
Conclusions We found that patients with cART, viral suppression for over five years, and high CD4+ T cell counts (>500 cells/mm3) have higher levels of sRAGE and sCD14, and that this group, along with that under cART with low CD4+ T cell counts, showed lower levels of IL-10 and vitamins compared to naïve subjects. This result suggests that cART may not restore cell functions in GALT, which would compromise local homeostasis, induce microbial translocation, and consequently lead to the increase of some soluble ligands (e.g., sRAGE) in an attempt to minimize activation via RAGE or TLRs. In addition, despite the incontestable benefits of cART, these drugs can influence the antioxidant defenses of the body, considerably reducing some vitamins, which could compromise the oxidative balance and contribute to the persistent inflammation in PLWHA. We also showed that high plasma concentrations of 8-isoprostane occurred in naïve individuals with CD4+ T cells below 500 cells/mm3, evidencing the participation of HIV viral replication in increasing oxidative stress. Regarding DNA damage, the group of naïve patients with CD4+ T cell counts higher than 500 cells/mm3 and diagnosed more than five years ago (LTNP) showed a lower genotoxic index than all the other groups, indicating that both VL and cART may be responsible for the increase in DNA damage, a process that could perhaps be alleviated by specific intrinsic factors of the organism, such as more efficient repair mechanisms and protective genes in certain individuals. Further studies are needed to understand the persistent mechanisms of activation and inflammation, influenced or not by cART, in order to guarantee greater longevity and a better quality of life for PLWHA. Figure 1: Flowchart for patients' inclusion and division into study groups, according to the presence or absence of combined antiretroviral therapy (cART), CD4+ T cell count (cells/mm3), and time of HIV diagnosis. Figure 2: Plasma levels of the soluble receptor for advanced glycation end products (sRAGE), high-mobility group-box protein 1 (HMGB1), and soluble CD14 (sCD14) of the 92 people living with HIV/AIDS, according to the five studied groups. sRAGE and sCD14 were higher in tAR than in the naïve groups. Statistical tests: Tukey-Kramer for sRAGE and gamma distribution for the others. *p < 0.05; **p < 0.005. Figure 4: Plasma levels of 8-isoprostane of the 92 people living with HIV/AIDS, according to the five study groups. This marker is upregulated in nIR. Statistical test: gamma distribution. *p < 0.05, difference from all the other groups. Figure 5: Plasma levels of carotenoids, retinol, and α-tocopherol of the 92 people living with HIV/AIDS, according to the five study groups. LTNP presented higher cryptoxanthin and retinol relative to the cART groups. Statistical test: gamma distribution. *p < 0.05, comparing the indicated groups; #p < 0.05, difference from all the other groups. Figure 6: Leukocyte DNA damage of the 92 people living with HIV/AIDS, according to the five study groups, in three conditions: basal (BAS), that is, slides without enzyme treatment, or slides with enzymatic treatment (endonuclease III [END] or formamidopyrimidine-DNA glycosylase [FPG]). The LTNP group presented the lowest DNA damage. Statistical tests: gamma distribution for BAS-tm, END-tm, and FPG-tm, and ANOVA for the others. *p < 0.05, difference from all the other groups.
Figure 7: Leukocyte DNA damage representation of five selected nucleoids from patients living with HIV/AIDS, according to the five study groups, whose slides were treated with the formamidopyrimidine-DNA glycosylase (FPG) enzyme. In the comet assay images, immunofluorescence microscopy shows that LTNP presents less damage than the other groups.
Table 1: Epidemiological and clinical characterization of the 92 HIV-infected individuals studied.
Ring-Opening Polymerization (ROP) and Catalytic Rearrangement as a Way to Obtain Siloxane Mono- and Telechelics, as Well as Well-Organized Branching Centers: History and Prospects PDMS telechelics are important both in industry and in academic research. They are used both in the free state and as part of copolymers and cross-linked materials. At present, the most important, practically used, and well-studied method for the preparation of such PDMS is diorganosiloxane ring-opening polymerization (ROP) in the presence of nucleophilic or electrophilic initiators. In this brief review, we survey the current advances in the field of obtaining polydiorganosiloxane telechelics and monofunctional PDMS, as well as well-organized branching centers, by the ROP mechanism and by catalytic rearrangement, one of the first and most important reactions in the polymer chemistry of silicones, which remains so at the present time. Introduction Polyorganosiloxanes are one of the most important classes of polymers, with great practical importance. The nature of the backbone determines the set of unique characteristics of these macromolecular compounds, making them indispensable in the creation of materials widely used in various fields of practice, ranging from construction, engineering, agriculture, and environmental protection to medicine, cosmetics, pharmaceuticals, and home care products [1][2][3][4][5]. Siloxanes have low surface tension, hydrophobicity, good surface wettability, damping properties, a low glass transition temperature and frost resistance, a low temperature dependence of their physical properties, and low toxicity and flammability, and they are safe for the environment [6][7][8]. A particularly important property of siloxanes is biocompatibility, which is already actively exploited in science and technology [9]. The development of new high-tech industries, such as organic electronics and photonics, 3D printing, gas separation, and drug delivery, has led to a further expansion of the applications for various types of organosiloxane polymers [8][9][10][11][12][13][14]. A significant feature of organosiloxane polymers in comparison with classical organic macromolecules is the considerably higher energy of the Si-O bond than of the C-C and C-O bonds in the main chains of organic polymers [15]; this determines their high-temperature characteristics and the slight changes in their physical properties over a wide temperature range. Longer bonds and a wider rotation angle compared to carbon analogues determine the high flexibility of the siloxane chain, its low glass transition temperature, and its high gas permeability. Weak intermolecular interactions also determine one of the significant disadvantages of organosiloxanes: their relatively poor physical and mechanical properties [16]. The search for ways to overcome this significant drawback, while maintaining the main advantages of this class of polymers, has continued throughout the history of their development. In the middle of the last century, a large number of detailed studies of the regularities and mechanisms of ring-opening polymerization of cyclosiloxanes were carried out. Next, we briefly consider the most important results. It is known that the possibility of the process proceeding is determined by the magnitude of the change in the Gibbs free energy, ΔG = ΔH − TΔS, and the process is possible only in the case of ΔG < 0.
The Si-O bond energies in dimethylcyclosiloxanes with n > 3 and in linear dimethylsiloxanes are close to each other [38,39], and the enthalpy change ΔH in the reaction is close to zero. Thus, the driving force of polymerization is mainly the change in entropy, which leads to negative values of ΔG. The entropy gain during polymerization in the case of dimethylsiloxanes is due to the higher flexibility of the linear polydimethylsiloxane chain compared to its mobility in cyclosiloxanes, which gives the linear polymers a larger number of accessible conformations [40]. A change in the flexibility of the siloxane chain with a change in the organic surroundings has a significant effect on the course of the process. With an increase in the volume and polarity of the substituents at the silicon atom, the process leads to a higher yield of cyclosiloxanes. This is due to a relative decrease in the entropy of the polymer because of an increase in the interchain interaction and, accordingly, a decrease in the mobility of the polymer chain segments [41]. During the bulk polymerization of [SiR(CH3)O]n cyclosiloxanes, the equilibrium concentration of the polymer, depending on the nature of the R substituent, decreases in the row: R = H > CH3 > CH2CH3 > CH2CH2CH3 ≈ C6H5 >> CH2CH2CF3 [42]. Thus, the behavior of the ring-opening polymerization of organocyclosiloxanes depends on the type of organic radicals at the silicon atoms, the number of units in the structure of the initial cyclosiloxane, and the nature of the initiator.
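To make the feasibility criterion concrete, the sketch below evaluates ΔG = ΔH − TΔS for two illustrative cases: an unstrained D4-type ring (ΔH ≈ 0, entropy-driven) and a strained D3-type ring (ΔH < 0). The numerical ΔS value is a placeholder of plausible sign and magnitude, not a measured datum; only the signs and the ΔG < 0 test follow the discussion above.

```python
# Illustrative feasibility check for cyclosiloxane ROP: dG = dH - T*dS.
# dS below is a placeholder value, chosen only to have a plausible sign.

def gibbs_free_energy(delta_h_j: float, delta_s_j_per_k: float, t_kelvin: float) -> float:
    """Return dG in J/mol for consistent J-based inputs."""
    return delta_h_j - t_kelvin * delta_s_j_per_k

T = 298.15  # K
cases = {
    "D4-type (unstrained, dH ~ 0, entropy-driven)": (0.0, 7.0),
    "D3-type (strained ring, dH ~ -3..-4 kcal/mol ~ -14 kJ/mol)": (-14_000.0, 7.0),
}
for label, (dh, ds) in cases.items():
    dg = gibbs_free_energy(dh, ds, T)
    verdict = "feasible" if dg < 0 else "not feasible"
    print(f"{label}: dG = {dg / 1000:+.1f} kJ/mol -> polymerization {verdict}")
```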
Preparation of Organosiloxane Telechelics by Anionic ROP Anionic ring-opening polymerization (AROP) under the action of various nucleophilic reagents is widely used for the synthesis of high molecular weight polydiorganosiloxane telechelics with various organic surroundings of the siloxane chain [28,43]. In the process of cyclosiloxane opening and chain growth (Figure 2a), side processes may occur: depolymerization due to the breaking of the linear chain by the active center (backbiting reaction) (Figure 2b), with the formation of low molecular weight cyclic products, and the chain transfer reaction (Figure 2c), in which the terminal active site attacks the siloxane bond of another polymer chain, leading to a redistribution of macromolecules, which is also called equilibration. AROP Initiators Hydroxides of alkali metals, quaternary ammonium and phosphonium bases, and their derivatives, the siloxanolates, are most frequently used as process initiators; they lead to the formation of terminal silanolate anions (Figure 2a), which are the active centers in the polymerization reaction. Potassium hydroxide is one of the first initiators used for ROP of cyclic siloxanes. Its usage dates back to 1948, when ring-opening polymerization with alkali metals was first patented. With its usage, octamethylcyclotetrasiloxane was converted into a high molecular weight (HMW) polymer by heating for two hours at T = 140 °C. The polymer contained 13-15% low molecular weight volatile products; the MW of the main fraction varied from 100,000 at 10% conversion to 1,000,000 at equilibrium [44]. However, the active end groups remaining in the system caused depolymerization at high temperatures; the polymer lost 99% of its mass when kept for 24 h at 250 °C. The formation of a stable product was achieved by neutralizing the product as quickly as possible. Already in early works, it was demonstrated that the rates of polymerization under the action of the hydroxide and the siloxanolate of the same metal are the same [45,46]. The activity of hydroxides and siloxanolates of alkali metals during bulk polymerization decreases in the series Cs > Rb > K > Na > Li [47]. Tetramethylammonium and tetrabutylphosphonium siloxanolates are comparable in activity to Cs compounds [48]. The great advantage of these compounds is the possibility of their deactivation and complete decomposition when the polymer is heated, which makes it possible to obtain a neutral thermostable polymer without using an end group blocking step. In the middle of the last century, many research works were devoted to the mechanism of interaction between the anionic center and the siloxane bond. It was shown that silanolates exist in the polymerization system as associates of various sizes. During polymerization under the action of Na, K, and Cs silanolates, the initiator is apparently present in the system in the active monomeric form and in the form of a low-activity binary silanolate complex formed both intermolecularly and within one macromolecule (Figure 3) [49]. The size of the associates significantly depends on the specific alkali metal atom and affects both the rate of polymerization and the occurrence of side processes [50]. An increase in the polarity of the polymerization medium, by changing the nature of the solvent or introducing small amounts of polar additives, leads to the destruction of inactive metal associates and the formation of solvated ion pairs, which significantly increases the rate of polymerization of cyclosiloxanes [51][52][53]. Butyllithium is the most popular catalyst for non-equilibrium anionic ROP [53,54]. Lithium compounds used as initiators, such as n-, sec- and tert-butyllithium, have a number of features that significantly distinguish their behavior in ring-opening anionic polymerization reactions, especially in the case of D3. In a non-polar medium, when D3 and BuLi react, the reaction products are exclusively the corresponding BuDLi and the remaining cyclosiloxane [55]. Apparently, due to the abnormally low activity of the lithium counterion and its high tendency to aggregation, the formation of inactive associates leads to the appearance of the first silanolate groups. The attack then proceeds only along the activated siloxane bonds adjacent to the silanolate groups until these are completely exhausted, and no further ring opening takes place. This feature formed the basis for a two-stage procedure for carrying out ROP of D3. The first stage is carried out in solution in a non-polar medium to convert the active groups into a silanolate form with equal activity; at the second stage, with the introduction of activating polar agents, the process continues until the initial cyclosiloxanes are exhausted. This approach makes it possible to obtain monomodal, narrowly dispersed polydimethylsiloxane. An interesting effect of solvation and separation of associates of active sites, leading to an increase in the polymerization rate, is observed in the polymerization of dimethylcyclosiloxanes with n > 5. The rate of polymerization of cyclosiloxanes with n = 7 and 8 is more than 100 times higher than the rate of polymerization of octamethylcyclotetrasiloxane. This acceleration is explained by the coordination of the metal by the oxygen atoms of the cyclosiloxane molecule, similarly to the interaction between a cation and a crown ether (Figure 4) [56]. However, this effect was not subsequently reproduced or otherwise confirmed. Apparently, the "crown" is instantly exhausted during the reaction process. A polymer with a polydispersity index close to 1 was obtained using strong complexing agents as polar additives, for example, crown ethers and cryptates of the appropriate size [53]. The complexes were formed in these cases with the lithium counterion of the initiator. This led to the complete destruction of the aggregates and a significant increase in the polymerization rate, in the almost complete absence of depolymerization processes. This technique was also successfully used in the polymerization of hexaethylcyclotrisiloxane, which is much less active than hexamethylcyclotrisiloxane [52].
The results of the study of AROP with a lithium counterion in the presence of traces of water are of practical importance. In this case, even when using a monofunctional, widely used organolithium initiator, the formation of a telechelic occurs due to the fast exchange reaction ~SiMe2-O-Li + HOH ↔ ~SiMe2-OH + LiOH and the subsequent opening of cyclosiloxane in situ by lithium hydroxide, with the formation of a macromolecule with two terminal functional groups. It was shown that there is rapid exchange between the active end groups ~SiMe2-OLi and ~SiMe2-OH, with the formation of an associate, and these ends are equally active in the process. The ratio of the amounts of water and lithium initiator in the system determines the rate of the process; an increase in the amount of water leads to an increase in the induction period. In this case, when using a monofunctional initiator, a monomodal telechelic is formed at [init] << [HOH]. In the case of [init] ~ [HOH], the polymerization product is bimodal, with the presence of monofunctional macromolecules [57]. The most important class of initiators are the tetraalkylammonium and tetraalkylphosphonium hydroxides and silanolates, which are used both in earlier publications and today [58][59][60]. The advantage of this series of catalysts is that, compared to other catalysts, they are quite easily removed from the polymer: they undergo thermal decomposition with the formation of volatile by-products and a neutral, thermally stable polymer. Another group of catalysts is the quaternary phosphazene bases. They catalyze the polymerization of D4 in exactly the same way and decompose at high temperatures in the same way. The decomposition products are non-toxic and do not react with the polymer [61][62][63]. Modern studies also mention new types of AROP catalysts. Jinfeng Shi et al. showed that an organic cyclic trimeric phosphazene base (CTPB) (Figure 5) is highly efficient for the ring-opening polymerization of octamethylcyclotetrasiloxane (D4) and the copolymerization of D4 with octaphenylcyclotetrasiloxane (P4) under mild conditions. The polymerization proceeds rapidly, and the obtained polymers have a rather high molecular weight (Mn up to 1,353,000 g/mol). By copolymerization of D4 and P4, it is easy to prepare copolysiloxanes with different contents of diphenylsiloxane (up to 64 mol%). No Q- or T-branching was observed for the copolysiloxanes in any case according to 29Si NMR analysis, indicating good ROP control with the CTPB/BnOH catalyst system. DSC analysis confirms that the copolysiloxanes are amorphous and that Tg increases with increasing diphenylsiloxane content in the polymer chain. TGA analysis shows that an increase in thermal stability is achieved by introducing the diphenylsiloxane unit [64]. Recently there have been reports on the use of N-heterocyclic carbenes or bicyclic guanidines as AROP initiators. For example, Marta Rodriguez et al. report that N-heterocyclic carbenes are effective ROP catalysts for the cyclotetrasiloxane D4 under mild conditions. Interestingly, a system using primary alcohols (MeOH and BnOH) controls the molecular weight of the polymer more effectively; the molecular weight of the silicone polymers can be controlled by simply changing the amount of the alcohol initiator. Due to the neutral conditions, the only by-products are a catalytic amount of moisture-sensitive NHC and volatile alcohol, which are easily removed (Figure 6) [65].
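The last point — setting the molecular weight through the monomer-to-initiator feed ratio — can be illustrated with a short back-of-the-envelope calculation. This is our own sketch of the generic living-ROP relation Mn ≈ ([M]0/[I]0) × conversion × M(monomer) + M(end groups), not a procedure from the cited paper; the benzyl alcohol end-group mass is an assumption made purely for illustration.

```python
# Hedged sketch: target Mn from the monomer/initiator feed ratio in a
# controlled/living ROP. Molar masses and the BnOH end group are assumptions.

M_D4 = 296.6    # g/mol, octamethylcyclotetrasiloxane (4 x Me2SiO units)
M_BNOH = 108.1  # g/mol, benzyl alcohol residue (assumed end group)

def target_mn(monomer_to_initiator_ratio: float, conversion: float,
              m_monomer: float = M_D4, m_end: float = M_BNOH) -> float:
    """Mn ~ ([M]0/[I]0) * conversion * M_monomer + M_end, one chain per initiator."""
    return monomer_to_initiator_ratio * conversion * m_monomer + m_end

for ratio in (50, 100, 500):
    print(f"[D4]/[BnOH] = {ratio:4d} -> Mn ~ {target_mn(ratio, conversion=0.90):,.0f} g/mol")
```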
Polymers 2022, 14, x FOR PEER REVIEW 7 of 37 the ring-opening polymerization (ROP) of octamethylcyclotetrasiloxane (D4) and the copolymerization of D4 with octaphenylcyclotetrasiloxane (P4) under mild conditions. The polymerization proceeds rapidly, and the obtained polymers have a rather high molecular weight (Mn up to 1,353,000 g/mol). For the copolymerization of D4 and P4, it is easy to prepare copolysiloxanes with different contents of diphenylsiloxane (up to 64 mol%). There was no observed Q-or T-branching for copolysiloxanes in all cases according to 29Si NMR analysis, indicating good ROP control with the current CTPB/BnOH catalyst system. DSC analysis confirms that the copolysiloxanes are amorphous and Tg increases with increasing diphenylsiloxane content in the polymer chain. TGA analysis shows that the increase in thermal stability is achieved by introducing a diphenylsiloxane unit [64]. Recently there have been reports on the use of N-heterocyclic carbenes or bicyclic guanidines as AROP initiators. For example, Marta Rodriguez et al. report that N-heterocyclic carbenes are effective ROP catalysts for cyclotetrasiloxane D4 under mild conditions. Interestingly, a system using primary alcohols (MeOH and BnOH) more effectively controls the molecular weight of the polymer; the molecular weight of silicone polymers can be controlled by simply changing the amount of alcohol initiator. Due to neutral conditions, the only by-products are a catalytic amount of moisture sensitive NHC and volatile alcohol, which are easily removed ( Figure 6) [65]. However, other types of catalysts are also mentioned in the current literature. For example, Bashim Yactine used potassium vinyldimethylsilanolate (KVDMS) and potassium trimethylsilanolate (KTMS) catalysts in his work to perform ROP. Anionic polymerization of D3 initiated by potassium vinylsilanolate proved to be very efficient for the synthesis of difunctional and monofunctional polydimethylsiloxanes, respectively [66]. Oka et al. used the urea anion as a catalyst for ring-opening polymerization (ROP) of a cyclic siloxane initiated from silanols, which allowed control of the molecular weight and fineness of the final product. ROP of D3 was initiated with a trifunctional silanol (I3) (Figure 7), to form star-shaped polysiloxanes. Trifunctional silanol I3 is relatively poorly soluble in THF; deprotonation of I3 with NaH in THF led to the formation of a precipitate. However, the addition of U(4CF3) to this mixture solubilized the ionic species to give a homogeneous compound. The D3 conversion reached 96% in 60 min, although the dispersity increased slightly. Urea anion catalysts are particularly useful for producing lower molecular weight polysiloxane stars. Moreover, the combination of silanols and urea anion catalysts overcomes the solubility problem arising from the polarity mismatch between the initiators, D3 and PDMS [67]. However, this process remains rather complicated. However, other types of catalysts are also mentioned in the current literature. For example, Bashim Yactine used potassium vinyldimethylsilanolate (KVDMS) and potassium trimethylsilanolate (KTMS) catalysts in his work to perform ROP. Anionic polymerization of D3 initiated by potassium vinylsilanolate proved to be very efficient for the synthesis of difunctional and monofunctional polydimethylsiloxanes, respectively [66]. Oka et al. 
An organocatalytic controlled/living ROP of cyclotrisiloxanes using water as the initiator, strong organic bases as catalysts, and chlorosilanes as end-blocking agents was developed by the Keita Fuchise group as a convenient and efficient method for the synthesis of linear polysiloxane telechelics and their copolymers (Figure 8). It was shown that guanidines B, namely guanidines containing the R-N=C(N)-NH-R' unit, showed the highest catalytic activity among the tested organic bases, depending on their Brønsted basicity. In particular, TMnPG, a monocyclic guanidine B, was the best in terms of its high catalytic activity and low number of side reactions [68]. The polymerization rate of AROP depends on the nature of the initiator, the polymerization medium, and the selected monomer. However, the key factor controlling the kinetics of ring-opening polymerization is the counterionic interaction of the silanolates, which leads to the formation of aggregates that are inactive in AROP [45].
The polymerization itself can be carried out in bulk, in solvents, or in emulsion. However, the solubility of the initiator in the reaction medium plays an important role in the reaction kinetics. Suitable solvents are liquid hydrocarbons. In some examples, THF is used as a solvent in combination with a solid counterion [69].

Influence of the Structure of the Initial Organocyclosiloxane on the AROP Process

The next parameter that determines the course of the AROP process is the structure of the starting organocyclosiloxane and the nature of the organic substituents at the silicon atom. The influence of the size of the initial cyclosiloxane is clearly manifested for the most widely used dimethylsiloxane cycles, hexamethylcyclotrisiloxane (D3) and octamethylcyclotetrasiloxane (D4). The difference between these compounds and, accordingly, the conditions and results of their polymerization processes is quite large. The similarity of the bond energies in dimethylcyclosiloxanes with n > 3, primarily in octamethylcyclotetrasiloxane, and in linear dimethylsiloxanes means that the process is driven exclusively by the thermodynamic (entropic) component. Ionic active centers attack and break the Si-O bonds both in the dimethylcyclosiloxane and in the resulting linear polymer, carrying out depolymerization in parallel with polymerization. The result is a mixture of linear and cyclic siloxanes with different ring sizes. Over time, an equilibrium is established in the system between the polymer and the mixture of cyclosiloxanes. The equilibrium position in this case does not depend on temperature, because ΔH ≈ 0, nor on the nature of the catalyst and solvent [70].
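The temperature independence claimed above follows directly from the van't Hoff relation: with ΔH ≈ 0, the equilibrium constant K = exp(ΔS/R − ΔH/RT) loses its temperature term altogether. Below is a minimal numerical sketch, with an assumed entropy value and an assumed exothermic case for contrast (anticipating the strained-ring D3 polymerization discussed next); neither number is a measured value from [70].

```python
# Minimal van't Hoff sketch: why the ring-chain equilibrium position for
# D4 is temperature-independent. K(T) = exp(-dG/RT) = exp(dS/R - dH/RT),
# so with dH ~ 0 the temperature term vanishes entirely.
# dS and the exothermic dH used for contrast are illustrative assumptions.

import math

R = 1.987e-3  # kcal/(mol*K)

def equilibrium_constant(dH_kcal: float, dS_kcal_per_K: float, T: float) -> float:
    """K = exp(-(dH - T*dS) / (R*T))."""
    return math.exp(-(dH_kcal - T * dS_kcal_per_K) / (R * T))

dS = 1.5e-3  # kcal/(mol*K), assumed small positive entropy of ring opening

for T in (300.0, 350.0, 400.0):
    k_d4 = equilibrium_constant(0.0, dS, T)    # dH ~ 0: K is flat in T
    k_d3 = equilibrium_constant(-3.5, dS, T)   # strained ring, exothermic
    print(f"T = {T:.0f} K: K(dH~0) = {k_d4:.2f}, K(dH=-3.5) = {k_d3:.1f}")
# Output: K(dH~0) stays constant with T, while the exothermic case falls
# as T rises, matching the statement that the D4 equilibrium does not
# shift with temperature and anticipating the strain-driven D3 case below.
```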
The polymerization of hexamethylcyclotrisiloxane, whose three-membered siloxane ring is significantly strained, proceeds quite differently: the thermal effect of polymerization is ΔH ≈ 3-4 kcal/mol. The energy gain upon ring opening of hexamethylcyclotrisiloxane makes it possible, under certain conditions, to carry out the polymerization in a nonequilibrium mode, which excludes depolymerization reactions and interchain exchange. The rate of polymerization of hexamethylcyclotrisiloxane under comparable conditions is almost 100 times higher than that of octamethylcyclotetrasiloxane. This method for the synthesis of polydimethylsiloxane makes it possible to obtain polymers with a controlled molecular weight, specified end groups, and a narrow molecular weight distribution (Mw/Mn = 1.0-1.2) [53,71,72], and the resulting dimethylsiloxane telechelics are widely used to obtain siloxane block copolymers and polymer networks. The use of appropriate blocking reagents makes it possible to obtain polymer products with specified end groups [72]. The disadvantage of this route to polydimethylsiloxane telechelics is the need to prepare hexamethylcyclotrisiloxane, which is a more expensive monomer than the unstrained cyclosiloxane, together with stringent requirements on the purity of the system and the process temperature. Thus, from a commercial point of view, the polymerization of octamethylcyclotetrasiloxane is preferred [73,74].
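The dispersities quoted above (Mw/Mn = 1.0-1.2) are close to what an ideal living polymerization predicts: chain lengths then follow a Poisson distribution, for which Đ ≈ 1 + 1/DPn. The sketch below illustrates this generic textbook relation; it is not a fit to the data of [53,71,72].

```python
# Sketch of the ideal living-polymerization (Poisson) dispersity:
# D = Mw/Mn = 1 + DPn/(DPn + 1)**2, i.e. ~ 1 + 1/DPn for large DPn.
# Generic textbook estimate, not data from the cited works.

def poisson_dispersity(dp_n: float) -> float:
    """Dispersity of a Poisson chain-length distribution."""
    return 1.0 + dp_n / (dp_n + 1.0) ** 2

for dp_n in (5, 20, 100, 500):
    print(f"DPn = {dp_n:>3}: D ~ {poisson_dispersity(dp_n):.3f}")
# Even at modest chain lengths the dispersity stays below ~1.2; the
# measured Mw/Mn = 1.0-1.2 for nonequilibrium D3 polymerization is thus
# consistent with near-ideal living behavior and little redistribution.
```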
The polymerization of diphenylcyclosiloxanes, both 6- and 8-membered, differs completely from the other cases. Polydiphenylsiloxanes (PDPS) have very high thermal and radiation resistance, a very high melting point (about 265 °C), and mesophase properties [75], which makes them of great interest; however, the preparation of telechelics, and of polymers of this nature in general, remained an unsolved problem for a very long time. The extremely high tendency of the diphenylsiloxane chain toward cyclization, caused by the steric factor, apparently plays a decisive role here. In all polycondensation and polymerization processes, the reaction product was either tetraphenyldisiloxanediol or, further, octaphenylcyclotetrasiloxane [76]. Under anionic conditions, the polymerization and depolymerization processes proceed simultaneously and at the same rate and lead to the formation of low molecular weight cyclic products. Moreover, only in this case do the differences in strain between the initial 6- and 8-membered diphenylcyclosiloxanes play no significant role: apparently, the thermodynamic unfavourability of the polymer chain leads to its absence among the products of the equilibrium process. A nonequilibrium process without significant depolymerization proved possible only under the conditions of solid-state polymerization of hexaphenylcyclotrisiloxane at temperatures close to, but not reaching, the melting point of the cycle. It was shown that the polymerization proceeds under heterogeneous conditions; the reaction proceeds inward from the surface of the HPTS crystals and leads to the formation of a crystalline polymer, with polymerization and crystallization proceeding sequentially [77]. In this case, the crystallinity of the polymerized PDPS samples is inversely proportional to their specific viscosity [78].

From the point of view of the arrangement of the functional groups, the literature considers two types of siloxane telechelics obtained by ROP: siloxane oligomers with functional groups directly bonded to the terminal silicon atoms (Si-X) and siloxane oligomers with an organofunctional end (Si-R-X). Moreover, in addition to functional ends, functional groups can also be attached to the backbone of the polysiloxane (Figure 9). This can be achieved by the aforementioned ROP of cyclotetrasiloxanes as well as by catalytic rearrangement reactions. However, instead of the usual D4 or D3 that give the known PDMS, the methyl group(s) attached to the Si atoms of the cyclic siloxanes can be replaced with different functional groups before polymerization [79,80]. The above processes proceed according to one of three basic reaction mechanisms: cationic, anionic, or coordination insertion [81].

The next factor that has a significant effect on the course of polymerization is the nature of the organic groups at the silicon atoms of the cyclosiloxanes. The introduction of electron-donating substituents (hydrocarbon groups longer than methyl) reduces the rate of anionic polymerization of the cyclosiloxane, while electron-withdrawing substituents (alkenyl, aromatic/phenyl, 3,3,3-trifluoropropyl, or cyanoalkyl groups) increase the rate of polymerization [82-84]. At the same time, electron-withdrawing substituents at the silicon atom lead, during anionic polymerization, to the formation of a siloxanolate anion with lower nucleophilic activity, which somewhat offsets the increased activity of the rings. In addition, the steric influence of bulky radicals leads to low reactivity of the corresponding cyclosiloxanes [85].

Preparation of Monochelic PDMS by AROP

Monofunctional PDMS, also referred to as "macromonomers", are usually synthesized by living anionic polymerization of hexamethylcyclotrisiloxane.
These experiments were first carried out in the 1960s by Bostick [86] and Lee [53] and demonstrated the possibility of obtaining monofunctional polydimethylsiloxanes with controlled molecular weight and narrow molecular weight distribution by anionic polymerization of hexamethylcyclotrisiloxane (D3) using lithium silanolate salts (R-Li+) or organolithium compounds as initiators in the presence of promoters such as THF or diglyme (Figure 10). Theoretically, anionic polymerization can also be carried out using D4; however, the reaction then tends toward equilibration even at low conversions. Accordingly, the resulting polymers have a relatively broad molecular weight distribution and contain appreciable amounts of macrocyclic oligomers [53]. By using the cyclic trimer as the starting material, siloxane redistribution processes other than the desired ring-opening chain propagation can be almost completely eliminated. This is mainly due to the ring strain in the D3 monomer, which significantly increases its reactivity towards anionic initiators. Selectivity is further enhanced by the use of initiators in which lithium is the counterion: lithium counterions are preferable to those of other alkali metals because of the lower catalytic activity of lithium in siloxane redistribution reactions [87]. Monofunctional oligomers are characterized by low molecular weight (500-20,000 g·mol−1). Vysochinskaya Y.S. et al. [92] synthesized monofunctional vinyl PDMS from D3 in the presence of BuLi as a catalyst and dimethylvinylchlorosilane as a blocking agent. The resulting compounds were used to prepare star-shaped polymers by hydrosilylation with cyclic cores bearing Si-H groups in the presence of Karstedt's catalyst. Kawakami Y. et al. [93] used the same method to obtain monofunctional macromonomers of the styrene and methacrylate type, which are the starting compounds for graft polymers (Figure 11).
Figure 11. Obtaining monofunctional macromonomers of the styrene and methacrylate type by the AROP mechanism.

Martin Fauquignon et al. synthesized a series of PDMS-b-PEO diblock copolymers of various molar masses and hydrophilic mass fractions. Monofunctional 3-chloropropyl-PDMS were synthesized by anionic ring-opening polymerization of the hexamethylcyclotrisiloxane (D3) monomer in anhydrous THF at 80 °C (Figure 12). Butyllithium was used as the initiator, and the chain end was functionalized using chloro-(3-chloropropyl)dimethylsilane as a termination agent [94]. The subsequent conversion of the chloropropyl group to the azidopropyl group made it possible to obtain poly(dimethylsiloxane)-block-poly(ethylene oxide) (PDMS-b-PEO) diblock copolymers by click chemistry. These polymers are applicable in the developing field of hybrid polymer/lipid vesicles. According to the same scheme, monofunctional azide PDMS were obtained in [95,96] for the subsequent preparation of copolymers.
The functional end group can be introduced either by the organic segment of the initiator or by the chlorosilane molecule. Functionalized initiation has been reviewed by Casey L. [97]. A number of poly(dimethylsiloxane) homopolymers in the molar mass range from 2400 to 15,000 have been synthesized using 3-[(N-benzyl-N-methyl)amino]-1-propyllithium (Figure 13). The protecting group on the PDMS was quantitatively removed by hydrogenolysis to give a secondary amine (Figure 14). A recent review by Goff J., Sulaiman S., and Arkles B. [98] is dedicated to the production of monofunctional PDMS and their applications. However, the use of monofunctional PDMS in copolymerization reactions is rather limited. It is much more common to obtain difunctional terminal (telechelic) silicone oligomers, which are the starting compounds for a wide range of silicone copolymers.
Obtaining Functional Telechelics by AROP

For the further use of organosiloxane telechelics, the nature of the terminal functional groups is an important parameter. Since the end of the last century, the most important PDMS telechelics have been obtained by the AROP mechanism: vinyl-functional [99,100], amine-functional [101-103], etc. Obtaining new functional PDMS telechelics remains relevant today. Li X. et al. [104] obtained aminopropyl-terminated polydimethylsiloxane in the presence of a tetramethylammonium hydroxide catalyst by the AROP mechanism in order to subsequently obtain silicone elastomers with good mechanical properties and high self-healing efficiency through a simple Michael amino-ene addition reaction (Figure 15). A basic TMAH catalyst is added to the system to provide a dynamically crosslinked silicone elastomer network. Tensile tests show that the silicone elastomer has very good mechanical properties for an unfilled silicone composition, with a tensile strength of 1.08 ± 0.06 MPa and an elongation at break of 206.10 ± 9.55%. After holding at 105 °C for 24 h, "broken" specimens recover 91% of their original tensile strength, and specimens cut into multiple pieces regain their original shape. The preparation of aminopropyl-terminated polydimethylsiloxane by the anionic ROP mechanism is also reported in the work of V.V. Gorodov [105]. The resulting oligomeric amine-containing PDMS was subsequently treated with itaconic acid in o-xylene solution in the presence of anhydrous magnesium sulfate as a dehydrating agent. Thus, telechelic oligodimethylsiloxanes with 4-carboxypyrrolidone fragments were synthesized and their thermal and rheological properties were studied. These polymers were found to be prone to the formation of smectic-type mesophases.
The introduction of carboxypyrrolidone groups into the siloxane chain significantly increases the viscosity and the activation energy of viscous flow of the oligodimethylsiloxanes.

Figure 15. Illustration of the synthetic strategy and structure of the silicone elastomer (adapted with permission from Ref. [104]. 2018, Li, X).

The group of Zuo Y. [106] obtained vinyl-functional PDMS telechelics, as well as vinyl-functional copolymers (Figure 16), by AROP of D4 in the presence of a tetramethylammonium hydroxide catalyst. These polymers were then functionalized with N-acetyl-L-cysteine using thiol-ene chemistry, subsequently forming new transparent, luminescent silicone elastomers. Luminescence centers were formed by complexation of lanthanide ions into the functionalized polysiloxane. It should be noted that modern studies on the ROP mechanism make it possible to obtain telechelics with new functional groups.
For example, F.V. Drozdov et al. reported the production of a number of unusual functional PDMS telechelics by ring-opening polymerization of D4, using various functional trisiloxanes as end-blockers and TfOH, Purolite, or tetramethylammonium silanolate (TMAS) as the catalyst (Figure 17). If acid-resistant functional groups were present in the monomer, TfOH or Purolite was used as an acid catalyst and bulk polymerization was carried out. Otherwise, a basic catalyst was used and the reaction was carried out in toluene. For the polymerization of PDMS(St-CH2NHBoc)2, a number of catalysts were tested: TfOH, Purolite, tetramethylammonium silanolate (TMAS), and KOH. In the case of the acidic catalysts (TfOH and Purolite), only an insignificant part of the starting monomer participated in the polymerization even after 48 h at 60 °C. Using TMAS, high conversion was achieved and a polymer with a well-defined molecular weight distribution was obtained. Thus, PDMS were obtained in the molecular weight range from 1500 to 30,000 (Figure 18). The synthesis of not only symmetrical oligosiloxanes but also asymmetric hydridosiloxane-containing analogs offers potential precursors for functional siloxane derivatives prepared by hydrosilylation. Thus, symmetrical or asymmetric telechelic oligo- or polydimethylsiloxanes with different functional groups are suitable as AA-type blocks for the further synthesis of block copolymers [107]. Moving on to nonequilibrium processes, it is worth noting the work of Fei H.F. et al. [108], who studied the anionic ring-opening polymerization of 1,3,5-tris(trifluoropropylmethyl)cyclotrisiloxane in bulk using dilithium diphenylsilanediolate as the initiator (I) and N,N-dimethylformamide (DMF), bis(2-methoxyethyl) ether (diglyme), or 1,2-dimethoxyethane (DME) as promoters (P) (Figure 19). A detailed study of the polymerization kinetics at promoter-to-initiator molar ratios ((P)/(I)) of 2.0, 4.0, and 6.0 showed that the yield of linear polymers was highest at (P)/(I) = 2.0 for all promoters, among which DME was the most effective in suppressing side reactions. The reaction promoted by DME had a very wide "cutoff window" with the highest linear polymer yield and a very narrow molar mass distribution. PMTFPS with end groups such as vinyl, hydroxyl, hydrogen, and chloromethyl were obtained and characterized by 1H NMR, 29Si NMR, and FT-IR.
Vinyl-terminated polymers showed higher thermal stability than hydroxyl-terminated polymers under a nitrogen atmosphere.

Figure 19. Synthetic routes of PMTFPS with different end groups.

In a review by Köhler T. [1], the production of OH-terminated polydimethylsiloxanes by the ROP mechanism is considered in detail. US Pat. No. 5475077 describes a batch process for the synthesis of certain OH-terminated silicones by AROP. To do this, a mixture of cyclic siloxanes (mainly D3, D4, and D5) is treated with an aqueous solution of KOH at 170 °C, and the reaction mixture is purged with steam to remove air and other gases. After the mixture has been kept for some time at this temperature and under a certain water vapor pressure, ethylene chlorohydrin is added to neutralize the KOH, which precipitates from the mixture as a salt. The reaction product is then stripped of volatiles, giving silanol-terminated PDMS. The viscosity of the resulting polymer, which depends on the molecular weight and polydispersity, is controlled by the water vapor pressure in the vessel: the higher the water vapor pressure, the lower the viscosity of the product.
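The inverse relation between water vapor pressure and product viscosity can be rationalized by simple end-group bookkeeping: at equilibrium, every water molecule taken up by the melt caps one chain with two silanol ends, so the number of chains tracks the water content. Below is a minimal sketch with illustrative numbers, not values taken from the patent.

```python
# Minimal sketch of why water vapor pressure sets the viscosity in the
# KOH-catalyzed equilibration: each water molecule ends up as one pair
# of Si-OH chain ends, i.e., one chain, so Mn ~ m_siloxane / n_water.
# The water loadings below and the viscosity scaling are illustrative.

M_WATER = 18.02  # g/mol

def mn_from_water(mass_siloxane_g: float, mass_water_g: float) -> float:
    """Number-average molar mass when water is the only end-capper."""
    n_chains = mass_water_g / M_WATER          # one chain per H2O
    return mass_siloxane_g / n_chains

for m_w in (0.05, 0.10, 0.20):  # grams of water per 100 g of siloxane
    print(f"{m_w:.2f} g H2O per 100 g: Mn ~ {mn_from_water(100.0, m_w):,.0f} g/mol")
# Doubling the dissolved water (higher vapor pressure) halves Mn; since
# melt viscosity rises steeply with molar mass (roughly eta ~ M**3.4
# above the entanglement limit), this is a sensitive process control knob.
```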
In a recent study by Sato K. et al. [109], α,ω-chain-end-functionalized PDMS with bromomethyl groups were obtained, also by the ROP mechanism, from D3 in the presence of water and a guanidine-series catalyst, using bromomethyldimethylchlorosilane as the blocking agent (Figure 20). The advantages of this approach include the narrow dispersity of the obtained compounds and a fairly wide range of molecular weights; the disadvantage is a rather laborious synthesis scheme. The resulting compounds were subsequently converted into azide PDMS for azide-alkyne cycloaddition reactions, model variants of which are also presented in the work. In another work by the same group of Japanese scientists [110], functional linear polysiloxanes bearing vinyl, 3-chloropropyl, and allyl groups (Figure 21) were obtained by organocatalytic controlled/living ring-opening polymerization (ROP) of monofunctional cyclotrisiloxanes using water or silanols as initiators, guanidines as catalysts, and chlorosilanes as blocking groups. The AROP method made it possible to obtain polymers with a controlled number-average molar mass (Mn), a narrow molar mass distribution, the desired end structures, and a good distribution of side-chain functional groups. It can be expected that this convenient new method for the synthesis of linear polysiloxanes with well-defined functionalized side chains will allow the preparation of organosilicon compounds and hybrid materials with a variety of polymer architectures, such as block copolymers, star polymers, comb polymers, and surface-modified materials, ultimately contributing to the development of new advanced materials with improved properties. The vinyl, 3-chloropropyl, and allyl groups on the side chains can be further converted to other structures through a variety of reactions, including hydrosilylation, thiol-ene addition, oxidation, and nucleophilic substitution.
The same group of scientists carried out the synthesis of linear polysiloxanes functionalized with disubstituted and monosubstituted alkynyl groups, also by controlled/living ring-opening polymerization of cyclotrisiloxanes, using water or silanols as initiators, guanidines as catalysts, and alkynyl(amino)silanes as end-blocking agents (Figure 22) [111]. Two alkynyl(amino)silanes, (diethylamino)dimethyl(phenylethynyl)silane and (diethylamino)ethynyldimethylsilane, were synthesized by alkynylation of chloro(diethylamino)dimethylsilane. In addition, the resulting alkynyl-terminated polysiloxanes were subjected to non-catalytic and catalytic Huisgen reactions with organoazide compounds. The resulting polysiloxanes can be used in other reactions involving alkynyl groups, especially alkynylsilyl groups. These new polysiloxanes provide great opportunities in molecular design and for obtaining polymer structures and cross-linked materials, as well as hybrid materials with controlled molecular or network structure, which in turn should lead to the development of new advanced materials with improved properties and/or unprecedented functionalities.

Figure 22. Precise synthesis of alkynylsilyl-terminated polysiloxanes by ROP of cyclotrisiloxanes using water or a silanol as an initiator, guanidines as catalysts, and alkynyl(amino)silanes as end-capping agents (adapted with permission from Ref. [111]. 2021, Fuchise, K).

Thus, anionic ROP has been a universal method for the preparation of a wide range of functional PDMS telechelics for many years.
At present, the reaction remains relevant, and new types of functional groups (for example, azide and acetylene) can now be introduced into the structure of PDMS, which, of course, expands the scope of such compounds.

Preparation of Organosiloxane Telechelics by Cationic ROP

Cationic ROP is also of interest for the preparation of functional PDMS. The advantages of this process are that it can be carried out at a relatively low temperature, the catalyst can be easily deactivated, and the process can also be used to synthesize polysiloxanes bearing base-sensitive substituents such as Si-H or Si-(CH2)3-SH [112].

CROP Initiators

The first high molecular weight siloxane polymer was obtained by cationic ring-opening polymerization of D4 in the presence of sulfuric acid. Polymerization in the presence of sulfuric acid proceeds in several stages. The acid is usually introduced in an amount of 1-3 wt%. Polymerization lasts from two to eight hours at room temperature and leads to the formation of low molecular weight polymers; therefore, at the end of the polymerization, a small amount of water is added to the system for subsequent growth of the molecular weight. However, the polymerization mechanism is complex and is still a subject of debate in the literature because some unusual kinetic patterns have been observed: the reaction shows a negative order in monomer concentration and a negative activation energy [112,113]. The role of water in the polymerization process is also a matter of debate, as it can act as both a promoter and an inhibitor in CROP [114]. The mechanism of polymerization using trifluoromethanesulfonic acid as the initiator has been studied in more depth [115,116]. It is generally accepted that the Si-O bond is cleaved by the strong protonic acid during initiation (Figure 23); the corresponding silanol and silyl ester are formed, which start chain growth.
Other catalyst systems have also been reported in the literature, such as HClO4, aryl- and alkylsulfonic acids, heterogeneous catalysts such as ion-exchange resins, acid-treated graphite and acid-treated clays, and some Lewis acids such as SnCl4 [38,117-120]. Polymerization in the presence of Lewis acids is a matter of controversy: strong protonic acids such as HSnCl5, the reaction product of a Lewis acid with water or other protic impurities, have also been suggested as the actual catalysts [121]. However, it has been reported that some non-protic systems, such as ethylboron sesquitriflate [122] and antimony chloride vapor/acid chloride pairs [123], are capable of initiating the polymerization of cyclotrisiloxanes. These systems have not become widespread, owing either to insufficient process control or to their high cost. Other unusual types of catalysts are also mentioned in the literature. V.M. Djinovic et al. synthesized a series of α,ω-dicarboxypropyloligodimethylsiloxanes with a given molecular weight from octamethylcyclotetrasiloxane and 1,3-bis(3-carboxypropyl)tetramethyldisiloxane (BCPTMDS) using a macroporous cation-exchange resin as an acid catalyst. The expected molecular weights in the range from 600 to 3500 were achieved with acceptable accuracy; however, the authors did not provide data confirming the effectiveness of this catalyst at higher molecular weights [124]. In Yactine B. [66], acid-treated bentonite (sold under the trade name TONSIL1) was chosen as the CROP catalyst because of its ability to catalyze the polymerization of cyclosiloxanes at a relatively low temperature (typically 70 °C) and because it is easily removed by filtration. The novelty here is the comparison of conventional ROP of D4 (and sometimes D4H) using a conventional terminating agent with the redistribution reactions, starting from telechelic PDMS and D4, that are commonly practiced in industry. Such methods make it possible to obtain Si-H or Si-vinyl terminated telechelic homopolymers and copolymers [125,126]. Javier Vallejo-Montesinos and colleagues have used synthetic and natural aluminosilicates as inorganic acid catalysts for the ring-opening polymerization of cyclosiloxanes. In particular, an aluminosilicate and bentonite were used as catalysts for the ring opening of D3 and D4. Such catalysts proved to be a good choice for heterogeneous cationic ROP of cyclosiloxanes (Figure 24). The increase in the number of acid sites upon acid treatment led to dealumination of the materials, which made the polymerization of cyclosiloxanes possible. The structural change caused by the loss of aluminum created the chemical conditions needed to facilitate the polymerization process. The catalysts were obtained by a relatively simple and economical procedure and were easily separated from the reaction medium. However, the product yields were extremely low [127].
In recent years, biocatalysis, that is, the use of natural catalysts such as clays in organic synthesis, has become increasingly popular [128,129]. Djamal Eddine Kherroub and co-authors have developed and implemented an alternative method for the synthesis of silicone polymers. This method uses Maghnite-H+, an aluminosilicate ecocatalyst, to initiate the polymerization of pentavinylpentamethylcyclopentasiloxane (V5D5) (Figure 25). A total of 0.1 g of Maghnite-H+ was heated under vacuum with mechanical agitation for 30 min before use. The polymerization was carried out in bulk: the dried Maghnite-H+ was added to a flask containing 5 g of V5D5, and the flask was placed in an oil bath at 60 °C under reflux with stirring. After 6 h, the reaction was stopped by deactivating the Maghnite-H+ with cold water added to the reaction mixture. However, these additional steps make the process more complex and less attractive than the use of standard catalysts [130]. Today, PDMS telechelics with "standard" functional groups are obtained for the purpose of their further modification [10].
Thus, Gorodov et al. obtained a series of hydride-containing polymers and copolymers by cationic polymerization of octamethylcyclotetrasiloxane with 1,1,3,3-tetramethyldisiloxane, or with polymethylhydrosiloxane and hexamethyldisiloxane. The process was carried out at various reagent ratios in the presence of a sulfonic acid resin for 8-10 h at 70 °C. Subsequently, siloxane copolymers containing fragments of undecylenic acid and its esters were synthesized by adding trimethylsilyl or tert-butyl undecenoate to the silicon hydride groups of the polydimethylmethylhydrosiloxanes by hydrosilylation [139]. In addition, Gorodov V.V. et al. [140] reviewed the preparation of hydride-containing PDMS and their subsequent functionalization to carboxyl-containing PDMS. Following the same principle with the same catalyst, polydimethylsiloxanes with pendant hydrosilane functions were obtained by Drozdov F.V. et al. [141]. Based on polydimethylsiloxanes (PDMS) with terminal dimethylhydrosilyl groups or with methylhydrosilyl groups distributed along the polymer chain, and on methyl esters of boronic or phenylboronic acid, cross-linked polyborosiloxanes (PBS) were obtained by the Piers-Rubinsztajn reaction. Depending on the number and location of the methylhydrosilyl groups in the initial PDMS, as well as on the functionality of the boron component, PBS with different macromolecular structures and crosslinking densities were obtained. In the work of Tasic et al., a cation-exchange resin based on macroporous sulfonated cross-linked polystyrene was used as a heterogeneous catalyst for the synthesis of PDMS telechelics with trimethyl-, hydrido-, vinyl-, and carboxypropyl end groups [142].
Benjamin T. Cheesman et al. prepared acrylate PDMS telechelics by the ROP mechanism in the presence of a trifluoromethanesulfonic acid catalyst. 1,3-Bis(methacryl)tetramethyldisiloxane, obtained by hydrosilylation of allyl methacrylate, was used as a blocking agent (Figure 26) [143]. The methacrylate-terminated PDMS macromonomers synthesized in this study have been successfully used to form films by UV-induced crosslinking, and studies of the properties of the crosslinked films are the subject of future publications.

In the work of Drozdov F.V. [144], the preparation of limonene-functional PDMS by the mechanism of cationic ROP from D4 and a difunctional siloxane derivative of limonene in the presence of a Purolite ST-175 catalyst was considered (Figure 27). In this work, a series of prepolymers based on difunctional siloxane derivatives of limonene and dithiols with different methylene spacer lengths was obtained by a photoinitiated thiol polyaddition reaction. It has been shown that an increase in both the siloxane and methylene moieties in the starting monomers results in higher-molecular-weight products (4000-15,000 Da).

Zhang C. et al. [145] report the preparation of hydroxyl-functional PDMS from D4 with water in the presence of a solid superacid (Figure 28). An effective method for improving the thermal insulation and stability of polysiloxane foam (SIF) by adjusting the chain length of the hydroxyl-terminated polydimethylsiloxane (OH-PDMS) has been described. A series of SIFs were obtained through foaming and crosslinking processes with different crosslinking densities.
Catalytic Rearrangement Reactions for the Preparation of PDMS Copolymers

PDMS with chain-distributed functional moieties play an important role in the chemistry of silicones. These functionalities are not only capable of providing new properties to materials, but are also necessary for further transformations and for obtaining new polymers with a given structure and a required set of characteristics. Among the many chemical processes used in silicone chemistry, there is one universal process that provides a variety of silicones and gives them unique adaptability to the most contradictory consumer requirements. This process is called equilibration or, as it is often called in the Russian literature, catalytic rearrangement. It can be used to obtain telechelics homogeneous in molecular weight and structure, and it makes it possible to obtain random copolymers whose composition corresponds to the ratio of the initial reagents, in contrast to the complex mixtures of co-hydrolysis products of chlorosilanes, which differ markedly in their reactivity. In addition, it should be noted that the substituents at the silicon atom strongly affect the rate of polymerization of cyclosiloxanes; as a result, it is difficult to obtain copolymers by polymerizing a mixture of cyclosiloxanes with different groups at the silicon atom. This problem can be solved using mixed dimethylcyclotetrasiloxanes, the synthesis of which is considered in the work of Talalaeva E.V. et al. [146].

First of all, it is necessary to note the preparation of functional homopolymers by the catalytic rearrangement reaction. Temnikov M.N. et al. [135] obtained vinyl-functional homopolymers from 2,4,6,8-tetramethyl-2,4,6,8-tetravinylcyclotetrasiloxane in the presence of a sulfonic cation exchanger catalyst by the cationic catalytic rearrangement mechanism (Figure 29).
The obtained PDMS were functionalized according to the thiol-ene mechanism for the subsequent preparation of aerogels in scCO2.

In a study by Cao J. et al. [147], functional mercaptopropyl PDMS was obtained by the mechanism of catalytic rearrangement of (mercaptopropyl)methyldimethoxysilane hydrolyzate in the presence of acid clay (Figure 30). Through the thiol-ene reaction of the obtained PDMS with 2,5,8,11,14,17,20,23-octaoxahexacos-25-ene, a water-soluble comb-shaped polysiloxane was synthesized with different ratios of polyester and mercaptopropyl groups as side chains.

Mukbaniani O. [148] reported the preparation of epoxy-functional homopolymers from D4 with functional epoxy groups in the presence of the anionic initiator KOH in dry toluene (Figure 31). Further, the reaction of the epoxy-containing compounds with primary and secondary amines was carried out, and the corresponding compounds containing aminohydroxyl groups were obtained. Similarly, a linear methylsiloxane oligomer with a regular arrangement of propyl acetoacetate groups in the side chain has been obtained [149]. In this work, the question of the opening of the epoxy groups remains open; however, the authors confirm the composition and structure of the obtained compounds by elemental analysis, FTIR, and 1H, 13C, and 29Si NMR spectra. In addition, some properties of the linear epoxides have been investigated. According to the given data, the percentage content of epoxy groups in the obtained oligomers is close to the calculated value.
A large number of studies are also devoted to the preparation of copolymers by the mechanism of catalytic rearrangement. Sheima Y. et al. [150] considered the preparation of vinyl-functional PDMS by the mechanism of anionic catalytic rearrangement. The vinyl groups were then converted into polar groups of various natures using an efficient one-step post-polymerization modification by thiol-ene addition (Figure 32). The obtained set of materials was used to establish relationships between structure and properties, the design of the side groups, and the thermal and dielectric properties. Polymers with a high dielectric constant are promising for creating next-generation energy converters and storage devices with increased energy density.
Besides the usual functionalities of PDMS, which are considered everywhere, the introduction of new functional groups cannot be ignored. Sergey A. Milenin et al. used an anionic ROP mechanism to synthesize copolymers based on polydimethylsiloxanes and aminophosphonates from D4, a preformed cyclic siloxane with aminophosphonate functions, and hexamethyldisiloxane in a ratio of 6:0.25:1; the reaction was carried out at 100 °C in the presence of 0.05 wt.% crystalline KOH (Figure 33) [151]. The synthesized cyclosiloxane with phosphorus-containing substituents at the silicon atoms and the product of its copolymerization with octamethylcyclotetrasiloxane are promising modifiers for formulations based on polydimethylsiloxanes. Such additives affect the rheology, suppress crystallization, and promote the formation of a cross-linked structure during the thermal-oxidative degradation of polydimethylsiloxanes.
Perju E. [152] presents the synthesis of three new cyclotetrasiloxane monomers modified with either nitroaniline (NA) or the Disperse Red 1 (DR1) group and their ring-opening polymerization in the presence of tetramethylammonium hydroxide (Figure 34). Because of their high dielectric constant of 17.3 at room temperature and rather low Tg, the NA-modified polymers are attractive as active dielectric materials in actuating elements, capacitors, and flexible electronics [153].

Kim E.E. et al. [154] prepared hydride-containing copolymers from D4 and D4H in the presence of an Amberlyst 15 catalyst (Figure 35). The resulting copolymers were used to obtain polysiloxanes with distributed dibenzoylmethane groups, on the basis of which a number of new cross-linked polymers were successfully obtained by the interaction of the polyligand PDMS with grafted β-diketonate fragments and nickel(II) acetate.

Morariu S. et al. [80] obtained poly(dimethylsiloxane-co-diphenylsiloxane) by ring-opening anionic copolymerization of D4 and Ph4 using tetramethylammonium hydroxide (TMAH) as a catalyst and DMF, a Lewis base, as a promoter. Basic catalysts are recommended for opening the octaphenylcyclotetrasiloxane ring. The transient catalyst TMAH was chosen in this work because it can be easily removed at the end of the reaction by thermal decomposition into volatile compounds (trimethylamine and methanol). DMF was added to increase the reaction rate by complexing the counterion and preventing ion pair formation. Unfortunately, the work does not pay sufficient attention to confirming the structure of the obtained copolymer.

One cannot leave aside a number of works where PDMS copolymers with several types of functionalities are obtained by the catalytic rearrangement mechanism. Among these works, attention is drawn to the work of Bodkhe R.B.
[155], where 3-aminopropyl-terminated polydimethylvinylsiloxane (APT-PDMVS) was obtained in the presence of a benzyltrimethylammonium hydroxide catalyst (Figure 36). D4Vi is more reactive toward base than D4, and therefore, to study the reproducibility of the reaction for a given molecular weight, APT-PDMVS equilibration using end blockers of different chain lengths was studied. To this end, a series of six different APT-PDMVS polymers was synthesized using each end blocker, with target number-average molecular weights (Mn) of 5000, 10,000, 15,000, 20,000, 30,000, and 40,000 g/mol. The mole ratio of D4 to D4Vi was kept constant at 1:1. A number of aminopropyl-terminated PDMS polymers bearing orthogonal carboxylic acid groups have been successfully incorporated into a polyurethane coating system. Further fine-tuning of the composition of the acid-functional siloxane polymer and of the amount of siloxane included in the polyurethane coating can lead to improved performance of the composite coating.
A number of works are devoted to the synthesis of copolymers with phenyl and vinyl substituents. One of them is the study by Sheima Y. [150], in which homopolymers of poly(methylvinylsiloxane) (PV) and copolymers of poly(dimethyl-co-methylvinyl)siloxanes (PMxVy) with ratios x/y = 2:8, 4:6, 6:4, and 8:2 were obtained by ring-opening anionic polymerization of the corresponding D4 and D4Vi monomers in the presence of hexamethyldisiloxane as the end-blocking agent. The vinyl groups were then converted into polar groups of various natures using an efficient one-step post-polymerization modification by the thiol-ene addition mechanism. It is worth noting that the authors assemble a complex equilibrium system from simple methods while maintaining a high level of control.

Guo M. et al. [156] investigated the ring-opening copolymerization (ROCP) of benzylsulfonyl macroheterocyclosiloxane (BSM) and five different cyclosiloxanes (Figure 37). Here, a general approach was developed for the synthesis of benzylsulfonyl-containing silicone copolymers with various substituents, including methyl, vinyl, ethyl, and phenyl. A range of copolymers with variable BSM incorporation (6% to 82%) was made by varying the comonomer ratio and using KOH as a catalyst in a mixture of dimethylformamide and toluene as solvents. The resulting copolymers exhibit different composition-dependent properties and unique viscoelasticity. Notably, the surface and fluorescent characteristics, as well as the glass transition temperatures of the copolymers, can be tailored by changing the amount of BSM. Unlike typical sulfone-containing polymers such as poly(olefin sulfones), the resulting copolymers exhibited excellent thermal and hydrolytic stability. The universal strategy developed in this study provides a platform for the development of innovative silicone copolymers with controlled structure and performance.
Fei H.F. et al. [157] obtained trifluoropropylmethylsiloxane-phenylmethylsiloxane gradient copolysiloxanes by copolymerization of 1,3,5-tris(trifluoropropylmethyl)cyclotrisiloxane (D3F) and phenylmethylcyclotrisiloxane (D3Ph) (Figure 38). An analysis of the reactivity ratios showed that the reactivity of D3F in anionic ROP is higher than that of D3Ph; however, D3F showed lower reactivity than D3Ph in cationic ROP. Gradient copolymers of the AB and BAB types were obtained owing to this difference in monomer reactivity. The microstructure of the copolymers was characterized by 29Si NMR spectroscopy, gel permeation chromatography, and differential scanning calorimetry.

Figure 38. Possible mechanism of the copolymerization of D3F and D3Ph initiated by CF3SO3H (adapted with permission from Ref. [157], 2016, Fei H.F.).

The work of Indulekha K. [158] shows the synthesis of trimethylsilyl-terminated poly(dimethyl-co-methylhydrogen-co-diphenyl)siloxane (TMS-PDMHS). TMS-PDMHS was synthesized by the cationic ring-opening polymerization of the cyclic siloxanes D4 and D4H in the presence of DPDMS, using H2SO4 as the catalyst (Figure 39).
The structure of the obtained copolymers was confirmed by 1H and 29Si NMR data. The resulting compound was further used as a crosslinking agent.

Isaacman M.J. et al. [159] synthesized bifunctional polysiloxanes with tosylate end groups by cationic polymerization of D4 in the presence of sulfuric acid. Similarly, copolymers with iodide functional end groups were obtained using D4 and D4H as monomers (Figure 40). The compounds obtained by converting the iodide or tosylate end groups into azides were then used as reaction partners for alkyne-functionalized poly(oxazoline) A-blocks.

It is important to note that ROP and catalytic rearrangement reactions have not only made it possible to obtain PDMS with well-known functionalities for many decades, but also open up opportunities for introducing new functions that are relevant today. Thus, in the work of Milenin S.A. et al., the whole variety of options for the synthesis of polydimethylsiloxanes (PDMS) with azidopropyl functional groups at the silicon atom was demonstrated for the first time by classical ring-opening polymerization (ROP) and catalytic rearrangement of siloxanes in the presence of a strong acid (CF3SO3H) (Figure 41) [160]. The proposed method was used to obtain not only PDMS containing azidopropyl functional groups at both ends of the polymer chain (telechelics), but also PDMS with an irregular structure containing various proportions (5-50%) of azidopropyl functional groups in the main polymer chain. Importantly, the proposed method also turned out to be effective for the synthesis of PDMS containing both azidopropyl and hydridosilyl functional groups; as a result, PDMS were obtained with different mutual arrangements of the two types of functional groups in the PDMS chain. The catalytic rearrangement of low-molecular-weight siloxanes made it possible to obtain PDMS with azidopropyl functional groups in a wide range of molecular weights, from 2000 to 88,000 according to gel permeation chromatography (GPC). In addition, the possibility of further modification of the resulting azidopropyl-functional PDMS, as well as of multifunctional PDMS containing azidopropyl and hydridosilyl functional groups simultaneously, by azide-alkyne cycloaddition reactions was demonstrated.
Conclusions

As we have seen, the ROP method as applied to functional linear oligomers is being actively developed and remains the main method for obtaining PDMS oligomers with a wide range of functional end groups. There are many examples in the literature of reproducing this method to obtain compounds with given molecular parameters and types of functional groups. However, it is quite obvious that the use of new catalysts and of substrates based on natural compounds, together with the improvement of experimental techniques, makes it possible to count on the synthesis of new original silicone polymers, opening up new areas of their practical application.

Siloxane monochelics with polymerizable groups (macromonomers) made it possible to achieve a breakthrough in the properties of polymeric materials by changing the nature of intermolecular interactions in networks of molecular brushes. In these systems, in which siloxanes play a dominant role, it is possible to obtain materials with both very low and very high mechanical modulus. The level of regulation of properties is remarkable; it is not by chance that the authors call this approach the polymer genome. It would be impossible to make this breakthrough without the synthesis of mono- and telechelics with strictly specified molecular parameters. Therefore, the emergence of new libraries of mono- and telechelics based on PDMS containing azide and acetylene functions, opening up the rich chemistry of azide-alkyne cycloaddition, promises the emergence of new polymer systems and materials with an unusual set of properties and the possibility of their fine tuning. All of this will make it possible to control the properties and functions of new polymers and of the materials based on them through fine control of the structure.

Thus, the growing choice of functional end groups both in mono- and telechelics and in polyfunctional linear matrices with a controlled frequency of distribution of functional groups, as well as the quality of these functionalities, which allow the formation of target structures based on atom-economical addition processes, indicates a qualitative leap in the design of silicone structures of complex architecture.
In turn, the unique properties of materials based on such polymers are an effective stimulus for the further development of old, well-proven methods for the synthesis of polymers, which undoubtedly include ring-opening and equilibration reactions.
Navigator utilization among African-American breast cancer patients at a Comprehensive Cancer Center

Background: Patient navigation has been demonstrated to improve access to standard-of-care oncologic therapy. However, many patients, particularly those of African-American race, often do not have access to navigation upon receiving a diagnosis of cancer. As the most common cancer among African-American women is breast cancer, we sought to assess the rate of patient navigation among African-American breast cancer patients at our institution, which resides in a regional ZIP code comprised of 46% African-American residents.

Materials and methods: African-American breast cancer patients who had been discussed at our weekly breast cancer multidisciplinary tumor board over a recent three-month period were assessed by a patient navigator representing the Navigator-Assisted Hypofractionation (NAVAH) program to determine their access to navigation in their cancer care. Responses were assessed from a breast cancer support group and culled to determine a baseline proportion of navigation utilization.

Results: A total of 18 women of African-American race diagnosed with breast cancer were identified and assessed. Of these, a total of 4 noted that they had received navigation, yielding a navigation utilization percentage of 22.2% among African-American breast cancer patients at our institution.

Conclusion: The rate of navigation utilization among African-American breast cancer patients is poor. Despite our center residing in a region with a high proportion of African-American residents, such predominance has not translated into optimized navigation access for African-American breast cancer patients. This 22% rate of navigation utilization serves as a starting benchmark for initiatives such as the NAVAH program to provide tangible improvement in this patient population.

Introduction

Patient navigation is a community-based intervention designed to optimize access to timely diagnosis and treatment by eliminating barriers to care, thereby serving as a patient-centric healthcare service delivery model [1]. Originally founded by Dr. Harold Freeman, M.D., the importance of navigation and its integration into the healthcare team has increased over time commensurate with the increasing complexity of cancer care, which requires coordination between the three primary arms of cancer treatment: surgical, medical, and radiation. For African-Americans and other underrepresented minority patients diagnosed with cancer, navigation is of particular importance, since underrepresented patients are less likely to receive guideline-concordant oncologic care compared with Caucasian patients. As a result, increased adoption of navigation can potentially reduce oncologic treatment access disparities. Unfortunately, many patients, particularly those of African-American race, often do not have access to navigation upon receiving a diagnosis of cancer. Ideally instituted at the time of cancer diagnosis, navigators work with patients to guide them through the maze of visits, laboratory tests, and imaging, which all comprise optimal care, in a timely and cost-efficient manner [2]. As the most common cancer among African-American women is breast cancer [3,4], we sought to assess the rate of patient navigation among African-American breast cancer patients at our institution, which resides within one of the first 50 National Cancer Institute-designated comprehensive cancer centers.
The primary ZIP code of our institution's primary clinic (44106) is comprised of more than 46% African-American residents, which is far above the 12.6% African-American representation nationally [5] and the paltry 7% of African-American women on the practice-changing CALGB 9343 breast cancer radiation therapy randomized controlled trial [6,7]. The Navigator-Assisted Hypofractionation (NAVAH) program has recently been implemented at our institution to address radiation therapy access disparities facing African-American breast cancer patients [8]; consequently, an investigation into the baseline pre-NAVAH navigation rate at our institution represents an important initial step in fostering cancer care equity for African-Americans with breast cancer.

Materials and methods

The University Hospitals Seidman Cancer Center conducts weekly multidisciplinary breast cancer tumor boards comprised of breast surgeons, medical oncologists, radiation oncologists, pathologists, radiologists, nurses, patient navigators, and research coordinators. All breast cancer patients are eligible for presentation and discussion of optimal management. For this investigation, African-American breast cancer patients who had completed initial surgical management and were subsequently discussed for adjuvant radiation therapy at the weekly tumor board were assessed by a patient navigator (UB) representing the NAVAH program for consideration of potential inclusion into NAVAH, prior to discussion of their adjuvant radiation therapy options with a breast cancer radiation oncologist. This analysis occurred during a recent three-month time interval.

Patient navigation history was ascertained from medical records and from responses within an established breast cancer support group. Patients were considered to have received navigation if they documented having received navigation at any point from the time of their breast cancer diagnosis through the time of NAVAH navigator contact, including the time periods before or after initial surgery and before post-surgical discussion at the breast cancer tumor board.

Results

An average of six African-American women per month diagnosed with breast cancer at our institution were identified and assessed, yielding a total of 18 patients evaluated in this analysis. Of these 18 patients (age range: 28-62), a total of four women noted that they had received navigation between the time of cancer diagnosis and the discussion of adjuvant therapy. This yielded a navigation rate among African-American breast cancer patients of 4/18 = 22.2% at our institution. The majority of navigated patients received navigation after their initial cancer surgery.

Discussion

Patient navigation has been demonstrated to improve access to standard-of-care oncologic therapy. In an ongoing oncologic navigation program known as Walking Forward (designed to provide culturally appropriate community education on cancer, screening, and treatment for Native American patients), navigation has been demonstrated to: a) facilitate increased participation in clinical trials; b) assist cancer patients in using the healthcare system; and c) reduce cancer treatment disparities facing Native American patients residing a median distance of 140 miles from any cancer center [9].
For cancer patients requiring radiation therapy (RT), navigation has also been shown to decrease RT treatment interruption compared with historical non-navigated RT patients [9]. These successes inspired the creation of the NAVAH program to utilize navigation to tangibly combat RT access disparities facing underrepresented minorities [8,10]. Such a tangible goal requires an initial assessment of navigation prior to NAVAH implementation.

This study indicates that the navigation rate among African-American breast cancer patients at our institution is 22%, a number all the more concerning given the high African-American representation in the community in which this institution's primary clinic is geographically located. However, this rate serves as a starting benchmark for subsequent initiatives to provide tangible improvement in this patient population.

Although this study is not without limitations, one of which is the limited time period of sampling, it provides tangible evidence of the need for active intervention to improve navigation exposure in this patient population. The evaluation of the efficacy of future initiatives, such as NAVAH, against this benchmark will provide evidence of the utility of such initiatives in improving the care of these patients.
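As an illustration of the precision of this 22% benchmark (our own sketch, not an analysis reported in the study), the exact binomial confidence interval around 4 navigated patients out of 18 can be computed in R:

```r
# Exact (Clopper-Pearson) 95% confidence interval for the observed
# navigation rate of 4 out of 18 patients.
res <- binom.test(x = 4, n = 18)
res$estimate  # 0.222
res$conf.int  # approximately 0.06 to 0.48
```

The wide interval reflects the small sample and the short sampling window, which is consistent with the limitation the authors note.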
Identification of Key Genes and Pathways in Oxaliplatin-Induced Neuropathic Pain Through Bioinformatic Analysis

Background: The mechanism of chemotherapy-induced neuropathic pain (NP) remains obscure. This study aimed to uncover the key genes and protein networks that contribute to Oxaliplatin-induced NP.

Material/Methods: Oxaliplatin frequently results in a type of chemotherapy-induced NP that is marked by heightened sensitivity to mechanical and cold stimuli, which can lead to intolerance and discontinuation of medication. We investigated whether different etiologies lead to similar pathological outcomes through shared genetic targets or signaling pathways. Gene expression data were obtained from the Gene Expression Omnibus (GEO) database for GSE38038 (representing differential expression in spinal nerve ligation model rats) and GSE126773 (representing differential expression in Oxaliplatin-induced NP model rats). Differential gene expression analysis was performed using GEO2R.

Results: Protein-protein interaction (PPI) analysis identified 260 co-differentially expressed genes (co-DEGs). Subsequently, Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis revealed three shared pathways involved in both models: Kaposi sarcoma-associated herpesvirus (KSHV) infection, Epstein-Barr virus (EBV) infection, and the AGE-RAGE signaling pathway in diabetic complications. Further bioinformatics analysis highlighted eight significantly up-regulated genes in the NP group: Mapk14, Icam1, Cd44, Il6, Cxcr4, Stat1, Casp3, and Fgf2. Our results suggest that immune dysfunction, inflammation-related factors, or factors regulating inflammation may also be related to Oxaliplatin-induced NP. Additionally, we analyzed a dataset (GSE145222) comparing chronic compression of the DRGs (CCD), a classic model for studying NP, with control groups, and assessed the expression levels of these hub genes. Compared with the control groups, the hub genes were up-regulated in the CCD groups, and the differences were statistically significant, except for Stat1.

Conclusion: Our research contributes to elucidating the mechanisms underlying the occurrence and progression of Oxaliplatin-induced NP. We have identified crucial genes and signaling pathways associated with this condition.

Introduction

Chemotherapy-induced neuropathic pain (CINP) is a progressive, persistent, and challenging disease characterized by pain, numbness, a tingling sensation, and sensitivity to cold. It affects about 50-90% of individuals undergoing chemotherapy treatment. 1 CINP is mainly induced by first-line chemotherapy drugs such as oxaliplatin, paclitaxel, and vinblastine. With the accumulation of chemotherapy drug toxicity, peripheral neuropathological symptoms become increasingly severe, compelling patients to reduce chemotherapy drug dosages, shorten the chemotherapy cycle, or even discontinue medication, ultimately impacting patient survival rates. CINP remains a therapeutic challenge, with no effective drugs or treatments currently available.

Oxaliplatin is a widely used chemotherapy drug for the treatment of colorectal and advanced ovarian cancer. 2 However, one of the major side effects associated with oxaliplatin treatment is the development of neuropathic pain (NP). This condition is characterized by severe and painful peripheral neuropathy, which can often lead to dose reduction or early discontinuation of chemotherapy. 3,4
Despite its clinical significance, the exact mechanism underlying Oxaliplatin-induced neurotoxicity and the subsequent development of NP remains unclear.

Inflammation of the dorsal root ganglia (DRGs) is a pivotal physiological basis for the occurrence of NP. During the process of pain generation, the DRGs serve as the primary neurons of pain input, so the DRGs are considered key participants in the pathogenesis leading to NP. 5 Studies have found that the expression of neuroinflammatory markers is significantly increased in both the DRG and the spinal cord (L4-L5 region) of rats. 6,7 Other studies have found that the main target organ of oxaliplatin's action is the DRGs, 8,9 but its specific mechanism is still unclear. Investigating the shared targets and signaling pathways associated with different etiologies leading to the same outcome can enhance our comprehension of the molecular mechanisms underlying NP. In this study, the GSE38038 and GSE126773 datasets obtained from the GEO database were investigated. Combining bioinformatics analyses with enrichment methods, the objective of this study was to find shared differentially expressed genes (DEGs) and elucidate their functions in NP induced by diverse etiologies. Additionally, a protein-protein interaction (PPI) network was constructed, and the STRING database and Cytoscape software were employed to uncover gene modules and identify central genes as hub genes. Finally, we collected expression data on the central genes from the CCD rat model, induced by chronic compression of the DRGs, to validate our findings in the experimental and control groups.

Identification of DEGs

To identify DEGs between the diseased and control groups, we utilized the online analysis tool GEO2R (http://www.ncbi.nlm.nih.gov/geo/geo2r), 10 which is based on the R packages GEOquery and limma, facilitating data retrieval and differential expression calculation, respectively. By comparing gene expression profiles across groups, we determined the DEGs associated with the disease condition. Probe sets lacking annotated gene identifiers were excluded from the analysis to ensure the validity and precision of the results, and for genes represented by multiple probe sets, the redundant probe sets were either eliminated or their expression levels were averaged. Genes were deemed significantly differentially expressed if they met the criteria of an adjusted P-value less than 0.05 and a |logFC (fold change)| of no less than 1.0. To identify common DEGs, a Venn diagram was generated using R software.
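To make these thresholds concrete, the following is a minimal R sketch of the limma workflow that GEO2R wraps. The accession is real, but the group assignment below is a placeholder that must be matched to the actual sample annotations of each series:

```r
library(GEOquery)
library(limma)

# Download the series matrix (first platform) from GEO
gse <- getGEO("GSE126773", GSEMatrix = TRUE)[[1]]
expr <- exprs(gse)

# Hypothetical group assignment: adjust to the actual sample order
group <- factor(c(rep("NP", 3), rep("control", 3)))
design <- model.matrix(~0 + group)
colnames(design) <- levels(group)

# Linear model fit and moderated t-statistics
fit <- lmFit(expr, design)
fit <- contrasts.fit(fit, makeContrasts(NP - control, levels = design))
fit <- eBayes(fit)

# Apply the thresholds used in the study: adj. P < 0.05 and |logFC| >= 1
tab <- topTable(fit, adjust.method = "BH", number = Inf)
degs <- subset(tab, adj.P.Val < 0.05 & abs(logFC) >= 1)

# The Venn overlap between the two datasets would then be, e.g.,
# intersect(rownames(degs_126773), rownames(degs_38038))
```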
Functional and Pathway Enrichment Analysis

The Gene Ontology (GO) database provides succinct annotations of the characteristics of gene products, encompassing their functional attributes, involvement in biological pathways, and cellular localization. The Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathway database focuses on the compilation of gene pathway information across distinct species. Enrichment analysis results were obtained using the Pathview database (https://www.bioinformatics.com.cn). 11 Pathview is a toolkit for integrating and visualizing pathway-based data that simplifies the mapping of user-provided data onto relevant pathway graphs. Users only need to provide their data file, after which the tool integrates and maps the data onto the corresponding pathway and generates pathway graphs with the mapped data, including Venn diagrams. A significance threshold of adjusted P < 0.05 was used to indicate statistical significance.
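The over-representation statistic behind this kind of GO/KEGG analysis is typically a hypergeometric (one-sided Fisher) test. A minimal R sketch with purely illustrative counts, not values from this study:

```r
# Hypergeometric over-representation test for one pathway.
# All counts below are illustrative assumptions.
N <- 20000  # background genes
K <- 150    # genes annotated to the pathway
n <- 260    # DEGs submitted (here, the number of common DEGs)
k <- 12     # DEGs that fall in the pathway

# P(X >= k) when drawing n genes from N with K "successes"
p <- phyper(k - 1, K, N - K, n, lower.tail = FALSE)
p
# Across all tested pathways, the resulting vector of p-values would be
# corrected, e.g. p.adjust(p_vector, method = "BH"), and pathways with
# adjusted P < 0.05 reported as enriched.
```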
Analysis of the Functional Characteristics of Common DEGs
To investigate the biological functions and pathways associated with the 260 common DEGs, enrichment analysis was conducted using the GO and KEGG Pathway databases. The GO analysis revealed that the enriched biological processes of these genes were primarily related to response to peptide and myeloid cell homeostasis (Figure 3A). Additionally, the analysis indicated enrichment in the terms neuron to neuron synapse and postsynaptic density (Figure 3B), as well as growth factor receptor binding and hormone receptor binding (Figure 3C). Regarding the KEGG Pathway analysis, the results indicate significant enrichment of the 260 common DEGs in three key pathways, namely insulin resistance, EGFR tyrosine kinase inhibitor resistance, and toxoplasmosis (Figure 3D).

Using Cytoscape, a PPI network was constructed for the common DEGs. The network was generated by incorporating interactions with combined scores greater than 0.7, indicating a high level of confidence in the reliability of the interactions. Through the MCODE plug-in within Cytoscape, six gene clustering modules were identified (Figure 4A). Across these modules, a total of twenty-five genes were present, indicating their potential functional significance. Notably, the three modules with the highest scores encompassed sixteen genes, suggesting their potential importance in the biological context (Figure 4B-D). The GO analysis revealed that the identified genes are significantly associated with several biological processes and molecular functions, including sterol biosynthetic process, perinuclear endoplasmic reticulum localization, 3-keto sterol reductase activity, type 5 metabotropic glutamate receptor binding, CD4 receptor binding, and oxidoreductase activity. The oxidoreductase activity term refers to the oxidation of a pair of donors with the reduction of molecular oxygen and the production of two molecules of water (Figure 5A). According to the KEGG pathway analysis, the identified genes were primarily associated with three key pathways: steroid biosynthesis, Epstein-Barr virus (EBV) infection, and the AGE-RAGE signaling pathway in diabetic complications (Figure 5B).
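For readers who wish to reproduce this kind of enrichment in R, the following sketch uses clusterProfiler, a standard Bioconductor package, as a stand-in for the web-based tools used in this study; the rat annotation database is assumed because both datasets come from rat DRGs:

    # GO and KEGG enrichment of the common DEGs (sketch; web tools were used in the study)
    library(clusterProfiler)
    library(org.Rn.eg.db)

    entrez <- bitr(common, fromType = "SYMBOL", toType = "ENTREZID",
                   OrgDb = org.Rn.eg.db)$ENTREZID

    ego   <- enrichGO(gene = entrez, OrgDb = org.Rn.eg.db,
                      ont = "BP", pAdjustMethod = "BH", pvalueCutoff = 0.05)
    ekegg <- enrichKEGG(gene = entrez, organism = "rno", pvalueCutoff = 0.05)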
Hub Gene Selection and Analysis
Through the application of the cytoHubba plug-in and employing six algorithms, the top 10 hub genes were identified, as presented in Table 1. Through analysis of the Venn diagrams, we identified ten commonly shared hub genes, namely Stat1, Il6, Stat3, Cd44, Icam1, Casp3, Fgf2, Jak2, Cxcr4, and Mapk14 (Figure 6A). To gain a deeper understanding of the 10 identified hub genes, we utilized the GeneMANIA database to analyze their co-expression network and associated functions. The analysis revealed a complex PPI network, with physical interactions accounting for 84.50%, shared protein domains for 6.34%, co-expression for 30.45%, co-localization for 4.01%, and predicted interactions for 5.61% (Figure 6B). Function analysis conducted using GeneMANIA demonstrated that these genes are primarily involved in regulating endothelial cell migration, blood vessel endothelial cell migration, myeloid cell homeostasis, and response to cytokine stimulus, among other functions. Notably, eight genes appeared in the three modules with the highest scores, indicating their potential significance (Figure 6C). Additionally, these genes exhibited significant expression differences when the control group was compared to the treatment group (Figure 6D). For a comprehensive overview, Table 2 provides the complete names of these genes along with their associated functions. Furthermore, KEGG pathway analysis of these 10 hub genes revealed their strong association with Kaposi sarcoma-associated herpesvirus (KSHV) infection, EBV infection, and the AGE-RAGE signaling pathway in diabetic complications (Figure 6E and F).

Validation of Hub Genes Expression
To assess the reliability of the expression levels of the identified hub genes (Mapk14, Icam1, Cd44, Il6, Cxcr4, Stat1, Casp3, and Fgf2), we selected a dataset (GSE145222) that includes both sham and chronic compression of the DRGs (CCD) groups. The CCD model is a well-established model for studying NP. We analyzed the expression levels of the aforementioned hub genes in the selected dataset. The results demonstrated that, in comparison to the control group, all of the hub genes were up-regulated in the CCD groups, with statistically significant differences observed, except for Stat1 (Figure 7). These findings serve as additional evidence supporting the potential involvement of these hub genes in the context of NP.
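The validation comparison amounts to a per-gene two-sample t-test; a minimal sketch, assuming a gene-by-sample expression matrix for GSE145222 (the matrix name is hypothetical):

    # Per-gene t-tests for the eight hub genes: 5 sham vs 4 CCD samples
    hub   <- c("Mapk14", "Icam1", "Cd44", "Il6", "Cxcr4", "Stat1", "Casp3", "Fgf2")
    group <- factor(c(rep("sham", 5), rep("CCD", 4)))

    pvals <- sapply(hub, function(g)
      t.test(expr145222[g, group == "CCD"],
             expr145222[g, group == "sham"])$p.value)

    names(pvals)[pvals < 0.05]   # genes meeting the P < 0.05 threshold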
Discussion
Chemotherapy-induced peripheral neuropathic pain (CIPN) is a common and debilitating adverse reaction associated with anticancer medications. Oxaliplatin, a platinum-derived anticancer drug used in advanced colorectal cancer, frequently induces a distinct type of CIPN marked by increased sensitivity to mechanical and cold stimuli. However, the mechanisms underlying the development and chronicization of chemotherapy-induced peripheral neuropathy, as well as effective treatment options, remain poorly understood, highlighting the urgent need for therapeutic advancements in this area. In this study, our findings shed light on potential pathways implicated in the development of CIPN. Specifically, we identified KSHV infection, EBV infection, and the AGE-RAGE signaling pathway in diabetic complications as potentially crucial pathways in animal models of peripheral neuropathy induced by both spinal nerve ligation and oxaliplatin treatment. Moreover, we found that the genes Mapk14, Icam1, Cd44, Il6, Cxcr4, Stat1, Casp3, and Fgf2 may play significant roles during the development of CIPN. Kaposi sarcoma-associated herpesvirus (KSHV) is known to cause various human malignancies and hyperproliferative diseases, particularly in individuals with compromised immune systems, such as those infected with human immunodeficiency virus (HIV). KSHV can modulate innate immune pathways, enabling infected cells to survive for extended periods following primary infection and during viral latency. 15 EBV primarily infects B cells, remaining dormant in the body until certain conditions disrupt the EBV-host balance and allow the virus to manifest its pathogenic potential. Studies have suggested that EBV can induce immune dysfunction in susceptible individuals, leading to neuroinflammation through autoimmunity or antiviral immune responses. 16 Diabetes is associated with increased production of advanced glycation end products (AGEs) resulting from reactive dicarbonyl compounds in a hyperglycemic environment. AGEs can trigger the expression of pro-inflammatory cytokines through their main receptor, the receptor for advanced glycation end products (RAGE). The involvement of the AGE-RAGE signaling pathway has been suggested in the pathogenesis of diabetic peripheral neuropathy. 17-19 In the context of our study, we found that immune dysfunction and the AGE-RAGE signaling pathway in diabetes complications may also play a role in the development of oxaliplatin-induced peripheral neuropathy. In our study, we observed significant upregulation of genes such as Mapk14, Icam1, Cd44, Il6, Cxcr4, Stat1, Casp3, and Fgf2 in both oxaliplatin-induced peripheral neuropathy (NP) and spinal nerve ligation-induced NP animal models compared to the control group. Mapk14, a member of the MAP kinase family, may be activated by various environmental stresses as well as pro-inflammatory cytokines. Previous research on microglia and NP has shown that Mapk14 is significantly upregulated and plays a role in microglial activation in NP animal models. 20 Icam1 is a glycoprotein located on the cell surface that acts as an adhesion receptor. It plays a crucial role in regulating the recruitment of white blood cells to areas of inflammation and participates in important physiological processes such as cell signal transduction, immune response, and inflammatory response. In NP animal models, Icam1 has been found to be significantly increased compared to the control group. 21
Additionally, another study found upregulated expression of Icam1 in the spinal cord of rat models of NP compared to rat models of inflammatory pain. 22 CD44 is a transmembrane glycoprotein that is widely expressed in various cell types. It is involved in various biological processes including cell adhesion, migration, signal transduction, proliferation, and tumor metastasis. Recent studies focusing on CD44 and CIPN have demonstrated that CD44 acts as a mediator of HMWH-induced pain relief in oxaliplatin- and paclitaxel-induced CIPN. 23 These findings emphasize the significant role of CD44 signaling in HMWH-induced antihyperalgesia and establish it as a potential therapeutic target for inflammatory conditions and NP. 24 Interleukin-6 (IL-6) is an interleukin with dual functionality, serving as both a pro-inflammatory cytokine and an anti-inflammatory myokine. The inflammatory response, involving the propagation of inflammation and recruitment of neutrophils, is initiated when glial cells are activated by noxious stimuli and inflammation. Pro-inflammatory cytokines, including IL-6, play a crucial role in the initiation and maintenance of NP. 25 In one study, the expression levels of the inflammatory factor IL-6 in the ipsilateral L4-L6 spinal dorsal horn were found to be increased relative to the contralateral side at 7 days following nerve crush injury. 26 Cxcr4 is a specific receptor for the chemokine stromal cell-derived factor-1 (SDF-1) and exerts a strong effect on the promotion of chemotaxis, particularly for lymphocytes. It has been suggested that Cxcr4 is closely related to the occurrence of NP. Research has shown that Cxcr4 expression increases in spinal glial cells of mice with peripheral nerve injury-induced NP; blocking Cxcr4 can alleviate pain behavior, while overexpressing Cxcr4 can induce pain hypersensitivity. 27 Stat1 belongs to the STAT family, which is critical for regulating gene expression within cells and important in immune responses. In the context of chronic NP, a study identified specific blood biomarkers for chronic NP by comparing patients with chronic pain (neuropathic and nociceptive) to painless controls, and Stat1 was among the identified biomarkers. 28 Casp3, a downstream effector protease in the apoptosis cascade, plays a crucial role in cell apoptosis. In one study, IIK-7 was found to significantly alleviate mechanical allodynia and glial activation while inhibiting casp-3 proteins, suggesting that IIK-7 reduces NP by inhibiting glial activation and suppressing proteins associated with inflammation and apoptosis. 29 Fgf-2 belongs to the fibroblast growth factor family, which is significant for various processes such as tendon-to-bone healing, cartilage repair, bone repair, and nerve regeneration. Research has shown that spinal cord astrocytes upregulate Fgf-2, a neurotrophic and gliogenic factor, in response to ligation of spinal nerves L5 and L6. Studies have revealed that endogenous astroglial Fgf-2 plays a role in sustaining NP tactile allodynia, which is linked to the reactivity of spinal cord astrocytes; inhibiting spinal Fgf-2 has been shown to ameliorate NP symptoms. 30 These findings suggest that factors involved in inflammation and its regulation may also play a role in oxaliplatin-induced NP. Understanding the involvement of these factors could provide insights into potential therapeutic targets for managing NP.
The objective of this study was to utilize bioinformatics technology to identify common factors in oxaliplatin-induced NP animal models and spinal nerve ligation-induced NP animal models. By identifying DEGs and hub genes, the study aimed to enhance our understanding of the underlying mechanisms of oxaliplatin-induced NP. However, several limitations should be considered. Firstly, the study design is retrospective, which means that the findings should be validated using external sources to ensure their reliability and reproducibility; additional studies with independent datasets are necessary to confirm the identified DEGs and hub genes. Secondly, further investigations are needed to validate the functional roles of the identified hub genes using in vitro models. Experimental studies should be conducted to explore the mechanisms by which these hub genes contribute to oxaliplatin-induced NP.

Conclusions
In conclusion, our study utilized bioinformatics analysis to identify common DEGs in two independent datasets related to oxaliplatin-induced NP and spinal nerve ligation-induced NP. Through enrichment and PPI network analysis, we discovered several hub genes that are potentially involved in the shared mechanisms underlying these two types of NP.

Figure 2. Venn diagram showing the overlapping DEGs between GSE126773 and GSE38038.
Figure 3. GO analysis and KEGG pathway analysis of the overlapping DEGs. (A) Enriched biological processes (BP) identified in GO analysis. (B) Cellular component (CC) terms enriched in GO analysis. (C) Molecular function (MF) terms enriched in GO analysis. (D) KEGG pathway enrichment analysis of the integrated DEGs.
Figure 5. Common enrichment analysis results and KEGG pathway graph. (A) Gene Ontology enrichment analysis results. (B) KEGG pathway enrichment analysis results.
Figure 6. Venn diagram and co-expression network of hub genes. (A) The Venn diagram illustrates the overlapping hub genes identified by six different algorithms. (B) The co-expression network of hub genes and their associated genes, analyzed using GeneMANIA. (C) Module analysis revealed that eight genes were present in the three modules with the highest scores. (D) Expression patterns of the eight hub genes in the control and treatment groups. (E and F) Pathways associated with the hub genes.
Figure 7. Expression level of hub genes in GSE145222. (A-H) Comparison between data sets using a mean t-test. P < 0.05 was considered statistically significant. (I) Expression of hub genes in both the control and CCD groups.
Table 2. Details of the hub genes.
Sonophotocatalytic water splitting by BaTiO3@SrTiO3 core shell nanowires

Graphical abstract: Sonophotocatalysis, utilizing core-shell BaTiO3@SrTiO3 nanowires, enhances water splitting for hydrogen production. Varying Sr/Ba ratios influence performance, with BST-3 nanowires achieving the highest hydrogen evolution rate (17.94 µmol·g−1·min−1) and exceptional stability, making them efficient sonophotocatalysts.

Introduction
The global increase in the consumption of fossil fuels and the resulting environmental concerns have necessitated considerable attention and earnest efforts towards the utilization of eco-friendly and sustainable alternative energy sources [1]. Among the available options, solar energy stands out as the most promising owing to its abundant availability and ease of use. According to the Intergovernmental Panel on Climate Change, the solar energy striking the surface of the Earth is approximately 3000 times higher than the global power consumption, indicating that an efficient utilization of solar energy is highly beneficial for mankind [2]. One potential method of harnessing solar energy is by converting it into hydrogen (H2), which offers several advantages, such as cost efficiency, ease of transportation, and high energy content per unit mass. Various techniques, such as solar-converted electricity with an electrolyzer, concentrated solar thermal technology with electrolyzers, biological and thermochemical processes, and direct photo-/photoelectrocatalysis of water, have been explored to convert solar and hydro energy into H2 [3]. However, these methods come with their own set of challenges, such as excessive space requirements, energy losses, high cost, and feasibility concerns. To address these anticipated issues, direct photocatalysis, sonophotocatalysis, and piezocatalysis have emerged as promising alternatives [4].
Over the years, scientists have proposed several solutions, such as nanostructures, bandgap engineering, element doping, and surface treatment, to improve photocatalytic efficiency [5]. Among these options, SrTiO3 has gained prominence as a promising material for photocatalysis owing to its feasible band structure, optical stability, and low cost [6]. The literature suggests that the photocatalytic performance of SrTiO3 can be further enhanced through various strategies, such as heterojunction formation and metal doping [7]. It has drawn much attention in recent times not only as a photocatalyst but also as a piezocatalyst. When a piezoelectric material is exposed to ultrasonic waves, electric fields are formed across the material [8,9]. In sonocatalysis processes, the spontaneous polarization of the piezoelectric material plays a crucial role. Upon the application of ultrasonic waves, the material experiences compression and tension forces, leading to the generation of piezoelectric charges (q+ and q−). These piezoelectric charges, in turn, trigger oxidation-reduction reactions, making such materials promising candidates for efficient water splitting [10-12]. The combination of ultrasonic waves and light can increase catalytic performance additively or synergistically. Numerous studies have demonstrated that the piezoelectric potential in materials provides the driving force for the separation of photogenerated electrons and holes, thereby improving the photocatalytic performance [13]. Since Wang first proposed the concept of the piezophototronic effect, several efforts have been made to combine piezoelectric materials and semiconductors as sonophotocatalysts, such as BaTiO3/ZnO, MoS2/KNbO3, BaTiO3/(C2H6OSi)n, PVDF/ZnSnO3/Co3O4, and Ag2O/BaTiO3 [1]. These studies strongly support that the introduction of piezoelectric potential not only effectively enhances the photocatalytic performance of semiconductors but also enables independent completion of the catalytic process [14]. Therefore, synthesizing a suitable piezoelectric/semiconductor composite structure has emerged as a new and effective strategy to improve photocatalytic performance.

Herein, our primary objective is to synthesize pristine BaTiO3 and SrTiO3 nanowires and BaTiO3@SrTiO3 core shell nanowires with various shell coverages to develop efficient sonophotocatalysts. We assessed their photo/sono/sonophotocatalytic efficiency for hydrogen evolution via water splitting. A comprehensive analysis of the photo/sono/sonophotocatalytic performance of the BaTiO3@SrTiO3 core/shell nanowires (BST NWs) demonstrates their potential to advance existing water-splitting processes.

Materials
Reagents sourced from Sigma-Aldrich were used as received, without any modifications.

Synthesis of SrTiO3
The SrTiO3 photocatalyst was synthesized as follows: Initially, 10 mM of Sr(NO3)2 (15 mL) was mixed with 0.1 g of polyvinylpyrrolidone (PVP). Subsequently, 10 mM of C4H10OTi (15 mL) and 6 mL of 2 M NaOH solution were added, and the solution was vigorously mixed using an ultrasonic-probe sonicator (at room temperature for 1 h). This was followed by a hydrothermal treatment at 150 °C for 6 h. Afterward, the sample was thoroughly cleaned with deionized water and ethanol (EtOH) and subsequently freeze-dried for 24 h (see Fig. SF2).
Synthesis of BaTiO3
The BaTiO3 photocatalyst was prepared using the following procedure: Firstly, 10 mM of Ba(NO3)2 (15 mL) was mixed with 0.1 g of PVP. Subsequently, 10 mM of C4H10OTi (15 mL) and 6 mL of 2 M NaOH solution were added to the mixture while vigorously stirring using an ultrasonic-probe sonicator (at room temperature for 1 h). The resulting solution was then subjected to a hydrothermal treatment at 150 °C for 6 h. Afterward, the synthesized material was thoroughly washed with deionized water and EtOH. Finally, it was freeze-dried for 24 h (see Fig. SF2).

Synthesis of BST nanowires
The synthesized BaTiO3 was mixed with different molar ratios of Sr(NO3)2 (Sr/Ba = 2.5:7.5, 5.0:5.0, and 7.5:2.5 mM, labeled BST-1, BST-2, and BST-3, respectively) in 15 mL of solution, and 0.1 g of PVP was added to this mixture. Subsequently, 10 mM of C4H10OTi (15 mL) and 6 mL of 2 M NaOH solution were added while vigorously mixing the solution using an ultrasonic probe sonicator. The resulting mixture underwent a hydrothermal treatment at 150 °C for 6 h. Afterward, it was thoroughly rinsed with deionized water and EtOH, followed by a 24 h freeze-drying process (Scheme 1).

Characterization
Transmission electron microscopy (TEM) was used to identify the structure and morphology of the synthesized materials. X-ray diffraction (XRD) patterns were obtained using a Philips diffractometer (X'pert Pro). The electronic-band structures and chemical compositions of the photocatalysts were determined using a hybrid X-ray photoelectron spectrometer (XPS-UPS/Raman) (Omicron XPS, Scienta Omicron, Germany). Ultraviolet-visible diffuse reflectance spectroscopy (UV-Vis DRS) results were acquired using a Jasco spectrometer equipped with a 60 mm integrating sphere with standard BaSO4 as a reference (Jasco V-770 UV-Vis-NIR). Photoelectrochemical analyses, including electrochemical impedance spectroscopy (EIS) and transient photocurrent measurements, were performed using an electrochemical workstation (ZIVE SP1, WonATech, Korea) with a standard three-electrode system. An LED lamp (SOLIS-1C, Thorlabs, USA) was used as the light source, with a Pt wire (MW-1032, BASi, USA) as the counter electrode, an Ag/AgCl electrode saturated with KCl (BASi, USA) as the reference electrode, and Na2S (0.1 M)/Na2SO3 (0.4 M) as the electrolyte. The working electrode was prepared by dispersing 5 mg of the photocatalyst in a solvent mixture of 1 mL EtOH and water (70% EtOH) and spin-coating the solution (100 µL) onto an ITO plate (2 cm × 2 cm). The electrode was then air-dried and calcined at 250 °C.

Nanogenerator fabrication
The nanogenerator was fabricated by packing the synthesized catalyst between two Al foils. An Ag wire was pasted onto the electrode, and two flexible polyethylene terephthalate (PET) substrates of varying thickness, preferably larger than the Al foil, were attached to the top (80 μm) and bottom (200 μm) sides using adhesive tape as packaging. The sandwiched structure was then subjected to appropriate pressure to fabricate a compact device, which helps protect the nanogenerator from damage in its surroundings (Fig. SF3).

Water splitting
Ar gas was purged into a 100 mL three-neck Pyrex reactor; purging the system with Ar gas ensured the removal of any excess air. A Xe lamp (300 W, Perkin-Elmer PE300; spectral information in Fig. SF4) was used as a light source during the photocatalytic experiments.
The irradiation intensity was maintained at 4.10 × 10−3 W/cm2. An aqueous solution containing 5 mg of the photocatalyst and 5% (vol./vol.) triethanolamine (98%, 5 mL) was used to perform the hydrogen evolution reaction (HER). The system was exposed to light and/or constant sonication (40 kHz) in an ultrasonication bath. The temperature was maintained at 25 °C throughout the reaction. Gas chromatography with a thermal conductivity detector (Agilent 7890B GC/TCD, USA) was used to analyze the gas composition.

Catalyst stability
Multiple cycles of light on and off were conducted to investigate the stability of the prepared BST-3 nanowire photocatalyst in HER experiments. The stability of the photocatalyst was evaluated by measuring the amount of H2 produced in each cycle, up to four cycles of HER with multiple washing and freeze-drying steps after each cycle. XRD and HR-TEM analyses were performed to examine potential alterations after the recovery of the photocatalyst following four treatment cycles.

Characterization
Fig. 1 illustrates the TEM images of BaTiO3, SrTiO3, and BaTiO3@SrTiO3 core shell nanowires (BST-3). Specifically, Fig. 1(a-c) represent SrTiO3, Fig. 1(d-f) correspond to BaTiO3, and Fig. 1(g-k) depict BST-3. The synthesized SrTiO3 exhibits a long-wired structure without any intertwining, with an average diameter ranging between 60 and 65 nm and an average length of 2.4 µm (Fig. 1(a-c) and SF5). BaTiO3 (Fig. 1(d-f) and SF5) assumes a similar long-wired structure, with an average diameter of 40-45 nm and an average length of 2.1 µm. The structural morphology of BST-3 (Fig. 1(g-h) and SF5) depicts a serrated structure with BaTiO3 in the core and SrTiO3 as the shell. The core exhibits an average length and diameter of 2.8 µm (Fig. SF5) and 40-50 nm, respectively, whereas the shell has a thickness of 20-30 nm (Fig. 1) [16]. These results confirm the successful synthesis of SrTiO3 and BaTiO3 nanowires through a hydrothermal treatment. The XRD analysis of the BaTiO3@SrTiO3 core shell nanowires indicates that the peak positions shift to higher angles with an increase in strontium content in the core shell nanowires (Fig. 2(b)). The XRD pattern does not exhibit any impurity peaks, suggesting the pristine nature of the synthesized nanowires. These results are consistent with the observations from high resolution (HR)-TEM (Fig. 1(c, f, and k)).

Fig. 2(c-f) show the XPS spectra of the core shell nanowires (BST-3), which provide information on the chemical composition and valence states of the different elements in the nanowire. XPS analysis was performed to determine the electron binding energy associated with each element. Fig. 2(c) exhibits two peaks at 780.52 and 794.84 eV, which originate from the Ba 3d5/2 and Ba 3d3/2 states in the BaTiO3 core, respectively [17]. Similarly, the peaks at 133. UV-Vis DRS was used to determine the optical absorption properties of the nanowires [19]. Fig. 3(a) reveals that SrTiO3, BaTiO3, and the core shell nanowires (BST-1, BST-2, and BST-3) absorb light at wavelengths shorter than approximately 400 nm. Tauc plots (Fig. 3(b)) give bandgap energies of 3.21, 3.27, 3.25, 3.24, and 3.23 eV for SrTiO3, BaTiO3, BST-1, BST-2, and BST-3, respectively. No significant differences are observed in the bandgaps of the pristine BaTiO3 and SrTiO3 nanowires and the core shell nanowires [20].
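The band-gap extraction from a Tauc plot can be reproduced numerically; the sketch below assumes a direct-allowed transition, so (αhν)² is plotted against photon energy hν and the linear absorption edge is extrapolated to zero (the data vectors and fitting window are hypothetical):

    # Tauc-plot band-gap estimate from DRS data (direct-allowed transition assumed)
    h_nu <- 1239.84 / wavelength_nm    # photon energy in eV from wavelength in nm
    tauc <- (absorbance * h_nu)^2      # (alpha*h*nu)^2, absorbance as a proxy for alpha

    edge <- h_nu > 3.0 & h_nu < 3.6    # illustrative linear-edge fitting window
    fit  <- lm(tauc[edge] ~ h_nu[edge])
    Eg   <- -coef(fit)[1] / coef(fit)[2]  # x-intercept gives the band gap in eV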
EIS was employed to gain insights into the charge migration in the synthesized photocatalysts. In Fig. 3(c), the Nyquist plots of the impedance data are displayed with an equivalent circuit consisting of a resistor (R) in series with a parallel combination of a resistor (R1) and a constant phase element (CPE). BST-3 exhibits the arc with the smallest radius in the plot, and the radius increases in the order BST-2, BST-1, SrTiO3, and BaTiO3. These results suggest that the interfacial charge transfer resistance is significantly reduced in the core shell nanowires compared to the pristine SrTiO3 and BaTiO3 nanowires, thereby facilitating improved photocatalytic water-splitting performance [21]. The coupling effect of Sr2+ and Ba2+ is considered responsible for the efficient suppression of electron-hole recombination, leading to enhanced photocatalytic activity [22].

Transient photocurrent was measured to assess the photocurrent response of the synthesized photocatalysts. Fig. 3(d) presents ten consecutive cycles of light on and off. All the synthesized materials exhibit photocurrents when illuminated, and no current is observed when the light is off. The BaTiO3 nanowire shows the lowest photocurrent density of 0.03 µA/cm2, and the maximum density of 0.12 µA/cm2 was observed for the BST-3 core shell nanowires. The ascending order of photocurrent density is as follows: BaTiO3 < SrTiO3 < BST-1 < BST-2 < BST-3. Light absorption and charge separation directly influence the photocurrent response [7]. The higher photocurrent response of BST-3 can be attributed to effective separation of photogenerated electron-hole pairs and subsequent charge transfer at a given light absorption [23].

Fig. SF3 illustrates the piezoelectric nanogenerator (PENG) structure made from the nanowires, and Fig. 3(e) shows a photograph of the fabricated PENGs. The piezoelectric properties of the fabricated PENGs were examined using a periodic bending-releasing machine. The experimental setup was confined in a Faraday cage to eliminate the possibility of artifacts from external electrostatic charges [24]. The output voltages detected after each bending-releasing cycle with a motion frequency of 2 Hz and a strain of 4 mm are presented in Fig. 3(f). The PENGs of BST-3 nanowires exhibit the highest outputs, and a significant increase in the output voltages is observed with increasing Sr2+ composition, indicating that the BST-3 nanowire is the best piezoelectric material among all the materials investigated in this study. The core shell structure can cause local deformation in the BST nanowires, thereby creating multiple stress concentration points that enhance their sonocatalytic properties [25].
The specific surface areas of the samples were determined using Brunauer-Emmett-Teller (BET) gas-sorption measurements. Table 1 lists the specific surface areas of the nanowires. The synthesized materials exhibit increasing surface areas in the following order: SrTiO3 < BaTiO3 < BST-1 < BST-2 < BST-3. Among all the samples, SrTiO3 exhibits the lowest surface area of 79.3 m2·g−1. The incorporation of the SrTiO3 shell onto the BaTiO3 core increases the surface area of the BST nanowires, ranging from 116.5 m2·g−1 for BST-1 to 135.9 m2·g−1 for BST-3. The larger surface area is explained by the corrugated surface of the core shell nanowires illustrated in Fig. 1(g). The high surface area of the core shell nanowires increases the number of active sites and enhances photocatalysis [26].

Photocatalytic water splitting
We examined the sono- and photocatalytic properties of the nanowires for water splitting to produce hydrogen. Fig. 4(a-d) indicates the H2 evolution efficiency of the BaTiO3, SrTiO3, BST-1, BST-2, and BST-3 nanowires under sonocatalysis, photocatalysis, and sonophotocatalysis. Fig. 4(a) compares the sonocatalytic behavior of the nanowires; all of the nanowires show a linear increase in the total amount of H2 with increasing time. The BST-3 nanowire shows the best sonocatalytic performance, with an H2 production rate of approximately 3.49 µmol·g−1·min−1. The sonocatalytic performance agrees with the piezoelectric output voltages generated by the nanowires (Fig. 3(f) and SF8), which suggests that the piezoelectric response caused by ultrasonic stimuli catalyzes water splitting. Fig. 4(b) displays the photocatalytic performance of the nanowires; all of the nanowires also produce hydrogen gas at their own constant production rates. The photocatalytic performance of the nanowires coincides with the sonocatalytic behavior, and the BST-3 nanowire exhibits the best performance in photocatalysis as well, which is accounted for by its larger surface area and the effective charge separation and transfer between the BaTiO3 core and SrTiO3 shell, as evidenced by the photocurrent measurements. In the next experiment, both sonication and irradiation were applied for water splitting. The sonophotocatalytic properties of the nanowires are presented in Fig. 4(c). As expected, the amount of hydrogen increases because of the combined stimuli. The core shell nanowires show better performance compared to the pristine nanowires [27]. Interestingly, when comparing the calculated linear sum of the HER rates for photocatalysis and sonocatalysis with the experimental rate of sonophotocatalysis, a significantly enhanced efficiency was observed (Fig. SF9). This observation strongly suggests a synergistic effect resulting from the combination of ultrasonication and irradiation, which is discussed in a later section (see Section 3.4).

Influence of sonication on hydrogen evolution
The influence of ultrasonic frequency on the HER was studied by varying the frequency between 25, 45, 60, and 100 kHz. Fig. 5(a, b) indicates that the maximum H2 yield after 60 min was observed at a frequency of 45 kHz.
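Since the total H2 rises linearly with time, each reported rate is in effect the slope of a linear fit normalized by catalyst mass; a minimal sketch with illustrative numbers (only the 5 mg catalyst mass comes from the text):

    # H2 evolution rate (umol g^-1 min^-1) from cumulative GC measurements
    time_min <- c(0, 15, 30, 45, 60)
    h2_umol  <- c(0, 1.3, 2.7, 4.0, 5.4)   # hypothetical cumulative H2 amounts
    mass_g   <- 0.005                      # 5 mg of photocatalyst

    fit  <- lm(h2_umol ~ time_min)
    rate <- unname(coef(fit)["time_min"]) / mass_g   # slope per gram of catalyst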
As shown in Fig. 5(b), the HER rate increases slightly from 25 kHz to 45 kHz but diminishes with further increases in frequency. An increase in H2 evolution with increasing ultrasonic frequency can be attributed to the formation of more cavitation, which refers to the formation and collapse of tiny bubbles or cavities in water exposed to rapid changes in pressure. Cavitation can occur naturally, or it can be induced by external means such as ultrasonic waves [28]. However, an excessive number of active reactive sites beyond a certain threshold may pose a significant threat to process efficiency by deteriorating the effectiveness of the synthesized catalyst material [29].

Photocatalytic and sonophotocatalytic mechanism
Fig. 6 provides a comprehensive overview of the mechanisms underlying photocatalysis, sonocatalysis, and sonophotocatalysis. In the typical photocatalytic process, the photocatalyst (in this case, BST) absorbs photons when exposed to light, resulting in the creation of electron-hole pairs [9,30-32]. The electrons and holes are situated in the conduction band (CB) and valence band (VB), respectively [33], thereby enabling subsequent redox reactions. In sonocatalysis, the primary driving force is attributed to the spontaneous polarization of the BST nanowire. When ultrasonic waves are applied, both compression and rarefaction waves travel through the water, exerting alternating forces on the BST nanowires in the solution. This cyclic stress induces a built-in potential, leading to the generation of piezoelectric charges (q+ and q−), which in turn initiate oxidation-reduction reactions [28]. Under simultaneous sonication and illumination, the established electric field enhances the efficient separation and utilization of the photogenerated electrons and holes. This, in turn, leads to a synergistic increase in the rate of hydrogen evolution in sonophotocatalysis [33]. In summary, in a sonophotocatalytic process, the combined effects of ultrasound-induced microstreaming and cavitation, along with the developed built-in electric field, collaborate to boost the efficiency of charge transfer, ultimately resulting in enhanced photocatalytic activity.

Stability and recyclability of the photocatalyst
The stability of the BST-3 nanowires as a photocatalyst was investigated, as depicted in Fig. 7. The structural and chemical stability of catalysts plays a crucial role in determining their practical applicability [34,35]. To evaluate the catalyst's stability, ON and OFF experiments were conducted for multiple cycles. The illumination was turned off briefly after every 60 min, and this four-cycle process was repeated. Long-term stability, represented in Fig. 7(a), exhibited a linear increase in hydrogen production over time. Similarly, recycling experiments (Fig. 7(b)) confirmed the stability of BST-3 even after multiple cycles of HER experiments. XRD analysis and TEM images in Fig. 7(c, d) revealed no changes in the crystal structure and morphology of the BST-3 nanowires after four cycles. Hence, the photocatalyst demonstrated stability throughout four treatment cycles, indicating its potential for industrial-scale applications [36].
Conclusion
We synthesized BaTiO3@SrTiO3 core shell nanowires with varying Sr/Ba ratios and explored their photocatalytic performance for water splitting. The nanowires were characterized using various techniques to examine their structural, optical, and electrochemical properties. The best sonophotocatalytic performance (17.94 µmol·g−1·min−1) was observed in the BST-3 nanowire, which is explained by its largest surface area and the highest piezoelectric potential developed by ultrasonication. The simultaneous application of ultrasonication and irradiation on the nanowires increases hydrogen production via water splitting synergistically (almost 1.35 times) due to the piezoelectric field, which spatially separates the photo-generated charge carriers and consequently prolongs their lifetimes. The BST-3 nanowire was also observed to be functionally stable over multiple cycles. These findings suggest BaTiO3@SrTiO3 core shell nanowires as a promising sonophotocatalyst.

Fig. 3. (a) UV-Vis absorbance spectra obtained from DRS, (b) Tauc plots, (c) electrochemical impedance spectra, (d) transient photocurrent, (e) photograph of a piezoelectric nanogenerator (PENG) device, and (f) time-dependent open-circuit voltages of PENGs made from the nanowires under a motion frequency of 2 Hz and a strain of 4 mm.
Fig. 4. Total amount of H2 gas produced by the synthesized photocatalysts as a function of time by (a) sonocatalysis, (b) photocatalysis, (c) sonophotocatalysis, and (d) their respective rates of H2 gas evolution.
Fig. 5. (a) Total amount of H2 gas produced by the synthesized photocatalysts as a function of time at several ultrasonic frequencies and (b) their respective rates of H2 gas evolution.
Fig. 7. Photocatalytic performance of BST-3 nanowires for the hydrogen evolution reaction (HER): (a) long-term stability, (b) recyclability during multiple cycles, (c) X-ray diffraction patterns, and (d) TEM of the BST-3 nanowire after four cycles of HER.
Harnessing Event Report Data to Identify Diagnostic Error During the COVID-19 Pandemic

Introduction: COVID-19 exposed systemic gaps with increased potential for diagnostic error. This project implemented a new approach leveraging electronic safety reporting to identify and categorize diagnostic errors during the pandemic. Methods: All safety event reports from March 1, 2020, to February 28, 2021, at an academic medical center were evaluated using two complementary pathways (Pathway 1: all reports with explicit mention of COVID-19; Pathway 2: all reports without explicit mention of COVID-19, where natural language processing [NLP] plus logic-based stratification was applied to identify potential cases). Cases were evaluated by manual review to identify diagnostic error/delay and categorize error type using a recently proposed classification framework of eight categories of pandemic-related diagnostic errors. Results: A total of 14,230 reports were included, with 95 (0.7%) identified as cases of diagnostic error/delay. Pathway 1 (n = 1,780 eligible reports) yielded 45 reports with diagnostic error/delay (positive predictive value [PPV] = 2.5%), of which 35.6% (16/45) were attributed to pandemic-related strain. In Pathway 2, the NLP-based algorithm flagged 110 safety reports for manual review from 12,450 eligible reports. Of these, 50 reports had diagnostic error/delay (PPV = 45.5%); 94.0% (47/50) were related to strain. Errors from all eight categories of the taxonomy were found on analysis. Conclusion: An event reporting-based strategy, including the use of simple NLP to identify COVID-19-related diagnostic errors/delays, uncovered several safety concerns related to COVID-19. An NLP-based approach can complement traditional reporting and be used as a just-in-time monitoring system to enable early detection of emerging risks from large volumes of safety reports.

Diagnostic errors are receiving intense investigation in the safety community due to their high prevalence and harmful impact on patients. [1][2][3] Diagnostic errors affect 12 million US adult patients per year in the outpatient setting, 3 and at least 0.7% of adult admissions involve a harmful diagnostic error. 4 The COVID-19 pandemic has further strained the health care system, resulting in cognitive errors, burnout, challenges with hospital resources, and a rapid shift in operational workflows that may contribute to missed and delayed diagnoses. [5][6][7][8] Due to the novel characteristics of COVID-19-related disease, as well as its impact on hospital capacity, staffing shortages, and burnout, Gandhi and Singh proposed that the COVID-19 pandemic would exacerbate diagnostic errors. 7 They developed a taxonomy to define eight types of diagnostic errors that could be expected in the pandemic: Classic, Anomalous, Anchor, Secondary, Acute Collateral, Chronic Collateral, Strain, and Unintended. Classic and Anomalous refer to missed or delayed COVID-19 diagnoses, whereas the other six categories pertain to missed or delayed non-COVID-19 diagnoses that may result from factors related to the COVID-19 pandemic. The classification definitions as well as examples are described in Table 1. This approach accounted for diagnostic errors or delays based on the possible disruptions COVID-19 may have on health care providers and the health care system, and identification of these errors can inform the specific mitigation strategies discussed in the original article. 7
Examples include cognitive errors, such as various forms of availability and anchoring bias; care deferment; and the effects of rapidly expanding care delivery changes, such as the use of telemedicine and personal protective equipment (PPE). At our institution, the ability to recognize COVID-19-related diagnostic errors was an important part of the COVID-19 response. This included an increased emphasis on the use of data and transparency for a more rapid strategic response. 9 We thus embarked on a project to characterize diagnostic errors at our institution using the Gandhi and Singh taxonomy by leveraging our incident reporting systems. Although simplified clinician reporting mechanisms have recently been developed to improve reporting of diagnostic error, incident reporting has not yet been widely used to study diagnostic error. 10,11 There are a number of criticisms related to incident reporting systems, particularly the voluntary nature of reporting, which may lead to reporting bias and hindsight bias. [12][13][14] However, we believed that the data would be a readily accessible and valuable source of information about events pertaining to diagnostic errors during the pandemic.

As the project evolved, it became clear that the task of identifying COVID-19-related diagnostic errors included the need to analyze a high volume of safety event reports to discern whether a diagnostic error or delay occurred. Manual chart review for this volume of reports was not feasible given the multiple demands on our patient safety and risk management team during the pandemic; thus, we developed an informatics-based approach with the capability to preprocess large numbers of safety reports. To our knowledge, such an NLP- and logic-based cohort enrichment approach has not previously been applied to safety event reports to facilitate identification of COVID-19-related diagnostic errors. In this article we describe the results of our study, which aimed to identify and analyze diagnostic errors at a large US health care system in order to surface patient safety risks during the COVID-19 pandemic. To achieve the study aims, we sought to (1) rapidly develop a safety reporting-based workflow to identify sources of diagnostic error in our institution, particularly in the context of a novel pandemic; and (2) examine the application of Gandhi and Singh's classification framework in the real world and describe the categories of potential diagnostic errors that were found.

Setting
We conducted the study at an academic tertiary care referral center in the Northeastern United States with 753 inpatient beds and more than 135 ambulatory practices. The institution uses an electronic vendor-based safety reporting system (RL Solutions; RLDatix, London) capturing both inpatient and ambulatory safety events. Approximately 10% of our safety reports are from the ambulatory setting, with the remainder from the inpatient setting. Project managers within the Department of Quality and Safety reviewed all safety events related to COVID-19. We created customized fields within RL Solutions corresponding to the eight classes of COVID-19-related diagnostic errors and delays proposed by Gandhi and Singh. These custom fields were available to patient safety and risk management specialists who routinely review safety reports, but not to frontline staff filing the initial safety report.
Safety Reporting: Pathway 1 (Original Workflow)
Early in the pandemic, we developed a workflow using safety reports that either were manually flagged as COVID-19 related or explicitly mentioned COVID-19 or coronavirus. 15 All safety reports containing the keywords "COVID" or "coronavirus" were extracted from the safety report database. These reports were then manually reviewed for potential diagnostic error using the classification for COVID-19-related diagnostic error developed by Gandhi and Singh. Chart reviews were performed in instances in which the safety reports had insufficient detail (Figure 1). Because the resources needed were substantial, it was not feasible to review all safety reports to look for diagnosis-related signals.

Safety Reporting: Pathway 2 (Complementary Workflow)
We developed a second pathway later in the pandemic response to complement Pathway 1; it looked specifically at safety reports excluded from Pathway 1 and reduced the number of manual chart reviews through the application of an NLP approach. This second pathway included cases that may not have been explicitly linked to COVID-19 by the staff member filing the safety report. We developed the software algorithm iteratively, following methods used successfully to develop other health informatics innovations. 16 This included forming a working group with both clinical and informatics expertise, establishing design requirements, and using an agile approach for iterative development to produce a rapidly deployable tool that could be integrated into an existing workflow. We considered two main requirements for the NLP algorithm in the design phase. First, the algorithm must be able to process a high volume of safety reports; the high volume precluded manual review of each report to assess for COVID-19-related diagnostic errors. Second, the algorithm must be able to rank cases to optimize the efficiency of human review. In other words, safety reports should be rank-ordered such that study personnel can review a high-yield, enriched cohort of cases to discover COVID-19-related diagnostic errors. Pathway 2 included the following steps: (1) extraction of case-related details of safety reports from RL Solutions, (2) automated processing of safety report free text to categorize the concepts it contained, (3) creation of a ranked list of safety reports based on the number of concept categories flagged, and (4) manual case review of the enriched cohort. We wrote the algorithm in the R programming language (R Foundation for Statistical Computing, Vienna) and developed a custom NLP approach using heuristic keyword checking against a custom lexicon. In essence, we parsed the free-text report narrative for the presence of specific keywords in specific concept categories. The list of keywords was derived from working group consensus. Case-insensitive string matching for keywords, partial word fragments, common abbreviations, and misspellings was used. We used 11 concept categories based on an association with the diagnostic process: "COVID," "Communication," "Testing," "Orders," "Precautions," "Workflow gap," "Patient condition," "Symptoms," "PPE," "Diagnostic," and "Care plan," with multiple keywords for each. For example, the "Communication" concept category included the following keywords: "call," "video," "virtual," "VV," "misunderst," "telemedicine," "hear," "ipad," "communic," and "phone."
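The full lexicon and production code are in the authors' GitHub repository; a simplified sketch of the heuristic check might look like the following, where the Communication keywords come from the text above and the other two categories are illustrative:

    # Heuristic keyword check of a report narrative against concept categories
    lexicon <- list(
      Communication = c("call", "video", "virtual", "vv", "misunderst",
                        "telemedicine", "hear", "ipad", "communic", "phone"),
      Testing       = c("test", "swab", "specimen"),   # illustrative keywords
      PPE           = c("ppe", "mask", "gown", "n95")  # illustrative keywords
    )

    # TRUE/FALSE per category: case-insensitive partial matching within the narrative
    flag_categories <- function(narrative) {
      sapply(lexicon, function(words)
        any(sapply(words, grepl, x = narrative, ignore.case = TRUE)))
    }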
A full list of terms used for heuristic matching, as well as the R code for our algorithm, can be found in the GitHub repository (distributed under the GPL v3 license). 17 The COVID-19 category was used specifically to exclude reports processed through Pathway 1 to avoid redundancy. A safety report was deemed to have a match for a particular concept category if any matching keywords were found in the safety report narrative. A single safety report could contain multiple concept categories. For each safety report, the categories found were tallied; reports were then sorted in descending order of number of concept categories, as sketched in the code below. We hypothesized that cases involving more concept categories were more likely to be high yield for human review. Safety reports meeting the threshold of 6 or more flagged categories were manually reviewed by project managers to assess for the presence of diagnostic error or delay and subcategorization using concept analysis. A threshold of 6 was chosen from a practical perspective, as it generated a cohort that was of reasonable size for our team to manually review.

Review and Categorization for Pathways 1 and 2
Our working group, which consisted of two project managers, a clinician quality and safety expert, and a clinician-informatician, as well as an external clinician expert leader on quality and safety, was tasked with the design of the pathway and rigorous review of potential diagnostic error/delay cases using the Gandhi and Singh framework. Team members included both clinical (physicians, nurses) and nonclinical staff. Primary reviewers of cases were nonclinical project managers (MPHs) who were extensively trained in the Gandhi and Singh taxonomy at the start of the project. In addition, reviewers were provided with an infographic with definitions and examples of each category. A report was classified as having a diagnostic error or delay when it met the definition of one of the eight categories in the Gandhi and Singh taxonomy. When the safety report itself did not contain sufficient information for categorization, additional chart review in the electronic health record (EHR) was performed to find contextual details. Any reports in which either the presence of diagnostic error/delay or the most appropriate classification was unclear during the project manager review were subsequently reviewed by a physician team member of the working group for a clinical assessment. Unclear cases were subsequently presented in a group setting by the physician team member who performed the secondary review and discussed until consensus was reached.

Figure 1: This flowchart shows that a total of 14,230 safety reports were filed between March 1, 2020, and February 28, 2021. These were processed through two pathways. Pathway 1 (1,780 reports) contained all reports with explicit mention of COVID-19, whereas Pathway 2 (12,450 reports) used automated natural language processing to highlight specific cases for manual review. Manual review was performed for 1,780 reports in Pathway 1 and 110 safety reports in Pathway 2. A total of 95 cases of diagnostic error or delay were identified.

Figure 3: This chart illustrates that COVID-19-tagged safety reports (Pathway 1) had all types of errors represented, as compared to natural language processing-based reports (Pathway 2). Pathway 2 was most sensitive for detecting strain-type diagnostic errors.
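Continuing the sketch above, the tallying and ranking step referenced in the methods might be implemented as follows, assuming a data frame of reports with a free-text narrative column (names hypothetical):

    # Tally flagged categories per report, rank descending, apply the review threshold
    reports$n_categories <- sapply(reports$narrative,
                                   function(x) sum(flag_categories(x)))

    ranked     <- reports[order(-reports$n_categories), ]  # most categories first
    for_review <- ranked[ranked$n_categories >= 6, ]       # enriched cohort for review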
Pathway 1
Manual review of the 1,780 COVID-19-explicit reports in Pathway 1 revealed 45 reports with diagnostic error/delay, a positive predictive value (PPV) of 2.5% (45/1,780). Ten safety reports in Pathway 1 required additional chart review by a physician for classification. Safety reports explicitly mentioning COVID-19 peaked in April with 260 reports, before gradually declining to a steady average of around 100 reports per month (Figure 2). Pathway 1 had its highest yield in April and May 2020 with 10 diagnostic error/delay-related safety reports, with subsequent months having fewer cases. Of the cases identified in Pathway 1, the most common error type was "Strain" (n = 16, 35.6%), followed by "Unintended" (n = 9, 20.0%) (Figure 3).

Pathway 2
Of the 12,450 reports in Pathway 2, 110 were highlighted for manual review by the NLP-based tool using the threshold of 6 or more category flags. Fifteen of the 110 reports required additional content review, including accessing the EHR for clinical context. Fifty of the 110 reports were found to have a diagnostic error/delay, a PPV of 45.5% (50/110). Pathway 2 yielded an average of approximately 4.2 cases per month (Figure 2). Of the 50 diagnostic errors/delays found in Pathway 2, the predominant type was "Strain" (n = 47, 94.0%), with "Unintended" and "Chronic Collateral" making up the remainder (Figure 3). Due to the disproportionate representation of the "Strain" categorization in Pathway 2, our team opted to conduct an additional qualitative content review of these safety reports. We identified three major drivers and one minor driver as primary contributors to safety events in general: supply vs. demand imbalance, patient handoff, care provider fatigue and burden, and COVID-19 status uncertainty, respectively. Across the 110 manually reviewed safety reports in Pathway 2, 39 reports involved supply vs. demand imbalance (hospital resources or services insufficient to meet prompt clinical demand), 33 reports involved patient handoff (communication challenge or disagreement between providers, teams, or services during patient transfer or shift change), 25 reports involved care provider fatigue and burden (decision-making error by staff), and 5 related to COVID-19 status uncertainty (unclear patient COVID-19 infection status). The remaining 8 reports had other miscellaneous causes unrelated to the above four categories (Table 2).

DISCUSSION
We found that COVID-19 diagnostic errors accounted for 0.7% of all safety reporting volume during a one-year period of the pandemic (95 out of 14,230 safety reports). We developed two complementary pathways to enrich the cohort of reports for manual review: a manual review pathway for COVID-19-explicit reports (PPV = 2.5%) and an NLP-prescreened manual review pathway for the remainder of reports (PPV = 45.5%). In addition, qualitative review of Pathway 2 safety reports revealed three major drivers and one minor COVID-19-specific driver that contributed to safety events (major: supply vs. demand imbalance, patient handoff, and care provider fatigue and burden; minor: COVID-19 status uncertainty). Safety reporting during the COVID-19 pandemic can serve as an important tool to recognize potential gaps that lead to diagnostic errors or delays in care, with the potential to serve as an early monitoring system.

NLP-Assisted Cohort Enrichment
As our project evolved, we quickly realized that the volume of safety reports made it impossible to manually review them all for COVID-19-related diagnostic errors.
As such, we needed a way to extract potential reports of interest from the larger pool. We first developed a workflow reviewing only COVID-19-explicit reports. For the remainder, which constituted the majority of the reports, we developed a rapidly deployable method of processing large volumes of safety reports to identify potential diagnostic errors. By focusing on a simple yet practical NLP approach that drove a logic-based ranking algorithm, we were able to better focus human resources on finding COVID-19 diagnostic errors (Pathway 2 PPV = 45.5%). Although the intent was to find diagnostic errors, we found that our algorithm logic was sensitive to strain-related safety events. A wide range of machine learning-based NLP with various degrees of complexity has been used for safety reporting in the past, 18-21 but we opted to use a simple heuristic keyword approach for the practicality of rapid deployment, without the lengthy machine learning-specific validation needed in such NLP approaches. We have successfully used the keyword heuristic approach in other informatics innovations at our institution. 16 The methods we developed may serve as potential resources that can be adapted and implemented in other health systems. We chose to use the R programming language because it is freely available, has easily interpretable (noncompiled) code, and is widely used in the biostatistics and informatics community, lowering the barrier to entry. In addition, by avoiding a machine learning NLP approach, retraining of a machine learning model for localization would not be necessary to deploy to other sites, a task that would likely require expertise and resources more commonly found only in large academic medical centers. Detection of early signals of trends or systemic patterns is increasingly important in the age of large volumes of data. 22 Our approach is generalizable outside of the COVID-19 pandemic because the keywords used are COVID-19 agnostic. For the specific tool developed in Pathway 2, the sensitivity and specificity can be easily adjusted by changing the threshold of categories flagged. An advantage of using a logical framework as the underpinning of the tool is that it can easily be adapted to search for other types of safety reports. For example, if an institution were interested in monitoring for signals related to COVID-19 testing in safety reports, one could create a query to identify safety reports containing both the COVID-19 and Testing concept categories. In addition, new features could be built on top of this foundation, including more complex tasks such as aggregating data for summary; searching for more specific concepts for in-depth secondary analysis for quality improvement, such as whether handoff errors were more common in specific locations; or feeding traditional and nontraditional dashboards, such as word clouds.

Limitations
One limitation of our tool is that, because manual review for diagnostic error was not done for all 14,230 safety reports, we are unable to estimate the sensitivity of our approach for all diagnostic errors. This is mainly because the tool was developed out of necessity as the project evolved and was not planned a priori. As such, our project was not scoped to have the resources to conduct a rigorous evaluation of the tool. However, given that the tool was designed to generate a cohort that was practical for our study staff, we think that this approach is still useful in a real-world setting.
Second, the lexicon we developed for keyword heuristic checking may not be all-inclusive. Although the number of diagnostic errors found was low, the figure in our study was quite similar to the 0.7% pooled rate of errors for hospitalized patients in a recent meta-analysis. 4 Future work could include further refinement and expansion of the heuristic categories used in this project. Safety Reporting Trend Shift During the Pandemic Response Overall safety report volume was similar in 2020 as compared to 2019, except for a notable decline from March to May. We saw a shift in the type of COVID-19 diagnostic errors as the pandemic went on. Initially, diagnostic error types varied, with the Strain, Unintended, Anomalous, and Chronic Collateral categories being the most frequent (Figure 3). Over the first four months, we saw both a decline in diagnostic errors found in COVID-19-labeled reports and a decrease in non-strain-related diagnostic errors. By June 2020, diagnostic errors were predominantly strain related and were found using Pathway 2 (Figure 2). Our observations suggest that certain types of diagnostic errors may have been more frequent early in the pandemic, when knowledge of the disease and its management was still evolving. Specifically, certain factors were more prominent at the start of the pandemic, including (1) the development, implementation, and repeated revision of regulations and policies that affect care delivery 23-25; (2) scientific and epidemiologic knowledge gaps on novel infectious diseases 26; (3) the need for education for both health care workers and patients 27; and (4) the development of individual attitudes and emotional responses toward the pandemic. 28,29 Recognizing that different types of errors may occur in different stages of a pandemic or with a novel disease may be useful for assessing future safety risks. Major Themes in the Pandemic-Strained Health Care Setting COVID-19 emphasized the need for real-time data mining for detection of early risk signals to target. 9,22 By coupling safety reports with an NLP-based approach, human resources can be directed to safety events that may be indicative of larger systemic problems while the number of cases is still small. At our institution, safety reports drew attention to various drivers of safety events. Some of these drivers, such as communication challenges related to patient handoff, have long been recognized as an important source of medical error. 30,31 Others, such as supply vs. demand imbalance, took on new meaning during periods of extreme clinical demands on the hospital system. From a capacity and surge response perspective, significant attention has been paid to considerations such as the number of beds and durable equipment such as ventilators and PPE. 32-34 However, Pathway 2-flagged safety reports repeatedly highlighted cases of supply vs. demand imbalance in less visible areas, such as patient transport and phlebotomy services. Early recognition of these signals can help inform a more robust response to ensure that additional resources are not overlooked in key low-visibility areas. Clinician fatigue and burden is often a difficult phenomenon to identify. Diagnostic errors, particularly certain categories such as Strain, Anomalous, or Anchor errors, 7 may suggest increasing clinician exhaustion.
Finding a general signal of strain may be useful as an institutional barometer as well as for identifying systemic areas to reinforce, as an overburdened clinician is an ineffective safety net and a potential source of error. The use of an NLP-based tool to assess for diagnostic error and strain may provide critical information for health system leaders. Within our institution, we saw an increase in strain-related reporting several months after the start of the pandemic that persisted for the duration of the study period, regardless of the number of active COVID-19 patients. The NLP-based tool can be applied to identify diagnostic errors using safety reporting data. Although our organization is not currently leveraging this approach due to limited team bandwidth, our goal is to employ it in the near future. CONCLUSION During a one-year period in the COVID-19 pandemic, our organization developed a new safety report-based workflow to identify diagnostic errors and delays related to COVID-19. A strategy using two complementary approaches (traditional reporting and a simple NLP- and logic-based algorithm) was effective in discovering diagnostic errors and could be a useful early signal detector for trends in safety reports. This strategy significantly reduced the number of manual reviews needed to find a true diagnostic error and can be readily adapted and applied to other settings and situations. All eight categories of diagnostic errors previously described by Gandhi and Singh were found at our institution, highlighting the need to address each of them through multifaceted interventions.
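As a rough illustration of the prescreening logic described in the Discussion, the sketch below flags a report when keywords from six or more concept categories appear in it. The authors implemented their tool in R; this Python version only mirrors the rank-by-logic idea, and the category names and keyword lists are illustrative stand-ins, not the study's actual lexicon.

```python
# Minimal keyword-heuristic prescreen: flag a safety report for manual review
# when it trips at least `threshold` concept categories. Lexicon is invented
# for illustration; the study's real lexicon is not reproduced here.

ILLUSTRATIVE_LEXICON = {
    "delay":       {"delay", "delayed", "waited", "pending"},
    "handoff":     {"handoff", "sign-out", "transfer", "shift change"},
    "strain":      {"short-staffed", "overwhelmed", "backlog", "no beds"},
    "testing":     {"swab", "pcr", "test result", "not resulted"},
    "symptoms":    {"fever", "cough", "hypoxia"},
    "uncertainty": {"unclear", "unknown status", "presumed"},
}

def flagged_categories(report_text: str, lexicon=ILLUSTRATIVE_LEXICON) -> set[str]:
    """Return the concept categories whose keywords appear in the report."""
    text = report_text.lower()
    return {cat for cat, words in lexicon.items()
            if any(w in text for w in words)}

def needs_manual_review(report_text: str, threshold: int = 6) -> bool:
    """Escalate reports that trip >= threshold categories (6 in the study)."""
    return len(flagged_categories(report_text)) >= threshold
```

Lowering `threshold` trades specificity for sensitivity, which is the tuning knob the authors highlight for adapting the tool to other monitoring questions.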
2021-10-29T13:13:19.428Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "a1adccbf929fa833bcec8afd63542f7b37eb9d0f", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.jcjq.2021.10.002", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "8bda5e4fa2fa3a8cdef8ad15feba9acb0b0c5fa3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
225451379
pes2o/s2orc
v3-fos-license
Enhancing knowledge exchange and performance recording through use of short messaging service in smallholder dairy farming systems in Malawi Abstract Monitoring animal performance is a challenge due to the lack of systematic recording in the smallholder dairy sector in Malawi. A mobile recording system using short messaging service (SMS) was therefore trialled for data capture and subsequent feedback provision to farmers following analyses and interpretation. This study aimed at drawing lessons regarding use of an SMS recording system among dairy farmers. Of the 210 participants, 85% were farmers and 15% were other dairy value chain players. Farmers were from eight intervened (monitored for 18 months) and eight control Milk Bulking Groups (MBG). There are three regions in Malawi; the Central region had the highest number of participants submitting data using SMS [59% (124)], compared with the Northern [23% (49)] and Southern [1% (2)] regions. Milk production was the most recorded data, and analyses showed that the mean yield in litres per cow (10.7 ± 0.14) was similar to the average estimate in the literature for Malawi (10.4 ± 1.57). Household daily milk consumption (1.2 ± 0.04), milk sold through the formal market (610.0 ± 55) and the amount of milk rejected per day per MBG (5.9 ± 0.86), all in litres, were captured. Farmers asked questions and received timely feedback via SMS. Therefore, it is possible to capture quality data using SMS technology that is adequate for conducting analyses to inform decision-making. PUBLIC INTEREST STATEMENT Monitoring dairy cattle milk production performance is an important activity for dairy farmers as well as other value chain players in the dairy industry. However, due to the lack of systematic recording in the smallholder dairy sector in developing countries such as Malawi, record keeping remains a challenge. Our study has shown that a mobile telephone short messaging service recording system (mSMSRS) captures accurate data from farmers and reduces time and cost of transport. Therefore, promotion of record keeping using short message service (SMS) based technology among smallholder dairy farmers would allow the generation of data useful for monitoring animal performance and for breeding analyses to inform decision-making among stakeholders in the dairy value chain in developing countries. The mSMSRS is preferred as it is quick, cheap and enables timely provision of feedback to farmers in case of questions, and the majority of households have access to a basic mobile telephone. Introduction Despite efforts to enhance access to technical, extension and other services by different stakeholders in the dairy industry in Malawi to improve dairy cattle milk production, the performance of animals kept by smallholder farmers, who dominate the dairy sector, is far from satisfactory. Substandard sources of improved animal genetics, poor animal health, feed shortage, poor prices for milk and poor animal performance monitoring constitute fundamental constraints resulting in inefficiencies among the smallholder dairy farmers (Kawonga et al., 2012a; Tebug et al., 2012a). The smallholder dairy farmers contribute more than 60% of the milk processed in the country (Chagunda et al., 2007).
The smallholder dairy sector in Malawi is well structured from farm gate to consumers, and for the past decades it has received diverse initiatives from different dairy value chain players that have resulted in a marked increase in the numbers of smallholder dairy farmers and dairy cows (Chindime et al., 2016), estimated at more than 21,000 and 98,000, respectively, as of 2020. The smallholder dairy sector is also expanding to farming communities outside the traditional milk-producing areas, called milk shed areas. In the Malawi context, a milk shed area is an area of high concentration of milk production for commercial markets, designated according to the regional set-up of the country. At smallholder farm level, the herd size ranges from one to four cows, with differences attributed to gender of household head (in favour of male-headed families), farming experience and feeding system (Tebug et al., 2012a). The direct implication of this increase in the number of smallholder dairy farmers is a simultaneous growth in demand for access to critical services such as extension, health, breeding and finance services; factors that are associated with dairy cow management and fertility (Banda et al., 2012). In the case of extension services, one of the notable impacts of the high population of agrarian farmers and institutional extension reforms is a low ratio of extension staff to farmers, estimated at 1:2,700 for Malawi (Baur et al., 2017). Compared to other agricultural sub-sectors, the latter factor has had a substantial negative impact on the smallholder livestock sector in Malawi. With respect to dairy cow management at farm level, recording is one of the critical activities for enhancing the monitoring of management and breeding strategies (Kawonga et al., 2012a). Record keeping is, however, generally weak among smallholder dairy farmers in Malawi (Kawonga et al., 2012b). One reason is that, according to farmers, it is not easily evident how record keeping contributes to overall farm performance. Previous efforts to introduce recording using forms at smallholder farm level were constrained by failure to conduct timely analyses and subsequently utilise the records. It is also costly to frequently visit farms, collect records, take them to a recording unit, perform analyses and provide feedback to farmers so that they can mitigate problems and hence improve herd productivity. To mitigate this problem and re-introduce recording, the use of a mobile telephone short messaging service recording system (mSMSRS) to enhance recording on farms, transmission of data to a recording unit and provision of feedback to the farmers was proposed and trialled. The mSMSRS was preferred as it would provide quick, cheap and timely feedback to farmers. Noting that the majority of smallholder dairy farmers have access to a basic mobile phone, it was therefore worthwhile to test the mSMSRS. The use of information and communication technology (ICT) has already been demonstrated to be helpful in Sub-Saharan African countries such as Kenya (e.g., iCow) and South Africa in improving the quality of public services by making them faster, dependable, available in real time, and more citizen-centred (Blessing & Julius, 2010). Martin et al. (2020) indicated that mobile phone technologies are one of the important tools for improving nutritional security, especially in rural areas.
The use of ICT-based technologies in smallholder dairy farming, especially in Malawi, is relatively new, and its potential benefits, pitfalls and ease of integration in the smallholder sector based on field experience have not been assessed. Interestingly, there has been an increase in the number of dairy farmers owning mobile phones, estimated at 42% in 2017 for Malawi (Arne, 2018), which constitutes a fundamental factor in creating a favourable environment for integrating ICT-related technologies into day-to-day farming activities. As part of implementation and monitoring, a pilot project was conducted to draw lessons through assessing the integration of an mSMSRS among the smallholder dairy farmers in Malawi. Further, we tested the hypothesis that data collected through SMS is dependable for informing decision-making. Mobile SMS recording system The mSMSRS used in this study was an interactive and easy-to-use system. The central principle of the mSMSRS was the ability to allow either the researcher or the farmer to initiate a conversation through SMS. This new system was developed to help smallholder farmers and other value chain players to share information and/or get quality and reliable information through SMS, optimising time and enhancing farmers' rapport with relevant experts in the dairy sector. Prior to the mSMSRS, the existing telecommunication technologies in Malawi were the regular one-way SMS services from telecommunication companies, which were not addressing the need for convenient data collection. Thus, the mSMSRS was an innovation that leveraged advances in the internet of things to bring an interactive data collection system through SMS. All the participants gave consent to participate in the study during induction sessions or following communication through print media. Smallholder dairy farmers took records and sent data through SMS via a short code to a data recording unit based at the Bunda College Campus of Lilongwe University of Agriculture and Natural Resources (LUANAR) in Lilongwe district, Malawi. In addition, farmers were able to submit additional information related to their day-to-day farming activities, including asking questions related to general livestock farming practices. The submitted data were recorded online before being downloaded and used for further analyses. From the data recording unit, feedback based on the outcome of data analyses was sent to farmers via SMS. In addition, the data recording unit periodically sent general SMSs to all registered farmers to remind them of recommended routine livestock husbandry practices. The dairy husbandry information disseminated through the mSMSRS was based on the national agricultural calendar. Figure 1 shows a conceptual framework of the recording system implemented using the mSMSRS technology. Data collection and management As a component of monitoring the implementation, the study was conducted between September 2013 and February 2015. Data captured through the mSMSRS via a short code came from the three regions of Malawi (Southern, Central and Northern). To ensure that the influence of this SMS-based recording technology could be determined, eight Milk Bulking Groups (MBGs) were strategically selected. The MBGs are smallholder dairy cattle farmer groups formed to facilitate selling of chilled fresh milk in bulk to processors in an organised and regulated manner.
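To make the ingestion step of the mSMSRS workflow described above concrete, here is a minimal sketch: a farmer texts a short code, the recording unit parses the message into a record, and free-text messages are queued for expert feedback. The message syntax ("FARMERID KEYWORD VALUE", e.g. "F1023 MILK 11.5") is an illustrative assumption; the paper does not specify the exact format farmers used.

```python
# Minimal SMS-ingestion sketch for an mSMSRS-style recording unit.
# Message format and keyword list are assumptions for illustration only.

from datetime import datetime, timezone

KNOWN_KEYWORDS = {"MILK", "SOLD", "CONSUMED", "REJECTED", "CALVING"}

def parse_sms(sender: str, body: str) -> dict:
    """Turn a raw SMS into a data record or a question for staff follow-up."""
    parts = body.strip().split()
    record = {"phone": sender,
              "received": datetime.now(timezone.utc).isoformat()}
    if len(parts) >= 3 and parts[1].upper() in KNOWN_KEYWORDS:
        # Structured report: value is litres for milk measures,
        # or a date string for reproduction events such as CALVING.
        record.update(type="data", farmer_id=parts[0],
                      measure=parts[1].upper(), value=parts[2])
    else:
        # Anything else is routed to the study team as a question/incident.
        record.update(type="question", text=body.strip())
    return record

print(parse_sms("+265991234567", "F1023 MILK 11.5"))
```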
The farmers from the selected MBGs went through an induction training session on how to use the system and were followed up every 3 months to get feedback on how they were using the system and to remind them to use it. Out of the eight groups chosen for monitoring, five MBGs (Bua, Bunda, Chitsanzo, Dzaonewekha and Magomero) were located in the Central region. The remaining three MBGs (Doroba, Kavuzi and Lusangazi) were in the Northern region. No training or regular contact was conducted in the Southern region, in order to leave it as the control group in the study. There was no interference in the activities of any of the MBGs and dairy value chain players. During the study period, the system could only accept SMSs from one of the two major network service providers in Malawi. Once an SMS was sent to the system, it was automatically acknowledged, and personnel from the study team received an alert to enable them to assess the message and determine the kind of feedback the sender needed. Data collected through the mSMSRS were routinely downloaded and added to a database. For individual farmers, data mainly comprised daily milk yield, milk supplied to the MBG, milk consumed, and questions. At MBG level, data included milk bulked and milk rejected. Before a farmer delivers milk to an MBG, a milk buyer at the MBG conducts a number of quality control tests before weighing the milk. The milk is tested for adulteration or sourness, and if the milk fails the tests it is returned to the farmer without being weighed. The data were managed and analysed using IBM SPSS Statistics for Windows, Version 22.0 (Armonk, NY: IBM Corp). Data analysis A one-sample t-test was used to assess the quality and reliability of the selected variables. Statistical differences were considered significant when P < 0.05. Quantitative and qualitative data analyses on the number of entries, segregated by predefined categories of data such as registration, milk production and marketing, and their adequacy to inform decision-making were conducted. Stakeholders were coded using the lowest possible description; some of the codes included dairy farmer, farmers association, academician, MBG milk buyer, non-governmental organisation, and, where possible, their location of operation. Graphs were plotted using GraphPad Prism for Windows, version 7.03 (GraphPad Software, La Jolla, California, USA). Information acquisition A total of 210 individuals participated in this study, comprising smallholder dairy farmers (85%), government staff (7%), MBG staff (7%), and academicians, a national association and a non-governmental organisation (NGO), the latter three categories each with a 1% proportion. Excluding the 17% of the stakeholders who did not indicate their district of origin and place of operation when registering into the system, the majority of the remaining 83% with known districts were from the Central region, which had the highest recorded number of stakeholders submitting registration data [59% (124)], compared with the Northern [23% (49)] and Southern [1% (2)] regions. The frequency distribution by stakeholder category is shown in Figure 2. The differences in the level of responses by the stakeholders on the use of the mSMSRS were attributed to the routine monitoring and induction sessions conducted in the Central and Northern regions. Clearly, the sensitisation and monitoring intervention had a substantial influence, especially on farmers' motivation to use the system.
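The one-sample t-test named in the Data analysis subsection above compares recorded values against a published reference figure, as done later for milk yield and rejection rates. A minimal sketch with SciPy follows; the yield values below are made-up illustrative numbers, not the study's raw records.

```python
# One-sample t-test of recorded daily yields against a literature reference.
# Data are illustrative placeholders; the reference mean is the national
# estimate cited in the paper (10.4 l/cow/day).

from scipy import stats

daily_yields = [9.8, 11.2, 10.5, 10.9, 10.3, 11.0, 10.6, 10.4]  # litres/cow/day
reference_mean = 10.4

t_stat, p_value = stats.ttest_1samp(daily_yields, popmean=reference_mean)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
if p_value < 0.05:
    print("Recorded yields differ significantly from the literature estimate.")
else:
    print("No significant difference from the literature estimate.")
```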
Comparing the intervened and un-intervened groups in the Northern and Central regions, the intervention had an influence on the number of participants registered in the mSMSRS (Figure 3). Smallholder dairy farmers ranked highest among the dairy industry stakeholders who registered and submitted data, as expected. However, variations were observed when comparing the respective 15 MBGs that participated in this study. Chitsanzo MBG in the Central region had the highest number of individuals registered (20%), followed by Bunda MBG (15%). Lusangazi MBG, located in the Northern region of Malawi, ranked third based on the number of farmers who participated (13%), and the remaining groups each had less than 6% representation of all 210 participants. Though the Northern region recorded a lower number of farmers (49) than the Central region (124), it was noted that most of the dairy farmers in the Northern region were using a mobile number from a different telecommunications service provider, which the system was not configured to accept during this study period. Therefore, it is possible that this contributed to the lower-than-expected number of individuals who participated from the Northern region compared to the Central region, where we also performed induction training sessions and periodic monitoring. Since one of the sensitisation approaches was through print media, we recorded users (n = 37) from the Southern region of Malawi, and these users were included in the un-intervened group where necessary in the analysis. Further analysis showed that, of the total 1,310 entries recorded in the mSMSRS, the highest entry category was milk production (701 entries), followed by 294 entries related to the registration process and 179 entries on milk marketing information. Queries and reporting of incidences had 122 entries, and data on reproduction (e.g., calving and insemination dates) recorded by farmers had the lowest number of entries (14). As expected, farmers submitted most of the data, and this showed that this SMS recording technology has the potential to be integrated into their day-to-day farming activities and consequently benefit the farmers and, subsequently, the rest of the stakeholders in the dairy industry. The participation of the other dairy value chain players demonstrated that the system is likely to contribute to enhanced sharing of information among stakeholders and thus enhance networking, knowledge transfer and coordination among them. Quality of sourced data The system recorded daily milk production performance data for 61 lactating dairy cows and, on average, each cow produced 10.7 ± 0.14 l of milk per day (Table 1). Farmers had the liberty to choose the frequency of submitting milk production data, either on a daily or a weekly basis. Very few [34 (5%)] farmers preferred submitting the records on a weekly basis. However, it could not be established whether this was the result of a positive attitude and motivation to use the mSMSRS along with the real-time feedback to their questions, where applicable. The average daily milk production estimated in this study was within the national estimate for Malawi reported in the literature, which ranges from 5 to 15 l of milk per cow per day for improved dairy breeds (Kawonga et al., 2012b; Tebug et al., 2012b). Pure and crossbred Holstein-Friesian and Jersey are the common dairy breeds kept among the smallholder dairy farmers in Malawi (Heifer International, 2015).
However, no information regarding the type of breeds in use by smallholder dairy farmers was sourced. We did not make a deliberate inquiry to assess farmers' knowledge regarding the breeds of the cows they had at the time of the study. The estimated milk rejection rate due to sourness was 5.9 ± 0.86 at MBG level, and it was not significantly different (P = 0.187) from the 7% rejection rate reported during a similar period (Civil Society for Agricultural Network [CISANET], 2014). The milk rejection rate based on the data recorded in the mSMSRS was between 1 and 40%. CISANET (2014) conducted their study during a similar period to ours and reported milk rejection rates ranging from 7 to 17%. The difference could be due to our observation that most of the MBGs do not measure the quantity of sour milk rejected at the cooling centres. This is the routine procedure in all MBGs, and it is therefore difficult to quantify the exact amount of fresh milk rejected at MBG level. The processors conduct similar tests on the bulked milk at cooling centres before collection. The amount of fresh milk chilled and collected daily by processors per group was estimated at 610 ± 55 l, which is lower than the daily collection of 700 to 5,000 l reported in a different study during a similar period (Thomson et al., 2013). Thomson et al. (2013) reported that milk production volumes in Malawi are season dependent and conducted their study between May and August 2013, whereas we covered at least one full season, from September 2013 to February 2015. This possibly also explains the wide range for some of the parameters, such as total milk output per month per farm (household) (150 to 549 l) and total chilled milk collected by a processor per day at MBG level (63 to 3,845 l). The number of farmers delivering milk to an MBG during a specific period is another possible confounding factor that could have influenced the wide range of the volume of chilled milk collected by processors per day in our study. Apart from submitting milk production and marketing data, farmers requested information; 106 inquiries were recorded. The most requested information was related to animal health (30%), reproduction (26%) and general husbandry information (11%). Six percent of the inquiries were complaints that the farmers submitted when there was a delay in receiving feedback to their respective inquiries. The Malawi dairy industry is constrained by various factors, of which animal health, reproduction and breeding are among the most prominent (Banda et al., 2012; Chagunda et al., 2016; Kawonga et al., 2012b; Tebug et al., 2012a). We suggest that some of these challenges are indirectly exacerbated by poor recording among the farmers and the low number of extension staff available to provide adequate and quality support, though the latter requires further investigation. We further suggest that the low number of queries on reproduction may be due to limited technical knowledge and skills among the smallholder farmers. It appears farmers are more concerned and immediately seek veterinary services when they observe an animal health-related issue on their farm, possibly out of fear of losing the animal, unlike asking for general husbandry information when the animal is in good health or simply to improve their farming practices.
Nevertheless, our results show that the information recorded through SMS reflected the situation on the ground, and it can help other dairy value chain players, such as non-governmental organisations, academia, research institutions and processors who constantly work with smallholder dairy farmers, to identify critical entry points for diverse innovative developmental interventions. The accumulating data can also later allow genetic analyses, which are missing in the smallholder sector. In rare cases, the questions were not related to dairy farming but to general livestock production, and this demonstrated that the system has the potential to be adapted for use in other agricultural sectors. Opportunities for success Smallholder dairy farmers showed interest, and we observed that there was potential for the farmers to adopt the mSMSRS as one of the credible sources of agricultural information. The level of use of the system nevertheless varied considerably across regions. Complaints received when farmers were not satisfied with the service mainly related to delays in receiving feedback. This indicated a potential perceived benefit of the system to the farmers. The mSMSRS did not accept messages from all telecommunication service providers, and this likely had a negative influence on the number of users of the system, given the limitation of requiring a change of telephone number. Further, literacy level, poor record keeping at farm level, low response if not periodically reminded, and the technical know-how of using some models of telephone constitute critical pitfalls for integrating technologies such as the mSMSRS in the smallholder dairy sector in Malawi. During induction sessions, we observed that some of the farmers had a mobile telephone but could not operate it. If in need of sending a text message, farmers without the technical know-how to use the phone reported consulting children, relatives or neighbours for assistance. It is therefore paramount that the system be adapted to such needs, for instance through the integration of voice and image messaging, to improve its chances of adoption and impact. Areas where the study team collaborated with government technical field officers had the highest numbers of smallholder dairy farmers submitting data. This may indicate that diversity in stakeholders' participation in using the mSMSRS is important and can influence the rate of acceptance of the system by farmers and its sustainability. Likely, the other dairy value chain players serve to provide constant reminders and encouragement. This underscores the need for a public-private partnership model when integrating ICT technologies in smallholder dairy farming. The public-private partnership model contributes to enhanced diversified income and nutrition security of rural farming communities in the dairy sector (Gondwe et al., 2013) and would be critical in promoting ICT technologies among smallholder dairy farmers. In addition, the potential to adopt the mSMSRS and the ability of the smallholder farmers to pay for the SMS information service depend on the context (GSMA, 2017). In this study, the farmers did not pay for the service. The extent to which information can be bundled with finance-oriented services to enhance farmers' capacity to pay for SMS-based services in Malawi awaits further investigation. Conclusion The study has shown that data collected through the mSMSRS are dependable for informing decision-making.
The system allowed any stakeholder across the dairy value chain to initiate a conversation, which was the key principle of the system. The results of the analyses of the data captured using the system did not contradict the literature. In addition, farmers received feedback in real time to guide management practices, and a database of records could be developed that would subsequently allow other analyses, such as genetic evaluation. The need to adapt some of the features of the mSMSRS to increase ease of use by farmers who are illiterate was observed. The capacity of the farmers to finance the service, and the motivation thereof, are some of the areas requiring further research.
2020-09-03T09:14:29.967Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "d26dcce837cf85470dc21f7c16a863926000d73e", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23311932.2020.1801214?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "4d4b27a1b8c608aa4e18f2d465ec10c9c5ab36b1", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Business" ] }
219188544
pes2o/s2orc
v3-fos-license
Prediction and Recommendation of Precision Medicine for Cancer using Machine Learning Techniques — Cancer is one of the major causes of death by disease, and treatment of cancer is one of the most crucial phases of oncology. Precision medicine for cancer treatment is an approach that uses the genetic profile of individual patients. Researchers have not yet discovered all the genetic changes that cause cancer to develop, grow and spread. A Neuro-Genetic model is proposed here for the prediction and recommendation of precision medicine. The proposed work attempts to recommend precision medicine to cancer patients based upon past genomic data of patient survival. The work employs machine learning (ML) approaches to provide recommendations for different gene expressions. This work can be used in cancer hospitals and research institutions for providing personalized treatment to patients using precision medicine. Precision medicine can even be used to treat other complex diseases such as diabetes, dental disease, cardiovascular diseases, etc. Precision medicine is the kind of treatment to be offered in the near future. I. INTRODUCTION Cancer is a complex disease and has the property of spreading and affecting other parts of the body. Often, one type of cancer can lead to another type of cancer in a patient. Around 2.25 million people in the world are living with cancer. Every year, around 18.1 million new cancer patients are added, and cancer-related deaths are around 9.6 million [1]. With the help of computer technology, a lot of data related to patients is available with hospitals: data related to patient history, drugs and/or therapies given, and disease-related information are available in sufficient amounts. Cancer is a complex disease whose complete causes are not yet known to researchers; cancer disease complexity is defined in four stages. Effective cancer prognosis techniques need to be developed for early detection of cancer. Cancer is also called a genetic disease, as many genetic changes occur in patients with this disease. Cancers caused by genetic changes in the tumor can be treated with precision medicine. Precision medicine is not a new clinical approach, but many technological advancements are taking place in recent times. Precision medicine is an approach where the patient is given treatment based on an understanding of the patient's genomics. It is observed that, in cancer, the genetic changes happening in one patient's tumor are not the same as in another patient. This is the reason the treatment given to one patient suffering from a given type and stage of cancer is not suited to another patient suffering from the same type and stage of cancer. There is a strong need to treat cancer patients with personalized or precision medicine and not with a generalized one. A lot of research is happening in the field of precision medicine, which can also play a major role in the prognosis of a disease. Gene mutations take place in cancer. A gene controls the functioning of cells, especially how they grow and divide. When a gene is mutated, cells can grow abnormally and lead to the development of a tumor. This tumor can be benign or malignant. A benign tumor is not cancerous and does not spread; it does not cause any harm to the patient. A malignant tumor is cancerous; it can grow, spread and lead to many different types of cancers. Nearly 5% to 10% of cancers are caused by mutations in genes inherited from the parents.
Other cancers are caused by factors such as age, gender, environmental factors such as exposure to UV radiation, lifestyle, and consumption of tobacco, alcohol, etc. Most cancer deaths observed worldwide are due to breast cancer, lung cancer and colorectal cancer; these cancer types together cause nearly 2 million deaths every year. Also, a particular cancer can lead to the development of another type of cancer. The idea of precision medicine is to accurately predict such genetic changes occurring in a cancer and propose a medicine that will suit the patient. Next-generation sequencing, one of the DNA sequencing technologies available today, has revolutionized cancer-related research such as the precision medicine for cancer proposed here [2]. Microarray is also one of the advanced technologies available for DNA sequencing [3]. Though both technologies have their own merits and demerits, the choice of technology for DNA sequencing strongly depends on the application to be designed. This paper presents the application of machine learning techniques to cancer patient data, helping doctors to predict and recommend precision medicine to a patient based on genomic data. II. RELATED WORKS Machine learning algorithms are being widely used in cancer disease prediction and in the recommendation of precision medicine for cancer. To date, different machine learning algorithms have been used to find the correlation between patients' molecular profiles and the drugs given. Though scientists are not yet able to identify all the genetic changes associated with cancer, much research is in progress in the field of bioinformatics. Chih et al. proposed a deep learning model to predict drug response based on mutation and expression profiles of cancer or tumor cells. Because of the fundamental differences between in vitro (processes performed outside living organisms) and in vivo (processes performed inside living organisms) biological systems, the translation of pharmacogenomic features derived from cells to the prediction of drug response in tumors is not yet understood [4]. Lin et al. proposed a deep learning approach for predicting antidepressant response in major depression using clinical and genetic biomarkers. The goal of the proposed work is to establish deep learning models which distinguish responders from non-responders and to predict possible antidepressant treatment outcomes in major depressive disorder (MDD) [5]. Huang et al. proposed an open-source machine learning algorithm for the prediction of precision medicine for cancer. The aim of precision medicine is to find optimal drug therapies based on the genomic profiles of individual patient tumors. An open-source platform was introduced which employs a highly versatile support vector machine algorithm combined with a standard recursive feature elimination approach to predict personalized drug responses from gene expression profiles. The model gives 84% accuracy when tested on the NCI-60 dataset [6]. Singhal et al. developed a machine learning-based method to automatically identify the mutations reported in the biomedical literature related to certain diseases. A tool was designed that will be helpful in predicting precision medicine for certain diseases, with PubMed literature used as the source of data. Three diseases were targeted: prostate cancer, breast cancer, and age-related macular degeneration (AMD).
The obtained result indicates that this approach will greatly benefit the curation of mutation-disease databases on a mass scale [7]. III. RESULTS A study of the literature in terms of parameters such as validation methods used, important features, ML methods and type of data used is given in Table 1. IV. PROPOSED METHOD The aim of the proposed methodology is to devise a decision support system (DSS) which will assist doctors in predicting and prescribing suitable medicine for cancer, termed personalized or precision medicine. The proposed architecture uses a cancer patient's individual historical data as input to the DSS, which processes it and provides personalized decisions/recommendations as output. The proposed architecture is presented in Fig. 1 below. It is divided into four phases, I to IV. Fig. 1. Proposed architecture for the prediction of precision medicine. Phase I: Multi-Omics Data Collection: This phase aims at collecting the data required for the prediction and recommendation of precision medicine. It is a two-step procedure, as mentioned below.  Data Collection: Data will be collected for building the proposed model using DNA sequencing technology for cancer patients.  Data Preprocessing: Handling of noise and missing data and encoding of categorical data will be done under this module. Phase II: Attribute Selection: An encoding technique will be applied to the patient's data. Genetic feature selection will be done through this module. Phase III: Pattern Mining: A hybrid Neuro-Genetic model will be designed and implemented for mining patterns from drug-genome datasets for the prediction and recommendation of personalized precision medicine. Phase IV: Recommendation of precision medicine: Filtering of computed recommendations and finding strong recommendations of personalized medicine for the patients will be done in this module. The system methodology is given below in Fig. 2. It consists of a hybrid Neuro-Genetic model: a combination of an artificial neural network (ANN), which is a good classifier, and a genetic algorithm. In the proposed system, the neural network classifies gene-drug correlation data points to determine whether a patient responded to a prescribed drug or not. The model also includes a genetic algorithm (GA), a strong search and optimization algorithm. In the proposed architecture, the genetic algorithm serves two purposes: (i) to select the best inputs from the available patient dataset, and (ii) to determine the connection weights of the neurons so as to improve the performance of the neural network, achieving more accurate predictions, i.e., the most suitable drugs for an individual genomic profile. The genetic algorithm takes the patient dataset as its initial population. Candidate solutions are selected from this population to undergo selection, crossover and mutation operations in order to produce the next generation of solutions. The GA then sends this new population of solutions as input to the ANN. Finally, the fitness of the ANN's predictions on the new population is calculated. Designing a fitness function is challenging, as one needs to decide on a fitness criterion appropriate to the given context. V. FUTURE SCOPE Precision medicine is still in its infancy in the medical field. Due to the availability of large amounts of electronic patient data, there is a strong need for techniques to analyze this data. Existing data mining techniques are not able to handle and analyze critical medical data successfully.
The available ML methods show good computational performance but lag in clinical testing. As a result, model performance cannot be fully analyzed, leaving little opportunity for improvement. This can be addressed by making DNA sequencing mandatory for cancer patients and by creating awareness among doctors to use the DSS while prescribing treatment to the patient. VI. CONCLUSION Precision medicine has been tried in a large number of diseases in recent years, such as pediatric oncology, psychiatric disorders, cardiovascular diseases, diabetes treatment, severe asthma, dentistry and cognitive ageing, to name a few. The majority of these methodologies are based on the patient's molecular profile, such as genomics, proteomics, etc., and use machine learning techniques. This paper has proposed the prediction and recommendation of precision medicine for cancer patients based on genomic data using a hybrid Neuro-Genetic model. The proposed model can give better accuracy compared to existing methods. Genomic profiles of patients are an important parameter in personalized treatment. Once the genetic variations of an individual are completely understood, the most appropriate and personalized treatment can be given to patients for diseases which alter the genomic profile of a patient, such as cancer, diabetes, etc. The biggest challenge in using ML techniques to analyze genomic data is the interpretation of results: the results obtained by ML techniques need to be interpreted by a medical professional in the correct context. The proposed work integrates biology and computer science techniques, which demands collaboration between computer scientists and medical professionals.
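To ground the hybrid Neuro-Genetic design described in the Proposed Method section, the sketch below shows a genetic algorithm evolving the connection weights of a small neural network that classifies gene-drug profiles as responder (1) vs. non-responder (0). The network size, GA settings and random data are illustrative assumptions, not the authors' implementation.

```python
# GA-trained neural classifier: each GA chromosome encodes the full weight
# vector of a two-layer network; fitness is classification accuracy.

import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 8, 4
X = rng.normal(size=(60, n_features))        # stand-in gene-expression profiles
y = (X[:, 0] + X[:, 3] > 0).astype(float)    # stand-in drug-response labels

n_weights = n_features * n_hidden + n_hidden  # two weight layers, no biases

def predict(w, X):
    W1 = w[: n_features * n_hidden].reshape(n_features, n_hidden)
    W2 = w[n_features * n_hidden:]
    h = np.tanh(X @ W1)
    return 1 / (1 + np.exp(-(h @ W2)))        # sigmoid output

def fitness(w):
    """Accuracy of the network encoded by chromosome w (the GA's objective)."""
    return np.mean((predict(w, X) > 0.5) == y)

pop = rng.normal(size=(40, n_weights))        # initial population of chromosomes
for generation in range(100):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]   # selection: keep the fittest half
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        mask = rng.random(n_weights) < 0.5    # uniform crossover
        child = np.where(mask, a, b)
        # sparse Gaussian mutation on ~10% of genes
        child = child + rng.normal(scale=0.1, size=n_weights) * (rng.random(n_weights) < 0.1)
        children.append(child)
    pop = np.vstack([parents, children])

print("best training accuracy:", max(fitness(w) for w in pop))
```

The same loop structure accommodates the paper's other stated GA role (input selection) by adding binary "feature mask" genes to the chromosome.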
2020-01-09T09:11:02.227Z
2019-12-30T00:00:00.000
{ "year": 2019, "sha1": "d6d2f00b8bc39aa76156acd38fdbb55d2eae22f4", "oa_license": null, "oa_url": "https://doi.org/10.35940/ijeat.b3727.129219", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "517bc7034f27cafa3e6920f916b58d81ff2a004f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
12316318
pes2o/s2orc
v3-fos-license
Alternative divalent cations (Zn2+, Co2+, and Mn2+) are not mutagenic at conditions optimal for HIV-1 reverse transcriptase activity Background Fidelity of DNA polymerases can be influenced by cation co-factors. Physiologically, Mg2+ is used as a co-factor by HIV reverse transcriptase (RT) to perform catalysis; however, alternative cations including Mn2+, Co2+, and Zn2+ can also support catalysis. Although Zn2+ supports DNA synthesis, it inhibits HIV RT by significantly modifying RT catalysis. Zn2+ is currently being investigated as a component of novel treatment options against HIV, and we wanted to investigate the fidelity of RT with Zn2+. Methods We used PCR-based and plasmid-based alpha complementation assays as well as steady-state misinsertion and misincorporation assays to examine the fidelity of RT with Mn2+, Co2+, and Zn2+. Results The fidelity of DNA synthesis by HIV-1 RT was approximately 2.5-fold greater in Zn2+ when compared to Mg2+ at cation conditions optimized for nucleotide catalysis. Consistent with this, RT extended primers with mismatched 3′ nucleotides poorly and inserted incorrect nucleotides less efficiently using Zn2+ than Mg2+. In agreement with previous literature, we observed that Mn2+ and Co2+ dramatically decreased the fidelity of RT at highly elevated concentrations (6 mM). However, surprisingly, the fidelity of HIV RT with Mn2+ and Co2+ remained similar to Mg2+ at lower concentrations that are optimal for catalysis. Conclusion This study shows that Zn2+, at optimal extension conditions, increases the fidelity of HIV-1 RT and challenges the notion that alternative cations capable of supporting polymerase catalysis are inherently mutagenic. Background Divalent cations are essential co-factors for polymerase catalysis and are also required for the RNase H activity of reverse transcriptase (RT) [1,2]. HIV-1 RT is a heterodimer consisting of p66 and p51 subunits, with the p66 subunit performing both the polymerase and RNase H activities [3]. Under physiological conditions, Mg2+ functions as the co-factor for both activities. In addition to Mg2+, RT in vitro can use alternative divalent cations such as Mn2+, Cu2+, Co2+ and Zn2+ for polymerase activity [4]. These cations are important to many cellular processes and are tightly regulated. The total concentration of Zn2+ in cells is ~0.1-0.5 mM [5-8], while the total concentration of Mn2+ in red blood cells is ~2.5-3 μM [9,10], and Co2+ in the serum is in the low μM range [11]. The available free concentration of all these cations is kept extremely low by cellular mechanisms [12,13].
Therefore, we believe these divalent cations do not play a significant role in the HIV replication lifecycle. One of the most notable effects of alternative divalent cations on polymerases is alteration of polymerase fidelity. Mn2+, Co2+, and Ni2+ have all been shown to dramatically decrease the fidelity of DNA synthesis by several human, bacterial, and viral polymerases including HIV RT [37-43]. Mn2+ and Co2+ decreased the fidelity of avian myeloblastosis virus (AMV) RT and human DNA polymerase I in a concentration-dependent manner [40]. Increased error frequency in the presence of Mn2+ has also been observed in vitro with HIV RT [43], Escherichia coli DNA polymerase I [44], phage T4 DNA polymerase [45], DNA polymerases α and β [46], and Taq polymerase [47]. Most of these experiments were performed using concentrations of divalent cation higher than those required for maximal enzyme activity. However, we recently reported that physiological Mg2+ concentrations, which are lower than the high concentrations typically used to optimize enzyme kinetics in vitro, can increase RT fidelity [48]. Given the potential of Zn2+-based compounds as novel drugs against HIV and the vast amount of literature on alternative cations like Mn2+ and Co2+ being pro-mutagenic at elevated concentrations, we wanted to investigate the fidelity of HIV RT with each of these cations. Although Mn2+ and Co2+ were previously demonstrated to support RT catalysis, our recent publication [20] was the first to show (to our knowledge) that Zn2+, a potent polymerase inhibitor, can also support polymerase catalysis [15]. Therefore, we wanted to look more closely at how this previously untested divalent cation affects RT fidelity. A better understanding of the fidelity of RT with these alternative cations could also be important for modulating the accuracy of RT-PCR reactions. Mn2+ is already being used in PCR reactions to generate random mutations [47]. In this report, we show that under optimal extension conditions, Zn2+ increases the fidelity of RT, an unprecedented observation for an alternative cation with a polymerase. We also show that presumed pro-mutagenic cations, such as Mn2+ and Co2+, are not mutagenic with HIV RT at concentrations optimal for dNTP catalysis. The potential mechanisms by which Zn2+ enhances fidelity, as well as the reason for the concentration dependence of mutagenesis, are discussed. Results Estimation of average and maximal extension rates of RT synthesis with the alternative divalent cations Optimal extension conditions for HIV RT with Mg2+, Mn2+, Co2+, and Zn2+ in the presence of 100 μM dNTPs were determined on a 425 nt RNA template derived from the gag-pol region of the HIV genome (as described in [20]). Optimal extension for each cation in the presence of 100 μM of each dNTP was observed at the following concentrations: 2 mM Mg2+, 0.4 mM Zn2+, 0.4 mM Mn2+, and 0.25 mM Co2+. Since a total concentration of 400 μM nts (100 μM each) was used in the assays, the free concentration of each cation for optimal extension was ~1.6 mM for Mg2+, 0.15 mM for Zn2+, 0.15 mM for Mn2+, and 0.07 mM for Co2+. Note that all three alternative cations showed maximal activity at much lower concentrations than Mg2+. This suggests that these alternative cations bind more tightly to RT than the physiological cation.
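The free-cation figures just quoted follow from 1:1 cation-dNTP chelation: given total cation Mt, total dNTP Lt, and a dissociation constant Kd for the cation·dNTP complex, the bound fraction solves a quadratic. The sketch below illustrates the solver; the Kd values are hypothetical placeholders (for Zn2+, back-calculated so that the reported 0.15 mM free comes out), since the paper reports the resulting free concentrations rather than the Kd's.

```python
# Free metal [M] for M + L <=> ML with Kd = [M][L]/[ML] (all in mM).
import math

def free_cation(Mt_mM: float, Lt_mM: float, Kd_mM: float) -> float:
    s = Mt_mM + Lt_mM + Kd_mM
    ML = (s - math.sqrt(s * s - 4 * Mt_mM * Lt_mM)) / 2  # bound complex
    return Mt_mM - ML

# 0.4 mM total dNTP (100 uM each of the 4 dNTPs), as in the assays:
print(free_cation(2.0, 0.4, 0.01))  # Mg2+: near-complete chelation -> ~1.6 mM free
print(free_cation(0.4, 0.4, 0.09))  # Zn2+ with hypothetical Kd     -> ~0.15 mM free
```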
Interestingly, we also found that Cu2+ supported RT catalysis, but optimal extension occurred at a much higher concentration of 3 mM (data not shown). Average and maximal extension rates were then calculated as described in Materials and Methods using the RNA template used for round 1 synthesis of the PCR-based lacZα-complementation fidelity assay. As expected, the rate of synthesis was fastest using Mg2+ and slowest with Zn2+ (Figure 1 and Table 1). An average extension rate of 1.8 ± 0.48 nts/s and a maximal extension rate of 7.4 ± 1.9 nts/s were observed with 2 mM Mg2+, whereas with 0.4 mM Zn2+, extension rates were 0.03 ± 0.02 nts/s and 0.19 ± 0.10 nts/s, respectively. Both 0.4 mM Mn2+ and 0.25 mM Co2+ decreased the average and maximal rates of extension as well (Table 1). HIV RT shows greater fidelity with Zn2+ in the PCR-based and plasmid-based lacZα-complementation fidelity assays The PCR-based assay was a modified version of an assay used previously to examine the fidelity of poliovirus 3Dpol [49,50] (Figure 2). The 115 nt region screened for mutations is shown in Figure 2C. The assay is capable of detecting all frameshift mutations and several substitutions (see legend) in this region [51]. The assay essentially mimics the reverse transcription process, since both RNA- and DNA-directed RT synthesis steps are performed. Most of the possible background mutations can be accounted for by performing a control in which plasmid DNA is PCR amplified to produce an insert identical to those produced in the complete assay. These inserts should comprise all error sources except the errors derived from HIV RT and T3 RNA polymerase. An average background colony mutant frequency (CMF, the number of white or faint blue colonies divided by total colonies) of 0.0019 ± 0.0014 was obtained (Table 2). This corresponds to 1 white or faint blue colony in every ~500 colonies. Further details of this assay are discussed in a recent publication by our group [48]. Using 2 mM Mg2+, a CMF value of 0.006 (about 1 mutant colony in every 167 total) was obtained after background subtraction (Table 2). Results using Co2+ were similar to Mg2+, while Zn2+ increased fidelity about 2.5-fold (with high statistical significance). Although Co2+ is reported to be mutagenic, its effect on the mutation rate of polymerases is concentration-dependent [40,42,46]. For example, the error frequency of avian myeloblastosis virus (AMV) RT increased from about 1 error per 1680 nt additions with Mg2+ to 1 error per 1100 nt additions with activating concentrations of Co2+ (1 mM), but increased further to 1 error per 200 nt additions when excess amounts of Co2+ were used (5 mM) [40]. Only 0.07 mM free Co2+ was used in these assays, and it is possible that Co2+ does not have a profound impact on fidelity at this concentration. This was further tested in the gapped plasmid assay described below. A second, gapped plasmid-based lacZα-complementation fidelity assay, similar to the phage-based lacZα gap-filling assay, was performed to further confirm the results obtained from the PCR-based assay. The gap filled by the polymerase is in a plasmid construct which, after fill-in, is directly transfected into bacteria. Bacterial colonies rather than phage plaques are scored by blue-white screening in this assay. This assay screens a large region (288 nts) of the lacZα gene, including the promoter sequence, and it avoids the enzymatic (T3 RNA polymerase and Pfu polymerase) background issues of the PCR-based assay.
The results (Table 3) were in strong agreement with the PCR-based assay (Table 2). In this assay, Mg2+ was modestly more accurate than 0.25 mM Co2+, while Zn2+ once again showed ~2.5-fold greater fidelity than Mg2+. Interestingly, Mn2+, a known pro-mutagenic cation for several polymerases including HIV RT [43], was comparable to Mg2+ in the assays when used at its optimal concentration (0.4 mM total and 0.15 mM free). However, both Co2+ and Mn2+ were highly mutagenic when used at 6 mM, an amount in the same range shown by others to decrease the fidelity of several polymerases in vitro [39-43,52]. There was a ~25-fold decrease in fidelity with 6 mM Mn2+ compared to 0.4 mM Mn2+. Similarly, a ~7-fold decrease in fidelity was observed with 6 mM vs. 0.25 mM Co2+. Both cations also showed severely inhibited polymerase activity at the 6 mM concentration, while Zn2+ incorporates only a few nts even after prolonged incubation at high concentrations (see [4]; results with Co2+ were similar to those shown with Mn2+ in this report). Overall, the results from both the PCR-based and gapped plasmid-based lacZα-complementation fidelity assays show that the fidelity of RT increases with Zn2+ and that presumed pro-mutagenic cations do not modify RT's error rate significantly when used at low concentrations optimal for catalysis. Estimation of mutation frequency from CMF and sequencing data An estimate of the base misincorporation frequency can be made from the CMFs in Table 2 and the sequencing results in Figure 3, as described before [48]. In experiments with Mg2+, ~41% (17/42) of recovered mutations, after excluding the background mutations, were insertions or deletions (indels), and ~59% (25/42) were substitutions. Using a 33.6% detection rate for substitutions and a 100% detection rate for indels in this region (see Figure 2C and accompanying legend) and a CMF of 0.0059 (from Table 2), an overall mutation frequency of 5.6 × 10−5 was calculated for substitutions and indels combined (see Figure 3 legend for further details); a code sketch of this arithmetic, and of the plasmid-based estimate introduced below, follows. [Figure 2C legend: The nt and amino acid sequence for the 115 base region of the lacZα gene scored in the assay. Both strands of the DNA plasmid are shown, since HIV RT synthesis was performed in both directions (see Figure 2A). A line is drawn above the 92 nts that are in the detectable area for substitution mutations, while frameshifts can be detected over the entire 115 nt region. Based on a previous cataloging of mutations in this gene [51], the assay can detect 116 different substitutions (33.6% of the 345 possible substitutions in the 115 nt sequence) and 100% of the frameshift mutations.] Synthesis with Zn2+ resulted in a higher ratio of indels vs. substitutions: ~63% (26/41) indels and ~37% (15/41) substitutions were obtained. With a CMF of 0.0025 (Table 2), a mutation frequency of 1.9 × 10−5, or ~1 error per 53,000 incorporations, was obtained for experiments with Zn2+. This value is also closer to the rate of ~1 error per 77,000 incorporations that was observed with more physiological (0.25 mM), though sub-optimal, Mg2+ concentrations [4]. It is also possible to estimate the mutation frequency using the plasmid-based assay results (Table 3).
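Before turning to the plasmid-based numbers, here is a minimal sketch of the two error-rate conversions used in this section: the detection-rate-corrected PCR-based estimate just described, and the plasmid-based formula ER = CMF/(D × P) applied in the next paragraph. The 230 nt denominator assumes the 115 nt region is copied once in each direction; the plasmid-case CMF inputs are illustrative values back-calculated to reproduce the reported rates, since Table 3 is not reproduced here.

```python
# Detection-rate-corrected mutation frequency (PCR-based lacZα assay).
def mutation_frequency(cmf, n_subs, n_indels,
                       sub_detect=0.336, indel_detect=1.0, nt_scored=230):
    total = n_subs + n_indels
    corrected_cmf = cmf * (n_indels / total / indel_detect
                           + n_subs / total / sub_detect)
    return corrected_cmf / nt_scored

print(f"{mutation_frequency(0.0059, n_subs=25, n_indels=17):.1e}")  # Mg2+: ~5.6e-05
print(f"{mutation_frequency(0.0025, n_subs=15, n_indels=26):.1e}")  # Zn2+: ~1.9e-05

# Plasmid-based combined error rate, ER = CMF / (D * P), with D = 448
# detectable sites in pSJ2 and P = 0.444 expression frequency [53].
def error_rate(cmf, detectable_sites=448, expression_freq=0.444):
    return cmf / (detectable_sites * expression_freq)

print(f"{error_rate(0.0054):.1e}")  # 2 mM Mg2+   -> ~2.7e-05 (CMF illustrative)
print(f"{error_rate(0.0022):.1e}")  # 0.4 mM Zn2+ -> ~1.1e-05 (CMF illustrative)
```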
As no sequencing data were acquired, a combined error rate for both substitutions and indels can be estimated using the formula ER = CMF/(D × P), where ER is the error rate, CMF is the colony mutant frequency (from Table 3), D is the total number of detectable sites for plasmid pSJ2, which is 448, and P is the expression frequency of the plasmid, which equals 0.444 [53]. The calculations yield a mutation rate of 2.7 × 10−5 for 2 mM Mg2+ and 1.1 × 10−5 for Zn2+. Mutation rates of 2.9 × 10−5 for 0.4 mM Mn2+ vs. 7.0 × 10−4 for 6 mM Mn2+ (~24-fold increase) and 4.8 × 10−5 for 0.25 mM Co2+ vs. 3.1 × 10−4 for 6 mM Co2+ (~7-fold increase) were obtained. Both the PCR-based and plasmid-based assays showed a comparable fidelity increase (~2.5-fold) for Zn2+ vs. Mg2+, although the calculated mutation frequency rates were modestly lower in the plasmid-based assay. Analysis of fidelity by steady-state kinetics also demonstrates higher fidelity with Zn2+ Kinetic assays have been used by many groups as a reliable way to estimate polymerase fidelity by insertion of specific nt mismatches or extension of specific mismatched primer termini (reviewed in [54-56]). Although pre-steady-state assays are more useful for understanding kinetic parameters for misincorporation, steady-state assays are much simpler to perform and typically yield results that are broadly similar to those of pre-steady-state assays [54]. Mismatched primer extension and running-start assays using the sequences shown in Figure 4 were performed with constant free cation concentrations of 0.4 mM Zn2+ or 2 mM Mg2+. [Table 2 note: In background assays, plasmid pBSM13ΔPVUII (Figure 1B) was used as a template in PCR reactions to generate the insert scored in the assays. Numbers shown are the colony mutant frequency (CMF), defined as white + faint blue colonies divided by total colonies. Refer to the Results and Methods sections for details.] Note that reactions with each cation were performed using different enzyme concentrations and time points (see Materials and Methods). This was necessary as catalysis with Zn2+ is much slower than with Mg2+, yielding negligible levels of extension under the conditions used for Mg2+. The results with Zn2+ were seemingly consistent with steady-state conditions, as the reactions were conducted over a prolonged period (30 min) and the ratio of unextended to extended primers remained high (i.e., the substrate was not significantly depleted). However, it is possible that these reactions may, to some extent, reflect pre-steady-state conditions, since RT-primer-template complexes are extremely stable in Zn2+ while catalysis is slow [20]. Because of these constraints, a direct comparison of the kinetic and equilibrium constants between the two cations cannot be made. However, the fidelity with Zn2+ relative to that with Mg2+ can still be estimated by comparing misinsertion ratios calculated for particular mismatches with each cation. The running-start assays performed here test RT's ability to misincorporate at a template C or G residue (depending on the sequence) after a "running start" on a run of T's immediately downstream of the primer 3′ terminus. Experiments were analyzed on denaturing polyacrylamide-urea gels (Figure 4B). A statistically significant (based on P-values) increase in fidelity was observed for all mismatches with Zn2+ (Table 4).
The misinsertion ratio for the G.A mismatch, which is a difficult mismatch to make [48], could not be evaluated with Zn2+ as no incorporation was detected (data not shown). In general, there was a ~4-fold to 8-fold increase in fidelity for different mismatches in Zn2+ compared to Mg2+.

[Table footnote (gapped plasmid assay): In background assays, the gapped plasmid was transformed into the bacteria, allowing the bacterial polymerases to fill in the gap. Numbers shown are the "colony mutant frequency" (CMF), defined as white + faint blue colonies divided by total colonies. Refer to the Results and Methods sections for details.]

The ability of RT to extend primers with mismatched 3′ termini in Mg2+ or Zn2+ was also evaluated (Figure 4C). Extension was more difficult with Zn2+, and the magnitude of the difference depended on the particular mismatch (Table 5). Fidelity increased between ~3-fold and 5-fold for the C.T, C.A, and C.C mismatches; however, the G.T mismatch showed a statistically insignificant (P-value of 0.375) change in fidelity. Consistent with the results of the running-start reaction, no extension of the G.A mismatched primer was detected with Zn2+. Overall, the results from running-start and mismatch extension assays are in strong agreement with the lacZα-complementation assays, showing that the fidelity of HIV RT improves in Zn2+.

Discussion

The results presented in this paper show that, at concentrations optimized for catalysis, Zn2+ increases the fidelity of HIV RT approximately 2-fold to 3-fold compared with the physiological cation Mg2+. Mn2+ and Co2+ decreased the fidelity of RT at high concentrations, but at optimal concentrations these effects were almost completely mitigated. For Zn2+, misincorporation (as determined by running-start assays) and mismatch extension (as determined with mismatched primer-templates) were both influenced, suggesting that both steps involved in fidelity could be affected by Zn2+. There may be several possible mechanisms by which Zn2+ alters fidelity. The geometry supported by different cations in the active site of polymerases has been proposed to affect fidelity. Magnesium supports tetrahedral symmetry at the active site, whereas Mn2+ accommodates square planar, octahedral, and tetrahedral symmetries (reviewed in [57]). The ability of Mn2+ to accommodate more than one type of symmetry may increase the reaction rate of misaligned substrates and hence decrease the fidelity of polymerization. It is important to note that although the results presented here indicate that Mn2+ is not highly mutagenic at optimal concentrations, this may not be the case for other polymerases. Crystal structures of polymerases with Zn2+ in the active site are not available, but Zn2+ has been crystallized in a distorted tetrahedral symmetry in erythrocyte carbonic anhydrase [58], as well as in a near-tetrahedral geometry in Zn2+ superoxide dismutase [59]. It is possible that Zn2+ supports a different geometry than Mg2+ in the active site and promotes a configuration of the amino acid residues that may be better suited to discriminate against misaligned substrates. Results showed that the fidelity of HIV RT with Mn2+ and Co2+ was concentration-dependent, as we observed previously for Mg2+ [48].

[Figure 3 legend (mutation spectrum): Numbering is as shown in Figure 2C. Deletions are shown as regular triangles; insertions are shown as downward triangles with the inserted base shown adjacent to the triangle, unless it was the same as the base in a nt run; and base substitutions are shown directly above or below the sequence.
Substitutions shown correspond to the recovered sequence for the coding strand; however, these mutations could have occurred during synthesis of the non-coding strand as well (i.e., a C to A change shown here could have resulted from a C to A change during synthesis of the coding strand or a G to T change during synthesis of the non-coding strand) (see Figure 2). Mutations recovered from HIV RT with 2 mM Mg2+ and mutations from background controls are shown above the sequence as open triangles and normal text or filled triangles and bold italicized text, respectively. Mutations from HIV RT at 0.4 mM Zn2+ are shown below the sequence. Individual sequence clones that had multiple mutations (more than one mutation event) are marked with subscripts adjacent to the mutations. Several clones with deletions (either single or multiple deletions) at positions 181-183, just outside of the scored region, were also recovered (not shown). This was the dominant mutation type recovered in background controls (19 out of 24 total sequences) and probably resulted from improper ligation events or damaged plasmid vectors (see [48]).]

Although Mn2+ is generally considered to be pro-mutagenic [57], the error frequency of several DNA polymerases usually increases as the Mn2+ concentration increases [52]. In one report, E. coli DNA polymerase I and mammalian DNA polymerase β both showed relatively high fidelity when lower concentrations (below ~100 μM) of Mn2+ were used, whereas higher concentrations led to greater mutagenesis. The high concentrations correlated with Mn2+ binding to the single-stranded template and possibly to secondary binding sites on the polymerase, raising the possibility that these factors promote the lower fidelity observed at high Mn2+ concentrations [60]. In this regard, E. coli DNA polymerase I has been reported to have as many as 21 Mn2+ binding sites on a single molecule but just a single high-affinity binding site [61]. The effect, if any, of binding at the secondary sites is unknown. Still, when compared to Mg2+, careful analysis with other polymerases has suggested that Mn2+ is pro-mutagenic over a range of concentrations [62]. Differences between our results and these may stem from intrinsic differences in the enzymes or from the different nucleic acid substrates used (many of the former experiments used homopolymers). Also, unlike RT, most DNA polymerases have intrinsic exonuclease activity. El-Deiry et al. [62] found that E. coli DNA polymerase I demonstrated a significant reduction in 3′ to 5′ exonuclease proofreading activity in the presence of Mn2+. This effect exacerbated the accelerated misincorporation observed with Mn2+. It is also possible that Zn2+ affects the rate of conformational change in the enzyme and that this leads to an alteration in fidelity. Catalysis with Zn2+ is extremely slow (Table 1 and [20]) even though the complex between the enzyme and the primer-template is over 100 times more stable than with Mg2+ [20]. This indicates that one or more of the steps in catalysis is slow. Conformational transition of the protein after binding the substrate contributes significantly to the ability of RT to add the correct substrate [63].

[Figure legend (assay DNA sequences): The sequence of the DNA used in each assay type is shown. The underlined nts show the only differences between the two templates. Only one primer was used in the running-start assays, and it terminated at the 3′ C nt before the dashes.
The four dashes indicate the 4 A nts that must be incorporated before RT incorporates the target nt (denoted by X or Y).]

Upon binding the substrate in Mg2+, the enzyme undergoes a conformational change to reach the transition state. A correctly matched nt then leads to tight binding and alignment of catalytic residues to promote catalysis, whereas a mismatched nt does not induce the tight-binding state, thereby facilitating the rapid opening of the specificity domain and release of the misaligned substrate [63]. The conformational change in the specificity subdomain (fingers subdomain) of the polymerase plays a key role in determining enzyme fidelity, and it will be interesting to investigate whether the modified catalysis with Zn2+ affects the conformational change, or its rate, in a way that might increase specificity. Consistent with the model of slower catalysis promoting higher fidelity, suboptimal Mg2+ concentrations also enhanced fidelity [48]. The observed enhancement was similar to that observed with Zn2+, as it resulted mostly from a decrease in substitutions rather than in insertion and deletion errors. Since insertions and deletions often result from primer-template slippage mechanisms, this suggests that both low Mg2+ and Zn2+ induce higher fidelity by intrinsically affecting RT catalysis rather than by altering primer-template properties. It is possible that lowering the Zn2+ concentration to suboptimal levels could also alter fidelity; however, catalysis declines dramatically as the concentration of Zn2+ is either lowered or increased [20], making it difficult to test this possibility. As was noted in the Introduction, the levels of available Zn2+ and other divalent cations such as Mn2+ or Co2+ are kept extremely low in cells. Also, it is highly unlikely that these cations could support HIV replication. Although we show that the alternative cations can support RT synthesis, the rate of nucleotide catalysis ranged from significantly reduced for Mn2+ and Co2+ to essentially negligible for Zn2+ (Table 1). Finally, the possibility of using supplements or natural minerals including Zn2+ to treat HIV infection must be approached with caution. Low μM concentrations of Zn2+, which can inhibit HIV RT [20], are still ~2-3 orders of magnitude greater than the level of free available Zn2+ in cells. Low μM concentrations of free Zn2+ in cells could have profound effects on the transcription of specific genes and on the oxidation state of cells. Nevertheless, Zn2+ as a constituent of cation-based compounds …

[Table footnote (running-start assays): Refer to the running-start sequences in Figure 2. The particular mismatch that was measured after incorporation of a run of A's over a run of T's on the template is shown in the column. V max,rel = I_i/I_(i−1), where I_i is the sum of band intensities at the target site and beyond, and I_(i−1) is the intensity of the band prior to the target band. See Materials and Methods for a description.]

Conclusions

In this report, we demonstrate that DNA synthesis by HIV RT in Zn2+ is slow but highly accurate. It was even more accurate than with the physiologically relevant cation Mg2+ when both were used at optimal concentrations. Other presumably pro-mutagenic cations (Mn2+ and Co2+) showed fidelity levels that were comparable to Mg2+ under optimal conditions, while they were highly mutagenic when used at very high concentrations.
This suggests that catalysis with these alternative cations is not intrinsically mutagenic and that the mutagenicity observed in previous reports may result from other mechanisms that can occur at high concentrations (see Discussion), which warrants further investigation.

Materials

Calf intestinal alkaline phosphatase (CIP), T3 RNA polymerase, "High Fidelity" (PvuII and EcoRI) and other restriction enzymes, T4 polynucleotide kinase (PNK), and MuLV RT were from New England Biolabs. DNase (deoxyribonuclease)-free RNase (ribonuclease), ribonucleotides, and deoxyribonucleotides were obtained from Roche. RNase-free DNase I was from United States Biochemical. The rapid DNA ligation kit, RNasin (RNase inhibitor), and the phiX174 HinfI digest DNA ladder were from Promega. Radiolabeled compounds were from PerkinElmer. Pfu DNA polymerase was from Stratagene. DNA oligonucleotides were from Integrated DNA Technologies. G-25 spin columns were from Harvard Apparatus. RNeasy RNA purification and Plasmid DNA Miniprep kits were from Qiagen. X-gal was from Denville Scientific, Inc. IPTG and media were from Gibco, Life Technologies. All other chemicals were obtained from Fisher Scientific, VWR, or Sigma. HIV RT (from the HXB2 strain) was prepared as described [64]. The HIV RT clone was a generous gift from Dr. Michael Parniak (University of Pittsburgh). This enzyme is a non-tagged heterodimer consisting of equal proportions of p66 and p51 subunits. Aliquots of HIV RT were stored frozen at −80°C and fresh aliquots were used for each experiment.

[Table footnote (mismatch extension assays): Refer to the mismatch extension sequences in Figure 2. In this assay, primers with a matched C.G or a mismatched C.T, C.A, or C.C at the 3′ end were extended on one sequence. A second sequence with a matched G.C or mismatched G.T or G.A was also used. V max is the maximum velocity of extending each primer-template hybrid. See Materials and Methods for a description.]

Preparation of RNA for the PCR-based lacZα-complementation fidelity assay and RNA-DNA hybridization

Transcripts (~760 nts) were prepared with T3 RNA polymerase, and hybrids were prepared at a 2:1 5′ 32P-labeled primer:template ratio as previously described [49].

Primer extension reactions for the PCR-based lacZα-complementation fidelity assay

For RNA-directed DNA synthesis, the ~760 nt RNA template was hybridized to a radiolabeled 25 nt DNA primer (5′-GCGGGCCTCTTCGCTATTACGCCAG-3′). Full extension produced a 199 nt final product (see Figure 2A). The long template was used to make it easier to separate DNA synthesis products from the RNA template on a denaturing polyacrylamide-urea gel (see below). The primer-template complex was pre-incubated in 48 μl of buffer (see below) for 3 min at 37°C. The reaction was initiated by addition of 2 μl of 5 μM HIV RT in 50 mM Tris-HCl pH 8, 80 mM KCl, 1 mM DTT, and 10% glycerol, and incubation was continued for 30 min for Mg2+, 1 hour for Mn2+ and Co2+, and 3 hours for Zn2+. Different time points were used to ensure that all the reactions were essentially complete with each cation. The final concentrations of reaction components were 200 nM HIV RT, 25 nM template, 50 nM primer, 50 mM Tris-HCl, 80 mM KCl, 1 mM DTT, 0.4% glycerol, and 0.4 units/μl RNasin, along with different concentrations of salts. A final concentration of 100 μM dNTPs was used along with one of the following divalent cations: 2 mM MgCl2, 0.4 mM MnCl2, 0.25 mM CoCl2, or 0.4 mM ZnCl2. The final pH of the reactions was 7.7. After incubation, 1 μl of DNase-free RNase was added and the sample was heated to 65°C for 5 min.
Typically, two reactions for each condition were combined, and material was recovered by standard phenol:chloroform extraction and ethanol precipitation. Pellets were resuspended in 20 μl of 10 mM Tris-HCl (pH 7) and 2X loading buffer (90% formamide, 10 mM EDTA (pH 8), 0.25% each bromophenol blue and xylene cyanol), and products were analyzed by gel electrophoresis on 6% polyacrylamide-urea gels (19:1 acrylamide:bis-acrylamide). Fully extended 199 nt DNA was located using a phosphoimager (Fujifilm FLA5100) and recovered by the crush and soak method [65] in 500 μl of elution buffer containing 10 mM Tris-HCl (pH 7). After overnight elution, this material was passed through a 0.45 μm syringe filter and recovered by ethanol precipitation after addition of 10% volume of 3 M sodium acetate (pH 7) and 50 μg of glycogen. After centrifugation, the pellets were vigorously washed with 500 μl of 70% ethanol to remove any traces of EDTA that may have carried over from the gel and could potentially interfere with the second round of synthesis. The recovered DNA was hybridized to another 20 nt radiolabeled DNA primer (5′-AGGATCCCCGGGTACCGAGC-3′) with 10-fold greater specific activity than the primer used for round one, and a second round of DNA synthesis was performed as described above except that the reaction volume was 25 μl. Conditions for the cation, dNTPs, and pH were identical in the RNA- and DNA-templated reactions. Reactions were terminated with an equal volume of 2X loading buffer and products were gel purified as described above but on an 8% gel. The gel was run far enough to efficiently separate the 199 nt templates from the 162 nt full extension product of round 2.

Polymerase chain reaction (PCR) for the PCR-based lacZα-complementation fidelity assay

The round two DNA (50% of recovered material) produced above by reverse transcription was amplified by PCR using the following primers: 5′-GCGGGCCTCTTCGCTATTACGCCAG-3′ and 5′-AGGATCCCCGGGTACCGAGC-3′. Reactions were performed and processed as previously described, except that restriction digestion was done with 30 units each of "High Fidelity" EcoRI and PvuII in 50 μl of NEB buffer 3 for 2 hours at 37°C [49].

Preparation of vector for the PCR-based lacZα-complementation fidelity assay

Thirty μg of the plasmid pBSΔPvuII 1146 [66] was double-digested with 50 units each of "High Fidelity" EcoRI and PvuII in 100 μl using the supplied buffer and protocol. After 3 hours, DNA was recovered by phenol-chloroform extraction and ethanol precipitation, then treated with 20 units of CIP for 2 hours at 37°C in 100 μl of the supplied NEB restriction digest buffer 3. Dephosphorylated vector was recovered by phenol-chloroform extraction followed by ethanol precipitation and quantified by absorbance at 260 nm. The quality of the vectors for the fidelity assay was assessed in two ways: (a) ligation (see below) of the vector preparation in the absence of insert; and (b) religation of the vector preparation and the PvuII-EcoRI-cleaved fragment (recovered from agarose gels after cleavage of pBSΔPvuII 1146 but before dephosphorylation, as described above). Vectors from (a) that did not produce any white or faint blue colonies and very few blue colonies in the complementation assay (see below), and those producing colony mutant frequencies of less than ~0.003 (1 white or faint blue colony in ~333 total) in (b), were used in the fidelity assays.
Ligation of PCR fragments into vectors and transformation for the PCR-based lacZα-complementation fidelity assay

The cleaved vector (50 ng, ~0.025 pmol) and insert fragments (0.05 pmol) were ligated at a 1:2 (vector:insert) molar ratio using a rapid DNA ligation kit. Ligation and transformation of E. coli GC5 bacteria were carried out as previously described [49]. White or faint blue colonies were scored as harboring mutations, while blue colonies were non-mutated. Any colonies that were questionable with respect to being either faint blue or blue were picked and replated with an approximately equal amount of blue colony stock. Observing the faint blue colony in a background of blue colonies made it easy to determine whether the colony was faint blue rather than blue.

Gapped plasmid-based lacZα-complementation fidelity assay

The gapped version of the plasmid pSJ2 was prepared as described [53]. One nM of the gapped plasmid was filled by 100 nM RT at 37°C in 20 μl of buffer containing 50 mM Tris-HCl, 80 mM KCl, 1 mM DTT, 2 μg of bovine serum albumin, 100 μM dNTPs, and varying concentrations of different cations. The reaction pH was 7.7. Reactions with 2 mM Mg2+, 0.4 mM Mn2+, 6 mM Mn2+, 0.25 mM Co2+, and 6 mM Co2+ were carried out for 30 min, while reactions with 0.4 mM Zn2+ were performed overnight. Reactions were terminated by heating at 65°C for 15 min. After confirming complete extension by restriction digestion analysis (see [53]), ~1 μl of the remaining original mixture was transformed into E. coli GC5 cells. The colony mutant frequency (CMF) was determined using blue-white screening as described above.

Running-start misincorporation assays

The approach used for these assays was based on previous results [67]. Reactions were performed as above using the same template but with a primer (5′-GAAATTAACCCTCACTAAAGGGAAC-3′) (Figure 4A) which does not have the last five nts at the 3′ end and has 5 additional bases at the 5′ end. Reactions with 2 mM Mg2+ and 0.4 mM Zn2+ were performed for 3 min and 30 min, respectively, at 37°C. The nt directed by the homopolymeric T run on the template (dATP) was kept at a constant saturating concentration (55 μM), and the nt to be misinserted (for example, dTTP for measuring C.T misinsertion kinetics) was added at increasing concentrations in these reactions. The reaction pH was 7.7. Reactions were initiated by adding 2 μl of HIV RT (final concentration of 2 nM for Mg2+ reactions and 8 nM for Zn2+ reactions) and terminated by adding 2X loading buffer. The reactions were then electrophoresed on 16% denaturing polyacrylamide gels, dried, and imaged using a Fujifilm FLA5100 phosphoimager. Steady-state kinetic parameters K_m and V_max were then calculated as described below. The amount of free cation in each reaction was adjusted according to the dNTP concentration because dNTPs are the major chelators of Mg2+ or Zn2+ in the reactions. The concentration of free cation was calculated as E_t − [ED], with [ED] given by the single-site binding relation [ED] = {(E_t + D_t + K_d) − √[(E_t + D_t + K_d)^2 − 4·E_t·D_t]}/2, where E_t, D_t, and [ED] represent the concentrations of total Mg2+ or Zn2+, total dNTP, and Mg2+ or Zn2+ bound to the dNTPs, respectively. The equilibrium dissociation constant (K_d) for dNTP with Mg2+, as well as with Zn2+, Co2+, and Mn2+, was assumed to be the same as that of ATP with Mg2+ (K_d = 89.1 × 10^−6 M) [68]. This assumption leads to an approximate value for the free concentration of these cations in the reactions.
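To make this free-cation adjustment concrete, here is a minimal Python sketch of the calculation just described. The single-site binding model and the K_d value come from the text above; the 400 μM total dNTP figure (four dNTPs at 100 μM each) is an assumption used only for the example.

    import math

    def free_cation(E_t, D_t, K_d=89.1):
        """Free divalent cation (all concentrations in uM) for 1:1
        cation-dNTP binding. [ED] is the physical root of
        [ED]^2 - (E_t + D_t + K_d)*[ED] + E_t*D_t = 0."""
        b = E_t + D_t + K_d
        ED = (b - math.sqrt(b * b - 4.0 * E_t * D_t)) / 2.0
        return E_t - ED

    # 0.4 mM total Mn2+ with 400 uM total dNTP (assumed: 4 x 100 uM each)
    print(free_cation(400.0, 400.0))   # ~149 uM free
    # 2 mM total Mg2+ under the same assumed dNTP load
    print(free_cation(2000.0, 400.0))  # ~1.62 mM free

Note that this sketch reproduces the optimal Mn2+ condition quoted earlier (0.4 mM total, ~0.15 mM free).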
Mismatched primer extension assays

The approach used for these assays was based on previous results [69]. The template, 5′-GGGCGAATTTAG(G/C)TTTTGTTCCCTTTAGTGAGGGTTAATTTCGAGCTTGG-3′, used in these assays was a modified version of the template originally described in [70]. The underlined nts in parentheses indicate that templates with either a G or a C at this position were used. The DNA primer (5′-TAACCCTCACTAAAGGGAACAAAAX-3′) used in the assays was 5′ radiolabeled and hybridized to the template at a 1:1 ratio. The 'X' at the 3′ end of the primer denotes either G, A, T, or C (see Figure 3). Matched or mismatched primer-templates (14 nM final) were incubated for 3 min at 37°C in 10.5 μl of buffer containing 50 mM Tris-HCl, 1 mM dithiothreitol, and 80 mM KCl, with either 2 mM MgCl2 or 0.4 mM ZnCl2 and increasing concentrations of the next correct dNTP substrate (dCTP for this template). The reaction pH was 7.7. Reactions were initiated by adding 2 μl of HIV RT (final concentration of 2 nM for Mg2+ reactions and 8 nM for Zn2+ reactions) and terminated by adding 2X loading buffer. All reactions involving matched primer-templates were carried out for 2 min with 2 mM Mg2+ and for 30 min with 0.4 mM Zn2+. Reactions with mismatched primer-templates at 2 mM Mg2+ or 0.4 mM Zn2+ were carried out for 5 min and 30 min, respectively. The reactions were then electrophoresed on 16% denaturing polyacrylamide gels, dried, and imaged using a Fujifilm FLA5100 phosphoimager. Steady-state kinetic parameters K_m and V_max were then calculated as described below. The amount of free cation in each reaction was adjusted according to the dNTP concentration as described above.
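As a generic illustration of the steady-state workflow referenced above (not necessarily the authors' exact procedure), velocities derived from band intensities can be fit to the Michaelis-Menten equation, and the catalytic efficiencies (V_max/K_m) of mismatched vs. matched incorporation compared to form a misinsertion ratio; all numbers below are invented placeholders.

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(s, vmax, km):
        return vmax * s / (km + s)

    def efficiency(dntp_uM, velocity):
        """Fit v vs. [dNTP] and return Vmax/Km (catalytic efficiency)."""
        (vmax, km), _ = curve_fit(michaelis_menten, dntp_uM, velocity,
                                  p0=[velocity.max(), np.median(dntp_uM)])
        return vmax / km

    dntp = np.array([1.0, 2.5, 5.0, 10.0, 25.0, 50.0, 100.0])        # uM, illustrative
    v_match = np.array([0.9, 1.9, 3.2, 4.8, 6.9, 8.1, 8.8])          # arbitrary units
    v_mismatch = np.array([0.02, 0.05, 0.09, 0.16, 0.30, 0.45, 0.60])

    f_ins = efficiency(dntp, v_mismatch) / efficiency(dntp, v_match)
    print(f"misinsertion ratio f = {f_ins:.2e}")  # smaller f means higher fidelity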
Analysis of the Association Between Meteorological Variables and Mortality in the Elderly Applied to Different Climatic Characteristics of the State of São Paulo, Brazil

With the rising trends in elderly populations around the world, there is a growing interest in understanding how climate sensitivity is related to their thermal perception. Therefore, we analyzed the associations between mortality in the elderly due to cardiovascular (CVD) and respiratory diseases (RD) and meteorological variables for three cities in the State of São Paulo, Brazil: Campos do Jordão, Ribeirão Preto, and Santos, from 1996 to 2017. We applied the Autoregressive Integrated Moving Average (ARIMA) model and Principal Component Analysis (PCA) in order to evaluate statistical associations. Results showed CVD as a major cause of mortality, particularly in the cold period, when a high mortality rate is also observed due to RD. The mortality rate was higher in Campos do Jordão and lower in Santos (with intermediate values in Ribeirão Preto). The Campos do Jordão results indicate an increased probability of mortality from CVD and RD due to lower temperatures. In Ribeirão Preto, lower relative humidity may be related to the increase in CVD and RD deaths. This study emphasizes that, even among subtropical climates, there are significant differences. Therefore, it can assist decision makers in the implementation of mitigating and adaptive measures.

Introduction

The sensitivity of human health to climate is a widely discussed topic nowadays, mostly due to climate change. Thermal satisfaction with the environment is related to several factors, such as individual, economic, and environmental characteristics. Extreme or sudden climatic variations may impact the body's thermoregulation system, contributing to illness and death [1-3]. In this context, the vulnerability of the elderly requires more attention, as they have a reduced capacity for thermoregulation and decreased thermal perception due to body aging, caused by the degeneration of tissues and organs, on top of possible pre-existing conditions [4,5]. Thus, the processes of maintaining body temperature are less efficient, increasing the susceptibility to CVD and RD [6-8].

The period studied was from 1996 to 2017, except in Ribeirão Preto, for which the data correspond to the period from 2000 to 2017. Only the elderly population was considered (60 years or more). Information on the populations of the cities and the average number of elderly people is shown in Table 1.

Mortality Data

Daily mortality data were obtained for deaths from diseases of the circulatory and respiratory systems, according to the International Classification of Diseases (ICD-10), corresponding to I00-I99 and J00-J99, respectively. Data were obtained from the Brazilian Department of Informatics of the Unified Health System (DATASUS). Table 1 shows that the largest elderly population was present in Santos, with an average of 73,551 elderly people (17% of the total population), followed by Ribeirão Preto with 66,611 (9.6%) and Campos do Jordão with 3,775 (7.3%).
Due to the differences in the average number of elderly people among the cities during the study period, the number of deaths was lowest in Campos do Jordão (1609 CVD and 529 RD), but the percentage of deaths relative to the average number of elderly people indicated a higher proportion in this city, with 42.6% (CVD) and 14% (RD) of deaths, followed by Santos (30.3% CVD and 12.8% RD) and Ribeirão Preto (29.4% CVD and 10.7% RD).

Meteorological Data

Meteorological data of air temperature (T), relative humidity (RH), wind speed (W), and precipitation (PREC) were obtained from meteorological stations of the National Institute of […].

Climate Characterization

The State of São Paulo is characterized by a subtropical climate influenced by extratropical and tropical synoptic systems, with hot and humid summers and cool and dry winters [35]. However, there is a distinction in climatic characteristics according to the Köppen-Geiger classification (1928) apud Rolim et al. [36], due to geographical position, relief, altitude, and air masses. In view of this, Figure 2 shows the geographical differences among the study areas, responsible for the distinct behavior of the meteorological variables (see Table 2). We observed that altitude is one of the main factors that characterize the climate of these areas: Campos do Jordão is at a higher altitude with a colder climate, Ribeirão Preto is located in the interior of the state with intermediate altitude and a dry climate, while Santos is a coastal city with low altitude and a humid, hot climate. Alvares et al. [37] and Dubreuil et al. [38] conducted studies for Brazil based on the Köppen-Geiger climatic classification. The variability of the climatic characteristics of the cities in the study period is shown in Table 2. The climate of Campos do Jordão is classified as oceanic or maritime temperate (Cfb), which presents milder temperatures, with minimum (−2.2 °C) and maximum (28.8 °C) values.

Calculation of the Mortality Rate (MR)

To compare the intensity of mortality among the cities, we calculated the crude mortality rate (indicator A.10 suggested by the Indicators and Basic Data [40]), as there are differences between the numbers of inhabitants in the cities studied (Table 1). The calculation was performed separately for circulatory and respiratory system mortality, considering only the elderly, obtaining annual and monthly rates. The calculation is given by equation (1):

MR = (number of deaths among the elderly in the period / average elderly population) × 1000 (deaths per 1000 elderly). (1)

Statistical Methods

The statistical analyses applied in this work aim to adjust the data to obtain a better understanding of the observations and to investigate the impact of climate on elderly mortality. In this stage, evaluations were carried out for the entire period and separately for the cold (April, May, June, July, August, September) and warm (October, November, December, January, February, March) periods. In order to smooth the time series data, the Box and Jenkins (1970) methodology was applied, which aims to adjust an ARIMA model to a set of data to provide a better understanding [41]. Based on the Autoregressive Moving Average (ARMA) model, the ARIMA model incorporates an integrated term (I), which differentiates the time series to make it stationary. The non-seasonal ARIMA model is denoted ARIMA(p, d, q), where p is the order of the autoregressive component, d is the number of differentiations of the series, and q is the order of the moving average component.
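As a minimal sketch of this Box-Jenkins smoothing step, the fit can be reproduced with Python's statsmodels (the paper itself used RStudio, so this is only an equivalent illustration; the input file "mortality.csv" and the ARIMA(1, 1, 1) order are assumptions, since the fitted orders are not reported here).

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    deaths = pd.read_csv("mortality.csv", index_col="date",
                         parse_dates=True)["deaths"]
    fit = ARIMA(deaths, order=(1, 1, 1)).fit()  # p: AR order, d: differencing, q: MA order
    print(fit.summary())                        # AIC and residual diagnostics
    smoothed = fit.fittedvalues                 # adjusted series for later analyses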
The purpose of using Principal Component Analysis was to verify the association between meteorological variables and mortality from CVD and RD in the elderly. This statistical technique is used to reduce the dimension of the data with little loss of information, allowing patterns in the data to be identified and expressed through the clustering of objects [42,43]. The VARIMAX rotation was used to capture the maximum variance and to improve the interpretation, minimizing the number of variables with high loadings in each factor [44,45]. Analyses were performed considering lags of 0 to 5 days. Despite the very small variation in the values of the principal components between the days, we chose to use lag 3, because the commonality values (the proportion of the variance of a variable explained by all common factors) were slightly higher. The statistics covered in this study were computed using RStudio software, using a significance level of 0.05 for all methods (a code sketch of this procedure appears below).

Descriptive Analyses

We analyzed the climatic characteristics of the study period through the behavior of the annual cycle, with data obtained from the meteorological stations. As mentioned above, the rainfall regime in the State of São Paulo consists of a rainy (summer) and a dry (winter) period [35,48]. In Figure 3d, the monthly average precipitation in Campos do Jordão is highest in December (200 mm). From April to August, there is minimal precipitation (50 mm). The highest rainfall in Ribeirão Preto is in January (250 mm), and the lowest values are present from June to August [37,49]. Santos shows the best rainfall distribution throughout the annual cycle, with a maximum in January (150 mm) and a minimum in August (50 mm) [46,47].

The calculated values of the mortality rates are shown in Figure 4. The highest annual mortality rate for CVD was in Campos do Jordão (from 12 to 32 per 1000), followed by Ribeirão Preto (from 11 to 18 per 1000) and then Santos (13 to 16 per 1000). Ribeirão Preto and Santos displayed similar patterns, possibly because the numbers of elderly people in these cities are similar (Table 1). The annual mortality rates due to RD also show similar values in Campos do Jordão (2 to 10 per 1000) and Ribeirão Preto (2 to 7 per 1000), while Santos presented the lowest rate, around 5 per 1000 (Figure 4a). As per Figure 4b-d, the highest mortality rates due to CVD and RD for the cities studied occurred in the cold period. The rate was highest in Campos do Jordão (approximately 3 per 1000 for CVD and 1.5 per 1000 for RD), while Santos presented the lowest rate (approximately 1 per 1000 for CVD and 0.5 per 1000 for RD). The warm period also showed the highest mortality rate in Campos do Jordão (approximately 2 per 1000 for CVD and 1 per 1000 for RD) and a minimum for RD in Santos.

Statistical Analysis

The Principal Component Analysis results are presented in Tables 3 to 8, which show the commonality (h²) and the three factors found through the VARIMAX rotation. The method was applied considering the meteorological variables (average temperature, relative humidity, and wind speed) and the number of deaths from CVD and RD for the entire study period using lag 3. We divided the analysis between cold and warm periods. The three factors in Campos do Jordão accounted for 70% of the explained variance. The first factor explained 27% of the variance; mortality from RD loaded weakly on it (0.31), together with a negative temperature loading (−0.74) and a positive wind speed loading (0.83).
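Here is the promised sketch of the factor extraction. Since scikit-learn's PCA has no built-in varimax rotation, FactorAnalysis with rotation="varimax" is used as a stand-in for the rotated-PCA procedure; the file and column names are assumptions, and the paper's own computation was done in RStudio.

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import FactorAnalysis

    df = pd.read_csv("city_daily.csv")               # hypothetical input table
    df[["cvd", "rd"]] = df[["cvd", "rd"]].shift(-3)  # lag 3: deaths 3 days after weather
    df = df.dropna()
    cols = ["temp", "rh", "wind", "cvd", "rd"]
    X = StandardScaler().fit_transform(df[cols])
    fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
    loadings = pd.DataFrame(fa.components_.T, index=cols,
                            columns=["F1", "F2", "F3"])
    print(loadings)  # large |loading| marks variables clustering on a factor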
Factor 2 showed that 22% of the total variance was explained by a strong association with mortality from CVD (0.90) and a weak association with deaths from RD (0.46), under the negative influence of temperature (−0.23). In factor 3, we found that mortality from RD (0.25) was negatively associated with temperature (−0.22) and wind speed (−0.23) and positively associated with relative humidity (0.93), explaining 21% of the variance. In the cold period, 72% of the variance was explained by the three factors. A percentage of 23% of the variance was explained by the second factor, from the weak negative association of CVD mortality (−0.25) with temperature (−0.36) and wind speed (−0.23) and its positive association with relative humidity (0.89), besides the strong positive relationship with deaths from RD. The explained variance in the warm period was 73%. The first factor explained 28% of the variance, with a weak negative association of mortality from RD (0.31) with temperature (−0.70) but a positive association with wind speed (0.88).

[Tables 3-8 note: statistically significant correlations are shown in bold; weak associations are marked with (*).]

For Ribeirão Preto, the three factors explained 79% of the variance, with 31% from the first factor, in which CVD mortality (0.83) showed a strong negative association with relative humidity (−0.87). Nevertheless, we also found that mortality from RD (0.34) had a weak negative association with relative humidity. The explained variance of factor 2 (26%) indicated a strong negative association of mortality from RD (−0.64) with temperature (0.92). The third factor explained 22% of the variance, confirming the existence of a weak negative association of mortality from RD (−0.29) with wind speed (0.96). In the cold period, the three factors explained 81% of the variance. The first factor explained 33% of the variance, with a strong negative association between mortality from CVD (0.85) and RD (0.64) and relative humidity (−0.70). The explained variance of factor 2 (25%) showed a weak negative association of mortality from RD (−0.40) with relative humidity (−0.48) and a positive association with wind speed (0.94). Factor 3 explained 23% of the variance through the weak negative association between mortality from RD (−0.26) and relative humidity (−0.35) and a positive association with temperature (0.97). The sum of the explained variance of the three factors found in the warm period represented 75% of the total variance. Factor 1 explained 33% of the variance, with mortality from CVD (0.74) associated negatively, and strongly, with relative humidity (−0.91) and positively with temperature (0.35) and wind speed (0.38).

According to the analyses for Santos, the three factors explained 77% of the variance. The second factor explained 22% of the variance, due to mortality from RD (0.96), which showed a weak negative association with wind speed (−0.39). Factor 3 showed a weak negative relationship between CVD mortality (0.96) and wind speed (−0.31) and explained 22% of the variance. In the cold period, 73% of the variance for Santos was explained by the three factors.
The second factor explained 23% of the variance, in which mortality from RD (0.90) showed a negative association with wind speed (−0.54), followed by factor 3, which accounted for CVD mortality (0.97) being negatively associated with temperature (−0.33) and explained 21% of the variance. The variance explained in the warm period was 76%, of which the second factor represented 23%, in which mortality from RD (−0.55) was positively associated with wind speed (0.89). The third factor explained 21% of the variance, where mortality from CVD (0.94) and RD (0.43) are associated.

Discussion

The investigations carried out in this study address the importance of understanding how the different climatic characteristics present in each city affect the mortality of the elderly. Results showed the predominance of cardiovascular diseases in the mortality rate for all studied cities [32], due to the fact that this type of disease can be influenced by several factors, including behavior (tobacco, alcohol, obesity, a sedentary lifestyle, and others) [50]. The highest mortality rates due to CVD and RD for the study period were observed in Campos do Jordão, 12 to 32 per 1000 and 2 to 5 per 1000, respectively. It is considered one of the coldest cities in Brazil [51], and so thermal stress due to cold may have contributed to the occurrence of deaths [13,52]. The statistical associations found for Campos do Jordão revealed a possible role of low temperature in mortality, but deaths from RD can also be influenced by high relative humidity and wind speeds that impair human thermal comfort [10,53,54]. Ribeirão Preto is the city with the second-highest mortality rate. Low relative humidity values are observed there throughout the year, characterizing a dry climate. The statistical associations indicated that this climate may influence deaths, as dry air can cause mucous membranes to become excessively dry and more prone to infectious agents and dehydration, causing serious health consequences [6,32]. Also, we observed a decrease in deaths from RD due to higher temperatures and high wind speeds, which may decrease thermal stress due to cold. On the other hand, Santos showed a mortality rate slightly lower than Ribeirão Preto (~15 per 1000 CVD and ~5 per 1000 RD vs. ~16 per 1000 CVD and ~6 per 1000 RD, respectively). This may be a consequence of the milder thermal sensation in coastal cities, where the presence of the ocean softens the climate, favoring thermal comfort [47,55], in addition to the city being less cold (see Figure 2a). Besides, the lower percentage of deaths from RD may be associated with the high relative humidity in the city, consistent with studies that show the greater impact of low relative humidity on RD [19,56]. According to the principal component analysis, lower wind speed values may be associated with CVD and respiratory deaths in Santos. This relationship can be explained by the city's climatic characteristics (hot and humid), where higher wind speed values would help relieve thermal stress [29]. The analysis of the mortality rates is very important to alert populations, particularly the most vulnerable groups, to the higher incidence of mortality in the warm and cold periods, enabling increased prevention. The cold period showed a higher mortality rate for the cities studied.
Thus, the statistical analyses carried out for the cold period in Campos do Jordão revealed the influence of low temperature, high levels of relative humidity, and calm winds on the increase in deaths due to RD. A cold and humid environment can favor the spreading of infectious agents [25], contributing to the development of RD. Also, the warm period demonstrated that the effects of low temperature can be enhanced by strong winds, increasing discomfort due to cold. However, deaths due to CVD did not reach statistical significance in any of the periods in this city. In the cold period, the relative humidity reaches minimum values that can contribute to the increase in mortality in Ribeirão Preto. We emphasize that this season is also favorable to an increase in the concentration of air pollutants, due to the decreased deposition and dispersion of suspended particles, increasing the intake of particulate matter [57], which could exert a greater impact on the number of deaths. However, the warm period is associated with increased mortality from CVD through low humidity, high wind speeds, and high temperature [58]. As analyzed, the increase in mortality due to RD in Santos in the cold period may be related to weaker wind speeds, possibly due to an increase in air pollution, which is outside the focus of this study. However, lower temperatures may be related to the increase in deaths from CVD [9,14,20,59]. Furthermore, mortality from RD may decrease as a consequence of strong winds in the warm period, because they lead to better thermal comfort.

Conclusions

In this study, we compared CVD and RD mortality rates in cities with different subtropical climatic characteristics in the State of São Paulo (Brazil), using the ARIMA and PCA models jointly, which provided satisfactory results. We identified different behaviors of the meteorological variables and mortality for each city. Santos is the city with the highest number and percentage of elderly people and presented the lowest mortality rate, possibly due to the climate softened by the proximity to the ocean. However, because it is the hottest city in the study, the effect of lower wind speed can predominate in the increase in deaths. In turn, the intermediate numbers of elderly people and intermediate mortality rates for Ribeirão Preto indicate that the population is not exposed to such low temperatures, although the effects of the dry climate can result in deaths. Meanwhile, Campos do Jordão has the smallest elderly population and the highest mortality rate, which can be related to the population's economic vulnerability and, particularly, to the cold climate due to the higher altitude. The overall result emphasizes that even slight climatic differences among subtropical cities may change CVD and RD mortality impacts. This work shows the relevance of evaluating the climatic impact on the mortality of the elderly considering regions with different subtropical climates, aiming to inform society and guide decision makers in the implementation of mitigating and adaptive measures, in order to provide a better quality of life for the elderly population.
First proof-of-principle of inorganic perovskites clinical radiotherapy dosimeters

Inorganic CsPbBr3 perovskite devices have been manufactured and tested as dosimeters under both conventional and Intensity Modulated Radiotherapy (IMRT) X-ray beams. Samples showed a very good linear dependence of the collected charge/current on dose/dose rates in the range of 0.1-5.0 Gy/0.1-4.0 Gy/min of interest for clinical applications. A device sensitivity of about 70 nC/(Gy mm^3) compares favorably with other solid-state dosimeters. The first verification of an IMRT dose profile of a prostate cancer treatment, performed by moving the perovskite device on a 10 cm-long profile with a 0.5 mm pitch, showed agreement within 5% with the dose distribution required by the treatment planning system. © 2019 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1063/1.5083810

I. INTRODUCTION

Modern radiotherapy techniques, such as X-photon Intensity Modulated RadioTherapy (IMRT) and Volumetric Modulated Arc Therapy (VMAT), deliver highly conformed doses to irregularly shaped tumor volumes to spare surrounding healthy tissues [1,2]. The consistency between dose maps calculated by a dedicated Treatment Planning System (TPS) and those actually delivered to the patient has to be carefully checked experimentally, either before or during the patient treatment. Modern dosimetric systems need precise measurements of the dose map distributions, characterized by high spatial resolution. For this purpose, the active volume of the dosimeter must be minimized; therefore, a material with high sensitivity per unit volume is required. Solid-state dosimeters behave as ionization chambers during X-ray irradiation: electron-hole pairs are generated in the semiconductor bulk and then collected by the electric field applied across the electrodes [3]. The charge Q collected at the electrodes, in general, has a linear dependence on the absorbed dose D, and the sensitivity per unit volume is given by S = Q/(D × Volume) = qG/R, with G being the electron-hole pair generation rate, R being the absorbed dose rate, and q being the electronic charge. The generation rate is related to the dose rate R by G = Rρ/E_i, with ρ being the density and E_i being the mean energy to create an electron-hole pair [4]. Thus, the sensitivity S per unit volume is ultimately set by the ratio between the density and the mean energy for pair production: S = qρ/E_i.

State-of-the-art dosimeters for clinical radiotherapy are ionization chambers (ICs) filled with air, silicon diodes, and, more recently, also diamond films. As an example, Table I shows a list of relevant parameters for three radiotherapy dosimeters based on ICs [5], Si diodes [4,6], and single-crystal Chemical Vapor Deposited (CVD) diamond [7], respectively. The first two are modular arrays developed specifically for IMRT applications, while the third is a pin-point device, a bidimensional device based on this material being still under development [8]. In clinical radiotherapy, X-ray beams are characterized by high dose rates and high energies, in the 0.05-18 Gy/min and MeV ranges, respectively, much higher than those used for X-ray inspection and imaging, where dose rates are of the order of a few micrograys per second and energies are of hundreds of keV.
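As a quick numerical check of S = qρ/E_i, the following sketch reproduces the theoretical per-volume sensitivities quoted in the next paragraph, together with the measured ~70 nC/(Gy mm^3) figure reported in Sec. III, under the assumption that the active volume is the 7 mm × 0.8 mm inter-electrode area times the 20 µm film thickness.

    # Theoretical sensitivity per unit volume, s = q*rho/E_i, in nC/(Gy mm^3):
    # 1 Gy deposits rho (in kg/mm^3) joules per mm^3; dividing by the pair
    # energy E_i (in J) gives the pairs created, and multiplying by q the charge.
    Q_E = 1.602e-19  # electronic charge, C

    def s_theory(rho_g_cm3, Ei_eV):
        mass_kg_per_mm3 = rho_g_cm3 * 1e-6          # 1 g/cm^3 = 1e-6 kg/mm^3
        pairs_per_gray = mass_kg_per_mm3 / (Ei_eV * Q_E)
        return pairs_per_gray * Q_E * 1e9           # C -> nC

    print(s_theory(4.55, 5.3))    # CsPbBr3: ~859 nC/(Gy mm^3)
    print(s_theory(3.83, 6.03))   # MAPbBr3: ~635 nC/(Gy mm^3)

    # Measured device: S = 7.82 nC/Gy over an assumed 7 x 0.8 x 0.020 mm^3 volume
    print(7.82 / (7.0 * 0.8 * 0.020))  # ~69.8 nC/(Gy mm^3)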
With dose rates and energies typical of clinical radiotherapy, a volume of about 1 mm^3 or less is sufficient to get currents of the order of 1 nA from a semiconductor device; this leads to adopting active layers of small thickness (not the case for ionization chambers, where the low density of air calls for larger volumes and depths). Moreover, a thick slab of a semiconductor material would suffer from a reduction in sensitivity with the accumulated dose, due to the decrease in diffusion length caused by radiation-induced defects [9]. In fact, commercial silicon and diamond devices are characterized by thicknesses of the order of 1-30 µm (see Table I). In CsPbBr3, the mean energy to create an electron-hole pair has been estimated as E_CsPbBr3 ∼ 5.3 eV [10] by extrapolating results for a class of compound materials used in X-ray detection [11]. Considering ρ_CsPbBr3 = 4.55 g/cm^3, one obtains a theoretical value s_CsPbBr3 ∼ 860 nC/(Gy mm^3), much higher than that of silicon and far above that of diamond. This material is even more favorable than MAPbBr3: in fact, given ρ_MAPbBr3 = 3.83 g/cm^3 [12] and E_MAPbBr3 = 6.03 eV [13], one obtains a theoretical value s_MAPbBr3 ∼ 635 nC/(Gy mm^3), lower than that of CsPbBr3. CsPbBr3 has already been proposed as an X-ray photodetector and imaging device in medical applications [14-17], but up to now, no work has been carried out to demonstrate the feasibility of using this material to produce and test photoconductive dosimeters for clinical radiotherapy.

Moreover, there is a second important reason that makes it extremely interesting to develop CsPbBr3 dosimeters for clinical radiotherapy. Modern systems involve three-dimensional geometries, often based on nonplanar arrangements, e.g., cylindrical water-equivalent phantoms equipped with three-dimensional arrays of detectors placed in a helicoidal pattern [18]. Solid-state semiconductor devices, such as silicon diodes and diamond Schottky barriers, are manufactured from semiconductor wafers, which are rigid and flat and thus not suitable for nonplanar arrangements. CsPbBr3 inorganic perovskites can in principle be easily deposited on flexible large-area supports to create a multipoint device adjustable to any kind of shape. This added value of CsPbBr3 makes this material a very promising candidate for modern clinical dosimetry applications.

In this work, we present a first proof-of-principle experimental study on the feasibility of CsPbBr3 dosimeters for clinical radiotherapy, with particular focus on the IMRT modality. We manufactured a set of point devices by depositing thin films on custom printed circuit boards (PCBs) equipped with electrodes and tested them under clinical X-ray beams used in advanced radiotherapy accelerators, with a view to developing advanced dosimetric systems for modern clinical radiotherapy.

II. EXPERIMENTAL PROCEDURE

CsPbBr3 films have been deposited by drop-casting directly on alumina printed circuit boards (PCBs) specially designed for electrical tests of thin semiconductor films. The perovskite film is obtained starting from CsBr and PbBr2 salts (Acros Organics, >99%) at a 1:1 mole ratio in a saturated dimethyl sulfoxide (DMSO, from Sigma-Aldrich) solution. Thermal annealing at 150 °C eliminates traces of the solvent and returns a layer of interconnected CsPbBr3 microcrystals of the order of a few micrometers. The alumina PCBs have two parallel gold contacts, 7 mm long and spaced 0.8 mm apart, with a thickness of 20 µm.
Figure 1(a) shows a picture of one of the devices tested in this study. X-ray diffraction inspection, shown in Fig. 1(b), together with XPS analysis, demonstrated the excellent quality of the film, with no residual precursors or contaminants. Figure 1(c) shows an SEM photograph evidencing the microcrystalline nature of the film. Low-temperature (10 K) photoluminescence analysis [Fig. 1(d)], carried out on a similar drop-cast sample deposited on a glass substrate, evidenced high-quality emission properties, similar to literature data [19]. Soon after deposition, we covered the perovskite film with a polymethylmethacrylate (PMMA) layer to prevent degradation due to contact with air and moisture and to favor electronic equilibrium during exposure to therapeutic X-ray beams. A recent study of the photoconductive properties of similar samples showed a dark resistivity of about 10^9 Ω cm and a positive value of the Hall coefficient, R_H ∼ 10^10 cm^3/C, indicating p-type conductivity with a Hall mobility µ_H = R_H/(r_H ρ) ∼ 10 cm^2/(V s) [20].

We tested the dosimetric properties of our devices with clinical X-ray beams delivered by linear accelerators (linacs) routinely used for patient treatments at the Radiotherapy Unit of the University Hospital in Florence. A Precise Elekta linac delivering 6 MV and 25 MV X-ray beams (i.e., with maximum X-ray energies of 6 MeV and 25 MeV, respectively), in the nominal dose rate range of 0.5-4 Gy/min, was used. The linac dose rate was checked during the tests using the ionization chambers placed within the linac; it is stable within 0.01 Gy/min. Each perovskite-based device has been placed at the isocenter, namely at Source Axis Distance (SAD) = 100 cm, corresponding to Source Skin Distance (SSD) = 95 cm, beyond a 5 cm-thick polymethylmethacrylate layer to ensure electronic equilibrium during irradiation. A first test has been carried out with a uniform squared irradiation field (40 × 40 cm^2) and the linac gantry at an angle of 0° to determine the sensitivity of our devices to the doses and dose rates typically used in clinical radiotherapy. The current flowing across the two electrodes, biased with a constant voltage of 5 V, was read out by a Keithley 6517 electrometer (also used as a voltage source) driven by a Matlab software toolkit. The current response has been read out during irradiation; the device has been tested under doses (0.1-5.0 Gy) and dose rates (50-400 cGy/min) of interest in clinical radiotherapy. A moderate drift of the current stabilized after the first few minutes of voltage application. The device is also sensitive to local temperature changes, which can occur during irradiation and give rise to fluctuations in the baseline due to the dark current, which is anyway subtracted from the photocurrent signal.

A second test has been carried out to investigate the performance of our perovskite-based dosimeter in a highly conformal radiotherapy technique such as IMRT (Intensity Modulated RadioTherapy). An Elekta Synergy linac equipped with a Multi-Leaf Collimator (MLC) composed of 80 4 mm-thick leaves has been used to deliver one clinical 10 MV photon beam out of the five planned for an IMRT prostate treatment. The IMRT treatment was delivered in the step-and-shoot modality, which means that the dose is released during a discrete set of irradiation steps, each characterized by a selected configuration of the MLC so as to have a suitable aperture shape at the linac head (segment), irradiating only when the leaves are stationary at each position.
The bidimensional dose map is obtained by summing 12 successive segments, with the gantry placed at 0°, carried out with a nominal dose rate of 400 cGy/min, each with a different spatial distribution. Each segment has a duration of a few seconds; the entire radiation treatment lasts about 1 min. Figure 2 shows the TPS IMRT dose map of the beam used in this work. We tested our device by acquiring data along a lateral-lateral profile passing through the isocenter (shown in Fig. 2), changing its position from bottom to top with a 5 mm pitch, for a total elongation of 10 cm. We measured the current during each segment, and we calculated the total collected charge by integrating the overall signal over time, after subtracting the baseline due to the dark current. Baseline fluctuations during measurement are lower than 1% of the signal.

III. EXPERIMENTAL RESULTS

Figure 3(a) shows the current signals measured under increasing doses of interest in clinical radiotherapy. Figure 3(b) shows the collected charge obtained by integrating such current signals, plotted as a function of the dose. The best fit of the data evidences an excellent linear trend, indicating a constant sensitivity over the whole investigated range, even in the low-dose range, with very short pulses. Figure 3(c) shows a comparison of the current signals measured up to 50 cGy: signal transients are rather slow but well reproducible at any investigated duration. A very good linear trend over the whole investigated charge vs. dose range is obtained (see the inset), characterized by the same slope and an almost equal intercept.

We calculated the sensitivity of our device to dose as the slope of the linear plot of charge vs. dose shown in Fig. 3(b), S = Q/D = 7.82 nC/Gy; considering the geometry of this device, a sensitivity per unit volume s = 69.8 nC/(Gy mm^3) is obtained. The intercept of the curve, q = 48.5 pC, can be considered the charge resolution of our device, corresponding to about 6.2 mGy. The dose-rate dependence of the current response of our perovskite device has been investigated with another sample under a 6 MV X-ray beam in the range of 0.5-4 Gy/min of interest in clinical radiotherapy. Current signals are shown in Fig. 4(a); the average current measured at signal saturation at each dose rate is plotted against the dose rate in Fig. 4(b). The sensitivity of our device in this case is S_Dr = ΔI/D_r = 5.9 nC/Gy. Considering the geometry of this sample, the sensitivity per unit volume is s = 70.5 nC/(Gy mm^3), in good agreement with the results obtained when charge vs. dose and a 6 MV X-ray beam were used. The good linear dependence on dose and dose rate, as well as the stable and reproducible responses measured under conventional uniform radiation fields, is a promising feature for achieving good results also in highly conformal radiation modalities such as intensity modulated radiotherapy (IMRT). We carried out a first investigation under an IMRT field, as described in Sec. II. We show the results in Figs. 5(a) and 5(b). Twelve irradiation segments are clearly visible in the plots as separated current/charge signals, indicating that our device can follow the fast and complex structure of this highly conformal radiotherapy technique. The dose profile relative to the isocenter, measured with the perovskite-based dosimeter by moving it over 10 cm along a line (shown in Fig. 2) with a 5 mm pitch, is shown in Fig. 5(c).
It is compared to the same profile given by the treatment planning system and by a commercial dosimetric system made of a flat two-dimensional array of silicon diodes (MAPCHECK™ by Sun Nuclear Corporation, Melbourne, FL, USA). 6 Our data are in very good agreement, within a 5% error, with the profile required by the TPS software and measured by the reference dosimetric system. This experimental test is a valid proof that a perovskite-based device is indeed able to follow the complex radiation delivery used in an IMRT technique and opens the way to the application of inorganic perovskite dosimeters for modern radiotherapy in conformal modality.

IV. CONCLUSIONS

A set of inorganic CsPbBr₃ perovskite microcrystalline films have been deposited on custom printed circuit boards (PCBs) and tested as dosimeters with therapeutic X-ray beams. Current/charge signals are linear with dose/dose rate under 6 MV/25 MV X-ray uniform irradiation fields from linacs, within the dose and dose rate ranges of 0.1-5 Gy and 0.5-4 Gy/min of interest for clinical applications. The sensitivity per unit volume found in our samples is about 70 nC/(Gy mm³); the lowest measurable dose is about 6 mGy, far below the machine's lower delivery limit (1 cGy). The sensitivity per unit volume measured for our devices is smaller than the theoretical value estimated for CsPbBr₃. This is probably due to recombination at defects: acting as a sink for electrons and holes, they reduce the charge collected at the electrodes. As a result, the effective distance each electron-hole pair travels apart before recombining (the so-called effective charge collection length) is lower than the total thickness, and the effective active volume of the device is thus reduced to a fraction of the entire volume of the sample. A recent study carried out by some of the authors on similar samples pointed out that in such a microcrystalline material, a non-negligible concentration of defects, probably located at the grain boundaries of the microcrystals, is involved in the photoconductivity response of the devices at room temperature. 20 We note that this effect is also found in polycrystalline diamond dosimeters, e.g., in Ref. 21, where a device with an active area of 1.8 × 18 mm² and a thickness of 300 µm, biased with 5 V, showed a sensitivity of 38 nC/Gy under a 6 MV X-ray beam, indicating a charge collection length of about 55 µm, significantly lower than the total thickness of the sample. In silicon dosimeters, instead, problems are related to the formation of radiation-induced defects, which degrade the diffusion length and cause changes in the sensitivity with the accumulated dose. To overcome this problem, the thickness of the active volume in most commercial devices is tailored to values around 20-50 µm. 9 Hence, even if the sensitivity per unit volume of silicon is considerably higher, the actual sensitivity of the device is subject to a constraint similar to that found for diamond and perovskite samples. A first study in a highly conformal irradiation modality has also been performed using an intensity modulated radiotherapy (IMRT) beam. A conformal dose distribution planned for prostate cancer treatment has been verified by measuring the current signal of the perovskite device at several points within the irradiation field map, along a 10 cm-long profile passing through the isocenter, with a 5 mm pitch.
Our data showed good agreement with the dose distribution required by the treatment planning system, with errors within 5%, indicating that our CsPbBr₃ perovskite dosimeters can actually meet the stringent requirements of modern clinical radiotherapy techniques. In forthcoming studies, we plan to study the dosimetric performance of single-crystal inorganic perovskite samples, with a view to increasing the sensitivity of our devices toward the theoretical limit. In addition, we wish to investigate the effect of radiation-induced defects on sensitivity, both for polycrystalline and single-crystal CsPbBr₃, to finally assess the possible application of these materials in radiotherapy.
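The gap between measured and theoretical sensitivity discussed above can be made concrete with a back-of-the-envelope estimate: per gray, the absorbed dose deposits ρ·V joules in the active volume, creating one electron-hole pair per W_± of deposited energy. The sketch below assumes Klein's empirical rule W_± ≈ 2.8·E_g + 0.5 eV and textbook values for CsPbBr₃ (ρ ≈ 4.55 g/cm³, E_g ≈ 2.3 eV); these inputs and the resulting numbers are our illustrative assumptions, not figures taken from the paper.

```python
E_CHARGE = 1.602e-19      # elementary charge, C
RHO = 4550.0              # assumed CsPbBr3 density, kg/m^3
E_GAP_EV = 2.3            # assumed CsPbBr3 band gap, eV

# Klein's empirical rule for the mean energy per electron-hole pair (assumption)
w_pair_ev = 2.8 * E_GAP_EV + 0.5                  # ~6.9 eV per pair
w_pair_j = w_pair_ev * E_CHARGE                   # J per pair

# 1 Gy deposits RHO joules per m^3; each pair contributes one elementary charge
pairs_per_gy_m3 = RHO / w_pair_j
s_th = pairs_per_gy_m3 * E_CHARGE                 # C/(Gy m^3)
# 1 C/(Gy m^3) = 1 nC/(Gy mm^3), so the same number holds in the paper's units
s_th_nc_mm3 = s_th

s_meas = 70.0                                     # nC/(Gy mm^3), reported value
print(f"theoretical s ~ {s_th_nc_mm3:.0f} nC/(Gy mm^3)")          # ~660 here
print(f"implied collection efficiency ~ {s_meas / s_th_nc_mm3:.0%}")
# For a film of thickness d, an effective charge collection length of roughly
# (efficiency * d) reproduces the reduced active volume discussed above.
```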
2019-05-17T13:55:25.398Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "581756e4e2b8cd756ab0f04482ff7ff70f114cd4", "oa_license": "CCBY", "oa_url": "https://aip.scitation.org/doi/pdf/10.1063/1.5083810", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c2ba4e69a4b82e3d1b0427cd17f553a167ec41c4", "s2fieldsofstudy": [ "Physics", "Medicine" ], "extfieldsofstudy": [ "Materials Science" ] }
52183812
pes2o/s2orc
v3-fos-license
False localizing sign caused by schwannoma in cervical spinal canal at C1-2 level Abstract Rationale: A false localizing sign means that the lesion causing a symptom is remote or distant from the anatomical site predicted by neurological examination. This concept contradicts the classical clinicoanatomical correlation paradigm underlying neurological examinations. Patient concerns: A 54-year-old man consulted for right sciatica-like leg pain that had worsened 1 year earlier. Radiological examinations revealed degenerative spondylolisthesis with instability and right-sided recess stenosis at the L4-5 level. After initial improvement following 3 transforaminal epidural steroid injections with gabapentin and antidepressant medication, the symptoms recurred a year later, along with wasting of the right leg for several months. Physical examination revealed difficulty in heel-walking and weakness of extension of the right big toe; tendon reflexes were normal. Lumbar spine radiographs revealed no new findings. The initial course of treatment was repeated, but was ineffective. Diagnoses: Further cervicothoracic spine evaluations revealed a right-sided intradural-extramedullary mass and myelopathy at the C1-2 level. Interventions: The cervical mass was surgically resected and identified histopathologically as a schwannoma. Outcomes: Immediately after surgery, the sciatica-like pain and weakness of the right leg were completely resolved. Lessons: It is difficult to make an accurate diagnosis when symptoms are caused by a false localizing sign. It is even more difficult to diagnose a false localizing sign accurately when there is a co-existing lumbar lesion that can cause similar symptoms.

Introduction

In 1904, Collier proposed the concept of the false localizing sign for the first time. [1] It can be described as a state in which the anatomical location of the lesion causing symptoms is distant or remote from the anatomical locus predicted by neurological examination. [2] This concept contradicts the classical clinicoanatomical correlation paradigm underlying neurological examinations. Symptoms caused by false localizing signs are likely to result in missed or delayed diagnosis. Furthermore, these misleading signs lead to unnecessary or incorrect treatments and even surgical procedures at an unaffected site. In particular, the coexistence of lesions in the lumbar region, which are likely to cause similar symptoms based on conventional neurological examinations, increases the difficulty in establishing an accurate diagnosis and initiating proper treatment. Sciatica-like leg pain owing to cervical cord compression is a very rare manifestation of a false localizing sign, and only a few clinical cases have been reported. [3][4][5][6][7] Among them, in 1967, Langfitt and Elliott [4] reported that cervical spinal cord compression caused by a tumor or degenerated disc material might cause lower back or leg pain that can easily be confused with a lumbar disc syndrome, and that an accurate differential diagnosis is all too easily impeded by abnormal findings in both the cervical and lumbar regions. Until now, however, no reports have described leg pain or sciatica as a false localizing sign caused by a schwannoma in the cervical spinal canal. The authors concluded that the cause of the patient's right sciatica-like leg pain was a schwannoma in the cervical spinal canal at the C1-2 level, as clearly revealed by the results of the surgery. We report this case as a typical false localizing sign.
Case Report

A 54-year-old man presented with right sciatica-like leg pain that had begun three years earlier. This throbbing pain originated from the right buttock and radiated into the posterolateral thigh and leg and the dorsal surface of the foot. He also reported lower back pain in addition to paresthesia and numbness in his right leg that had started worsening approximately one year earlier. He had received numerous examinations and various kinds of conservative treatment for the lumbar spine in different hospitals, but no improvement was noted. His Visual Analogue Scale (VAS) score was 6 or 7 out of 10. He had undergone laminectomy at the L3-4 level six years earlier. There was nothing notable in his medical history, and physical examination revealed no particularities. Lumbar spine radiographs (Fig. 1) and magnetic resonance imaging (MRI) (Fig. 2) showed degenerative spondylolisthesis with instability and right-sided recess stenosis at the L4-5 level, which was considered to be the cause of the symptoms. Transforaminal epidural steroid injection (TFESI) was performed at the right-sided L5-S1 level every two weeks, three times in total. After the procedure, his lower back and right sciatica-like leg pain were reduced to a VAS score of 2 or 3 out of 10, and he expressed great satisfaction with the results. However, paresthesia and numbness in the right leg persisted; thus, gabapentin and an antidepressant were prescribed. Three months later, his lower back and right sciatica-like leg pain were reduced to VAS 1 or 2 out of 10, and improvement in the paresthesia and numbness was also reported. He reported no difficulties in daily life. One month after the cessation of medication, no changes in the symptoms were noted. The authors recommended that he return for consultation in the case of relapse of symptoms or occurrence of new symptoms. He returned to the pain clinic one year later and reported recent deterioration of the initial symptoms. The nature and radiating pattern of the recurrent right leg pain were similar to the previous pain. Because his place of residence was far away, he visited the pain clinic only after going through several other hospitals. He reported that peripheral nerve abnormalities had been identified on an electromyogram performed at another hospital several months earlier. For several months he had felt that his right leg had become slightly thinner, although this did not affect his daily activity, and there was no significant difference between the two legs in actual measurements. This time, physical examination revealed several abnormal neurological findings. He showed some difficulty in heel gait. Right great toe extension and ankle dorsiflexion were slightly weak, classified as grade IV according to the Medical Research Council Muscle Strength Scale. However, no abnormal reflex sign was present. Radiological examination of the lumbar spine was immediately performed, but no significant changes were observed compared to the observations 1 year earlier. As with the past treatments, the TFESI at the right-sided L5-S1 level was repeated and medications were reinitiated. This time, however, the treatment had no effect. Immediate further examinations of the cervical and thoracic spine were conducted, and cervical MRI revealed a right intradural-extramedullary mass with myelopathy at the C1-2 level (Fig. 3A).
He was immediately referred to the department of neurosurgery, where a right-sided C1 hemi-laminectomy was performed, followed by tumor removal (Fig. 3B and C). Histopathological examination of the excised tissue identified the tumor as a schwannoma. Fortunately, the abnormal neurological signs and sciatica-like pain in the right leg resolved completely immediately after the surgery. However, abnormal sensations such as paresthesia and numbness remained to a certain extent, which caused inconvenience. Gabapentin and an antidepressant were prescribed again. After 3 months, he reported great improvement in the paresthesia and numbness. Thereafter, he stopped the medication and has been free of symptoms for more than a year. Approval of this study was waived by the Ethics Committee of Kyungpook National University Chilgok Hospital, based upon their policy on case reports. The authors obtained written consent from the patient to publish this case report.

Discussion

A false localizing sign can be described as a state in which the anatomical location of the lesion causing symptoms is distant or remote from the anatomical locus predicted by neurological examination. [2] In 2003, Larner classified false localizing signs into those caused by intracranial, spinal cord, and other lesions, according to the location of the lesion, in addition to describing the nerves affected and the symptoms specific to each case. [8] Until now, however, no reports have described leg pain or sciatica as a false localizing sign caused by a schwannoma in the cervical spinal canal. Sciatica-like leg pain is a very rare manifestation of a false localizing sign owing to cervical cord compression. Sciatica represents a symptom rather than a specific diagnosis. The most prominent symptom is pain in the lower limb radiating to the feet and toes. Sciatica can lead to clinical signs of neurological deficit, such as muscle weakness and a change in reflexes. It can be caused by a variety of conditions and diseases, most of which are related to lesions affecting the lumbar spine or pelvis. About 90% of these lesions are lumbar herniated discs with nerve-root compression; others include lumbar spinal canal and foraminal stenosis. Sciatica can also be caused by tumors or cysts originating from the back or pelvis, as well as other parts of the body. [9] Although it is very rare, sciatica-like leg pain can be caused by cervical cord compression, and only a few clinical cases have been reported. [3][4][5][6][7] Schwannoma is an intradural-extramedullary nerve sheath tumor that predominantly occurs in the third to fifth decades of life. These tumors are benign in most cases. Although most patients develop symptoms by the time of diagnosis, a few become symptomatic months or years before diagnosis. The most common symptom of schwannoma is local or radicular pain, and some patients also report paresthesia and numbness. [10] In 2005, Jinnai et al [11] performed a retrospective review of 149 patients with spinal nerve sheath tumors. They found that the initial symptoms included motor weakness in 24.2% of patients, pain in 36.9% of patients, and paresthesia and/or numbness in 35.6% of patients. The remaining 3.3% of patients were diagnosed incidentally. In particular, when the nerve sheath tumor was located at the level of the first 2 cervical nerve roots or at the cauda equina, motor weakness was hardly ever observed. The findings in the present case were similar.
Although an extensive mass was confirmed by cervical MRI in the present case, this can explain why pain, rather than apparent neurological abnormalities, was the most predominant symptom. They also subdivided their patients into 5 groups according to the correlation between the tumor and the dura mater and/or intervertebral foramen. Tumors localized entirely within the dural sac, such as the tumor in the present case, were categorized as Group 1 tumors, and pain was the most frequent initial symptom associated with this group of tumors. The patient also presented with sciatica-like leg pain as the predominant symptom. In 2016, Murahashi et al [12] reported the results of a retrospective review of pre-operative neurological and radiological examinations in 24 patients with cervical myelopathy caused by spinal cord compression at C1-2. Although these patients were to be operated on, 10 of the 24 patients showed normal deep tendon reflexes. The degree of spinal cord compression was more severe in patients with perceptible dysfunction and muscle weakness, but no significant differences in the degree or location of spinal cord compression were observed among the patients with dysfunction. The results of these studies are in good agreement with the clinical symptoms of the present case. In 1967, Langfitt and Elliott [4] described that cervical spinal cord compression caused by a tumor or degenerated disc material might cause lower back or leg pain that can easily be confused with a lumbar disc syndrome. They described that abnormal findings in both the cervical and lumbar regions render an accurate differential diagnosis very difficult. Furthermore, these authors already described that abnormal findings might be absent on neurological examination even in the case of an extensive cord compression that can cause severe pain. In addition, mechanical signs, limitation in back mobility, and a positive reaction to the straight leg raising test might be absent even when there is cord compression. This possibility may apply to the present case. In fact, symptoms caused by false localizing signs are rarely reported, and no pathological mechanisms underlying the occurrence of false localizing signs have been established so far. However, several hypotheses have been elaborated. In 1956, Scott [3] described irritation of the ascending spinothalamic tract as the likely cause of sciatica-like lower extremity pain caused by a tumor at the high thoracic and cervical cord. Jamieson et al [13] presented several hypotheses regarding the origin of the false localizing sign in their clinical case report. Above all, these authors suggested that the myelopathy originated from destruction of the anterior horn cells by venous obstruction and subsequent static hypoxia, and this hypothesis was judged by Taylor and Byrnes in 1974 to be the most reasonable. [14] Another hypothesis was that disinhibition of the normal ascending pain-producing pathways that regulate pain signals is interpreted as pain by the brain. [15] In 2002, Ochiai et al [16] reported that a false localizing sign is likely to be caused by severe compression of the midline ventral structures within the cervical spinal cord, including the anterior spinal artery, leading to ischemia in the thoracic watershed zone of the artery.
After 3 TFESIs, some of the patient's symptoms improved, probably because part of the patient's right leg pain was owing to the lumbar lesions. However, it is believed that the schwannoma enlarged as time progressed, so that the right leg pain recurred and the neurologic abnormalities appeared. As described previously, it is very difficult to recognize a false localizing sign early when lesions are simultaneously present in the cervical and lumbar regions. In the present case, physical examination during the first visit showed no evidence of cervical cord compression, and radiologic examinations showed degenerative spondylolisthesis with instability and right-sided recess stenosis at the L4-5 level. In addition, after 3 TFESIs, the patient reported relief of symptoms and the recovery of normal life. Thus, no other causes that might explain the symptoms were investigated. Fortunately, the present case exhibited a complete recovery immediately after surgery. However, irreversible damage would have occurred had he not undergone the appropriate examinations and received an accurate diagnosis and appropriate treatment in a timely manner. In the present case, the sciatica-like leg pain was induced by a slowly progressing schwannoma in the cervical spinal canal at the C1-2 level. However, the identification of degenerative spondylolisthesis with instability and right-sided recess stenosis at the L4-5 level led to a discrepancy between the clinical level and the actual lesion, which is the most typical presentation of a false localizing sign. It is difficult to recognize from the beginning of the diagnostic process that a patient's symptoms are caused by a false localizing sign. In addition, it is even more difficult to diagnose a false localizing sign accurately when there is a co-existing lesion that can cause the same symptoms. In conclusion, although it is rare, a patient's symptoms may be due to a false localizing sign if the patient does not respond to appropriate treatment, so a proper diagnosis should be made through appropriate examinations to reduce unnecessary treatment.
2018-09-16T06:22:59.998Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "5c5ad924b98612a3a94aef6b14189406c9ef458c", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/md.0000000000012215", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5c5ad924b98612a3a94aef6b14189406c9ef458c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211086539
pes2o/s2orc
v3-fos-license
Deficiency of Interleukin-36 Receptor Protected Cardiomyocytes from Ischemia-Reperfusion Injury in Cardiopulmonary Bypass Background: Interleukin-36 has been demonstrated to be involved in inflammatory responses. Inflammatory responses due to ischemia-reperfusion injury following cardiopulmonary bypass (CPB) can cause heart dysfunction or damage. Material/Methods: CPB models were constructed in IL-36R−/−, IL-36RN−/−, and wild-type SD rats. Ultrasonic cardiography and ELISA were used to evaluate cardiac function and to measure myocardial biomarker levels in the different groups. TUNEL assay was used to evaluate apoptosis. Western blot assays and RT-PCR were performed to measure the expression of chemokines and secondary inflammatory cytokines in the heart. Oxidative stress in tissue and cultured cells was assessed using a DCFH-DA fluorescence probe and quantification of superoxide dismutase activity. Results: Improved systolic function and decreased serum levels of myocardial damage biomarkers were found in IL-36R−/− rats compared to WT rats, while worse cardiac function and cardiomyocyte IR injury were observed in IL-36RN−/− rats compared to WT rats. TUNEL staining and Western blot analyses showed that cardiomyocyte apoptosis and inflammation were significantly lower in the hearts of IL-36R−/− rats compared with those of WT rats. Oxidative stress was significantly lower in IL-36R−/− rats compared to WT rats. iNOS expression was significantly reduced, while eNOS expression was increased, in the hearts of IL-36R−/− rats. Silencing of IL-36R expression in vitro activated SIRT1/FOXO1/p53 signaling in cardiomyocytes. Conclusions: IL-36R deficiency in cardiomyocytes repressed infiltration of bone marrow-derived inflammatory cells and oxidative stress in a manner dependent on SIRT1-FOXO1 signaling, thus protecting cardiomyocytes and improving cardiac function in CPB model rats.

Background

Administration of anesthesia during cardiopulmonary bypass initiates myocardial ischemia-reperfusion (I/R) injury [1,2]. Reperfusion following persistent organ ischemia causes a proinflammatory response and oxidative stress, which are crucial pathophysiological characteristics of ischemia/reperfusion injury [3,4]. I/R injury is commonly seen in many clinically pathological conditions, such as revascularization following myocardial infarction, septic shock, cardiopulmonary bypass, and thrombolysis therapy following cerebral infarction, which lead to cellular apoptosis and organ dysfunction [1,2]. Activation of inflammatory signaling upregulates the expression of tissue chemokines and adhesion molecules in vascular endothelial cells, which facilitate infiltration and recruitment of bone marrow-derived myeloid cells (BMDMs) [5,6]. The inflammatory cells then produce excessive cytokines, such as matrix metalloproteinase-2 (MMP-2), matrix metalloproteinase-9 (MMP-9), interleukin-1β (IL-1β), and MCP1, which lead to tissue damage in target organs [6]. An imbalance between antioxidants and oxidants following reperfusion induces oxidative stress followed by mitochondrial dysfunction, because rapid oxygen accumulation following reperfusion causes excessive production of reactive oxygen species [7]. Cardiomyocyte death is then induced by the mitochondrial pathway [8,9]. Increased permeability of the mitochondrial inner membrane leads to the release of cytochrome c and other activating factors, which induce apoptosis by promoting the activation of caspases.
It has been shown that the permeability transition occurring during the IR process is induced by increased ROS production, insufficiency of antioxidants, changes in pyridine nucleotide ratios, and calcium overload [8,9]. Clinically, the incidence of systemic inflammatory response syndrome is 2-10%, because cardiopulmonary bypass induces a systemic inflammatory response through contact between peripheral blood and the artificial tube material used for the operation, which can aggravate the local inflammatory response in myocardial IRI [10,11]. Numerous studies have found that oxidative stress promotes the inflammatory process in a variety of models of inflammatory diseases [7]. The inflammatory response is characterized by recruitment of BMDMs and oxidative stress, as mitochondrial dysfunction coordinates the induction of IRI in the CPB model [7,12]. Interleukin-36 comprises 3 IL-36 receptor agonists, IL-36α, IL-36β, and IL-36γ, derived from the IL-1 cytokine family [13,14], and has been shown to be associated with a variety of autoimmune diseases. An antagonist of IL-36 improved the skin inflammation phenotype in psoriasis [14,15]. IL-36 expression is seen in various tissues and cells, including epithelial cells and immune cells. IL-36 affects keratinocytes that express IL-36 receptors through paracrine or autocrine pathways. IL-36 receptor activation can activate pro-inflammatory cytokines (IL-1β) and promote the release of TNF-α, IL-6, and IL-8. The association between IL-36 in immune cells and IL-36R in keratinocytes has been demonstrated. Induction of IL-36γ, one of the IL-36R agonists, enhanced the expression of cytokines and of inducible nitric oxide synthase. These findings suggest that IL-36 can exert a potential effect on the oxidative stress process beyond its inflammatory actions. SIRT1 is the most evolutionarily conserved mammalian sirtuin, regulating the stress response and promoting cell survival. The activity of SIRT1 deacetylase is significantly increased during stress. Deacetylation by SIRT1 downregulates the transcriptional function of p53 to promote cell survival [16,17]. An increase in SIRT1 deacetylase activity can inhibit stress-induced cell apoptosis and oxidative stress injury via activation of SIRT1-FOXO1/p53 signaling [18]. Activation of SIRT1 under stress may therefore be an important mechanism for avoiding I/R injury characterized by oxidative stress. In skeletal muscle injury induced by I/R, oxidative stress injury can be aggravated by downregulation of the SIRT1-FOXO1/p53 signaling pathway, and activation of SIRT1-FOXO1/p53 inhibits oxidative stress and apoptosis in skeletal muscle I/R models. Here, we reveal that blocking the IL-36/IL-36R signaling pathway reduced IR injury in CPB model rats, which might be due to activation of SIRT1/FOXO1/p53 signaling in cardiomyocytes. Inhibition of IL-36 signaling may be a potential therapeutic strategy for controlling I/R injury in patients who receive CPB.

Material and Methods

Cell culture and anoxia-reoxygenation treatment

AC16 cells (purchased from ATCC) were inoculated at a density of 10⁴/cm² and cultured in DMEM containing antibiotics and 10% fetal bovine serum (FBS). Cells were then washed with phosphate-buffered saline (PBS) and placed in serum-free DMEM for 24 h.
To simulate I/R injury in vitro, H9C2 cells were incubated in glucose-free medium (in an environment with 95% N₂ and 5% CO₂ for 6 h at 37°C) containing 100 units/ml of penicillin, 100 μg/ml of streptomycin, and 10% FBS. Subsequently, cells were washed with PBS, re-suspended in fresh DMEM, and moved to 95% O₂/5% CO₂ conditions for reoxygenation. Cells were collected for analysis 16 h after reoxygenation.

Synthesis and selection of siRNA for IL-36R

The small interfering RNA (siRNA) against IL1RL2 was designed by Xuntong Bio Company (Shanghai, China). The primer sequences used were as follows: H9C2 myocardial cells were transfected with siRNA duplexes via Lipofectamine RNAiMAX according to the manufacturer's instructions. Inhibition of IL-36R was evaluated by qRT-PCR 48 h after siRNA transfection.

Animal models

Every procedure was approved by the Animal Care and Use Committee of the First Affiliated Hospital of Guangxi Medical University (30 April 2015). Male Sprague-Dawley (SD) rats (350-450 g), IL-36R f/f allele rats, IL-36RN f/f allele rats, and the myh6::Cre transgenic rat strain were purchased from the Nanjing University Model Animal Center. The myh6::Cre transgenic rat strain expresses Cre recombinase during early embryonic development. This strain was bred with a floxed rat strain to create tissue-specific gene knockout rats. We bred rats carrying the IL-36R f/f and IL-36RN f/f alleles with myh6::Cre transgenic rats, which resulted in IL-36R and IL-36RN knockout in cardiomyocytes and generated rats with cardiac-specific IL-36R or IL-36RN deficiency. Rats carrying this specific IL-36R or IL-36RN knockout in the heart were bred. Rats were prepared for CPB by i.p. administration of ketamine (60 mg/kg) and xylazine (5 mg/kg). After intubation, mechanical ventilation was carried out with a small-animal ventilator (respiratory parameters were set as follows: respiratory rate of 60 breaths/min, tidal volume of 2.5 ml/kg, and inspiratory-expiratory ratio of 1:2). Cardiopulmonary bypass was performed as previously described [19]. Catheterization of the caudal vein was used to construct the fluid channel. The right femoral artery was perfused through the catheter, and the right internal jugular vein was catheterized into the right atrium to pump blood from the heart. The cardiopulmonary bypass device consists of a venous reservoir, a rat membrane oxygenator, and a peristaltic pump. The solution filling the flow tube contained 2 ml of mannitol, 10 ml of hydroxyethyl starch solution, 100 IU of heparin, and 1 ml of fresh allogeneic blood. The total duration of CPB was 90 min. The relevant physiological parameters monitored during CPB were set according to a previously described method [19,20].

Measurement of oxidative stress in heart tissue and cardiomyocytes

For the evaluation of oxidative stress in tissue, superoxide dismutase (SOD) activity and malondialdehyde (MDA) were measured. SOD activity and MDA levels in tissue were determined by spectrophotometry using a Superoxide Dismutase Activity Assay Kit and a Lipid Peroxidation Assay Kit according to the standard instructions. To evaluate ROS production in vitro, a DCFH-DA probe coupled to confocal fluorescence microscopy was used to detect and analyze the level of ROS in myocardial cells.
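The kit instructions are not reproduced in the paper, but colorimetric SOD kits of this type typically report activity as an inhibition rate relative to a blank reaction, with one unit defined as the amount of enzyme giving 50% inhibition. The sketch below illustrates that common calculation; it is our assumption about the kit's arithmetic, and the absorbance values are invented.

```python
def sod_inhibition_rate(a_blank: float, a_sample: float) -> float:
    """Percent inhibition of the chromogen reaction by SOD in the sample.

    a_blank  : absorbance of the reaction without tissue extract
    a_sample : absorbance of the reaction with tissue extract
    """
    return (a_blank - a_sample) / a_blank * 100.0

def sod_units_per_ml(inhibition_pct: float, dilution: float = 1.0) -> float:
    # One unit is commonly defined as the enzyme amount giving 50% inhibition;
    # activity then scales (approximately linearly near 50%) with inhibition
    return inhibition_pct / 50.0 * dilution

inhib = sod_inhibition_rate(a_blank=0.520, a_sample=0.290)  # hypothetical readings
print(f"inhibition = {inhib:.1f}%  ->  ~{sod_units_per_ml(inhib):.2f} U/ml")
```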
ELISA

The serum levels of myoglobin, troponin, and lactic dehydrogenase were detected using the Rat Myoglobin ELISA Kit, the Superoxide Dismutase Activity Assay Kit (Colorimetric) (Abcam, Cambridge, UK), and the Rat Lactic Dehydrogenase ELISA Kit (Lianke, Shanghai, China) according to the standard instructions.

Evaluation of cardiac function

Cardiac systolic function was evaluated 3 h after reperfusion. The left ventricular pressure (LVP) was measured by microcatheter insertion into the left ventricle via the right carotid artery. A hemodynamic analysis system was used to record the physiological parameters related to myocardial systolic and diastolic function, such as heart rate, ejection fraction, and LVP, to further analyze instantaneous systolic characteristics. Computer algorithms were used to derive LVSP and the instantaneous first derivative of LVP.

qRT-PCR

TRIzol reagent (Invitrogen) was used to isolate total RNA from the heart, and reverse transcriptase (Takara) was used to convert RNA into cDNA. The ABI 7500 Real-Time system was used to perform the quantitative real-time PCR reactions. β-actin was used as the control. Quantitative analysis was performed via the 2^(−ΔΔCt) method.

Western blot

Protein expression in myocardial cells and tissues was determined using Western blot. Immunoblotting was performed with anti-eNOS, anti-iNOS, anti-p53, anti-SIRT1, anti-FOXO3, anti-caspase-3, and anti-caspase-8 antibodies. All primary antibodies for Western blot were obtained from Cell Signaling Technology. Membranes were incubated with primary antibody at 4°C overnight and with secondary antibody at room temperature for 1 h. ECL-Plus reagent was used for visualization.

Evaluation of apoptosis

TUNEL staining was used to evaluate apoptosis according to the manufacturer's instructions. Samples were observed via fluorescence microscopy. The degree of apoptosis was expressed as the apoptotic index (the ratio of TUNEL-positive cells to total cells).

Immunohistochemical analysis

After the rats were sacrificed, their hearts were removed, fixed in 4% paraformaldehyde, embedded in paraffin, and sectioned to a thickness of 5 μm. Immunohistochemical (IHC) staining was performed with an IHC staining kit. A macrophage-specific primary antibody (anti-CD68, Abcam) was used for selective detection of macrophages.

Statistical analysis

Statistical analysis was carried out using SPSS 23.0 software. All values are expressed as mean ± standard deviation. Unpaired, 2-tailed t tests or two-way ANOVA were used to assess significant differences.

Results

CPB in IL-36R cardiac-specific knockout rats (Myh6-Cre IL1RL2 flox/flox) improved systolic cardiac function and decreased serum levels of myocardial injury biomarkers

Because IL-36R expressed in cardiomyocytes can interact with IL-36 secreted by immune cells and other epithelial cells in the heart, inhibition of IL-36R in cardiomyocytes can directly block the biological effect of IL-36 from all cells and exert other potential effects on endogenous activation of immune cells during CPB-induced IR injury. In rats, IL-36R is normally expressed in immune cells, mediating their pro-inflammatory effects, including promotion of Th1 polarization and macrophage differentiation. To verify the effect of IL-36R deficiency in cardiomyocytes, we produced rats with cardiac-specific knockout of IL-36R. We explored the effect of IL-36/IL-36R signaling activation in cardiomyocytes on I/R injury.
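As an illustration of the relative-quantification formula named in the qRT-PCR section above, the 2^(−ΔΔCt) calculation normalizes the target gene's Ct to β-actin within each sample and then to the control group; the Ct values below are invented for demonstration only.

```python
# Minimal 2^(-ΔΔCt) sketch with hypothetical Ct values
def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct = ct_target - ct_ref                 # ΔCt: normalize to β-actin
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # ΔCt of the control group
    dd_ct = d_ct - d_ct_ctrl                  # ΔΔCt
    return 2.0 ** (-dd_ct)                    # fold change vs control

# Example: a chemokine in a WT CPB heart vs a control heart (all Ct values invented)
fold = rel_expression(ct_target=24.1, ct_ref=16.0,
                      ct_target_ctrl=26.3, ct_ref_ctrl=16.1)
print(f"fold change ≈ {fold:.2f}")  # >1 means upregulation vs control
```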
We found improved left ventricular systolic function, as indicated by left ventricular systolic pressure (LVSP) (WT vs. IL-36R-/- rats, P=0.013) and the instantaneous first derivative of LVP (±dp/dt max) in the cardiac knockout CPB models (+dp/dt max, WT vs. IL-36R-/- rats, P=0.020; -dp/dt max, WT vs. IL-36R-/- rats, P=0.013) (Figure 1A, 1B). Serum levels of myocardial damage biomarkers, including myoglobin (WT vs. IL-36R-/- rats, P=0.012) (Figure 1C), troponin (WT vs. IL-36R-/- rats, P=0.015) (Figure 1D), and lactic dehydrogenase (LDH) (WT vs. IL-36R-/- rats, P=0.018), were also decreased following reperfusion in rats with myocardial deficiency of IL-36R (Figure 1E). These data demonstrated that myocardial-specific knockout of IL-36R improved cardiac systolic function and suppressed myocardial injury induced by IR injury in CPB models. We evaluated cellular apoptosis of cardiomyocytes in the hearts of wild-type (WT) and IL-36R-/- rats using TUNEL staining and Western blot analyses. We found that cardiomyocyte apoptosis was significantly decreased in the hearts of IL-36R-/- rats compared with those of WT rats; higher apoptotic rates were seen in the hearts of WT rats (Figure 2A, 2B). Expression of apoptosis-related proteins, including BCL2, Bax, caspase-3, and caspase-8, was measured. We found that the expression of pro-apoptotic genes was increased and Bcl2 expression was decreased in the hearts of WT rats (Figure 2C). We then evaluated the expression of chemokines and chemokine receptors, including MCP-1, MIP, CCR2, CCL12, and CXCL2. The expression of some chemokines in WT rats was found to be significantly higher compared to that in Myh6-Cre IL1RL2 flox/flox rats (Figure 2D). We performed immunofluorescent staining to assess macrophage infiltration (CD68+) and found significantly larger numbers of CD68+ macrophages in WT rats (Figure 2E). Subsequently, we detected the expression of pro-inflammatory cytokines, including TNF-α, IL-1β, IFN-γ, IL-6, and IL-10, and found that the expression of TNF-α and IL-1β in the hearts of WT rats was higher compared to that of Myh6-Cre IL1RL2 flox/flox rats (Figure 2F). These results illustrated decreased apoptosis of cardiomyocytes and mitigated myocardial inflammation, characterized by decreased infiltration of macrophages and production of pro-inflammatory cytokines, following CPB.

Oxidative stress injury was attenuated and the iNOS/eNOS ratio was reversed in IL-36R-/- rats

We found that SOD and MDA were significantly decreased in Myh6-Cre IL1RL2 flox/flox rats compared to those in WT rats (Figure 3A, 3B). Considering that previous studies have demonstrated that IL-36γ can promote the expression of inducible NOS (iNOS), we measured the expression of iNOS and eNOS in the hearts of Myh6-Cre IL1RL2 flox/flox rats and WT rats. We found that iNOS expression was significantly reduced, while eNOS expression was increased, in the hearts of Myh6-Cre IL1RL2 flox/flox rats, as described previously (Figure 3C). These results demonstrated that iNOS inhibition contributes to reduced inflammation and oxidative stress. We also examined the potential downregulating effect of eNOS expression on the inflammatory response and oxidative stress, in contrast to inducible NOS. We found increased eNOS expression and an inverted iNOS/eNOS ratio in the hearts of Myh6-Cre IL1RL2 flox/flox rats (Figure 3D).
The data demonstrated that IL-36R knockout and blocking of IL-36/IL-36R signaling in cardiomyocytes lead to increased eNOS expression and iNOS inhibition, which attenuates oxidative stress following IR injury in CPB model rats.

Silencing of IL-36R expression in vivo and in vitro activated SIRT1/FOXO1/p53 signaling in cardiomyocytes

We measured the expression of SIRT1, FOXO1, and p53 in vitro. We found that the expression of these proteins in the CM was increased by silencing of IL-36R in the anoxia-reoxygenation model (Figure 4A). ROS production in CM was assessed using a DCFH-DA fluorescence probe, and a decrease in ROS production was observed (Figure 4B). Next, we measured the expression of chemokines and pro-inflammatory cytokines. Interestingly, there were no significant differences in the expression of pro-inflammatory cytokines, including TNF-α, IL-18, and IL-33, although an increasing trend in TNF-α expression was observed (Figure 4C). Expression of chemokines, including MCP1 and CCL12, in the CM decreased significantly with IL-36R silencing (Figure 4D). These data demonstrated that IL-36R knockout in CM repressed oxidative stress and recruitment of BMDMs, but did not promote the release of cytokines secreted by CM.

Rats with cardiac-specific IL-36RN deficiency suffered from worsened cardiac dysfunction and cardiomyocyte IR injury after cardiopulmonary bypass

Based on the investigation of the role of IL-36R in the IRI of the CPB model, we explored the IRI phenotype in rats with IL-36R antagonist deficiency. We performed CPB on rats with IL-36RN deficiency and on WT rats. We found that IL-36RN-/- rats had deteriorating cardiac function and higher serum levels of myocardial injury biomarkers, including myoglobin, LDH, and troponin, compared to WT rats (Figure 5A-5C). Furthermore, lower left ventricular systolic pressure (LVSP) (WT vs. IL-36RN-/- rats, P=0.001) and ±dp/dt max (+dp/dt max, WT vs. IL-36RN-/- rats, P=0.001; -dp/dt max, WT vs. IL-36RN-/- rats, P=0.002) were also found in the IL-36RN-/- rats (Figure 5D, 5E). Interestingly, the administration of recombinant rat IL1R1, an inhibitor of IL-1β, slightly improved cardiac function (Figure 5D, 5E) and appeared to decrease the levels of myocardial biomarkers. These results suggest the potential value of cytokine antagonists originating from IL-1 family members for improving IRI phenotypes and cardiac dysfunction following CPB.

Discussion

The present study revealed that IL-36R-deficient rats had improved cardiac function and decreased TUNEL staining in cardiomyocytes following CPB. We evaluated the inflammatory response and ROS production, and found a higher ratio of eNOS to inducible NOS in the myocardium of rats with IL-36R deficiency. IL-36R knockout attenuated the inflammatory response and decreased ROS production in cardiomyocytes. In conclusion, our investigation revealed that the IL-36/IL-36R signaling pathway, which drives aberrant inflammation and oxidative stress, could be targeted to reduce IR injury in CPB models. Inhibition of IL-36 signaling may improve IRI in patients who receive CPB.
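The group comparisons reported above (n=8 per group, unpaired two-tailed t tests per the statistical analysis section) can be reproduced in outline as follows; the LVSP arrays are invented placeholder data, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical LVSP readings (mmHg), n=8 per group
lvsp_wt = np.array([92, 88, 95, 90, 86, 93, 89, 91], dtype=float)
lvsp_ko = np.array([104, 99, 108, 102, 97, 106, 101, 103], dtype=float)

# Unpaired, two-tailed t test, as described in the statistical analysis section
t_stat, p_value = stats.ttest_ind(lvsp_wt, lvsp_ko)

print(f"mean WT = {lvsp_wt.mean():.1f}, mean KO = {lvsp_ko.mean():.1f}")
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 -> significant
```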
IRI occurs in patients requiring cardiopulmonary bypass and cardiac surgery. Cardiopulmonary bypass (CPB) is indispensable for cardiac surgery involving open heart procedures [3,4,21]. During CPB, measures must be taken to manage cardiac arrest and improve the safety of procedures in the surgical field. However, aortic cross-clamping may lead to myocardial ischemia/reperfusion (I/R) injury following reperfusion of blood flow. I/R injury may be the main form of post-operative myocardial injury and is characterized by elevated serum cardiac injury biomarkers, such as cardiac troponin I (cTnI) and CK-MB [21]. It has been demonstrated that elevated levels of myocardial injury biomarkers are an independent risk factor for adverse events following an operation. A previous study indicated that elevated cTnI 24 h after cardiac surgery was an independent risk factor for death at 30 days, 1 year, and 3 years. Patients with significantly elevated cTnI had a higher risk compared to those with lower cTnI [22]. Another study reported that increased serum CK-MB following cardiac surgery was an independent risk factor for increased mortality after 3 years [23]. Even with improved perioperative myocardial protection methods, increased levels of cardiac injury biomarkers and myocardial stunning due to I/R injury still complicate cardiac surgery and perioperative myocardial protection during CPB. Recent studies have investigated potential therapeutic strategies for IR injury during CPB in higher animal models. Abdel et al. found that, in the pig CPB model, hypoxic reoxygenation (H/R) after reperfusion alleviated I/R injury and protected cardiac function following myocardial ischemia [24]. Huang et al. found that, compared with dopamine treatment in cardioplegia, the addition of isoflurane increased cardiac output more effectively. ST+EI was reported to decrease the release of cTnI and CK-MB following CPB [25]. Emulsified isoflurane was combined with cardioplegia to prevent I/R injury; protection of the integrity of the mitochondrial ultrastructure and DNA may mediate this protective effect. CPB induced an increase in myocardial injury biomarkers and apoptosis in rat hearts, and suppressing inflammation by neutralizing IL-6 attenuated myocardial apoptosis and reduced the levels of myocardial biomarkers in CPB rats. Therefore, these studies demonstrated that excessive inflammation and mitochondrial dysfunction due to oxidative stress following reoxygenation contribute to IR injury in the rat CPB model, as demonstrated by increased cardiomyocyte apoptosis and higher myocardial biomarker levels. IL-36 cytokines comprise 3 agonists (IL-36α, IL-36β, and IL-36γ) and the antagonist IL-36Ra; IL-38 (IL-1F10) is another antagonist of IL-36R [26]. These cytokines interact with IL-36R-IL-1RAcP heterodimeric receptors, and IL-36R, the receptor-binding subunit, is specific to IL-36. The antagonism by IL-36Ra is similar to the inhibition of IL-1 by IL-1Ra: IL-36Ra binding to IL-36R may prevent the binding of IL-36R to IL-1RAcP and the activation of signaling complexes. IL-36 is involved in inflammation induction and immune recognition by stimulating innate and adaptive immune responses. A high expression level of IL-36R is present in BMDCs, and BMDCs respond to IL-36 stimulation by producing various inflammatory cytokines [13,14]. IL-36R is expressed in human monocyte-derived DCs (MDCs), and MDCs respond to IL-36 agonists by producing several cytokines [27].
Maturation of DCs is stimulated by IL-36 agonists, enhancing the cell-surface expression of CD83, CD86, and MHC class II molecules. A synergistic increase in CD14 expression is induced by co-stimulation with IL-36α and IFN-γ, which is related to cellular recognition of LPS. It has been suggested that stimulation by related inflammatory cytokines (IL-36) may be necessary before expression of CD14 and detection of LPS. T-bet is a key transcription factor in Th1 CD4+ T cell differentiation, and its expression may be upregulated by DCs overexpressing murine IL-36γ, indicating positive feedback between T-bet and IL-36γ [28]. IL-36 has strong pro-inflammatory bioactivity and induces Th1 polarization, and in vivo studies on the effect of IL-36 on Th1 responses have been performed. Four weeks following injection of M. bovis BCG into IL-36R-/- mice, Th1 responses (including IFN-γ, TNF-α, and nitric oxide) were decreased upon splenocyte stimulation in vitro, suggesting that an endogenous IL-36 signal is required for effective Th1 responses to mycobacteria [29]. In addition, IL-36R is also expressed in keratinocytes and plays a role in the release of inflammatory cytokines in an autocrine or paracrine manner. This effect is not essential for survival of the host and clearance of bacteria: in contrast to IL-1R1-deficient mice, and especially TNF-α-deficient mice, there was no difference in survival following infection with M. bovis BCG between IL-36R-deficient mice and wild-type mice, and IL-36R-deficient mice showed no increased susceptibility to M. tuberculosis-induced death. These data demonstrated that IL-36 plays a pathological role in inflammatory diseases rather than a physiological role in host defense. Therefore, we hypothesized that IL-36 deficiency would attenuate inflammation in IR injury during CPB in rats, and indeed found that IL-36R-/- rats had improved post-operative cardiac function, manifested as reduced serum levels of myocardial injury biomarkers, together with attenuated infiltration of CD68+ macrophages and decreased expression of pro-inflammatory cytokines. Excessive oxidative stress is another injury-related factor in the myocardial IR model described in numerous previous studies [30][31][32]. Downregulation of ROS production during reperfusion following an ischemic event may attenuate tissue damage and cellular apoptosis. Therefore, we examined malondialdehyde (MDA) levels and superoxide dismutase activity in myocardial tissue, which reflect lipid peroxidation and anti-oxidative capacity. We found that SOD and MDA were significantly lower in Myh6-Cre IL1RL2 flox/flox rats compared to those in WT rats, which suggested that the knockout of IL-36R attenuated the oxidative stress response following I/R injury in cardiomyocytes. Oxidative stress injury following IR injury also contributes to mitochondrial dysfunction and tissue damage. Sirtuins are a protein family that is highly conserved from bacteria to humans. Deacetylation by SIRT1 and activation of proteins involved in cell repair and protection, including p53, FOXO transcription factors, and PPARγ, can ameliorate pathological and physiological stress responses [30]. SIRT1 can enhance the activation of eNOS and promote vascular relaxation [31]. SIRT1 is also a key cytoprotective regulator of these proteins in many cell processes, including metabolism, cell survival, and apoptosis [32]. In this study, it was found that SIRT1 expression was increased, while the iNOS/eNOS ratio was decreased, following IL-36R knockout.
The increased eNOS expression and inverted iNOS/eNOS ratio in the hearts of Myh6-Cre IL1RL2 flox/flox rats found in our study indicate an imbalance of iNOS/eNOS in cardiomyocytes with I/R injury. It has been reported that SIRT1 is a crucial regulator of vascular homeostasis and that SIRT1 inhibits endothelial senescence via the eNOS-SIRT1 axis. Another study revealed that an increase in NO can originate from increased SIRT1 expression and that SIRT1 deacetylates eNOS, increasing eNOS activity and the production of endothelial nitric oxide (NO). Therefore, the data suggest that regulation of SIRT1 is associated with protection from oxidative stress-induced tissue injury. SIRT1 promoted the expression of FOXO1 and p53, enhanced the activity of eNOS, and increased the production of endothelial NO, which inhibited oxidative stress injury and protected cardiomyocytes from death in I/R injury.

Limitations

Our study has 2 limitations. First, we did not perform in vitro experiments in primary cardiomyocytes isolated from neonatal rats; we only examined the oxidative stress response in a cell line with or without knockout of IL-36R, and we believe that further investigation in primary cardiomyocytes should be considered. Second, we only established CPB models in healthy animals. However, patients who need to receive CPB have organic heart diseases and even severe cardiac dysfunction. Therefore, CPB models in healthy rats cannot completely mimic the situation of patients receiving CPB.

Conclusions

We revealed that deficiency of IL-36R in cardiomyocytes plays a protective role in IR injury induced in rat CPB models, via inhibition of oxidative stress and the inflammatory response. The findings of the current study may provide a novel therapeutic strategy for IR injury, enabling post-operative myocardial protection in patients who undergo open cardiac surgery and extracorporeal circulation.
2020-01-23T09:09:16.262Z
2020-01-21T00:00:00.000
{ "year": 2020, "sha1": "9275bf9d34eb4876149507eb64b0aa24b9dbbdc0", "oa_license": "CCBYNCND", "oa_url": "https://europepmc.org/articles/pmc7034403?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "c45988343db6343b6f7750666a2872893a761c0c", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
64358527
pes2o/s2orc
v3-fos-license
Isolation and characterization of a tandem-repeated cysteine protease from the symbiotic dinoflagellate Symbiodinium sp. KB8 A cysteine protease belonging to the peptidase C1A superfamily from the eukaryotic, symbiotic dinoflagellate Symbiodinium sp. strain KB8 was characterized. The protease was purified to near homogeneity (566-fold) by (NH₄)₂SO₄ fractionation, ultrafiltration, and column chromatography using a fluorescent peptide, butyloxycarbonyl-Val-Leu-Lys-4-methylcoumaryl-7-amide (Boc-VLK-MCA), as a substrate for assay purposes. The enzyme was termed VLKP (VLK protease), and its activity was strongly inhibited by cysteine protease inhibitors and activated by reducing agents. Based on the amino acid sequence determined by liquid chromatography-coupled tandem mass spectrometry, a cDNA encoding VLKP was synthesized. VLKP was classified into the peptidase C1A superfamily of cysteine proteases (C1AP). The predicted amino acid sequence of VLKP indicated a tandem array of highly conserved precursors of C1AP with a molecular mass of approximately 71 kDa. The results of gel-filtration chromatography and SDS-PAGE suggested that VLKP exists as a monomer of 31-32 kDa, indicating that the tandem array is likely divided into two mass-equivalent halves that undergo equivalent posttranslational modifications. The VLKP precursor contains an inhibitory prodomain, and the enzyme might become activated after acidic autoprocessing at approximately pH 4. Both purified and recombinant VLKPs had similar substrate specificities and kinetic parameters for common C1AP substrates. Most C1APs reside in acidic organelles such as the vacuole and lysosomes, and indeed VLKP was most active at pH 4.5. Since VLKP exhibited maximum activity during the late logarithmic growth phase, these attributes suggest that VLKP is involved in the metabolism of proteins in acidic organelles.

Introduction

Symbiodinium species are eukaryotic, photosynthetic dinoflagellate algae that produce the light-harvesting carotenoid peridinin. Although they can assume free-living forms with flagella, they usually reside in the endodermis of tropical invertebrates, e.g., corals, giant clams, jellyfish, and sea anemones. Their symbiotic relationship with corals and these other organisms allows the corals to use the algal photosynthetic products for >90% of the energy required to maintain their homeostasis, growth, and calcification [1], whereas Symbiodinium species use host metabolites, e.g., carbon dioxide, ammonia, urea, and amino acids [2,3]. Corals take advantage of the symbiosis to form hard calcium carbonate skeletons that form the structural basis for reefs in otherwise oligotrophic tropical seas. Certain cysteine proteases (CPs), i.e., those whose activity depends on an active-site cysteine, are involved in maintaining symbiotic relationships. The pea aphid Acyrthosiphon pisum harbors the enterobacterium Buchnera and coordinates Buchnera density with its growth stage via an A. pisum CP, a cathepsin L-like protease [4]. The ciliate parasite Philasterides dicentrarchi also uses a cathepsin L-like protease to attack host fish [5]. The malaria protozoan Plasmodium falciparum, an apicomplexan, invades host erythrocytes with the use of the CP falcipain [6]. A CP, CysP, of the pathogenic bacterium Mycoplasma can cleave chicken IgG into its Fab and Fc fragments [7].
Given that CPs are involved in symbiosis, we hypothesized that a Symbiodinium CP(s) might exist and play a role in symbiosis. Furthermore, although genomic and transcriptomic studies of algal CPs have been performed [8,9], little direct information is available for these enzymes. For the study reported herein, we characterized the physical and biochemical properties of a CP from Symbiodinium sp. KB8, which had been isolated from the upside-down jellyfish (Cassiopea sp.) [10]. Among six fluorogenic peptide substrates tested, proteolytic activity in a crude Symbiodinium sp. KB8 extract was greatest for butyloxycarbonyl-Val-Leu-Lys-4-methylcoumaryl-7-amide (Boc-VLK-MCA). Although Boc-VLK-MCA is a known substrate for plasmin and calpain, which are not found in photosynthetic organisms, it has been shown to be degraded by some CPs [11]. Therefore, we named the enzyme associated with this activity VLK protease (VLKP). In addition to purifying and biochemically characterizing VLKP, we sequenced its gene, produced recombinant VLKP (rVLKP) in Escherichia coli, and compared the substrate specificities of native VLKP and rVLKP. Based on our results, we propose a possible physiological function(s) for VLKP.

Materials and methods

Symbiodinium sp. KB8 culture

Symbiodinium sp. KB8 algal cells isolated from the upside-down jellyfish were cultured in 3 l of f/2 medium [12] under 40-80 μmol photons m⁻² s⁻¹ light at 24°C in glass flasks for one week. Logarithmic growth-phase cells (OD₇₃₀ ≈ 0.3) were harvested by centrifugation (7,000 × g, 10 min, 4°C). The pelleted cells were suspended in 3% (w/v) NaCl and centrifuged again (9,000 × g, 15 min, 4°C). The pelleted cells were stored at -30°C immediately after freezing them in liquid N₂. During growth experiments, chlorophyll a concentration and protease activity (see below) were measured once a week. The chlorophyll a concentration in a 90% (v/v) acetone extract was calculated as described by Jeffrey and Humphrey [13].

CP assay

A slightly modified version of a published CP assay [11,14] utilized Boc-VLK-MCA as the substrate. Briefly, each reaction contained 50 μl of 100 mM succinate-borate (pH 4.0), 10 μl of 10 mM tris(2-carboxyethyl)phosphine hydrochloride, 29 μl of distilled water, and 1 μl of 10 mM Boc-VLK-MCA dissolved in DMSO. The reaction was initiated by adding 10 μl of an enzyme solution to 100 μl of the reaction mixture at 37°C. After a 30-min incubation period, the reaction was terminated by adding 2 ml of 1% (w/v) SDS in 100 mM sodium borate (pH 9.0). The fluorescence of the mixture was measured with an F-2500 fluorescence spectrophotometer (Hitachi High-Technologies, Tokyo, Japan; emission wavelength, 460 nm; excitation wavelength, 360 nm). Specific activity was expressed as μmol of 7-amino-4-methylcoumarin produced per hour per mg of enzyme at 37°C. The concentration of this product was determined by comparing its fluorescence intensity to a standard curve prepared with authentic 7-amino-4-methylcoumarin. The synthetic fluorogenic peptides used in this study are listed in S1 Table.

Purification of VLKP

All enzyme purification procedures were carried out at 4°C. Algal cells were suspended in 20 mM Tris-HCl (pH 8.0) and disrupted by ultrasonication at 140 W (Branson Sonifier 250D, Emerson, Japan). Unbroken cells and debris were removed by centrifugation (17,000 × g, 30 min) and then ultracentrifugation (150,000 × g, 60 min).
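To make the assay arithmetic concrete, the sketch below converts a fluorescence reading to specific activity using a linear 7-amino-4-methylcoumarin (AMC) standard curve; the curve slope, readings, reaction volume, and protein amount are invented example values, not calibration data from the paper.

```python
import numpy as np

# Hypothetical AMC standard curve: fluorescence vs. AMC concentration (µM)
amc_um = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
fluor = np.array([2.0, 55.0, 108.0, 214.0, 426.0])       # arbitrary units
slope, offset = np.polyfit(amc_um, fluor, 1)              # AU per µM

def specific_activity(f_sample, reaction_ml, minutes, protein_mg):
    """µmol AMC produced per hour per mg protein (units as defined above)."""
    amc_conc_um = (f_sample - offset) / slope             # µM AMC released
    amc_umol = amc_conc_um * reaction_ml / 1000.0         # µmol in the reaction
    return amc_umol * (60.0 / minutes) / protein_mg       # scale to per-hour, per-mg

# Example: 30-min assay in a 0.11 ml reaction with 0.002 mg protein (all invented)
print(f"{specific_activity(85.0, 0.11, 30.0, 0.002):.3f} µmol h^-1 mg^-1")
```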
Purification of VLKP

All enzyme purification procedures were carried out at 4˚C. Algal cells were suspended in 20 mM Tris-HCl (pH 8.0) and disrupted by ultrasonication at 140 W (Branson Sonifier 250D, Emerson, Japan). Unbroken cells and debris were removed by centrifugation (17,000 × g, 30 min) and then ultracentrifugation (150,000 × g, 60 min). The supernatant was filtered through a Miracloth (Calbiochem, Darmstadt, Germany), and the retained filtrate was then subjected to a 60% (w/v) (NH4)2SO4 precipitation with stirring for 30 min. The mixture was held for 1 h, after which it was centrifuged (17,000 × g, 15 min). Next, the supernatant was applied to a column (2.6 × 15 cm) of Toyopearl Butyl-650M equilibrated with 20 mM Tris-HCl (pH 8.0), 60% (w/v) (NH4)2SO4. The column was washed with three column volumes of the equilibration solution and eluted with 10 column volumes of a 60-0% (NH4)2SO4 linear gradient in 20 mM Tris-HCl (pH 8.0). The fractions with the greatest protease activity (~16% (w/v) (NH4)2SO4) were concentrated in a Centriplus YM-30 apparatus (Millipore, Bedford, MA). The concentrated sample was loaded onto a HiLoad 16/60 Superdex 200 column (GE Healthcare, Tokyo, Japan) equilibrated with 20 mM Tris-HCl (pH 8.0), 0.15 M NaCl, connected to an ÄKTAprime chromatography system (GE Healthcare). NaCl was included in the eluent to reduce protein adsorption to the surface of the resin. Proteins were eluted in the same solvent at a flow rate of 1.0 ml min⁻¹. The fractions with substantial protease activity (eluting at ~91 ml) were pooled, diluted with two volumes of 20 mM Tris-HCl (pH 8.0), and then applied to a Toyopearl DEAE-650S column (1.0 × 1.0 cm) equilibrated with 20 mM Tris-HCl (pH 8.0). The column was washed with three column volumes of the equilibration buffer and eluted with 10 column volumes of a 0-0.3 M linear gradient of NaCl in 20 mM Tris-HCl (pH 8.0). Fractions with substantial protease activity (eluting at ~0.11 M NaCl) were pooled and concentrated in a Minicon CS15 concentrator (Millipore). The concentrated sample was loaded onto a Superdex 200 HR 10/30 column (GE Healthcare) equilibrated with 20 mM Tris-HCl (pH 8.0), 0.15 M NaCl, which was connected to the ÄKTAprime system. Proteins were eluted with the equilibration buffer at a flow rate of 0.3 ml min⁻¹. The fractions with substantial protease activity (eluting at ~15 ml) were pooled and dialyzed against 20 mM Tris-HCl (pH 8.0) prior to characterization studies. Typical chromatographic profiles of the purification procedure are shown in the supporting figures.

Measurement of protein concentration

Protein concentration was measured using Pierce BCA Protein Assay kit reagents (Thermo Fisher Scientific, Yokohama, Japan), with BSA as the standard. The protein concentration of each column chromatography fraction was expressed as its OD280. Absorbance was measured using a UV-2450 UV-VIS spectrophotometer (Shimadzu, Kyoto, Japan).

Gel filtration for molecular mass determination

The molecular mass of purified VLKP was estimated by HiLoad 16/60 Superdex 200 gel filtration, with the chromatography controlled by the ÄKTAprime system. The column was equilibrated in 20 mM Tris-HCl (pH 8.0), 0.15 M NaCl, and the protein was eluted in the same buffer at a flow rate of 1.0 ml min⁻¹. Fractions of 0.5 ml were collected. The molecular mass markers (Sigma-Aldrich) blue dextran (2,000 kDa), thyroglobulin (669 kDa), alcohol dehydrogenase (150 kDa), BSA (66 kDa), and carbonic anhydrase (29 kDa) were used to calibrate the column.
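For a size-exclusion column of this kind, log10(molecular mass) is approximately linear in elution volume over the fractionation range, so the calibration can be sketched in a few lines. The marker masses below are the ones listed above, but the marker elution volumes are invented placeholders (only VLKP's elution volume, 92.5 ml, is reported later), and blue dextran is used to mark the void volume rather than entering the fit.

```python
import numpy as np

# Marker masses as listed above; elution volumes are assumed placeholders.
marker_kda = np.array([669.0, 150.0, 66.0, 29.0])
marker_ve_ml = np.array([62.0, 80.0, 88.0, 96.0])

coef = np.polyfit(marker_ve_ml, np.log10(marker_kda), 1)

def mass_kda(ve_ml):
    """log10(mass) is ~linear in elution volume over the fractionation range."""
    return 10.0 ** np.polyval(coef, ve_ml)

# With the real calibration, the paper obtains ~31.7 kDa at Ve = 92.5 ml.
print(round(mass_kda(92.5), 1))
```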
SDS-PAGE

Laemmli SDS-PAGE [15] was performed using polyacrylamide gels containing 12 or 14% (w/v) acrylamide. Samples were heated at 95˚C in the presence of 2-mercaptoethanol. The gels were stained with silver or Coomassie Brilliant Blue R-250 (CBB). Precision Plus Protein Kaleidoscope Prestained Protein Standards (Bio-Rad) were used to calibrate the gels.

In-gel digestion of VLKP

A TCA solution (100%, w/v; 250 μl) was added to 1 ml purified VLKP. The sample was incubated for 10 min at 4˚C and then centrifuged (15,000 × g, 10 min, 4˚C). The precipitate was washed twice with 200 μl acetone and centrifuged again (15,000 × g, 10 min, 4˚C). After removing the acetone, the precipitate was held on ice for 3 min. Then, the sample was suspended in 30 μl SDS-PAGE sample buffer, electrophoresed through an SDS-PAGE gel (12% (w/v) acrylamide), and then silver stained. The predicted VLKP bands were cut into small pieces to increase the surface area and put into a 1.5 ml tube. They were shaken with 100 μl of 15 mM potassium ferricyanide, 50 mM sodium thiosulfate for 10 min and then with 500 μl water for 15 min. The destaining procedure was repeated a second time. The proteins in the gel pieces were digested with trypsin as described [16]. The gel pieces were washed once with 50 mM ammonium bicarbonate and then three times with wash buffer containing 50 mM ammonium bicarbonate and 50% (v/v) acetonitrile for 15 min (each time) with vortexing. Acetonitrile (100 μl) was added to the tube to cover the gel pieces completely, with subsequent incubation for 5 min. The gel pieces were then dried completely using a CC-105 centrifugal concentrator (Tomy, Tokyo, Japan). Reduction of cysteine residues was carried out with a 10 mM dithiothreitol (DTT) solution in 50 mM ammonium bicarbonate for 45 min at 56˚C. After discarding the DTT solution, the same volume of a 55 mM iodoacetamide solution in 50 mM ammonium bicarbonate buffer was added and incubated in darkness for 30 min at room temperature to alkylate the cysteine residues. The iodoacetamide solution was replaced with wash buffer, and the mixture was vortexed two times for 15 min each. Gel pieces were washed and dried in 100% acetonitrile, followed by final drying in the centrifugal concentrator. The dried gel pieces were swollen with 2 μl trypsin solution (10 ng μl⁻¹) (Promega, Tokyo, Japan) reconstituted with 50 mM ammonium bicarbonate. The gel pieces were incubated overnight at 37˚C. The supernatant was transferred to a new 0.5 ml tube, and the peptides were extracted with 10 μl of 5% formic acid/50% acetonitrile for 10 min in a sonication bath. This step was repeated twice. Samples in extraction buffer were pooled in 0.5 ml tubes and evaporated in the CC-105 centrifugal concentrator. The volume was reduced to approximately 5 μl, and then 10 μl of 0.3% formic acid was added for nano-LC-ESI-MS/MS analysis.

Nano-LC-electrospray ionization-MS/MS of VLKP tryptic peptides

MS and tandem-MS spectra were obtained using an LC-ESI-LIT-q-TOF spectrometer (NanoFrontier eLD, Hitachi High-Technologies) as described [16]. Linear ion trap-time of flight (LIT-TOF) and collision-induced dissociation (CID) modes were used for MS detection and peptide fragmentation. The trypsin-treated liquid sample (10 μl) was diluted with formic acid solution to give a final concentration of 0.3% formic acid and then injected. Peptides were trapped with a C18 column (Monolith Trap C18-50-150, Hitachi High-Technologies). Peptide separation was achieved using a packed nano-capillary column (NTCC-360/75-3, Nikkyo Technos, Tokyo, Japan) at a flow rate of 200 nl min⁻¹. The separated peptides were then ionized with a capillary voltage of 1700 V.
The ionized peptides were detected using a detector potential TOF range of 2050-2150 V. The peptides in the column were eluted using a stepwise acetonitrile gradient (buffer A: 2% acetonitrile, 0.1% formic acid; buffer B: 98% acetonitrile, 0.1% formic acid; gradient protocol: 0 min, A = 98%, B = 2%; 60 min, A = 60%, B = 40%). De novo sequencing and protein identification were performed using PEAKS Studio software (Bioinformatics Solutions Inc., Waterloo, Canada). A Symbiodinium sp. KB8 database was constructed in-house using expressed sequence tag sequences found at http://medinalab.org/zoox/kb8_assembly.fasta.bz2 [17]. De novo sequencing was performed using the Peaks algorithm with the following parameters: precursor-ion error tolerance, 0.05 Da; product-ion error tolerance, 0.05 Da; digestion enzyme, trypsin; fixed modifications, cysteine carboxyamidomethylation; variable modifications, histidine, tryptophan and/or methionine oxidation. The sequencing data were subjected to a SPIDER homology search/BLAST (PEAKS Studio) against the in-house Symbiodinium database.

Cloning of VLKP cDNA and construction of a vector for rVLKP expression

Frozen Symbiodinium sp. KB8 cells were ground to a fine powder under liquid nitrogen using a mortar and pestle. Total RNA in the frozen powder was purified with RNeasy Mini kit reagents (Qiagen, Tokyo, Japan). cDNA was synthesized with PrimeScript 1st strand cDNA Synthesis kit reagents (Takara, Shiga, Japan) and total RNA as the template. To obtain the first half of the VLKP gene, the forward primer was 5'-ATGAACGCGGCCACGGCCTTTG-3' and the reverse primer was 5'-TCAAATAACGATGGCTGTCTCTTCAGCC-3'. RT-PCR was carried out using KOD-Plus-Neo DNA polymerase (Toyobo, Osaka, Japan) and the PrimeScript-synthesized cDNA as the template. Using In-Fusion HD Cloning kit reagents (Takara), the reverse-transcribed amplicons were inserted into a SmaI-digested pQE-32 vector downstream of an encoded, in-frame, N-terminal His6 tag (Qiagen). Sequencing of the plasmid confirmed correct insertion of the amplicon.

Protease activity in the crude extract of Symbiodinium cells

The protease activity of the crude extract of Symbiodinium sp. KB8 was found to be greatest for Boc-VLK-MCA among the six synthetic fluorogenic peptide substrates (S1 Fig). The activity for each substrate was greatest between pH 4 and 4.5. To determine the relationship between protease activity and cell proliferation, VLKP activity, chlorophyll a concentration, and OD730 were measured at each Symbiodinium growth phase (S2 Fig). Activity during the late logarithmic growth phase was greater than during other phases. Therefore, to purify and characterize VLKP, cells were harvested during late logarithmic growth (OD730 ≈ 0.3).

Purification of VLKP

After ammonium sulfate fractionation, four chromatography steps were sufficient to purify VLKP to near homogeneity. The ammonium sulfate-fractionated extract from Symbiodinium sp. KB8 cells (wet weight, 31.4 g) was sequentially chromatographed through columns of Toyopearl Butyl-650M, HiLoad 16/60 Superdex 200, Toyopearl DEAE-650S, and Superdex 200 HR 10/30. After Toyopearl DEAE-650S chromatography, the specific activity of the enzyme fraction had increased 566-fold in comparison with that of the crude extract; however, after the subsequent Superdex 200 HR 10/30 chromatography, the specific activity of the sample decreased to 156-fold that of the crude extract, and the yield was a meager 0.4%.
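The fold-purification and yield figures quoted here are simple ratios of specific activities and total activities. A minimal sketch, with invented protein and activity totals chosen only so that the printed numbers reproduce the reported 566-fold, 156-fold, and 0.4% values:

```python
def purification_table(steps):
    """steps: (name, total protein in mg, total activity in units), crude first."""
    name0, p0, a0 = steps[0]
    sa0 = a0 / p0                                  # specific activity of the crude extract
    for name, p, a in steps:
        sa = a / p
        print(f"{name:<22s} SA={sa:10.3f}  fold={sa / sa0:8.1f}  yield={100 * a / a0:5.1f}%")

# Invented totals, chosen only to reproduce the reported 566-fold peak after
# DEAE and the final 156-fold / 0.4% yield after the last gel filtration.
purification_table([
    ("Crude extract",       1000.0,  100.0),
    ("Toyopearl DEAE-650S",    0.05,   2.83),
    ("Superdex 200 HR",        0.0256, 0.40),
])
```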
This yield loss could be partly attributed to denaturation and adhesion of the protease to the Minicon filtration apparatus just prior to Superdex 200 chromatography. Although the specific activity decreased during Superdex 200 chromatography, this step was retained for final purification because it eliminated most of the remaining contaminating proteins (refer to the protein profile in Fig 1) and thus allowed us to carry out a definitive peptide sequence analysis (see below). A summary of the purification procedure is presented in Table 1.

VLKP is monomeric and most active at low pH

The molecular mass of purified, native VLKP was estimated by Superdex 200 chromatography (S4 Fig). The VLKP elution volume (92.5 ml), in comparison with those of the molecular mass markers, indicated that its molecular mass is 31.7 kDa. When denatured by 2% SDS and boiling, according to its position in an SDS-PAGE gel, its molecular mass is 31.3 kDa. The agreement between these two molecular mass values indicated that VLKP is a monomer. The pH optimum for purified VLKP activity was also examined (S5 Fig). Substrate specificity of purified VLKP was examined using the synthetic fluorogenic peptides at pH 4.0 (Table 2). As found for the crude extract, the purified enzyme was most active with Boc-VLK-MCA.

Effects of heat treatment and protease inhibitors on VLKP activity

The optimum temperature and thermal stability of purified VLKP were determined (S6 Fig). Although the optimum growth temperature of Symbiodinium sp. KB8 was found to be 24˚C, VLKP was most active at 40˚C (S6(A) Fig). However, the stability of VLKP decreased as the temperature increased, with the temperature effect being especially noticeable above 30˚C (S6(B) Fig). Only 50% of the activity remained at 40˚C, and 10% remained at 60˚C. The stability of VLKP was measured by holding the enzyme in the absence of substrate at the designated temperature for 10 min and then measuring the residual activity under standard conditions at 37˚C, whereas the experiments relating activity and temperature were performed at the designated temperature in the presence of substrate. The differences between the curves shown in S6 Fig suggested that the protease-substrate complex is more stable than the protease itself. To determine the type of VLKP active site, the effects of various protease inhibitors on its activity were tested (Table 3; addition of distilled water to a reaction served as the control (None), whose activity was set to 100%, with activities for the other conditions scaled to that value). Proteases can be classified into four groups, namely aspartic proteases, CPs, serine proteases, and metalloproteases. These proteases have an oxyanion hole which makes their substrates susceptible to attack by a nucleophile [18]. Activity significantly decreased in the presence of CP inhibitors, i.e., 10 μM leupeptin (98% inhibition), 1 μM trans-epoxysuccinyl-L-leucyl-amido(4-guanidino)butane (98% inhibition), 1 mM antipain (94% inhibition), or 10 mM N-ethylmaleimide (90% inhibition). In addition, serine protease inhibitors also affected activity, i.e., 1 mM PMSF (65% inhibition) or 1 mM N-tosyl-L-phenylalanine chloromethylketone (97% inhibition), but these inhibitors were similarly or less potent than the CP inhibitors. Conversely, EDTA and EGTA (O,O'-bis(2-aminoethyl)ethyleneglycol-N,N,N',N'-tetraacetic acid), each of which inhibits metalloproteases, and pepstatin A, which inhibits aspartic proteases, did not affect activity.
These results indicated that VLKP is either a CP or a serine protease.

VLKP is a CP

The cysteine thiol of a CP can be oxidized via nucleophilic attack by a reactive oxygen species, leading to loss of activity. Conversely, CP activity increases in the presence of thiol-reducing agents [19], which prevent oxidation of the active-site cysteine. We therefore tested the effects of thiol-reducing agents on VLKP activity (Table 4) and found that it was significantly increased in the presence of 10 mM DTT (10.9-fold) or 10 mM tris(2-carboxyethyl)phosphine hydrochloride (TCEP-HCl) (8.6-fold), but activity was less affected by 10 mM 2-mercaptoethanol (2-ME) or reduced glutathione (GSH) (each >2.5-fold) and hardly affected by cysteine (>1.5-fold), similar to published results [19]. These results indicated that VLKP is a CP. The possible effects of various metal ions on VLKP activity were examined (1 mM each; S2 Table). The presence of Mg²⁺ or Fe²⁺ had a relatively small but noticeable effect on activity (>1.5-fold). However, in the presence of a chelating agent, e.g., EDTA or EGTA, activity did not decrease (Table 3), indicating that divalent metal ions are not essential for VLKP activity but may stabilize its structure. In addition, activity was significantly inhibited by CuSO4.

VLKP is encoded as a tandem repeat pro-peptidase

Purified VLKP was digested with trypsin, and the resultant peptides were subjected to LC-MS/MS sequencing with subsequent analysis with BLAST. The peptide amino acid sequences were then compared with amino acid sequences in our in-house Symbiodinium expressed sequence tag database. We used part of the VLKP amino acid sequence to construct specific PCR primers to reverse transcribe its gene to yield a cDNA (Fig 2). A search for motif sequences in VLKP classified it as a member of the peptidase C1A superfamily (C1A). C1As are conserved in a variety of organisms, e.g., the plant vacuolar proteases papain and aleurain, and the animal lysosomal protease cathepsin. Bacteria, fungi and protists also have C1As [9]. The predicted primary structure of VLKP includes two highly conserved, tandemly repeated pro-peptidase sequences (identity of the repeated amino acid sequences: 98%; DNA sequences: 97%). Specifically, the sequences between Lys37 and Ala332 and between Lys357 and Ala652 are identical. Although the precursor is predicted to have a molecular mass of ~71 kDa, the gel filtration and SDS-PAGE results indicated that VLKP is a 31- to 32-kDa monomer. Notably, a termination codon was not found between the tandem sequences. These results indicated that the precursor is likely to be cleaved posttranslationally. Two domains were apparent within each VLKP pro-peptidase: an inhibitor prodomain and a peptidase domain. The sequence of the inhibitor prodomain is very similar to those of the inhibitor family I29, which are conserved in proC1As. I29 family domains contain the ERFNIN motif (ExxxRxxxFxxNxxxIxxxN), which is also present in the VLKP prodomain (Fig 2). Furthermore, the typical C1A catalytic triad (Cys, His, and Asn) appears to be conserved in the VLKP peptidase domain (Fig 2). Homology modeling with the SWISS-MODEL server predicted that VLKP has a 3D structure similar to that of a known C1A [20]. C1As have an oxyanion hole that contains these three residues [18], and VLKP likely has the same system.
VLKP degrades C1A-specific substrates

To confirm that VLKP can be classified as a C1A and has C1A function, the ability of VLKP to degrade C1A-specific substrates was tested (Table 5). The abilities of VLKP to degrade benzyloxycarbonyl-Phe-Arg-4-methylcoumaryl-7-amide (Z-FR-MCA), which only cathepsin B/L degrades, and benzyloxycarbonyl-Leu-Arg-4-methylcoumaryl-7-amide (Z-LR-MCA), which only cathepsin K/S/V and papain degrade, were compared with its ability to degrade Boc-VLK-MCA. For Z-FR-MCA, 152% (crude extract) and 77% (purified enzyme) of the activity found for Boc-VLK-MCA (100%) was observed; for Z-LR-MCA, the respective values were 140% and 120%. The fact that both peptides were cleaved by purified VLKP supports its categorization as a C1A. That the activity for Z-FR-MCA was lower for the purified enzyme than for the crude extract suggests that an unidentified C1A(s) closely related to cathepsin B/L might exist in the crude extract and was likely removed during the last step of column chromatography. Furthermore, VLKP is likely to be more closely related to cathepsin K/S/V or papain.

Recombinant and native VLKP have similar substrate specificities

The N-terminal peptidase domain was expressed in E. coli as a His-tagged protein (rVLKP) to determine whether the cDNA encodes active VLKP. rVLKP was solubilized and purified by affinity chromatography. A single band for rVLKP was observed upon SDS-PAGE and western blotting (Fig 3). Although the molecular mass of rVLKP was predicted to be 36.8 kDa with the His-tag, the SDS-PAGE band migrated at ~42 kDa. The electrophoretic difference might be caused by the presence of the His-tag. After centrifugation of the sonicated E. coli cells, the soluble and insoluble/pelleted fractions were compared. The CBB-stained gels revealed an intense band of molecular mass ~42 kDa, and western blotting for VLKP revealed its presence only in the insoluble fraction. These results suggested that much of the expressed rVLKP was in inclusion bodies. Furthermore, for the western-blotted insoluble fraction, a faint band was observed below the rVLKP band (39 kDa, Fig 3). This band was not detected after affinity chromatography (Fig 3), and its identity remained unknown. We assessed the substrate specificity of purified rVLKP (Table 6). Compared with the activity of native VLKP purified from Symbiodinium cells, rVLKP had similar activity. However, Ac-YVAD-MCA, Boc-LRR-MCA, and Suc-LLVY-MCA were degraded less by rVLKP, which may have been a consequence of impurities remaining in the native VLKP sample.

Identification and characterization of VLKP

The predicted VLKP amino acid sequence contains two conserved C1A-type prosequences (Fig 2). Based on its predicted sequence, VLKP should have a molecular mass of ~71 kDa; native VLKP, however, was characterized as a monomer of 31-32 kDa (S4 Fig). Because we did not express full-length rVLKP in E. coli, we could not assess how the tandem repeat is cleaved, i.e., by autolysis or by proteolysis by another enzyme(s). Further studies are thus needed to understand the activation mechanism of VLKP. Precursors of C1As have the inhibitor prodomain, I29, located N-terminally to the peptidase domain. The I29 sequence includes the characteristic motif ERFNIN, and this domain inhibits the activity of the peptidase domain. C1As are activated when the I29 is autocatalytically cleaved upon a decrease in cellular pH [21,22]. In addition, I29 is needed for the correct folding and membrane anchoring of the precursor [21].
VLKP contains an ERFNIN motif (Fig 2) and is most active at pH 4.0-4.5, but it is hardly active at higher pH values (S5 Fig), which suggests that excision of the inhibitor domain occurs at pH < 5. Furthermore, a catalytic triad motif (Cys, His, Asn) probably exists in the VLKP peptidase domain (Fig 2) [18]. Precursors of papain-like proteases, i.e., C1As found in plant vacuoles, have an I29 domain as well as a Pro-rich prodomain and a granulin prodomain that are C-terminal to the peptidase domain [23,24]; however, cathepsin-like proteases, which are also C1As and are found in animal lysosomes, have only an I29 domain. Alveolata, e.g., the ciliate Philasterides dicentrarchi and the malaria protozoan Plasmodium falciparum, also express C1A precursors in which only an I29 domain is found. A C-terminal prodomain is not present in the VLKP precursor (Fig 2), indicating that VLKP and other Alveolata C1As may be phylogenetically more closely related to those of animals than to those of plants. However, our BLAST search revealed that the sequence similarity of VLKP is greater for Alveolata C1As (≈55% at best) than for those of plants (≈50%) or of animals (≈40%). Recently, a new eukaryotic taxonomy based on molecular phylogenetics classified animals into an Amorphea cluster and classified Alveolata and plants into a Diaphoretickes cluster [25]. The greater similarity of Alveolata C1As to plant C1As than to animal C1As is consistent with this type of taxonomy. Therefore, Alveolata and plant C1As possibly evolved from a common ancestor, and plant C1As might have acquired their Pro-rich and granulin prodomains after divergence from Alveolata.

Possible physiological function of VLKP

Plant C1As are involved in pathogen perception, disease-resistance signaling, defense against insects, and senescence [18]. In higher animals, most C1As are mainly involved in intracellular proteolysis or metabolic regulation [26]. Alveolata C1As, especially those of apicomplexa, destroy membrane barriers or immune system components in their hosts so as to obtain nutrition via digestion of host proteins [6]. Because most C1As have an inhibitor domain and are activated under acidic conditions, most plant C1As are located in the acidic environment of vacuoles, and animal C1As in the acidic environment of lysosomes. To date, no information has been available concerning the location(s) and function(s) of Symbiodinium C1As. However, VLKP is most active at pH ≤ 4.5 (S5 Fig), suggesting that it resides in acidic organelles, e.g., vacuoles, as do other C1As. In addition, because more VLKP-type activity is found in the late logarithmic phase, VLKP is likely to be involved in late-stage metabolism, e.g., senescence and/or nitrogen recycling (note that the medium used for this study, namely f/2, has a low nitrogen concentration, i.e., <1 μM). In fact, the cathepsin-like C1As are the most abundant proteases present during leaf senescence [27]. Notably, corals receive most of their essential amino acids from their intracellular symbionts [8]; as such, amino acid production by Symbiodinium is important to mutualism. We therefore predict that VLKP will be found in acidic vacuoles and that VLKP-mediated protein degradation during the late logarithmic phase of Symbiodinium should counter the effects of nitrogen deficiency by providing nitrogen from proteins no longer required for survival. In the dinoflagellate Peridinium gatunense, a CP may be involved in programmed cell death [28].
However, because this protease seems to function extracellularly at pH > 5 [29], it is probably not related to VLKP. To evaluate the in vivo function of VLKP, we need to identify its subcellular location and substrates. Symbiodinium morphology, growth rate, and gene-expression pattern differ when it is free living or in a symbiotic relationship. Hence, it may be fruitful to compare VLKP activity in these two types of cells.

Supporting information

S6 Fig. Optimum temperature and thermal stability of purified VLKP. Maximum activity was assigned a value of 100%, and the activities of the other samples were scaled to that value. Values represent the mean ± SE of three independent experiments. A, Optimum temperature determination: activity of purified VLKP was measured between 20 and 50˚C at 5˚C intervals. B, Thermal stability determination: purified VLKP was held at a temperature between 10 and 60˚C for 10 min, and then activity was measured under standard conditions. (PDF)

S1 Table. Synthetic fluorogenic peptides used in this study. The target substrates of each enzyme are described per the Peptide Institute (http://www.peptide.co.jp) and MEROPS (http://merops.sanger.ac.uk) resources. (PDF)

S2 Table. Effects of metal ions on VLKP activity. Each value is shown relative to the control (distilled water). Each metal ion was at a final concentration of 1 mM. The counterion was SO4²⁻ for Cu²⁺ and Fe²⁺, and Cl⁻ for the other cations. Values represent the mean ± SE of three independent experiments. N.D., not detectable. (PDF)
A generalized Complex Ginzburg-Landau Equation: global existence and stability results

We consider the complex Ginzburg-Landau equation with two pure-power nonlinearities: $$ \partial_t u = (a + i\alpha) \Delta u + (b + i \beta) |u|^{\sigma_1} u - (c+i\gamma) |u|^{\sigma_2} u + k u. $$ After proving a general global existence result, we focus on the existence and stability of several periodic orbits, namely the trivial equilibrium, bound-states and solutions independent of the spatial variable.

Introduction and main results

The complex Ginzburg-Landau equation models various physical phenomena, especially in the theory of superconductivity and in fluid dynamics. A particular Ginzburg-Landau equation can be written as $\partial_t A = (1 + i\alpha)\Delta A + (1 + i\beta)|A|^2 A + kA$, which admits the development of singularities for certain values of the parameters (see e.g. [18], [4], [16]). However, the introduction of a higher-order term with a negative sign, like $-(1+ic)|A|^4 A$, allows one to saturate the explosive instabilities. We refer, e.g., to [8] and [1] for a more complete physical background. In this paper, we extend some results on the global existence of solutions and their stability, and also on the existence of standing wave solutions in one dimension, previously established for the complex Ginzburg-Landau equation in [6], where only one nonlinear term was present. As mentioned, the introduction of a higher-order term is necessary for more precise physical descriptions. Here $D(A_N) = \{u \in H^2(\Omega) : -\int_\Omega \Delta u\, v\, dx = \int_\Omega \nabla u \cdot \nabla v\, dx,\ \forall v \in H^1(\Omega)\}$ (Neumann condition). It is well known that these operators generate an analytic semi-group (see [9]). Denoting by A any of these two operators A_D or A_N, let us introduce the following definition:

Definition 1.1. A function u(·) ∈ C([0, T); L²(Ω)), T > 0, is called a strong solution of (gCGL) if u(t) ∈ D(A), du/dt(t) exists for t ∈ (0, T), u(0) = u₀ and the differential equation in (gCGL) is satisfied for t ∈ (0, T).

In particular, for any σⱼ > 0 if N = 1, 2, there exists T₀ = T₀(u₀) > 0 such that the problem (gCGL) has a unique solution on [0, T₀), and this solution depends continuously on the initial data (see [14], pp. 54 and 62). Now we have the following result:

Theorem 1.2. The problem (gCGL) has a unique strong solution on [0, T), and this solution depends continuously on the initial data. Moreover, if 0 < σ₁ < σ₂, c > 0, α ≠ 0 and γ/α ≥ 0, the solution is global.

We now focus on the stability of the equilibrium solution u ≡ 0, the asymptotic decay of the global solutions of (gCGL) depending on the parameters, and the stability of some particular time periodic solutions. First, we have the following result:

Theorem 1.6. Assume the hypotheses of Theorem 1.2 and 0 < σ₁ < σ₂.
1. L^p stability: the equilibrium is L^p-stable; in addition, if k < 0 and b(σ₂ − σ₁)/σ₂ < |k|, we have asymptotic stability and ‖u(t,·)‖_{L^p} → 0 as t → ∞, for all u₀ ∈ H₀¹(Ω). In the particular case p = 2, if Ω is a bounded domain (here b₊ = max{0, b}, ω_N denotes the volume of the unit ball in R^N and |Ω| the volume of Ω), then ‖u(t,·)‖_{L²} → 0 as t → ∞, for all u₀ ∈ H₀¹(Ω).
2. H¹ stability: stability holds provided Ω is a bounded domain and b(σ₁ + 2) satisfies a suitable smallness condition.

Finally, we study the stability of some particular time periodic solutions of the generalized complex Ginzburg-Landau equation. Consider the (gCGL) equation on a bounded domain Ω with the Neumann condition on the boundary and assume 0 < σ₁ < σ₂. Take the associated ordinary differential equation (1.5), u̇ = (b + iβ)|u|^{σ₁} u − (c + iγ)|u|^{σ₂} u + ku, which is equivalent, if we set u = u₁ + iu₂, to the real planar system (1.6)-(1.7), and look for periodic solutions. Multiplying (1.5) by ū and taking the real part, we obtain
(1/2) d/dt |u|² = b|u|^{σ₁+2} − c|u|^{σ₂+2} + k|u|².
We consider now the two following cases.
1. c = 0 (which corresponds, in particular, to the (CGL) equation). If bk < 0, we see that (1.5) admits a periodic solution with orbit on the circle |z| = r₀, with r₀ such that b r₀^{σ₁} + k = 0 (this is just a consequence of the Poincaré-Bendixson theorem; see e.g. [13]). We denote this T₁-periodic solution by p(t).
2. In a similar way we now get, if bc < 0, that the planar system (1.5) has a periodic solution with orbit on the circle |z| = r₁, with r₁ such that b r₁^{σ₁} − c r₁^{σ₂} + k = 0 (equivalently, b − c r₁^{σ₂−σ₁} + k r₁^{−σ₁} = 0); we denote this T₂-periodic solution by q(t).
It is clear that the (gCGL) equation with the Neumann condition on the boundary admits the time periodic solutions P(x, t) ≡ p(t), Q(x, t) ≡ q(t) for all x ∈ Ω.
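Several of the displayed formulas around these periodic orbits were lost in extraction; the following short computation in polar coordinates is a reconstruction (my sketch, assuming β ≠ 0 in case 1, and not the paper's own wording) of where the radius and the period come from:

```latex
% Case 1 (c = 0): pass to polar coordinates u = r e^{i\theta} in (1.5).
\dot r = \bigl(b\,r^{\sigma_1} + k\bigr)\,r, \qquad
\dot\theta = \beta\, r^{\sigma_1}.
% \dot r vanishes on the circle r = r_0 = (-k/b)^{1/\sigma_1}, which requires bk < 0.
% There the phase rotates uniformly:
\dot\theta = \beta\, r_0^{\sigma_1} = -\frac{\beta k}{b},
\qquad
T_1 = \frac{2\pi}{\lvert \beta k / b \rvert} \quad (\beta \neq 0).
```

The same computation applied to the full nonlinearity gives the circle |z| = r₁ carrying q(t).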
We have now the following result:

Theorem 1.9. Let Ω ⊂ R^N be a bounded domain and consider the (gCGL) equation with a Neumann condition on the boundary. Then the solution u(t) of (gCGL) with initial data u(0) = u₀ exists on 0 ≤ t < ∞, and there exist a real ω and c > 0 such that the corresponding convergence estimate holds.

The paper is organized as follows: in Section 2, we prove the global existence result (Theorem 1.2). In Section 3, the construction of bound-states on the real line is carried out. In Section 4, we study the stability of the trivial solution. Finally, Section 5 is devoted to the stability of periodic solutions.

3. Existence of bound-states of (gCGL*)

Proof of Theorem 1.3. We look for solutions φ ∈ H¹(R) of the elliptic equation (B-S) or, in an equivalent form, in terms of a constant d ∈ R and of ψ > 0, the unique solution (up to translations of the origin) of the stationary Schrödinger equation (3.2). Note that the existence of the solution ψ follows from the hypotheses of [2], Th. 5, among which f(x₀) > 0. First, one has the identity (3.1), and we note that if ψ is a solution of (3.2), then the equation can be integrated directly. It follows from (3.1), after a short computation, that ε = ±√(ω² + k²).

4. Stability of the trivial equilibrium

In this section, we study the stability of the equilibrium solution u ≡ 0 and the asymptotic decay of global solutions of (gCGL), depending on the parameters and on the coefficient k of the driving term. Let us denote by S(t) the dynamical system associated with (gCGL): S(t)u₀ ≡ u(t; u₀), t ≥ 0.

Definition 4.1. We say that u₀ ∈ H₀¹(Ω) is stable if for any δ > 0 there exists η > 0 such that ‖S(t)u₀ − S(t)y‖_{H¹} ≤ δ for all t ≥ 0 and all y ∈ H₀¹(Ω) with ‖u₀ − y‖_{H¹} < η. In addition, we say that u₀ is asymptotically stable if u₀ is stable and there exists η > 0 such that lim_{t→∞} ‖S(t)u₀ − S(t)y‖_{H¹} = 0 for all y ∈ H₀¹(Ω), ‖u₀ − y‖_{H¹} < η.

More generally, if S(t) denotes a dynamical system on a Banach space H, we recall that a Lyapunov function is a continuous function W : H → R such that Ẇ(u) ≤ 0 for all u ∈ H. The next lemma is mainly proved in [12].

Lemma 4.2. Let S(t) be a dynamical system on a Banach space (D, ‖·‖), let E be a normed space such that D ↪ E, and let W be a Lyapunov function on D controlled from above and below by the E-norm. Then the equilibrium point 0 is E-stable, in the sense that trajectories starting close to 0 in D remain small in E. Assume in addition that Ẇ is strictly negative away from 0. Then lim_{t→∞} ‖S(t)u₀‖_E = 0 for any u₀ ∈ D.

Proof of Theorem 1.6. 1. Let us denote by S(t)u₀ ≡ u(t, u₀) the unique global solution of (gCGL) under the hypotheses of Theorem 1.2 and define W_p(u) = ‖u‖^p_{L^p}. Combining an interpolation estimate with the Young inequality, one derives, under the conditions of the theorem, that Ẇ_p(u) ≤ pk‖u‖^p_{L^p}, and the conclusion follows from Lemma 4.2. If p = 2 and Ω is bounded, by the Poincaré inequality we obtain the same conclusion under the conditions stated in the theorem.

2. We now define the new functional V. It is clear that V is a continuous real function on H₀¹(Ω).
By interpolation and the Young inequality, we obtain the estimate (4.4) and, when Ω is a bounded domain, the estimate (4.5). In addition, for any u ∈ H₀¹(Ω) ∩ H²(Ω) and h ∈ H₀¹(Ω), we can compute the derivative of V along trajectories. Therefore, for all u = u(t) ∈ H₀¹(Ω) ∩ H²(Ω) and for α/a = β/b = γ/c, we obtain the inequality (4.6) for some 0 < t* < t, and so (4.6) is true for all t ≥ 0. Hence the functional V is a Lyapunov function and, under the conditions (4.4), (4.5), we have the stability in H₀¹(Ω) of the equilibrium solution u ≡ 0.

We now prove the asymptotic stability. We start from the identity (4.7), and one has the following estimates: if bσ₁/σ₂ < c and b(σ₂ − σ₁)/σ₂ < |k|/2, suitable bounds follow from (4.3). Finally, we remark that
|∫_Ω Δu |u|^{σ₁} ū dx| ≤ (σ₁ + 1) ∫_Ω |∇u|² |u|^{σ₁} dx,
and so, if we assume |b|(σ₁ + 1) < min{c, |k|}, we get the required bound. If k < 0, it is now clear that the asymptotic stability of u ≡ 0 follows from (4.7) and (4.8)-(4.12). When k = 0, it is sufficient to estimate the fifth term on the r.h.s. of (4.7) with b > 0. From the estimates (4.8), (4.12) and (4.13) we must require b(σ₁ + 1) ≤ c and b(σ₁ + 1) < a ω_N^{2/N} |Ω|^{−2/N}, and we note that this second condition implies the last stability condition in (4.5). The proof is now complete.

5. Stability of some time periodic solutions of (gCGL)

Consider the (gCGL) equation on a bounded domain Ω with the Neumann condition on the boundary. We now study the stability of some particular time periodic solutions. Let ϑ(t) be a T-periodic solution of the ordinary differential equation (1.5), u̇ = (b + iβ)|u|^{σ₁} u − (c + iγ)|u|^{σ₂} u + ku, associated with the (gCGL) equation, which is equivalent, if we set u = u₁ + iu₂, to the planar system (1.6)-(1.7).

Proof of Theorem 1.9. First we linearise the (gCGL) equation around the T-periodic solution Θ(x, t) ≡ ϑ(t). We obtain the linear variational equation (5.1), v_t = A_N v + B(t)v, where A_N = (a + iα)Δ denotes the Neumann operator. If we set v = v₁ + iv₂ and ϑ = ϑ₁ + iϑ₂, we can write B(t) as a real 2 × 2 matrix. Notice that B(t) is T-periodic. Now, let R(t, s) be the evolution operator for (5.1), i.e., v(t) = R(t, s)v₀ is the solution of (5.1) with initial data v(s) = v₀, and recall that the eigenvalues of the period map, U₀ = R(T, 0), are the characteristic multipliers. Since A_N has compact resolvent, U₀ is compact, and so the spectrum σ(U₀)\{0} is entirely composed of characteristic multipliers. Next, we prove the following result: the characteristic multipliers of (5.1) are the multipliers of the planar system (5.4). Indeed, the eigenvalues of U₀ are the eigenvalues of e^{(A_N + C)T}, i.e., the characteristic multipliers of (5.1) are the multipliers of (5.4). Denote these multipliers by μⱼ (j = 1, 2). It is well known that the μⱼ must satisfy the condition (see, e.g., [7])
μ₁μ₂ = exp( ∫₀^T Tr(−λI + B(t)) dt ).   (5.5)
We consider now the two cases stated in the theorem.
1. In the (gCGL) equation let c = 0 (this corresponds, in particular, to the (CGL) equation) and assume bk < 0. Take the T₁-periodic solution P(x, t) ≡ p(t) for all x ∈ Ω. We then evaluate (5.5) for each eigenvalue λ of −Δ with the Neumann condition, using that b|p(t)|^{σ₁} + k = 0 for all t ∈ [0, T₁] (recall that the T₁-periodic solution p(t) has its orbit on the circle |z| = r₀, with b r₀^{σ₁} + k = 0). If b > 0 and if we take λ = λ₀ = 0 (the first eigenvalue of the Neumann operator), it follows that μ₁μ₂ = exp(−kσ₁T₁) > 1. On the other hand, since the (gCGL) equation is an autonomous system, the linear variational equation (5.1) always has 1 as a characteristic multiplier, and so the T₁-periodic solution P(x, t) is unstable (see [14], Th. 8.2.4).
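The computation behind this multiplier product was partly lost in extraction; the following reconstruction is my sketch, consistent with the two limiting statements kept in the text but not taken verbatim from the paper:

```latex
% For f(u) = (b+i\beta)|u|^{\sigma_1}u + ku viewed as a real planar field,
% the trace of its Jacobian on |u| = r is
\operatorname{Tr} Df = 2\operatorname{Re}\frac{\partial f}{\partial u}
  = 2\bigl(b\,r^{\sigma_1} + k\bigr) + \sigma_1\, b\, r^{\sigma_1},
% which on the orbit |p(t)| = r_0 (where b r_0^{\sigma_1} + k = 0) reduces to
\operatorname{Tr} Df = -\sigma_1 k .
% Liouville's formula (5.5), applied to the mode of -\Delta with eigenvalue \lambda,
% then gives
\mu_1 \mu_2
  = \exp\!\Bigl(\int_0^{T_1} \operatorname{Tr}\bigl(-\lambda (a+i\alpha) I + Df\bigr)\,dt\Bigr)
  = e^{-(2a\lambda + k\sigma_1)\,T_1}.
```

At λ = 0 this reduces to exp(−kσ₁T₁), which exceeds 1 when k < 0 (i.e., b > 0); when b < 0 (so k > 0) the product is below 1 for every λ ≥ 0, matching the two cases treated in the text.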
If b < 0, it is clear that μ₁μ₂ < 1 for all λ ∈ σ(−A_N), which implies the asymptotic stability of P(x, t) (see [14], Th. 8.2.3).
In-Situ Experimental Modal Testing of Railway Bridges †

In this paper the potential application of experimental modal testing of railway bridges by means of the forced vibration excitation method is proposed to identify reliable and reproducible values of the natural frequencies and damping coefficients. It will be shown that the damping values determined by in-situ experimental modal testing are in most cases significantly higher than the values given in EN 1991-2, and that the normative damping values are quite conservative. The measuring results of a framed concrete bridge with 16.1 m span length are presented, and the dependence of the dynamic parameters on seasonal temperature changes and on the size of the bridge vibration amplitude is discussed in detail.

Introduction

Railway bridges are excited to forced vibrations during train crossing, and destabilization of the ballast bed may result. Thus, instability of the rail position may occur, leading to critical states for trains and passengers, respectively. During the design process of new railway bridges built on high-speed railway lines in European countries, a dynamic analysis of train crossing must be performed, and a limit value of 3.5 m/s² for the maximum vertical bridge deck acceleration must be fulfilled according to EN 1991-2 [1]. A second limit value is the restriction of the bridge end rotations due to train crossing to a value of 6.7 ‰ according to the Austrian guideline "Dynamic calculation of railway bridges due to train crossing" [2]. In addition, the internal force variables, i.e., bending moments and shear forces, due to train crossing must be smaller than the internal forces that result from the static analyses of the bridge design.

The size of the bridge's forced vibration amplitudes due to train crossing depends on both train-specific and bridge-specific parameters. Train-specific parameters represent the dynamic excitation forces and are influenced by train speed, axle load and axle distances. The bridge-specific parameters are essentially the natural frequencies and the structural damping coefficients. In cases where the periodic excitation due to train crossing meets one of the natural frequencies of the bridge structure, resonant vibrations occur with unwanted large vibration amplitudes.

Within the dynamic analyses of train crossing, the natural frequencies of railway bridges are computed on the basis of the ratio of bending stiffness to mass per unit length [3]. Both the stiffness and the mass are taken from the bridge design documents. If the boundary conditions are realistically defined in the analytical or numerical model, the calculated natural frequencies agree well with the frequencies determined by in-situ modal testing.

The structural damping of bridges is defined by the damping coefficient of Lehr, and it cannot be calculated in practical applications. Within numerical simulations of train crossing, the damping coefficients are therefore always chosen according to the regulations given in EN 1991-2. For steel and concrete bridges with span lengths L ≥ 20 m, values of 0.5% and 1.5%, respectively, have to be chosen. For bridges with a smaller span length, the damping can be increased by 0.125 × (20 − L) in the case of steel bridges and by 0.07 × (20 − L) in the case of concrete bridges.
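The normative damping rules just quoted can be wrapped in a small helper for use in such simulations. This is a sketch based only on the percentages and span corrections cited in this paper (function name and structure are mine; consult EN 1991-2 itself for the authoritative table):

```python
def en1991_2_damping(span_m, bridge_type):
    """Lower-bound damping ratio zeta in %, per the EN 1991-2 values quoted above."""
    base = {"steel": 0.5, "concrete": 1.5}[bridge_type]
    factor = {"steel": 0.125, "concrete": 0.07}[bridge_type]
    return base + (factor * (20.0 - span_m) if span_m < 20.0 else 0.0)

print(en1991_2_damping(16.1, "concrete"))   # -> 1.773, the ~1.77 % cited later
```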
Aim of the Project

The research project "KOMET" (funded by the Austrian Federal Railways, 2016-ongoing), carried out by the Austrian research company REVOTEC, the Austrian Institute of Technology (AIT) and the Vienna University of Technology, aims to show the potential and the benefits of assessing dynamic parameters (natural frequencies and damping coefficients) of railway bridges and their nonlinear behavior using modern forced vibration excitation methods. In addition, standard dynamic measurement methods are used, and the resulting natural frequencies and damping coefficients are compared, discussed and evaluated in [4,5]. Another focus of the project is to improve the comparability of measurements carried out by different contractors by updating the current Austrian guideline "Dynamic measurements of railway bridges" [6]. A total of 50 railway bridges of different construction types have been measured since 2016, and the project is still ongoing. The span lengths of the bridges vary from 3 to 40 m, and the measured natural frequencies (1st mode) vary from 3 to 80 Hz. As a main project result, a table of realistic damping coefficients for railway bridges of different construction types is elaborated, and recommendations for choosing the damping values within dynamic analyses of train crossing are specified.

Applied Experimental Modal Testing Methods

Within the project KOMET, in-situ experimental modal tests by application of the forced vibration excitation method were performed to identify the real values of the natural frequencies and damping coefficients of different construction types of railway bridges. In addition, conventional dynamic measurement methods like ambient vibration measurement, time decay after train crossing, and impact excitation through impulse hammer and sandbag were also applied to the tested railway bridges. The forced vibrations were mainly generated by use of two long stroke shakers in parallel operation, which are electrodynamic force generators whose output is directly proportional to the instantaneous value of the applied current (see Figure 1). Both long stroke shakers are located, depending on the vibration mode of interest, on the railroad sleeper, on the ballast or on the boundary beam of the bridge. The moving mass of a single shaker is 30.6 kg, and the dynamic excitation force is limited to 440 N. The long stroke shakers can deliver random or transient as well as sinusoidal waveforms of vertical excitation force in a very wide frequency range from 0.1 to 200 Hz. A closed-loop control is applied and assures a constant excitation force over the total frequency range. The unit employs permanent magnets and is configured such that the armature coil remains in a uniform magnetic field over the entire stroke range, assuring force linearity. Within the performed bridge measurements, a sine-sweep signal was used to excite the railway bridges, and the natural frequencies were identified in the time and the frequency domain. For the determination of structural damping, a manual sweep was performed within the frequency range of interest, and the half power bandwidth method was applied.
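For reference, a minimal sketch of the half-power bandwidth evaluation as it could be applied to such a measured frequency response curve. It assumes a single, well-separated resonance with a monotone rise and decay around the peak; the function name and the interpolation details are mine:

```python
import numpy as np

def half_power_damping(freq_hz, amp):
    """Lehr damping coefficient zeta = (f2 - f1) / (2 * f_res) from a measured
    frequency response curve, via the -3 dB (half-power) bandwidth."""
    i = int(np.argmax(amp))
    f_res, a_hp = freq_hz[i], amp[i] / np.sqrt(2.0)
    # Interpolate the half-power crossings on the rising and falling flanks
    # (np.interp needs increasing xp, hence the reversal on the right flank).
    f1 = np.interp(a_hp, amp[: i + 1], freq_hz[: i + 1])
    f2 = np.interp(a_hp, amp[i:][::-1], freq_hz[i:][::-1])
    return (f2 - f1) / (2.0 * f_res)
```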
In addition to the described electrodynamic long stroke vibration exciters, a conventional unbalanced vibration exciter was also applied within a few selected dynamic measurements to study the influence of the vibration amplitude size on the value of structural damping (see Figure 2). A maximum vertical dynamic excitation force of 175 kN can be generated over the frequency range from 1-25 Hz. The amount of excitation force is adjusted by properly choosing the number and size of the rotating steel weights. The dynamic excitation force is a function of the rotating mass m, the eccentricity e and the excitation angular frequency Ω, defined by P_ex(t) = m e Ω² cos(Ωt), i.e., P_ex,max = m e Ω².
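A quick numerical check of this rotating-unbalance relation (a sketch; the mass-eccentricity product below is invented, chosen only so that the 175 kN limit quoted above is reached at the top of the 1-25 Hz operating range):

```python
import numpy as np

def unbalance_force_kN(me_kg_m, f_hz):
    """Peak force of a rotating-unbalance exciter, P_max = m*e*Omega^2."""
    return me_kg_m * (2.0 * np.pi * f_hz) ** 2 / 1000.0

# Invented mass-eccentricity product:
me = 175e3 / (2.0 * np.pi * 25.0) ** 2          # ~7.1 kg*m
print(round(unbalance_force_kN(me, 25.0), 1))   # -> 175.0 kN
print(round(unbalance_force_kN(me, 5.0), 1))    # -> 7.0 kN (force falls with f^2)
```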
The aim of the proposed forced vibration excitation method for experimental modal testing of railway bridges is, besides the reliable determination of natural frequencies, especially the reliable and reproducible determination of the structural damping coefficients, taking changing environmental conditions (temperature) and nonlinear effects (e.g., the size of the vibration amplitude) into account. In practice it is often observed that the application of conventional monitoring methods for the modal properties of structures, like ambient vibration monitoring, shows limits in the evaluation of realistic structural damping values. By use of the proposed forced vibration excitation method, significant vertical vibration amplitudes are generated, and the resulting values of structural damping are more realistic.

Identification of Natural Frequencies and Damping Coefficient under Summer Conditions

The application of the forced vibration excitation method for in-situ experimental modal testing of railway bridges is presented for a single-span framed concrete railway bridge with ballast track. Two long stroke shakers in parallel operation and a conventional unbalanced vibration exciter (separate measurement) were used to excite the bridge to forced vibrations. The dynamic measurements were performed in summer (+16 °C) and in winter (−2 °C) conditions. The bridge has two tracks and a span length of 16.10 m (see Figure 3). The bridge cross section consists of a 0.8 m thick continuous rectangular concrete plate without any longitudinal joint. The two long stroke shakers (Figure 1) and the unbalanced vibration exciter (Figure 2, separate measurement) were positioned on the sleepers of a single track in the bridge midspan. The measurement equipment was placed below the bridge. Accelerometers were installed at different positions to measure the forced vibration response to excitation by the long stroke shakers and by the unbalanced vibration exciter. To identify the natural frequencies of the framed concrete bridge, at first an automatic sine-sweep was conducted by use of the two long stroke shakers over the frequency range of interest from 5 to 50 Hz. The excitation force amplitude of the two shakers was chosen as F₀,total = 433 N.

The FFT of the generated acceleration time history indicates three dominant frequency peaks at f₁ = 12.59 Hz, f₂ = 16.57 Hz and f₃ = 33.13 Hz. These frequencies correspond to the bending and torsional vibration modes of the bridge. The damping coefficient of Lehr ζ was determined for the relevant first vibration mode; for this purpose a manual sine-sweep with a total excitation force of F₀,total = 866 N was performed (an excitation force twice that used within the automatic sine sweep). Within the manual sweep, the steady-state bridge vibration was produced at every chosen excitation frequency, and the maximum vibration amplitude response was recorded. The resulting frequency response function is shown in Figure 4, and the half power bandwidth method was applied to determine the damping coefficient of Lehr ζ. The results of the identified natural frequencies and of the structural damping are given in Table 1 for both the application of the two long stroke shakers and the application of the unbalanced vibration exciter. It is seen that the measured structural damping coefficient of the first vibration mode is more than three times higher than the theoretical structural damping given in the Eurocode, ζ_EN1991-2 = 1.77%. A significant gap between in-situ measured structural damping coefficients and the values given in European codes was detected in every measured framed concrete bridge. The difference in the identified natural frequency is affected by the self-weight of the exciter (ca. 6 tons).

Identification of Natural Frequencies and Damping Coefficient under Winter Conditions

To investigate the seasonal changes of the dynamic bridge parameters, a second dynamic measurement was performed under winter conditions (−2 °C). The positions of the forced vibration excitation devices (two long stroke shakers and unbalanced vibration exciter) and of the accelerometers fixed to the bottom of the bridge deck were chosen according to the performed summer measurements. The FFT calculated from the acceleration time history yields the three dominant frequencies f₁ = 13.91 Hz, f₂ = 19.09 Hz and f₃ = 38.71 Hz. The structural damping coefficient was determined by application of the half power bandwidth method to the frequency response function generated by a manual sweep of both long stroke shakers, and a value of ζ₁,winter = 6.52% results. The measured natural frequencies and structural damping under winter conditions are listed in Table 2 for both the application of the two long stroke shakers in parallel operation and the application of the unbalanced vibration exciter. The increase of the first natural frequency from summer to winter corresponds to an amplification factor of 1.11, and that of the structural damping coefficient to factors of 1.16 and 1.26, respectively.

Investigation of Nonlinear Damping Effects

To study the influence of the size of the vibration amplitude on the resulting value of structural damping, a series of manual sine-sweeps was performed over the frequency range of interest. Starting with a very low excitation force amplitude of P_ex,1 = 0.90 kN by application of the two long stroke shakers in parallel operation, three additional excitation force levels were chosen, P_ex,2 = 6.58 kN, P_ex,3 = 15.94 kN and P_ex,4 = 20.81 kN, by application of the unbalanced vibration exciter.
Figure 5 shows the frequency response functions generated by the manual sine-sweeps; every displayed curve indicates the response due to the specific chosen value of excitation force. It is seen that in the case of application of the long stroke shakers, a very small bridge amplitude in the range of 0.04 m/s² occurs, and the corresponding damping coefficient results in 5.6%. The application of the unbalanced vibration exciter led to maximum vibration amplitudes of 0.75 m/s², and the corresponding damping coefficient results in 5.67%. It is concluded that the damping coefficient is independent of the amount of excitation force and of the bridge vibration amplitude.

Conclusions

The work performed within the research project KOMET (50 railway bridges were under consideration) shows that the application of forced vibration excitation methods for experimental modal testing of railway bridges provides reliable and reproducible results for the natural frequencies and for the structural damping. In particular, the structural damping coefficients that result from application of the half power bandwidth method to the frequency response function generated by manual sine-sweeps with long stroke shakers are reliable and reproducible values that can serve as a basis for calibrating numerical simulations of train crossing. It is shown that the in-situ measured structural damping coefficients of framed concrete railway bridges with ballast track are up to 4 times higher than the values given in European codes.

Figure 1. Electrodynamic long stroke shakers for experimental modal testing of railway bridges.
Figure 2. Unbalanced vibration exciter for experimental modal testing of railway bridges.
Figure 3. Schemes of the measured single-span framed concrete railway bridge.
Figure 4. Measured single-span framed concrete bridge: frequency response function of the bridge deck; manual sine-sweep by use of two long stroke shakers, summer conditions (+16 °C).
Table 1. Results of forced vibration excitation measurements by application of both the two long stroke shakers in parallel operation and the unbalanced vibration exciter; summer conditions (+16 °C).
Table 2. Results of forced vibration excitation measurements by application of both the two long stroke shakers in parallel operation and the unbalanced vibration exciter; winter conditions (−2 °C).
Left atrial conduit function: A short review

Abstract

Three-dimensional echocardiography can elucidate the phasic functions of the left atrium if a simultaneous acquisition of a pyramidal full-volume dataset, as gathered from the apical window and containing the entire left atrial and left ventricular cardiac sections, is obtained. Hence, conduit can be quantified as the integral of the net, diastolic, instantaneous difference between synchronized atrial and ventricular volume curves, beginning at minimum ventricular cavity volume and ending just before atrial contraction. Increased conduit can reflect increased downstream suction, as conduit would track the apex-to-base intracavitary pressure gradient existing, in early diastole, within the single chamber formed by the atrium and the ventricle when the mitral valve is open. Such a gradient increases in response to adrenergic stimulation or during exercise and mediates an increment in passive flow during early diastole, with the ventricle being filled from the atrial reservoir and, simultaneously, from blood drawn from the pulmonary veins. In this context conduit, and even more conduit flow rate, expressed in ml/sec, can be viewed as an indirect marker of left ventricular relaxation. It is well known, however, that a large amount of conduit (in relative terms) is also supposed to contribute to LV stroke volume in conditions of increased resistance to LV filling, when diastolic function significantly worsens. Stiffening of the atrio-ventricular complex implies increments in LA pressure that are more pronounced in late systole, causing markedly elevated "v" waves, independently of the presence of mitral insufficiency. The combination of increased atrio-ventricular stiffness and conduit flow is associated with an elevation of the right ventricular pulsatile relative to resistive load that negatively impacts exercise capacity and survival in these patients. Atrial conduit is an "intriguing" parameter that conveys a noninvasive picture of the complex atrioventricular coupling condition in diastole and its backward effects on the right side of the heart and the pulmonary circulation. Given the ease with which it can be correctly quantified in the imaging laboratory, I am sure that conduit will survive the competitive access to the list of valuable parameters capable of deciphering, although not necessarily simplifying, the complex diastolic scenario in health and disease.

The left atrial (LA) cavity, which connects the pulmonary circulation with the corresponding ventricle, is characterized by phasic activity, which makes the description of the atrium as a passive player in the complex scenario of the cardiac cycle unrealistic (Maccio' & Marino, 2008). This cavity, in fact, is strictly related to left ventricular (LV) function throughout the entire cardiac cycle (Braunwald & Frahm, 1961). After the QRS complex on the ECG, the cardiac base is forced to descend, owing to the longitudinal component of the fiber-shortening forces associated with LV contraction, contributing to LA filling from the pulmonary veins (Castello et al., 1991). Such reservoir function can be quantified as the difference between the maximum and minimum volumes of the cavity. During late diastole, the atrium also actively contributes to ventricular filling. Such pump function can be defined as the blood volume pushed into the LV during atrial systole, plus the volume of the backward flow into the pulmonary veins.
Finally, during early and mid-diastole, the atrium passively contributes to LV filling (conduit function). During this phase, the cavity is directly exposed to the LV pressure through the open mitral valve, and conduit flow is obviously and strongly influenced by the left heart diastolic properties and by the gradient relative to the pulmonary venous compartment (Kono et al., 1992), with conduit defined as [LV filling volume − (LA reservoir + pump volume)]. Obviously, in absolute terms, the total amount of blood handled by the atrium acting as a reservoir, pump, and conduit has to equal the ventricular filling volume (Maccio' & Marino, 2008). Stiffening of the atrioventricular complex implies increments in LA pressure that are more pronounced in late systole, causing markedly elevated "v" waves (Urey et al., 2017), a phenomenon not necessarily mediated by the presence of mitral insufficiency. More than 15 years ago, we studied a group of 15 patients instrumented during open heart surgery with open pericardium and dichotomized the group according to the median value of invasively assessed LA stiffness (≤ or >0.33 mmHg/ml; Marino et al., 2004). We could demonstrate that a higher stiffness estimate (0.75 ± 0.43 mmHg/ml) was associated with an increased "v" wave and a subsequently larger "y" atrial pressure descent (−9.3 ± 5.6 mmHg) compared with the group with a lower stiffness value (0.19 ± 0.10 mmHg/ml), in whom the "y" pressure descent was much smaller (−1.2 ± 0.6 mmHg, p < 0.001; Marino et al., 2004). Such higher pressure pulsatility inside the atrial chamber, mediated by increased cavity stiffness, was also associated with a larger cumulative pulmonary vein flow (51 ± 30 ml/m² vs. 27 ± 9 ml/m², p = 0.04) during mitral valve E-wave acceleration (table 2 of Marino et al., 2004). Thus, LA cavity properties govern the pulsatile profile existing inside the chamber while modulating conduit flow, with both factors, pulsatility and conduit, being inversely related to atrial compliance. Such considerations anticipate the consequences of a poorly performing ventricle on both LA reservoir, due to an attenuated cardiac base descent contributing to increased LA pulsatility, and conduit, secondary to an impaired downstream suction, counteracted by compensatory E-wave PV flow.

Atrial stiffness modulation of exercise capacity in heart failure patients

It is well known that, in heart failure (HF) patients, prognosis is strongly associated with the performance of the right ventricular (RV) cavity and its loading conditions (Vonk Noordegraaf et al., 2019). Less known, however, is that RV afterload is determined by a pulsatile component, represented by pulmonary artery compliance (PAC), in addition to the steady component indexed by pulmonary vascular resistance (PVR; Tedford, 2013). These two parameters, PVR and PAC, are closely related by an inverse relationship designated by the time-constant of the pulmonary circulation (RC-time; Lankhaar et al., 2006). This relationship between the pulsatile and steady components is believed to be rather constant, although it has been shown that such constancy can be impacted by LA pressure, when pressure (and stiffness) may be abnormally elevated, as in HF patients. Tedford et al. (2012), in fact, demonstrated that elevation of pulmonary wedge pressure shifts the PAC-PVR hyperbolic curve leftward and downward, indicating that LA pressure elevation augments RV pulsatile relative to resistive load.
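The quantities in this paragraph are usually estimated from routine right-heart-catheterization data with the conventional formulas PVR = (mPAP − PAWP)/CO and PAC = SV/PP, whose product gives the RC-time. The sketch below is mine; the variable names and example values are assumptions, not data from the cited studies:

```python
def rv_afterload(mpap_mmHg, pawp_mmHg, co_l_min, sv_ml, pa_pulse_pressure_mmHg):
    """Steady (PVR) and pulsatile (PAC) components of RV afterload and their
    product, the RC-time of the pulmonary circulation."""
    pvr_wood = (mpap_mmHg - pawp_mmHg) / co_l_min        # Wood units, mmHg*min/l
    pac = sv_ml / pa_pulse_pressure_mmHg                 # ml/mmHg
    rc_s = pvr_wood * 60.0 / 1000.0 * pac                # (mmHg*s/ml) * (ml/mmHg)
    return pvr_wood, pac, rc_s

# Invented example: mPAP 35 and wedge 22 mmHg, CO 4 l/min, SV 60 ml, PA pulse pressure 30 mmHg
print(rv_afterload(35.0, 22.0, 4.0, 60.0, 30.0))   # -> (3.25, 2.0, 0.39)
```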
Such a shift in pressure components, from a steady to a pulsatile prevalent contribution, has important functional and prognostic significance. In HF patients, elevated LA pressure is, in fact, associated with a shift of the PAC-PVR relationship such that the pulsatile component becomes augmented and contributes to RV afterload, despite a relatively "normal" steady component (Najjar et al., 2021). This may limit RV output and negatively impact exercise capacity and survival in these patients. | Conduit and left heart diastolic dysfunction It is well known that a large amount of conduit is supposed to contribute to LV stroke volume in conditions of increased resistance to LV filling (and this would explain why the conduit contribution to LV stroke volume would increase with worsening diastolic dysfunction; Marino et al., 2019). Less obvious, however, is that such an augmented conduit contribution may also be found in a totally opposite scenario, with conduit progressively contributing to filling volume during intensive exercise in competitive athletes (Wright et al., 2015). This is compatible with a U-shaped behavior for this parameter across the spectrum (Figure 1, left), from a markedly diseased condition, like end-stage HF, to a "supranormal" situation, like the one depicted in athletes, similarly to what happens for other functional imaging parameters (Al-Mashat et al., 2020). In the "supranormal" scenario, in fact, conduit would track the apex-to-base intracavitary pressure gradient existing, in early diastole, within the single chamber formed by the atrium and the ventricle when the mitral valve is open (Courtois et al., 1988). Such a gradient increases with the augmentation of the LV longitudinal contraction and subsequent lengthening that develops in response to adrenergic stimulation or during exercise (Levy et al., 1993; Nonogi et al., 1988; Ohara et al., 2012) and mediates an increment in passive flow during early diastole, with the ventricle being filled from the atrial reservoir and, simultaneously, from blood drawn from the pulmonary veins. In this context conduit, and even more conduit flow rate, expressed in ml/s, can be viewed as an indirect marker of LV relaxation (Bhatt et al., 2020; Marino et al., 2021). According to the above reasoning, increments in conduit flow may reflect increased suction, acting on the LV side, or exaggerated filling and pulsatile pressure, acting on the LA side (Figure 1, left), along with increased recoil of the pulmonary vasculature. This makes interpretation of this type of parameter not easy, but nonetheless informative of the complex scenario existing in case of diastolic dysfunction/hyperfunction. | How can conduit be quantified? It is challenging to quantitatively separate the rapid filling volume of blood entering the LV in early diastole into its conduit versus reservoir components. More than two decades ago, in order to quantify conduit, we proposed an approach that integrated the information derived from the Doppler mitral flow velocity profile with that of one single pulmonary vein (taken as representative of all four veins; Prioli et al., 1998). In that study we demonstrated the existence of an inverse linear relationship between the conduit contribution to LV filling and a "classical" diastolic descriptor, that is, the Doppler E-wave deceleration time.
The approach we used, however, involved several assumptions and did not allow us to delve further into the potential role of atrial conduit function itself as an index of the diastolic condition. [Figure 1, right: conduit is measured from minimum LV volume (yellow vertical continuous line) to the ECG P wave (red vertical line); conduit starts from 0 ml at minimum LV cavity volume, reaches a plateau in mid-diastole, and ends before LA cavity contraction.] More recently, we adopted 3D echocardiography, gathering simultaneous acquisitions of a pyramidal full-volume dataset from the apical window and containing the entire LA and LV cardiac sections (Nappo et al., 2016). After optimal image alignment and manual contouring of the ventricular and atrial endocardium at the beginning of the cardiac cycle, software can produce, automatically, LV and LA volume curves along the entire subsequent beat. From these curves, which can be manually edited if necessary, conduit can be computed (provided mitral and/or aortic insufficiency is trivial) as the integral of the net, diastolic, instantaneous difference between synchronized atrial and ventricular volume curves, beginning at minimum LV cavity volume and ending just before atrial contraction, synchronously with the P wave on the electrocardiographic trace (Figure 1, right; Bowman & Kovacs, 2004; Nappo et al., 2016). Obviously, conduit flow rate can also be obtained, at baseline or during exercise, by referencing the cumulative conduit flow to the above-defined time interval; a sketch of this computation is given below. Whatever the way conduit is nominally expressed, it must be emphasized that the amount of blood entering the LV from the atrium during diastole cannot be faithfully reflected by the atrial volume curve alone. In the phase of passive atrial emptying and atrial diastasis, the pulmonary veins drain blood from the lungs into the ventricle. Further, it should not be forgotten that, during atrial contraction, some blood does flow back into the pulmonary veins, although it is known that sleeve contraction should normally limit this retrograde flow to a small amount (Thiagalingam et al., 2008). Thus, it is only the simultaneous availability of both (atrial and ventricular) cavity volume curves that guarantees a precise definition of the atrial conduit contribution to LV filling (Marino, 2010). It has been anticipated that simultaneous volumetric measurements of the LV and LA cavities are not possible with 2D images because the LV major axis differs from the LA major axis (Lang et al., 2015). Since in 3D the entire heart is scanned, the volumes of both the LV and LA can be carefully measured in one acquisition (Nappo et al., 2016). This would be particularly important in patients in atrial fibrillation, in whom the atrioventricular volumetric interaction during diastole can be soundly investigated by 3D echocardiography (Otani et al., 2016). In this line of reasoning, we did recently show that conduit quantitation pre-cardioversion is able to predict early arrhythmia recurrence in persistent atrial fibrillation patients (Giubertoni et al., 2019). These findings support the concept that conduit quantitation is valuable as it can reflect diastolic LA pathology that cannot necessarily be explained by ventricular pathology only and, thus, it could also be proposed as a clinically effective tool for exploring the link between AF and diastolic dysfunction, in excess of ventricular derangement (Degiovanni et al., 2018; Roeder et al., 2017).
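As a concrete illustration of the volume-curve bookkeeping just described, the following Python sketch accumulates conduit between minimum LV volume and the P wave. The input arrays (`t`, `lv_vol`, `la_vol`) and the helper name are hypothetical stand-ins for what a 3D full-volume analysis package would export, not an actual vendor API.

```python
import numpy as np

# Minimal sketch of the 3D-echo conduit computation described above.
# Conduit is the integral of the net instantaneous difference between
# synchronized LV and LA volume curves, from minimum LV volume to the
# onset of atrial contraction (ECG P wave). Inputs are hypothetical.

def conduit_from_volume_curves(t, lv_vol, la_vol, t_p_wave):
    """Return conduit volume (ml) and conduit flow rate (ml/s).

    In each time step, the LV volume rise not matched by a fall in LA
    volume must have transited the atrium from the pulmonary veins
    (conduit); summing d(LV) + d(LA) over the window gives that net
    inflow, i.e., the change in total (LA + LV) volume.
    """
    i0 = int(np.argmin(lv_vol))             # minimum LV cavity volume
    i1 = int(np.searchsorted(t, t_p_wave))  # just before atrial systole
    conduit_ml = (lv_vol[i1 - 1] + la_vol[i1 - 1]) - (lv_vol[i0] + la_vol[i0])
    duration_s = t[i1 - 1] - t[i0]
    return conduit_ml, conduit_ml / duration_s
```

By the mass balance stated earlier, the same number should equal the LV filling volume minus the (LA reservoir + pump) contribution, which provides a useful internal consistency check.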
| Conclusion Conduit is an "intriguing" parameter that conveys a noninvasive picture of the complex atrioventricular coupling condition in diastole and its backward effects on the right side of the heart and the pulmonary circulation. Given the ease of its correctly performed quantification in the imaging laboratory, I am sure that conduit will survive the competition for a place on the list of valuable parameters capable of deciphering, although not necessarily simplifying, the complex diastolic scenario in health and disease (Nishimura & Tajik, 1997).
Dielectric function, screening, and plasmons of graphene in the presence of spin-orbit interactions We study the dielectric properties of graphene in the presence of Rashba and intrinsic spin-orbit interactions in their most general form, i.e., for arbitrary frequency, wave vector, doping, and spin-orbit coupling (SOC) parameters. The main result consists in the derivation of closed analytical expressions for the imaginary as well as for the real part of the polarization function. Several limiting cases, e.g., the case of purely Rashba or purely intrinsic SOC, and the case of equally large Rashba and intrinsic coupling parameters, are discussed. In the static limit the asymptotic behavior of the screened potential due to charged impurities is derived. In the opposite, long-wavelength limit ($q \ll \omega$), an analytical expression for the plasmon dispersion is obtained and afterwards compared to the numerical result. Our result can also be applied to related systems such as bilayer graphene or topological insulators. I. INTRODUCTION It is now well established that at low energies the charge carriers in graphene are described by a Dirac-like equation for massless particles. 1,2 While standard graphene, i.e., without any spin-orbit interactions (SOIs), does not exhibit a band gap, a gap opens up in the spectrum if one includes purely intrinsic spin-orbit interactions. 3 The corresponding energy dispersion resembles that of a massive relativistic particle with a rest energy which is proportional to the spin-orbit coupling parameter (SOC). Including SOIs of the Rashba type, e.g., by applying an external electric field, lifts the spin degeneracy. Depending on the ratio of the intrinsic and the Rashba parameters, a gap can occur in the spectrum or not. Many theoretical studies on the dielectric function of various systems have been made in the last years. Besides semiconductor two-dimensional electron gases 4-6 and hole gas systems, 7 extensive investigations have been made for graphene. Starting from the simplest possible graphene model within the Dirac-cone approximation, 8-10 more and more extensions have been included. These extensions range from numerical 11,12 and analytical 13 tight-binding studies and the inclusion of a finite band gap [14][15][16] to double- and multilayer graphene samples, [17][18][19][20][21][22] graphene antidot lattices, 23 and graphene under a circularly polarized ac electric field. 24 In this work we study the dielectric properties of graphene including both the Rashba and the intrinsic spin-orbit coupling. While the case of purely intrinsic interactions is well understood, 14-16 the dielectric function for the general case, where both types of SOIs are present, is unknown. Other previous studies have investigated the effect of SOI on magnetotransport 25 and the optical conductivity. 26,27 Our study is motivated by recent experimental and theoretical works demonstrating that the SOC parameters can be significantly enlarged by choosing proper adatoms [28][29][30] or a suitable environment. [31][32][33] The information that can be extracted from the dielectric function ranges from the screening between charged particles to the collective charge excitations formed due to the long-ranged Coulomb interaction. Knowledge of the latter is not only important for possible future applications in the field of plasmonics, where graphene seems to be a promising material, 34 but also for fundamental reasons.
Recent experiments and theoretical studies showed that interactions between charge carriers and plasmons in graphene, forming so-called plasmarons, yield measurable changes in the energy spectrum. 35 The paper is organized as follows. In Sec. II, we introduce the model Hamiltonian including the eigensystem and summarize the formalism of the random phase approximation (RPA). In Sec. III, analytical and numerical results for the free polarization function of the undoped and the doped system are given. In Sec. IV, the dielectric function is used to analyze the static screening properties due to charged impurities. We provide qualitatively the asymptotic behavior of the induced potential. The long-wavelength collective charge excitations of graphene are derived in Sec. V and afterwards compared to the numerical result. We find the existence of several new potential plasmon modes that are absent without any spin-orbit interactions. Most of these zeros, however, are overdamped, as can be seen from the energy loss function. We close with conclusions and outlook in Sec. VI. Finally, in Appendixes A and B we give details of the calculation of the free polarization function. II. THE MODEL We describe graphene with SOI within the Dirac cone approximation. At one K point, the Hamiltonian is given by 3 Ĥ = v_F (k_x τ_x + k_y τ_y) + λ_R (τ_x σ_y − τ_y σ_x) + λ_I τ_z σ_z. The Pauli matrices τ (σ) act on the pseudospin (real spin) space. The other K point can be described by the above Hamiltonian with σ_x → −σ_x and σ_z → −σ_z. Since the two K points are not coupled, we can limit our discussion to the above Hamiltonian, multiplying the final results by the valley index g_v = 2. Moreover, without loss of generality, we assume positive Rashba and intrinsic couplings, as the eigensystem, and thus the dielectric function, is not changed for negative values. A. Solution For a sufficiently large intrinsic coupling parameter, λ_I > λ_R, the system is in the spin quantum Hall phase with a characteristic band gap. For λ_R > λ_I the gap in the spectrum is closed and the system behaves as an ordinary semimetal. At the point where λ_R = λ_I a quantum phase transition occurs in the system. In the following we mainly set v_F = 1 and ħ = 1. The eigensystem can be written in terms of sin(θ_±) = k/√(k² + λ_±²) and λ_α = λ_R + αλ_I, with k = |k|. For λ_R ≠ 0 the spin degeneracy is lifted and two distinct Fermi wave vectors, k_F±, appear. In Fig. 1, the energy dispersion is shown for three characteristic values of the SOI. The energy scales for the SOC parameters in monolayer graphene, λ_I = 12 µeV and λ_R = 5 µeV for an electric field of 1 V/nm, are generally small. 36 However, it was shown that these parameters can be enlarged to λ_I ≈ 30 meV for thallium adatoms 29 or λ_R ≈ 13 meV for graphene placed on a Ni(111) surface. 31 The above Hamiltonian with only Rashba coupling can be mapped onto the bilayer Hamiltonian without SOI, relating the interlayer hopping parameter t_IL ≈ 0.2 eV 37 to the Rashba SOC. Our findings can also be applied to a topological insulator within the Kane-Mele model. 3 FIG. 2: Single-particle continuum (dark area) for the particular choice of λ_R = 2λ_I = 0.3µ. Analytical expressions for the boundaries of the distinct regions I, II, and III can be found in Sec. III B. B. Dielectric function In order to find the dielectric function in RPA, 38 ε(q, ω) = 1 − V(q) χ_0(q, ω), where V(q) = e²/(2ε_0 q) is the Fourier transform of the Coulomb potential in two dimensions, V(r) = e²/(4πε_0 r), and ε_0 is the vacuum permittivity, one needs to calculate the free polarization χ_0(q, ω).
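As a purely illustrative aside, the RPA "plumbing" of the previous formula can be sketched in a few lines of Python. The polarization used below is a toy placeholder, not the closed-form χ_0 derived in this work, and the coupling constant is an arbitrary assumption.

```python
import numpy as np

# Sketch of the RPA dielectric function eps(q, w) = 1 - V(q) * chi0(q, w)
# with the 2D Coulomb interaction V(q) = e^2 / (2 * eps0 * q).
# chi0_toy is a placeholder polarization, NOT the closed-form result
# of this paper; it only illustrates how the pieces combine.

E2_OVER_EPS0 = 10.0   # e^2/eps0 in units with hbar = v_F = 1 (assumed)

def v_coulomb_2d(q):
    return E2_OVER_EPS0 / (2.0 * q)

def chi0_toy(q, w, mu=1.0):
    """Toy polarization: ~ -DOS in the static limit, ~ -q^2 mu^2/w^2 at high w."""
    nu = mu / (2.0 * np.pi)   # density-of-states scale
    return -nu * q**2 / (q**2 + np.abs(w)**2 / mu**2 + 1e-12)

def epsilon_rpa(q, w):
    return 1.0 - v_coulomb_2d(q) * chi0_toy(q, w)

q = np.linspace(0.01, 2.0, 5)
print(epsilon_rpa(q, 0.0))   # static screening: eps > 1, grows as q -> 0
```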
In the following we assume zero temperature. The Fermi function f(E) then reduces to a simple step function. Because of the general relation χ_0(q, −ω) = [χ_0(q, ω)]*, we restrict our discussion to positive frequencies ω. A. Zero doping For zero doping the valence bands are completely filled while the conduction bands are empty. Only transitions between bands E_α− and E_β+ are possible. The resulting charge correlation function can be decomposed as χ_0(q, ω) = Σ_{α,β=±} χ^{α−→β+}(q, ω). Here we introduced the notation χ^{η1η2→η3η4}(q, ω), describing transitions from the initial band E_{η1η2}(k) to the final band E_{η3η4}(k + q). For the imaginary part we find closed expressions in terms of ω_± = ω ± 2λ_R and γ = max{λ_R, λ_I}. For equally large spin-orbit coupling parameters, λ_R = λ_I, the imaginary part is divergent at the threshold ω = q but finite otherwise. The divergent part of the polarization is χ^{+−→++}, as the bands E_{+±}(k) are linear in momentum. The real part can be obtained via the Kramers-Kronig relation, Eq. (9). After carrying out the remaining integration, where it is necessary to keep the principal value, we arrive at closed expressions for the real parts as well. B. Finite doping We now continue with the case of a finite chemical potential lying in the conduction band (the p-doped case is analogous). The free polarization in the doped case reads χ_0 + δχ_{k_F+} + δχ_{k_F−}, where χ_0 is the undoped part given above. The two remaining contributions, δχ_{k_F+} and δχ_{k_F−}, refer to transitions with initial states in bands E_{++} and E_{−+}, respectively. As the expressions for the extrinsic real and imaginary parts of the free polarization function are quite lengthy, we refer to Appendix B, where the results, including the major steps of the derivation, can be found. Similar to the undoped case, the density correlation function of graphene is finite at ω = q for λ_R ≠ λ_I and divergent for λ_R = λ_I. However, this divergence vanishes in the RPA-improved result. 8 In Fig. 2 the single-particle continuum is shown for the particular choice of λ_R = 2λ_I = 0.3µ. Analytic expressions exist for the lower and upper boundaries of the damped region I. Region I is due to intraband transitions from band E_{±+}(k) to E_{±+}(k + q). Region II accounts for interband transitions between the conduction bands and is confined from below and above. For region III an analytic lower limit exists, while there is no restriction on the upper boundary. This part is due to transitions between valence and conduction bands. IV. SCREENING OF IMPURITIES The potential of a screened charged impurity is obtained from the definition of the dielectric function as Φ(r) = [Q/(4πε_0)] ∫_0^∞ dq J_0(qr)/ε(q, 0), (15) where J_0(x) is the Bessel function of the first kind and Q is the charge of the impurity. Making use of Eq. (15), the screened potential for the undoped system is calculated numerically; Φ(r) is mainly determined by the long-wavelength behavior of the static correlator. 18 As can be seen from Fig. 3(a), the long-wavelength limit of the polarization χ_0(0, 0) is finite in the semimetallic state (λ_R > λ_I) and zero otherwise, while for large momenta all functions scale like 1/q. From Fig. 4(a) we can see that for λ_I ≥ λ_R the potential scales like Φ(r) ∝ 1/r at large distances. For λ_R > λ_I the asymptotic potential behaves as Φ(r) ∝ 1/r³; see Fig. 4(b). The actual values of µr at which the above asymptotics are appropriate approximations depend on the difference between λ_R and λ_I. As mentioned in the introduction, the two different parameter regimes belong to different phases separated by the quantum critical point at λ_R = λ_I. The static density correlator for the doped system is much more complicated.
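Before turning to the analytic treatment, note that Eq. (15) is also easy to evaluate numerically. The sketch below does so for a simple Thomas-Fermi-like model dielectric function, which merely stands in for the full static RPA result of this paper.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

# Numerical evaluation of the screened impurity potential, Eq. (15):
#   Phi(r) = (Q / 4 pi eps0) * Integral_0^inf dq J0(q r) / eps(q, 0).
# eps_static is a simple Thomas-Fermi-like model, used only as a
# stand-in for the full static RPA dielectric function of the text.

Q_OVER_4PI_EPS0 = 1.0   # work in units where Q/(4 pi eps0) = 1

def eps_static(q, q_tf=1.0):
    return 1.0 + q_tf / q          # Thomas-Fermi screening in 2D

def phi_screened(r, q_max=200.0):
    # truncate the oscillatory Bessel integral at q_max (sketch-level
    # accuracy; a production code would use a tailored quadrature)
    integrand = lambda q: j0(q * r) / eps_static(q)
    val, _ = quad(integrand, 0.0, q_max, limit=400)
    return Q_OVER_4PI_EPS0 * val

for r in (0.5, 1.0, 2.0, 5.0):
    print(f"r = {r:4.1f}   Phi = {phi_screened(r):+.4f}")
# With Thomas-Fermi eps(q) the potential falls off faster than the
# bare 1/r; the paper's full eps(q, 0) yields the 1/r or 1/r^3 tails
# discussed above, depending on the ratio of lambda_R and lambda_I.
```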
Integrals of the form (15) are usually treated analytically by approximating the Bessel function by its asymptotic values. The subsequent Fourier integral can then be solved with the Lighthill theorem. 39 The above theorem states that singularities in the derivatives of the dielectric function give rise to a characteristic, algebraic, oscillating decay of the screened potential. Physically, these Friedel oscillations are due to backscattering on the Fermi surface. We can thus make qualitative predictions for the potential Φ(r) at large distances away from the impurity from the analytical structure of the polarization function alone, without carrying out the integration. Afterwards these predictions are compared to the exact numerical solution. For nonzero SOC and λ_R ≠ λ_I, the first derivative of the polarization function is singular at the special points q = 2k_F±; see Figs. 3(c) and (d). According to the Lighthill theorem the potential will exhibit a superposition of two different kinds of oscillations, with Φ(r) ∝ 1/r². This beating should be observable in sufficiently clean samples if the Rashba parameter, and the consequential breaking of the spin degeneracy, is large enough. For predominant intrinsic SOI, the two oscillatory parts interfere constructively, finally yielding an additional spin-degeneracy factor of g_s = 2. 14 For λ_R = λ_I, already the first derivative of χ_0(q, 0) is singular at q = 2k_F−, while at q = 2k_F+ only the second derivative diverges; see Fig. 3(b). The main contribution to the potential will again be of order 1/r². The numerical inspection of Φ(r) as displayed in Fig. 5 confirms the above predictions. V. PLASMONS Plasmons are defined as the zeros of the dielectric function, ε(q, ω_p − iγ) = 0. (17) For small damping constant γ, Eq. (17) can be substituted by the approximate equation 38 Re{ε(q, ω_p)} = 0. (18) Only if γ is small compared to ω_p can one speak of collective density fluctuations. For large Landau damping, it is thus important to also discuss the more general energy loss function Im{−1/ε(q, ω)}, which gives the spectral density of the internal excitations of the system. Similar to Refs. 14,18, there are several solutions of Eq. (18) for nonzero SOC parameters. In Fig. 6, these solutions are shown as straight lines together with a density plot of the energy loss function Im{−1/ε(q, ω)}. One of these solutions has an almost linear dispersion with a sound velocity close to the Fermi velocity, which exhibits an ending point for λ_R ∼ λ_I associated with a double zero of the real part of the dielectric function. However, as can be seen from Figs. 7(a) and (b), this solution does not yield a resonance in the loss function and thus does not represent a plasmonic mode. In the case where the gap in the spectrum is closed (λ_R > λ_I), two additional zeros appear, leading to potential high-energy modes similar to bilayer graphene. 18,19 However, these potential collective modes are damped by interband transitions; i.e., the corresponding peaks in the loss function are broadened out, as can be seen from Fig. 7(b), and no clear signature is seen in the density plot. We are thus left with the branch which is also present for "clean" graphene and which constitutes the only genuine plasmonic mode; see Fig. 6(a). Its dispersion ω_p can be approximated in the long-wavelength limit (q ≪ ω) by ω_p^0 ∝ √q, 42 with a prefactor depending on the SOC parameters and the chemical potential. We thus recover the typical √q dispersion of 2D plasmons. The long-wavelength approximation is shown as a dashed line in Fig. 6.
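Numerically, the plasmon branch is conveniently traced by root-finding Eq. (18) in ω for each wave vector. The sketch below uses a toy Re ε with an exact √q zero, standing in for the full RPA expression; the prefactor A is an arbitrary assumption.

```python
import numpy as np
from scipy.optimize import brentq

# Trace the plasmon branch from Eq. (18), Re eps(q, w_p) = 0, by
# root-finding in w for each q. re_eps is a toy 2D-plasmon model
# (Re eps = 1 - A*q/w^2), whose exact zero is w_p = sqrt(A*q), i.e.,
# the sqrt(q) dispersion discussed above.

A = 2.0  # effective coupling prefactor (assumed)

def re_eps(q, w):
    return 1.0 - A * q / w**2

def plasmon_dispersion(q_values, w_max=10.0):
    roots = []
    for q in q_values:
        # Re eps is large and negative just above w = 0 and positive
        # at large w, so the zero is bracketed in (1e-6, w_max)
        roots.append(brentq(lambda w: re_eps(q, w), 1e-6, w_max))
    return np.array(roots)

q = np.linspace(0.01, 1.0, 6)
wp = plasmon_dispersion(q)
print(np.allclose(wp, np.sqrt(A * q), atol=1e-6))   # True
```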
This approximation coincides with the numerical solution ω_p for small momenta, whereas for larger momenta ω_p is red-shifted compared to ω_p^0. If λ_I is large enough, ω_p remains in the region where Landau damping is absent, see Fig. 6(b); 14 otherwise it eventually enters the Landau-damped region due to interband transitions from the valence to the conduction band, see Figs. 6(a), (d). For two occupied conduction bands, which is the case in Fig. 6(c) and in Fig. 8, the plasmon mode is disrupted at q ≈ 0.05µ by a region with a finite imaginary part, where it becomes damped. This additional Landau-damped region is due to interband transitions between the two conduction bands. The analytical description of the boundaries of this region can be found in Sec. III B. This "pseudo gap" of the plasmon dispersion can also be obtained from Eq. (18) alone, since the "plasmon" velocity formally diverges at the entry and exit points, as can be seen from Fig. 6(c). The crossing points can alternatively be approximated by the intersection of this region with the analytical long-wavelength approximation of the plasmon dispersion. For the quantum critical point (λ_R = λ_I), this yields q_cr− ≈ 0.019µ and q_cr+ ≈ 0.025µ for λ_{R/I} = 0.25µ. For a proper analysis, the full energy loss function thus needs to be discussed, which is done in Fig. 9. It shows how the spectral weight is eventually transferred from the lower to the upper band as momentum is increased, explaining the step in the plasmon spectrum shown in Fig. 8. The pseudo gap of the plasmonic mode always appears for λ_R < 0.5µ, since then two conduction bands are occupied independently of the value of λ_I, but it decreases for increasing λ_I, as the dissipative region due to interband transitions diminishes. In the opposite case of λ_R > 0.5µ, either one or two bands can be occupied. For zero intrinsic coupling, the pseudo gap is absent, but it increases up to a maximum value at around λ_I ≈ λ_R for increasing λ_I. Let us close with a comment on plasmons in undoped graphene. For neutral monolayer graphene at zero temperature, plasmons can exist if one takes into account a circularly polarized light field 24 or effects beyond RPA, 43 and in bilayer graphene by including trigonal warping. 44 In our system, the real part of the dielectric function is always nonzero for the undoped case and thus no plasmons exist. VI. CONCLUSIONS AND OUTLOOK We have presented analytical and numerical results for the dielectric function of monolayer graphene in the presence of Rashba and intrinsic spin-orbit interactions within the random phase approximation for finite frequency, wave vector, and doping. The cases of predominant Rashba and intrinsic coupling and the case of equally large SOC were contrasted. In the static limit the screening properties due to external impurities were studied. Our findings show that the power-law dependence of the screened potential in the undoped system depends on the ratio of the Rashba and intrinsic parameters. While for λ_R > λ_I the screened potential scales like Φ(r) ∝ 1/r³, for λ_I ≥ λ_R a weaker screening, Φ(r) ∝ 1/r, was found. [FIG. 9: Top: Energy loss function Im{−1/ε(q, ω + i0)} for λ_R = λ_I = 0.25µ and various wave vectors q. Bottom: The same for λ_R = 0.25µ and λ_I = 0.] For finite Rashba coupling, a beating of Friedel oscillations in the doped system occurs due to the existence of two distinct kinds of Fermi wave vectors.
For large λ_I ≫ λ_R, this beating vanishes and the two contributions interfere constructively. In the last section the influence of SOI on the collective charge excitations was discussed. We found that while only one plasmon mode exists for standard graphene, several new potential modes occur for finite SOC. However, most of these modes are overdamped and can hardly be detected, as they lie in the region with finite Landau damping. In the case when the two conduction bands are filled, the undamped plasmon mode is disrupted by a narrow dissipative strip due to particle-hole excitations. This "pseudo gap" might be useful to gain further control in possible plasmonic circuitries based on graphene. As already mentioned in the beginning, our findings go even beyond monolayer graphene. For purely Rashba coupling the dielectric function presented in this work equals that of bilayer graphene. The role of the SOC parameter is then played by the interlayer hopping amplitude t_IL, which is several orders of magnitude larger than λ_R. Additionally, the Hamiltonian in Eq. (1) generally describes a system known as the Kane-Mele topological insulator. 3 Our discussion can thus be fully adapted to materials modeled by this Hamiltonian. Besides that, our findings might also be relevant for other monolayers with symmetry properties similar to those of graphene, e.g., MoS2, where SOC is naturally strong. 46 A detailed study of the dielectric properties of MoS2, however, is left open for future works. Appendix A: Details of the calculation of the undoped polarization The undoped polarization is composed of four parts, χ_0(q, ω) = Σ_{η1,η3=±} χ^{η1−→η3+}(q, ω). (A1) As two of them can be obtained by a simple substitution, only two contributions remain. With the help of the Dirac identity, the imaginary parts can be written in closed form in terms of γ = max{λ_R, λ_I} and y = √(k² + λ_−²). We can then make use of Eq. (9) in order to find the real parts; the first contribution is evaluated by introducing suitable auxiliary functions, and the second contribution can be solved in a similar way. Appendix B: Details of the calculation of the doped polarization The extrinsic part for the band E_−+ can be summarized in a compact form in which (ω → −ω), and thus (ω_− → −ω_+), denotes terms with the sign of the frequency changed compared to the preceding expression. The corresponding expression for E_++ can be obtained by substituting λ_R → −λ_R and k_F− → k_F+. After carrying out the angle integration for the real part and choosing a proper substitution, x = √(k² + λ_+²) − λ_+, we arrive at integrals that can be solved in terms of trigonometric and hyperbolic functions. 45 In order to simplify the expressions we use the shorthand notation of Ref. 18. The result can then be written in terms of coefficient functions α_{1,2}^ω, β_{1,2}^ω, γ_{1,2}^ω, and c_1^ω; their lengthy closed forms are not reproduced here. The calculation of the imaginary part is quite similar. Starting from Eq. (B2) and carrying out the angle integration in a way similar to the real part, we arrive at analogous expressions involving the sign functions sign(x − λ_− + ω) and sign(x + λ_+ + ω); the result can again be written in the same shorthand form. The limit of vanishing broadening in Eqs. (B5) and (B8) can now be taken safely, giving finite results.
Functional Aerophagia in Children: A Frequent, Atypical Disorder Aerophagia is a functional gastrointestinal disorder characterized by repetitive air swallowing, abdominal distension, belching and flatulence. Pathologic aerophagia is a condition caused by the swallowing of excessive volumes of air with various associated gastrointestinal symptoms, such as burping, abdominal cramps, flatulence and a reduced appetite. It is a clinical entity that can simulate pediatric gastrointestinal motility disorders, such as gastroparesis, megacolon and intestinal pseudo-obstruction, and presents more frequently in children with mental retardation. Early recognition and diagnosis of functional aerophagia or pathologic aerophagia are required to avoid unnecessary, expensive diagnostic investigations or serious clinical complications. Functional aerophagia is frequent in the adult population, but rarely discussed in the pediatric literature. We present two pediatric clinical cases with a history of functional constipation in whom gaseous abdominal distension was the most important symptom. Mechanical intestinal obstruction, chronic intestinal pseudo-obstruction, malabsorption and congenital aganglionic megacolon were ruled out. Extensive gaseous abdominal distension was due to aerophagia, and treatment consisted of parents' reassurance and psychological counseling. Introduction Functional aerophagia (FA) involves excessive air swallowing causing progressive abdominal distension. The typical clinical presentation is a non-distended abdomen in the morning, progressive abdominal distension during the day, visible, often audible, air swallowing and excessive flatus. When FA is associated with various gastrointestinal symptoms, such as burping, abdominal pain, flatulence and belching, this condition is defined as pathologic aerophagia [1]. Pathologic aerophagia is present in 8.8% of the mentally retarded population [2]. The mechanisms of onset of FA are correlated with involuntary paroxysmal openings of the cricopharyngeal sphincter followed by air swallows without cricopharyngeal swallowing movement sequences [3]. The Rome II and III criteria for functional gastrointestinal disorders (FGIDs) include the definition of aerophagia [4]. In many reported cases, the diagnosis is missed initially, and parents often deny a functional origin of the disease and search instead for organic causes. Gastrointestinal symptoms can be associated with a reduction in oral intake. We describe two pediatric cases of FA without underlying mental retardation. Case 1 An 8-year-old boy was admitted with a history of abdominal bloating associated with rectal tenesmus and increased flatus. These symptoms recurred especially during the afternoon and evening. No associated gastrointestinal symptoms were reported. His clinical history was characterized by rumination in the first years of life with associated non-organic feeding disorders as a picky eater. Radioallergosorbent tests for alimentary and inhalant allergens, skin prick tests and celiac screening were negative. Ultrasonography did not reveal any organomegaly or fluid in the abdomen. Abdominal radiographs showed a distended colon with increased gas in the rectum and coprostasis, without signs of obstruction. On physical examination, weight was 22 kg and height 129 cm, with a mild degree of malnutrition according to the Waterloo classification. Cardiorespiratory examination was normal. Significant non-tender, hypertympanitic abdominal distension was present.
Neither hepatomegaly nor splenomegaly was noted. Rectal examination revealed a hypertonic anal sphincter without perineal erythema or stool. Neurologic examination was normal. The following laboratory investigations were performed: complete blood count (red blood cell count 5.1 × 10⁶/mm³, Hb 13.8 g/dl, HCT 43%, MCV 84 fl, white blood cell count 5 × 10³/mm³, neutrophils 34%, lymphocytes 60%, monocytes 4%, eosinophils 2%, basophils 0%, platelets 297 × 10³/mm³), C-reactive protein 0.10 mg/dl (normal 0-0.50 mg/dl), erythrocyte sedimentation rate 5 mm within the first hour, glycemia 68 mg/dl, serum glutamic oxaloacetic transaminase 26 IU/l, serum glutamic pyruvate transaminase 14 IU/l, serum gamma-GT 7 IU/l, amylase 67 U/l, lipase 27 U/l, BUN 26 mg/dl, creatinine 0.5 mg/dl, iron 54 μg/dl, sodium 140 mmol/l, potassium 4.3 mmol/l, calcium 9.78 mg/dl, total proteins 7.1 g/dl, and albumin 48 g/l; coagulation parameters and urinalysis were normal. Also, lactate dehydrogenase (395 U/l), muscle enzymes (creatine phosphate kinase 75 IU/l, CK-MB 10 IU/l), thyroid hormones (free thyroxine 10.98 pmol/l, thyroid-stimulating hormone 1.925 μIU/ml), celiac serology, cytomegalovirus, Epstein-Barr virus and herpesvirus serology, and autoantibodies (ANA, nDNA, ANCA) were normal, and megacolon was excluded using barium enema. After ruling out primary pathologic causes, a neuropsychiatric consultation was requested, which disclosed continuous aerophagia and an anxiety disorder with obsessive-compulsive traits and gaming dependency. It was possible to reassure the family about the absence of an organic gastrointestinal disease, with indication for neuropsychiatric and cognitive follow-up. Case 2 A 7-year-old girl, affected by autistic spectrum disorder, was admitted to our hospital for abdominal distension and constipation that had started at 2 years of age. A history of severe abdominal distension and bloating was chronically present, especially in the postprandial period until the evening. The chronic constipation was treated with macrogol. The patient's general condition at admission was good; weight was 23 kg and height 121 cm. Physical examination showed excessive air swallowing associated with visible abdominal distension, in the absence of organomegaly, and significant bloating. No abdominal pain or other gastrointestinal symptoms were present. A routine basic metabolic panel was performed, with red blood cell count 4.3 × 10⁶/mm³, Hb 11.8 g/dl, white blood cell count 6.5 × 10³/mm³ (neutrophils 44%, lymphocytes 51%, monocytes 3%, eosinophils 2%, basophils 0%), platelets 278 × 10³/mm³, C-reactive protein 0.10 mg/dl, erythrocyte sedimentation rate 7 mm within the first hour, lactate dehydrogenase 518 U/l, glycemia 79 mg/dl, serum glutamic oxaloacetic transaminase 34 IU/l, serum glutamic pyruvate transaminase 19 IU/l, serum gamma-GT 8 IU/l, total proteins 6.6 g/dl and serum iron 56 μg/dl; serum electrolytes, celiac serology, coproculture and a parasitologic examination of stools were negative. Abdominal ultrasound showed increased gas in the small and large bowel without signs of obstruction. Abdominal radiography confirmed distension of the large and small bowel and the presence of coprostasis. Anorectal manometry showed a normal recto-anal reflex at 30 ml. Neuropsychiatric consultation pointed out a moderate retardation of psychomotor stages and considered aerophagia a stereotypic symptom.
A diagnosis of FA was made, with indication of a cognitive-behavioral approach and associated therapy with macrogol, simethicone and otilonium bromide. Discussion Aerophagia involves excessive air swallowing causing progressive abdominal distension. The symptoms in children are a non-distended abdomen in the morning, progressive abdominal distension during the day, visible, often audible, air swallowing and excessive flatus. Resolution of the abdominal distension occurs during the night by absorption of gas and by flatulence [5]. Aerophagia can occur in sudden acute attacks but also chronically. It has been included in the FGID classification since the Rome II criteria for irritable bowel syndrome and in the Rome III Committee Consensus (table 1). Management of the disorder implies a correct diagnosis with a careful history and a minimal number of diagnostic studies to exclude organic disease (malabsorption and intestinal obstruction) [2]. An overlap of FA with other FGIDs, like irritable bowel syndrome or constipation, can be found. There are no studies about the prevalence of this condition in a pediatric population without mental retardation, and the correct diagnosis is missed in most patients. Typical clinical symptoms include progressive abdominal distension and flatus that can be present during the night as a result of increased parasympathetic activity during sleep, which stimulates gastrointestinal motility. Involuntary cricopharyngeal sphincter openings may be presumed to disappear during sleep, resulting in a non-distended abdomen in the morning. Pathologic childhood aerophagia is defined as a condition of chronic aerophagia associated with gastrointestinal symptoms such as abdominal and epigastric pain, reduced appetite and burping [3]. FA in healthy patients implies a diagnosis based on the presence of the diagnostic clinical criteria combined with a normal physical examination. A careful history and a minimal number of diagnostic studies can exclude organic disease, such as malabsorption or intestinal obstruction. Supplementary investigations should be performed based on case history and physical examination [4] (table 2). The correct diagnosis helps in alleviating anxiety and prevents unnecessary testing, treatments and hospital admission [2]. The most satisfactory diagnostic criteria are abdominal distension that progressively increases during the course of the day (minimal in the early morning and maximal in the late evening, fig. 1), increased flatus during sleep, increased bowel sounds on auscultation of the distended abdomen, and an abdominal radiograph taken in the late afternoon showing an air-distended stomach and increased gas in the small and large bowel without signs of obstruction (fig. 2). In healthy children with high sensitivity and introverted personalities, who present aerophagia as part of a functional disease, the symptoms are precipitated by psychological stress. In a large retrospective analysis of these patients, a high prevalence of anxiety and depression has been found [6]. These findings strongly suggest that aerophagia is a behavioral disorder. In clinical management, a distinction between primary and secondary aerophagia can help identify different risk profiles for surgical complications or emergency situations. A distinction should be made between patients with aerophagia who have chronic stable symptoms and patients with acute and severe episodes of aerophagia with threatening situations (mainly occurring in mentally disabled patients) [1].
Neuropsychiatric consultation and assessment are always recommended. In our two cases, after clinical evaluation, no organic disorders were identified and a behavioral approach was started, with improvement of symptoms. Speech therapy can be considered a very important approach, as it may make the patient conscious of his/her behavior. A diet free of carbonated beverages may help reduce the volume of intra-intestinal gas and alleviate symptoms. In addition, drugs such as simethicone and dimethicone can reduce gas formation in the bowel [7]. Educating the parents to recognize gulping sounds and movements suggestive of air swallowing may be considered an important part of the combined therapy. Finally, a multidisciplinary approach, including good communication with the pediatrician and speech therapist, with strong support and education for the family, is advisable for the correct management of this functional disorder. Table 1. Diagnostic criteria for aerophagia [4]. Must include at least two of the following: (1) air swallowing; (2) abdominal distension because of intraluminal air; (3) repetitive belching and/or increased flatus.
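The "at least two of the following" rule of Table 1 can be encoded directly; the toy sketch below is purely illustrative and not a validated clinical decision tool.

```python
# Toy encoding of the "at least two of the following" rule for
# aerophagia reproduced in Table 1. Illustrative only; not a
# validated clinical decision tool.

CRITERIA = ("air_swallowing",
            "abdominal_distension_from_intraluminal_air",
            "repetitive_belching_or_increased_flatus")

def meets_aerophagia_criteria(findings: dict) -> bool:
    """Return True if at least two of the three criteria are present."""
    return sum(bool(findings.get(c, False)) for c in CRITERIA) >= 2

print(meets_aerophagia_criteria(
    {"air_swallowing": True,
     "abdominal_distension_from_intraluminal_air": True}))  # True
```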
Tet1 controls meiosis by regulating meiotic gene expression Meiosis is a germ cell-specific cell division process through which haploid gametes are produced for sexual reproduction1. Prior to initiation of meiosis, mouse primordial germ cells (PGCs) undergo a series of epigenetic reprogramming steps2,3, including global erasure of DNA methylation on the 5-position of cytosine (5mC) at CpG4,5. Although several epigenetic regulators, such as Dnmt3l and the histone methyltransferases G9a and Prdm9, have been reported to be critical for meiosis6, little is known about how the expression of meiotic genes is regulated and how their expression contributes to normal meiosis. Using a loss-of-function approach, here we demonstrate that the 5mC-specific dioxygenase Tet1 plays an important role in regulating meiosis in mouse oocytes. Tet1 deficiency significantly reduces female germ cell numbers and fertility. Univalent chromosomes and unresolved DNA double strand breaks are also observed in Tet1-deficient oocytes. Tet1 deficiency does not greatly affect the genome-wide demethylation that takes place in PGCs but leads to defective DNA demethylation and decreased expression of a subset of meiotic genes. Our study thus establishes a function for Tet1 in meiosis and meiotic gene activation in female germ cells. This reprogramming includes global erasure of DNA methylation 4,5 . The recent demonstration that the Tet family proteins are involved in DNA demethylation prompted us to evaluate the role of the Tet proteins in PGC reprogramming [7][8][9][10] . RT-qPCR analysis demonstrated that Tet1 is preferentially expressed in PGCs, while Tet2 is expressed in both PGCs and somatic cells, and Tet3 is mainly expressed in somatic cells during PGC development (E9.5-E13.5) (Supplementary Fig. 1). To explore a potential role of Tet1 in PGC reprogramming and/or germ cell development, we generated Tet1 gene-trap mice ( Supplementary Fig. 2a-d). Homozygous mutant mice (Tet1 Gt/Gt ) were generated by crossing heterozygous mice. Southern blot analysis, genomic sequencing, and allele-specific PCR confirmed a single-site insertion of a tandem-repeated gene-trap cassette into the first intron of the Tet1 gene ( Supplementary Fig. 2a-d). β-galactosidase activity analysis revealed that the transgene is almost exclusively expressed in PGCs ( Supplementary Fig. 2e), which is consistent with the Tet1 expression pattern ( Supplementary Fig. 1), supporting a single-locus insertion. Western blot analysis demonstrated that the insertion nullified the expression of the full-length Tet1 protein ( Supplementary Fig. 3a). As expected, a fusion protein between the first exon of Tet1 (aa1-621) and βGeo (1303aa) was detected in Tet1 Gt/Gt ES cells. Consistent with loss of Tet1, dot-blot and mass spectrometry analyses revealed an about 45% reduction of the 5hmC level in E9.5 Tet1 Gt/Gt embryos (Supplementary Fig. 3b-d). RT-qPCR analysis of E11.5 PGCs demonstrated that the Tet1 level is less than 5% of the wild-type level ( Supplementary Fig. 3e). Consistently, Tet1 protein was also not detectable in spreads of Tet1 Gt/Gt PGCs ( Supplementary Fig. 3f). Furthermore, immunostaining revealed loss of the dotted 5hmC staining signal in germ cells of the E14.5 Tet1 Gt/Gt genital ridge ( Supplementary Fig. 3g). Collectively, these data indicate that Tet1 expression is effectively abolished and the 5hmC level significantly reduced in Tet1 Gt/Gt PGCs. Analysis of early backcross generations (N1-2) revealed embryonic lethality for homozygotes, while heterozygous mice were born at the expected Mendelian ratio (Supplementary Table 1).
Although further backcrossing (N3-6) alleviated the embryonic lethal phenotype, the number of viable homozygous mice was still only approximately 1/3 of the number expected. Since the severity of embryonic lethality is affected by genetic background, we only used generation N6 or later Tet1 Gt/Gt mice for subsequent analysis. Similar to a recent report 11 , reduced pup numbers were observed when either homozygous male or female animals were crossed with wild-type animals, and the pup numbers were even fewer when homozygous animals were crossed ( Supplementary Fig. 4a). Because the male gonads were morphologically normal and no obvious defects in male germ cell development were observed (data not shown), we focused our attention on characterizing the female germ cell phenotypes. We found that the size of the Tet1 Gt/Gt ovary is significantly smaller, with a 30% reduction in the ovary-to-body weight ratio (Fig. 1a, Supplementary Fig. 4b). Interestingly, asymmetric ovary size caused by ovarian agenesis is frequently observed in the Tet1 Gt/Gt animals (Fig. 1a). Both fully-grown oocytes in the ovary and ovulated oocytes after hormonal stimulation are significantly reduced in the Tet1 Gt/Gt animals (Fig. 1b, c). Ovary staining with germ cell-specific markers (Mvh, TRA98, and Msy2) followed by counting revealed a significant reduction of oocyte number from E16.5-E18.5 (Fig. 1d, Supplementary Fig. 5a), concurrent with a significant increase in apoptotic oocytes ( Supplementary Fig. 5b, c). These results suggest that increased apoptosis is likely one contributing factor for oocyte loss in Tet1 Gt/Gt animals. Since meiotic prophase takes place in the embryonic ovary and meiotic defects can cause germ cell apoptosis 12 , we asked whether loss of Tet1 function might lead to a meiotic defect. Immunostaining of oocyte surface spreads with the meiotic marker SYCP3 and the centromere marker CREST revealed that progression of meiotic prophase is severely impaired in Tet1 Gt/Gt animals. About 50% of oocytes remained at the leptotene stage and no oocyte reached the pachytene stage in E16.5 Tet1 Gt/Gt ovaries (Fig. 2a). Although pachytene stage oocytes were observed in E17.5 and E18.5 Tet1 Gt/Gt ovaries, their percentage is significantly reduced throughout the developmental stages analyzed, indicating a developmental block rather than a developmental delay. This notion is also supported by the fact that Tet1 depletion does not affect the percentage of diplotene stage oocytes at E18.5 (Fig. 2a), and by the absence of a significant difference in body size between Tet1 +/Gt and Tet1 Gt/Gt embryos at E16.5 and E18.5 ( Supplementary Fig. 6). The significant decrease of pachytene stage oocytes in Tet1 Gt/Gt ovaries suggests that Tet1 deficiency may cause aberrant synapsis formation. Considering the increased apoptosis in E16.5-E18.5 Tet1 Gt/Gt ovaries (Supplemental Fig. 5b, c), severely affected germ cells might be eliminated, explaining the dramatic reduction in germ cell numbers in E18.5 Tet1 Gt/Gt ovaries (Fig. 1d). Thus, meiotic defects are likely the cause of germ cell reduction in Tet1 Gt/Gt ovaries. In normal meiosis, the axial element (AE) of the synaptonemal complex (SC) starts to form at the leptotene stage; AE alignment and synapsis formation are initiated at the zygotene stage and completed at the pachytene stage ( Supplementary Fig. 7). However, loss of Tet1 function resulted in an increase in unpaired SCs in zygotene stage oocytes (Fig. 2b, Supplementary Fig. 7).
Despite the presence of Sycp3-positive AEs, about 30% of zygonema contained fewer than 4 SYCP1-positive transverse filaments (TF) in Tet1 Gt/Gt oocytes (Fig. 2b), suggesting a synapsis formation defect. Defects in synapsis formation were also observed in pachytene stage Tet1 Gt/Gt oocytes. In these oocytes, the great majority of the AEs failed to align and remained separated from each other as univalent chromosomes (Fig. 2c, Supplementary Fig. 7). Continuous SYCP3 in short stretches of SC indicates that those oocytes were at a stage corresponding to pachytene. Quantification of pachytene stage oocytes revealed that 92% of them have 0-4 univalent chromosomes in wild-type oocytes, but that number dropped to 62% in the Tet1 Gt/Gt oocytes, accompanied by an increase in oocytes with 5 or more univalent chromosomes (Fig. 2c). These results indicate that loss of Tet1 function impaired synapsis formation. Since about 62% of pachytene stage oocytes succeed in AE pairing, we asked whether they exhibited other defects. At the early phase of meiosis, DNA double strand breaks (DSBs) are introduced to initiate homologous recombination, and these can be detected by the presence of γH2AX 1 . As expected, immunostaining revealed the presence of γH2AX throughout the nucleoplasm from the leptotene to the zygotene stage in both wild-type and Tet1 Gt/Gt oocytes ( Supplementary Fig. 8). However, while γH2AX gradually decreases around the pachytene stage and only a few foci remain associated with fully synapsed chromosome cores in wild-type oocytes, cloud-like nuclear staining of γH2AX remained in the Tet1 Gt/Gt pachytene and even early diplotene stage oocytes marked by the crossover-specific marker MLH1 13 (Fig. 2d, Supplementary Fig. 8). Analysis of the γH2AX staining pattern and quantification of the three categories (negative, partially positive, and positive) (Supplementary Fig. 9) clearly demonstrated that Tet1 depletion caused accumulation of γH2AX in pachytene and early diplotene oocytes (Fig. 2e). The presence of γH2AX in late stage meiotic oocytes was also confirmed by co-staining of γH2AX with the late stage meiosis marker Msy2 in E18.5 ovaries (Supplementary Fig. 10). Importantly, a significant increase in γH2AX and cleaved Caspase3 double-positive cells was also observed ( Supplementary Fig. 10b, c), suggesting that the increase in apoptotic cell death is likely caused by meiotic defects. Consistent with a DSB repair defect, the DSB repair-associated recombinase RAD51 1 remains in pachytene and diplotene oocytes ( Supplementary Fig. 11). The presence of γH2AX and the delayed removal of RAD51 from the chromosomes indicate that homologous recombination is impaired in Tet1 Gt/Gt oocytes. Staining with the crossover marker MLH1 indicated that the MLH1 foci numbers are significantly reduced in Tet1 Gt/Gt pachytene and early diplotene oocytes ( Supplementary Fig. 12), further supporting a homologous recombination defect. Collectively, the above results support the notion that loss of Tet1 function leads to meiotic defects that include univalent chromosome formation, as well as DSB repair and homologous recombination defects. Previous studies have shown that establishment of a proper pericentric heterochromatin (PCH) structure plays an important role in meiosis 14 . Interestingly, immunostaining revealed specific enrichment of 5hmC at the PCH of many prophase meiotic chromosomes, and this enrichment is eliminated in Tet1 Gt/Gt oocytes ( Supplementary Fig. 13).
Since meiotic prophase PCH possesses a specific histone modification pattern 15 , we asked whether loss of Tet1 function can affect PCH structure by altering this histone modification pattern. Immunostaining revealed that loss of Tet1 function affected neither the PCH histone modification pattern ( Supplementary Fig. 14) nor centromere clustering or HP1γ localization ( Supplementary Figs. 14, 15). Collectively, these results suggest that the observed meiotic defect is unlikely due to a defect in PCH. The stage- and cell type-specific Tet1 expression pattern ( Supplementary Fig. 1) and the dual roles of Tet1 in transcription 16,17 suggest that the meiotic defects in Tet1 Gt/Gt germ cells might be due to aberrant transcription in PGCs. Thus, we purified PGCs from female E13.5 embryos and profiled their transcriptome by mRNA sequencing. We chose to focus on this time point because this is the time when female PGCs enter meiotic prophase after epigenetic reprogramming. We used a recently developed Smart-Seq method 18 and generated over 20 million unique reads per sample, which allowed us to identify over 13,000 expressed transcripts in each genotype (Supplementary table 2). Hierarchical clustering and global correlation analysis indicated that the samples were clearly separated by their genotypes, with Spearman correlation coefficients of 0.98/0.99 within biological replicates ( Supplementary Fig. S16). Depletion of Tet1 resulted in differential expression of 1,010 genes (FDR<0.05), among which over 80% (899 genes) were down-regulated (Fig. 3a, Supplementary table 3). Gene ontology (GO) analysis revealed that the most significantly enriched pathways of these down-regulated genes are related to cell cycle (p-value <9e-11) and meiosis-related processes (p-value <2e-6) (Fig. 3b, Supplemental Fig. S17a). In contrast, no significant enrichment of pathways or biological processes was identified in the up-regulated gene group. Importantly, genes known to be critical for meiosis are down-regulated in Tet1 Gt/Gt PGCs (Fig. 3a, Supplementary table 3). These genes include Stra8, Prdm9, Sycp1, Mael and Sycp3 1, [19][20][21][22] . Notably, this set of meiotic genes remained down-regulated even at later developmental stages (Supplemental Fig. S17b, c), consistent with the meiotic defects observed in E16.5 Tet1 Gt/Gt PGCs. The effect of Tet1 on the expression of at least a subset of these genes is direct, as ChIP analysis demonstrated that Tet1 occupies the Sycp1, Mael, and Sycp3 gene promoters (Fig. 3c). Thus, Tet1 loss directly contributes to aberrant regulation of at least a subset of meiotic genes in PGCs. To investigate how Tet1 might be involved in the activation of these meiotic genes, we performed whole-genome bisulfite sequencing (WGBS) using an ultra-low input Tn5mC-seq method 23 . We generated 945 million and 302 million reads for Tet1 Gt/Gt and wild-type PGCs, respectively. After removing clonal reads due to the limited input cells, we obtained 14-16 million CpG sites per genotype at 1.76-2.66x genome coverage (Supplementary table 4), which is over 100-fold higher than a previous effort 24 . To our knowledge, this provides the most comprehensive methylation map in PGCs so far. Consistent with previous findings 24 , we found that PGCs are globally hypomethylated (Fig. 4a), which was verified by immunostaining (Supplemental Fig. S18). Despite no dramatic increase in global DNA methylation, DNA methylation levels are generally higher in mutant PGCs, particularly in exons, introns, LTRs, and IAPs (Fig. 4a, p<0.01).
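For readers unfamiliar with WGBS bookkeeping, the sketch below illustrates the elementary step behind such comparisons: per-CpG methylation fractions from methylated/total read counts and a naive difference-based flag for candidate sites. The input arrays are hypothetical, and genuine DMR calling additionally involves windowing and statistical testing, which this toy omits.

```python
import numpy as np

# Toy WGBS bookkeeping: per-CpG methylation = methylated reads / total
# reads; sites are flagged when the mutant-vs-wild-type difference
# exceeds a threshold. Inputs are hypothetical illustrations.

def methylation_fraction(meth_reads, total_reads, min_cov=2):
    meth = np.asarray(meth_reads, float)
    total = np.asarray(total_reads, float)
    frac = np.where(total >= min_cov, meth / np.maximum(total, 1), np.nan)
    return frac  # NaN where coverage is too low (relevant at ~2x depth)

def candidate_dm_sites(frac_wt, frac_mut, min_diff=0.3):
    diff = frac_mut - frac_wt
    return np.flatnonzero(np.abs(diff) >= min_diff)

wt  = methylation_fraction([1, 0, 3, 2], [4, 3, 4, 2])
mut = methylation_fraction([3, 2, 3, 0], [4, 3, 4, 2])
print(candidate_dm_sites(wt, mut))   # indices of putative DM CpGs
```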
With the caveat that 2x genome coverage may not allow robust identification of differentially methylated regions (DMRs) between Tet1 Gt/Gt and wild-type PGCs, we nevertheless performed a detailed analysis and identified 4,337 putative DMRs (Supplementary table 5, Supplemental Fig. S19) that are mostly located far away from transcriptional start sites (Supplemental Fig. S20a). These putative DMRs are associated with 5,242 genes, among which 255 exhibited altered expression (Fig. 4b, Supplementary table 6) and are enriched for cell cycle regulation as well as for reproductive and infertility processes (FDR=0.02) (Supplementary Fig. S20b, c). Furthermore, these DMRs are enriched for Tet1 binding in mouse ES cells 17 (P-value <10E-100). To evaluate whether Tet1 binding affects DNA methylation in PGCs, we performed bisulfite sequencing on the three verified Tet1 target genes (Fig. 3c). Consistent with the involvement of Tet1 in DNA demethylation, the methylation level of the Sycp1, Mael, and Sycp3 promoters is increased in the Tet1 Gt/Gt PGCs compared with that in the wild-type PGCs (Fig. 4c). These data indicate that Tet1-mediated demethylation of these genes is likely involved in their activation during PGC development. We note that some down-regulated genes, such as Stra8, showed no obvious change in DNA methylation, indicating that they are either regulated indirectly by Tet1 or in a DNA methylation-independent manner. Taken together, our study provides the first evidence that Tet1 is not responsible for global demethylation in PGCs; rather, it plays a specific role in meiotic gene activation, at least partly through DNA demethylation. Depletion of Tet1 leads to down-regulation of meiotic genes, which causes a defective meiotic prophase, including accumulation of unrepaired DSBs and formation of univalent chromosomes. The meiotic defects cause loss of oocytes and a consequent decrease in fertility and litter size. Previous studies have established that the DNA methylation levels of certain meiotic genes are decreased concomitant with genomic reprogramming 25 . Our study has extended this observation by demonstrating that Tet1 mediates locus-specific demethylation and subsequent activation of a subset of meiotic genes, revealing a specific function of Tet1 in germ cell development. [Fig. 4(c) caption: Bisulfite sequencing analysis of the Tet1 binding sites in the Sycp1, Mael, and Sycp3 gene promoters in wild-type and Tet1 Gt/Gt PGCs. Each CpG is represented by a circle; open and filled circles represent unmethylated and methylated CpGs, respectively. Percentages of DNA methylation are indicated.]
2016-05-12T22:15:10.714Z
2012-10-31T00:00:00.000
{ "year": 2012, "sha1": "f5fc9eb23d37ba3251242718823d747f1f921e6c", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc3528851?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "0690fe1aa1b30a34892d0c851a0ac67fcd7b4986", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
208612869
pes2o/s2orc
v3-fos-license
Electronic structure and core electron fingerprints of caesium-based multi-alkali antimonides for ultra-bright electron sources The development of novel photocathode materials for ultra-bright electron sources demands efficient and cost-effective strategies that provide insight into and understanding of the intrinsic material properties given the constraints of growth and operational conditions. To address this question, we propose a viable way to establish correlations between calculated and measured data on core electronic states of Cs-K-Sb materials. To do so, we combine first-principles calculations based on all-electron density-functional theory on the three alkali antimonides Cs3Sb, Cs2KSb, and CsK2Sb with x-ray photoemission spectroscopy (XPS) on Cs-K-Sb photocathode samples. Within the GW approximation of many-body perturbation theory, we obtain quantitative predictions of the band gaps of these materials, which range from 0.57 eV in Cs2KSb to 1.62 eV in CsK2Sb and manifest direct or indirect character depending on the relative potassium content. Our theoretical electronic-structure analysis also reveals that the core states of these systems have binding energies that depend only on the atomic species and their crystallographic sites, with the largest shifts, of the order of 2 eV and 0.5 eV, associated with the K 2p and Sb 3d states, respectively. This information can be correlated with the maxima in the XPS survey spectra, where such peaks are clearly visible. In this way, core-level shifts can be used as fingerprints to identify specific compositions of Cs-K-Sb materials and their relation with the measured values of quantum efficiency. Our results represent the first step towards establishing a robust connection between the experimental preparation and characterization of photocathodes, the ab initio prediction of their electronic structure, and the modeling of emission and beam formation processes. The future of bright electron sources relies on the close interplay between theoretical understanding and experimental verification of the material characteristics and their role for the electron beam properties. Photoemission-based electron sources are the enabling concept for many electron-accelerator-based applications such as the free-electron laser or ultra-fast scattering instruments 1. Furthermore, they are essential components for a plethora of technological devices ranging from photovoltaic cells to radiation detectors. For accelerator applications, great progress has been achieved in recent years with regard to the preparation and characterisation of photoemission electron sources, especially given the boundary conditions present in the harsh environment of the particle accelerator. The electron source, i.e., the photocathode, shares the vacuum environment of the first accelerating section and is subject to strong radio-frequency or static electric fields. In parallel with the experimental progress in the preparation of photocathodes, it is now also possible to understand many aspects of the generation of the electron pulses after emission. What is still missing is a deeper understanding of the intrinsic material properties and their role in the formation of the initial electron beam characteristics. There is also a lack of knowledge about the effects of the growth process or of the contamination typically present in the accelerator environment.
Right now the frontier of materials for accelerator applications is represented by semiconductors with maximised quantum efficiency (QE) in the visible region and minimised transverse energy (MTE) 2. Compared to simple metals or conventional semiconductors such as GaAs, they feature relatively low electron affinity and band gap, both on the order of 1 eV, which are key requirements for high QE in the visible band. Furthermore, their vacuum requirements are more modest than those of GaAs photocathodes 1. Motivated by these intriguing properties, a number of groups worldwide are striving to produce and operate photocathodes for particle accelerators based on multi-alkali antimonides 3-8. For example, at the Helmholtz-Zentrum Berlin (HZB), Cs-K-Sb photocathodes have been grown, characterised, and transferred into the photoelectron gun of the energy-recovery linac (ERL) bERLinPro. In the photoelectron gun the photocathode is placed inside a superconducting radio-frequency (SRF) cavity at the location of peak electric field. The electric field in the SRF cavity oscillates with a frequency of 1.3 GHz. At a defined phase the photocathode is struck by short light pulses (a few ps in length) from a frequency-doubled (515 nm) laser locked to the frequency of the SRF cavity. During emission the electron pulses are subject to rapid acceleration with gradients on the order of 20 MV/m and reach relativistic velocities after a couple of cm. The goal for the ERL accelerator is to generate a beam of ultra-short electron pulses with high average power and high brightness. After the beam has been used in radiation-generation processes, the electron pulses are guided back through the main acceleration section, and their energy is recovered and fed to freshly generated pulses from the electron source. For this electron source, the photocathode material should exhibit high QE for visible drive-laser wavelengths and low intrinsic emittance to reach high brightness. To keep the emittance low, the substrate and photocathode film must be smooth. Additionally, to ensure stable operation, the photocathode must be robust to the environment within the photoinjector. To fulfill these requirements, several ultra-high vacuum (UHV) systems have been built to grow, analyse and transfer Cs-K-Sb photocathodes into the accelerator. The interconnection between all these steps requires characterisation techniques that are compatible with the accelerator requirements. For example, destructive methods cannot be applied, meaning that the range of available tools is limited. Nevertheless, the alkali co-deposition procedure developed at HZB and the adopted characterisation techniques have demonstrated the possibility of obtaining reproducible photocathodes with high QE 8. In spite of these notable achievements, severe issues remain regarding the production of single-crystalline samples with a controlled amount of defects and impurities 9-12. The growth procedures that are routinely adopted nowadays 6,8,9 do not allow for an a priori definition of the obtained stoichiometry and crystal structure starting from a given composition. As a result, in place of the desired single crystals, surface-disordered and polycrystalline photocathodes are often grown, inhibiting reproducible characterisation and an in-depth understanding of the intrinsic material properties that determine the operational characteristics of photocathodes. Ab initio electronic-structure theory is ideally suited to fill this gap.
In a recent work on CsK2Sb we demonstrated that all-electron density-functional theory (DFT) and many-body perturbation theory (MBPT) are able to unravel the electronic structure and the excitations of multi-alkali antimonides with unprecedented insight 13. Most importantly, these methods are truly predictive, as they do not require a priori knowledge of any empirical parameter of the material but only its chemical composition and crystal structure. In this context, single-crystalline solids represented by unit cells including a few atoms and satisfying a large number of symmetry operations are most efficiently treated. [Figure 1 caption: Ball-and-stick representation of the primitive FCC unit cells of (a) Cs3Sb, (b) Cs2KSb, and (c) CsK2Sb. Sb atoms, depicted in black, are at the origin of the cell at Wyckoff position (0,0,0), while Cs and K atoms (magenta and grey spheres, respectively) are located at crystal coordinates (1/2, 1/2, 1/2) and ±(1/4, 1/4, 1/4), labelled A1, A2, and A3 in panel (a). (d) Brillouin zone of the adopted FCC unit cell with the high-symmetry points and the path connecting them marked in colour. (e) Image of a polycrystalline Cs-K-Sb photocathode film deposited on a Mo plug substrate and mounted on a flag-style sample holder.] These structures, however, hardly match realistic multi-alkali antimonide photocathodes, where different crystal structures and stoichiometries often coexist and where the quality of the obtained materials is largely limited by the experimental conditions. The massive presence of defects in the samples is a typical characteristic of the grown photocathodes; it is hardly predictable a priori and therefore represents a major complication from a theoretical perspective. Also experimentally, the strict conditions for in situ growth, characterisation, and operation required for the samples limit the possibilities for an accurate determination of their stoichiometry and composition, even through well-established techniques such as x-ray photoelectron spectroscopy (XPS) 14-16. Overcoming this issue represents a challenge for the near future that demands new viable strategies to gain insight into the fundamental characteristics of photocathode materials, in spite of the limited knowledge of their structure and stoichiometry. First-principles methods for electronic structure theory can play a prominent role in this context, owing to their unprecedented predictive power in comparison with any other theoretical approach. For this reason, substantial advances can be achieved by searching for, identifying, and unravelling correlations between the crystalline multi-alkali antimonides modelled ab initio and the experimentally grown photocathodes. In this way it is possible to create a virtuous circle between theory and experiment, in which the former does not simply fit the results of the latter but is able to provide information about the material properties on its own. In return, data generated in the lab can be compared with the results of calculations to rationalise the properties of the photocathode under realistic growth and operational conditions. Following this strategy, here we present a first step in this direction. In ab initio calculations based on DFT and MBPT, we focus on the ideal face-centred-cubic (FCC) phases of Cs3Sb, Cs2KSb, and CsK2Sb, and investigate their electronic structure in the valence and core regions.
Experimentally, QE measurements coupled with XPS are used to determine correlations between the growth procedure, the material properties, and the photocathode characteristics, including their composition, stoichiometry, and oxidation state 8,17,18. The analysis of core states enabled by the adopted all-electron DFT formalism provides all the ingredients for a qualitative comparison with the collected XPS data on the grown photocathodes, in view of new, cost-efficient strategies to predict and characterise advanced electron source materials. Results Structural properties. Caesium-potassium antimonides are known to crystallise in a cubic crystal structure with 16 atoms in the unit cell 19-21. The structure of Cs3Sb, first resolved in the pioneering study by Jack and Wachtel 19, is characterised by 8 sites occupied by an equal number of Cs atoms and by 8 additional sites with random Cs and Sb occupations. In our first-principles calculations we consider an idealised FCC crystal structure for stoichiometric Cs3Sb, Cs2KSb, and CsK2Sb (see Fig. 1a-c), also adopted in previous theoretical studies 13,22. In this FCC lattice, the Sb atoms are located at the origin of the unit cell with Wyckoff coordinates (0,0,0), while the alkali species are found at (1/2, 1/2, 1/2) and ±(1/4, 1/4, 1/4). We denote these sites as A2, A1, and A3, respectively, as shown in Fig. 1a. From a crystallographic viewpoint, A1 and A3 represent equivalent sites. In Cs3Sb, where all these positions are occupied by Cs atoms, there are two inequivalent Cs atoms in the unit cell. We consider Cs2KSb and CsK2Sb structures such that the two alkali species occupy inequivalent sites, as visualised in Fig. 1b,c. The volume optimisation carried out from DFT for the three alkali antimonide structures shown in Fig. 1a-c shows a clear trend between the lattice parameter a and the relative amount of Cs and K atoms. The largest lattice parameter is found in Cs3Sb, where a = 9.38 Å, and its value monotonically decreases in Cs2KSb (a = 9.23 Å) and in CsK2Sb (a = 8.76 Å), in agreement with x-ray diffraction measurements 23. The correlation between lattice parameter and composition of caesium-potassium antimonides was pointed out by McCarrol in 1961 21: according to chemical intuition, the larger atomic radius of Cs compared to K promotes an increase of the unit cell volume. Our DFT results qualitatively match the trend reported in that first experimental study 21 and, in agreement with previous DFT results 24 performed on the cubic cell proposed by Jack and Wachtel 19, validate our choice of the unit cell. The antimony-alkali bond lengths also reproduce this behaviour. The distance between Sb and the alkali atom occupying site A1 (and equivalently A3, see Fig. 1a) decreases monotonically with decreasing relative amount of Cs atoms in the unit cell. In Cs3Sb the Sb-Cs distance is 4.06 Å, while in Cs2KSb and in CsK2Sb the bond length between Sb and Cs is equal to 4.00 Å and 3.79 Å, respectively. Electronic structure. The electronic band structures of the Cs-based materials considered in this work, as calculated from DFT (black lines) and from G0W0 (red dots), are shown in Fig. 2. The high-symmetry points reported on the abscissa and the path connecting them are highlighted in the Brillouin zone depicted in Fig. 1d.
[Table 2 caption: Core-level binding energies computed from DFT for all the atomic species in Cs3Sb, Cs2KSb, and CsK2Sb. Inequivalent Cs atoms are identified based on their crystallographic sites (see Fig. 1). A colour code is adopted to identify patterns in the magnitude of the relative shifts with respect to Cs3Sb: yellow denotes shifts of less than 0.1 eV, orange of 0.27 eV, red of about 0.5 eV, green of 0.8 eV, and blue of over 2 eV. All energies are expressed in eV.] We notice that, qualitatively, the results obtained from DFT and G0W0 are almost identical. Quantitatively, however, the quasi-particle (QP) gaps obtained from GW are in all cases about twice as large as the DFT ones, as summarised in Table 1. CsK2Sb has the largest QP gap of 1.62 eV (0.92 eV from DFT, see also ref. 13), followed by Cs3Sb with a band gap of 1.18 eV (0.59 eV from DFT) and finally by Cs2KSb with a QP gap of 0.57 eV (0.18 eV from DFT). Among these three materials, only CsK2Sb is characterised by a direct band gap at Γ, in agreement with earlier DFT results 24. The other two systems have an indirect band gap between X and Γ, where the valence-band maximum (VBM) and the conduction-band minimum (CBm) appear, respectively. This finding suggests that reducing the relative amount of K leads to a shift of the VBM from the zone centre Γ to the zone edge X. This feature has an influence on the optical properties of the materials as well, as seen from the optical gaps reported in Table 1. These values, as computed from G0W0, range from 1.08 eV in Cs2KSb to 1.62 eV in CsK2Sb, where the optical and fundamental gaps coincide. Interestingly, the optical gap of Cs3Sb, being 1.53 eV, is less than 100 meV smaller than that of CsK2Sb. Focusing more closely on the band structures, we notice that the dispersion of the valence bands increases significantly going from Cs3Sb, where the valence is spread over an energy range of about 1 eV, to CsK2Sb, where the highest-occupied bands extend over a region of almost 1.3 eV. In the former case, and analogously also in Cs2KSb, the largest band dispersion occurs along the path connecting Γ, X, and W; in the band structure of CsK2Sb (Fig. 2c), one of the valence bands has a pronounced minimum around L and an almost parabolic shape in the vicinity of this point, along the path connecting W to Γ. In the conduction region we notice that the lowest unoccupied band in Cs2KSb does not cross the next one at higher energy, contrary to both Cs3Sb and CsK2Sb, where this intersection is visible along the Γ-X path (see Fig. 2). The electronic properties of these three materials and their relation with the Cs-K ratio can be further analysed by inspecting the band character. For this purpose, in Fig. 3 we report the most relevant atom-projected contributions to the bands of each material. While in all systems the valence bands are dominated by the partially filled Sb p shell (see also refs. 13,22,24), the conduction bands carry the signatures of each compound. An overview of Fig. 3 reveals that the bottom of the conduction region is dominated by the s-states of the Sb atoms and, to a lesser extent, also of the Cs atoms. This is not surprising, considering the parabolic shape of the lowest unoccupied band in the vicinity of its minimum, which falls at the Γ point in each considered material. The electrons in the empty Sb s shell form these bands. Electrons from the half-filled Cs 6s shell also contribute to the conduction bands.
However, in contrast to the Sb s electrons, the bands with Cs s-character appear more delocalised throughout the BZ. In Fig. 3, different colours are used to mark the contributions from inequivalent Cs atoms in Cs3Sb: those from A1 and A3 (A2) sites are displayed in red (orange). We notice that contributions from Cs s-states of atoms in A1 positions appear mainly in the lowest unoccupied band. On the other hand, Cs s-states of atoms in A2 sites contribute not only to the bottom of the conduction region but also to higher unoccupied bands. In the other two compounds, Cs2KSb and CsK2Sb, only one inequivalent Cs atom is present. In both cases the Cs s-states contribute to the lowest unoccupied band in the vicinity of Γ, where they hybridise with the Sb s-states. Additional contributions from Cs s-states in these materials appear also in higher-energy unoccupied bands, especially around the high-symmetry points X (both), L (Cs2KSb), and W (CsK2Sb). It should also be noted that in these systems the contributions from K atoms enter into play in the conduction region 13,24. In the bottom panels of Fig. 3, the bands with Cs d-character are displayed. Also in this case we adopt different colours to identify the contributions from inequivalent Cs atoms (light green for those on A2 sites and dark green for those on A1 and A3 sites). We emphasise that the weights of each state, quantified by the size of the coloured circles, are represented with the same scaling ratio for each atomic character and in each material. Hence, the relative comparison between the plots displayed in Fig. 3 is quantitative. Interestingly, in the materials containing potassium, the contribution of the Cs d-states to the conduction bands is complementary to that of the Cs and Sb s-states. The Sb d-shell does not significantly contribute to the first few eV above the CBm, as discussed in ref. 13 in the case of CsK2Sb. Core spectroscopy. The adopted theoretical formalism, based on all-electron DFT 25, gives access to the energies of all the electrons in the core. Although core energies are systematically underestimated by 10-20% in DFT 26, relative core-level shifts, referenced to the same levels in different materials, can be correlated with XPS data in a meaningful way. The calculated core-level energies of each state in all the investigated materials are reported in Table 2. We separate, line by line, the binding energies of states with different principal and angular quantum numbers n and l. Contributions from crystallographically inequivalent Cs atoms are identified by their site in the unit cell, according to the scheme in Fig. 1a. The energies of Cs atoms on sites A3, which are equivalent to those on sites A1, are omitted. A colour code is adopted in Table 2 to visualise the magnitude of the core-level shifts with respect to Cs3Sb. In this way it is possible to catch at a glance the rather regular patterns, which do not vary with the quantum numbers of the core states but appear instead as a general property of atoms at specific sites. The relative shift of the core levels of Cs atoms on site A1 in Cs2KSb compared to those in Cs3Sb amounts to 0.27 eV. The corresponding values in Table 2 are marked in orange. The binding energies of the core states of Cs atoms on site A2 in CsK2Sb are shifted by −0.82 eV with respect to Cs3Sb and are highlighted in green in Table 2. We recall that larger binding energies correspond to deeper levels.
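The colour-coded classification of core-level shifts described above can be expressed as a small helper function. This is a sketch only: the bin edges are chosen to reproduce the categories named in the Table 2 caption, and the example binding-energy shift is a placeholder, not a value from the paper.

```python
def shift_category(delta_eV):
    """Map a core-level shift relative to Cs3Sb onto the Table 2 colour code."""
    d = abs(delta_eV)
    if d < 0.1:
        return "yellow (<0.1 eV)"
    if d < 0.4:
        return "orange (~0.27 eV)"
    if d < 0.65:
        return "red (~0.5 eV)"
    if d < 1.5:
        return "green (~0.8 eV)"
    return "blue (>2 eV)"

# Placeholder example: a hypothetical Sb 3d level shifted by +0.48 eV
print(shift_category(0.48))   # -> red (~0.5 eV)
```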
Binding energies of core levels associated with crystallographically inequivalent Cs atoms in Cs3Sb are always separated in energy by 0.87 eV, with the states of atoms on site A2 being always energetically deeper. The core shifts associated with K and Sb atoms are also insensitive to the specific core state and appear once again as a general property of the atomic species in the specific compounds. The core binding energies of antimony atoms in Cs2KSb and CsK2Sb increase with respect to those in Cs3Sb by different amounts. In CsK2Sb, Sb binding energies are slightly larger, by less than 0.1 eV, compared to those in Cs3Sb (yellow boxes in Table 2). On the other hand, the shifts between Sb core states in Cs3Sb and Cs2KSb are of the order of 0.5 eV (red boxes in Table 2). The most remarkable shifts in the binding energies of core levels appear between the 1s and 2p states of potassium in the two considered bi-alkali antimonides. A close inspection of the blue boxes in Table 2 reveals shifts larger than 2 eV, which are again almost constant regardless of the quantum numbers of the core states. K states in Cs2KSb are deeper than those in CsK2Sb. The regular patterns observed in the computed core-level shifts offer promising perspectives for monitoring peak energies in XPS spectra of multi-alkali antimonide photocathodes grown for bERLinPro. Here, we consider this possibility retroactively for photocathodes P006 and P007, grown and characterised at HZB 27. These samples are grown on a Mo substrate (see Fig. 1e) using an alkali co-deposition technique recently developed for bERLinPro and described in detail in ref. 8. The stoichiometric characterisation of the photocathodes is performed in situ using XPS, a well-established surface analysis technique used to determine the chemical composition of surfaces and thin films 28. The advantages of XPS as a diagnostic technique in the context of bERLinPro are its non-destructive nature and the possibility, in the employed setup, of characterising the materials without exposing them to the atmosphere (more details are provided in the Experimental Section below). [Table 3 caption: Chemical composition, stoichiometric content, and final QE of the Cs-K-Sb samples P006, P007, and P015 (see Fig. 5). The QE is measured at 2.33 eV.] In the measured XPS survey spectra (top panel of Fig. 4), peaks associated with specific core states are clearly visible, with those referring to Cs 3d, Sb 3d, and K 2p highlighted as the most prominent and identifiable features in the spectrum. The stoichiometry of the samples is determined from the relative intensities of the aforementioned peaks, as reported in Table 3. More detailed region scans, such as the one shown for K 2p (see the inset of the top panel in Fig. 4), offer additional insight for determining peak positions and shifts, and are also useful for peak fitting. The K 2p peak positions in the spectra of P006 and P007 are very similar. The sizable shifts predicted by theory are actually not observed in the top panel of Fig. 4, which is not surprising, considering that the photocathode samples are non-stoichiometric (see Fig. 5) and may contain amorphous or polycrystalline regions with different stoichiometry. Real photocathodes are also subject to possible oxygen contamination due to residual gases of the K and Cs dispensers. A detailed region spectrum of the Sb 3d and O 1s regions for P007 is shown in the bottom panel of Fig. 4.
The oxygen contamination of a sample can be calculated from this plot by extracting the relative peak intensities of the O 1s and Sb 3d5/2 states. Since the binding energies shown in the plot in Fig. 4 (bottom panel) are so close, the spectrum has been deconvoluted to identify the individual contributions of O 1s and of the two Sb species (3d3/2 and 3d5/2) with oxidation states Sb3− (blue and orange dashed lines) and Sb(0) (red and purple dashed lines). For reference, the raw measured data are reported with an offset (black solid line) together with the Shirley background (black dashed line), which has been subtracted to obtain the deconvoluted spectrum in the foreground. From this graph it is clear that the P007 sample is dominated by antimony atoms in the Sb3− oxidation state. Very minor contributions from Sb(0) can be seen only for the 3d5/2 electron at 527.6 eV. For P007 we also see some oxygen contamination, identified by the broad peak centred at 532.6 eV, which is visible also in the stoichiometric analysis reported below (see Fig. 5). The analysis of Fig. 4 exemplifies the characterisation of a photocathode sample, highlighting the chemical nature of the grown material, which contains oxygen impurities and has a non-stoichiometric composition, as discussed below with reference to Fig. 5. In Fig. 5 the correlation between the QE, measured under illumination at 532 nm wavelength, and the stoichiometric content of Cs and K atoms is presented for a selected number of Cs-K-Sb samples grown at HZB (for details, see the Experimental Setup below). The relative oxygen contamination (ratio of the O 1s to Sb 3d peak intensities) of each sample is indicated according to the colour scale on the right-hand side of the figure: green points correspond to oxygen-free cathodes, while red dots refer to samples with up to 0.2 relative oxygen content. For pristine single crystals, such as those modelled by DFT, corresponding to CsK2Sb and Cs2KSb, respectively, one would expect to see two distinct clusters of samples around x = 1 and x = 2, where x indicates the relative K or Cs content. In reality, the grown samples are polycrystalline in nature and thus composed of grains of the two different stoichiometries. The presence of crystal phases other than the most stable cubic ones considered in the DFT calculations cannot be excluded. The stoichiometric content of the samples presented in Fig. 5 is the mean value averaged over a surface of several mm² 27. As a result, the points in both panels of Fig. 5 are rather scattered, spanning the entire region between x = 1 and x = 2 and also extending above and below it. We notice a correlation between the amount of oxygen content and the QE, in particular in the samples P014 and P015, which exhibit the highest purity and, concomitantly, the highest efficiency among the data points reported in Fig. 5. On the other hand, the relatively large QE of P013 and, conversely, the relatively low QE of P011 do not allow us to draw general conclusions. Overall, in the presented pool of samples, those with stoichiometry close to Cs2KSb exhibit the highest QE. We finally remark that, while from Fig. 5 one might infer that zero K content maximises the QE, for the operation of the cathodes in the bERLinPro accelerator specific constraints in terms of material robustness and thermal stability have to be fulfilled. Such criteria are met by Cs-K-Sb compounds but not entirely by Cs3Sb 1.
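The deconvolution of the overlapping O 1s and Sb 3d signals can be illustrated with a least-squares fit of overlapping peaks. This is a minimal sketch assuming simple Gaussian line shapes and a single shared width (the real analysis used CasaXPS with a Shirley background and more refined line shapes); the O 1s centre at 532.6 eV and the Sb 3d5/2 region near 528 eV follow the text, while the Sb 3d3/2 position, the amplitudes, and the noise are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def model(be, a_sb52, a_sb32, a_o1s, w):
    # Sb 3d5/2 near 528 eV, Sb 3d3/2 near 537 eV (assumed), O 1s at 532.6 eV
    return (gauss(be, a_sb52, 528.0, w)
            + gauss(be, a_sb32, 537.3, w)
            + gauss(be, a_o1s, 532.6, w))

be = np.linspace(520.0, 545.0, 500)                      # binding energy axis (eV)
rng = np.random.default_rng(1)
spectrum = model(be, 9.0, 6.0, 1.5, 1.0) + rng.normal(0, 0.05, be.size)

popt, _ = curve_fit(model, be, spectrum, p0=[8, 5, 1, 1.2])
print("fitted O 1s amplitude: %.2f" % popt[2])
```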
We conclude this section by benchmarking the spin-orbit splitting, which is clearly visible in the calculated core binding energies in Table 2 as well as in the measured XPS spectra (see Fig. 4), against tabulated references. In this way, we can use the spin-orbit splitting, which is known to be an atomic property, as a parameter to validate both the computed and the measured values. We summarise this comparison in Table 4, where we contrast the spin-orbit splittings of three selected states (Cs 3d, K 2p, and Sb 3d) in Cs-K-Sb materials (the calculated Cs2KSb and CsK2Sb compounds) with those reported in ref. 29 for the corresponding elemental crystals. The comparison with the DFT values reveals very good agreement for K 2p and Sb 3d, with discrepancies of less than 5%. The computed spin-orbit splitting for Cs 3d is instead overestimated by DFT by slightly more than 10%. Analogous trends are obtained for the experimental values, where the discrepancy between Cs2KSb and CsK2Sb can be attributed to the uncertainty of the measurement. In this case, the splittings of K 2p and Sb 3d are smaller than the reference. It should also be noted that the experimental data reported in Table 4 are obtained from samples P007 and P015, which are not perfectly stoichiometric (see Fig. 5). Although spin-orbit splittings are not meaningful for identifying different stoichiometries, the mutual agreement between the DFT and XPS results supports the reliability of both the calculations and the measurements. [Table 4 caption: Spin-orbit splitting (Δ) within the Cs 3d, K 2p, and Sb 3d states computed from DFT and extracted from XPS measurements on the CsK2Sb and Cs2KSb stoichiometries (samples P007 and P015 in Fig. 5 and Table 3). Tabulated values from ref. 29 for elemental bulk materials are reported in the third column.] Summary and Conclusions The results presented in this work demonstrate the ability of ab initio methods for solid-state theory to unveil the fundamental physical properties of photocathode materials. By adopting state-of-the-art approaches based on all-electron DFT and MBPT in the GW approximation, we have computed the electronic properties of three alkali antimonide materials for photocathodes, namely Cs3Sb, Cs2KSb, and CsK2Sb. A quantitative estimate of the band gap has revealed that all these systems are characterised by rather low quasi-particle band gaps, ranging from 0.57 eV in Cs2KSb to 1.62 eV in CsK2Sb. The character of the band gap changes depending on the relative content of potassium, being direct at Γ in CsK2Sb and becoming indirect in the other two compounds. The study of the band character reveals valence bands dominated by antimony p states (see also ref. 13) and conduction bands with relevant contributions from both Cs and Sb s states at lower energies, and from Cs d states further away from the Fermi energy. The analysis of the binding energies of the core states shows regular patterns in the shifts associated with individual atomic species at specific crystallographic sites. In particular, our DFT results indicate sizable shifts, of the order of 2 eV and 0.5 eV, for the K 2p and Sb 3d states between Cs2KSb and CsK2Sb. These trends can be used to identify correlations between ab initio theory and measurements on bi-alkali antimonide photocathode samples. In this regard we have shown the results of XPS characterisation measurements performed on a set of Cs-K-Sb photocathodes prepared for the bERLinPro project.
The XPS survey spectra of the grown samples reveal non-stoichiometric compositions as well as the presence of oxygen contaminants, which in turn determine the oxidation states of the Sb species. Finally, we have shown how the stoichiometry and the chemical composition of the cathodes, including their relative oxygen content, are related to the measured QE. In conclusion, our combined theoretical and experimental study shows that correlations between ab initio results and measurements of core binding energies can be drawn in order to characterise multi-alkali antimonide photocathodes. We emphasise that this connection is particularly relevant given the limited freedom for in situ and in operando characterisation of photocathode materials for accelerator applications. The growth of purely stoichiometric photocathodes is achievable only at X-ray beamlines equipped with suitable, yet non-transferable, setups 12, which enhances the role of ab initio theory in the whole optimisation process. While at the present stage we have explored from first principles only ideal stoichiometric bulk materials, further studies are required to bridge the gap with experiments. Additional calculations addressing non-stoichiometric compounds are expected to be significantly more demanding from a computational perspective but also more relevant for finally bridging the gap to experiments. Novel techniques such as high-throughput screening and machine learning are expected to provide important aid towards this goal. In this perspective, the results of this work represent a first step towards establishing a robust connection between the experimental preparation and characterisation of photocathodes, the ab initio prediction of their electronic structure, and the modelling of the emission and beam-formation processes under operational conditions. Methods Theoretical background and computational details. The first-principles results presented in this work are obtained in the framework of density-functional theory (DFT) and many-body perturbation theory (MBPT). DFT is based on the Hohenberg-Kohn theorems 30 and is implemented according to the Kohn-Sham (KS) scheme 31, which consists of mapping the many-electron problem onto a set of independent-particle equations for the electronic wave-functions of each electron n in the system at each k-point in the Brillouin zone: [−∇²/2 + v_s(r)] φ_nk(r) = ε_nk^KS φ_nk(r), (1) where ε_nk^KS is the KS energy per particle. Eq. (1) is expressed in atomic units, which are adopted from now on. On the left-hand side of Eq. (1), in addition to the kinetic-energy operator, we find the effective potential per particle v_s(r), which consists of the sum of three terms: v_s(r) = v_ext(r) + v_H(r) + v_xc(r). The external potential v_ext(r) includes the interaction between the negatively charged electrons and the positively charged nuclei. The Hartree potential v_H(r) accounts for the (mean-field) Coulomb interaction between the electrons, and v_xc(r) is the exchange-correlation (xc) potential. Since the exact form of v_xc(r) is unknown, this term in Eq. (1) has to be approximated. In this work, we adopt the generalised gradient approximation (GGA) in the Perdew-Burke-Ernzerhof parameterisation 32. A relevant aspect in the solution of Eq. (1) is the choice of the basis set. Here, we adopt the linearised augmented planewave plus local-orbital (LAPW+lo) method, which allows for an explicit treatment of core electrons.
Their wave-functions are described by solutions of the Dirac equation in the spherically symmetric potential given by the nuclei. Further details are provided in refs. 25,33. The GW approximation 34 in the single-shot perturbative approach G0W0 35 is adopted to estimate the quasi-particle correction to the valence and conduction states in the gap region. In this formalism, the quasi-particle energy of each electronic band, ε_ik^QP, is computed from the electronic self-energy Σ as ε_ik^QP = ε_ik^KS + Z_ik Re[Σ(ε_ik^KS) − v_xc], (2) where Z_ik is the renormalisation factor accounting for the energy dependence of the self-energy and ε_ik^KS are the solutions of the Kohn-Sham equations for the given states. For the derivation of Eq. (2) and additional information we refer to the reviews in refs. 36,37. The details of the implementation of GW in an all-electron formalism are reported in refs. 38,39. Binding energies of core states are computed from DFT as the energy of each level with opposite sign. No QP correction is applied in this case, as the extremely localised character of core electrons requires a many-body treatment that goes beyond the GW approximation 40. Absolute values of core binding energies computed from DFT are therefore systematically underestimated by 10-20% compared to experiments. All calculations presented in this work are performed with the exciting code 25. The muffin-tin (MT) radii of all the atomic species involved (Cs, K, and Sb) are set to 1.65 bohr, and a plane-wave basis-set cutoff R_MT·G_max = 8.0 is employed. For ground-state calculations the Brillouin zone (BZ) is sampled using a homogeneous cubic k-mesh with 8 points in each direction, corresponding to 29 points overall once symmetry operations are taken into account. The optimised volume of each unit cell is obtained by fitting DFT results obtained at varying lattice parameters with the Birch-Murnaghan equation of state 41,42. To calculate the quasi-particle band structure within the G0W0 approximation, a 4 × 4 × 4 k-mesh is adopted without exploiting symmetries, for a total of 64 k-points. Unit cells and BZs are visualised with the XCrysDen software 43. Experimental setup. The experimental results presented in this work come from the Photocathode Preparation and Analysis system at HZB. The system is composed of two main chambers. The preparation chamber houses the evaporation sources: an effusion cell to evaporate high-purity Sb beads and SAES dispenser sources for alkali deposition. A spectral-response setup is also attached to the preparation chamber to measure the QE(λ) over a range from 1.77 eV (700 nm) to 3.35 eV (370 nm). For a full description of the setup, see refs. 27,44. The setup for QE measurements is composed of the following: a tunable white-light source and monochromator for stimulating photoemission, a pickup anode and picoammeter (Keithley model 6487) to measure the extracted photocurrent, and a calibrated photodiode power meter (Thor-Labs PM100D and S130VC) to measure the power of the light. At the end of the photocathode growth procedure a photocurrent measurement is taken. Green light (2.33 eV) is irradiated onto the sample and a bias voltage is applied between the earthed photocathode and the pick-up anode; the current is then measured by the picoammeter. A dark-current measurement is taken and subsequently subtracted from the current measurement to obtain the true photocurrent, from which the QE is calculated.
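The volume-optimisation step mentioned above (fitting energies at varying lattice parameters with the Birch-Murnaghan equation of state) can be sketched as below. The E(V) points here are synthetic stand-ins for the DFT total energies; the target lattice parameter of 9.38 Å (Cs3Sb) is taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    """Third-order Birch-Murnaghan equation of state E(V)."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * ((eta - 1) ** 3 * Bp
                                        + (eta - 1) ** 2 * (6 - 4 * eta))

a = np.linspace(9.0, 9.8, 9)                 # trial lattice parameters (angstrom)
V = a ** 3 / 4.0                             # FCC primitive-cell volume
E = birch_murnaghan(V, -10.0, 9.38 ** 3 / 4.0, 0.1, 4.0)   # synthetic "DFT" energies

popt, _ = curve_fit(birch_murnaghan, V, E, p0=[-10.0, V.mean(), 0.1, 4.0])
a_opt = (4.0 * popt[1]) ** (1.0 / 3.0)
print(f"optimised lattice parameter: {a_opt:.2f} angstrom")
```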
In this way, the QE values presented in this paper are obtained. The analysis chamber houses an electron analyser (Specs Phoibos 100 2D-CCD) and an x-ray source (Specs XR 50) for XPS measurements. The chamber is directly attached to the preparation chamber, thus allowing for in-situ growth and analysis. A full description of the setup is available in ref. 27. In this work the samples have been excited by non-monochromatic Al Kα radiation. The survey spectra were obtained with a constant pass energy of 20 eV, and the detailed region spectra with 10 eV. The corresponding uncertainties in the spectra are 0.37 eV and 0.19 eV, respectively. In an XPS experiment, a sample is irradiated with X-ray photons of known energy, and the photoelectrons liberated by this process are collected and their kinetic energy is recorded with an electron spectrometer. A spectrum of electron counts per second as a function of kinetic energy is generated and then, using Eq. (3), converted into binding energy (see Fig. 4): E_KE = hν − E_BE − ϕ, (3) where E_KE is the kinetic energy of the emitted electron, hν is the excitation photon energy, E_BE is the binding energy of the core state with respect to the Fermi level, and ϕ is the work function of the spectrometer. The binding energy depends on the element from which the electron was emitted, and therefore the spectrum provides information on the electronic structure of the sample material, from which the elemental and chemical composition can be determined 45. To obtain the stoichiometric content of the photocathodes presented in this work, the relative peak intensities of the Sb 3d, K 2p and Cs 3d peaks from the XPS survey spectra were quantitatively analysed using CasaXPS in conjunction with a custom PYTHON code 27. The ratio of two concentrations (c_a and c_b) was determined using Eq. (4): c_a/c_b = (I_A/I_B)·(σ_b λ_IMFP,b)/(σ_a λ_IMFP,a), (4) where I_A and I_B are the intensities of the two atomic species (e.g., Cs and K), σ is the excitation cross-section, and λ_IMFP is the inelastic mean free path. In this work, cross-sections σ from Scofield are used and the IMFPs λ_IMFP are obtained from the program SESSA using the TPP-2M formula 46,47. A custom PYTHON code was used to first carry out a normalisation of the data using the analyser transmission function; then a standardised Shirley background subtraction was performed, followed by an integration of the peak intensity I. In this way, the stoichiometric content of Cs and K was calculated for the photocathodes presented in this work. Data availability Data are available upon reasonable request to the corresponding authors.
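A minimal sketch of the quantification chain just described: an iterative Shirley background subtraction, peak-area integration, and the cross-section/IMFP-corrected concentration ratio of Eq. (4). The synthetic peak shape, the Shirley iteration scheme (one common variant, not necessarily CasaXPS's), and all numerical inputs are illustrative assumptions.

```python
import numpy as np

def shirley_background(y, n_iter=30):
    """Iterative Shirley background for one peak region (binding energy ascending)."""
    bg = np.linspace(y[0], y[-1], y.size)
    for _ in range(n_iter):
        area = np.cumsum(np.clip(y - bg, 0, None))      # running peak area
        bg = y[0] + (y[-1] - y[0]) * area / max(area[-1], 1e-12)
    return bg

def concentration_ratio(I_a, I_b, sigma_a, sigma_b, lam_a, lam_b):
    """c_a / c_b from background-subtracted peak areas, as in Eq. (4)."""
    return (I_a / I_b) * (sigma_b * lam_b) / (sigma_a * lam_a)

# Synthetic peak on a step-like inelastic background
x = np.linspace(0, 99, 100)
y = 1.0 + 4.0 * np.exp(-0.5 * ((x - 50) / 8) ** 2) + 0.8 / (1 + np.exp(-(x - 50) / 4))
peak_area = np.trapz(y - shirley_background(y))          # integrated intensity I

# Placeholder areas, Scofield-like cross-sections and TPP-2M-style IMFPs
print(concentration_ratio(I_a=1500.0, I_b=420.0,
                          sigma_a=25.0, sigma_b=5.0,
                          lam_a=1.6, lam_b=2.4))
```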
2019-12-04T16:06:22.485Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "c5bcb35f9877a2ce4bbbe7ba25c2e1b24be6f8ff", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-54419-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c5bcb35f9877a2ce4bbbe7ba25c2e1b24be6f8ff", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
119274426
pes2o/s2orc
v3-fos-license
Direct Imaging by SDO AIA of Quasi-periodic Fast Propagating Waves of ~2000 km/s in the Low Solar Corona Quasi-periodic, propagating fast mode magnetosonic waves in the corona were difficult to observe in the past due to relatively low instrument cadences. We report here evidence of such waves directly imaged in EUV by the new SDO AIA instrument. In the 2010 August 1 C3.2 flare/CME event, we find arc-shaped wave trains of 1-5% intensity variations (lifetime ~200 s) that emanate near the flare kernel and propagate outward up to ~400 Mm along a funnel of coronal loops. Sinusoidal fits to a typical wave train indicate a phase velocity of 2200 +/- 130 km/s. Similar waves propagating in opposite directions are observed in closed loops between two flare ribbons. In the k-$\omega$ diagram of the Fourier wave power, we find a bright ridge that represents the dispersion relation and can be well fitted with a straight line passing through the origin. This k-$\omega$ ridge shows a broad frequency distribution with indicative power at 5.5, 14.5, and 25.1 mHz. The strongest signal at 5.5 mHz (period 181 s) temporally coincides with quasi-periodic pulsations of the flare, suggesting a common origin. The instantaneous wave energy flux of $(0.1-2.6) \times 10^7 ergs/cm^2/s$ estimated at the coronal base is comparable to the steady-state heating requirement of active region loops. Quasi-periodic propagating fast mode magnetosonic waves with phase speeds v_ph ∼ 1000 km s⁻¹ in active regions remain the least observed among all coronal MHD waves, while single-pulse "EIT waves" (Thompson et al. 1998) of typical speeds ∼200 km s⁻¹ were interpreted as their quiet-Sun counterparts (Wu et al. 2001; Ofman & Thompson 2002; cf. Chen & Wu 2011). Williams et al. (2002) first imaged, during an eclipse, a fast wave of v_ph = 2100 km s⁻¹ in a closed loop (see also Verwichte et al. 2005). The scarcity of fast wave observations was mainly due to instrumental limitations. The new Atmospheric Imaging Assembly (AIA; Lemen et al. 2011) on the Solar Dynamics Observatory (SDO) has high cadences up to 10 s, short exposures of 0.1-2 s, and a 41′ × 41′ full-Sun field of view (FOV) at 1.5″ resolution, which are all crucial for detecting fast propagating features. Within the first year of its launch, AIA detected 10 quasi-periodic fast propagating (QFP) waves, among which the first was mentioned by Liu et al. (2010b) and the best example is presented here. OBSERVATIONS AND DATA ANALYSIS On 2010 August 1, an eruption (Liu et al. 2010a; Schrijver & Title 2011) occurred in NOAA active region 11092, involving a coronal mass ejection (CME) and a GOES C3.2 flare that started at 07:25 UT and peaked at 08:57 UT. Waves in the Funnel In AIA 171 Å running difference images (Figure 1(d)-(f), Animation 1(D)) and even in direct and base difference images (Animations 1(A) and 1(C)), we discovered arc-shaped wave trains emanating near the brightest flare kernel (box 1 in Figure 1(b)) and rapidly propagating outward along a funnel of coronal loops that subtends an angle of ∼60° near the coronal base. They are successive, alternating intensity variations of 1-5%, repeatedly launched in the wake of the CME during the rise phase of the flare (07:45-08:45 UT). The wave fronts continuously travel beyond the limb, suggesting that they are not propagating over the solar surface like Moreton waves (Moreton 1960) or EIT waves. They are not observed in the other AIA EUV channels, indicating a subtle temperature dependence.
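The running-difference and base-difference processing used to reveal the faint (1-5%) wave fronts can be sketched in a few lines. The data cube below is random noise standing in for a (time, y, x) series of 171 Å images; the cadence and image size are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
cube = rng.normal(100.0, 1.0, size=(20, 64, 64))   # stand-in (t, y, x) image series

running_diff = cube[1:] - cube[:-1]                # consecutive-frame differences
base_diff = cube - cube[0]                         # difference w.r.t. a pre-event frame
base_ratio_pct = 100.0 * base_diff / cube[0]       # percent variation, cf. the 1-5% fronts
print(running_diff.shape, base_ratio_pct.std())
```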
[Figure 1 caption, continued: The square box marks the region for the Fourier analysis in Section 2.2. (g)-(i) Images of (d)-(f) in the boxed region, Fourier filtered with a narrow Gaussian centred at the peak in Figure 4(b) at frequency ν = 14.5 mHz (P = 69 s) and wave number k = 9.0 × 10⁻³ Mm⁻¹ (λ = 110 Mm), which highlight the corresponding QFP wave trains (see Animation 1(E) and Section 2.2.3).] To analyze the wave kinematics, we placed three 20″ (14.7 Mm) wide curved cuts that start from the brightest flare kernel and follow the shape of the funnel (Figure 1(d)). By averaging pixels across each cut, we obtained image profiles along it, and stacking these profiles over time gives space-time diagrams as shown in Figure 2, where we see two types of moving features: (1) The shallow, gradually accelerating stripes represent the expanding coronal loops in the CME, which have final velocities up to ≥260 km s⁻¹, as indicated by parabolic fits (dashed lines in Figure 2(b)). EUV dimming is evident behind these loops (Figures 1(c) and 2(d)), indicating evacuation of coronal mass. (2) The steep, recurrent stripes result from the arc-shaped wave fronts. Sinusoidal fits (Figure 2(e)) to the spatial profiles along the central cut yield a projected wavelength λ = 133 ± 17 Mm and phase velocity v_ph = 2200 ± 130 km s⁻¹, giving a period of P = λ/v_ph = 60 ± 8 s. Linear fits to the space-time stripes from the three cuts produced by the same wave front indicate similar velocities (Figure 2(a)-(c)). (Such velocities, measured from the projection onto the sky plane, are lower limits of their 3D values.) Each wave front travels up to ∼400 Mm with a lifetime of ∼200 s before reaching the edge of AIA's FOV, likely as a result of damping and amplitude decay with distance (∝ 1/r²). Waves in Closed Loops At the same time, we noticed similar fast propagating waves along closed loops between two flare ribbons. The sudden switches of direction at the western footpoint (top edge of the plot) near 08:10 and 08:25 UT suggest wave reflection, but a general trend cannot be established. It is thus not clear whether the bi-directional waves are generated independently or whether they are the same wave trains reflected repeatedly between the footpoints. We find no temporal correlation between the waves in the closed loops and those in the funnel, which is dominated by outgoing waves, except for marginal incoming wave signals near its base (Figure 2). [Figure 2 caption, panels (e)-(f): (e) Vertical slices of (b) at the times and distances marked by the two plus signs. They are snapshots of the intensity running difference (x-axis) as a function of distance (y-axis) at five consecutive times. Each curve (and thus its average position, marked by the vertical broken line) is incrementally shifted by 12 DN, which equals AIA's 12 s cadence, so the x-axis also serves as elapsed time. Each profile is fitted with a sine function A sin[2π(r − r0)/λ], shown in red, where A is the amplitude, λ the wavelength, and r0 the initial phase in distance. The average fitted parameters and their standard deviations are listed. The filled circles mark the delayed occurrences at the average position, to which a linear fit indicates a phase velocity v_ph = 2200 ± 130 km s⁻¹. (f) Horizontal slices of (d) in the selected region, i.e., temporal profiles of the intensity base ratio at the locations marked by the cross signs. Successive curves at greater distances are shifted upward. The two prominent wave periods of 69 and 181 s are marked with slanted lines, indicating wave propagation.]
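A hedged sketch of the sinusoidal fitting behind Figure 2(e): fit A sin[2π(r − r0)/λ] to spatial profiles at consecutive times, then estimate v_ph from the drift of r0 with time. The profiles are synthetic, built with the reported wavelength and phase speed; the noise level is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

def wave(r, A, lam, r0):
    return A * np.sin(2 * np.pi * (r - r0) / lam)

r = np.linspace(0, 300, 200)                       # distance along the cut (Mm)
times = np.arange(5) * 12.0                        # AIA 12 s cadence
r0_fit = []
for t in times:
    rng = np.random.default_rng(int(t))
    profile = wave(r, 3.0, 133.0, 2.2 * t) + rng.normal(0, 0.2, r.size)
    popt, _ = curve_fit(wave, r, profile, p0=[2.5, 120.0, 2.0 * t])
    r0_fit.append(popt[2])

v_ph, _ = np.polyfit(times, r0_fit, 1)             # slope in Mm/s
print(f"phase velocity ~ {v_ph * 1e3:.0f} km/s")   # 1 Mm = 1e3 km
```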
Because of their simplicity (no superposition of bi-directional propagation), we choose to further analyze the waves in the funnel with the Fourier transform, as presented below. Overall k-ω Diagram We extracted a 3D data cube in (x, y, t) coordinates, i.e., a time series of 171 Å running difference images for the FOV of Figure 1(a) during 07:45-08:45 UT. We obtained the Fourier power of the data cube on the (k_x, k_y, ν) basis of wave numbers k_x and k_y and frequency ν. We then summed the power in the azimuthal θ direction of cylindrical coordinates (k, θ, ν), where k = √(k_x² + k_y²) (e.g., DeForest 2004). This yields a k-ω diagram of wave power at a resolution of ∆k = 2.09 × 10⁻³ Mm⁻¹ and ∆ν = 0.277 mHz, as shown in Figure 4(a). We find a steep, narrow ridge that describes the dispersion relation of the fast propagating waves, together with a shallow, diffuse ridge that represents the slowly expanding loops at velocities on the order of 50 km s⁻¹. To isolate the fast propagating waves (at the expense of reduced frequency resolution), we repeated this analysis for the smaller boxed region shown in Figure 1(d) and a shorter duration of 07:58-08:23 UT, in which these waves are prominent. The resulting k-ω diagram (Figure 4(b)) better shows the steep ridge, which can be fitted with a straight line passing through the origin. This gives average phase (v_ph = ν/k) and group (v_gr = dν/dk) velocities of 1630 ± 760 km s⁻¹, which cannot be distinguished from each other, given the large uncertainty, over the observed range up to the Nyquist frequency of 41.7 mHz set by AIA's 12 s cadence. Temporal Evolution of the k-ω Diagram We repeated the above procedure for a data cube of the boxed region during 07:45-08:45 UT, masked with a running time window that has a full width at half maximum (FWHM) of 10 minutes with cosine-bell tapering on both sides. We shifted the window by 1 min at a time (only 6 such windows are independent in the 1 hr duration) and obtained a corresponding k-ω diagram for each, as shown in Figures 4(c)-(f) and Animation 4. The early k-ω diagrams are dominated by a shallow ridge with an increasing slope that indicates the CME acceleration. When the CME front moves out of the FOV, a steep ridge corresponding to the fast propagating waves becomes progressively evident, with a slope varying in the 1000-2000 km s⁻¹ range. Frequency Distribution of Fourier Power We note that the running difference (time derivative) of images used above, similar to a highpass filter, essentially scales the original signal with frequency ν and applies a ν² factor to the Fourier power. To recover the intrinsic power amplitude, we replaced the running difference images with detrended images obtained by subtracting images running-smoothed in time with a 200 s boxcar, introducing a low-frequency cutoff at 5 mHz that is below all strong peaks on the ridge in Figure 4(b). We then repeated the analysis of Sections 2.2.1 and 2.2.2. The new k-ω diagrams (e.g., Figure 4(g) vs. (e)) exhibit the expected general trend of decreasing power with frequency, and as a result the steep ridge becomes less evident at high frequencies. The Fourier power from running difference and detrended images yields consistent peak frequencies, which can be visually identified in the space-time domain. The lowest frequency, ν₀ = 5.5 mHz (P₀ = 181 ± 13 s ≈ 3 min), manifests as slow modulations in Figures 2(b) and (d) at 08:06-08:18 UT.
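A minimal sketch of the k-ω construction: take the 3D FFT of a (t, y, x) cube and sum |F|² azimuthally over the (k_x, k_y) plane onto |k| bins. The cube below is random, and the cadence and plate scale (12 s, ~0.4 Mm per pixel) are assumed stand-ins for the real 171 Å series; negative-frequency rows are retained for brevity.

```python
import numpy as np

cube = np.random.default_rng(3).normal(size=(64, 64, 64))   # (t, y, x)
power = np.abs(np.fft.fftn(cube)) ** 2

kx = np.fft.fftfreq(cube.shape[2], d=0.4)                   # Mm^-1 (assumed pixel scale)
ky = np.fft.fftfreq(cube.shape[1], d=0.4)
kmag = np.hypot(*np.meshgrid(kx, ky))                       # |k| over the (kx, ky) plane

nbins = 32
kbins = np.linspace(0.0, kmag.max(), nbins + 1)
idx = np.clip(np.digitize(kmag.ravel(), kbins) - 1, 0, nbins - 1)
komega = np.zeros((cube.shape[0], nbins))
for i in range(cube.shape[0]):                              # azimuthal sum per frequency
    komega[i] = np.bincount(idx, weights=power[i].ravel(), minlength=nbins)
print(komega.shape)    # (frequency, wavenumber) diagram
```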
The next period, 69 ± 3 s (14.5 mHz), dominating the power from running difference images (Figure 4(b)), matches the temporal spacing between bright stripes near 08:08 UT in Figures 2(a)-(c) and the period given by the sinusoidal fits (Figure 2(e)). The corresponding wave fronts are prominent in the original and Fourier-filtered images (Figures 1(d)-(i), Animation 1(E)). These two periods are also evident in the emission profiles of Figure 2(f). The higher frequency of 25.1 mHz (40 ± 1 s) has considerably weaker power, and a nearby frequency of 23 mHz (see Figure 4(d)) can be seen in the spacing of narrow stripes near 08:01 UT (Figure 2(a)), when the other two frequencies are not yet strong. Common 3 min Periodicity in Waves and Flare As shown in Figure 5(b), the RHESSI X-ray flux and the AIA 1600 Å fluxes of the flare ribbons (particularly the brightest one in box 1, where the funnel is rooted; see Figure 1(b) and Animation 1(B)) exhibit bursty bumps at a 3 min period (5.5 mHz). The onsets of these pulsations (vertical dotted lines) coincide with those of the slow modulations of the QFP waves (Figure 5(a)). This can also be seen in the wavelet power of these flare emissions (Figures 5(d)-(g)). The Fourier power of the X-ray flux (green curve, Figure 4(h)) is consistent with that of the QFP waves below ~10 mHz, but significantly lower at higher frequencies. It also matches that of the triangle wave up to the third harmonic because of its triangular pulse shape (Figure 5(c)). In contrast, the 1600 Å flux of a background plage (box 3 in Figure 1(b)) is constantly dominated by the 5 min (3.5 mHz) photospheric p-mode oscillations (Figures 5(b) and (f)). Estimate of Wave Energy and Magnetic Field The energy flux carried by the QFP waves can be estimated from the kinetic energy of the perturbed plasma, E = ρ(δv)²v_ph/2 ≥ ρ(δI/I)²v_ph³/8 (Aschwanden 2004), where we have assumed that the observed intensity variation δI results from density modulation δρ and used δv/v_ph ≥ δρ/ρ = δI/(2I) for magnetosonic waves, since I ∝ ρ². If we take v_ph = 1600 km s⁻¹ and δI/I = 1%-5%, as observed in the mid-range of the funnel (200 Mm from the flare kernel), and use the corresponding number density n_e ~ 10⁸ cm⁻³ estimated with the 171 Å channel response (following De Pontieu et al. 2011), we reach an energy flux E ~ (0.1-2.6) × 10⁵ ergs cm⁻² s⁻¹. The diameter of the funnel here has increased ∼10 times from the coronal base, where the wave energy flux should be ~10² times higher by continuity of energy flow, if we assume the waves are generated there and consider damping along their path. This energy flux is more than sufficient for heating the local active region loops (Withbroe & Noyes 1977). However, considering the limited temporal and spatial extent of these waves, they are unlikely to play an important role in heating the quiescent global corona. Assuming the measured phase speed v_ph equals the fast mode speed along magnetic field lines in the funnel, which is the Alfvén speed v_A = B/√(4πρ), the magnetic field strength is estimated as B = v_ph√(4πρ) ≈ 8 G.
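A worked version of these order-of-magnitude estimates in cgs units; n_e, v_ph, δI/I, and the ×100 base scaling are taken from the text, and a pure hydrogen plasma is assumed for the mass density.

```python
import numpy as np

m_p = 1.67e-24                  # proton mass (g)
n_e = 1e8                       # electron number density (cm^-3)
rho = n_e * m_p                 # mass density, assuming hydrogen plasma
v_ph = 1.6e8                    # 1600 km/s in cm/s

for dI_over_I in (0.01, 0.05):
    E = rho * dI_over_I ** 2 * v_ph ** 3 / 8.0
    print(f"dI/I={dI_over_I:.0%}: E ~ {E:.1e} erg cm^-2 s^-1 "
          f"(x100 at the base: {100 * E:.1e})")

B = v_ph * np.sqrt(4 * np.pi * rho)     # invert v_A = B/sqrt(4*pi*rho), in gauss
print(f"B ~ {B:.0f} G")
```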
[Figure 4 caption (fragment; beginning lost): ...(a)) is a power-weighted linear fit to the (k, ν) positions of pixels greater than 10% of the maximum power in the k ≤ k_max range (marked by the vertical dotted line). (c)-(f) Same as (b) but for images masked with a running time window whose FWHM is labeled on each panel (see Animation 4). (g) Same as (e) but on a log scale from detrended (rather than running difference) images (see Section 2.2.3). The diffuse horizontal band is an artifact at the 5 mHz detrending cutoff frequency and is ∼5% of the QFP wave power here. (h) Power spectrum vs. frequency obtained by averaging over k ≤ k_max in a k-ω diagram that is the same as (b) but from detrended images. (i) Spectrogram obtained by compiling the wave-number-averaged power, as shown in (h), from k-ω diagrams at different times, as shown in (g). The x-axis here refers to the central times of the running window. Prominent "islands" are contoured at the 50% level; their peaks are marked by plus signs and the peak frequencies (periods) by horizontal dotted lines. The frequency uncertainties are the standard deviations within the contours.] DISCUSSION We propose that these QFP waves, imaged with AIA's unprecedented capabilities, are fast mode magnetosonic waves, which have been theoretically predicted and simulated (e.g., Bogdan et al. 2003; Fedun et al. 2011; Ofman et al., in preparation) but rarely observationally detected. We speculate on their possible origin as follows. 1. The accompanying CME is unlikely to be the wave trigger, because it takes place gradually over ∼30 min (≫ wave periods, Figure 2(b)), and it would be difficult for its single pulse to sustain oscillations lasting ∼1 hr, as observed here, without being damped. However, the environment in its wake might be favorable for these waves. 2. The common 3 min periodicity (Section 2.3) of the QFP waves and flare quasi-periodic pulsations (QPPs; Nakariakov & Melnikov 2009; Kupriyanova et al. 2010) suggests a common origin. Quasi-periodic magnetic reconnection and energy release can excite both flare pulsations (Ofman & Sui 2006; Fleishman et al. 2008) and MHD oscillations that drive QFP waves; or, in turn, the MHD oscillations responsible for the waves can modulate energy release and flare emission (Foullon et al. 2005). This periodicity is the same as that of the 3 min chromospheric oscillations, further suggesting that these oscillations may modulate the reconnection (Chen & Priest 2006; Heggland et al. 2009; McLaughlin et al. 2009). However, the deficit of flare power at higher wave frequencies (≳10 mHz, Figure 4(h)) is somewhat puzzling. Perhaps the waves are driven by a multi-periodic exciter that produces no detectable flare signals at these frequencies. A future study of similar events will further shed light on the nature of these waves. This work was supported by AIA contract NNG04EA00C. LO was supported by NASA grants NNX08AV88G and NNX09AG10G. Wavelet software was provided by C. Torrence and G. Compo. We thank Nariaki Nitta for helpful discussions.
2011-06-16T06:18:14.000Z
2011-06-16T00:00:00.000
{ "year": 2011, "sha1": "2c6f9ab60dc15241a22799c2aa672752bb9989a9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1106.3150", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2c6f9ab60dc15241a22799c2aa672752bb9989a9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
231876537
pes2o/s2orc
v3-fos-license
Impact of National Lockdown on the Hyperacute Stroke Care and Rapid Transient Ischaemic Attack Outpatient Service in a Comprehensive Tertiary Stroke Centre During the COVID-19 Pandemic

Background: The COVID-19 pandemic is having major implications for stroke services worldwide. We aimed to study the impact of the national lockdown period during the COVID-19 outbreak on stroke and transient ischemic attack (TIA) care in London, UK. Methods: We retrospectively analyzed data from a quality improvement registry of consecutive patients presenting with acute ischemic stroke and TIA to the Stroke Department, Imperial College Health Care Trust London, during the national lockdown period (between March 23rd and June 30th 2020). As controls, we evaluated the clinical reports and stroke quality metrics of patients presenting with stroke or TIA in the same period of 2019. Results: Between March 23rd and June 30th 2020, we documented a fall in the number of stroke admissions of 31.33% and of TIA outpatient referrals of 24.44% compared to the same period in 2019. During the lockdown, we observed a significant increase in symptom onset-to-door time in patients presenting with stroke (median = 240 vs. 160 min, p = 0.020) and TIA (median = 3 vs. 0 days, p = 0.002) and a significant reduction in the total number of patients thrombolysed [27 (11.49%) vs. 46 (16.25%), p = 0.030]. Patients in the 2020 cohort presented with a lower median pre-stroke mRS (p = 0.015) but an increased NIHSS (p = 0.002). We registered a marked decrease in mimic diagnoses compared to the same period of 2019. Statistically significant differences were found between the COVID and pre-COVID cohorts in the time from onset to door (median 99 vs. 88 min, p = 0.026) and from onset to needle (median 148 vs. 126 min, p = 0.036) for thrombolysis, whilst we did not observe any significant delay in the reperfusion therapies themselves (door-to-needle and door-to-groin puncture times). Conclusions: The national lockdown in the UK due to the COVID-19 pandemic was associated with a significant decrease in acute stroke admissions and TIA evaluations at our stroke center. Moreover, a lower proportion of acute stroke patients in the pandemic cohort benefited from reperfusion therapy. Further research is needed to evaluate the long-term effects of the pandemic on stroke care.

INTRODUCTION

The Coronavirus disease 2019 (COVID-19) outbreak began in December 2019 in Wuhan, Hubei province, China, and then spread to Europe in January 2020 (1). The index case entered the United Kingdom (UK) on January 23rd 2020 from Hubei province in China (2). Subsequently, unique measures such as the large-scale application of social isolation, the closing of borders and a nationwide lockdown were adopted in the UK from March 23rd through June 2020 to fight COVID-19. The COVID-19 outbreak has led to a huge reorganization of health care systems worldwide, and unprecedented strategies were rapidly implemented to meet the increasing needs of COVID-19 patients, such as resource allocation, workforce mobilization, and optimization of bed availability (4). As a result, several groups reported that stroke care has suffered from a shortage of services and delays in time-dependent treatments and diagnostic work-up since the onset of the pandemic (3, 5-9). In addition, at the same time, observational studies showed a marked and unexplained reduction in the number of patients admitted to hospital with cardiovascular pathologies such as myocardial infarction and acute ischaemic stroke (3, ...).
However, these preliminary global reports explored mainly the impact of the pandemic on the overall volume of hospital admissions for acute stroke, with no report on the implications for rapid outpatient Transient Ischaemic Attack (TIA) services. In this study we sought to investigate the impact of the national lockdown measures during the COVID-19 pandemic on the rate of admission of stroke patients to the Hyper Acute Stroke Unit (HASU) and the rate of patients evaluated in the rapid outpatient TIA service of a comprehensive tertiary stroke center in London (UK), compared to a pre-pandemic cohort. We also investigated the clinical characteristics of the patients, stroke reperfusion therapies and treatment metrics.

METHODS

This was an observational, retrospective, single-center study based on data from consecutive patients with acute stroke and transient ischaemic attack (TIA) admitted to the Hyper Acute Stroke Unit (HASU) or evaluated in the rapid outpatient TIA service of the Stroke Department, Charing Cross Hospital, Imperial College Health Care Trust London, between March 23rd and June 30th 2019 and between March 23rd and June 30th 2020. The Stroke Department at Charing Cross Hospital is a comprehensive tertiary stroke center and is the North West London (UK) regional lead referral stroke center for mechanical thrombectomy for a population of over 6.4 million people. It cares for over 1,800 patients admitted to the HASU annually and over 900 patients assessed in the rapid outpatient TIA service annually. The 24/7 thrombectomy service treats stroke patients presenting within 6 h of symptom onset, as well as selected patients with wake-up stroke (unclear time of onset) or presenting between 6 and 24 h, using computed tomography (CT) perfusion imaging protocols. Charing Cross Hospital, as the lead referral stroke center for mechanical thrombectomy, accepts potential candidates for mechanical thrombectomy from the hyper acute stroke units of Luton and Dunstable University Hospital, Lister Hospital, Watford General Hospital, Northwick Park Hospital, Royal Berkshire Hospital, Wycombe Hospital, Royal London Hospital and University College London Hospital, and also from the acute stroke units at Chelsea and Westminster Hospital, Hillingdon Hospital and West Middlesex Hospital (Figure 1). Patients are accepted following a telephone consultation between the referring center and the on-call stroke consultant. Charing Cross Hospital began performing diagnostic nasopharyngeal swabs for the SARS-CoV-2 virus from March 3, 2020. Only between 18th March and 30th April 2020 were external stroke patients proven to have COVID-19 not transferred for thrombectomy to our hospital; otherwise, all external referral hospitals were instructed to continue referring patients for thrombectomy, including those with suspected COVID-19, via the same process as before the COVID-19 pandemic. The thrombectomy management board released a new, modified COVID stroke thrombectomy pathway with the aims of protecting frontline health-care staff, reducing the footprint across the hospital and maintaining communication between team members (33). The primary outcome measure was the overall volume of patients admitted to our HASU and of patients evaluated in our rapid outpatient TIA service between March 23rd and June 30th 2020 compared to the same period in 2019.
The secondary clinical outcomes were patient demographics, clinical characteristics, the proportion of acute recanalization therapies performed, stroke treatment metrics and the final diagnosis in the two groups of patients (2020 vs. 2019).

Data Source and Data Collection Process

A database of admissions that is used for reporting to a central UK stroke data bank (Sentinel Stroke National Audit Programme) ensured the consecutive enrolment of eligible patients. Electronic medical records of eligible patients were obtained from the Imperial College Healthcare NHS Trust medical archive. Data on consecutive patients were extracted using a pre-specified case report file that included patient characteristics, including age, vascular risk factors and relevant medical history recorded during the admission. Events were captured by review of the medical notes of all patients admitted to the HASU and referred to the rapid outpatient TIA service of the Imperial College Healthcare NHS Trust between March 23rd and June 30th 2020 and between March 23rd and June 30th 2019.

Definition of Study Variables

Ischaemic stroke was defined as an episode of neurological dysfunction caused by focal cerebral, spinal or retinal infarction (34). TIA was defined as a brief episode of neurological dysfunction caused by focal brain or retinal ischemia, with clinical symptoms typically lasting <1 h and without evidence of acute infarction (35). Intracerebral hemorrhage (ICH) refers to primary, spontaneous, non-traumatic bleeding occurring in the brain parenchyma (36). Cerebral venous thrombosis refers to thrombosis of the dural sinus and/or cerebral veins (CSVT) (37). Stroke mimics included migraine aura, seizures, syncope, peripheral vestibular disturbance, transient global amnesia, functional/anxiety disorder, amyloid spells, subarachnoid hemorrhage, structural brain lesion and paroxysmal symptoms due to demyelination (38). The severity of the index stroke was assessed using the National Institutes of Health Stroke Scale (NIHSS) score on admission. The modified Rankin Scale (mRS) was used to assess patients' initial premorbid status pre-stroke and the level of functional independence at 90 days for the patients who underwent mechanical thrombectomy. Data included the point of first healthcare provider contact (999/F.A.S.T., emergency department or ED, local general practitioners or GP, etc.). Data on known stroke risk factors were collected as follows: age, sex, current cigarette smoking, history of hypertension (blood pressure >140/90 mm Hg at least twice before the acute stroke, or already under treatment with antihypertensive drugs), history of diabetes mellitus (a random venous plasma glucose concentration >11.1 mmol/L, a fasting plasma glucose concentration >7.0 mmol/L, a plasma glucose concentration >11.1 mmol/L 2 h after 75 g anhydrous glucose in an oral glucose tolerance test, HbA1c >48 mmol/mol, or under antidiabetic treatment), history of dementia, history of symptomatic ischemic heart disease (myocardial infarction, history of angina, or a previous diagnosis of multiple lesions on a thallium heart isotope scan or evidence of coronary disease on coronary angiography), history of symptomatic peripheral arterial disease (intermittent claudication of presumed atherosclerotic origin; an ankle/arm systolic blood pressure ratio <0.85 in either leg at rest; or a history of intermittent claudication with previous leg amputation, reconstructive surgery, or angioplasty), previous stroke/TIA and previous ICH.
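Operational definitions like those above are often encoded as explicit predicates when a registry extract is audited. The sketch below (Python; the field names are hypothetical and not taken from the study database) illustrates two of the definitions:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    # Hypothetical field names, for illustration only.
    sbp_readings: list          # pre-stroke systolic BP readings [mm Hg]
    dbp_readings: list          # pre-stroke diastolic BP readings [mm Hg]
    on_antihypertensives: bool
    random_glucose: Optional[float] = None   # mmol/L
    fasting_glucose: Optional[float] = None  # mmol/L
    hba1c: Optional[float] = None            # mmol/mol
    on_antidiabetics: bool = False

def has_hypertension(p: Patient) -> bool:
    """BP >140/90 mm Hg on at least two occasions before the stroke,
    or already under antihypertensive treatment."""
    high = [s > 140 or d > 90 for s, d in zip(p.sbp_readings, p.dbp_readings)]
    return p.on_antihypertensives or sum(high) >= 2

def has_diabetes(p: Patient) -> bool:
    """Random glucose >11.1 mmol/L, fasting glucose >7.0 mmol/L,
    HbA1c >48 mmol/mol, or under antidiabetic treatment
    (the OGTT criterion is omitted in this sketch)."""
    return (p.on_antidiabetics
            or (p.random_glucose is not None and p.random_glucose > 11.1)
            or (p.fasting_glucose is not None and p.fasting_glucose > 7.0)
            or (p.hba1c is not None and p.hba1c > 48))

Coding the criteria this way keeps the cut-offs (>140/90 mm Hg, >11.1 mmol/L, and so on) visible and testable rather than buried in chart review.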
Process time variables were collected prospectively, when applicable, and included door-to-needle time, door-to-computed tomography (CT) time, CT-to-decision time and onset-to-needle time for intravenous thrombolysis (IVT), and door-to-groin puncture time and onset-to-groin puncture time for endovascular thrombectomy (EVT).

Brief Description of the Workflow

Patients presenting with features of acute stroke were evaluated in the hyperacute setting with appropriate neuroimaging and vascular imaging when indicated: CT, computed tomography angiography (CTA), computed tomography perfusion (CTP) of the brain and magnetic resonance imaging (MRI). Patients who fulfilled the relevant indications and had no exclusion criteria underwent acute recanalization therapy. Eligible patients who presented within 4.5 h of ischaemic stroke symptom onset received IVT with recombinant tissue plasminogen activator (r-TPA) (39). Stroke patients were considered for endovascular thrombectomy (EVT) if they met the following criteria: pre-stroke mRS 0-2, NIHSS score 6 or more, Alberta Stroke Program Early CT score (ASPECTS) 5 or more, presentation within 6 h of symptom onset, and an anterior circulation large vessel occlusion or basilar artery occlusion. Selected acute ischaemic stroke (AIS) patients within 6 to 24 h of last known normal could be included if they met the other DAWN or DEFUSE 3 eligibility criteria (39-41). Local GPs in primary care or Emergency Departments (ED) can refer any patient they suspect has had a TIA, but whom they do not consider to require immediate hospital admission, to our rapid outpatient TIA service. These patients, or their caregivers at home, are then contacted by our team (usually by telephone) to arrange a clinic appointment within 24 h of receipt of the referral. Our TIA clinic is organized to provide a standardized assessment to all our patients. On the same day, blood tests, an ECG, brain imaging (usually CT), carotid ultrasound imaging and a clinical assessment by a stroke physician are obtained. Patients are discharged home immediately after the assessment, unless the treating physician believes the patient requires urgent admission to our HASU.

Study Outcomes and Statistical Analysis

Continuous variables are presented as the mean with standard deviation (sd) if values are normally distributed, or as the median with interquartile range (IQR) when they do not follow the normal distribution. We compared the distribution of continuous variables between groups with the t-test or the Wilcoxon rank-sum test as appropriate, whereas categorical values were compared with chi-square tests. Statistical significance was set at 0.05. All analyses were conducted with Stata 15.1 (StataCorp, College Station, TX).

Hyperacute Stroke Care

Between March 23rd and June 30th, 2019, we admitted 514 patients to our HASU, while we documented 353 admissions between March 23rd and June 30th, 2020. This represents a fall in admissions of 31.33%. In Table 1 we show the clinical characteristics of the two groups of patients admitted during the two study periods. There were no statistically significant differences with regard to age and gender distribution. However, patients admitted during the COVID-19 pandemic showed lower pre-stroke mRS scores (p = 0.015) and a higher median NIHSS on arrival (p = 0.002) compared to the patients admitted in the same period in 2019.
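As a sketch of the comparisons just described (Wilcoxon rank-sum for skewed continuous variables, chi-square for categorical ones, significance at 0.05), the following Python/SciPy fragment reproduces the workflow; the study itself used Stata 15.1, and none of the values below are registry data:

import numpy as np
from scipy import stats

# Fabricated onset-to-door times [min] for two cohorts; skewed data,
# so the Wilcoxon rank-sum (Mann-Whitney) test is used, as in the paper.
onset_to_door_2019 = np.array([60, 90, 120, 150, 160, 200, 310, 420])
onset_to_door_2020 = np.array([95, 140, 210, 240, 260, 330, 510, 700])
u, p_cont = stats.mannwhitneyu(onset_to_door_2019, onset_to_door_2020,
                               alternative="two-sided")
print(f"onset-to-door: median {np.median(onset_to_door_2019):.0f} vs "
      f"{np.median(onset_to_door_2020):.0f} min, p = {p_cont:.3f}")

# Categorical comparison (e.g. thrombolysed yes/no by cohort) with chi-square.
# Counts are again illustrative, not the registry values.
table = np.array([[46, 237],    # cohort A: thrombolysed / not thrombolysed
                  [27, 208]])   # cohort B: thrombolysed / not thrombolysed
chi2, p_cat, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f} (dof = {dof}), p = {p_cat:.3f}")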
Moreover, the median symptom onset-to-door time was significantly longer (p = 0.020) and the median length of inpatient stay in the HASU was increased (p < 0.001) in the group of patients admitted between 23rd March and 30th June 2020 compared to the same period in 2019. We documented a statistically significant difference in the first medical provider contact used by the stroke patients during the two study periods (Table 1) (Figure 2). In Table 2 we report the clinical characteristics and the type of acute treatment received by the two groups of patients with ischaemic stroke admitted to our HASU. Patients' clinical characteristics were similar in both groups, but patients in the 2020 cohort more frequently had diabetes (p = 0.031) and less frequently had a past history of dementia (p = 0.042) or intracranial hemorrhage (p = 0.017). Regarding the acute treatment received, patients admitted between 23rd March and 30th June 2020 less frequently underwent IVT alone (p = 0.030) and more frequently were treated with IVT combined with EVT (p = 0.043). There was no significant difference between the two cohorts of patients treated with IVT alone, EVT alone or IVT plus EVT in terms of NIHSS on arrival and 24 h NIHSS. In terms of process time measures (Figure 3), statistically significant differences were found between the COVID and pre-COVID cohorts in the time from onset to door arrival (median 99 vs. 88 min, p = 0.026) and in the onset-to-needle time (median 148 vs. 126 min, p = 0.036) for IVT. We did not observe significant differences in the door-to-CT time, CT-to-decision time or door-to-needle time for IVT, nor in the door-to-groin puncture time or onset-to-groin puncture time for EVT. The median mRS at 90 days for the patients treated with EVT (alone or combined with IVT) did not differ between the two cohorts of patients (p = 0.403); moreover, we did not find any statistically significant difference between the two groups regarding the proportion of patients who received EVT and achieved functional independence (mRS score of 0-2) at 90 days (p = 0.367) (Table 3).

TIA Rapid Outpatient Service

Between 23rd March and 30th June 2019, 180 patients were referred with suspected TIA to our rapid TIA outpatient service, while 136 patients were referred in the same period in 2020. This represents a fall in the number of referrals of 24.44%. Patients' characteristics were similar in both groups, but patients referred during the COVID-19 period less frequently had dementia (p = 0.027) (Table 4). The median symptom onset-to-first medical review time was significantly longer in the group of patients referred between 23rd March and 30th June 2020 compared to the same period in 2019 (median = 3 vs. 0 days, p = 0.002). We documented a statistically significant difference in the first medical provider contact used by the TIA patients during the two study periods (Table 4) (p = 0.020). Finally, we observed a statistically significant difference in the final diagnosis (p = 0.020) (Table 4) after review in our TIA service. The percentage of patients with a final diagnosis of TIA increased from 46.11% in 2019 to 58.22% during the COVID period. Interestingly, our ambulatory rapid access TIA clinic experienced a doubling of the rate of ischaemic stroke diagnoses (9.56 vs. 5.0%), whilst registering a marked decrease in mimic diagnoses (35.29 vs. 48.89%) during the COVID period.
Of note, 32.88% of the 2020 cohort had a final diagnosis of TIA mimic, whilst in the same period in 2019 this figure was 48.89%.

DISCUSSION

In this observational study we explored the impact of the national lockdown measures due to the COVID-19 pandemic on our large regional tertiary stroke center. One element of novelty of our analysis is that it investigated the impact of the COVID-19 outbreak not only on hyperacute stroke care but also on the care provided by the rapid outpatient TIA service of a comprehensive tertiary stroke center. The main finding of our analysis is that our stroke sample presented with a lower median pre-stroke mRS combined with an increase in initial stroke severity during the national lockdown. In addition, our study showed that the proportion of patients with a previous history of dementia, admitted to our HASU or assessed in our rapid outpatient TIA clinic, was statistically significantly lower during the COVID-19 outbreak compared to the same period in 2019. Several centers have described significant differences in severity at presentation of stroke patients, but to date no difference in the degree of pre-stroke disability or dependence in daily activities has been reported (11, 30, 32). Based on the available data, poor pre-admission functional status and dementia are risk factors for in-hospital mortality in patients with COVID-19 (42). Our hypothesis is that vulnerable patients and their caregivers might have intentionally avoided hospital admission due to the risk of COVID-19 infection. Alternatively, epidemic response measures might also represent a contributing factor. Self-isolation reduces social connections, especially in the elderly and more frail population. Isolation can impair the early recognition of stroke symptoms and can lead to delayed notification of emergency services. Indeed, similar to previous studies (31), we observed a significant delay in symptom onset-to-door review time in our sample, which could support this thesis. Another key finding is an overall significant reduction in the hospitalization rate for stroke and in the number of patients presenting with TIA. The rate of thrombolysis delivery was also reduced. Our results are in line with previous observational studies and confirm our preliminary report (43). A survey of 81 Italian stroke centers conducted by the Italian Stroke Organization reported a reduction of about 26-30% in the hospitalization rate for minor stroke and TIA, and of about 50% in acute stroke therapies, in comparison with the same period in 2019 (3, 44). In Germany, the marked decrease in patients with TIA or minor stroke presenting to hospital has led the German Society of Neurology and the German Stroke Society to initiate a publicity campaign on television and in newspapers about the so-called 'phenomenon of empty stroke units', to invite patients to seek medical help (3). Similarly, in the USA (30) and in China (5) there are reports of reductions in acute stroke volumes in hospitals. This concern has also been raised by the World Stroke Organization (45). There are several factors that could potentially explain this phenomenon. First, fear of in-hospital infection and advice from health authorities, the media and doctors probably led patients with mild symptoms to stay at home. Interestingly, our ambulatory rapid access TIA clinic experienced a doubling of the rate of ischaemic stroke diagnoses, whilst registering a marked decrease in mimic diagnoses.
This is in keeping with the hypothesis that milder stroke patients were avoiding hospital admission due to fear of the pandemic and preferred an outpatient setting where accessible. This does, however, delay their presentation and limit access to reperfusion therapies. For this reason, information campaigns educating patients to present early to the ED if they have symptoms suggestive of stroke must be implemented even during the ongoing COVID-19 pandemic. Secondly, primary care, EDs and ambulance services have come under significant pressure due to the volume of COVID-19 patients. This might have induced additional delays and errors during patient triage and transport, thus reducing the proportion of patients eligible for acute treatment. Moreover, the COVID-19 pandemic is having implications for stroke services in all parts of the world in terms of the redeployment of stroke staff and the reallocation of stroke beds to COVID-19 patients (45). Resource management is critical during the pandemic and should be established as quickly as possible. Designated stroke centers should be assigned to maintain resources for the delivery of high-quality stroke care (5). In our center, we observed a significant reduction in the proportion of patients who used the emergency medical service 999/F.A.S.T. after the onset of stroke symptoms compared to the same period in 2019. This is in line with NHS England data showing that during the COVID-19 pandemic there was a general reduction in emergency admissions at the national level (46). This could probably explain the longer delays in symptom onset-to-door time and onset-to-needle time in stroke patients and, consequently, the reduced proportion of patients who presented within the therapeutic time window for thrombolysis during the COVID pandemic. Interestingly, we did not observe any significant delay in reperfusion itself (door-to-needle and door-to-groin puncture times) for IVT and EVT, in line with other centers worldwide (11, 13, 22, 27, 30-32) and with our preliminary report (47). By contrast, Meza et al. (6) and Briard et al. (7) reported an increase in their door-to-needle times, likely secondary to new in-hospital infection control measures for managing stroke patients with suspected COVID-19 that may have delayed acute stroke management. Moreover, our analysis showed that after any reperfusion therapy (IVT, EVT and IVT plus EVT) there was no statistically significant difference in terms of early neurological outcome, although patients treated during the national lockdown had a higher 24 h NIHSS after treatment. Despite the unprecedented demands on emergency healthcare, early multidisciplinary efforts to adapt our acute stroke treatment process kept the stroke quality time metrics close to pre-pandemic levels in our center. Future research with larger samples is needed to evaluate the impact of the delayed presentation of stroke patients during the pandemic on long-term outcomes. Our study has several limitations and strengths. It is limited by its single-center design. Our findings reflect the trend in a defined area, which may not generalize to all international healthcare practices. Although the demographics and clinical characteristics were similar between the cohorts, the possibility of systematic or random bias cannot be excluded. The retrospective design is another limitation. Finally, long-term follow-up was not available for analysis.
The strengths of our study include that this is the first single-center report to assess the impact of the national lockdown due to the COVID-19 pandemic on both the HASU admissions and the rapid outpatient TIA service of a comprehensive tertiary stroke center in London (UK). The strengths of our study also include our sample size and the length of the study periods. In conclusion, the national lockdown in the UK due to the COVID-19 pandemic was associated with a significant decrease in acute stroke admissions and TIA evaluations at our comprehensive stroke center. In addition, a lower proportion of acute stroke patients in the pandemic cohort benefited from reperfusion therapy, specifically intravenous thrombolysis. More minor ischaemic stroke patients presented to our rapid access TIA clinic. These findings support concerns that the current ongoing pandemic may have a negative impact on the acute management of non-COVID-19-related conditions such as acute stroke. Further research is needed to evaluate the long-term effects of the pandemic on population-based acute stroke incidence, hospital stroke and TIA outpatient evaluation volumes, treatment metrics, and long-term outcomes.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

This study was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. Informed consent of subjects was not needed, as the data collected for the study were information collected as part of routine care, and only de-identified data were used in the research.

AUTHOR CONTRIBUTIONS

LD'A and SB: study concept, statistical analysis, drafting, and critical revision of the manuscript. MB, SO, NE, ZB, and BD: data collection and critical revision of the manuscript. OH, SJ, HJ, AM, DK, JK, and MV: critical revision of the manuscript. All authors contributed to the article and approved the submitted version.
2021-02-11T14:21:33.557Z
2021-02-11T00:00:00.000
{ "year": 2021, "sha1": "79a6bb2e2c373899c059d4819b72da0a682adf48", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2021.627493/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "79a6bb2e2c373899c059d4819b72da0a682adf48", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245926454
pes2o/s2orc
v3-fos-license
Reply on RC2

Quantitative evaluations are necessary when discussing accuracy.

Reply: Thank you for your suggestion. Quantitative improvements for M2 and K1 are briefly supplemented in the abstract: "It significantly reduces the total errors of eight tidal constituents (with the exception of N2 and Q1) relative to the traditional explicit tidal scheme, in which the total errors of K1 and M2 are reduced by 21.85% and 32.13%, respectively." (Lines 29-31); and "Compared with Exp1, the total errors of K1 and M2 in Exp2 are reduced by 21.85% and 32.13%, respectively." (Lines 268-269).

[...] and arbitrary on the [...], but I think that this paper deals with all the tidal constituents directly from first principles. In the paper, the advantages of the new method are not theoretically discussed, so the purpose of its introduction is unclear. The traditional method also has advantages, for example, changing tide parameters such as the Love number for each tidal constituent, targeting specified tidal constituents, and so on.

Reply: Thank you for your suggestion. We have revised the paper based on your comments and added a theoretical discussion of the advantages of the two schemes in the summary: "The new tidal scheme has some unique advantages: it can accurately provide instantaneous tidal potentials, since both astronomers and oceanographers have well-established models for determining the exact positions of the Sun and the Moon from the Julian date and for calculating the instantaneous tidal potential from their projected positions. The traditional tidal scheme does not guarantee the correct transient tidal potential at any given time, as described in Section 4.1. The traditional method does not cover all tidal constituents, so it is better suited to studying one specific tidal constituent than the full real tidal process in an OGCM. Besides, in the traditional scheme the tidal potential is introduced in the form of sine waves, so that the climatological state of the tidal potential is zero at any position. The new tidal method does not impose this particular time variation." (Lines 338-347).

#2 I think that the verification approach using an OGCM is not suitable for the purpose of this paper, which is to propose a new tidal scheme. The authors should first verify it with a barotropic tide model. In addition, as the authors wrote, tide models have various tuning parameters, so the accuracy should be compared after tuning the parameters for the two schemes. Alternatively, the authors may explain theoretically the errors inherent in the traditional method, and show that they would be eliminated.

Reply: Thank you for your suggestion. The aims of this study are to propose a new tidal scheme and to investigate its application in an OGCM, so we appreciate your suggestion: we will use a barotropic tide model to verify the tidal scheme in the next step and will follow the suggestion in your comment #3. In addition, the new tidal scheme reasonably simulates the instantaneous tidal potential of the spring and neap tides in Section 4.1. Exp1, applying the traditional tidal scheme, exhibited larger errors in the amplitude and phase of the major tidal constituents compared to Exp2, which uses the new tidal scheme; this is related to the adoption of a fixed amplitude for each tidal constituent in the traditional explicit tidal scheme. Thus, we think the new tidal scheme reduces the errors of the traditional method.

#3 If the authors still want to introduce the new tidal forcing into an OGCM, the introduction method should be reconsidered. As discussed in detail in Arbic et al.
(2010) and Sakamoto et al. (2013, DOI:10.5194/os-9-1089-2013), replacing the barotropic equation in an OGCM with Eq. (10) of the paper disrupts the dynamical balance of the ocean circulation in the original OGCM. There is no point in verifying the model results in such a situation.

Reply: Thank you for your suggestion, which has given us great inspiration. Following it, we conducted experiments that also adopt the practical scheme of Sakamoto et al. (2013). This scheme decomposes the barotropic process, including tides, into a linear component caused by the tides and the original barotropic component that maintains the original dynamical balance in the ocean. The experiment was integrated for one year, initialized from the quasi-equilibrium state (300th year) of the spin-up experiment under the same CORE I forcing fields. We found that the errors (including the phase error and total error) of all eight tidal constituents in Exp2, which uses the new tidal scheme, are smaller than those in Exp1, which applies the traditional tidal scheme (Table R1). When we apply the practical scheme of Sakamoto et al. (2013), the distribution and amplitude of the tidal constituents in Exp1 and Exp2 (Figures R1 and R2 in the uploaded supplementary documents) are very similar to those of the original method. The conclusions regarding the new tidal scheme's improvement of the tidal constituents therefore remain unchanged. Accordingly, we added: "Furthermore, we conduct two experiments (one using the traditional tidal scheme, the other applying the new tidal scheme) that also adopt the practical scheme of Sakamoto et al. (2013); we found that the errors (including the phase error and total error) of all eight tidal constituents in the experiment using the new tidal scheme are smaller than in the one applying the traditional tidal scheme (Table R1 in the uploaded supplementary documents)." (Lines 303-307).

The traditional formula for the eight tidal constituents by Griffies et al. (2009) is incorporated directly into the barotropic equation in many OGCMs, including MOM, MPI-OM, HYCOM and LICOM (Schiller, 2004; Arbic et al., 2010; Müller et al., 2012; Yu et al., 2016). Based on the above two points, we decided to use the same method to introduce the tidal forcing in this paper, i.e., introducing the tidal forcing directly into the barotropic equation of the OGCM. Besides, Arbic et al. (2010) pointed out that global tidal simulations must include parameterized topographic wave drag in order to simulate the tides accurately; we therefore added a drag term to the barotropic equation (Eq. (10) in the paper), including parameterized internal wave drag due to the oscillating flow over topography and a wave drag term due to the undulation of the sea surface (Jayne and St. Laurent, 2001; Simmons et al., 2004; Schiller and Fiedler, 2007), which is the same drag term that MOM uses with the traditional tidal method. Therefore, we did not see evidence of a disruption of the dynamical balance by the introduction of our method.
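To make the point about instantaneous forcing concrete: for a single body, the degree-2 equilibrium tidal potential at a point whose zenith angle to the body is θ is V = (G M a^2 / d^3)(3 cos^2 θ - 1)/2, where a is the Earth's radius and d the instantaneous Earth-body distance; the total potential at a given Julian date is simply the lunar term plus the solar term evaluated at the ephemeris-derived positions. A minimal Python sketch of this idea (the zenith angles and distances below are placeholders; in the scheme discussed above they would come from an ephemeris, which is not shown):

import math

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
A_EARTH = 6.371e6      # Earth radius [m]

def tidal_potential(mass_kg, dist_m, cos_zenith):
    """Degree-2 equilibrium tidal potential of one body [m^2 s^-2]."""
    return G * mass_kg * A_EARTH**2 / dist_m**3 * (3.0 * cos_zenith**2 - 1.0) / 2.0

# Instantaneous total potential = lunar term + solar term.  The distances and
# zenith angles are placeholders; they would be evaluated at the Julian date.
M_MOON, D_MOON = 7.342e22, 3.844e8
M_SUN,  D_SUN  = 1.989e30, 1.496e11
V = (tidal_potential(M_MOON, D_MOON, math.cos(math.radians(30.0))) +
     tidal_potential(M_SUN,  D_SUN,  math.cos(math.radians(60.0))))
print(f"total equilibrium tidal potential: {V:.3e} m^2 s^-2")

Because d and θ are evaluated at the actual epoch, spring-neap and longer-period modulations emerge automatically, instead of being prescribed constituent by constituent as in the traditional explicit scheme.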
We revised the paper by adding a discussion about the dynamical balance: "Introduction of tidal forcing disrupts the dynamical balance of the ocean circulation in the original OGCM (Sakamoto et al., 2013), and Arbic et al. (2010) pointed out that global tidal simulations must include parameterized topographic wave drag in order to simulate the tides accurately; we added a drag term to the barotropic equation, including parameterized internal wave drag due to the oscillating flow over topography and a wave drag term due to the undulation of the sea surface (Jayne and St. Laurent, 2001; Simmons et al., 2004; Schiller and Fiedler, 2007)." (Lines 138-144). Sakamoto et al. (2013) provides a guarantee for the dynamical balance of tidal dissipation; we will use the decomposition method of Sakamoto et al. (2013) to quantitatively compare the effects of tidal forcing on the large-scale ocean circulation and the sensitivity of bottom friction to the tidal component in our future work.

#4 Abstract: Quantitative evaluations are necessary when discussing accuracy.

Reply: Thank you for your suggestion. Quantitative improvements for M2 and K1 are briefly supplemented in the abstract: "It significantly reduces the total errors of eight tidal constituents (with the exception of N2 and Q1) relative to the traditional explicit tidal scheme, in which the total errors of K1 and M2 are reduced by 21.85% and 32.13%, respectively." (Lines 29-31); and "Compared with Exp1, the total errors of K1 and M2 in Exp2 are reduced by 21.85% and 32.13%, respectively." (Lines 268-269).

#5 Section 2: Add some appropriate references for the gravitation of celestial bodies (a textbook?). The description of the variables is also insufficient.

Reply: Thank you very much. When introducing Eq. (1), we added references for the gravitation of celestial bodies: "Assuming that the Earth is a rigid body, the horizontal tide-generating force is (Cartwright, 1999; Boon, 2004):" (Lines 99-100). We also added descriptions of the variables in Section 2, for example: "[...] is the angle between the Moon, the center of the Earth and point X, and [...] are the distance and zenith angle of the Moon relative to an arbitrary position X on the Earth (Fig. 1)." (Lines 106-108).

#6 Please give a supplement or reference for readers.

Reply: Thanks. We have modified this part of the paper and uploaded supplementary documents.

#7 Eq. (9): There is no need to separate cases, since the value of "cos Tm" is the same.

Reply: Thank you very much. We have deleted Eq. (9) according to your suggestion.

#8 L.187 "the negative regions of the spring tide...": The meaning is not clear. What part of Schwiderski (1980) do you refer to?

Reply: This means that the minimum of Exp2 in Fig. 2 is an unclosed ring around the Earth, rather than the two closed minima seen in Exp1 (Figure R3). Schwiderski (1980) pointed out that, for an ideal spherical Earth, the equilibrium tide covering the Earth's surface exhibits an ellipsoidal shape, and the distribution in Exp2 is consistent with the planar expansion of that ellipsoid.

#9

Reply: Thank you for your suggestion. We have added the relationship between the position of the solar projection and the tidal potential distribution of the neap tide, which more clearly illustrates the advantages of the new scheme in simulating the solar projection position. We revised the text to: "There are pronounced differences in neap tides between Exp1 and Exp2 (Fig. 3).
The neap tide simulated in Exp2 shows a larger meridional variation: the positive regions are mainly concentrated in the middle and low latitudes, and the negative regions are mainly concentrated in the high latitudes of the two hemispheres, because the projected positions of the Sun and Moon lie in the middle and low latitudes, resulting in a relatively weaker tidal potential at the high latitudes farther away from the projected positions, which is consistent with the results of Gill (2015). However, Exp1 presents a larger zonal variation (a positive-negative-positive-negative pattern), and the negative regions are concentrated in the middle and low latitudes rather than the high latitudes; the tidal potential in the polar regions is even higher than in the negative regions at low latitudes, which means that the projected position of the Sun is incorrect, lying at high latitudes rather than at low latitudes. Therefore, the new tidal scheme can better represent the position of the Sun compared to the traditional scheme." (Lines 199-210), according to your suggestion.

#10 Section 4.3: A definition of "dynamic sea level" is required.

Reply: Thank you for your suggestion. We have added the following definition: "that is defined as the sea level associated with the fluid dynamic state of the ocean (Griffies and Greatbatch, 2012; Griffies et al., 2016)" (Lines 309-310).

#11 L.313-315 "Therefore, compared to Exp 1...": Why does Exp2 improve? The reason should be discussed.

Reply: Thanks. We have added the following discussion: "This is because Exp2, applying the new formulation of the tidal scheme, can reasonably account for the positions of both the Sun and the Moon relative to Exp1, which produces a higher DSL at low latitudes compared to high latitudes due to the effect of gravity." (Lines 330-333).
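For reference, a common way to quantify per-constituent errors like those discussed in these replies (against a reference product such as TPXO) is the RMS misfit between two harmonics with amplitudes and phases (A_m, φ_m) and (A_o, φ_o): D = sqrt[(A_m^2 + A_o^2)/2 - A_m A_o cos(φ_m - φ_o)], averaged over the ocean. The sketch below (Python) assumes this standard definition, since the manuscript's exact formula for the "total error" is not quoted here, and the numbers are placeholders:

import numpy as np

def rms_misfit(a_mod, ph_mod_deg, a_obs, ph_obs_deg):
    """RMS difference of two sinusoids with given amplitude and phase
    (e.g. one tidal constituent in a model vs. a reference), per point."""
    dphi = np.radians(ph_mod_deg - ph_obs_deg)
    return np.sqrt(0.5 * (a_mod**2 + a_obs**2) - a_mod * a_obs * np.cos(dphi))

# Placeholder M2 amplitudes [m] and Greenwich phases [deg] at a few points;
# a real evaluation would use gridded model output and TPXO on the same points.
obs  = (np.array([0.50, 0.80, 1.20]), np.array([ 40., 110., 250.]))
exp1 = (np.array([0.62, 0.95, 1.05]), np.array([ 55., 125., 235.]))
exp2 = (np.array([0.54, 0.85, 1.14]), np.array([ 46., 115., 244.]))

d1 = np.sqrt(np.mean(rms_misfit(exp1[0], exp1[1], *obs)**2))
d2 = np.sqrt(np.mean(rms_misfit(exp2[0], exp2[1], *obs)**2))
print(f"Exp1 total error {d1:.3f} m, Exp2 {d2:.3f} m, "
      f"reduction {100*(1 - d2/d1):.1f}%")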
2022-01-14T16:45:31.334Z
2022-01-12T00:00:00.000
{ "year": 2022, "sha1": "7d4712c4644a34731d0a804065e00013f37b664b", "oa_license": "CCBY", "oa_url": "https://nhess.copernicus.org/preprints/nhess-2021-304/nhess-2021-304.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "063f0ceee846a721a2ce00bedd41794f1a464348", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
244772466
pes2o/s2orc
v3-fos-license
The impact of land cover changes on temperature parameters in new capital of Indonesia (IKN)

North Penajam Paser Regency and Kutai Kartanegara Regency, which are located in East Kalimantan Province, are the two locations planned as the New Capital of Indonesia (IKN). This has become one of the factors driving land cover change from vegetated land to urban land, which can in turn contribute to temperature changes. In this work, we analyze the impacts of land use change on temperature in the new capital city. We simulate land cover changes using the Weather Research and Forecasting (WRF) model with two scenarios that convert vegetated land to urban land, expanding the urban area by 547% and 1,222%. Scenarios I and II increase the temperature by up to 1.17 and 1.77 °C, respectively. This means that the addition of more urban area results in an increase in temperature. The quantitative values of this connection will be beneficial for urban planners seeking to manage the development of the new capital without a significant impact on the local climate.

Introduction

Changes in land cover also change solar heat radiation, surface temperature, and heat storage in urban areas [1]. Urbanization has also been a major factor in land use and land cover change throughout human history [2]. Urbanization causes an increase in the amount of accumulated rainfall [3]. Balk et al. [4] explain that the projected percentage increase in urbanization in Asia between 2010 and 2030 is 40-60%, and the projected increase in Asia's urban population is the highest in the world, totalling 800 million to 1 billion inhabitants. The rate of population growth in Indonesia is also projected by the Central Statistics Agency (BPS) to continue to increase, reaching 305.6 million by 2035, up from 238.5 million in 2010 [5]. North Penajam Paser Regency and Kutai Kartanegara Regency, located in East Kalimantan Province, are the two locations planned as the National Capital of Indonesia (IKN). The relocation of the national capital will intensify land use activities, whether in the form of building construction, changes in vegetation cover, or daily human activities. The high level of these activities leads to land cover changes, which have the greatest impact on urban temperatures [6]. Planners of developing cities must determine ways to conserve biodiversity as the city develops, without damaging natural habitats [7]. This research studies the impact of land cover changes on temperature in IKN using the Weather Research and Forecasting - Advanced Research WRF (WRF-ARW) model. WRF-ARW is an advanced-generation mesoscale weather simulation system used for operational simulations and atmospheric research. The WRF-ARW model is part of the WRF modeling system, which includes physics settings, numeric/dynamic options, and data initialization [8]. Lim et al. [9], investigating the sensitivity of surface climate to land cover types in the northern hemisphere, found that the surface temperature of barren land and urban areas is higher than that of most other land types. Tursilowati et al. [10] studied land cover changes in the city of Jakarta using Weather Research and Forecasting (WRF) Version 3, modifying grassland land use into urban areas, which expanded the Urban Heat Island (UHI) area by around 43 km² (5%). Furthermore, adding more vegetation reduced the temperature in areas with high temperatures.
Quantitatively, with the addition of 440, 95, and 48% vegetation (grassland) in urban areas, the UHI area (306 K) was reduced by 88, 54, and 48%, respectively. In general, the addition of vegetation in that study reduced the temperature in areas with high temperatures. Wang et al. [11] analyzed the conversion of agricultural land to urban land in the cities of Beijing and Tianjin using WRF 3.3 and found that it resulted in a significant increase in temperature and a decrease in humidity.

WRF configuration

Weather Research and Forecasting - Advanced Research WRF (WRF-ARW) is a next-generation mesoscale numerical weather prediction model designed for both operational and research prediction needs. The basic equations used in WRF-ARW are non-hydrostatic compressible equations, but WRF-ARW also provides hydrostatic options for research under idealized conditions. WRF-ARW is suitable for applications at scales ranging from meters to thousands of kilometers [12]. In this study, the WRF simulations were executed with three domains with horizontal resolutions of 9, 3, and 1 km for domains 1, 2, and 3, respectively (Figure 2). Domain 3 covers the whole administrative boundary of Ibu Kota Negara (IKN). The physics schemes used in the numerical experiments are listed in Table 1. The Global Forecast System (GFS) provides near-real-time analyses that are run four times a day at NCEP. The analyses are available at 26 mandatory (and other pressure) levels from 1000 mb to 10 mb, including the surface boundary layer and some sigma layers. The period taken for this study is 1 to 11 June 2019, using FNL data (fnl_190601_00_00 to fnl_190611_00_00) at six-hour time steps.

Validation of the urban physical parameterization schemes

Validation of the urban physical parameterization schemes uses the four parameterization schemes BULK, SLUCM, BEP and BEM. Table 2 shows that all schemes are able to simulate surface temperature values properly. The model tends to show a cooler value, i.e., to underestimate the observed surface temperature, at almost all observation points, as indicated by negative BIAS values; only the BEM model at AWS APT Pranoto Samarinda shows a warmer value, overestimating the observed surface temperature, as indicated by a positive BIAS value. The best BIAS value is shown by the BULK scheme at AWS APT Pranoto Samarinda, with a value of -0.057. The error between the observational data and the model output is also very small, as shown by the MAE values of the four models at all points, ranging from 0.93 to 1.33. The smallest MAE value is shown by the BULK scheme at the Sepinggan Balikpapan Meteorological Station observation point. The correlation between the observation data and the model output is also very good, as shown by the CORR values of the four models at all points, ranging from 0.75 to 0.87. Based on Table 2, the best of the four configuration schemes (BULK, SLUCM, BEP, and BEM) is the BULK scheme. The BULK scheme is the best scheme for simulating the surface temperature at the three points near the prospective National Capital City (IKN), showing the best BIAS and CORR values and the smallest MAE error values of the four schemes.

Fig. 5a shows the reference simulation (S1). Fig. 5b (S2) shows an increase of approximately 1-2 °C in the IKN area with the addition of 547% urban land, while Fig.
5c (S3) indicates an increase of roughly 2-3 °C in the IKN area with the addition of 1,222% urban land. Changes in air temperature also occur in the area around IKN, but they are not significant. Therefore, it can be concluded that a UHI occurs in the central city. Table 3 shows the spread of the temperature changes at the four observation points (the IKN area, Sepinggan Meteorological Station, AAWS Penajam Paser Utara and AWS APT Pranoto Samarinda). In the IKN area there was an increase in air temperature of approximately 0.83-1.77 °C. The maximum temperature change occurred at 06 UTC, with an addition of 1.77 °C in S3. At Sepinggan Meteorological Station there was an increase in air temperature of approximately 0.01-0.05 °C at 00 UTC and a reduction of approximately 0.01-0.09 °C at 06-18 UTC. At AAWS Penajam Paser Utara and AWS APT Pranoto Samarinda, there was an increase in temperature at 00-06 UTC and a reduction at 12-18 UTC. Generally, in the IKN area there is an increase in temperature at all times, while at the other three points (Sepinggan Meteorological Station, AAWS Penajam Paser Utara and AWS APT Pranoto Samarinda) the temperature increase occurs only from morning until noon; the temperature reduction at these three points occurs from the afternoon to the early morning.

These are the simulation results for the impacts of the land use changes proposed in the Indonesian capital city relocation plan. Nevertheless, the points below have to be considered as limitations when interpreting our modelling results: the parameterization configuration used is the Tropical Parameterization with the BULK configuration as the sf_urban_physics setting, because this configuration performed best among the four configurations compared; the effects of future urban warming on health issues and on the thermal comfort of residents of growing cities should be investigated in future studies to understand the impact of temperature change in this area; and the results of this research are expected to be a consideration for the government of the Republic of Indonesia in carrying out the planned development of IKN without a significant impact on the local climate.

Conclusions

To conclude, this study investigated the impacts of the land use changes proposed in the Indonesian capital city relocation master plan using numerical experiments with WRF modelling. The main findings are as follows. The land use changes in IKN increase the average air temperature in the urban areas in S2 and S3 by up to 2 and 3 °C, respectively. Further, the land use modification with 547% additional urban land (S2) increases the air temperature by approximately 0.64-1.17 °C, while the modification with 1,222% additional urban land (S3) increases the air temperature by approximately 1.03-1.77 °C. The results of this study can still be improved by modifying other forms of land use, so that the model results can be studied further for better and more advanced city planning. Future work is expected to analyze other climate variables (e.g., albedo, accumulated cumulus precipitation, planetary boundary layer height, latent heat flux, etc.).
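The validation statistics used above (BIAS, MAE and CORR between simulated and AWS-observed surface temperature) can be computed in a few lines. A minimal Python sketch (the temperature series are placeholders, not the actual station records):

import numpy as np

def validate(model, obs):
    """BIAS, MAE and Pearson correlation of a model series against observations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)                 # negative => model tends too cool
    mae  = np.mean(np.abs(model - obs))
    corr = np.corrcoef(model, obs)[0, 1]
    return bias, mae, corr

# Placeholder surface temperatures [deg C] for one station; in the study these
# would be WRF output compared against, e.g., AWS APT Pranoto Samarinda.
obs_t   = [24.1, 25.3, 27.8, 30.2, 31.0, 29.4, 26.8, 25.0]
model_t = [23.6, 24.8, 27.1, 29.5, 30.6, 28.8, 26.0, 24.3]
bias, mae, corr = validate(model_t, obs_t)
print(f"BIAS = {bias:+.3f}, MAE = {mae:.3f}, CORR = {corr:.3f}")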
2021-12-01T20:04:52.255Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "dab40b11f4446c32fa160210d0c73f8723a81940", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/893/1/012033", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "dab40b11f4446c32fa160210d0c73f8723a81940", "s2fieldsofstudy": [ "Environmental Science", "Economics" ], "extfieldsofstudy": [ "Physics" ] }
253321416
pes2o/s2orc
v3-fos-license
Ochratoxin A Defective Aspergillus carbonarius Mutants as Potential Biocontrol Agents

Aspergillus carbonarius is one of the main species responsible for the ochratoxin contamination of wine, coffee and cocoa. The main mycotoxin produced by this fungus, ochratoxin A (OTA), is a secondary metabolite categorized as a possible carcinogen because of its significant nephrotoxicity and immunosuppressive effects. A polyketide synthase gene (otaA) encodes the first enzyme in the OTA biosynthetic pathway. It is known that in filamentous fungi, growth, development and the production of secondary metabolites are interconnected processes governed by global regulatory factors whose encoding genes are generally located outside the gene clusters involved in the biosynthesis of each secondary metabolite; one such gene is veA, which forms part of the VELVET complex. Different fungal strains compete for nutrients and space when they infect their hosts, and safer non-mycotoxigenic strains may be able to outcompete mycotoxigenic strains during colonization. To determine the potential utility of biopesticides based on the competitive exclusion of mycotoxigenic strains by non-toxigenic ones, we used A. carbonarius ΔotaA and ΔveA knockout mutants. Our results showed that during both in vitro growth and the infection of grapes, the non-mycotoxigenic strains could outcompete the wild-type strain. Additionally, the introduction of the non-mycotoxigenic strain led to a drastic decrease in OTA during both in vitro growth and the infection of grapes.

Introduction

Mycotoxins are highly poisonous and carcinogenic secondary metabolites produced by several fungi. Aspergillus, Fusarium and Penicillium are the major filamentous fungi associated with the production of mycotoxins that are hazardous to both human and animal health [1]. Aflatoxins, deoxynivalenol, ochratoxin A (OTA), fumonisins, T-2 and HT-2 toxins, patulin and zearalenone are considered the most relevant mycotoxins from a food/feed safety point of view [2]. The best strategy to lower mycotoxin levels in the food chain is to prevent the growth of the fungus and, if it has already grown, to prevent it from producing the toxin. Different physical (temperature and humidity management), chemical (use of chemical antifungal agents, electrolyzed oxidizing water treatment, etc.) and biological (use of antagonistic microorganisms, plant extracts) methods have been proposed [3]. One of the most important strategies for managing fungal diseases is to employ antagonistic microorganisms. The four main mechanisms deployed by biocontrol microorganisms are: (i) competing for resources and available space; (ii) producing antibiotics; (iii) inducing resistance; and (iv) direct parasitism. In addition to bacteria and yeast, it is also feasible to use filamentous fungi as biocontrol agents, and fungi that do not produce toxins and can compete with mycotoxigenic fungi are especially interesting.

Construction and Characterization of Knockout Mutants

In order to study whether a non-mycotoxigenic strain could outcompete a mycotoxigenic strain, we constructed two different knockout mutants in the OTA-producing A. carbonarius ITEM 5010 background [17]: (i) one based on the deletion of the otaA gene, the first gene in OTA biosynthesis (the mutant denoted as Δpks); and (ii) a second one based on the veA gene, a global regulator of secondary metabolism (the mutant denoted as ΔveA). The gene replacement strategy followed is shown in Figure S1.
Gene replacement plasmids pRFHU2-pks and pRFHU2-veA were obtained by the USER-Friendly cloning strategy [25]. The co-cultivation of the Agrobacterium tumefaciens cells carrying the plasmid with the conidia of A. carbonarius led to hygromycin-resistant colonies appearing approximately 3 days after being transferred to selective PDA plates. PCR amplification was used to check the correct insertion of the T-DNA containing the hygromycin-resistant marker that replaced the gene of interest ( Figure S1). To select the knockout mutants with no further T-DNA integrations, the copy number of the integrated T-DNA was determined by quantitative real-time PCR (qPCR) ( Table S1). Three knockout mutants and two ectopic mutants for each construction were selected for further analyses. The phenotypic traits of the wt strain, and the ectopic and knockout mutants regarding growth, conidiation and OTA production, were recorded on PDA plates after 5 days post inoculation (dpi) of growth ( Figure 1). The secondary metabolism was impaired in the ∆veA mutants. It resulted in a brownish pigment observed at the back of the colony compared to the wt strain and the ∆pks mutants ( Figure 1A). Non-statistically significant differences for the growth area were detected among the wt as well as the ectopic and ∆pks mutants ( Figure 1B). However, the growth of the ∆veA mutants was significantly lower than that of the wt and ectopic mutants ( Figure 1E). The conidiation of the three ∆pks mutants and the three ∆veA mutants was lower than for the wt strain ( Figure 1C,F). The ∆veA mutants displayed a marked reduction in conidiation, approximately 50 % compared to the wt strain. The OTA production analysis showed that none of the ∆pks and ∆veA knockout mutants was able to produce the mycotoxin under the assayed conditions ( Figure 1D,G). These results were confirmed by HPLC-MS (data not shown). Construction and Characterization of Knockout Mutants In order to study whether a non-mycotoxigenic strain could outcompete a mycotoxigenic strain, we constructed two different knockout mutants in the OTA-producing A. carbonarius ITEM 5010 background [17]: (i) one based on the deletion of the otaA gene (the mutant denoted as Δpks), the first gene in the OTA biosynthesis; (ii) a second one based on the veA gene (the mutant denoted as ΔveA), and a global regulator of secondary metabolism. The followed gene replacement strategy is shown in Figure S1. Gene replacement plasmids pRFHU2-pks and pRFHU2-veA were obtained by the USER-Friendly cloning strategy [25]. The co-cultivation of the Agrobacterium tumefaciens cells carrying the plasmid with the conidia of A. carbonarius led to hygromycin-resistant colonies appearing approximately 3 days after being transferred to selective PDA plates. PCR amplification was used to check the correct insertion of the T-DNA containing the hygromycin-resistant marker that replaced the gene of interest ( Figure S1). To select the knockout mutants with no further T-DNA integrations, the copy number of the integrated T-DNA was determined by quantitative real-time PCR (qPCR) ( Table S1). Three knockout mutants and two ectopic mutants for each construction were selected for further analyses. The phenotypic traits of the wt strain, and the ectopic and knockout mutants regarding growth, conidiation and OTA production, were recorded on PDA plates after 5 days post inoculation (dpi) of growth ( Figure 1). The secondary metabolism was impaired in the ΔveA mutants. 
Figure 1. Phenotypic traits of the wild-type strain of A. carbonarius ITEM 5010 (denoted as 'wt', black bars), two ectopic mutants (denoted as 'E', gray bars) and three knockout mutants (color-filled bars) of genes otaA (B-D) and veA (E-G), denoted as Δpks and ΔveA, respectively. (A) The front and back colony views of the different strains point-inoculated on PDA plates in the dark at 7 days post-inoculation. Growth area (B,E), conidiation (C,F) and OTA production (D,G) were tested on PDA plates centrally point-inoculated with 5 µL of 1 × 10⁵ conidia/mL. After incubation at 28 °C for 5 days, colonies were scanned to determine the growth area with the ImageJ software. Three plugs were collected from the center, middle and inner parts of the colony for conidia determination and OTA extraction purposes. Values were normalized to those of the wt growing under the same conditions. Error bars represent the standard error of the mean of at least three biological replicates. The bars with different letters in the same panel are statistically different as determined by one-way ANOVA and Tukey's test (p < 0.05). nd: not detected under the assayed conditions.

Different Stresses Did Not Affect the Growth of the Knockout Mutants

In order to investigate the sensitivity of mutants Δpks and ΔveA to several stresses, potato dextrose broth (PDB) supplemented with several concentrations of different stressor compounds was tested (Figures 2 and 3). The assay included high osmolarity, tested with sorbitol; osmotic stress induced by sodium salt (NaCl); and compounds that affect the cell wall and cell membranes, such as calcofluor white (CFW) and sodium dodecyl sulfate (SDS), respectively. Oxidative stress was tested with different hydrogen peroxide (H2O2) concentrations. The wt strain growth was not affected by increasing the H2O2, CFW or sorbitol concentrations, but it diminished at pH 7.5 compared to the more acidic pHs of 3.0, 4.5 and 6.0. Moreover, its growth was severely affected by adding high NaCl and SDS concentrations. Compared to the wt, the Δpks mutants did not show significant differences for any of the tested compounds or pHs (Figure 2).
Figure 2. Growth of the wt strain and the Δpks knockout mutants in PDB supplemented with increasing stressor concentrations, including NaCl (C), SDS (D) and sorbitol (E), and at distinct pHs (F). Growth was determined as the area under the curve (AUC) after 7 days of incubation at 24 °C. The two-way ANOVA, followed by Tukey's test (p < 0.05), was performed to determine the significant growth of the Δpks8a (denoted as *) and Δpks27c (denoted as Φ) knockout mutants compared to the wild type. Values represent the mean ± standard error of the mean of three biological replicates. The experiment was repeated twice.

No significant differences were observed for the ΔveA mutants compared to the wt when H2O2, SDS or sorbitol were added to culture media (Figure 3). However, the addition of CFW at 1125 and 2250 µg/mL reduced the growth of both ΔveA mutants. Although the addition of 1125 mM NaCl modified their growth, the overall curve profile was the same for the three strains. The pH of the medium affected the growth of the ΔveA knockout mutants, which was higher at pH 3.0 and lower at pH 7.5 compared to the growth of the wt.

Figure 3. Growth of the wt strain and the ΔveA knockout mutants in PDB supplemented with increasing stressor concentrations, including NaCl (C), SDS (D) and sorbitol (E), and at distinct pHs (F). Growth was determined as the area under the curve (AUC) after 7 days of incubation at 24 °C. The two-way ANOVA, followed by Tukey's test (p < 0.05, indicated as *), was performed to establish whether significant differences existed between the knockout mutants and the wild type. Values represent the mean and standard error of the mean of three biological replicates. The experiment was repeated twice.
Competitiveness of Mutants Δpks and ΔveA during In Vitro Growth

In order to assess the ability of mutants Δpks and ΔveA to compete with the wt strain, we co-inoculated the wt and the mutants at different ratios (10 wt:1 Δ, 1 wt:1 Δ, 1 wt:10 Δ) on a 96-well microplate. The control wells were inoculated with either the wt or knockout mutants, and they were cultured under the same conditions. The percentage of each strain at the beginning of the experiment was verified by the colony-counting method, by inoculating PDA plates supplemented, or not, with antibiotic (Figure S2A,B). Only the knockout strains were able to develop on the antibiotic-supplemented PDA plates. Each well was collected independently after 7 days of incubation at 28 °C for further analyses. The percentage of each strain under competition was calculated by the colony-counting (Figure S2C,D) and qPCR (Figure 4A,B) methods. Based on the qPCR data, when the mutants and the wt strain were inoculated at the same ratio (1 wt:1 Δ), the mutants were capable of displacing the wt strain, at 14.5% wt: 85.5% Δpks, and 25.6% wt: 74.4% ΔveA (Figure 4A,B). Furthermore, the mutants were able to displace the wt even under the most unfavorable condition (10 wt:1 Δ ratio). The Δpks mutant succeeded in imposing itself on the wt more intensely than the ΔveA mutant.

Regarding OTA production, the wt strain synthesized OTA, but neither the Δpks (Figure 4C) nor the ΔveA (Figure 4D) mutant produced OTA under the assayed conditions. OTA was not even detected in the 1 wt:10 Δpks mixture culture. Under the most unfavorable growth condition (10 wt:1 Δ), OTA production lowered from the expected value of 90.9% to 62.5% and 68.0% when the wt was co-inoculated with the Δpks mutant or the ΔveA mutant, respectively.
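The 'expected' OTA percentages quoted above follow directly from the inoculation ratios, under the simple assumption that only the wt contributes OTA and does so in proportion to its share of the initial inoculum. A minimal R sketch of this arithmetic (illustrative only; it is not taken from the study's analysis scripts):

```r
# Expected OTA (% of the wt-only control), assuming toxin output scales
# with the wild type's share of the initial inoculum
expected_ota <- function(wt_parts, mutant_parts) {
  100 * wt_parts / (wt_parts + mutant_parts)
}

expected_ota(10, 1)   # 90.9% for the 10 wt:1 mutant ratio
expected_ota(1, 1)    # 50.0% for the 1 wt:1 mutant ratio

# Observed values reported for the 10 wt:1 ratio (Figure 4):
observed <- c(pks = 62.5, veA = 68.0)
expected_ota(10, 1) - observed  # OTA suppression beyond inoculum dilution
```

Observed values falling below this expectation indicate that the mutants suppressed OTA production beyond what mere dilution of the wt inoculum would produce.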
Competitiveness of the Δpks and ΔveA Mutants during Grape Berry Infection

The results of the co-inoculation of the wild-type strain and the mutants in grape berries suggested that both mutants were able to lower the amount of OTA produced by the wt strain (Figure 5). Both mutants Δpks and ΔveA were capable of diminishing OTA production from the expected value of 50.0% to 21.1% and 10.1%, respectively, when the wt strain and mutants were equally inoculated (1 wt:1 Δ). Moreover, the presence of the Δpks mutant reduced OTA production from an expected value of 90.9% to 30.7% under the most adverse condition (10 wt:1 Δ).

Discussion

The present study investigated the potential application of non-toxigenic knockout mutants of A. carbonarius for reducing the OTA produced by the ochratoxigenic A. carbonarius ITEM 5010 wt strain. Different commercial products are already available for the application of non-toxigenic A. flavus strains [6,7,26], which supports the notion that competitive exclusion occurs between toxigenic and non-toxigenic strains [27]. Afla-Guard® and AF36 are two commercial biological control products based on non-aflatoxigenic A. flavus strains, which have been approved by the U.S. Environmental Protection Agency for the biological control of A. flavus and aflatoxin contamination in peanut, corn and cottonseed with considerable success [28]. The addition of the atoxigenic A. flavus strain A2085 (AF-X1™) to artificially inoculated maize ears very significantly lowers aflatoxin concentrations [7]. Despite the good results obtained in the control of aflatoxins produced by A. flavus, there are no commercial products available to minimize the OTA produced by A. carbonarius.
Numerous research works on this fungus have focused on natural strains that do not produce OTA and on deciphering the genetic causes of non-mycotoxin production. A common putative cluster of genes involved in OTA biosynthesis has been described in A. carbonarius [11-13], and OTA production is regulated by different biosynthetic genes, such as otaA-AcOTApks and otaB-AcOTAnrps, among others [15-17]. Other general regulatory genes, such as veA and laeA, impact mycotoxin production [16,21]. Previous research has demonstrated that co-inoculation of an OTA-producing A. carbonarius strain with a non-OTA-producing A. carbonarius strain, which harbors a Y728H mutation in the otaA gene, can lower OTA levels [29]. If, as suggested by the authors, failed OTA production in this strain is due to a single point mutation, then there is a chance that this mutation can be reversed and lead back to an OTA-producing strain. It is well known that climate change can affect the production of mycotoxins, and an increase in CO2 on a grape-based matrix often stimulates the development of A. carbonarius and OTA production [22]. With the present climate change scenario, it is critical to ensure that mycotoxins are never synthesized despite changing environmental conditions. In this work, we specifically wished to determine whether the use of non-ochratoxigenic strains obtained by deleting either the otaA or the veA gene could reduce OTA contamination in foods when competing with their parental mycotoxigenic strain.

The deletion of the A. carbonarius otaA gene led to reduced conidiation (Figure 1C) and, as expected, no OTA production was detected during either in vitro growth on rich medium (Figure 1D) or grape berry infection (Figure 5). Other authors have also reported similar results about the Δpks mutants' lack of OTA production [15,16]. Nevertheless, when growing on a minimal medium at 25 °C, Gallo et al. [15] did not observe any phenotypic differences between the wt A. carbonarius ITEM 5010 and the ΔAcOTApks strain. When grown on yeast extract sucrose medium at pH 4.0 and 28 °C, A. carbonarius isolate NRRL 368 and the Acpks-knockout mutant showed comparable growth, sporulation and conidial germination patterns [16]. The different strains utilized in these research works, the incubation temperature and the culture medium are all variables that may influence how the fungus behaves. However, none of the mutants produced OTA, which confirms that this gene is essential for the synthesis of this mycotoxin.

The heterotrimeric VELVET complex VelB/VeA/LaeA is a fungus-specific protein complex that controls development, secondary metabolism and pathogenicity. In A. carbonarius, veA gene deletion resulted in decreased conidiation, which indicates that this gene is important for the strain's ability to produce conidia (Figure 1) [21]. In contrast, VeA acts as a negative regulator of asexual development in Aspergillus cristatus and Aspergillus nidulans, among others [30,31]. Although it would appear that this gene plays a species-specific role in the regulation of conidia formation, its role in the production of several mycotoxins is clear: VeA generally plays a positive role in the regulation of secondary metabolite production, although some secondary metabolites are negatively regulated by it [24,32]. In line with what was previously shown by Crespo-Sempere et al. [21], OTA production was affected by the deletion of the veA gene in A. carbonarius (Figure 1).
This gene also positively regulates OTA biosynthesis in Aspergillus niger and Aspergillus ochraceus [33,34], and it plays a role in the production of aflatoxins in Aspergillus flavus [35] and of fumonisin by Fusarium verticillioides [36]. Light, pH, and osmotic and oxidative stresses are a few examples of environmental stressors that VeA responds to. For instance, when grown in the dark, OTA production in the AcΔveA mutant decreased to a greater extent relative to the wt A. carbonarius than when both were grown under light conditions. Nevertheless, conidia production diminished similarly in both situations compared to the wt [21]. A. carbonarius exhibits different behavior depending on the pH of the environment: at pH 7.0, it produces less OTA than at pH 4.0 [37]. Our results showed that the growth of both the Δpks mutant and the wt was marginally lower under the neutral pH 7.5 conditions than under more acidic conditions (Figure 2). However, the ΔveA mutant developed more quickly under the most acidic situation that we examined but less rapidly under neutral conditions (Figure 3). Previous studies have shown that the disruption of the veA gene reduces tolerance to oxidative stress in A. flavus and A. niger [34,38], but it has no effect in A. fumigatus [39]. Our study demonstrated that the loss of genes veA and otaA in A. carbonarius did not change the behavior of the mutants in a liquid medium with hydrogen peroxide employed as an oxidant inducer. This finding suggests that veA performs no function in preventing oxidative stress in this species.

We investigated how osmotic stressors, such as NaCl and sorbitol, affect growth in A. carbonarius, and whether genes otaA and veA could impact this. Our results revealed that osmotic stress caused by high NaCl concentrations leads to diminished fungal growth. However, the growth of both mutants was unaffected compared to the wt, which is similar to what has been described in A. flavus, where the response to osmotic stress is barely affected by VeA [38].

The parallel differentiation and quantification of A. carbonarius toxigenic and non-toxigenic strains in mixed cultures were completed by qPCR, following a similar strategy to that used to discriminate non-aflatoxigenic biocontrol strains and aflatoxigenic strains of A. flavus by droplet digital PCR and qPCR [40,41]. The qPCR results (Figure 4) were equivalent to those obtained by the traditional plate-counting approach using PDA supplemented, or not, with Hygromycin B as a selective marker for the mutants (Figure S2). Even under the most unfavorable initial inoculation conditions (10 wt:1 Δ), both mutants Δpks and ΔveA were able to displace the wt strain. A low percentage of the non-toxigenic A. carbonarius was able to significantly lower the OTA level in both culture medium (Figure 4) and infected grapes (Figure 5). The overall results described herein indicate that the knockout mutants reduced OTA production to a greater extent than growth. It is worth noting that the co-inoculation of grapes with a low percentage of the Δpks mutant resulted in a more pronounced OTA reduction than with the ΔveA mutant. This suggests that the ΔveA mutant displays lower fitness than the Δpks mutant during grape infection in the presence of the wt strain. This differential fitness could be related to the role of the missing proteins.
As previously mentioned, OtaA is a polyketide synthase that contributes specifically to OTA biosynthesis, while the VeA protein is involved in global regulation and affects several functions, such as sexual development, secondary metabolism, and even pathogenicity. As VeA is a global regulator of secondary metabolism, it could be advantageous to utilize ΔveA mutants if the goal of using biocontrol agents is to eliminate the possibility of any mycotoxin being present, and not only OTA. Further experiments should be run to test this hypothesis. The preliminary results of this study using these mutants as biocontrol agents are promising for the control of OTA. Similarly to what happens with A. flavus [7], it could be feasible to employ non-ochratoxigenic mutants to outcompete toxigenic A. carbonarius strains to reduce OTA contamination in food and feed.

Materials and Methods

Fungal Strains and Culture Conditions

Aspergillus carbonarius ITEM 5010 was used as the wt strain to construct knockout mutants for genes otaA and veA. All the strains were grown on potato dextrose agar (PDA, Difco-BD Diagnostics, Sparks, MD, USA) or PDB, with or without the corresponding antibiotic. Cultures were incubated at 28 °C for 7-14 days. Conidia were scraped off agar with a sterile spatula, suspended in distilled water and titrated with a hemacytometer. Plasmid vectors were cloned and propagated in Escherichia coli DH5α grown in Luria-Bertani medium (LB; bacto tryptone 10 g, yeast extract 5 g, NaCl 5 g, agar 15 g, per liter) supplemented with 25 µg/mL of kanamycin at 37 °C. Agrobacterium tumefaciens AGL-1 was grown in the LB medium supplemented with 20 µg/mL of rifampicin, 100 µg/mL of kanamycin and 75 µg/mL of carbenicillin at 28 °C.

Construction and Verification of the A. carbonarius Knockout Mutants

In order to construct the otaA and veA gene replacement plasmids, the 5' and 3' flanking regions (≈1.5 kb) were amplified from A. carbonarius ITEM 5010 genomic DNA and cloned into the plasmid vector pRF-HU2 [25]. All the primer pairs were designed with the Primer3 software [42]. The amplification of 10 ng of genomic DNA was performed using High-Taq DNA polymerase (Bioron GmbH, Ludwigshafen, Germany) according to the manufacturer's instructions and using the primer pairs veA-VA-O1/veA-VA-O2 and OTApks_O1/OTApks_O2 for the upstream regions, and veA-VA-A3/veA-VA-A4 and OTApks_A3/OTApks_A4 for the downstream regions (Table S2). The cycling conditions consisted of: 94 °C for 3 min; 35 cycles of 94 °C for 15 s, 58 °C for 20 s and 72 °C for 1 min 30 s; and 72 °C for 10 min. The binary vector pRF-HU2 was designed to be used with the USER (Uracil-Specific Excision Reagent) friendly cloning technique (New England Biolabs, Ipswich, MA, USA), as previously described [43]. First, to obtain plasmids pRFHU2-VEA and pRFHU2-PKS (Figure S1), the amplified upstream and downstream fragment regions were mixed with the digested vector pRF-HU2 (ratio of 30:30:120 ng) and the USER enzyme mix, and then incubated according to the manufacturer's conditions. An aliquot of the mixture was directly used for the transformation of E. coli DH5α chemically competent cells. Kanamycin-resistant transformants were screened by PCR using primer pairs RF-5/RF-2 and RF-1/RF-6 (Table S2). Proper fusion was confirmed by DNA sequencing. The transformation of A. tumefaciens was completed by introducing both plasmids independently into electrocompetent A. tumefaciens AGL-1 cells with a Gene Pulser apparatus (Bio-Rad, Richmond, CA, USA) [44].
Then, the final A. carbonarius transformation was completed as previously described [44] by mixing equal volumes of a conidial suspension of A. carbonarius (10⁴, 10⁵ and 10⁶ conidia/mL) and IMAS-induced A. tumefaciens cultures, and spreading them onto paper filters layered on agar plates containing IMAS. After co-cultivation at 24 °C for 40 h, membranes were transferred to PDA plates containing hygromycin B (HygB, 100 µg/mL, InvivoGen, San Diego, CA, USA) to select fungal transformants and cefotaxime (200 µg/mL, Sigma-Aldrich, St. Louis, MO, USA) as an inhibitory growth agent of A. tumefaciens cells. Finally, A. carbonarius ITEM 5010 hygromycin-resistant colonies appeared after 3 to 4 days of incubation at 28 °C, from which monosporic cultures were subsequently obtained.

Genomic DNA extraction was completed as formerly described [43]. PCR analysis of the transformants was used to confirm the disruption of genes veA and otaA: screening the deletion of the veA gene (primer pair veA-VI/veA-VJ), the deletion of the otaA gene (primer pair OTApks-F3/OTApks-R4), and the insertion of the selection marker HygB (primer pair HMBF1/HMBR1) (Table S2). Finally, to check the number of T-DNA copies integrated into the genome of the transformants, qPCR was carried out as previously described [44]. The primer pairs used to determine the T-DNA copy number of genes veA and otaA were designed in the upstream region of the genes (primer pairs veA-VG/veA-VH and OTApks-R4/OTApks-F8, respectively). The gene selected as a reference was the non-ribosomal peptide synthetase (nrps) gene (ID: 132610), which was detected with primer pair AcNRP_F/AcNRP_R (Table S1) [21]. The qPCR reactions were performed as formerly reported [44] in a LightCycler 480 Instrument (Roche Diagnostics, Mannheim, Germany) equipped with the LightCycler SW 1.5 software, and the number of T-DNA copies integrated into the genome of each transformant was calculated according to Pfaffl [45].
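The Pfaffl calculation referenced above expresses relative abundance as an efficiency-corrected ratio of the target amplicon to the reference gene. The R sketch below shows the general form of the formula only; the amplification efficiencies and Ct values are hypothetical, not taken from this study.

```r
# Pfaffl (2001) relative quantification:
#   ratio = E_target^dCt_target / E_ref^dCt_ref,
# where dCt = Ct(calibrator) - Ct(sample) and E is the amplification
# efficiency per cycle (2 = perfect doubling)
pfaffl_ratio <- function(E_target, dCt_target, E_ref, dCt_ref) {
  (E_target^dCt_target) / (E_ref^dCt_ref)
}

# Hypothetical values for a transformant vs. a single-copy calibrator:
ratio <- pfaffl_ratio(E_target = 1.95, dCt_target = 24.1 - 23.0,
                      E_ref    = 1.98, dCt_ref    = 22.5 - 22.4)
round(ratio)  # ~2 would suggest two integrated T-DNA copies
```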
Characterization of the Knockout Mutants: Mycelial Growth, Conidiation and Growth under Stress Conditions

For the phenotypical characterization, 5 µL of conidial suspension (1 × 10⁵ conidia/mL) was placed in the center of PDA plates and incubated for up to 5 days at 28 °C. Plates were scanned daily, and the area of growth was analyzed with ImageJ 1.53q (Wayne Rasband, National Institutes of Health, Bethesda, MD, USA). To determine conidia production, three plugs (5 mm diameter) were collected from the center, middle and border of the colony, and 500 µL of methanol was added to the three plugs. The mixture was shaken in an Omni Bead Ruptor 24 (Omni International Inc., Kennesaw, GA, USA) for 1 min at speed 4. Conidia were counted with a hemacytometer, and the methanol extracts were stored at −80 °C to determine OTA production.

Growth profiles were performed on 96-well PDB plates supplemented with different concentrations of the compounds H2O2, SDS, CFW, NaCl and sorbitol, and at distinct pHs (3.0, 4.5, 6.0 and 7.5). Spores were collected after 7 days of growth from the centrally point-inoculated PDA plates, and the concentration was adjusted using a hemocytometer. To determine growth profiles, the 96-well PDB plates containing 100 µL of medium were inoculated in triplicate at a final concentration of 1 × 10⁵ conidia/mL, and they were incubated at 24 °C for up to 7 days. Absorbance at 600 nm was measured automatically at 2 h intervals with a FLUOstar Omega (BMG Labtech). Growth curve analyses were performed in R 4.1.1. The area under the curve (AUC) used to describe the growth of the fungi under the different stress conditions was calculated at 7 dpi for each replicate based on the spline fit model included in the 'grofit' package [46].
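As one way of picturing this step, the sketch below fits a model-free spline to simulated OD600 readings and extracts the integral (the AUC) as the growth readout; it assumes the 'grofit' package (now archived on CRAN) and uses made-up data rather than the study's plate-reader output.

```r
library(grofit)

# Simulated OD600 readings every 2 h for 7 days (logistic growth + noise)
time <- seq(0, 168, by = 2)
od   <- 1 / (1 + exp(-(time - 60) / 12)) + rnorm(length(time), sd = 0.01)

# Model-free spline fit; 'integral' is the area under the fitted curve
fit <- gcFitSpline(time, od, gcID = "wt_pH4.5",
                   control = grofit.control(interactive = FALSE))
fit$parameters$integral  # AUC used to compare strains across stress conditions
```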
In Vitro Competition Assays

Under controlled laboratory conditions, the competition assays involving the ochratoxigenic A. carbonarius ITEM 5010 strain and the non-mycotoxigenic mutants were performed on 96-well PDB plates. One knockout mutant for each gene (ΔveA10b and Δpks8c) was selected. Spore suspension concentrations were adjusted to 1 × 10⁵ conidia/mL. Five mix ratios of 10:0, 10:1, 1:1, 1:10 and 0:10 (wt strain vs. Δ) were used to inoculate PDB on the 96-well plates. Cultures were grown for 7 days at 28 °C in the dark. Each competition ratio was performed in 15 wells and was repeated twice on different days. At the end of the competition assays, five wells were used to estimate the growth percentage of each competing strain. The plates were then stored at −20 °C for further analyses. Each strain's percentage of growth was calculated by two different approaches: (i) diluting the content of the well with 1 mL of water, counting the conidia concentration in at least five wells and plating 100 µL of 2 × 10³ conidia/mL onto PDA plates and PDA plates containing 100 µg/mL of hygromycin; after 24-48 h at 28 °C, incipient colonies were counted, and the percentages of the wt and knockout mutants were determined for each ratio (only the knockout mutants were able to grow on the PDA plates containing hygromycin); (ii) quantifying fitness by qPCR. At least three wells per ratio were used for DNA extraction. The DNAs of A. carbonarius ITEM 5010 and the knockout mutants were used as a proxy to determine each strain's growth percentage by employing specific primers: (i) AcveA-VI/AcveA-VJ and OTApks-F5/OTApks-R6 to detect gene veA and gene otaA, respectively, in the wt strain; (ii) HPH3F/HPH4R for the hygromycin-resistance marker in the knockout mutants; (iii) AcNRP_F/AcNRP_R to detect the reference gene [21]. The qPCR analysis was performed as previously described. To determine whether the non-mycotoxigenic mutants could inhibit ochratoxin A production by the wt strain A. carbonarius ITEM 5010, OTA was independently extracted from at least three co-inoculated PDB wells in 2 mL tubes containing three 2.7 mm steel beads and 500 µL of methanol with the help of an Omni Bead Ruptor 24 (Omni International, Inc., Kennesaw, GA, USA). Samples were centrifuged and filtered, and the methanol extracts were stored at −80 °C until further analyses.

Artificial Inoculation of Grape Berries

For each strain, nine grape berries were surface-sterilized with 2% sodium hypochlorite for 5 min, rinsed with tap water and air-dried. Conidia, collected by scraping the surface of 7-day-old colonies grown on PDA plates, were resuspended in water and adjusted to 1 × 10⁵ conidia/mL. Five mix ratios of 10:0, 10:1, 1:1, 1:10 and 0:10 (wt strain vs. deletant) were prepared. Aliquots of 5 µL of the conidial suspensions were inoculated on grape berries that had been previously wounded with a needle to a 5 mm depth. After 10 days at 28 °C in the dark, three biological replicates, each containing three grape berries, were collected, frozen in liquid nitrogen and ground to a fine powder. Pooled samples were stored at −80 °C until further use. To determine OTA levels, 0.5 g of frozen tissue was mixed with 250 µL of methanol and three steel beads, as previously described, and the methanol extracts were stored at −80 °C to determine OTA production.

OTA Quantification

OTA was detected by a previously described high-pressure liquid chromatography (HPLC) method [44] with minor modifications. Briefly, methanolic extracts were filtered and injected into the HPLC. OTA detection and quantification were completed with an HPLC system (ACQUITY Arc Sys Core 1-30 cm, Waters Co., Milford, CT, USA) equipped with a Waters temperature control module, a Waters 2475 fluorescence detector (excitation wavelength of 330 nm and emission wavelength of 460 nm) and a Kinetex biphenyl column (4.6 mm × 150 mm, 5 µm, Phenomenex Inc., Torrance, CA, USA). Twenty microliters of each extract were injected. The mobile phase was acetonitrile:water:acetic acid (57:41:2, v/v/v) with isocratic elution for 15 min at a flow rate of 1 mL/min. Working standard solutions were prepared by appropriately diluting known volumes of the stock solution with methanol, and they were used to obtain calibration curves in the chromatographic system. OTA was expressed as a percentage of that of the wt strain. Samples were also analyzed by mass spectrometry with a Waters Acquity UPLC I-Class System (Waters Corporation, Milford, MA, USA) coupled to a Bruker Daltonics QToF-MS mass spectrometer (maXis impact series, resolution ≥ 55,000 FWHM; Bruker Daltonics, Bremen, Germany) using ESI in the positive (ESI(+)) ionization mode. UPLC separation was performed in an ACE-Excel C18-PFP column (3.0 mm × 100 mm, 1.7 µm particle size) at a flow rate of 0.3 mL/min. Separation was completed using water with formic acid at 0.01% as the weak mobile phase (A) and methanol with formic acid at 0.01% as the strong mobile phase (B). The gradient started with 20% of B at 0 min, which was maintained for 3 min. Next, the proportion was set at 40% and linearly increased to 80% until minute 22.50. Eluent B was then set at 20% and kept constant until minute 30. Nitrogen was used as the desolvation gas with a flux of 9 L/min and as the nebulizing gas with a flux of 2.0 bar. The drying temperature was 200 °C and the column temperature was 40 °C. The source voltage was 4.0 kV for ESI(+). The MS experiment was completed by employing HR-QToF-MS, applying 24 eV for ESI(+) and using broadband collision-induced dissociation (bbCID). The MS data were acquired within an m/z range of 50-1200 Da. The external calibrant solution was delivered by a KNAUER Smartline Pump 100 with a pressure sensor (KNAUER, Berlin, Germany). The instrument was externally calibrated before each sequence with a 10 mM sodium formate solution. The mixture was prepared by adding 0.5 mL of formic acid and 1.0 mL of 1.0 M sodium hydroxide to an isopropanol/Milli-Q water solution (1:1, v/v). UPLC-QToF-MS analyses were performed at the Metabolomics Platform of CEBAS-CSIC, Campus Universitario de Espinardo, 30100 Espinardo, Murcia (Spain).

Statistical Analysis

ANOVA was used to determine whether there were significant differences among means. Tukey's test was carried out to determine whether significant (p < 0.05) differences occurred between individual treatments (Statpoint).
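For readers who want to reproduce this style of test, an equivalent analysis can be run in base R (the study itself used Statpoint software); the data frame below is made up for illustration.

```r
# Hypothetical normalized growth areas for four strains, three replicates each
growth <- data.frame(
  strain = rep(c("wt", "ectopic", "pks", "veA"), each = 3),
  area   = c(100, 98, 102, 99, 101, 97, 96, 100, 98, 62, 58, 65)
)

fit <- aov(area ~ strain, data = growth)  # one-way ANOVA across strains
summary(fit)                              # overall test for differences
TukeyHSD(fit, conf.level = 0.95)          # pairwise comparisons (p < 0.05)
```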
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/toxins14110745/s1. Figure S1: Deletion of genes otaA and veA in Aspergillus carbonarius. (A) Physical maps of plasmids pRFHU2-PKS and pRFHU2-VEA. (B) Diagram of the deletion cassette used to replace the target region (gene of interest) in the wild-type (wt) strain by homologous recombination, generating mutants Δpks and ΔveA. The primers used in the construction and analysis of both plasmids are shown. (C) Amplification band patterns of the different polymerase chain reactions (PCR) of the wt ITEM 5010 (wt), two ectopic mutants (preceded by the letter 'e') and three knockout mutants (preceded by the symbol 'Δ'). Figure S2: Competitiveness of knockout mutants Δpks (A,C, pink bars) and ΔveA (B,D, blue bars) against the mycotoxigenic wt strain A. carbonarius ITEM 5010 (black bars) at day 0 (A,B) and 7 days post inoculation (C,D). Competitiveness was determined by counting colonies on PDA and on PDA supplemented with 100 µg/mL of hygromycin. Only knockout mutants can grow on the PDA supplemented with antibiotic. Values are the mean of at least three biological replicates, and error bars represent the standard error of the mean (SEM). Table S1: Estimation of the number of T-DNA copies integrated into the genome of the mutants. Table S2: List of the primers used in this study.
ManyBirds: A Multi-Site Collaborative Open Science Approach to Avian Cognition and Behavior Research

Comparative cognitive and behavior research aims to investigate cognitive evolution by comparing performance in different species to understand how these abilities have evolved. Ideally, this requires large and diverse samples; however, these can be difficult to obtain by single labs or institutions, leading to potential reproducibility and generalization issues with small, less representative samples. To help mitigate these issues, we are establishing a multi-site collaborative Open Science approach called ManyBirds, with the aim of providing new insight into the evolution of avian cognition and behavior through large-scale comparative studies, following the lead of the exemplary ManyPrimates, ManyBabies and ManyDogs projects. Here, we outline a) the replicability crisis and why we should study birds, including the origin of modern birds, avian brains and the convergent evolution of cognition; b) the current state of the avian cognition field, including a 'snapshot' review; c) the ManyBirds project, with plans, infrastructure, limitations, implications and future directions. In sharing this process, we hope that this may be useful for other researchers in devising similar projects in other taxa, like non-avian reptiles or mammals, and to encourage further collaborations with ManyBirds and related ManyX projects. Ultimately, we hope to promote collaboration between ManyX projects to allow for wider investigation of the evolution of cognition across all animals, including potentially humans.

A major hurdle in comparative cognition research today is that large samples of individuals or species can be difficult to obtain by single labs or institutions, due to aspects that include funding and space for animal facilities, as well as the labor and time availability of researchers (Stevens, 2017). More commonly, research groups tend to focus their efforts on one or two model species, often with relatively low sample sizes, to address a range of questions or topics. The results from these studies are then often assumed to reflect the abilities of the species in question. However, such small samples raise questions about the generalizability and replicability of the results (Farrar et al., 2021; but see Smith & Little, 2018). Additionally, they limit our understanding of how cognition may vary within a species or population, factors which are critical to understanding its evolution (Chittka et al., 2012; Völter et al., 2018). Although the diversity of species involved in comparative cognitive research has increased substantially in the 50+ years since Beach's (1950) original commentary, the field still faces similar challenges today. As highlighted by Krasheninnikova et al. (2020), there is still a paucity of direct, large-scale phylogenetic comparisons that would allow for robust inferences to be made regarding the distribution and evolution of particular cognitive abilities. Indeed, most direct comparisons involve just a handful of species (often two), far below the estimated 20 species required to make reliable phylogenetic comparisons (Freckleton et al., 2002). More commonly, results from single research groups are considered alongside comparable data from other labs and species; however, even minor methodological differences between studies can significantly impact results and preclude meaningful comparisons (Barth et al., 2005).
As subjects are often tested repeatedly with a range of tasks, members of the same species housed at different research sites can exhibit striking differences in performance on identical tasks as a result of their testing history or a range of other factors (Stevens, 2017). Consequently, introducing heterogeneity by sampling from more individuals across a range of conditions (such as from different research sites) is a necessary step forward (Farrar et al., 2021). In recent years, there has been a growing push to form large-scale, multi-lab collaborations aimed at collecting truly comparable data across a large number of species, specifically to target these weaknesses (e.g., Krasheninnikova et al., 2020; MacLean et al., 2014; Stevens, 2017). For example, MacLean et al. (2014) organized a large-scale comparative study of motor self-regulation across 567 individuals representing 36 species of mammals and birds with the aim of understanding the evolution of self-control. More recently, there has been the formation of several 'ManyX' projects with dedicated infrastructure for ongoing, collaborative data collection across labs, beginning with the ManyBabies project (e.g., Byers-Heinlein et al., 2020) and the Open Science Collaboration (2015) in response to the replication crisis within psychology. Following on from this, the ManyPrimates project was launched, which has enabled researchers to collect and compile data from over 176 individuals representing 12 primate species (ManyPrimates, Altschul, Beran, Bohn, Call, et al., 2019; ManyPrimates, Altschul, Beran, Bohn, Caspar, et al., 2019). These efforts provide an encouraging way forward and a viable means of addressing issues of sample size, replicability and generalization, and can be extended to other taxa (such as the planned ManyDogs project).

The avian clade consists of over 10,000 living species that cover the globe and represent a vastly diverse range of ecological niches, social structures, life histories and foraging habits (Gill et al., 2021). Moreover, the field of avian cognition has been steadily expanding, particularly over the last two decades, to incorporate research on more species in both field and lab settings. Most notably, comparisons between corvids (and more recently parrots) and primates have revealed striking similarities in cognitive abilities that have generated extensive discussion of the convergent evolution of cognition in these taxa, which last shared a common ancestor some 324 million years ago (Dos Reis et al., 2015; Emery & Clayton, 2004; Lambert et al., 2019; Miller et al., 2019; Van Horik et al., 2012). Despite the growth of avian cognition research, the field still only encompasses a small proportion of all extant bird species; we therefore lack the vital data and species representation needed to enable reliable inter- and intra-species comparisons within birds and other taxa, which would provide reliability in interpretation. For example, the inclusion of more crow species in a wider data set on self-control significantly improved the overall performance of birds compared with apes (Kabadayi et al., 2016; MacLean et al., 2014). As a growing area of research, avian cognition is well placed to undertake similar large-scale collaborative efforts. We are in the process of developing a 'ManyBirds' project (Figure 1), with the aim of establishing an efficient and sustainable Open Science based framework for conducting multi-site studies of avian cognition.
The primary project aim is to provide new insights into the evolution of avian cognition through comparative studies, following the lead of the exemplary ManyBabies (e.g., Byers-Heinlein et al., 2020), ManyPrimates (ManyPrimates, Altschul, Beran, Bohn, Call, et al., 2019; ManyPrimates, Altschul, Beran, Bohn, Caspar, et al., 2019) and ManyDogs projects, such as testing the impact of socio-ecological factors, underlying evolutionary mechanisms and the construct validity of cognitive abilities (ManyPrimates, Altschul, Beran, Bohn, Caspar, et al., 2019). We aim to include as many avian species and subjects as possible from a variety of lab, zoo, farm, private residence and field sites, by pooling resources world-wide and facilitating collaboration, open discussions, expertise sharing, and the development of new study designs and ideas. It should enable improved replicability (e.g., test/re-test) and generalizability, with remote accessibility for all.

Figure 1. The ManyBirds project logo. Note. Credit to Stephan Reber and Emma Arbeau, 2021.

In this article, we focus on: a) why birds, including avian brains and convergent evolution; b) the current state of the avian cognition field, including a 'snapshot' review of avian cognition from 2015-2020, across 30 journals and 550+ articles; c) the ManyBirds project, including project plans, current stage, limitations, implications and future research.

ManyX Projects and the Replication Crisis

ManyX projects have in part been motivated by psychology's replication crisis, and there is growing recognition that such replicability issues might affect animal cognition too (Beran, 2018; Brecht et al., 2021; Farrar et al., 2021; Schubiger, 2019; Stevens, 2017; Tecwyn, 2021). Through a combination of false-positive-inflating research practices and a publication bias against negative results (Bishop, 2019), literatures can soon be populated by statistical effects that are large overestimates (Gelman & Carlin, 2014) and that often fail to replicate in new samples (Open Science Collaboration, 2015). In the human literature, multi-site studies were necessary to provide strong tests of scientific hypotheses with high statistical power and heterogeneous samples of participants, settings and experimenters (Klein et al., 2014, 2018; Würbel, 2000). Such benefits may be heightened in fields with many unique samples, such as animal cognition (Lange, 2019). In these fields, multi-site collaborations offer the opportunity to test the replicability and generalizability of effects, both within- and between-species, in addition to stronger tests of evolutionary hypotheses (ManyPrimates et al., in press; ManyPrimates, Altschul, Beran, Bohn, Call, et al., 2019; ManyPrimates, Altschul, Beran, Bohn, Caspar, et al., 2019). However, perhaps one of the key early benefits of ManyX projects in animal cognition will be in understanding just how much variation occurs across different samples of the same species between test sites, as this will provide indirect evidence for the likely robustness of many previously published effects, and consequently the robustness of the between-species comparisons that are central to comparative cognition. These ManyX approaches are therefore necessary across a range of taxa, including mammals, birds and reptiles.

The Origins of Modern Birds

In contrast to all other ManyX projects to date, the subjects of ManyBirds are not mammals but birds, which are reptiles and the last remaining dinosaurs.
The direct ancestors of modern birds most probably split from non-avian dinosaurs in the middle Jurassic period (Xu et al., 2011). They evolved and diversified for approximately 100 million years before the Cretaceous-Paleogene extinction, which marked the end for all other dinosaurs (Jarvis et al., 2005). The diversity of birds that existed at that time almost completely disappeared as well. Only a few taxa of one of at least five major bird clades survived, the Neornithes (Longrich et al., 2011). It has been suggested that the surviving species were relatively small birds, capable of flight, which primarily lived on the ground and in the undergrowth (Field et al., 2018). After the extinction, these few species gave rise to the diversity of birds seen today. However, the major clades appear to have already originated in the late Cretaceous (Prum et al., 2015). The Neornithes had already split into Palaeognathae (today represented by ostriches, emus, kiwis, tinamous, etc.) and Neognathae, which in turn had split into Galloanserae (the ancestors of fowl, such as chickens and ducks) and Neoaves (all other birds, including passerines). Today, birds are found on all continents, and they occupy almost every niche available to tetrapods. During diversification, they encountered very similar socio-ecological challenges to mammals, and these selected for highly comparable cognitive capacities (Emery & Clayton, 2004; Lambert et al., 2019). Due to the very different evolutionary history of birds, their cognition did not evolve from the same substrate (see section 'The Convergent Evolution of Cognition'). Hence, a large-scale comparison of the cognition of different bird species will allow us to understand the evolution of avian cognition as well as explore how the same selective pressures shape cognition in very different lineages.

Avian Brain

A further justification for focusing on birds is that the structure of the avian brain has significant similarities to, as well as differences from, the mammalian brain. The avian brain is organized in nuclei and does not have a laminated cortex. For a long time, it was thus thought that the cerebrum of birds was primitive and consisted almost entirely of striatal regions. We know today that, as in mammals, the vast majority of it is actually pallium (Jarvis et al., 2005). The different nuclei are also interconnected similarly to the different layers of the mammalian neocortex (Jarvis et al., 2005). The basic 'neuroarchitecture' of birds and mammals appears to share many more similarities than previously thought. They both have orthogonally organized networks of fibres, which radially connect areas of sensory input with regions for motor functions and tangentially associate areas of similar processing levels (Stacho et al., 2020). The neurons themselves are highly conserved as well: several transcription factors are identical (demonstrated by gene expression experiments) across mammals and birds (Briscoe et al., 2018). In short, the pallium (forebrain) of amniotes (vertebrates that undergo embryonic or foetal development within an amnion, including mammals, birds and other reptiles) can be organized in a variety of ways, but the different structures can still enable comparatively complex cognitive capacities (Güntürkün, 2005). For the purposes of the ManyBirds project, it is mainly important to note that the avian brain is not a primitive version of the mammalian one (Jarvis et al., 2005).
The pallium in mammals contains the prefrontal cortex, a part of the frontal lobe, which is the main seat of executive functions, such as working memory, motor self-regulation and decision-making (Diamond & Bond, 2003). The prefrontal cortex is disproportionally large in primates, particularly humans (Brodmann, 1909; Donahue et al., 2018), and is hence generally considered vital for sophisticated cognitive capacities. In birds, which lack a cortex, the equivalent brain region is the nidopallium caudolaterale (NCL). It lies at the other end of the pallium from the prefrontal cortex. To the best of our knowledge to date, the two regions are not, on the structural level, shared by common ancestry (Shubin et al., 2009). However, they strongly resemble each other in connectivity and chemoarchitecture (Güntürkün, 2005, 2012). This is perhaps most apparent when looking at dopamine, the neurotransmitter involved in selecting, maintaining and processing information and in generating corresponding responses. The prefrontal cortex and the nidopallium caudolaterale are both innervated by dopamine via the ventral tegmental area and the substantia nigra, and their neurons are activated by D1 receptor cascades (Durstewitz et al., 1998). Both also receive input from all sensory modalities, and their output gets sent to the motor structures. The nidopallium caudolaterale is well documented to be the seat of executive functions in birds (for reviews see Güntürkün, 2005, 2012; Güntürkün & Bugnyar, 2016). For instance, pigeons (Columba livia) show deficits in delayed alternation and working memory tasks when the area of the NCL is lesioned, and the reduction in performance is proportional to the size of these lesions (Diekamp et al., 2002; Güntürkün, 1997). Additionally, single-neuron responses were measured in the NCL of carrion crows (Corvus corone) during a visual detection task (Nieder et al., 2020). The crows' neuronal responses, shortly before providing feedback, correlated with the choices they made (correct or incorrect) rather than the actual stimulus intensity. This study provided strong evidence for sensory consciousness in birds.

Given the importance of the NCL for sophisticated cognition, it should be expected that birds with larger NCLs would perform at higher levels in cognitive tasks. Such allometries have been known for the prefrontal cortex in mammals for over a century (Brodmann, 1909), and recent research showed that the same might apply for the NCL in birds. Von Eugen et al. (2020) mapped the NCL in chickens (Gallus gallus domesticus), pigeons, zebra finches (Taeniopygia guttata) and carrion crows. They found that the NCL is more derived (denser, with higher parcellation) in Passeriformes than in pigeons, and that the NCL is larger in pigeons than in chickens. The most elaborate version was observed in the carrion crow, a corvid known for particularly sophisticated cognitive capacities. Executive function performance might indeed correlate with the extent of the NCL, at least in Neoaves. In the classic cylinder task, which tests for motor self-regulation, zebra finches performed better than pigeons, and ravens (Corvus corax; close relatives of the carrion crow) outperformed the finches (Kabadayi et al., 2016; MacLean et al., 2014). These results would be expected given the relative size of the NCL in these species (Von Eugen et al., 2020). Bird brains are significantly smaller than mammalian brains, but they are still capable of highly comparable cognitive performance (Seed et al., 2009).
One suggested explanation is that there are disproportionately more neurons in the avian pallium. The number of neurons in this part of the brain might indeed reflect levels of cognitive performance better than absolute or relative brain size (Herculano-Houzel, 2017; Jacobs et al., 2019). The pallium of some birds, such as corvids and large-brained parrots, contains an equal amount of, or more, neurons than the forebrain of much larger primates (Olkowicz et al., 2016). For instance, the pallium of a raven has slightly more neurons than that of a capuchin monkey, although it is only a quarter of the weight (Olkowicz et al., 2016). A ManyX approach focusing on birds enables further exploration of the influence of brain structure on cognitive evolution.

The Convergent Evolution of Cognition

Their evolutionary history, extreme diversity and brain structure make birds ideal candidates for large-scale multi-species comparisons, which in concert with the other ManyX projects can decisively advance our understanding of cognitive evolution. The cognitive capacities of several corvids and parrots have been suggested to rival those of primates in complexity (Emery & Clayton, 2004; Lambert et al., 2019). Similar social-ecological selection pressures may have led to this convergent evolution. However, in order to truly understand how avian cognition evolves, we need to test many more bird orders. Historically, avian research has focused on pigeons and quail, and more recently corvids, parrots and several other Passeriformes; however, other lineages are distinctly underrepresented. For example, our review found no studies on Palaeognathae, one of the two major bird clades, which has five distinct orders. Furthermore, other bird lineages have adapted to unusual niches, such as penguins and flamingos, and we do not know how this affected their cognition. Many bird species are still to be investigated using a comparative approach to provide the opportunity to trace the evolution of cognition. We purposely did not limit this planned project to a specific bird lineage. By including as many species and individuals as possible, in labs, zoos, private residences and the field, the ManyBirds project will be able to test several hypotheses about cognitive evolution and compare behavior and cognitive performance both within (including between sites) and between species. A large sample of different species is crucial for the success of this endeavor. There are many hypotheses aiming to explain the evolution of mammalian, and ultimately human, cognition. Our project could validate or challenge these hypotheses by studying the evolution of cognition in birds, which are evolutionarily distant from mammals. In other words, hypotheses on cognitive evolution which are confirmed in two very different lineages can be considered to have more support.

Methods

In order to gain a broad overview of contemporary avian cognition research, including the species, topics and sites of study, we reviewed the recent avian cognition literature from 2015-2020, across select journals, encompassing 550+ articles (from an initial output of 2,050 articles).
We focused on 30 journals relevant to avian/animal cognition (15 of which overlap with the journals reviewed by ManyPrimates, Altschul, Beran, Bohn, Caspar, et al., 2019) and exported relevant articles from the Web of Science database using a keyword search (terms: ((avian OR bird) AND (cogniti* OR intell* OR psycholog*)) + (((physical OR social OR technical) AND (cogniti* OR intell*)) OR memory OR learn* OR atten*). For five select journals that were likely to yield the highest number of papers related to avian cognition, we exported all published papers within the specified time frame and reviewed titles and abstracts by hand. For both keyword and hand searches, our criteria for inclusion in the review were: i) the paper included data from at least one bird species, ii) the paper included an experimental manipulation, and iii) the topic was cognition or a relevant psychological phenomenon as defined by Shettleworth (2010). Our date criteria of 2015-2020 included articles published online in 2020 but with a print publication date of 2021. We ran two pilot coding trials, in which each of the five coders followed a coding guideline to code a small subset of data. Following each pilot, we reconvened to discuss and confirm the final coding guidelines, before dividing the data set between coders and proceeding to code the data. We began by sifting by title and abstract according to the selection criteria. We then coded each relevant article using the same Excel sheet format (see data availability statement for link to data). We acknowledge that our search methods, like many other primarily keyword-search-based reviews, may not result in a fully comprehensive sample of all possible available avian cognition studies from 2015-2020. However, in using a predetermined keyword search, plus hand-searching a selection of the most relevant journals, we aimed to produce a reliable overview that is sufficient for the main purposes of this review (i.e., focusing on general topic, sample size, sites and species representation). For included studies, we coded multiple types of information (see Table 1).

Results

In total, we extracted data from 562 avian cognition studies from 2015-2020. Five additional articles from 2021 were picked up by our searches, giving a total of 567 studies from 30 different journals, with the five most common being Animal Cognition (112 studies; 19.8% of the sample), Behavioural Processes (52; 9.2%), Journal of Experimental Psychology: Animal Learning and Cognition (43; 7.6%), Journal of Comparative Psychology (39; 6.9%) and Animal Behaviour (34; 6.0%).

Table 1
Information coded for each included study.
Replication: whether the authors defined their study as a replication (yes/no; assessed with a keyword search within the article for the term 'repli*').
Topic: divided into broad topic (physical cognition, social cognition, learning, memory, predisposition, or other) and detailed topic (the specific topic of the paper as assessed by the coder, such as 'Theory-of-Mind' or 'causal reasoning').
Multiple species: yes/no; whether the study tests multiple bird species.
Multiple sites: yes/no; whether the authors describe their study as occurring at multiple sites, usually by providing more than one GPS coordinate or site name.
Non-invasive: yes/no; whether the study involved invasive procedures such as injection, implantation, etc.
Note. If a study was coded using more than one of our categories, the topic reported in the results reflects the first one that was extracted by the coders.
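As a minimal illustration of the keyword screening and inclusion coding described in the Methods above, the sketch below filters hypothetical article records. The regular expressions only approximate the Web of Science query, and all field names, flags, and records are invented for illustration rather than taken from the review's actual pipeline.

```python
# Sketch of the keyword-based screening step. The record fields and
# example data are hypothetical; the actual review used Web of Science
# exports followed by manual title/abstract sifting and hand coding.
import re

# Simplified stand-ins for the review's search terms.
TAXON = re.compile(r"\b(avian|bird)\b", re.IGNORECASE)
COGNITION = re.compile(
    r"\b(cogniti\w*|intell\w*|psycholog\w*|memory|learn\w*|atten\w*)\b",
    re.IGNORECASE)

def matches_search(title: str, abstract: str) -> bool:
    """Mimic the '(avian OR bird) AND (cogniti* OR ...)' keyword logic."""
    text = f"{title} {abstract}"
    return bool(TAXON.search(text)) and bool(COGNITION.search(text))

def passes_inclusion(record: dict) -> bool:
    """Apply the three inclusion criteria from the Methods."""
    return (record["has_bird_data"]            # i) data from >= 1 bird species
            and record["experimental"]         # ii) experimental manipulation
            and record["topic_is_cognition"])  # iii) cognition-relevant topic

# Hypothetical exported records (title, abstract, hand-coded flags).
records = [
    {"title": "Working memory in pigeons",
     "abstract": "An avian delayed matching task...",
     "has_bird_data": True, "experimental": True, "topic_is_cognition": True},
    {"title": "Plumage coloration survey",
     "abstract": "Bird feather pigments...",
     "has_bird_data": True, "experimental": False, "topic_is_cognition": False},
]

included = [r for r in records
            if matches_search(r["title"], r["abstract"]) and passes_inclusion(r)]
print(len(included), "article(s) retained")  # -> 1
```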
Phylogeny of Extant Bird Orders
Note. Numbers in bold indicate the number of instances each order occurred in our sample; the numbers of species (spec) and taxonomic families (fam) are given in brackets. The phylogeny is based on Prum et al. (2015) and Kuhl et al. (2021); the branches indicate inter-order relatedness only, and branch lengths do not represent phylogenetic distance.

Violin Plots of the Sample Sizes of the Four Most Often Studied Orders in Avian Cognition from 2015-2020
Note. The two largest studies, with sample sizes of 459 (Langley et al., 2020) and 388 (Versace et al., 2017), both Galliformes (e.g., pheasants), are not displayed on the graph to improve the visibility of the smaller-sample studies. Dashed line shows the overall median sample size.

Figure 4
The Sample Sizes of the Less-Often Studied Orders in Avian Cognition
Note. Dashed line shows the overall median sample size.

Location and Geography

In our sample, 54 of the 567 (9.5%) studies were conducted across multiple sites. In terms of the type of site, 423 (74.6%) were conducted in laboratories, 99 (17.4%) at field sites, 22 (3.9%) in zoos, 17 (3.0%) on farms, and 6 (1.1%) did not report their sites or were conducted at a mixture of sites (e.g., in the lab and the field). These studies were conducted across 34 different countries, 3 territories/islands (Canary Islands, French Polynesia and New Caledonia), and Antarctica. The four most common countries were the USA (143; 25.2%), UK (71; 12.5%), Canada (51; 9.0%) and Austria (41; 7.2%). Figure 5 displays the distribution of studies across the globe.

Table 2 displays the topics studied in avian cognition research between 2015 and 2020. These topics were roughly evenly distributed between journals, other than learning studies, which were overrepresented in Behavioural Processes (26 of 52 articles; 50%) and The Journal of Experimental Psychology: Animal Learning and Cognition (22 of 43 articles; 51.2%). Figure 6 displays the topics studied for the four most common orders. Of the 567 studies, 42 (7.4%) used invasive procedures, and 41 (7.2%) contained at least one self-defined replication study or reported using the same protocol as a previous study.

Comparison to Primates

We selected a review method similar to that of a recent primate review (ManyPrimates, Altschul, Beran, Bohn, Caspar, et al., 2019) in order to be able to compare findings. We found that, similar to the primate review results, only a small proportion of available bird species (1.41%) were represented in bird studies from 2015-2020 across the 30 journals included in our review. These species were typically from 4 main orders, with relatively small sample sizes, represented by a small number of sites (Table 3).

Outline of Project Plans

With the ManyBirds project, our initial plan is to: 1) establish the project infrastructure, 2) form new collaborations, 3) coordinate and contribute data to ManyBirds Study 1, and 4) coordinate and contribute data to ManyBirds Study 2. We aim to build on this article, as well as a multi-lab neophobia (response to novelty) study in 10 corvid species (crow family) with collaborators across 10 labs worldwide (Miller et al., 2021), to formalize the project and enhance recruitment of collaborators. Furthermore, the success of the related ManyBabies, ManyPrimates and ManyDogs projects indicates that there is general desire and support throughout the research community to contribute and collaborate within these types of projects. The ManyBirds project is a new and promising direction of research.
We outline the initial planned project objectives below.

Establish Project Infrastructure

The ManyBirds infrastructure involves setting up and/or maintaining: a) a website (www.themanybirds.com), b) a Twitter account (@TheManyBirds), c) a mailing list (join via website), d) an email address (manybirdsproject1@gmail.com), e) an open repository (e.g., OSF/Zenodo), and f) Slack channels (join via website). For the research, we will communicate with collaborators via Google Docs (e.g., for all materials including manuscripts), video chats, Slack and email, and will require: a) polls to vote on study focus and plans, b) pre-registrations (OSF), c) a code of practice and project policies, including ethics, authorship and data-sharing guidelines, d) study protocols, including practice videos illustrating procedures, with pilot videos requested from each facility to be checked by the core team before data collection proceeds, e) coding guides, f) analysis plans, and g) manuscript writing. We will actively utilize Open Science practices, including pre-registration and/or Registered Reports, open data and code on repositories alongside relevant publications, pre-prints, and open-access publishing to ensure wide accessibility, as transparency and pre-registration are necessary for effective collaborations (Allen & Mehler, 2019). Aspects that we believe are novel among the ManyX projects include our aim to incorporate automated video analysis (e.g., Reber et al., 2021) to reduce the time investment of manual coding and to control for reliability, and to include captive (lab, zoo, private residence) as well as field studies to improve generalisability and increase within-species comparisons.

Form New Collaborations

We currently have several confirmed collaborators worldwide, including established avian research labs through a previous collaborative study (Miller et al., 2021), as well as access to various bird species for testing through our core team's other new and existing collaborations (including with UK zoos). We plan to utilize the findings of the present review to contact researchers who have actively published on avian cognition in the past 5 years and invite them to collaborate. Additionally, through this article and additional promotion of the project on social media (e.g., Twitter) and via our own networks (academic and zoos), we hope to open up the collaborations more widely. To promote inclusivity, like ManyPrimates, we plan to encourage contributions beyond data collection, to experimental design, data analysis and manuscript writing, thereby enabling researchers without direct access to birds (or even without specific expertise in bird research) to take part. This would allow researchers and others outside the field to be involved, such as statistical/modelling experts and theoretical scientists, including philosophers, to encourage interdisciplinary approaches. We will include a 'sign-up' survey, in which collaborators can provide information such as sample sizes and species, to enable us to prepare a general master list. Each collaborator is required to obtain their own ethical approval prior to starting data collection and must provide evidence of this when submitting data, an approach that has been successfully implemented with ManyPrimates (ManyPrimates, Altschul, Beran, Bohn, Call, et al., 2019).
Collaborators who are not affiliated with an institution (for example, those testing in private settings) will be required to sign an approval form confirming that they adhere to a set of ethical standards established by the organizers prior to data collection.

ManyBirds Study 1: Neophobia in Birds

We are currently preparing a publication on individual repeatability and the influence of socio-ecological factors on neophobia in corvids, encompassing contributed data from 10 corvid labs worldwide (241 subjects, 10 species, 13 groups of birds; Miller et al., 2021). We followed a protocol similar to Greggor et al.'s (2020) 'Alalā study. We tested latency to touch familiar food in the presence and absence of a novel food or novel object, compared with a baseline (familiar food only), and ran the procedure three times to allow for repeatability estimates (Greenberg & Mettke-Hofmann, 2001). We used difference scores (control minus novel-item values) to standardize for unavoidable differences between labs (see the code sketch below). We found individual repeatability and significant effects of several socio-ecological variables, including use of urban habitats, on neophobia (Miller et al., 2021). We are in the process of expanding on this pilot work to form our first ManyBirds study by opening up new collaborations with bird species outside the corvid family, in order to test neophobia in birds with a focus on a) species differences, b) the influence of socio-ecological factors, and c) individual temporal and contextual consistency. This work involves modifying the corvid protocol to be suitable for testing other bird species (e.g., in social settings or at non-academic sites) and introducing the use of automatic video analysis software. Additionally, complementing the existing corvid data set, Mettke-Hofmann et al. (2002) tested object neophobia in 61 parrot species with comparable methods. Therefore, through new and existing collaborations we have suitable data sets available for at least 10 corvid and 61 parrot species, with a number of new data collections and collaborations confirmed. By including a wider selection of avian species, we can also incorporate phylogenetic approaches. These neophobia tests are particularly well suited to a first study because of their low time and labor requirements (3 conditions, 3 test 'rounds', 1 trial per condition per round per day over 3 days, repeated every 2 weeks = 9 test days over ~6-8 weeks). There is also the option to collect only 1 test round if the opportunity to collect more is not available (similar to Mettke-Hofmann et al., 2002).

ManyBirds Study 2: Topic TBC

Following on from Study 1, we will initiate a second study within the project scope, adopting a consensus-based approach in which the research topic and experimental paradigm for each subsequent study are voted on by the collaborative team. This will involve a list of potential suitable topics and paradigms being presented by the core team, such as self-control, short-term memory or innovation (problem-solving), with collaborators invited to vote on their preference. For example, ManyPrimates focused first on short-term memory (ManyPrimates, Altschul, Beran, Bohn, Call, et al., 2019), and will next test delay of gratification and inference by exclusion (https://manyprimates.github.io). We will select a design and an appropriate research question that does not require individual separation for testing, so as to open testing up to socially housed and naïve-to-testing individuals/species (e.g., in zoos and private residences).
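As a minimal illustration of the difference-score standardization described for Study 1 above, the sketch below computes per-round scores for one hypothetical bird. All latencies, variable names, and the averaging step are illustrative assumptions, not the project's actual analysis code.

```python
# Sketch of the Study 1 difference-score standardization
# (control minus novel-item latency). Latencies (seconds) and the
# three-round structure shown here are invented for illustration.
def difference_score(control_latency: float, novel_latency: float) -> float:
    """Latency with familiar food only, minus latency with the novel
    item present; more negative values indicate stronger neophobia
    relative to the bird's own baseline."""
    return control_latency - novel_latency

rounds = [  # one hypothetical bird, three test rounds
    {"control": 12.0, "novel_object": 85.0, "novel_food": 40.0},
    {"control": 15.0, "novel_object": 70.0, "novel_food": 52.0},
    {"control": 10.0, "novel_object": 92.0, "novel_food": 47.0},
]
object_scores = [difference_score(r["control"], r["novel_object"]) for r in rounds]
print(object_scores)                            # per-round object-neophobia scores
print(sum(object_scores) / len(object_scores))  # this bird's mean score
```

Because each bird serves as its own baseline, scores of this kind can be compared across labs that differ in housing, apparatus, or food type, which is the motivation given above for using them.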
As with Study 1, the protocols will be kept simple, with a low time investment to enable cross-site standardization, will account for species size differences, and will measure discrete (e.g., correct/incorrect) and continuous (e.g., latency to action) outcome variables.

Current Stage of the ManyBirds Project

We began setting up the project in February 2021, with the idea following on from the neophobia-in-corvids study (conceived in April 2018) as a means of bringing together many corvid researchers worldwide for a collaborative study (Miller et al., 2021). Since February 2021, we have held regular ManyBirds meetings (with recorded meeting minutes and action points to enable transparency and document progress), communicated with some of the ManyPrimates core team, increased our own core team, set up the first Study 1 team, created a dedicated email address and website, designed a logo, and regularly communicate via Slack channels. In the first stage of the project, we primarily focused on writing this article, planning the project infrastructure, forming new collaborations, and initiating our first ManyBirds study (see website for more details).

Core Leadership Team

Our current core team consists of several early-career researchers with expertise in avian cognition/behavior, including corvids, parrots, ratites and other bird species, as well as experience in cognition/behavioral research in humans, non-human primates, other mammals and reptiles (e.g., Farrar et al., 2020; Garcia-Pelegrin et al., 2020, 2021; Lambert et al., 2015; Lambert & Osvath, 2018; Miller et al., 2016, 2019; Reber et al., 2013, 2021). There are also opportunities for others to join the core leadership team and/or study organising teams, as well as to become general collaborators, as the project develops. Contributing to the project (as a team member or general collaborator) provides an excellent opportunity for upcoming researchers to join an international network of collaborators and establish themselves independently. This is particularly true as the project premise does not require direct access to birds and can be contributed to from any institution. At present, we all have access to birds for data collection through existing collaborations (including in UK zoos) and contribute to the project alongside our regular positions.

Limitations

The primary limitations of the ManyBirds project are also present for most comparative studies; in fact, a benefit of these projects is that such limitations gain visibility. These limitations include unavoidable differences between subjects with regard to testing sites, experimenters, conditions (e.g., testing area size), subject histories (e.g., rearing, prior research experience, training), sample sizes, and individual versus social testing. To address this limitation, we will aim to test within species, comparing behavior between sites for the same species where available. Another potential limitation is that we will compare species with very different physical (e.g., body size) and cognitive capabilities, including motivation, attention and motor abilities. These issues can be addressed with careful study design, such as modifying the size of novel stimuli according to each species' size, and using dependent variables such as choice (e.g., correct/incorrect) or response latency, which lend themselves to a wide range of cognitive tasks as well as cross-species (and cross-taxa) comparisons.
In addition, testing at the same time of day with the first/main 'meal' of the day in Study 1 can ensure (as far as possible) that subjects are equally hungry and motivated. Furthermore, for Study 2 and subsequent studies, we will ensure that ample opportunities for habituation and pre-training (if required) are included in the protocols, which is particularly important for naïve, previously untested species/individuals. Through collecting data across various sites, including labs, zoos, farms, private residences and the field, we can compare behavior between different groups of the same species and directly test for the effect of aspects such as prior history. This is difficult to achieve without such large-scale comparisons. Regarding the likelihood of having different experimenters at each site, we will ask collaborators to submit pilot videos for checking by the core team before they are confirmed to proceed with data collection, to ensure that protocols are administered in as comparable a manner as possible. Another issue with large-scale collaborative projects, especially in relatively small fields like avian cognition, is that the possibility of independent criticism from experts decreases with the size of the collaboration: if a large number of laboratories and individuals take part, the number of willing external critics will likely decrease. Along these lines, one study in biomedical research reported that claims from collaborations with many authors tended to be less replicable than comparable claims from research themes with many smaller independent groups (Danchev et al., 2019). In ManyBirds, we recognize these potential problems and, to counter them, will embrace a variety of transparent and potentially bias-reducing methods (Bishop, 2020; Nosek et al., 2018). These include pre-registration, Registered Reports, and open data and code, as well as a focus on effective communication of the uncertainty in our results. Furthermore, we advocate for a variety of approaches to avian cognition and behavior, recognizing the benefits as well as the potential limitations of both multi-lab collaborative and independent-lab approaches. These include the complementary use of research designs requiring minimal time and labor investment for ManyX projects, enabling testing across a wide variety of species and individuals, alongside the essential, in-depth and often lengthy (in time or number of trials/experiments) designs contributed to the field by independent labs. A final limitation is that the project requires a fairly significant investment of time and effort, particularly from the core team, in organizing and establishing the project, especially in its early stages. The time from study conception to publication is likely to be lengthy (considerably more so than for a single-species study), given the time required to coordinate potentially large teams of researchers in study design, data collection, coding, analysis and manuscript writing. Where possible, we may utilise Registered Reports in order to ensure that methods and data analysis plans are outlined and confirmed early on (prior to data collection), for added clarity to all involved and to smooth the process of writing the final manuscript.
While the scale of ManyX projects does not offer traditional incentives to all participants (e.g., first/last authorship), it does offer benefits in terms of networking, training, and inclusivity for researchers of all experience levels and backgrounds (Byers-Heinlein et al., 2020). The high participation in our initial corvid neophobia project, coupled with the success of ManyBabies and ManyPrimates, shows an encouraging level of support for such collaborative, open-science endeavors.

Expected Project Outcomes and Future Directions

ManyX projects have proven to be a valuable tool for large-scale comparisons, but they require significant management, particularly at the beginning. We are working to take the ManyBirds project from a strong idea to an established infrastructure for fully collaborative and open avian cognition/behavior research, and to the completion of the first studies. These outcomes would place our team and the project in an excellent position to attract future funding to support subsequent studies, through demonstrated publications and collaborations. We will actively encourage collaborators to engage in science dissemination via conference and meeting presentations, social media and published media output. We hope to be able to arrange a workshop for collaborators in future, if funding can be secured. The project has the potential for far-reaching consequences with regard to advancing the field of comparative cognition and animal behavior, by promoting transparency and reliability and by providing the data necessary to understand the evidential value of previously published single-site studies, which currently dominate the avian cognition literature (Farrar et al., 2020). It will allow for a wider focus on research questions encompassing the evolution of avian cognition and behavior, such as testing the drivers of cognition in relation to socio-ecological factors like diet, sociality and habitat use, as well as comparative, phylogenetic, developmental and longitudinal approaches. We also hope to encourage others to establish similar projects in other taxonomic groups, such as reptiles, by outlining the process of establishing this project, and to promote collaborations between ManyX projects to enable investigation of the evolution of cognition more generally, i.e., across birds and mammals, as well as potentially in humans. Furthermore, there have been recent calls for integrating cognition into applied animal conservation and welfare (Greggor et al., 2014). Our project facilitates collaborations with avian facilities holding hugely under- or unrepresented species, often in small numbers; zoos in particular are a key under-utilized resource, as highlighted in our present review. ManyBirds, by nature, encourages contributions regardless of an individual facility's sample size, as samples can be increased by combining data across facilities. In zoos, species are often endangered and subject to active conservation efforts, and are therefore ideal candidates for gathering cognitive data for application to conservation actions (Greggor et al., 2014). Similarly, welfare can be improved by providing published cognitive data, as in farm animals like chickens (Marino, 2017). Finally, voluntary participation in cognitive experiments can be enriching and mentally stimulating for captive animals (Clark, 2017; Hopper, 2017).
Conclusion

The formation of large collaborative projects such as ManyBabies, ManyPrimates, and ManyDogs indicates a shared desire within the scientific community to collaborate towards a common goal of inquiry. As outlined here, birds occupy a diverse range of ecological, geographical, and social niches. As such, the investigation of avian cognition at a comparative level provides insight into the diverse evolutionary pressures that might have selected for different behavioral and cognitive adaptations. Moreover, comparison across a wider range of distinct avian species might shed light on proposed theories of the convergent evolution of intelligence among different taxa. To do so, however, larger sample sizes and wider coverage of the different species within the taxa are necessary. Our review of the current state of the field revealed that while 141 species of birds were represented, the median sample size was only 14 subjects. Moreover, comparisons between different bird species were rare, with only 10.9% of the studies directly comparing more than one species using the same methodology. The ManyBirds project outlined here provides an optimal infrastructure for collaborative testing and theorizing about avian cognition and behavior by encouraging input from established labs and field researchers, while also providing zoos, farms and bird holders in private collections (including private homes) the opportunity to collaborate in this endeavor. The implementation of this collaborative infrastructure will improve the reliability of the data collected, by offering larger sample sizes and a diverse array of avian species from which to obtain data, and by stipulating and systematizing the methodology used to obtain it. Ultimately, the creation of a ManyBirds infrastructure will provide unparalleled insight into the evolution of avian cognition by nurturing collaboration, replicability, and data openness. This project will aid the avian cognition and behavior field by providing the means to acquire larger comparative datasets from which to draw wider scientific inferences.

Conflicts of Interest: The authors declare no conflict of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.

Ethics: This review required no animal testing, so ethical approval was not required.
Alcohol abuse in Iranian adolescents: A mediational model of parental monitoring and affiliation with deviant peers

Aims: This study aimed to determine the attitudes toward alcohol abuse among students in Tehran and to develop and test a model of the relationships among parental monitoring and affiliation with deviant peers as predictors of youth alcohol abuse. Materials and Methods: In this cross-sectional study, 1266 adolescents were recruited from high schools in Tehran, and three scales, measuring alcohol abuse, parental monitoring, and adolescent affiliation with deviant peers, were completed. Data were analyzed using the independent-samples t-test, Pearson correlation coefficients, and structural equation modeling. Results: The results of this study indicated that 7.4% of individuals had a positive attitude toward alcohol abuse. The percentage with a positive attitude among males was nearly 2 times that among females. The study model was confirmed and explained 0.42 of the variance in attitudes toward alcohol abuse. Moreover, affiliation with deviant peers had a mediating role in the relationship between parental monitoring and attitude toward alcohol abuse. Conclusion: According to the results, parental monitoring and affiliation with deviant peers can explain alcohol abuse among adolescents. Therefore, it is suggested that these factors be included in prevention programs aimed at reducing alcohol abuse.

having had a whole drink of alcohol, and 43% report drinking within the past 30 days. Furthermore, previous research has shown that 9.8% to 25.7% of Iranian adolescents have used alcohol during their lifetime. [21,22] Moreover, boys engage more than girls in high-risk behaviors such as drinking alcohol. [21-24] Considering these rates, it is vital to address adolescent alcohol abuse. Older adolescents report a higher tendency toward alcohol abuse. [25,26] Studies have shown that in mid-adolescence, people tend to drink more as a way of adapting to the risk factors associated with alcohol drinking. [9] Theories that focus on socialization hold that principal social contexts such as family, school, and peers play a major role in the acquisition of normal and abnormal behavior. [27] Among family process variables, parental monitoring has been identified in the literature as one of the proximal determinants of the early development and maintenance of antisocial and high-risk behaviors in children and adolescents. [28] Parental monitoring means that parents are aware of their child's friends and the places where he or she spends time; [29] it also involves attending to and tracking the locations and activities of the adolescent. [28] In research, parental monitoring is usually defined as parents' knowledge, or adolescents' perceptions of their parents' knowledge, of the child's activities and friends. [30] It is well documented that poor parental monitoring is related to adolescents' alcohol risk-taking. [31-34] Adolescence is a period in which the child develops relationships with peers and enters new social contexts and activities. [4] To fulfill intimacy needs, adolescents tend to spend their time out of the home with friends. [35] Brendgen et al. [36] considered parental monitoring an important factor in adolescents' participation in high-risk behaviors and affiliation with deviant peers.
Affiliation with deviant peers means associating with adolescents who engage in risky behaviors such as weapon carrying, offending against others, and drug abuse. [37] According to social learning theory, affiliation with deviant peers can give rise to problem behaviors in adolescents. [38] Recent research has shown that adolescents who associate with deviant peers tend to engage in a variety of alcohol risk behaviors. [4,23,24,34,39,40] Adolescents who are monitored poorly are more likely to participate in risky behaviors [9] and to affiliate with deviant peers. [41] Problem behavior theory and other available models of high-risk behaviors suggest that peer affiliation mediates the relationship between parental monitoring and adolescent problem behaviors. [37] In other words, poor parental monitoring can lead to high-risk behaviors through affiliation with deviant peers. [9,37,42] However, previous studies have not adequately considered the effects of parental monitoring and affiliation with deviant peers on alcohol abuse in adolescents. This study aimed to determine attitudes toward alcohol consumption among students in Tehran and to develop and test a model of the relationships among parental monitoring and affiliation with deviant peers as predictors of youth attitudes toward alcohol use.

Materials and Methods

The study was part of the Survey Project on Alcohol Abuse and Other High-Risk Behaviors among Adolescents. A cross-sectional study was carried out with a sample of 1266 adolescents (737 girls and 529 boys) recruited from high schools in Tehran, Iran. The inclusion criteria were age between 14 and 18 years and residency in Tehran. Participants were selected through a cluster sampling method. In the first step of sampling, Tehran was divided into 5 regions (north, west, center, east, and south). Then, some districts were randomly chosen from each of these regions. Subsequently, using the list of high schools located in these districts, the sample was selected. All participants were informed about the goals of the survey and completed individually administered questionnaires under regular supervision to provide reliable and valid data. The following instruments were used to collect data.

Alcohol abuse scale

The alcohol abuse scale (AAS) is a 4-item self-report scale that assesses adolescents' attitudes toward alcohol abuse. [43] Because of cultural limitations, it was not feasible to assess alcohol use directly. Zadeh-Mohammadi et al. [43] confirmed the validity of the scale through exploratory factor analysis. Moreover, originally validated with college students, the AAS has acceptable internal consistency (α = 0.91). [43] In this study, the Cronbach's α of the scale was 0.83.

Parental monitoring scale

The parental monitoring scale is a 7-item self-report instrument that previously achieved a Cronbach's α of 0.81. [44] Parental monitoring items included questions about the adolescent's whereabouts, friends, and activities. The possible responses ranged from "never/unimportant (0)" to "always/very important." [28] The validity of the Persian version has been confirmed by Alboukordi et al. [44] For this study, Cronbach's α was 0.70.

Adolescent affiliation with deviant peers scale

The adolescent affiliation with deviant peers (AADP) scale is an 8-item scale used to ask adolescents about deviant behaviors committed by their peers, such as drug and alcohol use, carrying a knife or gun, and physical fighting during the past 6 months. [37]
The possible responses ranged from "none of them (0)" to "all of them (4)." A total score was computed for each adolescent, with higher scores indicating greater affiliation with deviant peers. The reliability and validity of the Persian version of the scale have been confirmed in Iran. [44] In the present study, the Cronbach's α of the scale was 0.82.

Statistical analysis

Attitudes toward alcohol abuse were summarized using descriptive statistics. Moreover, latent variable analyses were performed using structural equation modeling, which compares a proposed hypothetical model with the observed data. The closeness of the hypothetical model to the empirical data was evaluated statistically and is presented in Figure 1.

Results

Adolescents' attitude toward alcohol abuse

According to the AAS, 7.4% of all individuals were at high risk in terms of alcohol abuse. The percentage with a positive attitude among males was nearly 2 times that among females (10.39% vs. 5.29%, χ² = 23.570, P < 0.001).

Sociodemographic variables analysis

The participants were 529 male and 737 female adolescents. The mean and standard deviation (SD) of age were 16.07 and 1.04 years for males and 16.04 and 1.22 years for females, respectively. All participants were high school students, and 4.5% of them reported a disrupted family structure. The results of the independent-samples t-test for the study variables are shown in Table 1. These findings showed that males and females differed significantly in scores on the AAS (P < 0.001), parental monitoring (P < 0.001), and affiliation with deviant peers (P < 0.001). Table 2 shows the means and SDs of the study variables and their correlations. As the table shows, there is a positive and significant relationship between AAS and AADP, while PM is negatively correlated with both AAS and AADP.

Model testing

Our findings confirmed the proposed model based on the mediating role of AADP in the PM-AAS relationship. Considering the obtained error index, this model explains 42% of the AAS variance. Having confirmed the mediating role of AADP, the goodness of fit of the model was investigated using the Chi-square test and the adjusted goodness-of-fit index (AGFI). The AGFI equaled 0.98, and the non-significant Chi-square indicated good model fit. Table 3 shows all of the investigated goodness-of-fit indices (GFIs). Schreiber et al. [45] argue that a model shows good fit if the NFI, non-normed fit index, comparative fit index, GFI, and AGFI exceed 0.95, the root mean square residual index is near zero, and the SRMR and root mean square error of approximation indices are smaller than 0.08 and 0.06, respectively. Therefore, following Schreiber et al., [45] the current model shows good fit. Finally, to investigate the significance of the indirect effect of PM on AAS through AADP, the bootstrapping method was applied via a macro. [46] The results are shown in Table 4. As shown in Table 4, both the lower and upper bounds of the bootstrap confidence interval are negative. These results show that the indirect effect in this model is significant and that the relationship between PM and AAS is therefore mediated by AADP.
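As a schematic illustration of this percentile-bootstrap test of an indirect effect, the sketch below simulates PM, AADP, and AAS data and bootstraps the product of the two regression paths. The simulated effect sizes, sample size, and number of resamples are all illustrative assumptions, not the study's actual data or macro.

```python
# Percentile bootstrap of the indirect effect PM -> AADP -> AAS
# (product of path a and path b), on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
pm = rng.normal(size=n)
aadp = -0.5 * pm + rng.normal(size=n)              # path a (negative, as in Table 2)
aas = 0.6 * aadp - 0.1 * pm + rng.normal(size=n)   # path b plus a direct effect

def indirect(pm, aadp, aas):
    # a: slope of AADP regressed on PM
    a = np.polyfit(pm, aadp, 1)[0]
    # b: slope of AAS on AADP, controlling for PM (multiple regression)
    X = np.column_stack([np.ones_like(pm), pm, aadp])
    b = np.linalg.lstsq(X, aas, rcond=None)[0][2]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                    # resample cases with replacement
    boot.append(indirect(pm[idx], aadp[idx], aas[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
# A 95% CI excluding zero (here both bounds negative, since a*b = -0.3)
# is the criterion for a significant indirect effect used in the text.
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```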
Discussion

The purpose of this study was to investigate attitudes toward alcohol abuse among students and the role of parental monitoring and affiliation with deviant peers in predicting alcohol abuse. According to the findings of this research, 7.4% of the adolescents were at high risk in terms of alcohol abuse. This can be attributed to factors such as the psychosocial characteristics of adolescents [1] and peer influences. [27] Moreover, drug and alcohol abuse can be used by teenagers to cope with stress. [47] This study, consistent with Kelly et al., Kristjansson et al., and Mohammadkhani, also showed that alcohol abuse was more frequent among boys than girls. [21,23,24] In explaining these results, factors such as gender roles, different expectations of girls, [48] and parents' closer monitoring of girls [28] should be considered. Our results are similar to those of Brendgen, Vitaro, and Bukowski, [36] Paschall et al., [37] and Meldrum et al., [38] who showed that affiliation with deviant peers can predict high-risk behaviors. Consistent with previous research, spending time with deviant peers has a direct effect on high-risk behaviors and is itself related to parental monitoring. [36,42] The results also support the idea, suggested by social learning theory, that relationships with deviant peers are an important factor in the development of high-risk behaviors in adolescents. [38] This study showed that parental monitoring was a major factor in adolescents' alcohol abuse, both directly and through affiliation with deviant peers. Previous research suggested that parental monitoring is an important deterrent of alcohol abuse; [32-34] hence, this study supported this prediction. Consistent with Brendgen et al., [36] parental monitoring could indirectly predict affiliation with deviant peers. Dishion et al. [49] demonstrated that a lack of parental monitoring can foster adolescents' affiliation with deviant peers by providing children with the opportunity to meet them. In sum, we found that parental monitoring and affiliation with deviant peers were significant predictors of attitudes toward alcohol abuse; furthermore, parental monitoring indirectly influenced attitudes through affiliation with deviant peers. Regarding the results of the present study, the theoretical model proposed by Paschall et al. [37] is confirmed. In line with previous research, it can be concluded that the effect of parental monitoring on alcohol abuse is mediated through affiliation with deviant peers. [9,37,42]

Limitations of this study are worthy of discussion. Given cultural limitations, we investigated alcohol consumption indirectly, which may affect the results of this study. Another limitation is that measurement of the research variables was based on participants' self-reports, and there was no independent method for testing the validity of their responses. Furthermore, this study was carried out in Tehran, and its results should be generalized with caution. Future studies would probably benefit from using interview and observational data to help researchers understand the connections between adolescent alcohol abuse and its related variables in greater depth.

Conclusion

Generally speaking, the results of this study show that parental monitoring and affiliation with deviant peers largely explained attitudes toward alcohol abuse among adolescents. Therefore, prevention efforts aimed at reducing risky alcohol drinking should incorporate these factors. In fact, the results suggest that prevention efforts beginning earlier (i.e., at the start of high school) may be warranted.

Conflicts of interest

There are no conflicts of interest.
The Anti-helminthic Compound Mebendazole Has Multiple Antifungal Effects against Cryptococcus neoformans

Cryptococcus neoformans is the most lethal pathogen of the central nervous system. The gold standard treatment of cryptococcosis, a combination of amphotericin B with 5-fluorocytosine, involves broad toxicity, high costs, low efficacy, and limited worldwide availability. Although the need for new antifungals is clear, drug research and development (R&D) is costly and time-consuming. Thus, drug repurposing is an alternative to R&D and to the currently available tools for treating fungal diseases. Here we screened a collection of compounds approved for use in humans, seeking those with anti-cryptococcal activity. We found that the benzimidazoles constitute a broad class of chemicals inhibiting C. neoformans growth. Mebendazole and fenbendazole were the most efficient antifungals, showing in vitro fungicidal activity. Since previous studies showed that mebendazole reaches the brain in biologically active concentrations, this compound was selected for further studies. Mebendazole showed antifungal activity against phagocytized C. neoformans, affected cryptococcal biofilms profoundly and caused marked morphological alterations in C. neoformans, including reduction of capsular dimensions. Amphotericin B and mebendazole had additive anti-cryptococcal effects. Mebendazole was also active against the C. neoformans sibling species, C. gattii. To further characterize the effects of the drug, a random C. gattii mutant library was screened, which indicated that the antifungal activity of mebendazole requires previously unknown cryptococcal targets. Our results indicate that mebendazole is a promising prototype for the future development of anti-cryptococcal drugs.

INTRODUCTION

Cryptococcus neoformans is a yeast-like pathogen that causes extensive brain damage in immunosuppressed individuals (Colombo and Rodrigues, 2015). The fungus reaches the lungs of humans after inhalation of environmental cells. In the immunosuppressed host, C. neoformans efficiently disseminates to the brain and causes meningitis (Kwon-Chung et al., 2014). Cryptococcal meningitis is a global problem resulting in thousands of deaths annually (Park et al., 2009). Most cases occur among people with HIV/AIDS. Poor and late diagnosis, limited access to antifungals and drug resistance are directly associated with the high fatality rate of cryptococcosis, especially in developing countries (Rodrigues, 2016). The standard antifungal regimen for cryptococcal meningitis is a combination of amphotericin B with 5-fluorocytosine (Krysan, 2015). Amphotericin B is nephrotoxic and must be administered intravenously (Sloan et al., 2009; Micallef et al., 2015), which demands considerable medical infrastructure. A 15-day intravenous treatment with liposomal amphotericin B is estimated to cost from €10,000 to €20,000 in Europe (Ostermann et al., 2014), and 5-fluorocytosine is not widely available outside wealthy areas (Krysan, 2015). As an alternative, fluconazole is frequently used, although it is associated with poorer outcomes and relapses (Sloan et al., 2009). In South Africa, more than 60% of people with culture-positive relapsed disease had fluconazole resistance (Govender et al., 2011). Hence, the need for new anti-cryptococcal therapies is clear. In this context, a new class of antifungals targeting the synthesis of fungal sphingolipids has recently been described, but its efficacy in humans is still unknown (Mor et al., 2015).
Drug repurposing has emerged as an alternative to the costly and time-consuming processes of drug discovery and development (Nosengo, 2016). In the field of antifungal development, sertraline, an anti-depressive agent, has been reported to be an in vitro and in vivo fungicidal compound that, in combination with amphotericin B, improves the outcome of cryptococcosis (Zhai et al., 2012; Rhein et al., 2016). Sertraline is now under a phase III trial to determine whether adjunctive therapy will lead to improved survival (ClinicalTrials.gov, 2016). In this manuscript, we aimed to find anti-cryptococcal activity in a collection of drugs previously approved for use in human diseases. Our results are in agreement with the notion that benzimidazole-like compounds are interesting prototypes for the future development of efficient anti-cryptococcal agents interfering with fungal morphology, biofilm formation, cellular proliferation and intracellular parasitism. This study also supports the hypothesis that the antifungal activity of mebendazole might involve previously unknown cellular targets.

MATERIALS AND METHODS

Screening for Antifungal Activity in a Compound Collection

The National Institutes of Health (NIH) clinical collection (NCC) was screened for antifungal activity against C. neoformans. The NCC consists of a small-molecule repository of 727 compounds arrayed in 96-well plates as 10 mM solutions in DMSO. These compounds are part of the screening library for the NIH Roadmap Molecular Libraries Screening Centers Network (MLSCN) and correspond to a collection of chemically diverse compounds that have been in phase I-III clinical trials. Each compound was first diluted to 1 mM in DMSO and stored at −20 °C until use. For the initial screening, all compounds were used at 10 µM in 100 µl of RPMI 1640 medium (two times concentrated) buffered with morpholinepropanesulfonic acid (MOPS) at pH 7 and containing 2% glucose, in 96-well plates. The final concentration of DMSO in all samples corresponded to 1%. C. neoformans cells (10⁴) suspended in 100 µl of water were added to each well. The plates were incubated at 37 °C with shaking for 48 h. The optical density at 540 nm (OD540) was recorded using a FilterMax 5 microplate reader (Molecular Devices, Sunnyvale, CA, USA). Compounds producing OD540 values smaller than 0.05 after fungal growth were selected for further studies. As further detailed in the section "Results," mebendazole was the NCC compound selected for most of the analyses performed in this study.

Analysis of Antifungal Activity of NCC Compounds

Minimum inhibitory concentration (MIC) values were determined using the methods proposed by the European Committee on Antimicrobial Susceptibility Testing (EUCAST) with minor modifications. NCC compounds showing antifungal activity were serially diluted (20 to 0.03 µM) in RPMI 1640 (two times concentrated, pH 7; 2% glucose) buffered with MOPS in 96-well plates. The inocula of C. neoformans (strain H99) and C. gattii (strain R265) were prepared following the EUCAST protocol (Subcommittee on Antifungal Susceptibility Testing of the EUCAST, 2008; Arendrup et al., 2012). The plates were incubated at 37 °C with shaking for 48 h. MIC values corresponded to the lowest compound concentration producing inhibition of fungal growth. For determination of fungicidal activity, C. neoformans cells were grown overnight in YPD broth at 30 °C, washed in PBS and suspended in RPMI 1640 buffered with MOPS, pH 7.
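To make the MIC readout concrete, the sketch below walks through a two-fold dilution series and picks the lowest growth-inhibiting concentration, reusing the OD540 < 0.05 screening threshold mentioned above. The readings are invented for illustration (chosen so the answer echoes the 0.3125 µM mebendazole MIC reported later), not measured data.

```python
# Sketch of MIC determination from a two-fold dilution series.
def mic(concentrations_uM, od540, cutoff=0.05):
    """Return the lowest concentration whose OD540 indicates growth
    inhibition (OD540 < cutoff, mirroring the screening threshold)."""
    inhibitory = [c for c, od in zip(concentrations_uM, od540) if od < cutoff]
    return min(inhibitory) if inhibitory else None  # None: no inhibition seen

# Two-fold series from 20 uM down (10 dilutions), as in the assay above.
concs = [20 / 2**i for i in range(10)]  # 20, 10, 5, ..., 0.039 uM
readings = [0.01, 0.01, 0.02, 0.02, 0.03, 0.04, 0.04, 0.45, 0.52, 0.58]
print(f"MIC = {mic(concs, readings):.4f} uM")  # -> MIC = 0.3125 uM
```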
The yeast suspension was adjusted to 2 × 10⁴ cells per 10 ml of RPMI 1640 (pH 7; 2% glucose) buffered with MOPS and supplemented with 1.25, 0.3125, or 0.078 µM of the antifungal compounds. The final concentration of DMSO corresponded to 0.6% in all samples. The samples were then incubated at 37 °C in a rotary shaker at 200 rpm. Aliquots were taken at different time points and plated onto YPD agar plates, which were incubated at 37 °C for 48 h. The numbers of CFU were then counted and recorded. The minimum fungicidal concentration (MFC) was defined as the lowest drug concentration inhibiting CFU formation by at least 90% in comparison to systems containing no antifungals.

Antifungal Activity of Mebendazole against Intracellular Cryptococci

To assess antifungal activity against intracellular C. neoformans, the fungus was first opsonized by incubation (20 min, 37 °C, with shaking) in DMEM containing 10% FBS and 10 µg/ml of 18B7, an opsonic IgG1 monoclonal antibody to GXM (Casadevall et al., 1998) kindly donated by Dr. Arturo Casadevall (Johns Hopkins University). The fungus was washed with fresh DMEM and incubated with J774.16 macrophages (1:1 ratio, 5 × 10⁵ cells/well in 96-well plates) for 2 h in DMEM containing 10% FBS, 0.3 µg/ml LPS and 0.005 µg/ml IFNγ (37 °C, 5% CO₂). After interaction of C. neoformans with macrophages, the systems were washed with DMEM to remove extracellular fungal cells, and fresh DMEM containing 10% FBS and variable concentrations of mebendazole (0.25, 0.5, and 1 µM; 200 µl/well) was added to each well. The plates were incubated at 37 °C with 5% CO₂. After 8 or 24 h, supernatants were collected for inoculation of YPD agar plates and subsequent CFU counting. Alternatively, infected macrophages were lysed with cold distilled water, and the resulting lysates were plated onto YPD agar plates for CFU counting. To evaluate the toxicity of mebendazole for J774.16 macrophages, 5 × 10⁵ macrophages suspended in DMEM containing 10% FBS were plated into each well of 96-well plates and incubated overnight at 37 °C with 5% CO₂. The medium was supplemented with fresh DMEM containing 10% FBS and mebendazole (0.25, 0.5, and 1 µM) or 0.5% DMSO. The plate was incubated at 37 °C in a 5% CO₂ atmosphere. After 48 h, the systems were washed with DMEM and 50 µl of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) at 5 mg/ml was added to each well, followed by further incubation for 4 h (37 °C, 5% CO₂) under light protection (Mosmann, 1983). Supernatants were removed and 200 µl of DMSO was added for dissolution of the formazan crystals. Absorbances at 570 nm were recorded using the FilterMax 5 microplate reader (Molecular Devices, Sunnyvale, CA, USA).

Analysis of Synergistic Effects

Synergistic activity between mebendazole and standard antifungal drugs was determined on the basis of the fractional inhibitory concentration (FIC) index (Mor et al., 2015). Briefly, mebendazole (denominated drug A) was serially diluted (0.38-0.006 µg/ml, 8 dilutions) in 96-well plates. Standard antifungals (denominated drugs B) were serially diluted (11 dilutions) from 16 to 0.015 µg/ml (amphotericin B) or 64 to 0.06 µg/ml (fluconazole). The FIC index was defined as the MIC of drug A in combination divided by the MIC of drug A alone, plus the MIC of drug B in combination divided by the MIC of drug B alone.

Effects of Mebendazole on Glucuronoxylomannan (GXM) Release

Cryptococcus neoformans cells (10⁵/well in 96-well plates, final volume of 200 µl, duplicates) were cultivated in RPMI 1640 buffered with MOPS, pH 7. The medium was supplemented with mebendazole (0.3125-0.001 µM).
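As a worked illustration of this FIC-index calculation (reconstructed above in its standard checkerboard form, since the formula was lost from the extracted text), the sketch below uses invented MIC values. The 0.5 and 4.0 interpretation cut-offs are the commonly used conventions in the synergy literature rather than thresholds stated in this paper.

```python
# Sketch of the checkerboard FIC-index calculation.
# Common interpretation: FICI <= 0.5 synergy; 0.5 < FICI <= 4 additive or
# indifferent; FICI > 4 antagonism (conventional cut-offs, assumed here).
def fic_index(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Hypothetical values for mebendazole (drug A) + amphotericin B (drug B):
fici = fic_index(mic_a_alone=0.1, mic_a_combo=0.05,
                 mic_b_alone=0.5, mic_b_combo=0.25)
print(fici)  # 1.0 -> additive, matching the interaction reported below
```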
After 48 h of incubation at 37 °C with shaking, the optical density at 540 nm (OD540) was recorded using the FilterMax 5 microplate reader (Molecular Devices, Sunnyvale, CA, USA). The plate was centrifuged for 10 min and the supernatants were used for GXM quantification by ELISA, using the protocol described by Casadevall et al. (1992). C. neoformans viability was monitored by propidium iodide (PI) staining of fungal cells. For this analysis, fungal cells obtained after exposure to 0.3125, 0.15625, and 0.078 µM mebendazole as described above were stained with 1 mg/ml PI for 5 min on ice and analyzed by flow cytometry in a FACSCalibur (BD Biosciences, CA, USA). The percentage of stained (non-viable) cells was obtained with the FlowJo 7 software.

Analysis of Capsular Size and Morphology

Cryptococcus neoformans was grown overnight in YPD broth and washed twice with PBS. The fungus was then suspended in minimal medium containing sub-inhibitory concentrations of mebendazole and incubated for 48 h at 37 °C with shaking. C. neoformans cells were collected by centrifugation, washed in PBS and analyzed by microscopic approaches. For capsule size determination, the suspension was counterstained with India ink and placed onto glass slides. The suspensions were covered with glass coverslips and analyzed with an Axioplan 2 (Zeiss, Germany) microscope. Capsule size, calculated with the ImageJ software, was defined as the distance between the cell wall and the outer border of the capsule. Cell diameters were determined using the same software. For additional analysis of capsular morphology, cellular suspensions were processed for fluorescence microscopy. Staining reagents used in this analysis included calcofluor white (cell wall chitin, blue fluorescence) and the monoclonal antibody 18B7 (Casadevall et al., 1998). C. neoformans cells were prepared for fluorescence microscopy following the protocols established by our laboratory for routine analysis of surface architecture (Rodrigues et al., 2008).

Effects of Mebendazole on C. neoformans Biofilms

Cryptococcus neoformans was grown in Sabouraud's dextrose broth for 24 h at 30 °C with shaking. The cells were centrifuged at 3,000 g for 5 min, washed twice with PBS, suspended in minimal medium and adjusted to a density of 10⁷ cells/ml. Cell suspensions (100 µl) were added into quadruplicate wells of polystyrene 96-well plates (Greiner Bio-One, Australia), followed by incubation at 37 °C for 48 h. The wells containing biofilms were washed three times with PBS to remove non-adhered cryptococcal cells. Fungal cells that remained attached to the wells were considered mature biofilms. To evaluate the susceptibility of C. neoformans biofilms to mebendazole, 100 µl drug solutions (31.25, 15.63, 3.13, 1.56, 0.31, and 0.16 µM) were added to each well. Amphotericin B and fluconazole (2 and 8 µg/ml, respectively) were used as control systems of antifungal activity. Negative controls corresponded to wells containing only water and untreated biofilms. Mature biofilms and antifungal drugs were incubated together at 37 °C for 24 h and washed three times with PBS, and biofilm metabolic activity was quantified by the 2,3-bis(2-methoxy-4-nitro-5-sulfophenyl)-5-[(phenylamino)carbonyl]-2H-tetrazolium hydroxide (XTT) reduction assay (Meshulam et al., 1995). Prior studies demonstrated that XTT reduction assay measurements correlate with biofilm and fungal cell numbers (Martinez and Casadevall, 2007).
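As a minimal sketch of how XTT readings of drug-treated biofilms can be expressed relative to untreated controls, the snippet below normalizes blank-corrected absorbances; the absorbance values, variable names, and blank-correction scheme are illustrative assumptions rather than this study's actual readout.

```python
# Sketch: XTT signal of a treated biofilm as a percentage of the
# untreated (drug-free) biofilm signal, after blank correction.
def percent_activity(a_treated, a_untreated, a_blank):
    return 100 * (a_treated - a_blank) / (a_untreated - a_blank)

untreated, blank = 1.20, 0.08  # hypothetical absorbance readings
for conc_uM, a in [(31.25, 0.15), (3.13, 0.40), (0.16, 1.10)]:
    pct = percent_activity(a, untreated, blank)
    print(f"{conc_uM} uM mebendazole: {pct:.0f}% metabolic activity")
```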
In addition to testing the effects of mebendazole on established biofilms, we evaluated whether this compound would inhibit biofilm formation. Cryptococcal cells were suspended in minimal medium and adjusted to a density of 10⁷ cells/ml in the presence or absence of mebendazole (31.25, 15.63, 3.13, 1.56, 0.31, and 0.16 µM). These suspensions were added in quadruplicate to the wells of polystyrene 96-well plates, followed by incubation at 37 °C for 48 h. Amphotericin B and fluconazole (2 and 8 µg/ml, respectively) were used as antifungal controls. In negative controls, wells contained only ultrapure water. The wells were washed three times with PBS and biofilm formation was quantified by the XTT assay.

Analysis of the Antifungal Activity of Mebendazole against a Collection of C. gattii Mutants

A collection of randomly generated C. gattii mutants was screened for the identification of possible cellular targets of antifungal activity. Mutants (n = 7,569) were generated by insertional mutagenesis after incubation of C. gattii with Agrobacterium tumefaciens, as previously described (Idnurm et al., 2004). All colonies that grew on YPD hygromycin plates were selected and maintained at −20 °C in 96-well plates containing 200 µl/well of YPD broth. Before exposure to mebendazole, mutant cells were first grown for 72 h (30 °C) in 200 µl of YPD distributed into the wells of 96-well plates. The antifungal activity of mebendazole against C. neoformans was reproduced in the C. gattii R265 strain. The mutants were tested for their ability to grow in RPMI 1640 supplemented with MOPS, 2% glucose, 1% DMSO and 10 µM mebendazole. Resistant phenotypes (A540 > 0.3) were selected for dose-response tests of antifungal activity as described above, and potential target identification was performed as detailed below.

Identification of Potential Cellular Targets Required for the Antifungal Activity of Mebendazole

Based on clear resistance phenotypes, two C. gattii mutant strains were selected for target identification by polymerase chain reaction (PCR). The fungus was cultivated overnight in 10 ml of YPD at 30 °C with constant rotation. Each culture (2 ml) was centrifuged for 2 min at 4,000 × g, and the cell pellets were washed twice with PBS and collected for DNA extraction (Bolano et al., 2001). The cells were suspended in 400 µl of lysis buffer (50 mM Tris-HCl, 1 mM EDTA, 200 mM NaCl, 2% Triton X-100, 0.5% SDS, pH 7.5), followed by the addition of 1 volume of a phenol-chloroform mixture (pH 8) and 100 µl of 2-µm acid-washed glass beads. Mechanical disruption was performed by alternating 1-min cycles of vortexing and ice incubation. Lysates were centrifuged at 4,000 × g for 20 min at 4 °C. Supernatants were collected and DNA was ethanol-precipitated overnight at −20 °C for subsequent treatment with RNAse (Bolano et al., 2001). Identification of the genes disrupted in the mutants was performed using inverse PCR (Pavlopoulos, 2011). DNA was quantified using the Qubit reagent (Invitrogen), and 1 µg of each sample was cleaved separately with BglII, SalI, or StuI (Promega) restriction enzymes. The cleavage products were subjected to a T4 DNA ligase reaction (New England) followed by PCR using primers for inverse PCR. Amplicons were gel-purified with the QIAquick Gel Extraction Kit (Qiagen). For the DNA sequencing reaction, 50 ng of each sample and 5 pmol of each primer were used.
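As a minimal sketch of the resistance call used in this mutant screen, the snippet below flags wells whose absorbance after growth in 10 µM mebendazole exceeds the A540 > 0.3 cut-off stated above; the plate layout and readings are invented for illustration.

```python
# Sketch: flag putatively resistant mutants from a 96-well growth screen.
def resistant_wells(plate: dict, threshold: float = 0.3) -> list:
    """plate maps well IDs (e.g., 'A1') to A540 readings after growth
    in medium containing 10 uM mebendazole; wells above the threshold
    are candidates for dose-response follow-up and target identification."""
    return sorted(w for w, a540 in plate.items() if a540 > threshold)

plate = {"A1": 0.04, "A2": 0.41, "A3": 0.06, "B1": 0.35, "B2": 0.02}
print(resistant_wells(plate))  # -> ['A2', 'B1']
```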
Sequences were obtained in an ABI-Prism 3500 Genetic Analyzer (Applied Biosystems) and their quality was assessed by electropherogram analysis based on phred (http://www.biomol.unb.br/phph/). The identification of genes interrupted by the T-DNA was performed by comparing each sequenced DNA fragment, corresponding to the T-DNA flanks, with the genomic sequence of the C. neoformans H99 strain, available in the Cryptococcus genome databases (Broad Institute), using BLASTn. Ortholog distribution was evaluated using the OrthoMCL database (Chen et al., 2006). Mutants showing resistance to mebendazole were analyzed for their ability to produce melanin and extracellular GXM, as previously described by our group. Statistical Analyses Statistics were obtained with the GraphPad Prism 6.0 software. The unpaired Student's t-test was used for the mebendazole toxicity analysis. Two-way ANOVA with Tukey's and Bonferroni's multiple comparisons tests was used for the fungicidal activity and the intracellular activity of mebendazole in macrophages. For biofilm analyses, one-way ANOVA was performed with Dunnett's multiple comparisons test. Selection of Mebendazole as a Potential Anti-cryptococcal Agent Of the 727 drugs tested at 10 µM, 17 compounds were active against C. neoformans, including antibacterials (chloroxine, cycloserine, and linezolid), a neuroleptic drug (mesoridazine), calcium channel blockers (nisoldipine and enalaprilat), antiarrhythmic agents (flecainide acetate), drugs for gastrointestinal malignancies (irsogladine maleate and cisapride), gynecologic regulators (medroxyprogesterone acetate and Clomid), an anti-histaminic (olopatadine), and the anti-helminthic benzimidazoles (mebendazole, albendazole, flubendazole, and triclabendazole) (Figure 1). Notably, none of the molecules showing antifungal activity were structurally related, except the benzimidazoles. Considering their efficacy in inhibiting the growth of C. neoformans, their structural similarity, and their previously described anti-cryptococcal activity (Cruz et al., 1994), we selected benzimidazoles for further investigation in our model. We extended the results obtained with the compound collection to dose-response tests using the four benzimidazoles showing antifungal activity and other related molecules, including thiabendazole, oxibendazole, and fenbendazole. The most active compounds were mebendazole, fenbendazole, and flubendazole (Figure 2). Mebendazole and flubendazole produced the same MIC values (0.3125 µM). Fenbendazole was the most potent compound, with a MIC corresponding to 0.039 µM. Mebendazole, however, efficiently penetrates the brain in animal models (Bai et al., 2015) and is in clinical trials for the treatment of pediatric gliomas in humans (ClinicalTrials.gov, 2013). Considering that the worst clinical outcomes of cryptococcosis include colonization of the brain, we selected mebendazole as the molecular prototype for our subsequent analyses of fungicidal effects, ability to kill intracellular cryptococci, effects on fungal morphology, interference with fungal biofilms, and identification of cellular targets. Mebendazole Is Fungicidal against C. neoformans To test the fungicidal activity of mebendazole, C. neoformans was exposed to different drug concentrations for different periods of time. Mebendazole concentrations lower than 0.3125 µM had no effects on C. neoformans (Figure 3). At 0.3125 µM or higher, however, significant antifungal activity was observed after 6 to 12 h of exposure of C. neoformans to the drug.
After 48 h, mebendazole killed 100% of C. neoformans cells. FIGURE 3 | Fungicidal activity of mebendazole against C. neoformans. Fungal cells were exposed to mebendazole (0.078-1.25 µM) for periods varying from 0 to 96 h. Fungicidal activity was evident after 48 h and required a minimum concentration of 0.3125 µM mebendazole. In comparison to control systems (no drug), mebendazole concentrations of 0.3125 and 1.25 µM significantly affected fungal growth in all incubation periods. All the other drug concentrations resulted in fungal growth that was similar to that observed in the absence of mebendazole. Data illustrate a representative experiment of three independent replicates. Activity of Mebendazole against Intracellular C. neoformans Cryptococcus neoformans is a facultative intracellular pathogen and this characteristic likely has a negative impact on anti-cryptococcal treatment (Feldmesser et al., 2000). We then evaluated the ability of mebendazole to kill intracellular fungi. J774.16 macrophages were first infected with C. neoformans and the cultures were then treated with mebendazole for different periods (8 and 24 h) at variable drug concentrations (1, 0.5, and 0.25 µM). Macrophage viability was monitored by treating non-infected J774.16 cells with mebendazole alone (Figure 4A). After 8 h, no differences were observed between the viability of untreated macrophages and mebendazole-treated cells (P > 0.1). After 24 h, macrophage viability was affected by higher mebendazole concentrations (P = 0.0071 for 1 µM mebendazole), but no differences were observed between untreated phagocytes and cells that were exposed to 0.25 µM mebendazole (P = 0.3225). In infected macrophages, all mebendazole concentrations showed antifungal activity against both extracellular and intracellular fungi (Figure 4B). The concentration showing the lowest impact on macrophage viability (0.25 µM) was efficient in killing both intracellular and extracellular C. neoformans. FIGURE 4 | Activity of mebendazole against intracellular C. neoformans. (A) Viability of drug-treated, non-infected macrophages. After 8 h, all systems had similar viability levels (P > 0.1). After 24 h, cell viability was only affected by the 1 µM mebendazole concentration (P = 0.0071). (B) Activity of mebendazole against intracellular (red bars) or extracellular (blue bars) C. neoformans. In intracellular assays, statistical differences (P < 0.05) between no drug (0) and drug-treated systems were always observed, except when the 0.25 µM concentration of mebendazole was used in the 8 h incubation period. In extracellular assays, statistical differences (P < 0.05) between no drug (0) and drug-treated systems were observed for all mebendazole concentrations, but only after the 24 h incubation. Comparative analysis of fungal loads obtained from intracellular and extracellular assays revealed statistical differences (P < 0.05) only in the 8 h period of incubation, suggesting that mebendazole is initially more effective against extracellular fungi, but similarly active against both intracellular and extracellular cryptococci after prolonged (24 h) periods of exposure to infected macrophages. Data illustrate a representative experiment of three independent replicates. Analysis of Potential Synergism between Mebendazole and Standard Antifungals To evaluate whether the association of mebendazole with amphotericin B or fluconazole results in improved anti-cryptococcal activity, checkerboard assays were performed for calculation of the FIC index (Table 1). We found that mebendazole had additive activity against C. neoformans when combined with amphotericin B. Fluconazole did not show any improved effect in combination with mebendazole. Mebendazole Affects Capsule Size and Fungal Morphology To evaluate whether mebendazole affected key cellular structures of C. neoformans during regular growth, we cultivated the fungus for 48 h in the presence of a sub-inhibitory concentration (IC50; 0.22 µM) of the drug for further analysis of morphology and capsule size by a combination of fluorescence microscopy and India ink counterstaining. Fungal cells cultivated in the presence of mebendazole presented marked morphological alterations, including loss of spherical shape, intracellular furrows (Figure 5A), and reduced capsular dimensions (Figure 5B, P < 0.05). Effect of Mebendazole on GXM Release Since mebendazole interfered with capsule size (Figure 5B), we asked whether GXM release was affected by exposure of C. neoformans to the drug. Supernatants of fungal cells cultivated in the presence of mebendazole were used for GXM quantification by ELISA (Figure 6). Unexpectedly, supernatants of mebendazole-treated cells had increased GXM concentrations, mainly in the dose range required for fungal killing. Since the polysaccharide is synthesized intracellularly (Yoneda and Doering, 2006), we hypothesized that the increased GXM detection resulted from leakage induced by membrane damage. To address this possibility, mebendazole-treated cells were stained with PI. Stained cells varied from 70 to 90% in the dose range generating cell death, which was compatible with membrane damage and polysaccharide leakage. Notably, these results and those described in Figure 5 are also compatible with the hypothesis of GXM release from the cell surface, so we cannot rule out the possibility that mebendazole promotes detachment of capsular structures in C. neoformans. Activity of Mebendazole against C. neoformans Biofilms Biofilm formation causes well-known difficulties in the treatment of a number of infectious diseases, including cryptococcosis (Martinez and Casadevall, 2006; Borghi et al., 2016). Based on this observation, we evaluated whether co-incubation of mebendazole with yeast cells prevented C. neoformans biofilm formation or caused damage to mature biofilms. Metabolic activity was measured by the XTT reduction assay (Figure 7). Fluconazole is known to have no effect on mature biofilms, in contrast to amphotericin B (Martinez and Casadevall, 2006). Therefore, these drugs were used as negative and positive controls, respectively. Mebendazole at the MIC (0.3125 µM) affected biofilm formation (Figure 7A; P < 0.0001) and damaged mature biofilms (Figure 7B; P < 0.0001). Lower concentrations of mebendazole similarly affected C. neoformans mature biofilms (Figure 7B; P < 0.0001). As expected, higher concentrations of mebendazole had even clearer impacts on C. neoformans biofilms. Identification of Potential Cellular Targets for Mebendazole Antifungal activities of benzimidazoles were described before (Cruz et al., 1994), but the mechanisms by which these drugs affect cryptococcal growth remained unknown. In order to identify potential targets for mebendazole in Cryptococcus spp., we first evaluated whether the drug affected the growth of C. gattii and C. neoformans in a similar fashion. In fact, growth inhibition curves were identical for both pathogens (Figure 8A).
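Growth inhibition curves such as those in Figures 2 and 8A are commonly summarized by fitting a Hill-type inhibition model and reading off the IC50. The sketch below is one way to do this in Python; it is not taken from the paper, and the concentrations, growth fractions, and the two-parameter model are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical growth data (OD540 as a fraction of the drug-free control)
# over a two-fold mebendazole dilution series like the one used above.
conc = np.array([0.039, 0.078, 0.156, 0.3125, 0.625, 1.25])  # uM
growth = np.array([0.98, 0.90, 0.70, 0.35, 0.10, 0.03])      # fraction of control

def hill(c, ic50, n):
    """Two-parameter Hill model: full growth at c = 0, complete inhibition at high c."""
    return 1.0 / (1.0 + (c / ic50) ** n)

(ic50, n), _ = curve_fit(hill, conc, growth, p0=[0.3, 2.0])
print(f"Estimated IC50 = {ic50:.3f} uM (Hill slope = {n:.2f})")
```

With real plate-reader data, the fitted IC50 would be what defines a sub-inhibitory concentration of the kind (0.22 µM) used in the capsule experiments.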
A mutant collection generated by co-incubation of C. gattii and A. tumefaciens (Idnurm et al., 2004) had been produced by our group for general purposes involving the development of antifungals and studies of pathogenesis. In the present study, these mutants were screened for resistance phenotypes in the presence of 10 µM mebendazole, based on the assumption that, in the absence of a cellular target required for antifungal activity, the drug would lack anti-cryptococcal properties. Most of the mutants were sensitive to mebendazole and a number of strains were partially resistant to the drug (Figure 8B). However, two of the mutant strains clearly stood out from the entire collection, showing the highest levels of resistance even at higher mebendazole concentrations (Figure 8C). The interrupted regions in these two mutants were identified by inverse PCR (Figure 8D). In the most mebendazole-resistant mutant, the protein encoded by the interrupted gene contained a scramblase domain (PF03803 - CNBG_3981). In the second most resistant mutant, the gene coding for the ribosome biogenesis protein Nop16 (PF09420 - CNBG_3695) was interrupted. These domains are found in diverse phylogenetic groups, according to the PFAM database (Finn et al., 2016). However, orthologs of these genes were found in only a narrow group of organisms and were absent in human cells (Figure 8D), according to the OrthoMCL database (Chen et al., 2006). These results indicate that at least two novel cellular targets are involved in the antifungal activity of mebendazole. Neutralizing cryptococcal virulence factors is also likely to be beneficial for the control of cryptococcosis. In this context, we evaluated whether the mutants lacking potential targets for the activity of mebendazole had normal production of the best-characterized cryptococcal virulence factors. The two mutants had normal urease activity (not shown). Analysis of extracellular GXM and pigmentation, however, revealed important differences between WT and mutant cells (Figure 9). Mutants disrupted for expression of the putative scramblase and of nucleolar protein 16 had decreased contents of extracellular GXM in comparison with WT cells (P < 0.001). The kinetics of melanin production was also negatively affected in the mutants. DISCUSSION Processes of drug research and development are costly, time-consuming, and have questionable success rates (Kaitin, 2010; Kaitin and DiMasi, 2011). In this scenario, drug repurposing has provided a potential boost to the drug pipeline, combating health emergencies and assisting neglected populations. Recent studies have provided evidence of successful drug repurposing to combat Cryptococcus (ClinicalTrials.gov, 2016; Rhein et al., 2016) and Zika virus (Veljkovic and Paessler, 2016; Xu et al., 2016; Sacramento et al., 2017) infections, with promising results. The need for additional anti-infectious agents, however, is clear. In this study, we identified small-molecule inhibitors of C. neoformans via a drug-repurposing screen. Our findings demonstrated antifungal activity in a group of anti-helminthic benzimidazoles and suggested potential targets for the development of novel antifungals. Benzimidazoles are heterocyclic aromatic bis-nitrogen azoles that are considered promising anchors for the development of new therapeutic agents (Bansal and Silakari, 2012).
Benzimidazole derivatives have been associated with the control of infectious diseases through antiviral, antifungal, antimicrobial, and antiprotozoal properties, but they also manifest anti-inflammatory, anticancer, antioxidant, anticoagulant, antidiabetic, and antihypertensive activities (Ates-Alagoz, 2016). The anti-cryptococcal activity of benzimidazoles was demonstrated two decades ago (Cruz et al., 1994), but the effects of these compounds on fungal morphology and biofilm formation were not explored. Similarly, their cellular targets and ability to kill intracellular fungi remained unknown. Mebendazole, one of the benzimidazoles showing antifungal activity, is in clinical trials for the treatment of human pediatric glioma (ClinicalTrials.gov, 2013). Since effective anti-cryptococcal agents must reach the central nervous system at biologically active concentrations, we selected mebendazole for our experiments on antifungal activity. This compound affected cryptococcal growth, morphology, biofilms, and macrophage infection. The pathogenic mechanisms used by C. neoformans during infection bring significant complexities to the management of cryptococcosis. Conditions favoring biofilm formation are thought to contribute to cryptococcal virulence (Benaducci et al., 2016). As with other pathogens, C. neoformans biofilms are resistant to antimicrobial agents and host defense mechanisms, causing significant morbidity and mortality (Martinez and Casadevall, 2015). These characteristics are especially relevant in a scenario of increasing use of ventriculoperitoneal shunts to manage the intracranial hypertension associated with cryptococcal meningoencephalitis (Martinez and Casadevall, 2015). In our model, mebendazole was an efficient antifungal agent against C. neoformans biofilms. The minimum mebendazole concentration required for antifungal activity against planktonic cells (0.3125 µM) was much higher than the doses required for activity against mature biofilms (0.0156-0.0312 µM). The reason for this discrepancy is unclear. FIGURE 9 | Cryptococcus gattii mutants lacking potential targets for mebendazole activity show a defective capacity to produce well-known cryptococcal virulence factors. GXM determination by ELISA (left panel) indicated that the putative scramblase and nucleolar protein 16 were required for polysaccharide export. Both proteins were apparently involved in the kinetics of pigmentation in C. gattii (right panel). Since capsular polysaccharides have well-described roles in the assembly of cryptococcal biofilms (Martinez and Casadevall, 2005), we hypothesize that the impact of mebendazole on capsular architecture and GXM release described here may affect biofilm stability. An additional complexity in the treatment of cryptococcosis is the ability of the fungus to reside inside phagocytes. In fact, persistent pulmonary infection is associated with the intracellular parasitism of C. neoformans (Goldman et al., 2000). In this context, targeting intracellular survival and growth and/or cryptococcal virulence factors expressed during intracellular parasitism might offer new strategies to improve anti-cryptococcal treatment, as reviewed by Voelz and May (2010). Mebendazole was effective against phagocytized fungi in our model. The fact that the anti-helminthic compound had an additive effect with amphotericin B also suggests that benzimidazole-like compounds could be used in therapeutic protocols against cryptococcosis.
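For reference, the additive interaction with amphotericin B reported above follows from the checkerboard FIC index, FICI = MIC_A(combination)/MIC_A(alone) + MIC_B(combination)/MIC_B(alone). The sketch below illustrates the calculation; the MIC values are hypothetical, and the interpretation cutoffs follow one common convention, which varies between authors.

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Fractional inhibitory concentration (FIC) index for a drug pair,
    computed from checkerboard MICs: FICI = FIC_A + FIC_B."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fici):
    # One common reading of the cutoffs; exact thresholds differ between authors.
    if fici <= 0.5:
        return "synergistic"
    if fici <= 1.0:
        return "additive"
    if fici <= 4.0:
        return "indifferent"
    return "antagonistic"

# Hypothetical checkerboard result for mebendazole + amphotericin B.
fici = fic_index(mic_a_combo=0.156, mic_a_alone=0.3125,  # mebendazole, uM
                 mic_b_combo=0.25, mic_b_alone=0.5)      # amphotericin B, ug/ml
print(f"FICI = {fici:.2f} -> {interpret(fici)}")
```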
The primary mechanism of the anthelmintic activity of mebendazole relies on binding the β-subunit of tubulin before its dimerization with α-tubulin, with subsequent blocking of microtubule formation (Gardner and Hill, 2001). Tubulin may also be the target of mebendazole in C. neoformans, but our studies suggest the possibility that the antifungal effects of mebendazole involve additional targets. Mebendazole induced membrane permeabilization, as concluded from the increased levels of PI staining after exposure of C. neoformans to the drug. In addition, mutants lacking the genes coding for a putative scramblase and the nucleolar protein Nop16 were highly resistant to mebendazole. Both mutants had defective formation of important virulence factors. Remarkably, sequences showing similarity to these two proteins were absent in human cells, suggesting great potential for these two proteins as specific novel antifungal targets. Scramblases are ATP-independent enzymes that act to randomize lipid distribution by bidirectionally translocating lipids between leaflets (Hankins et al., 2015). Lipid-translocating enzymes, in fact, are fundamental for cryptococcal pathogenesis and GXM export (Hu and Kronstad, 2010; Rizzo et al., 2014). The functions of Nop16 in C. neoformans are unknown, but in S. cerevisiae this protein is a component of the 66S pre-ribosomal particles required for 60S ribosomal subunit biogenesis (Harnpicharnchai et al., 2001; Horsey et al., 2004). Although there is no evidence in the literature linking tubulin polymerization, membrane permeability, and the cellular functions of these two potential targets, our studies suggest that a connection may exist and that these proteins may be functionally integrated in fungi. Cryptococcosis affects regions where health infrastructure resources are extremely limited. Considering the high mortality rates associated with this disease and the socio-economic scenario behind cryptococcosis, low-cost and efficient antifungal alternatives are urgently needed. Clinical use of novel drugs, however, depends on a number of properties of the molecular candidates. In this regard, the potential use of mebendazole against human cryptococcosis raises several concerns. For instance, the use of mebendazole at high doses may cause bone marrow suppression (Fernandez-Banares et al., 1986) and it is unclear whether the compound is safe in pregnancy (Torp-Pedersen et al., 2012). Benzimidazoles have only limited water solubility, which impacts the rate and extent of their absorption and, consequently, their systemic bioavailability, maximal plasma concentration, and tissue distribution (McKellar and Scott, 1990). Animal studies with mebendazole demonstrated drug distribution through all organs, including the central nervous system (Lanusse and Prichard, 1993). However, the drug and its metabolites were concentrated mainly in the liver, where they remained for at least 15 days post-treatment (Lanusse and Prichard, 1993). Mebendazole itself, rather than its metabolites, is thought to be the active form of the drug (Gottschall et al., 1990). However, in the liver, benzimidazoles are mostly modified by the enzymatic system of hepatic microsomal oxidases, which are involved in sulfoxidation, demethylation, and hydroxylation (Short et al., 1988; Lanusse and Prichard, 1993). In fact, benzimidazoles are usually short-lived and their metabolic products predominate in the plasma, tissues, and excreta of the host (Fetterer and Rew, 1984).
Although the difficulties associated with the clinical use of mebendazole are clear, our results combine multiple antifungal activities with molecular targets that are absent in human cells, which encourages the further development of benzimidazole-like molecules against C. neoformans. In fact, a number of reports have suggested that the benzimidazole core represents a promising scaffold for the development of new therapeutic agents (Yadav and Ganguly, 2015). Montresor et al. (2010) estimated that the cost to procure one million doses of standard benzimidazoles (500 mg each) would be approximately US$ 20,000, including international transport. In this context, the ability of mebendazole to penetrate the brain (Bai et al., 2015) and to cause extensive damage to C. neoformans cells suggests great potential as a prototype for the development of novel anti-cryptococcal agents. AUTHOR CONTRIBUTIONS LJ, CS, LK, AS, MDP, MV, and MR prepared the experimental design. LJ, RS, WL, RA, and CS performed the experiments. LJ, CS, LK, AS, MDP, MV, and MR discussed the results, wrote and approved the final manuscript.
2017-05-04T00:15:12.333Z
2017-03-28T00:00:00.000
{ "year": 2017, "sha1": "e99a3516a0c6c3c59960703301c1d007540cd782", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2017.00535/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e99a3516a0c6c3c59960703301c1d007540cd782", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
239705685
pes2o/s2orc
v3-fos-license
Impact of Digital Strategic Orientation on Organizational Performance through Digital Competence In the era of the digital economy, enterprises need a comprehensive digital transformation of strategy, business, organization, competence, and operation. However, constrained by the development of digital technology, previous studies mainly focused on the development and application of digital technology and on single-case and multi-case studies of digital transformation. Few researchers have systematically studied the digital transformation mechanism at the organizational level. Therefore, this study explored the relationship between strategic orientation and organizational performance through digital competence at the organizational level. To accomplish this task, this study constructed the dimensions of digital competence according to core competence theory. Digital competence contains three hub-factors: digital infrastructure, digital integration, and digital management. This study collected 160 questionnaires from Chinese enterprises and analyzed the data using SmartPLS 3. This study analyzed the positive relationships between digital strategic orientation, digital competence, and organizational performance. This study identified the importance of digital competence through the empirical analysis of enterprises that are undergoing digital transformation or have completed a digital transformation. Therefore, enterprises need to pay attention to the impact of digital competence on organizational performance. Digital competence is a reshaping of corporate resources when facing a turbulent digital environment. Moreover, digital competence can ultimately achieve value delivery through the improvement of enterprise organizational performance. Introduction At the beginning of 2020, the sudden COVID-19 pandemic put the economies and societies of all countries in the world through a severe test [1]. According to the International Monetary Fund's World Economic Outlook report, the total global economy in 2020 was approximately USD 84.538 trillion, USD 2.061 trillion less than in 2019. However, in 2020, China's GDP was CNY 101,598.6 billion, an increase of 2.3% over the previous year. China was also the only major economy in the world to achieve positive economic growth in 2020. During the special period of fighting the epidemic, the development of the digital economy played an important role in stabilizing economic and social operations. The new generation of information technology has been widely used in epidemic prevention and control and in production and living security, while the popularization of new digital formats, new models, and new applications has accelerated, all showing the value and potential of the digital economy [2]. There is no doubt that China was the first country to discover and pay attention to COVID-19. The severe impact of the epidemic has forced enterprises to think deeply about digital transformation and has strengthened their commitment to it [3]. Although telecommuting solved the problem of collaboration among employees during the epidemic, the epidemic made enterprises more fully aware of the need to achieve collaboration between employees and machines, machines and machines, enterprises and enterprises, and enterprises and customers to truly realize digitalization [4].
Many studies, solutions, best practices, and forums related to digital transformation have been reported in the academic literature. For example, researchers report that digital transformation is a company transformation that develops new business models [5] and that digital transformation creates and captures value by implementing new business architectures [6]. However, most of this research covers only updates of digital technologies and tool applications, or even just presents new concepts. These studies attempt to solve enterprise problems from only one side of enterprise operations, rarely involving organizational strategy and process management. However, the digital transformation of enterprises is a complex process involving many production factors, such as enterprise resources, technology, knowledge, and management, and it is necessary to solve such problems at the organizational level. Therefore, this study aimed to explore the mechanism of digital transformation and analyze the relationship between digital transformation and organizational performance. There were three main research questions in this study: (1) What are the influencing factors of digital competence? (2) What is digital competence? (3) What is the relationship between digital competence and organizational performance? This study constructed the dimensions of digital competence according to core competence theory. In the context of core competence theory, digital competence contains three hub-factors: digital infrastructure, digital integration, and digital management. Moreover, this study developed measurement items of digital competence based on previous core competence research. In order to answer the above research questions, we review digital transformation, the influencing factors of digital competence, and theories related to digital transformation in the second section. In the third section, the model and hypotheses are proposed. The fourth section is the empirical analysis. This study collected 160 questionnaires from Chinese enterprises and analyzed the data using SmartPLS 3. This study analyzed the positive relationships between digital strategic orientation, digital competence, and organizational performance. In the final sections, the implications and study limitations are discussed, after which conclusions are presented. This study has both theoretical and practical significance in the field of digital competence research. This study builds the dimensions of digital competence on the basis of previous research and verifies through empirical analysis that digital competence positively impacts organizational performance. At the same time, by exploring the digital transformation mechanism, this study provides enterprises with a digital transformation methodology, which can help enterprises focus on promoting the implementation of digital transformation projects. Digital Transformation In recent years, digital transformation has been an important phenomenon in knowledge management research, including the consideration of the significant changes in society and industry resulting from the use of digital technology. Companies are looking for various methods of digital transformation at the organizational level and are moving in a strategic direction as they try to achieve better organizational performance [7]. In this complex and uncertain business era, the impact of digital technology is experienced everywhere, and digitization is the greatest certainty of the future.
In the era of the digital economy, all businesses will be digitized, and consumption and industry are facing the need for comprehensive digital upgrading. However, the vast majority of previous studies focused on digital transformation through the use of digital technology to improve company performance. Digital technologies include digital artifacts, digital infrastructures, and digital platforms, such as social networking, mobile communication, data analytics, cloud computing, and IoT (Internet of Things) platforms and ecosystems [8]. For practitioners, it is necessary to combine insights on information systems, corporate strategy, and operations management to make reasonable decisions on digital transformation across a whole organization. Digitization leads to collaborative enterprise organization and operation, agile business processes, intelligent management decision making, and an integrated industrial ecology, reshaping the logic of enterprise operations. Digital transformation is a company transformation that develops new business models [9]. Digital transformation creates and captures value by implementing new business architectures [6]. The use of IT is transformative, leading to fundamental changes in existing business processes, routines, and capabilities, and enabling enterprises to enter new markets or withdraw from current markets [10]. Digital transformation utilizes digital technology to achieve cross-border interactions with suppliers, customers, and competitors [11]. Therefore, owing to digital technology, digital transformation is closely related to the strategic change of business models. To match the need for increased capabilities in the digital era, these objective realities have forced enterprises to pay more and more attention to the development of digital competence for digital transformation. Influencing Factors of Digital Competence By reviewing previous research on digital transformation, we found that the main factor influencing digital transformation is the digital strategic orientation. A digital strategy uses digital resources to create value and shapes the enterprise's business strategy. The ability to build a digital enterprise architecture relies to a large extent on a clear digital strategy, one that is supported by a culture of transformation and innovation cultivated by leaders [12]. Previous research has shown that there are three main aspects of strategic orientation that directly affect digital competence: customer orientation, competitor orientation, and technology orientation [13][14][15][16]. Customer orientation means using digital terminals as the best carrier to integrate customers' key journeys, realize B2C end-to-end interaction, support customized and personalized products, accurately collect insights into customer needs, remove intermediary links, and improve operational efficiency and the customer experience [17]. In other words, in the digital age, it is necessary to anchor the critical points of customers through digital terminal products and turn customers into users. Digital terminal products should answer three core questions: Who are the users? What is the application scenario? Can they help users solve any problems? Digital terminal products include external users at the C end, channel users at the B end, and internal users at the E end. The application scenarios of digital terminal products are the customer critical points at the C, B, and E ends; different clients have different customer critical points.
With the development of the digital age, the boundaries between industries and resources are no longer clear, giving enterprises huge space for market creation. Therefore, enterprises in the digital era do not compete in a fixed resource field, and digital technology offers more possibilities for innovation. Enterprises, customers, and partners in different industries form a new digital ecosystem [18]. The goal of participants in this ecosystem is to gain growth space rather than simply seize the growth space of others. When competitors in the same industry respond to each other's digital strategies, innovation is often replaced by imitation; that is, multiple competitors adopt similar products and service delivery methods and use similar business models to obtain benefits. In this case, enterprises need to obtain more market share than competitors to have room for survival. Technology orientation means that, with the development of digital technology, enterprise systems are dynamically reconstructed as enterprise needs change, using generalized and modular development to build on changes in the enterprise's internal and external environments and market requirements. Enterprises can configure and customize their own systems according to their needs and further implement flexible and optimized combinations in time according to the progress of tasks [19]. Digital transformation can not only effectively improve the market reaction speed of enterprises, but also greatly improve efficiency, reduce product costs and resource consumption, and effectively improve the competitiveness of enterprises [20]. Theories Related to Digital Transformation The goal of the enterprise is to maximize value and interests, which is achieved by developing the core competitiveness of the enterprise and optimizing the value chain [21]. The purpose of the digital transformation of enterprises is to formulate long-term development strategies, design reasonable organizational structures, optimize value chain networks, and develop unique core competitiveness, so as to enable enterprises to win in global market competition [7]. Therefore, the theories related to digital transformation mainly include the resource-based view and core competence theory. The resource-based view has a profound connection with the digital transformation of enterprises. The resource-based view affirms the importance of the digital transformation strategy of enterprises and holds that redundant resources are beneficial to the implementation of that strategy [22]. Moreover, the resource-based view affirms the value of customers as unique resources for the digital transformation of enterprises and holds that the degree of customer involvement determines the performance of digital transformation [8], because customers participate in resource sharing, which promotes the integration of operational resources and management resources and ultimately enhances the market value of the enterprise through the creation of heterogeneous resources. At the same time, the resource-based view points out that the resource allocation of enterprises needs to match the development stage of their digital transformation.
The resource-based view holds that organizations need to reintegrate, build, and configure their resources and capabilities in a changing external environment and finally form new unique resources to ensure the competitive advantage of enterprises [23]. Moreover, one of the most important themes of the digital economy era is the reconstruction and switching of digital infrastructure and the reconstruction of business ecology based on new digital infrastructure. For enterprises, resources such as chips, algorithms, data, software, networks, knowledge, sensors, databases, and cloud platforms are becoming more and more important for long-term development. According to core competence theory, the essence of enterprise competition is who can better and more efficiently allocate research, development, and other resources. In every link of research and development, design, procurement, production, distribution, and service, enterprises face the problem of how to optimize resource allocation and improve efficiency. The essence of digital transformation solutions, such as Industry 4.0 and the industrial Internet, lies in the automatic flow of data to eliminate the uncertainty of complex systems and improve the efficiency of resource allocation [24]. The transformation and upgrading of enterprises requires new digital competence to innovate and develop. The integration of digital technology and enterprises will bring about a shift in paradigms and a change of business models, as well as the reconstruction of business systems and innovation capability [7]. For enterprises, no matter whether they start digital transformation or not, and no matter how hard and fast they promote it, they will face risks and uncertainties. It is not that there is no risk without investment, but that the risk without investment may be greater. The driving force of digital transformation is not that its returns can be guaranteed, but that the costs and risks of non-transformation are difficult to accept. Therefore, the driving force of digital transformation is not the choice of CIOs, CDOs, and CEOs, but comes from the CIOs, CDOs, and CEOs of competitors. Research Model and Hypotheses In the current digital environment, the company's strategy is to support the company's transformation and focus on the upgrading of its digital strategy [12]. However, the implementation of the strategy requires the company to use resources and competence development capability to improve performance [25]. Therefore, in order to find out how the company's strategy affects organizational performance through its competence, a research model of digital competence is proposed. This study proposes strategic orientation as an influencing factor on digital transformation for companies to strengthen their competitiveness. This study analyzes the impact on a company's digital competence according to its strategic orientation and also analyzes the impact on organizational performance according to its digital competence. The objective of this study is to empirically investigate the relationship between strategic orientation and organizational performance through digital competence.
As shown in Figure 1, based on resource-based theory and core competence theory, this study proposes a research model in which strategic orientation (customer orientation, competitor orientation, and technology orientation) influences organizational performance through digital competence (digital infrastructure, digital integration, and digital management). Strategic Orientation and Digital Competence Existing research has fully demonstrated the importance of strategic orientation and its relationship with digital competence. Mithas et al. [13] pointed out that an enterprise's digital strategy affects digital business resources, especially in a highly competitive environment. Matt et al.
[14] emphasized the importance of customer orientation, competitor orientation, and technology orientation in digital transformation strategies, and how they affect the digital transformation process. Holotiuk et al. [15] investigated the key success factors of digital business strategies and emphasized the importance of enterprises deploying digital resources according to their digital business strategies. Sebastian et al. [16] explained how large, long-established enterprises establish a digital strategic orientation and develop digital competence in response to digital transformation. Robert et al. [26] claimed that, in the face of digital challenges, the digital work that enterprises need to do includes formulating a digital strategy that suits them and allocating resources and competence in accordance with that strategy. Vial [8] also emphasized the importance of strategic orientation and digital resource capabilities in the process of enterprise digitalization. Therefore, we propose the following research hypotheses: Hypothesis 1. Customer orientation has a positive impact on digital competence. Hypothesis 2. Competitor orientation has a positive impact on digital competence. Hypothesis 3. Technology orientation has a positive impact on digital competence. Digital Competence and Organizational Performance Based on core competence theory, this study develops digital competence to cope with the digital environment, building on the original concept of IT competence, and consisting of three dimensions: digital infrastructure, digital integration, and digital management. Bharadwaj [27], based on the resource-based view, explored IT competence and organizational performance in empirical research, dividing IT core resources into tangible IT infrastructure, IT human resources, and intangible IT resources. Wade et al. [28] used a multidimensional typology to analyze the attributes of IT resources, sorted into outside-in, spanning, and inside-out processes, to sustain a competitive advantage over time. Kim et al. [29] consider IT competence to be the enterprise's ability to use IT to effectively manage information. Choi et al. [30] proposed that IT competence includes IT human resources, IT infrastructure, and IT vendor management. Firms must reduce costs and maximize performance through the effective management of IT resources. Zhang et al. [31] proposed that IT competence enables enterprises to effectively integrate and support different system components under changing business processes. Yu et al. [32], based on the resource-based view and core competence theory, constructed IT competence from the two aspects of IT flexibility and IT management and empirically analyzed the relationship between IT competence and performance. Therefore, we propose the following research hypothesis: Hypothesis 4. Digital competence has a positive impact on organizational performance. Measurement The survey items of all variables in the questionnaire were measured on a five-point Likert scale, where 1 means completely inconsistent and 5 means very consistent. The measurement of customer orientation (CUO) refers to the scale of Lu et al. [33], the measurement of competitor orientation (COO) refers to the scale of Yu et al. [32], and the measurement of technology orientation (TO) refers to the scale of Ng et al. [20]. The measurement of digital infrastructure (INF) refers to the scale of Reitz et al. [34], the measurement of digital integration (INT) refers to the scale of Boer et al.
[35], the measurement of digital management (MAN) refers to the scale of Ravichandran et al. [36], and the measurement of organizational performance (OP) refers to the scale of Tanriverdi et al. [37]. The operational definitions and measurement items are shown in Table 1. Data Collection In order to investigate digital transformation at the organizational level, this study collected data from enterprises that are undergoing digital transformation and have already made some achievements. The questionnaire was mainly distributed in electronic form and the survey respondents were middle and senior managers of enterprises. As of April 2021, a total of 160 valid questionnaires had been obtained for this study. The statistics of the respondents are shown in Table 2. Table 1. Operational definitions and measurement. Customer Orientation — operational definition: the extent to which the company has sufficient understanding of its target customers in order to continuously create superior value for them; measurement items: 1. Competitive advantage is based on understanding customers' needs. 2. Business objectives are driven primarily by customer satisfaction. 3. Frequently and systematically measure customer satisfaction. 4. Pay close attention to after-sales service for customer satisfaction. 5. Continuously try to discover our customers' additional needs of which they are unaware. Competitor Orientation — operational definition: the extent of competition in the company's industry. Test of the Measurement Model Using SmartPLS 3 to test reliability and validity, the results are shown in Table 3. Cronbach's α of all variables is greater than 0.8, the internal consistency of the measurement items is high, and the reliability test is passed. The factor loadings of all items exceed 0.7, the composite reliability (CR) of each variable is greater than 0.8, and the average variance extracted (AVE) is greater than 0.5, indicating that the questionnaire has high convergent validity [38]. The questionnaire is composed of mature scales. The two-way translation method and the industry expert evaluation method were used to ensure the validity of the questionnaire. Therefore, there was no ambiguity in the questionnaire distribution process [39]. Moreover, as shown in Table 4, in the comparison of first-order and second-order factor loadings, the second-order factor loadings are all larger than the first-order ones. Therefore, digital competence should be modeled as a second-order construct [40]. As shown in Table 5, all constructs have good discriminant validity, as the indicators' outer loadings on their own constructs were all higher than their cross-loadings with other constructs. Test of the Structural Model To evaluate the structural model of our theoretical framework, we examined construct collinearity, the coefficient of determination (R²), the significance of the path coefficients, and the direct and mediation effects [41]. The R² score for digital competence was 0.752, and for organizational performance it was 0.301. In addition, the tested model was examined for construct collinearity, and the results were excellent. All of the variance inflation factor (VIF) values were far below five, which further shows that multicollinearity is not an issue for our model and data [42]. The significance of the path coefficients was calculated using a bootstrapping algorithm with 5,000 subsamples for a two-tailed test [43]. The path coefficients and their significance can be seen in Figure 2 and Table 6.
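As a concrete illustration of the convergent validity criteria used above, composite reliability and AVE can be computed directly from standardized outer loadings: CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = Σλ² / k. The sketch below applies these standard formulas to a hypothetical set of loadings; it is not the SmartPLS output itself.

```python
import numpy as np

# Hypothetical standardized outer loadings for one construct (all > 0.7,
# matching the thresholds reported in Table 3).
loadings = np.array([0.82, 0.79, 0.85, 0.77, 0.81])

def composite_reliability(lam):
    """CR = (sum(lambda))^2 / [(sum(lambda))^2 + sum(1 - lambda^2)]"""
    s = lam.sum() ** 2
    return s / (s + (1 - lam ** 2).sum())

def ave(lam):
    """Average variance extracted: mean of the squared loadings."""
    return (lam ** 2).mean()

print(f"CR  = {composite_reliability(loadings):.3f}  (threshold > 0.8)")
print(f"AVE = {ave(loadings):.3f}  (threshold > 0.5)")
```

For these illustrative loadings, CR ≈ 0.90 and AVE ≈ 0.65, which would satisfy both cutoffs.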
Test of the Digital Competence Mediating Effects According to the results of the data analysis, competitor orientation has no significant impact on digital competence, so digital competence has no mediating effect between competitor orientation and organizational performance (COO→DC→OP). In this study, there are two mediating effects (CUO→DC→OP and TO→DC→OP); in order to verify whether digital competence has a mediating effect between strategic orientation and organizational performance, we tested these mediating effects. The results show that the VAF value of digital competence is 35.41% for CUO→DC→OP; that is, digital competence has a partial mediating effect between customer orientation and organizational performance. The results also show that the VAF value of digital competence is 35.43% for TO→DC→OP, so digital competence has a partial mediating effect between technology orientation and organizational performance (see Table 7). Additional Analysis In order to examine the relationships between strategic orientation, digital competence, and organizational performance, this study adds an analysis of the impacts among these variables. As shown in Figure 3, the results of the path analysis show that, among customer orientation, competitor orientation, and technology orientation, customer orientation has the greatest impact on digital management (path coefficient = 0.522, t value = 6.414) and a large impact on digital infrastructure (path coefficient = 0.499, t value = 7.750). Competitor orientation only has an impact on digital integration (path coefficient = 0.186, t value = 2.413). Technology orientation has the greatest impact on digital infrastructure (path coefficient = 0.432, t value = 6.667) and a large impact on digital integration (path coefficient = 0.369, t value = 4.270). Among the digital competence dimensions, only digital integration has a significant impact on organizational performance (path coefficient = 0.433, t value = 4.043). *** p < 0.001, ** p < 0.01, * p < 0.05, ns = not significant.
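For readers unfamiliar with the VAF statistic used in the mediation test above, it is the share of the total effect that flows through the mediator: VAF = (a·b) / (a·b + c'), with partial mediation commonly read as 20% ≤ VAF ≤ 80%. The sketch below illustrates the arithmetic; the individual path coefficients are hypothetical values chosen only to reproduce a VAF near the reported 35.41%, not the paper's actual estimates.

```python
def vaf(a, b, c_direct):
    """Variance accounted for: indirect effect over total effect,
    VAF = (a*b) / (a*b + c'), following the usual PLS-SEM convention."""
    indirect = a * b
    return indirect / (indirect + c_direct)

# Hypothetical coefficients for CUO -> DC (a), DC -> OP (b), and the
# direct path CUO -> OP (c'); only the resulting VAF mimics the paper.
a, b, c_direct = 0.509, 0.32, 0.297
v = vaf(a, b, c_direct)
if v > 0.8:
    label = "full mediation"
elif v >= 0.2:
    label = "partial mediation"
else:
    label = "no mediation"
print(f"VAF = {v:.2%} -> {label}")
```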
Implications This study explores the mechanism by which strategic orientation affects organizational performance through digital competence in the context of the digital transformation of enterprises. Through the empirical tests, the following three points of academic implications are obtained. First of all, the empirical analysis confirms that customer and technology orientations have positive impacts on digital competence, which is consistent with the results of previous studies. Vial [8] also emphasized the importance of digital strategic orientation and digital resource capabilities in the process of enterprise digitalization. Matt et al. [14] emphasized the importance of customer orientation and technology orientation in digital transformation strategies. Robert et al. [26] claimed that, in the face of digital challenges, the digital work that enterprises need to do includes formulating a digital strategy that suits them and allocating resources and competence in accordance with that strategy. However, different from the previous literature reviews of Vial [8], Matt et al. [14], and Robert et al. [26], this study uses the method of empirical analysis to verify the importance of customer and technology orientation strategies for enterprise resource allocation and competence development. Second, customer orientation (path = 0.509, t = 8.474) has a greater positive impact on digital competence than technology orientation (path = 0.370, t = 6.285). Therefore, customer orientation is more important for enterprises than technology orientation. Beckers et al. [17] claim that customer orientation means using digital terminals as the best carrier to integrate customers' key journeys, realize B2C end-to-end interaction, support customized and personalized products, accurately collect insights into customer needs, remove intermediary links, and improve operational efficiency and the customer experience. In addition, customer orientation is more important than technology orientation for digital competence. Compared with other previous research, such as Beckers et al. [17], this study provides clearer evidence through data analysis that enterprises with limited configurable resources should first focus on customer orientation, and then on technology orientation and other strategies based on their own conditions.
In other words, enterprises not only need to be customer-centric and accurately grasp customer needs, but can also develop and define their own customer demand scenarios through advanced digital technology. Third, the empirical analysis confirms that digital competence positively affects organizational performance, which is consistent with the findings of Ravichandran et al. [36]; that is, developing new digital resource competence by formulating a digital strategy and undertaking digital transformation is conducive to increasing organizational performance. Compared with other empirical research on the relationship between competence and performance, and combined with the resource-based view, this study explores the relationship between strategic orientation and organizational performance through digital competence in the digital environment and reveals the influencing factors of digital competence. Through digital transformation, enterprises not only realize the optimization of resources, such as demand, design, R&D, production, and marketing, but also, to a certain extent, realize the real-time, accurate, and efficient optimization pursued by Industry 4.0. Moreover, this study has the following practical implications. First of all, enterprises need to pay attention to strategic orientation. Hoffman et al. [18] claimed that customers not only guide the direction of the digital transformation of enterprises by actively sharing demand information but also promote the integration of the digital resources of enterprises by directly participating in enterprise service innovation activities. In other words, customers are not only an information source for the enterprise but are also value co-creators. Therefore, using all available resources to meet customer needs to the maximum is an economical and efficient resource use strategy. Second, enterprises also need to pay attention to the impact of new digital technology on existing resources. Riding the technological trend of the explosion of digital transformation technologies such as AI, big data, cloud computing, and the IoT, enterprises can realize the evolution of resources from local optimization to global optimization, the expansion of business synergy from within the enterprise to the industrial chain, the upgrading of the competition mode from single-enterprise competition to ecosystem competition, and the deepening of the industrial division of labor from product-based to knowledge-based division. Third, digital competence is the reshaping of corporate resources in a turbulent digital environment, which is consistent with the results of Yu et al. [32]. The goal is to narrow the gap between customer demand and technological innovation on one side and the enterprise's own capabilities on the other. Digital competence will ultimately be reflected in improved organizational performance. Compared with Internet enterprises, other enterprises need to pay more attention to the flexible use of digital infrastructure, the integration of digital resources, and the strengthening of digital management. To match the capability needs of the digital era, these objective realities have forced enterprises to pay more and more attention to the development of digital competence.
Limitations and Future Research

This study discusses the influence mechanism of digital strategy orientation on organizational performance through digital competence, but it still has the following four limitations, which need to be addressed in future research. First, the sample size is small; future work should analyze the relationships among digital strategy orientation, digital competence, and organizational performance with larger samples. Second, this study examined Chinese enterprises; whether the influencing factors of digital transformation are the same for enterprises in countries with different social and economic environments, and whether cross-cultural differences exist, remains to be examined in future studies. Third, this study explores the mechanism of digital strategy orientation on organizational performance through digital competence; future work should examine what digital dynamic capability is in the digital environment and how it relates to performance. Fourth, given the limited literature review and methodology, future research needs to strengthen both and answer the research questions more precisely.

Conclusions

This study aims to explore the mechanism of digital transformation and analyze the relationship between digital transformation and organizational performance. It constructs the dimensions of digital competence according to core competence theory; digital competence contains three hub factors: digital infrastructure, digital integration, and digital management. Through empirical analysis of enterprises that are undergoing digital transformation or have completed it, this study establishes the importance of digital competence and tests its positive impact on organizational performance. Finally, this study provides enterprises with a digital transformation methodology, which helps them focus on promoting the implementation of digital transformation projects.
Optimization of Traffic Detector Layout Based on Complex Network Theory

With the recent development of traffic networks, traffic detector layout has become very complicated, due to the complexity of traffic network structures and states. Thus, this paper presents an optimal method for traffic detector layout based on network centrality using complex network theory. It mainly depends on the topology of the traffic network, and does not depend on pre-conditions (e.g., OD (Origin Destination) traffic, path traffic, prior matrix, and so on) or consider route-choosing behavior too much. Considering the travel time, OD demand, observation demand of urban managers, dynamic characteristics of the traffic network, detector failure, and so on, an optimization model for traffic detector layout is established, called the Traffic Network Centrality Model (TNCM). Numerical experiments conducted on data from the Sioux Falls network demonstrate that the model has strong practical value. TNCM not only helps reduce the traffic detector layout cost, but also improves the monitoring revenue of the traffic network in complex scenarios, offering a promising way of thinking about the optimization of traffic detector layout schemes.

Introduction

With the rapid development of urban society, "urban diseases" (traffic congestion, accidents, environmental pollution, energy shortage, and so on) have become increasingly prominent and are considered a global urban problem. These "urban diseases" lead to negative social influence, economic loss, and environmental damage, and challenge the sustainable development of cities. At present, many countries have taken measures against these challenges, leading to the emergence of smart cities. A smart city utilizes advanced information technologies to solve these "urban diseases" and realize the intelligent management and operation of the city. This creates a better life for people and promotes the harmonious and sustainable development of the city. As transportation plays an important role in urban social and economic systems, the sustainable development of transport provides a basis for the sustainable development of a city. Therefore, the United States, Japan, Australia, Europe, and other parts of the world have advocated for sustainable transportation. As one key component of the Intelligent Transportation System, the traffic information collecting system provides real-time traffic data for the traffic network. Traffic detectors form the basis of the traffic information collecting system and play a key role in the efficient operation of an Intelligent Transportation System. Meanwhile, the accuracy and timeliness of traffic data are closely related to the traffic detector layout scheme. With the rapid development of intelligent transportation, the quality of traffic data has been unable to meet increasing traffic demands, so traffic information collection systems are facing a tremendous challenge. At present, the traffic detector is still one of the key means of acquiring traffic data; thus, investing in building more traffic detectors has been considered in many large- and medium-sized cities.

Optimization of Traffic Detector Layout

Scholars have undertaken numerous studies on the optimal layout of traffic detectors and have made many achievements.
These studies can be classified into statistical analysis, integer planning, dynamic programming, graph theory, artificial intelligence, simulation analysis, transportation planning, and so on, according to the technological research route used [1]. Hu et al. [2] established a model based on linear algebra under a stable traffic network, in order to find a minimal set of detector positions and then estimate the traffic flow in the network. Based on mixed integer linear programming, Danczyk et al. [3] developed a method for the optimization of traffic detector layout. A bi-level programming method was proposed to estimate the real-time OD (Origin Destination) matrix by Gómez et al. [4], based on fuzzy logic theory. Li et al. [5] simplified the traffic network optimization problem into a multi-section optimization problem by sectioning, and proposed a minimum investment model considering the spatial distribution characteristics of traffic information, the location uniformity of links, and the total cost. Based on graph theory and matrix theory, Zhang et al. [6] presented a traffic detector layout method to estimate the traffic flow in each road section. Bartin et al. [7] proposed a method based on road monitoring technology, which accurately estimates the travel time. Zheng et al. [8] developed a method for abnormal scenarios, using fuzzy clustering analysis, regression analysis, and similar methods. Zhang et al. [9] studied the optimal layout spacing of highway traffic detectors based on a deviation-threshold traffic event detection algorithm, and analyzed the trend of traffic parameters with VISSIM simulation software under conditions with and without an event. A multi-objective model was presented by Wang et al. [10], based on the flow correlation between road sections, in which diverse influencing factors were comprehensively considered. According to their different research purposes, the existing studies can be classified into OD estimation, travel time estimation, event detection, and traffic flow estimation. Yim et al. [11] evaluated different detector location selection methods aimed at OD estimation. According to the theory of maximum possible relative error, Yang et al. [12] proposed four rules for traffic detector location optimization using OD estimation, including an OD coverage rule, a maximum flow rule, a maximum interception flow rule, and a road independence rule. A two-stage model was presented for the purpose of OD estimation by Bianco et al. [13]. Chootinan et al. [14] developed a bi-objective location optimization model to estimate OD, and solved it using a distance-based genetic algorithm. Ehlert et al. [15] studied two kinds of extended models: one considered existing traffic detectors, while the other considered prior information on OD flow. Fei et al. [16] presented a model to maximize the efficiency of data collection and OD demand in a traffic network under the condition of minimizing the uncertainty of the OD demand matrix. Bertini et al. [17] focused on highway travel time estimation for display on roadside variable message signs and described a concept developed from first principles of traffic flow, in order to establish optimal sensor density. Mínguez et al. [18] proposed an optimal vehicle license plate recognition detector layout model based on given prior OD demands.
In 2010, Zhou et al. [19] established an optimal distribution model for single-point and point-to-point detectors to estimate the OD matrix based on information theory; and Xing and Zhou et al. [20] considered various sources of error in estimation and prediction. Ban et al. [21] discretized the two dimensions of time and space and proposed a dynamic programming model aimed at highway travel time estimation. Zhang et al. [22] took the minimum travel time estimation error as the optimization objective and studied the optimization of a highway traffic detector layout. Castillo et al. [23] calculated the OD matrix and road flow from traffic data obtained at observation points and solved seven kinds of observation point problems in a traffic network. A new placement configuration for departure detectors, named the mid-intersection detector, was proposed by Gholami [24], in which departure detectors can be activated by more than one movement at different times. In addition, due to the complexity and uncertainty of traffic situations [25], the uncertainty of a traffic detector layout should be considered. A multi-objective detector deployment model based on the minimization of demand uncertainty was proposed by Fei et al. [26], which considered the information acquisition and coverage of OD pairs. Li and Ouyang analyzed the reliability of optimal deployment of a traffic sensor network in 2010 in [27], presented a method to simultaneously estimate the travel time and the OD matrix in 2011 in [28], and established a reliable optimal deployment model for traffic detectors based on mixed integer programming in 2012 in [29], in which effective measures (including flow coverage, vehicle mile coverage, square error reduction, and so on) and alternative schemes under traffic detector failure were proposed. Zhu et al. [30] established a new two-stage stochastic model, considering uncertain detector failure factors. To address the problem of optimal traffic detector layout, scholars have carried out a series of studies in terms of the research object, research content, research purpose, research technical route, research angle, application scenario, and so on. The existing studies can be classified into highway and urban road contexts, according to the research context. Furthermore, they can be classified as considering detector number, detector location, detector density, data accuracy, and so on, according to the research object; or into OD estimation, travel time estimation, event detection, and traffic flow estimation, according to the research purpose. Finally, they can also be classified as using mathematical methods, computer methods, transportation planning methods, and so on, according to the research technology route used. Although the principles and methods in the literature vary radically, the common aim is to find the best traffic detector layout, in order to maximize the revenue and minimize the cost. From a literature review on the optimization of traffic detector layout, preconditions or assumptions have been essential for some studies; however, it is hard to collect enough data for analysis in a complex traffic network. The models presented in some studies are highly complex, making them of more theoretical than practical significance in actual engineering. It is undeniable that the above research has made a variety of breakthroughs; however, most studies have only focused on one aspect related to traffic detector layout (e.g., traffic parameters or OD).
Furthermore, there is little information about observation of the real-time state of a traffic network from a global perspective. Traffic networks may be complex, both statically and dynamically, but the existing research has focused mainly on static traffic network scenarios. For these reasons, based on complex network theory, the TNCM model is proposed in this paper, with the aim of observing a traffic network with applicability to multiple scenarios.

Network Centrality

Network centrality has been widely used in various fields (e.g., urban planning, urban geography, economic geography, and so on) in developed countries (e.g., the United States and the U.K.), in applications such as urban crime, network monitoring design, community planning, residential area planning, urban spatial structure analysis, and urban land use density, among others. As network centrality reflects the importance of the location of a node or link in the network, the results are helpful for determining the key points in a network. Network centrality plays an important role in complex network topology analysis. Linton et al. [31] introduced the concept and measurement standards of network centrality, which is the key to complex networks. Chang et al. [32] indicated that the centrality analysis of traffic networks is of great significance in transport planning and transportation construction. There are four kinds of network centrality: point centrality, eigenvector centrality, betweenness centrality, and closeness centrality. Point centrality is defined as the number of other nodes linked to a node. Under eigenvector centrality, each node's centrality is proportional to the sum of the centrality values of all nodes connected to it. Betweenness centrality includes point betweenness centrality and edge betweenness centrality: point betweenness centrality is the proportion of all shortest paths between node pairs in the network that pass through a given node, while edge betweenness centrality is the proportion of all shortest paths that pass through a given edge. Closeness centrality refers to the average distance of the shortest paths from a node to all other nodes. The above four network centralities have different emphases. Point centrality focuses on measuring the local importance of nodes, which is simple and of low computational complexity. Considering the quality and quantity of each node, eigenvector centrality can evaluate the relative importance of a node more objectively, and is also simple and of low computational complexity. As an important global geometric quantity, betweenness centrality can reflect the control of a node or link over the network flow in terms of congestion, and the influence of the corresponding node or edge on the whole network; it is relatively complex and of high computational complexity. Closeness centrality evaluates the degree to which a node disseminating information is independent of other nodes: the closer the node is to other nodes, the more independently it can disseminate information. As a non-core node has to pass through other nodes to spread information, it is easily subject to them. Therefore, a node is the center of the network if it has minimal distance to the other nodes.
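To make the four measures concrete, the following sketch computes each of them on a small toy network using the Python networkx library. The graph, its edges, and the use of an undirected graph are illustrative assumptions only; the paper itself models traffic networks as weighted directed graphs.

```python
# Illustrative sketch (assumed toy graph, not from the paper): computing the
# four kinds of network centrality discussed above with networkx.
import networkx as nx

# A small undirected toy network standing in for a road network.
G = nx.Graph([(1, 2), (2, 3), (3, 4), (2, 4), (4, 5), (1, 3)])

# Point (degree) centrality: number of nodes linked to a node, normalized.
degree = nx.degree_centrality(G)

# Eigenvector centrality: each node's score is proportional to the sum of
# the scores of its neighbors.
eigen = nx.eigenvector_centrality(G, max_iter=1000)

# Betweenness centrality: share of shortest paths passing through a node;
# edge_betweenness_centrality gives the analogous measure for edges.
between = nx.betweenness_centrality(G)
edge_between = nx.edge_betweenness_centrality(G)

# Closeness centrality: reciprocal of the average shortest-path distance
# from a node to all other nodes.
closeness = nx.closeness_centrality(G)

for name, scores in [("degree", degree), ("eigenvector", eigen),
                     ("betweenness", between), ("closeness", closeness)]:
    print(name, {n: round(v, 3) for n, v in scores.items()})
print("edge betweenness", {e: round(v, 3) for e, v in edge_between.items()})
```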
Considering the above analysis of network centrality, point centrality and eigenvector centrality can only analyze the local importance of nodes. This paper analyzes the network from an overall perspective and, thus, modifies the betweenness centrality to provide an optimal selection of links.

Materials and Methods

This paper focuses on traffic networks and proposes a centrality model to optimize the layout of traffic detectors in different scenarios. The purpose is to generate a traffic detector layout covering as many important links as possible. Traffic flow is not the only factor that influences a link's importance: many other factors, such as network topology, manager preference weight, road network status, and so on, can also play a role. In addition, the detector failure factor needs to be considered. So, the problem can be described as: under the given constraints, which links should be installed with traffic detectors so as to monitor the traffic network better and obtain more valuable traffic information, as measured by the importance of the covered paths. According to the centrality model, the first step is to analyze the network and determine link centrality and path centrality, followed by solving the optimization model using a genetic algorithm.

Description of the Traffic Network

A traffic network is composed of several links (roads) and nodes (intersections). In this paper, it is considered as a weighted directed graph. In general, a typical traffic network can be expressed as $G = (N, E, P)$, where $N$ is the set of nodes, $E$ is the set of links, and $P$ represents the set of efficient paths between all OD pairs, $P = \{p_1, p_2, \cdots, p_w\}$, with $w = |P|$ the number of OD pairs. The efficient paths defined here are all simple paths; that is, each node in a path's node sequence appears only once.

Centrality of Static Network

This section discusses the key concepts of links and paths for describing the static network used in TNCM.

Link Centrality of an Unweighted Network

Link centrality, based on betweenness centrality, is the numerical representation of how important a link is in a network. The link centrality of a link is the proportion of the shortest paths that pass through the link, compared to all shortest paths in the network:

$$C(e) = \frac{\sum_{s \neq t} \sigma(s, t \mid e)}{\sum_{s \neq t} \sigma(s, t)} \qquad (1)$$

where $\sigma(s, t)$ is the number of shortest paths between nodes $s$ and $t$, and $\sigma(s, t \mid e)$ is the number of shortest paths between $s$ and $t$ that pass through link $e$. The route choice of a travelling user is closely related to their travel time; thus, the link centrality of a link is positively related to its traffic flow. In particular, if the number of efficient paths between an OD pair is limited to 1 and the travel strategy is based on the shortest travel time, then the link centrality is the proportion of the link's traffic flow in the total traffic flow. According to the definition of link centrality, the higher the link centrality of a link, the greater its connectivity and influence in the traffic network. The probability of a travelling user selecting this link is also higher, making it more significant to observe. Under the same path or traffic coverage, covering important links and important paths is better. Take the Fishbone network as an example (as shown in Figure 1).
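As a worked illustration of the link centrality in Equation (1) as reconstructed above, the sketch below enumerates shortest paths with networkx and computes, for each link, the share of all shortest paths that traverse it. The toy directed graph is an assumption for demonstration, and the ratio-of-sums normalization follows the reconstructed reading of the definition.

```python
# Illustrative sketch (assumed toy network): link centrality as the share of
# all shortest paths in the network that pass through a given link.
import networkx as nx
from itertools import permutations

G = nx.DiGraph([(1, 2), (2, 3), (3, 4), (2, 4), (4, 5), (1, 3), (5, 1)])

def link_centrality(G):
    through = {e: 0 for e in G.edges}  # counts of shortest paths through e
    total = 0                          # total count of shortest paths
    for s, t in permutations(G.nodes, 2):
        if not nx.has_path(G, s, t):
            continue
        for path in nx.all_shortest_paths(G, s, t):
            total += 1
            for e in zip(path, path[1:]):  # links along this shortest path
                through[e] += 1
    return {e: n / total for e, n in through.items()}

for e, c in sorted(link_centrality(G).items(), key=lambda kv: -kv[1]):
    print(e, round(c, 3))
```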
Link Centrality of Weighted Network

The link centrality of an unweighted network quantifies the importance of the link from the perspective of the network topology. However, in an actual traffic network, a link's weight is also affected by many other factors, such as OD demand, design planning, and decision-maker's preferences. Take the OD demand as an example, which indicates the travel demand between an origin and a destination, represented as traffic flow. When the OD demand is known, the travel flow of users in each OD pair may differ and, consequently, the link's importance changes: a link with high flow should have a higher link centrality. Therefore, the weight of a link is introduced to correct the link centrality in a weighted network, in which the link centrality is determined by network topology, traffic flow, and other factors:

$$C_w(e) = g(e)\, C(e) \qquad (2)$$

where $g(e)$ is the gain coefficient of the link centrality of link $e$ in the weighted network. In particular, $g(e)$ is equal to 1 if all link weights are equal. In this case, the link centrality of the weighted network is equivalent to that of the unweighted network; that is, $C_w(e) = C(e)$. The weight of a link can be specified manually or calculated from the traffic flow. Taking the Fishbone network as an example, the travel time and capacity of each link are equal, and the specific parameters are shown in Table 1.

Table 1. Fishbone network parameters. Origins: [3, 5]; destinations: [9, 10]; OD demand (origin, destination, flow): (3, 9, 7500), (3, 10, 7300), (5, 9, 1000), (5, 10, 1200).

Based on the OD demand, a traffic network that prefers to use the "upper half" is constructed. As the network has a symmetrical structure, the importance of the upper half is higher than that of the lower half. In the results of traffic assignment, although the flow of link 14 is 1.0536 times that of link 6, the centrality of link 14 is higher than that of link 6, as it handles more traffic demand. Taking the traffic flow as the link weight, we can calculate the link centrality. The results are shown in Figure 2. The numbers in brackets after the link ID are the link centralities in the unweighted network and the weighted network, respectively. It can be seen that the link centrality in the upper half generally increases, while the link centrality in the lower half decreases or increases only slightly. The link centrality of a weighted network evaluates the importance of a link by comprehensively considering various factors, which maximizes the influence of the link when the network changes and gives good adaptability to change.

Path Centrality

In the monitoring of a traffic network, priority should be given to links with a large link centrality, in order to cover the most important links. However, there may be a strong correlation between links; if link centrality is the only priority strategy, resources will be wasted. A special observation network is shown in Figure 3. The number on each link represents the link centrality; nodes 1, 2, 3, 4, and 5 are origins and nodes 9 and 10 are destinations. If the number of traffic detectors is 3, then the layout solution with link centrality as the priority strategy is (6,7), (7,8), and (8,10). Obviously, (6,7) and (7,8) are duplicate arrangements. Therefore, it is necessary to consider the path, instead of the link, to reduce the strong correlation between links and maximize the overall observation revenue. Path centrality is used to represent the importance of a path, and detectors are deployed to cover the more important paths, eliminating the impact of link correlation.

Definition 3: Path centrality, $C(p)$. The path centrality represents the importance of a path, which is calculated as the weighted average of the centrality of all links on the path.
Here, the weight of each link is its link centrality, which reduces the attenuation of the influence of important links in the path:

$$C(p) = \frac{\sum_{e \in p} C(e) \cdot C(e)}{\sum_{e \in p} C(e)} \qquad (3)$$

In Figure 3, there are eight shortest paths between nodes 1, 2, 3, and 4 and nodes 9 and 10, for which $C(p)$ is 0.673. There are two paths from node 5 to nodes 9 and 10, for which $C(p)$ is 0.1. To cover the more important paths and more travel routes, links (6,7), (5,9), and (5,10) are selected for the detector layout. Obviously, this is an optimal solution under the current conditions.

Dynamic Network Centrality

From the above analysis, it is clear that the two key factors of link centrality are the link weight and the link travel time, which are both fixed values in a static network. However, in a real traffic network, changes in conditions, such as traffic jams, accidents, traffic control, road maintenance, and special weather, are inevitable. A typical dynamic network change is shown in Figure 4: the weight and the travel time of a link change with time. At time t = 2, traffic along link (2,4) is heavy and the travel time increases from T to 5T; meanwhile, the traffic flow demand along link (3,4) decreases, its weight drops to 0.5, and its travel time decreases to 0.9T, as the road is smooth. At time t = n, the congestion along link (2,4) is relieved, its travel time is reduced to 2T, and the travel time and weight of link (3,4) are restored to their values at t = 1. For such networks, a static representation would lose a lot of valuable information. Therefore, it is necessary to consider the influence of network changes on centrality.

Definition 4: Network sequence diagram. A dynamic traffic network is sampled over time, in order to obtain the corresponding static traffic network at different times. The static network at each sampling time is called a snapshot network. For example, the snapshot obtained at the first sampling time is $G_1$, the snapshot obtained at the second is $G_2$, and so on.

The link centrality of link $e$ under the network sequence diagram is a fitting result of the link centrality of $e$ at all times. Here, the weighted average method is adopted, which is defined as follows:

$$C_D(e) = \sum_{t} w_t(e)\, C_t(e)$$

where $C_t(e)$ represents the centrality of link $e$ in $G_t$ and $w_t(e)$ represents the sampling weight of link $e$ at time $t$. The greater the sampling weight, the greater the observation value and observation preference of the link at that time. Using $C_D(e)$ to describe the centrality of link $e$ in the sequence will produce a large error if the network changes a lot, as the sample distribution is too scattered to obtain a good fitting result. In this paper, the centrality distance of the traffic network is used to describe this change. The function $d(G_i, G_j)$ is defined on pairs of snapshot networks: the traffic network at any time is regarded as a multi-dimensional vector composed of all link centralities, and the network centrality distance between two traffic networks is the standard Euclidean distance between these vectors.

Detector Failure Scenario

In an actual traffic network, a detector may break down and affect the acquisition of traffic information. Therefore, the impact of detector failure should be taken into account when obtaining a detector layout. A detector may have many failure states; for simplification, this paper assumes that a detector has only two states: 0 for the normal state and 1 for the failure state. The failure probabilities of the detectors are independent and take values in [0,1].
It is assumed that the failure probability of each detector is a fixed value $p$. For a layout solution using $n$ detectors, there are $2^n$ failure scenarios in the scenario set $\xi$. Any failure scenario $\xi_k \in \xi$ is an $n$-dimensional vector, where the value of each dimension is the state of the corresponding deployed detector, 0 or 1. Let $m_k$ be the number of detectors in the failure state in scenario $\xi_k$; then, the probability of occurrence of $\xi_k$ is

$$P(\xi_k) = p^{m_k} (1 - p)^{n - m_k}.$$

The relationship between the solution $x$, the failure scenario $\xi_k$, and the traffic information state vector $y$ can be described as follows:

$$y_i = x_i (1 - f_i)$$

where $x_i$ represents whether there is a traffic detector at link $i$: $x_i = 1$ indicates that link $i$ is covered by a detector, and $x_i = 0$ indicates the opposite; $f_i$ represents the failure state of the detector at link $i$ when $\xi_k$ occurs: $f_i = 1$ indicates that the detector installed at link $i$ is in the failure state, $f_i = 0$ indicates that it works normally, and $f_i$ does not exist for links with no detector; and $y_i$ represents whether real-time traffic information can be obtained at link $i$: if $y_i = 1$, traffic information at link $i$ can be obtained, while $y_i = 0$ indicates the opposite.

Traffic Network Centrality Model (TNCM)

Based on the above discussion, and considering the dynamic characteristics of a traffic network and the detector failure scenario, this paper proposes an optimal traffic detector layout model based on network centrality, called TNCM (Traffic Network Centrality Model). The relevant definitions are as follows: $c$ is the path centrality vector, where $c_j$ represents the path centrality of path $j$; $z$ is the path cover vector, where $z_j = 1$ means path $j$ is covered by a detector, while $z_j = 0$ means that it is not; $N^*$ is related to the budget constraint and represents the number of detectors; and $s$ is the specified-link vector, where $s_i = 1$ means that link $i$ must have a detector, while $s_i = 0$ imposes no such constraint; the number of specified links must not exceed the budget constraint. The objective function and constraints of TNCM are as follows:

$$\max \; \sum_{\xi_k \in \xi} P(\xi_k) \sum_{j} c_j \, z_j(\xi_k) \qquad (8)$$

$$\text{s.t.} \quad x_i \in \{0, 1\}, \quad \forall i \qquad (9)$$

$$x_i \ge s_i, \quad \forall i \qquad (10)$$

$$\sum_{i} x_i = N^* \qquad (11)$$

Equation (8) is the objective function of the model, indicating the maximized expected value of the covered path centrality. For a layout scheme using $n$ detectors, there are $2^n$ failure scenarios in $\xi$; the total expected value of the covered path centrality is the sum of the expected values under all failure scenarios. Constraint (9) indicates that the number of detectors installed on any link satisfies the 0-1 variable constraint, constraint (10) indicates that detectors must be deployed at the specified links, and constraint (11) indicates that the number of detectors must equal the budget constraint. The TNCM is solved by a genetic algorithm, which directly operates on the structural object without requiring derivatives or function continuity, and has inherent implicit parallelism and good global optimization ability. This probabilistic optimization method can automatically acquire and guide the optimized search space without definite rules, adjusting the search direction adaptively. The detailed steps followed in this study using TNCM are shown in Figure 5. There are two main stages, listed as follows:

1. Stage 1: Network analysis. In the dynamic scenario, samples should be taken according to the requirements (P1-1) to construct a dynamic network diagram. Each sequence item is a static network. In particular, a static scenario is equivalent to a dynamic scenario where the number of items in the sequence is 1.
When calculating the link centrality for a static network, the UE (User Equilibrium) model (P1-3-1) performs the traffic assignment to generate the gain coefficient of each link, if the OD matrix is given. Finally, based on the observation weights, P1-4 fits a static network with link centrality to represent the dynamic network diagram. This static network is the key output of stage 1.

2. Stage 2: Optimization problem. This stage deals with a single-objective optimization problem (P2-1) whose inputs include a static network with link centrality, and whose constraints include specified links with a detector installed, the detector number limit, and detector failure probabilities. By solving this problem with the genetic algorithm (P2-2), the optimal layout solution is obtained.

In this paper, the OD matrix is optional. If the OD matrix is given, the classic user equilibrium model is used to obtain the link flows, which are used as the input for calculating link weights and validating model results. Otherwise, a default OD matrix is used, in which there is an OD demand between each pair of nodes and the weight of each link is set equal to 1.

Data and Results

In this paper, the traffic network of Sioux Falls, South Dakota, USA is used for a case study; the data were obtained from [33]. The network consists of 24 nodes and 76 links. Link travel times and capacities are known.

Analysis of Traffic Network

Assuming that the link weights were equal to 1, links 16 and 19 had the highest link centrality. The key links in the traffic network can be judged by the importance of the links. Taking the link centrality as the evaluation standard, the links are divided into 8 grades, L1-L8, as shown in Figure 6. It can be seen that links 16 and 19 were the most important links, while links 30 and 51 were the least important. The higher the level of a link, the more important the link's traffic information. Monitoring paths that pass through important links will yield the maximum observation benefit.

Case Study

The following content analyzes a static network case, a dynamic network case, and a detector failure case, respectively.

Case Study of Static Network

In the static network case, it was assumed that there was an OD demand between each pair of nodes, the weight of each link was equal to 1, the paths chosen by users between OD pairs were the three shortest paths, and the detector failure probability was p = 0. With the number of detectors set to 18, the detector layout solution is shown in Figure 7. The path centrality of the optimal solution was 70.613619 and the coverage rate was 85%. The links installed with detectors tended to be links with high link centrality. To compare the layout schemes under different detector number constraints, different numbers of detectors were evaluated: {8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24}. The layout solutions are shown in Table 2. As the number of detectors increased, the covered path centrality increased. The analysis of layout solutions under different detector count limits is shown in Figure 8 and described as follows.

• (a) Shortest path coverage: with an increase in the number of detectors, coverage of the efficient paths between OD pairs, that is, the shortest paths, increased gradually. The coverage rate of the third shortest paths was the highest and that of the first shortest paths was the lowest. In the traffic network, the path centrality of the first shortest path was the largest, as was the difficulty of covering it.
• (b) OD pair coverage: when the number of detectors increased from 8 to 15, the OD pair coverage increased rapidly, after which the growth flattened. When the number was 18 or more, the coverage rate was larger than 95%. For a static network with OD demand between all nodes, this coverage rate is effective for monitoring the traffic network.

• (c) Link coverage: when the number of detectors was 12 or more, the proportion of links with 60% coverage was maintained at 100%. From this point of view as well, the layout solution can effectively observe the traffic network.

• (d) Average revenue: as the number of detectors increased, the average revenue obtained by a single detector gradually decreased, but remained larger than the minimum revenue of the whole network. As shown in Figure 9, when the number of detectors increased to 22, the revenue obtained by adding detectors became small, and the detector layout reached a stable state.

Figure 10 shows the link-level distribution of links with detectors installed under the different detector number restrictions. When the number of detectors was small (e.g., 8 or 12), all L8 links were selected. When the number of detectors was large (e.g., 16 or 24), there was a certain probability that L8 and L7 links were not selected, being replaced instead by several links with a lower level. This shows that, when the number of detectors is large, correlation is more likely to exist between the selected links. TNCM tries to minimize the impact of correlation and increase the observation revenue of the whole network. The selection probability of each link level in all cases is shown in Table 3, which shows that the links where detectors were installed tended to be links with high link centrality.

Case Study of Dynamic Network

The case discussed in this section is the dynamic sequential network corresponding to the static network of the previous section. The traffic network was sampled over a day and simulated at different times, divided at two-hour intervals (times 0, 2, and so on). The differences in sampling at different times were reflected in the travel time, OD traffic volume, decision-maker preference weight, and observation weight, which are explained as follows:
• When T = 4, links 39, 71, and 76 needed to be observed, so the decision-maker preference weights of these links were increased.
• The observation weights of the sampled periods are given in Table 4. The morning peak and the evening peak are the key observation periods, so the observation weights of these two periods were larger.

According to the parameters given above, network sequence diagrams with uniform weights and with observation weights were constructed. The distances between the networks at different times and the fitted network are shown in Figure 11. It can be seen from the comparison that, relative to the uniform-weight case, the distance between the snapshots at times {6, 8, 18} and the fitted network was significantly reduced under the observation weights, as the observation weights at those times were increased. With the number of detectors limited to 10, the detector layout solution of the dynamic network is shown in Table 5. Comparing the dynamic network solution's performance on each snapshot with the optimal solution for that snapshot, the benefit loss of covered path centrality is shown in Figure 13. Overall, the layout of the dynamic network achieved good performance for all time periods. Therefore, it is effective to use a network sequence diagram to deal with the detector layout problem of a dynamic network.
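The dynamic-network treatment above can be sketched as follows: snapshot link-centrality vectors are combined into one fitted static vector by an observation-weighted average, and the centrality distance of each snapshot to the fit is the Euclidean distance between the vectors. All numbers below are made-up assumptions, not the Sioux Falls data.

```python
# Illustrative sketch (assumed values): fitting a network sequence diagram to
# a single static centrality vector and measuring centrality distances.
import numpy as np

# Rows: snapshot networks at sampled times; columns: link centralities.
snapshots = np.array([
    [0.40, 0.25, 0.10, 0.25],  # t = 0
    [0.55, 0.15, 0.10, 0.20],  # t = 2 (assumed congestion shifts centrality)
    [0.42, 0.24, 0.11, 0.23],  # t = 4
])

# Observation weights per sampled time (e.g., peak periods weighted higher),
# normalized to sum to 1.
w = np.array([0.2, 0.5, 0.3])

# Weighted-average fit: one static centrality vector for the whole sequence.
fitted = w @ snapshots

# Centrality distance: Euclidean distance between each snapshot's
# centrality vector and the fitted vector.
dist = np.linalg.norm(snapshots - fitted, axis=1)

print("fitted centralities:", np.round(fitted, 3))
print("snapshot-to-fit distances:", np.round(dist, 3))
```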
Case Study of Detector Failure

The detector layouts for the static and dynamic network cases were found under the assumption that the detector failure probability was 0. In fact, the revenue of the whole observation network is affected by detector breakdowns during operation. As the detector layout for the dynamic network is eventually transformed into the layout of its fitted static network, this section analyzes the impact of detector failure on the observation network revenue and the layout solution in a static network. As Figure 14 shows, for any number of detectors, the higher the probability of detector failure, the lower the covered path centrality; that is, the lower the revenue of the observation network. With the number of detectors limited to 11, the layout solutions in two failure probability scenarios (p = 0 and p = 0.25) are shown in Table 6. The optimal solutions differed, with only eight links selected in common. Therefore, the detector failure probability affects the selected links, as well as the revenue of the observation network.

Discussion

With the continuous development of traffic networks, traffic information is of great significance for both long-term planning and short-term prediction. The effective acquisition of traffic information forms the basis of traffic management and control. Due to the complexity and changeability of a traffic network, the traffic demands and road conditions are in a state of dynamic change. Consequently, monitoring also has biased demands, which makes traffic detector layout design a very complex problem. Traditional methods, such as traffic flow estimation, travel time estimation, and so on, have achieved good results; however, they cannot represent the real-time state of a traffic network from a global perspective. Therefore, TNCM was proposed in this paper, which is based on the traffic network topology and considers the travel times of links, OD traffic demand, the observation demand of urban managers, the dynamic characteristics of the traffic network, detector failure, and other factors. With the path centrality covered by the detectors as the optimization goal, it is expected to monitor the more important links in the network and solve the complex layout problem of traffic detectors. The research results of this paper are as follows:

1. Based on complex network theory, traffic networks are abstracted as directed weighted networks. Link centrality is introduced to describe the importance of links, and path centrality is introduced to describe the importance of an effective path in traffic flow and to effectively reduce the impact of correlation between links. The link centrality in the weighted network comprehensively considers network topology, preference weight, traffic flow, and other factors, describing the traffic network more comprehensively.

2. The network sequence diagram is used to describe the dynamic changes in a traffic network. Considering the network over a period of time truly reflects the traffic status at each sampled moment. According to the observation weights, the dynamic network is fitted to a static network, and the detector layout is based on the fitted network. The centrality distance can effectively indicate the difference between the networks at different times, and adjusting the observation weights can dynamically adjust the similarity between the traffic networks at different times and the fitted network.
3. In a static network, when the OD demand is known, the user equilibrium model is used to allocate traffic flow to links and paths. In this paper, the gain coefficient is used to represent the influences of traffic flow, the manager's preferences, the traffic state, and other factors on link centrality. In addition, detector failure is considered, in order to optimize the layout solution and minimize the influence of failure on the detector layout.

We used the Sioux Falls network as an example to analyze the traffic detector layout solution under three different scenarios, namely a static network, a dynamic network, and detector failure, following which the TNCM was solved by a genetic algorithm. Compared with the correlation method [33], more practical constraints are incorporated. Compared with the two-stage traffic detector stochastic placement model [30], TNCM does not rely on OD. Without prior knowledge, the model can cover the common constraints in the static network (e.g., cost constraints, specified links, detector failure, and so on) and effectively process the dynamic scenario. In the case of a static network, under different limits on the number of detectors, the layout solution of TNCM performed well on path coverage, OD coverage, link coverage, and link revenue, which demonstrates the effectiveness of the solution. The link-level distribution of selected links indicates that the more important links are, the more likely they are to be selected for detector installation. After a certain number of detectors is reached, the model may select some sub-important links, instead of the more important links, to eliminate the correlation between links. In the case of a dynamic network, a network sequence diagram was constructed for 12 time periods of a day, according to the real traffic conditions. The observation preference at different times is reflected in the observation weight parameter. The network sequence diagram effectively describes the real traffic changes. Comparison of the final layout solution with the optimal solution at each time showed that the revenue loss caused by dynamic change was relatively small, with a maximum loss of no more than 5.8%, indicating a reasonable layout solution.

Conclusions

Based on complex network theory, TNCM was proposed in this paper, which utilizes the idea of network centrality. TNCM proved effective and operable in the case of the Sioux Falls network, and it is suitable not only for small and medium networks, but also for large ones. Therefore, TNCM can be widely used in practical applications. It mainly depends on the traffic network topology and is independent of pre-conditions such as OD traffic, a prior matrix, and so on. It was demonstrated that TNCM can be effectively applied to actual road networks, considering cost, road sections, detector failure, and so on, while paying less attention to route choice behavior. It provides an effective solution for traffic detector layout under the demands of complex traffic network monitoring, and contributes to finding the minimum number of detectors needed to achieve a given coverage of path centrality, thus reducing the traffic detector layout cost. The aim of this paper was to monitor a traffic network effectively through the traffic detector layout. An innovative method was put forward, which offers a new idea for the optimization of traffic detector layout.
It not only provides a valuable reference for practical engineering, but also provides a scientific decision-making basis for traffic managers. Furthermore, it offers practical guidance for the construction of intelligent transportation systems and smart cities, and promotes the sustainable development of cities. Based on the TNCM, a traffic detector layout solution for a typical road network, composed of one expressway and several major roads in Hefei, in China's Anhui Province, has been provided to the relevant departments. This study also has several limitations: (1) under detector failure, installing multiple detectors on a single link can raise the monitoring availability of the link; however, in TNCM, each link can have only one detector at a time, which may prevent us from finding the optimal solution; and (2) there are other valuable layout objectives, such as maximizing the coverage of links of interest, minimizing the number of detectors, and so on, while the single objective considered by TNCM is maximizing the covered path centrality, which limits the applicable scenarios. Future research may proceed in a few directions to address these deficiencies: multiple detectors could be installed on the same link, and the single-objective optimization model could be extended to a multi-objective optimization model to broaden the applicable scenarios. Furthermore, the gain factor describes the influence of different factors on the link centrality; how to reasonably introduce multiple factors into TNCM should therefore be investigated.
Neutrophil-to-lymphocyte ratio as a potential biomarker in predicting influenza susceptibility

Background: Human populations exposed to influenza viruses exhibit wide variation in susceptibility. The neutrophil-to-lymphocyte ratio (NLR) has been examined as a marker of systemic inflammation. We sought to investigate the relationship between influenza susceptibility and the NLR taken before influenza virus infection.

Methods: We investigated blood samples from five independent influenza challenge cohorts prior to influenza inoculation at the cellular level by using digital cytometry. We used multi-cohort gene expression analysis to compare the NLR between the symptomatic infected (SI) and asymptomatic uninfected (AU) subjects. We then used a network analysis approach to identify host factors associated with NLR and influenza susceptibility.

Results: The baseline NLR was significantly higher in the SI group in both discovery and validation cohorts. The NLR achieved an AUC of 0.724 on the H3N2 data, and 0.736 on the H1N1 data, in predicting influenza susceptibility. We identified four key modules that were not only significantly correlated with the baseline NLR, but also differentially expressed between the SI and AU groups. Genes within these four modules were enriched in pathways involved in B cell-mediated immune responses, cellular metabolism, cell cycle, and signal transduction, respectively.

Conclusions: This study identified the NLR as a potential biomarker for predicting disease susceptibility to symptomatic influenza. An elevated NLR was detected in susceptible hosts, who may have defects in B cell-mediated immunity or impaired function in cellular metabolism, cell cycle, or signal transduction. Our work can serve as a comparative model to provide insights into COVID-19 susceptibility.

Introduction

Influenza viruses are highly contagious human respiratory pathogens that cause recurrent epidemics and occasional global pandemics. Seasonal influenza vaccines are traditionally trivalent and include components of influenza A viruses of the H1N1 and H3N2 subtypes and an influenza B virus.
The pandemic influenza A(H1N1)pdm09 virus gave rise to the first influenza pandemic of the twenty-first century. The H3N2 subtype has been the most frequently occurring seasonal influenza since 1968, posing a significant threat to public health. Human populations exposed to influenza viruses exhibit wide variation in susceptibility (Clohisey and Baillie, 2019). Earlier studies demonstrated that host factors, such as age, pregnancy, obesity, cardiovascular disease, and host genetics (Horby et al., 2012, 2013; Mertz et al., 2013; Patarčić et al., 2015), play a critical role in susceptibility to influenza viruses. In addition, several host factors of pre-existing immune cell composition in blood have been reported to associate with influenza susceptibility. The proportions of pre-existing CD4+ T cells recognizing nucleoprotein and matrix were inversely associated with total symptom scores and virus shedding of H3N2 (Wilkinson et al., 2012). Subjects with a higher proportion of pre-existing CD8+ T cells recognizing conserved viral epitopes developed less severe illness after A(H1N1)pdm09 infection (Sridhar et al., 2013). Moreover, the proportion of KLRD1-expressing natural killer cells at baseline (i.e., prior to exposure to influenza) was lower in symptomatic shedders compared to asymptomatic non-shedders inoculated with H3N2 or H1N1 influenza (Bongen et al., 2018). During a local infection, the recruitment of various leukocyte populations to the infection site is a critical early component of inflammatory responses (Luster et al., 2005; Leick et al., 2014). Neutrophils are the most abundant leukocytes in the circulation and the first to be recruited to the site of infection, where they enhance local innate responses (Rosales, 2018). The innate immune system not only responds quickly to invasion by an infectious agent but also plays an essential role in activating adaptive immune responses (Clark and Kupper, 2005; Mantovani et al., 2011). While innate cells at the infection site (resident innate cells and newly recruited neutrophils) generate antimicrobial and pro-inflammatory responses that slow down the infection, they also initiate steps to deliver the pathogens to lymphoid tissues, where lymphocytes (T and B cells) can recognize them and generate adaptive immune responses (Schmolke and García-Sastre, 2010; Hufford et al., 2012; Chen et al., 2018). Innate immune and inflammatory responses play critical roles in eliminating infections but can also be harmful when not adequately controlled. Overproduction of various normally beneficial mediators and uncontrolled local or systemic responses can cause illness and even death (Chen and Nuñez, 2010; Brandes et al., 2013). Prior studies have revealed that the host's inflammatory responses are likely to influence both the likelihood of influenza virus infection and disease severity (Hayden et al., 1998; Julkunen et al., 2000; Price et al., 2015). When baseline levels of systemic inflammation are increased, the host may be excessively susceptible to influenza virus infection (Clohisey and Baillie, 2019). The neutrophil-to-lymphocyte ratio (NLR), the ratio of the absolute neutrophil and lymphocyte counts, can be measured during routine hematology and is a simple and reliable way to evaluate the extent of systemic inflammation (Zahorec, 2001).
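As a concrete illustration of this definition, the short sketch below computes the NLR from absolute counts as they might appear in a routine complete blood count; the numeric values are invented for demonstration.

```python
# Illustrative sketch (invented example values): the NLR is the ratio of the
# absolute neutrophil count to the absolute lymphocyte count.
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio from absolute counts (e.g., 10^9 cells/L)."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

# Example: absolute counts of 4.2 and 2.1 (x 10^9 cells/L) give an NLR of 2.0.
print(nlr(4.2, 2.1))
```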
The NLR has been examined as a prospective marker for estimating systemic inflammation and clinical outcomes in cancer patients (Templeton et al., 2014; Faria et al., 2016; Howard et al., 2019). A few studies have also reported roles of the NLR in influenza virus infection. For patients infected with avian influenza virus H7N9, the NLR taken within 24 h after admission was found to be independently associated with fatality (Zhang et al., 2019). Moreover, the NLR can be used to predict swine influenza virus infection among patients presenting with influenza-like symptoms while awaiting throat swab culture and virus isolation reports (Indavarapu and Akinapelli, 2011). In patients with influenza virus infection, excessive neutrophil activation was found to predict fatal outcomes, and neutrophil-related host factors were associated with severe disease (Tang et al., 2019). Previous studies have also observed a decline in lymphocyte counts in patients infected with influenza virus (Shen et al., 2014). Both the host's inflammatory responses and immune responses play essential roles in the likelihood of influenza virus infection and disease severity. The NLR, which couples two interconnected arms of the immune system, innate immunity and adaptive immunity, is an emerging biomarker of the relationship between the immune system and disease. However, the relationship between influenza susceptibility and baseline levels of the NLR has not been systematically investigated so far. Digital cytometry, which quantifies cell type composition in a sample using computational methods, allows interpretation of heterogeneous bulk blood or solid tissue transcriptomes at the cellular level. CIBERSORTx is a widely used computational method to deconvolve cell type composition and proportions (Newman et al., 2019). Existing cell type deconvolution methods normally require a signature matrix, which is a collection of cell type-specific gene expression profiles. The signature matrix has been shown to determine the accuracy of deconvolution (Vallania et al., 2018). Thus, we combined CIBERSORTx with three well-defined immune signature matrices (Newman et al., 2015; Vallania et al., 2018; Monaco et al., 2019), respectively, to obtain reliable estimates of the NLR. In this study, we performed a systematic investigation of the relationship between influenza susceptibility and the baseline NLR. We utilized digital cytometry to investigate the heterogeneity of blood immune cell populations prior to infection. Using multi-cohort gene expression analysis, we found that the baseline NLR was significantly higher in the symptomatic infected group compared to the asymptomatic uninfected group. We then used a network analysis approach to identify host factors statistically significantly associated with the baseline NLR, and to detect several key biological pathways that may contribute to disease susceptibility to symptomatic influenza.

Description of experimental human influenza challenge cohorts

We collected 5 human influenza virus challenge cohorts from the NCBI Gene Expression Omnibus (GEO) database. For each influenza challenge cohort, healthy adults (aged 18-45) were inoculated with A/Wisconsin/67/2005 (H3N2) or A/Brisbane/59/2007 (H1N1) influenza, and genome-wide gene expression profiles from peripheral blood collected prior to influenza challenge and over the subsequent 2-7 days were assessed. All volunteers were selected based on low pre-existing immunity to the challenge virus.
Subjects were classified as symptomatic or asymptomatic based on a modified Jackson score calculated from self-reported daily symptoms (Jackson et al., 1958). The infected and uninfected classifications were determined by viral titers from nasopharyngeal washes using virus quantitative culture or virus quantitative PCR (Liu et al., 2016). We only considered samples from subjects whose viral titer and symptom status agreed, i.e., those who were either asymptomatic and uninfected (AU) or symptomatic and infected (SI). The GSE73072 data set was profiled using Affymetrix microarrays, and it included four challenge studies, referred to as DEE2 (H3N2), DEE5 (H3N2), DEE3 (H1N1), and DEE4 (H1N1). We utilized all samples taken before inoculation as baseline samples. The baseline samples for the DEE2 cohort were taken at −23 h post-inoculation (hpi) or immediately prior to inoculation (0 hpi), and those for the DEE5, DEE3, and DEE4 cohorts were taken at −30, −21 or 0, and −21 or 0 hpi, respectively. The baseline samples of the DEE2 (H3N2) and DEE5 (H3N2) challenge studies were used as discovery cohorts, and those of the DEE3 (H1N1) and DEE4 (H1N1) studies were used as validation cohorts. In addition, we utilized baseline samples of the GSE61754 cohort, profiled on Illumina microarrays, as a cross-platform validation cohort. Table 1 summarizes the infection data for these influenza challenge cohorts.

Dissecting immune cell composition from whole blood samples

To explore the relationship across all samples prior to inoculation, we performed clustering analyses on the batch-corrected profiles in the GSE73072 H3N2 and H1N1 cohorts, respectively (Figures 1A,B). We conducted hierarchical clustering on samples based on similarities in the expression of the top 5,000 genes with the highest variance. This preliminary examination indicated that expression profiles differed between the SI and AU groups prior to inoculation (Figures 1C,D). We next performed digital cytometry to investigate the heterogeneity of blood immune cell populations prior to inoculation (Figure 2A). To accurately dissect the immune cell composition of whole blood samples from healthy subjects inoculated with live H1N1 or H3N2 influenza, we applied CIBERSORTx to the expression profiles of whole blood samples using a well-defined signature matrix named sigmatrixMicro from a previous study (Monaco et al., 2019; see Section 4 for details). Among the 11 immune cell populations dissected from whole blood samples, we observed that neutrophils were consistently dominant across the five challenge cohorts, followed by T cells, monocytes, B cells, and NK cells (Figure 2B).

Variation in baseline NLR

Neutrophils and lymphocytes are the two most common leukocytes in the blood. We therefore compared the estimated lymphocyte and neutrophil proportions between the SI and AU groups in baseline samples. We found that proportions of lymphocytes were significantly lower (p < 0.01; Figure 3A) whereas proportions of neutrophils were significantly higher (p < 0.05; Figure 3A) at baseline in the SI group compared to the AU group in the GSE73072 cohort for H3N2 influenza. We also observed significantly lower lymphocyte (p < 0.05; Figure 3B) but higher neutrophil proportions (p < 0.01; Figure 3B) at baseline in the SI group in the GSE73072 cohort inoculated with H1N1 influenza. For the GSE61754 cohort, we observed the same trend, though the differences were not statistically significant (p ≥ 0.05; Figure 3C).
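The two-group baseline comparisons reported here can be sketched as follows: compute each subject's NLR from the deconvolved neutrophil and lymphocyte fractions, test the SI-versus-AU difference with a rank test, and summarize the effect size with Hedges' g (the effect measure used in the meta-analysis below). All fractions are invented for illustration, not the study's data.

```python
# Illustrative sketch (made-up cell fractions, not the GSE73072 data):
# comparing baseline NLR between SI and AU subjects.
import numpy as np
from scipy.stats import mannwhitneyu

def hedges_g(a, b):
    """Standardized mean difference with the small-sample correction factor."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    return d * (1 - 3 / (4 * (na + nb) - 9))  # Hedges' correction

# Assumed deconvolved fractions per subject: (neutrophil, lymphocyte).
si = np.array([(0.62, 0.22), (0.58, 0.25), (0.65, 0.20), (0.60, 0.24)])
au = np.array([(0.52, 0.30), (0.55, 0.28), (0.50, 0.33), (0.53, 0.31)])

nlr_si = si[:, 0] / si[:, 1]
nlr_au = au[:, 0] / au[:, 1]

stat, p = mannwhitneyu(nlr_si, nlr_au, alternative="two-sided")
print(f"Hedges' g = {hedges_g(nlr_si, nlr_au):.2f}, Mann-Whitney p = {p:.3f}")
```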
The neutrophil-to-lymphocyte ratio (NLR), defined as the ratio of the estimated neutrophil and lymphocyte proportions, was then assessed. We found that the SI group had a significantly higher baseline NLR than the AU group (p = 0.004; Figure 4A) in the discovery cohort. A higher baseline NLR in the SI group was also validated in the GSE73072 (H1N1) cohort (p = 0.0065; Figure 4B) and observed in the GSE61754 cohort (p = 0.28; Figure 4C); although the difference in the GSE61754 cohort was not statistically significant, the baseline NLR was again higher in the SI group. In the GSE61754 cohort, 7 of 15 volunteers were vaccinated with a novel influenza vaccine, MVA-NP+M1, 30 days prior to influenza challenge (Davenport et al., 2015). MVA-NP+M1 was designed to boost cross-reactive T-cell responses to antigens conserved across all subtypes (Lillie et al., 2012). Correspondingly, a markedly elevated memory T cell proportion was detected in the GSE61754 cohort (Figure 2B). Two of the seven vaccinees developed laboratory-confirmed influenza (symptomatic infection) after challenge. We analyzed the baseline NLR between the SI and AU groups with pre-existing elevated memory T cells. The baseline NLR was still higher in the SI group, but did not reach statistical significance because of the small sample size (p = 0.19; Figure 4D). We further performed a multi-cohort meta-analysis to evaluate the differences in baseline NLR between the SI and AU groups. A forest plot of estimated differences across all five challenge cohorts indicated that the baseline NLR was significantly higher in the SI group (Hedges' g = 0.96, 95% CI = 0.54-1.38, p < 0.0001; Figure 5A). Robust in silico quantification of immune cell populations from peripheral blood requires a signature matrix and a deconvolution method, and the deconvolution accuracy is largely determined by the signature matrix rather than the deconvolution method (Vallania et al., 2018). Therefore, two additional signature matrices, immunoStates and LM22, were tested. Differences in estimated baseline NLR were also validated by combining CIBERSORTx with immunoStates and LM22, respectively. We observed that the baseline NLR was still significantly higher in the SI group using immunoStates (Hedges' g = 0.83, 95% CI = 0.42-1.24, p < 0.0001; Figure 5B) or LM22 (Hedges' g = 0.86, 95% CI = 0.32-1.40, p = 0.002; Figure 5C) as the signature matrix.

NLR increases in peripheral blood after influenza virus infection

To examine the concordance between the proportions of neutrophils/lymphocytes measured through a standard laboratory workup and the deconvolution estimates, we correlated the laboratory measurements with the deconvolution estimates in the SI group of the influenza H3N2 cohort. We collected the laboratory measurements of neutrophil/lymphocyte proportions from Huang et al. (2011), which involved the same volunteers included in the GSE73072 H3N2 cohort. The laboratory measurements of neutrophil/lymphocyte proportions were obtained daily from day 1 to 7, including baseline (prior to inoculation). In the SI group of the GSE73072 H3N2 cohort, the mean estimated proportions of neutrophils/lymphocytes were strongly positively correlated with the laboratory measurements (Neutrophil: R = 0.93, p = 0.00079, Figure 7A; Lymphocyte: Figure 7B). These data further validate that the deconvolution approach can correctly estimate the proportions of neutrophils and lymphocytes, as well as the NLR. We further investigated the temporal alterations of the neutrophil/lymphocyte proportions induced by influenza virus infection.
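As a brief aside before the temporal results: the standardized effect sizes reported in the forest plots above can be computed as sketched below. This is a minimal illustration of Hedges' g with its usual small-sample correction and approximate 95% CI, run on synthetic values rather than the study data; it is not the metagen implementation used for the actual meta-analysis.

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g for two groups (e.g., baseline NLR in SI vs. AU),
    with an approximate 95% confidence interval."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    s_pooled = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1))
                       / (n1 + n2 - 2))
    d = (x.mean() - y.mean()) / s_pooled       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)            # small-sample correction factor
    g = j * d
    se = np.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Illustrative, synthetic NLR values (not the study data).
rng = np.random.default_rng(1)
si = rng.lognormal(mean=0.4, sigma=0.3, size=30)
au = rng.lognormal(mean=0.1, sigma=0.3, size=30)
g, ci = hedges_g(si, au)
print(f"Hedges' g = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```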
In the GSE73072 H3N2 cohort, we observed that subjects in the AU group demonstrated no significant changes in the neutrophil/lymphocyte proportions at any time post-inoculation (Figures 8A,B). Subjects in the SI group, however, underwent a slight drop in the neutrophil proportion by 12 hpi in the early stage and then a significant rise by day 2 post-inoculation, while experiencing a concomitant rise and fall in the lymphocyte proportion (Figures 8A,B). The temporal changes in the neutrophil/lymphocyte proportions estimated using digital cytometry were consistent with the changes detected using white blood cell counts in laboratory measurements (Figures 7A,B) (Douglas et al., 1966; Huang et al., 2011; McClain et al., 2013). Our findings parallel the observation of a relative lymphopenia/neutrophilia in influenza virus infection, caused by a leukocyte redistribution between blood, lymph nodes, and tissues. This redistribution is usually transient, and profound changes appear on day 2 post-infection (Music et al., 2016). Regarding the temporal alterations of the NLR induced by influenza virus infection, in both the discovery (Figure 8C) and validation cohorts (Figures 8D,E) we observed that subjects in the AU group demonstrated no significant changes at any time post-inoculation, while those in the SI group underwent a significant rise by days 2-3 post-inoculation, which then gradually returned to baseline by day 7 post-inoculation as symptoms resolved.

In the GSE111368 data set, samples of 94 adult patients hospitalized with A(H1N1)pdm09 influenza virus infection were collected at three time points, T1 (recruitment), T2 (∼2 days after T1), and T3 (at least 4 weeks after T1), covering the periods of influenza illness and clinical recovery. We observed that infected patients developed a significant increase in the NLR (Figure 9A) and neutrophil proportion (Figure 9B) compared to healthy control subjects (HC) during the period of influenza illness (T1 and T2), whereas there was no significant difference in the NLR compared to HC once the patients had clinically recovered (T3). An opposite alteration in lymphocyte proportion was detected (Figure 9C). The temporal changes in the neutrophil/lymphocyte proportions and the NLR among patients hospitalized with influenza were consistent with the changes detected in the influenza challenge cohorts.

Network analysis identified four disease modules associated with baseline NLR

To investigate disease modules associated with the baseline NLR, we performed gene co-expression network analysis in the discovery cohort using WGCNA (Zhang and Horvath, 2005; Figure 10). WGCNA constructs a network based on the pairwise correlations between gene expression profiles. It has been demonstrated that batch effects can lead to false correlations between gene expression profiles, thereby introducing false edge connections or losing true edge connections (Parsana et al., 2019). Principal component analysis (PCA) revealed that both the discovery and validation cohorts included two sample batches (Figures 1A,B); therefore, we regressed out the batch effects using linear models and constructed co-expression networks from the batch-corrected profiles (see Section 4 for details). The WGCNA method finds clusters (modules) of genes with highly correlated expression profiles and interconnectivity across samples. Using WGCNA, we built a gene dendrogram using the topological overlap measure (TOM) as a proximity measure.
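The adjacency and TOM computations underlying this dendrogram can be sketched as follows. This is a minimal NumPy illustration of an unsigned WGCNA-style network (using the soft-threshold power β = 10 reported later in the Methods), not the WGCNA R implementation itself.

```python
import numpy as np

def tom_dissimilarity(expr, beta=10):
    """expr: (n_samples, n_genes) batch-corrected expression matrix.
    Returns the 1 - TOM dissimilarity used to build the gene dendrogram."""
    corr = np.corrcoef(expr, rowvar=False)   # gene-gene Pearson correlations
    adj = np.abs(corr) ** beta               # unsigned soft-thresholded adjacency
    np.fill_diagonal(adj, 0)
    k = adj.sum(axis=1)                      # connectivity of each gene
    shared = adj @ adj                       # summed shared-neighbor weights
    min_k = np.minimum.outer(k, k)
    tom = (shared + adj) / (min_k + 1 - adj)
    np.fill_diagonal(tom, 1)
    return 1 - tom                           # feed to hierarchical clustering

# Usage: average-linkage clustering on the dissimilarity, e.g.:
#   from scipy.cluster.hierarchy import linkage
#   from scipy.spatial.distance import squareform
#   tree = linkage(squareform(tom_dissimilarity(expr)), method="average")
```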
We identified modules using the dynamic tree cut approach, and closely related modules whose module eigengene correlations were larger than 0.75 were merged. We then detected 22 distinct gene modules from the dendrogram (Figure 11A). To identify modules associated with the NLR, we calculated Pearson's correlation coefficient between the module eigengenes and the NLR. Of these 22 identified modules, 12 were statistically significant (p < 0.05) when correlated with the NLR for H3N2 (Figure 11B). In the GSE73072 cohort for H1N1, we observed the same direction of correlation in 10 of these 12 modules (Figure 11B). Furthermore, significant differential expression between the SI and AU groups was detected in 4 of these 12 modules (Figure 11C). Three modules (steelblue, darkslateblue, salmon) were negatively correlated with the NLR and significantly down-regulated in the SI group, while the blue module was positively correlated with the NLR and significantly up-regulated in the SI group (Figure 11C). All four modules also showed significant differences in the validation cohort (GSE73072 H1N1; Figure 11D). Hence, these four modules were identified as significant disease modules associated with the NLR: module steelblue with 70 genes, module darkslateblue with 1,296 genes, module salmon with 1,526 genes, and module blue with 2,388 genes in total.

Functional enrichment analysis in four disease modules associated with baseline NLR

To investigate the biological functions of the four disease modules, the enrichment of Gene Ontology (GO) Biological Process and Reactome ontologies in each module was analyzed, and the top terms of each category are shown in Figures 12A,B. The GO enrichment results revealed that the steelblue module was significantly enriched in B cell activation, proliferation and differentiation, and humoral immune response. The Reactome enrichment results revealed that this module was mainly enriched in B cell activation and B cell receptor signaling pathways. Both GO and Reactome enriched terms indicated that genes in the steelblue module play critical roles in B cell-mediated immune responses. For the darkslateblue module, the genes were significantly enriched in GO terms of RNA catabolic and metabolic processes, RNA processing, translational initiation, viral gene expression and transcription, and cellular respiration, and were significantly enriched in Reactome terms of rRNA processing and translation, implying that genes in this module are mainly involved in cellular metabolism. The genes in the salmon module were significantly enriched in GO terms of RNA localization, RNA transport, and DNA biosynthetic process, and were significantly enriched in Reactome terms of SUMOylation, DNA damage response, and DNA repair, indicating that genes in this module are mainly involved in regulating the cell cycle. The GO enrichment results revealed that genes in the blue module were significantly enriched in regulation of membrane potential, pattern specification process, the G protein-coupled receptor (GPCR) signaling pathway, and regulation of ion transmembrane transport. The Reactome enrichment results revealed that genes in this module were significantly enriched in GPCR ligand binding, peptide ligand-binding receptors, and anti-inflammatory cytokine production pathways. Both functional enrichment analyses indicated that genes in the blue module are mainly involved in cellular signal transduction pathways.
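A module eigengene is simply the first principal component of the module's expression submatrix, so the module-trait screen described above can be sketched as below. This is a minimal illustration under that definition, with scipy.stats.pearsonr standing in for WGCNA's built-in routines.

```python
import numpy as np
from scipy.stats import pearsonr

def module_eigengene(expr):
    """First principal component of a (n_samples, n_module_genes)
    standardized expression submatrix, i.e., the module eigengene."""
    z = (expr - expr.mean(axis=0)) / expr.std(axis=0, ddof=1)
    u, s, _vt = np.linalg.svd(z, full_matrices=False)
    return u[:, 0] * s[0]        # per-sample scores on PC1

def module_trait_correlation(module_exprs, trait):
    """Correlate each module eigengene with a sample trait (here, the NLR).
    module_exprs: dict of module name -> (n_samples, n_genes) submatrix."""
    results = {}
    for name, expr in module_exprs.items():
        r, p = pearsonr(module_eigengene(expr), trait)
        results[name] = (r, p)
    return results
```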
Host factors contribute to susceptibility to symptomatic influenza

Influenza viruses depend on the host's cellular machinery to replicate, produce, and spread progeny virus particles. Further examination of the four disease modules identified a number of genes encoding host factors that could contribute to individual viral susceptibility. Of the three modules significantly down-regulated in the SI group compared to the AU group, the steelblue module showed the biggest decrease in modular expression, followed by the darkslateblue and salmon modules. Examination of differentially expressed genes (DEGs) within the steelblue module revealed a broad down-regulation (Figure 13A), including markers required for the formation of germinal centers (Teitell, 2003). These results indicated that the DEGs within the steelblue module were typical B cell markers and transcription factors associated with B cell activation and differentiation. Down-regulation of gene expression in the SI group was also observed for genes within the darkslateblue module (Figure 13B). This down-regulation included genes involved in cellular metabolism, such as genes encoding the ribosomal proteins (Figure 14A) involved in viral gene expression and transcription, the RNA catabolic process, translational initiation, and protein targeting to the ER; genes encoding the mitochondrial complex I (NADH:ubiquinone oxidoreductase) subunits (Figure 14B) involved in cellular respiration; as well as the genes IMP3, NOP56, NOP2, and QTRT1 involved in ncRNA and rRNA metabolic processes, ribonucleoprotein complex biogenesis, and ribosome biogenesis. For the salmon module, involved in regulating the cell cycle, the DEGs again displayed a broad down-regulation in the SI group (Figure 13C). For the blue module, involved in cellular signal transduction, the DEGs displayed the opposite trend, with increased expression in the SI group (Figure 13D).

Discussion

In the present study, we developed a systematic analysis of the relationship between influenza susceptibility and the baseline level of the NLR. We examined five independent influenza challenge cohorts at the cellular level and found that individuals in the SI group had a significantly higher baseline NLR than those in the AU group. The NLR achieved an AUC of 0.724 on the H3N2 data and 0.736 on the external H1N1 data in predicting disease susceptibility to symptomatic influenza. The mechanisms underlying the association between a higher baseline NLR and increased susceptibility to symptomatic influenza are poorly understood. The NLR is a biomarker that conjugates two interconnected arms of the immune system: innate immunity and adaptive immunity. Neutrophils are the first line of innate immune defense against viral infection (Kaufmann, 2008). They migrate to infection sites to eliminate infectious particles, but also provide signals to other innate and adaptive immune cells about an invading foreign threat (Mantovani et al., 2011). Adaptive immunity is orchestrated mainly via T, B, and NK lymphocytes, which provide antigen-specific responses. Prior studies revealed that the NLR is a particularly attractive measure of systemic inflammation (Zahorec, 2001). Neutrophils are crucial for innate immunity and are one of the main cell types involved in inflammatory responses. Host innate immunity is activated by the inflammatory responses to control pathogen infection. Lymphocytes generate adaptive immune responses to eliminate specific pathogens.
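Returning briefly to the predictive performance quoted at the start of this discussion: an AUC such as 0.724 can be computed directly from the baseline NLR values via the rank-sum identity AUC = U/(n1·n2). A minimal sketch on synthetic values (not the study data) follows.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def nlr_auc(nlr_si, nlr_au):
    """AUC for 'higher baseline NLR predicts symptomatic infection',
    derived from the Mann-Whitney U statistic."""
    u, p = mannwhitneyu(nlr_si, nlr_au, alternative="greater")
    return u / (len(nlr_si) * len(nlr_au)), p

rng = np.random.default_rng(2)
auc, p = nlr_auc(rng.lognormal(0.4, 0.3, 40), rng.lognormal(0.1, 0.3, 40))
print(f"AUC = {auc:.3f} (one-sided p = {p:.2g})")
```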
It is well established that the systemic inflammatory response is typically associated with a decline in circulating lymphocyte count and an increase in neutrophil count. Neutrophilia and lymphocytopenia are typical phenomena during systemic inflammation (de Jager et al., 2010; Templeton et al., 2014; Qun et al., 2020). Continuous infiltration of neutrophils at the site of infection raising an immune response produces exaggerated cytokines and chemokines, which may result in a cytokine storm and contribute to severe disease during influenza virus infection (Bordon et al., 2013; Gu et al., 2019). Neutrophil extracellular traps (NETs) are released by neutrophils to contain infections; however, when not properly regulated, NETs have the potential to propagate inflammation (Porto and Stein, 2016; Twaddell et al., 2019). Besides this, increasing evidence supports that neutrophils can significantly suppress the activation of CD4+ and CD8+ T cells, and thereby suppress immune responses (Pillay et al., 2013; Zemans, 2018). On the other side, lymphocytes are required for maintaining an effective immune response. The causes of lymphocytopenia, as a marker of depressed cell-mediated immunity, have been extensively studied (Cunha et al., 2011; Shen et al., 2014; Zhou and Ye, 2021). Lymphocytopenia also renders the host susceptible to severe hyperinflammation. Thus, a healthy partnership between neutrophils and lymphocytes plays a very important role in the onset and resolution of inflammation, which in turn is vital for maintaining the health and integrity of an individual organism against invading pathogens (Trammell and Toth, 2008; Buonacera et al., 2022). The baseline NLR is indicative of the balance between the activation of host inflammatory responses and immune responses; therefore, it is a potential biomarker for predicting susceptibility to influenza virus infection. Hence, understanding the main NLR-related host factors associated with baseline systemic inflammatory status and influenza susceptibility may open doors for preventing influenza virus infection.

Despite many genomic and transcriptomic studies being conducted to identify host factors that are crucial for influenza susceptibility, the contribution of NLR-related host factors has not been fully explored. Using weighted gene co-expression network analysis (WGCNA), we identified four modules of NLR-related systemic host factors associated with influenza susceptibility. In the discovery cohort for H3N2 influenza, we found that these four modules were not only significantly correlated with the baseline NLR, but also differentially expressed between the SI and AU groups. The reproducibility of these relationships was validated in an independent cohort for H1N1 influenza. Functional enrichment analyses revealed that these four modules were mainly involved in B cell-mediated immune responses, cellular metabolism, the cell cycle, and cellular signal transduction, respectively. Three of the four modules (i.e., the modules involved in B cell-mediated immune responses, cellular metabolism, and the cell cycle) were significantly down-regulated in the SI group. Humoral immunity and cell-mediated immunity are two arms of adaptive immune responses. B cells play a major role in the humoral immune response (Marshall et al., 2018). Antigen binding to the B cell receptor initiates B cell activation.
The humoral immune system produces antigen-specific antibodies that can protect against primary and secondary infection. Antibodies against the hemagglutinin of influenza virus can prevent viral entry and replication (Wu and Wilson, 2020). Besides this, rapid B cell responses contribute to efficient viral clearance by neutralizing the virus and reducing virus spread (Gerhard et al., 1997; Rothaeusler and Baumgarth, 2010). The overall immune status at baseline, including the composition of B cell subsets and the up- or down-expression of genes related to B cell receptor signaling, has been found to predict post-vaccination responses (Tsang et al., 2014; Fourati et al., 2016; HIPC-CHI Signatures Project Team and HIPC-I Consortium, 2017; Parvandeh et al., 2019). These studies implicated the pre-vaccination status of B cell signaling as an important indicator of the immune state that influences the antibody response as well as the vaccination outcome. Our study identified several B cell signaling pathways, as well as transcription factors (POU2AF1 and E2F5) that regulate B cell activation and differentiation, that were significantly down-regulated in the SI group compared to the AU group. These results indicated that the status of B cell signaling at baseline can be a useful predictor of symptomatic influenza infection. Moreover, influenza virus infection usually reprograms the host cell's metabolism to assist virus replication. Influenza viruses require host cell ribosomes for the expression of viral proteins. Ribosomal proteins (RPs) are major components of ribosomes. Recent studies revealed that RPs possess antiviral functions. Some RPs can interact with viral proteins to inhibit virus transcription (Abbas et al., 2012; Li et al., 2016). The ribosomal protein RPL10 was identified as a downstream effector of the NIK (NSP-interacting kinase)-mediated antiviral signaling pathway (Rocha et al., 2008). Besides this, the ribosomal protein RPL13A was reported as an innate immune factor for antiviral defense (Mazumder et al., 2014). Previous work reported that influenza virus suppresses host cellular respiration, which is related to mitochondrial dysfunction (Derakhshan et al., 2006), but the molecular mechanism by which influenza virus alters cellular respiration is still unclear. Viruses rely deeply on host post-translational modifications for their replication. SUMOylation is an important post-translational modification controlling various cellular processes. Viruses can take advantage of the cellular SUMOylation system to facilitate viral propagation (Pal et al., 2011; Han et al., 2014; Domingues et al., 2015), but the SUMOylation system can also serve an antiviral function to restrict viral replication. Recent studies have indicated that SUMOylation plays a critical role in activating host intracellular pathogen defenses (Boutell et al., 2011; Li et al., 2012). Specifically, the SUMO pathway was revealed to contribute to intrinsic antiviral resistance to herpes simplex virus type-1 infection (Boutell et al., 2011). Moreover, influenza viruses introduce DNA damage in host cells during infection (Li et al., 2015). The DNA damage response (DDR) is a complex signal transduction pathway that can detect DNA damage and transduce this information to the cell to influence cellular responses to DNA damage (Ciccia and Elledge, 2010). Prior studies revealed that the DDR may inhibit viral replication (Lau et al., 2004).
We further identified that the module involved in cellular signal transduction was significantly up-regulated in the SI group, and the eigengene of this module was positively correlated with the baseline NLR. Within this module, we identified several key signaling pathways contributing to the efficiency of viral replication, including GPCR signaling and ion transport pathways. Influenza virus infection induces the activation of a variety of cellular signaling pathways, which are required for virus replication (Ludwig et al., 2006; Ludwig, 2009). GPCRs contribute directly to stimulating the Raf/MEK/ERK signaling pathway (Rozengurt, 2007), which is crucial for influenza virus replication, and inhibition of which has been demonstrated to possess antiviral properties (Pleschka et al., 2001). Ion channels expressed by host cells have emerged as key regulators of virus entry, and ion channel drugs have attracted attention as potential antiviral agents. Our findings highlight the potential contribution of several key pathways involved in B cell-mediated immune responses, cellular metabolism, the cell cycle, and signal transduction to influenza susceptibility. These identified pathways are concordant with underlying mechanisms previously reported to be associated with host-virus interactions. The WGCNA network analysis is an unsupervised approach that does not use a priori phenotype information (e.g., infection susceptibility). Thus, it provided an integrated and global view of host factors and allowed us to gain insights into the main host factors contributing to a healthy partnership between neutrophils and lymphocytes, and to identify the main NLR-related host factors associated with influenza susceptibility.

The recent coronavirus disease 2019 (COVID-19) pandemic has resulted in significant morbidity and mortality worldwide (World Health Organization, 2020). Influenza and COVID-19 are both contagious respiratory illnesses, but COVID-19 is caused by infection with a coronavirus first identified in 2019. The NLR has recently generated a lot of interest regarding its potential role in poor prognosis in COVID-19 patients. Many studies have shown that the NLR is associated with disease severity and mortality in COVID-19 patients (Chan and Rout, 2020; Kong et al., 2020; Lagunas-Rangel, 2020; Regolo et al., 2022). A number of recently published studies have found that an elevated NLR on admission can serve as an early warning signal of severe COVID-19 (Feng et al., 2020). A recent meta-analysis (Henry et al., 2020) showed that lymphopenia and neutrophilia at hospital admission are associated with poor outcomes in patients with COVID-19. These studies exhibited that systemic inflammation plays a key role in the development of severe COVID-19, which is in concordance with influenza. To our knowledge, no studies on the baseline (i.e., prior to exposure to SARS-CoV-2) NLR and COVID-19 susceptibility have been reported. Influenza is our best comparative model for COVID-19 (Moore et al., 2020); hence, our work can serve as a comparative model to provide insights into COVID-19 susceptibility.

Several limitations of this study are noteworthy. The number of baseline samples available in the influenza challenge studies was low. We only included symptomatic infected and asymptomatic uninfected subjects in our study; thus, the findings reported here are restricted to individuals who unambiguously reported health or illness, i.e., those whose viral titer and symptom status agreed.
Moreover, the individuals who participated in the challenge studies were young and healthy adults, which may limit the broad applicability of our results to children, the elderly, or high-risk populations. Furthermore, as this study focused on H3N2 and H1N1 influenza, the association between the baseline NLR and influenza susceptibility may not extend to other types of influenza strains and infections. In conclusion, our work identified the NLR as a simple and useful biomarker for predicting disease susceptibility to symptomatic influenza. An elevated NLR was detected in susceptible hosts, who may have defects in B cell-mediated immunity or impaired function in cellular metabolism, the cell cycle, or signal transduction. Understanding the main NLR-related host factors associated with baseline systemic inflammatory status and influenza susceptibility may open doors for preventing influenza virus infection. Further study will be required to understand the underlying mechanisms of susceptibility to influenza virus infection, and may yield therapeutic targets.

Materials and methods

For the GSE73072 data set, we directly downloaded its preprocessed expression matrices from GEO, which had been normalized using the robust multi-array (RMA) method and log2-transformed. For the GSE61754 data set, we utilized the expression matrix available from GEO, which had been preprocessed using the variance stabilization and normalization method. For the GSE111368 cohort, we obtained the expression profiles from GEO, which had been log2-transformed and normalized with a 75th percentile-shift algorithm by the original authors. The infection and symptom status for the GSE73072 cohort were found at the web link (https://drive.google.com/open?id=0B2vLBS4X1c1ENzZpT216eGY1RjQ) provided by the original authors, and those for the GSE61754 and GSE111368 cohorts were retrieved from their GEO Series Matrix Files.

Deconvolving whole blood gene expression samples

To quantify the proportions of human blood cell types, we utilized a signature matrix named sigmatrixMicro, provided in a previous study (Monaco et al., 2019), whose transcriptomic profiles were generated by microarray. The sigmatrixMicro matrix consists of 819 cell type-specific genes across 11 immune cell types. We then performed deconvolution with support vector regression using the CIBERSORTx method (Newman et al., 2019). Cell proportions within whole blood samples were estimated by combining CIBERSORTx with sigmatrixMicro in non-log linear space. All microarray data sets were quantile normalized before running CIBERSORTx. Bulk-mode batch correction was applied to remove technical differences between the whole blood mixtures and the signature matrix. To demonstrate that the differences in estimated baseline NLR between the SI and AU groups were robust to the signature matrix used, we quantified the proportions of human blood cell types in the same manner as described above, except that two different signature matrices (LM22 and immunoStates) were used instead of sigmatrixMicro. The LM22 signature matrix contains 547 genes across 22 immune cell types and was obtained from the CIBERSORT website (https://cibersort.stanford.edu; Newman et al., 2015). It was built using samples from healthy subjects and profiled by Affymetrix microarray. The immunoStates matrix, provided in a previous study (Vallania et al., 2018), consists of 317 cell type-specific genes across 20 immune cell types and was built using 6,160 samples with different disease states across 42 microarray platforms.
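Two preprocessing details mentioned above, running CIBERSORTx in non-log linear space and quantile normalizing the microarray data, can be sketched as follows. This is a minimal NumPy illustration; ties are broken by sort order rather than averaged, which may differ slightly from reference implementations.

```python
import numpy as np

def to_linear(log2_expr):
    """Undo the log2 transform so deconvolution runs in linear space."""
    return 2.0 ** np.asarray(log2_expr, float)

def quantile_normalize(matrix):
    """Quantile normalization of a (n_genes, n_samples) matrix:
    force every sample (column) to share the same empirical distribution."""
    m = np.asarray(matrix, float)
    order = np.argsort(m, axis=0)        # per-column sort order
    ranks = np.argsort(order, axis=0)    # rank of each entry within its column
    mean_quantiles = np.sort(m, axis=0).mean(axis=1)
    return mean_quantiles[ranks]         # map each entry to the mean quantile
```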
Application of batch correction to GSE cohorts

The discovery cohort (GSE73072 H3N2) included two challenge studies. Principal component analysis (PCA) revealed that the discovery cohort included two sample batches; therefore, the removeBatchEffect function provided in the limma package was used to correct the batch effects in the gene expression values of the discovery cohort (Figure 1A). The PCA analysis was performed again on the corrected data, and the batch effects of the two challenge studies in the discovery cohort were essentially eliminated. The same strategy was employed to remove batch effects in the GSE73072 H1N1 cohort (Figure 1B).

Co-expression network construction by WGCNA

WGCNA is the most widely used approach for weighted correlation network analysis. Co-expression networks were built using the R package WGCNA. The analysis was performed using the batch-corrected gene expression profiles, and only the genes with variances ranked in the top 9,000 were used. The soft-threshold power β was set to 10, selected based on the pickSoftThreshold function. Hierarchical clustering was performed using 1-TOM with β = 10 as the pairwise distance and the average linkage distance as the cluster distance. We identified modules using the dynamic tree cut approach with a minimal module size of 30 and a deepSplit cutoff of 4, and closely related modules whose module eigengene correlations were larger than 0.75 were merged by setting a branch merge cutoff height of 0.25.

Functional enrichment analysis

Pathway enrichment analyses were conducted for the genes in the four disease modules. The biological processes from Gene Ontology (GO) and the molecular pathways from the Reactome database are two of the most commonly used pathway enrichment analysis resources. GO enrichment analyses were conducted with the enrichGO function in the R package clusterProfiler (version 3.18.1) with FDR < 0.05, with the background gene set left at its default. Reactome enrichment analyses were performed with the enrichPathway function in the R package ReactomePA (version 1.9.4).

Meta-analysis and statistical analysis

Meta-analyses using random-effects models were performed with the metagen function in the R package meta. Hedges' adjusted g (Cooper et al., 2019) was used to standardize the mean difference in the NLR between the SI and AU groups. The pooling weights were calculated as the inverse of the effect size variance. All tests were two-sided, and the p-value cutoff for statistical significance was set at 0.05. Wilcoxon tests were conducted to identify statistically significant differences in the estimated baseline NLR, lymphocyte, and neutrophil proportions between the SI and AU groups. Statistically significant changes in modular expression between the SI and AU groups were assessed using Student's t-test. Statistical significance is indicated as follows: ns, not significant; *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/supplementary material.

Ethics statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Author contributions

Conceptualization, writing-original draft preparation, supervision, project administration, and funding acquisition: WS. Methodology, validation, formal analysis, and visualization: GW and WS. Software and writing-review and editing: GW, CLv, CLi, and WS. Data curation: GW, CLv, and WS. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Shantou University Medical College Development Funds (510858027).
Crykey: Rapid Identification of SARS-CoV-2 Cryptic Mutations in Wastewater

We present Crykey, a computational tool for rapidly identifying cryptic mutations of SARS-CoV-2. Specifically, we identify co-occurring single nucleotide mutations on the same sequencing read, called linked-read mutations, that are rare or entirely missing in existing databases and have the potential to represent novel cryptic lineages found in wastewater. While previous approaches exist for identifying cryptic linked-read mutations from specific regions of the SARS-CoV-2 genome, there is a need for computational tools capable of efficiently tracking cryptic mutations across the entire genome, for tens of thousands of samples, and with increased scrutiny, given their potential to represent either artifacts or hidden SARS-CoV-2 lineages. Crykey fills this gap by identifying rare linked-read mutations that pass stringent computational filters to limit the potential for artifacts. We evaluate the utility of Crykey on >3,000 wastewater and >22,000 clinical samples; our findings are three-fold: i) we identify hundreds of cryptic mutations that cover the entire SARS-CoV-2 genome, ii) we track the presence of these cryptic mutations across multiple wastewater treatment plants and over three years of sampling in Houston, and iii) we find that a handful of cryptic mutations in wastewater mirror cryptic mutations in clinical samples, and we investigate their potential to represent real cryptic lineages. In summary, Crykey enables large-scale detection of cryptic mutations representing potential cryptic lineages in wastewater.

Introduction

Wastewater monitoring serves as an important tool that can supplement clinical testing for COVID surveillance 1-9. Multiple studies have demonstrated that the signal of SARS-CoV-2 variants of concern (VOC) can be extracted from wastewater samples collected from local regions 10-15. This allows early detection of community spread of viral variants that precedes clinical testing by up to two weeks 9. Further, wastewater provides information on the genomic diversity and lineage abundance of circulating lineages in the community, and overcomes the sampling bias inherent to clinical surveillance 16-18. Wastewater monitoring for SARS-CoV-2 can also be used to detect novel cryptic lineages 19; here, we define a cryptic lineage of SARS-CoV-2 as a set of co-occurring mutations that have never been reported, or have rarely been observed (prevalence less than 0.0001), in publicly available assembled genomes. However, non-uniform sequencing coverage caused by differences in amplicon efficiency and environmental RNA degradation creates a challenge for detecting and phasing cryptic lineages from variant calling results 5,20,21. Previous methods have reported the detection of cryptic lineages from wastewater monitoring, but they required ultra-deep sequencing of specific targeted regions of SARS-CoV-2, or hybrid sequencing technology, and are thus incompatible with most wastewater sequencing protocols used for routine monitoring 16,22. In addition, the origin of cryptic lineages is still an open question. Previous studies have reported that rare lineages are found in wastewater samples but not in clinical samples, suggesting that undersampling in clinical surveillance might explain the case 23,24. There is also some speculation that cryptic lineages are associated with human intra-host minor variants, or originate from non-human hosts 19.
In this manuscript, we introduce Crykey, a fast computational method for detecting cryptic lineages of SARS-CoV-2 from wastewater samples that exploits the co-occurrence of SNVs on the same sequencing read across the full length of the SARS-CoV-2 genome. We used Crykey to analyze SARS-CoV-2 sequencing data from wastewater samples collected in Houston, Texas, USA. Collectively, our results highlight that Crykey can accurately identify cryptic combinations of SNVs from sequencing data that have not been found, or have low prevalence (less than 0.0001), in GISAID's EpiCoV database. To investigate the derivation of cryptic lineages, we took a closer look at the cryptic lineages found in Houston wastewater and found that the number of cryptic lineages increases when a new VOC strain starts to circulate in the community. By analyzing more than 5,000 clinical samples collected from the Houston area and 9,000 clinical samples collected across other US states, we found longitudinal connections between intra-host minor variants and wastewater cryptic lineages. Our results indicate that a large proportion of cryptic lineages, especially long-lasting ones, derive from the human intra-host landscape. We found that cryptic lineages are geographically constrained and are specific to certain PANGO lineages of the consensus genomes of the human hosts. This research shows that wastewater monitoring can detect low-frequency intra-host variants, which may be useful for understanding transmission events in communities.

Results

Crykey is a computational method for cryptic lineage identification in wastewater on a full-genome scale. Figure 1 shows the workflow and algorithm of Crykey. Crykey first selects candidate cryptic lineages by searching for co-occurring mutations supported by the same sequencing read (Figure 1c). Each candidate is then queried against a pre-built database to check whether the combination of mutations is novel or rare in terms of its prevalence (Figure 1d). A detailed performance benchmark can be found in Supplementary Section 1. We applied Crykey to SARS-CoV-2 data sequenced from 3,175 wastewater samples collected from Houston wastewater treatment plants between February 2021 and November 2022, and identified a large number of mutation combinations originating from cryptic lineages of SARS-CoV-2.

Genomic Distribution of Cryptic Mutation Sets

We identified a total of 716 cryptic mutation sets in our samples (see Methods). Figure 2a shows the location of the cryptic mutation sets on the reference genome, their mean allele frequencies, their rarity in the GISAID database, and the number of weeks (not necessarily consecutive) in which they were detected in wastewater samples. We observed cryptic mutation sets across the entire SARS-CoV-2 genome. Some regions of the genome were enriched in cryptic mutations, such as the S and N genes (see Figure 2b), with 20.3% of the cryptic mutation sets containing mutations located on the S gene and 27.9% containing mutations located on the N gene. More than 92% of the cryptic mutation sets had a mean allele frequency of less than 0.5 in the wastewater samples, with a few exceptions. We found that most cryptic mutation sets that were rare in the GISAID database also had low mean allele frequencies in the samples that supported them (see Figure 2a).
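The linked-read selection step in Figure 1c amounts to scanning an alignment for reads, or read pairs, that carry several alternate alleles at once. A minimal sketch of that idea is shown below; it is an illustration rather than Crykey's implementation, the helper name and the two-SNV restriction are ours, positions are 0-based, and the reference name assumes reads were aligned to the Wuhan-Hu-1 genome (NC_045512.2).

```python
import pysam

def pairs_supporting_both(bam_path, pos1, alt1, pos2, alt2, ref="NC_045512.2"):
    """Count read pairs that carry both alternate alleles.

    pos1/pos2 are 0-based reference positions; alt1/alt2 are the
    alternate bases called at those positions in the sample's VCF.
    Mates are merged by query name, so linkage across a read pair counts.
    """
    observed = {}  # read name -> set of (position, base) seen
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(ref, min(pos1, pos2), max(pos1, pos2) + 1):
            if read.is_unmapped or read.is_secondary or read.is_supplementary:
                continue
            for qpos, rpos in read.get_aligned_pairs(matches_only=True):
                if rpos in (pos1, pos2):
                    observed.setdefault(read.query_name, set()).add(
                        (rpos, read.query_sequence[qpos]))
    return sum(1 for obs in observed.values()
               if (pos1, alt1) in obs and (pos2, alt2) in obs)
```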
The occurrence of these cryptic mutation sets (counted by week) varied significantly, ranging from 1 week to 33 weeks, as shown by the size of the dots in Figure 2a. In most regions of the SARS-CoV-2 genome, non-synonymous cryptic mutation sets dominate in number over cryptic mutation sets that include synonymous mutations; the only exception is the N gene (see Figure 2b). We also calculated the dN/dS ratio of unique SNVs in cryptic lineages and found a noticeable dN/dS increase on the S gene, indicating positive selection effects (see Figure 4). The detailed results of the dN/dS analysis can be found in Supplementary Section 2.

Emergence of cryptic lineages co-occurred with surges in community infections driven by new variants

The emergence of cryptic mutations co-occurred with increases in SARS-CoV-2 infections across the city and a corresponding increase in the citywide wastewater viral load, which happened when a new VOC started to circulate in the community. For example, the number of new cryptic lineages and the viral load both increased significantly around July 2021 (see Figure 3a), which corresponded to the Delta wave in Houston (see Figure 3b). Similar patterns were observed during the emergence of the B.1.1.529 lineage (Omicron) in December 2021, the emergence of BA.2 (Omicron) in May 2022, and BA.5 (Omicron) in July 2022. This may reflect both new VOC strains introducing associated cryptic mutations and the increasing viral load causing more signals to be captured. The detection of cryptic lineages is also a function of data quality. We observed fewer new cryptic lineages during the BA.2 and BA.5 period (from April 2022 to August 2022), when the data quality dropped and became a limiting factor in the detection of cryptic lineages. The varying bin widths in Figure 3a indicate the normalized sample qualities derived from the breadth of genome coverage of the wastewater samples collected in the same week, where wider bins represent better data quality.

Long-Lasting Cryptic Lineages

We next asked whether any cryptic lineages persisted over several weeks to months in the Houston community. Most of the cryptic lineages did not persist for a long period of time, with over 85.4% (612/716) of the cryptic mutations found at 3 or fewer distinct time points (samples were collected at a weekly frequency). Interestingly, some cryptic mutations were detected in samples from multiple wastewater treatment plants across the city and persisted for over 10 weeks (Supplementary Figure 1). Figure 5 shows the most persistent cryptic lineage observed in Houston wastewater; it contains the co-occurrence of two SNVs, A29039T and G29049A, which cause the K256* (stop codon) and R259Q amino acid changes in the N gene. This cryptic lineage was detected over 33 weeks (shown in Figure 5b), mostly between August 2021 and February 2022. Detections peaked in late November 2021, when the cryptic lineage was observed in 16 wastewater treatment plants concurrently (shown in Figure 5a). The presence of this cryptic lineage phased out in late February 2022 and remained silent for two months; it then re-appeared for a short period of time around May 2022. Figure 5a shows that the mean allele frequencies of the combination of the SNVs were generally lower than 0.10, with a few exceptions at some wastewater treatment plants during the early weeks. The counts of supporting reads containing both SNVs at the same time are shown using different colors.
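For the dN/dS trend mentioned above, the basic quantity is the ratio of observed nonsynonymous substitutions per nonsynonymous site to synonymous substitutions per synonymous site. A minimal sketch is given below; it assumes the per-gene site counts have already been computed (e.g., by Nei-Gojobori-style counting, which is not reimplemented here), and the numbers shown are placeholders, not the study's values.

```python
def dn_ds(n_obs, s_obs, n_sites, s_sites):
    """dN/dS = (nonsynonymous substitutions per nonsynonymous site)
             / (synonymous substitutions per synonymous site).
    Values > 1 suggest positive selection; < 1 suggests purifying selection."""
    if s_obs == 0:
        raise ValueError("no synonymous substitutions; ratio undefined")
    return (n_obs / n_sites) / (s_obs / s_sites)

# Placeholder counts for a hypothetical gene, not the paper's data:
print(round(dn_ds(n_obs=42, s_obs=10, n_sites=2900.0, s_sites=920.0), 2))
```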
Mutations in Cryptic Lineages

We next asked whether the individual mutations making up the cryptic lineages were associated with other known PANGO lineages. To do this, we defined a mutation present in more than 50% of the assemblies in GISAID under the same known PANGO lineage as a signature variant for that PANGO lineage 25. Based on the PANGO-associated signature variants that each cryptic lineage contained, we classified the wastewater cryptic lineages into three categories: cryptic lineages that hold no signature variants, cryptic lineages that hold signature variants specific to a single PANGO lineage, and cryptic lineages that contain signature variants specific to more than one PANGO lineage. We found that more than 76.5% (548 out of 716) of cryptic lineages contained signature variants specific to only one PANGO lineage. In fewer than 6.1% (44 out of 716) of cases, the cryptic lineages did not contain any signature variants specific to any known PANGO lineage, meaning that all of their mutations had a prevalence rate of less than 0.5 in all PANGO lineages. The remaining 17.3% (124 out of 716) of cryptic lineages held signature variants specific to more than one PANGO lineage, which are likely indicative of recombination events.

Cryptic Lineage Detection in Clinical Samples

The sources of cryptic lineages present in wastewater are not well understood. Possible sources include undersampling in clinical surveillance, contributions from intra-host low-allele-frequency variants, and possible animal hosts. To test whether the cryptic lineages originated from an intra-host environment, we selected 5,060 sequenced clinical samples collected within Greater Houston between 12/06/2021 and 01/31/2022 (8 weeks). We checked whether the co-occurring mutations in 20 cryptic lineages detected in Houston wastewater were also present in the clinical sequencing reads. The details of sample processing and cryptic lineage selection can be found in the Methods section, and Supplementary Figure 2 shows the distribution of samples over the 8-week period. We calculated the prevalence rate of each cryptic lineage in clinical samples as the number of samples supporting the cryptic lineage over the total number of samples. We found that more than half of the 20 cryptic lineages investigated were also prevalent in clinical sequencing reads. Figure 6 shows the 12 cryptic lineages with the highest prevalence rates in the clinical samples, including 5 short-duration cryptic lineages, CR1-CR5 (detected over only 2 weeks), and 7 long-duration cryptic lineages, CR6-CR12 (4-8 weeks of occurrence). The results suggest that both long- and short-duration cryptic lineages detected in wastewater can be found within clinical samples at low allele frequencies, most of the time below 0.05. Of the 12 wastewater cryptic lineages with the highest clinical prevalence rates, 11 were associated with the Omicron strains, the exception being CR2. The PANGO assignments of the consensus genomes show that most of the cryptic-supporting clinical samples were labeled as BA.1.15 and BA.1.1.
The Omicron-related cryptic lineages shared a pattern in which the prevalence rate in the clinical samples increased as the Omicron variant spread through the city, which is reflected by both the viral load in the wastewater (shown in Figure 3a) and the sequence counts in the database (shown in Figure 3b). These cryptic lineages appeared as intra-host variants in clinical samples prior to appearing in wastewater samples. In contrast, C6402T-G6456A, a cryptic lineage specific to the Delta strain, was detected in the wastewater in the first 2 weeks of the 8-week period, while also being present at trace levels over the first 6 weeks in the clinical samples. The intra-host allele frequency of a cryptic lineage varies by host. Supplementary Figure 4 shows the intra-host allele frequencies for the cryptic lineage A29039T-G29049A during the clinical sampling period. However, the overall mean allele frequency is relatively stable over time, which suggests that the detection of cryptic lineages in wastewater samples is determined by their prevalence rate (which varies by cryptic lineage) in the population. As a VOC, in this case Omicron, starts to become dominant in the region, new cryptic lineages specific to that VOC emerge and gain prevalence among the hosts. At the same time, older cryptic lineages specific to the previously dominant VOC, in this case Delta, die out as fewer and fewer people are infected with the associated strain.

Geographically Constrained Cryptic Lineages

To assess whether there were geographic patterns at a national level associated with cryptic lineages, we processed 8,969 clinical samples collected from states outside of Texas over the same 8-week time period (between 12/06/21 and 01/31/22) as in Texas (see Supplementary Figure 3 for the detailed geographic distribution). We used the same method to search these out-of-state clinical samples for the 12 most significant cryptic lineages previously found in wastewater and clinical samples from Houston. Two of the cryptic lineages were shared across clinical samples from Houston, Maryland, and Massachusetts. In addition, we identified 5 additional cryptic lineages shared across clinical samples from Houston and Maryland. Five of the cryptic lineages were specific to Houston and were not found in other states. Cryptic lineage A27259C-C27335T-A27344T-A27345T was found in samples from Houston, Maryland, and Massachusetts (see Figure 7a), and cryptic lineage C10449A-T10459C was found in Houston and Maryland (see Figure 7b). We found that although the same cryptic lineage may be shared by specific regions, its prevalence rate can vary between them. For A27259C-C27335T-A27344T-A27345T, Houston and Maryland had a much higher proportion of samples supporting the cryptic lineage compared to Massachusetts. In addition, the demographics of the PANGO assignments for the cryptic-supporting samples also differed between states. Although the two cryptic lineages are both associated with Omicron, Houston was dominated by BA.

Discussion

Wastewater monitoring for SARS-CoV-2 has been widely used for genomic surveillance during the COVID pandemic 14. It reveals information about the genetic diversity of the virus within communities that can complement clinical surveillance 26.
Recent studies have shown that novel variants may be introduced during the transmission of SARS-CoV-2, and by exploring wastewater data scientists have been able to identify novel strains of SARS-CoV-2 that are not captured by clinical sampling and sequencing 19. Our contribution centers on a novel cryptic lineage detection tool, Crykey, designed for the analysis of wastewater sequencing data. Specifically, Crykey detects the co-occurrence of SNVs present on the same reads or read pairs and efficiently cross-references them against databases of SARS-CoV-2 genomes. Our method is fully compatible with standard variant calling pipelines for SARS-CoV-2 genomic analysis and takes advantage of variant calling results across the entire SARS-CoV-2 genome. We highlight the utility of genome-wide identification of cryptic mutations with Crykey via the identification of multiple novel cryptic lineages in Houston wastewater. Our findings show that cryptic lineages emerge during massive transmission events, as a circulating lineage of SARS-CoV-2 in the community becomes dominant. We found that the waves of newly emerging cryptic lineages in Houston tie well with the numbers of genomes submitted from Texas to the GISAID database for the Delta strain, Omicron (B.1.1.529 and its descendants, excluding BA.2), BA.2 (B.1.1.529.2 and its descendants, excluding BA.5), and BA.5 (see Figure 3). As the viral load in wastewater increased, the detection of cryptic lineages also increased. The BA.2 and BA.5 waves (May 2022 to October 2022) were exceptions to this trend: there was a marked increase in wastewater viral load, yet new cryptic lineages appeared only at the beginning and towards the end of the viral load spike. It is hard to untangle whether this effect was due to specific genomic features of the BA.2 and BA.5 variants, or whether it is a consequence of the lower quality of the sequencing data. We also show that most of the cryptic lineages appeared over approximately 2 weeks; however, we also observed some persistent cryptic lineages (Supplementary Figure 1). Short-duration cryptic lineages were also generally found in only a few wastewater treatment plants and at low allele frequencies. One possible explanation is that the viral shedding from a community was dominated by the established variant of concern (VOC) lineages that had spread across the city, and thus cryptic lineages made up a small fraction of the total abundance. Because of this, cryptic lineages in wastewater are likely to remain below the detection limit until they reach some degree of prevalence in the population. We suspect that the short durations of the observed cryptic lineages indicate peaks of local spread of the cryptic lineage within a community. Once their prevalence in the population dropped, those cryptic lineages were no longer detectable in wastewater samples. On the other hand, some cryptic lineages lasted for months and reappeared multiple times. One of the cryptic lineages we identified was present in 33 weeks of wastewater data across multiple wastewater treatment plants around Houston (see Figure 5). This cryptic lineage contains two nucleotide mutations, A29039T and G29049A, causing the K256* and R259Q amino acid changes in the N gene. The combination of these SNVs is rare: there are only 3 sequences in the GISAID database that contain both mutations, and none of those sequences originated from the United States.
A previous study showed that K256 is one of the eight lysine residues in the N protein of SARS-CoV-2 that is likely to be directly involved in RNA binding 27. The A29039T variant generates a stop codon, and this variant may affect the linker region suppressing the immunogenic domain of the nucleocapsid protein, which could lead to vaccine escape 27,28. This may explain the extreme persistence of this cryptic lineage. R259 in the N protein of SARS-CoV-2 belongs to one of the identified guanosine triphosphate binding pockets and is well conserved in multiple human coronaviruses, including NL63, 229E, HKU1, and OC43, as well as in MERS and SARS-CoV-1 29. However, the R259Q mutation alone has been reported multiple times at low prevalence rates in some SARS-CoV-2 lineages, mostly belonging to the Delta variant 30. To the best of our knowledge, the effect of the combination of these two mutations is still unknown and requires further investigation.

Why are cryptic lineages not captured by clinical surveillance? One possible explanation is that cryptic lineages have low community prevalence rates 19,23,24. As only a small portion of samples from individuals infected with SARS-CoV-2 are sequenced, cryptic lineages infecting a small proportion of the population are likely to be missed by clinical surveillance. Clinical data also suffer from sampling bias, as people with severe symptoms and access to healthcare resources are more likely to be represented in the databases. Supplementary Figure 1 shows that most of the cryptic lineages detected in Houston wastewater were found over only 1 to 3 weeks, and these short-duration lineages may represent those not captured by clinical testing. We also report the detection of multiple cryptic lineages that persisted for more than 10 weeks. These lineages more likely originate from cryptic lineages present in clinical samples at low allele frequencies, and hence were part of the intra-host viral diversity 19,24. It is common to report only consensus-level mutations (i.e., mutations with allele frequencies greater than or equal to 0.5), or consensus genomes/assemblies, to public databases such as GISAID. As a result, although the cryptic lineages were sampled, they remained unreported. To test this hypothesis, we evaluated a dataset of 5,060 patient-derived SARS-CoV-2 sequencing samples collected within Greater Houston (including Houston and its surrounding residential areas) from 12/06/2021 to 01/31/2022 (8 weeks) and performed read alignment on these data. We thus showed that cryptic lineages detected in wastewater originated from intra-host low-frequency co-occurring variants in the clinical samples (Figures 5 and 6). Further, these cryptic lineages show strong geographic specificity. Even though some of the cryptic lineages found in Houston clinical samples were also found in Maryland and Massachusetts, they were present in clinical samples from those two states only at minor allele frequencies.

Challenges and limitations

One of the limitations of Crykey is that the detection of cryptic lineages is heavily dependent on the quality of the sequenced wastewater samples 31,32.
As shown in Figure 3, the number of newly emerging cryptic lineages follows the same pattern as the viral load until June 2022, after which the samples had worse quality in terms of breadth of coverage. The performance of Crykey is limited during those weeks because the samples lack sufficient sequencing depth across most regions of the SARS-CoV-2 genome. Due to the limitations of the short-read platform and the protocol used for sequencing, there is a natural limit on the distance between the SNVs we can use for co-occurrence analysis. Furthermore, regions corresponding to sequencing adapters create gaps along the genome and pose a challenge for identifying sets of cryptic mutations that span longer regions. However, these limitations could be addressed using long-read sequencing, assuming intact longer amplicons are recoverable from wastewater samples. The sources of cryptic lineages in wastewater, and their relative contributions, require further research. We found that a subset of the cryptic lineages were supported by reads from clinical sequencing. However, there are some cases in which a cryptic lineage persisted for weeks in wastewater samples but had little to no trace in clinical samples. The key reasons why these cryptic lineages were not captured by clinical surveillance remain unknown. One hypothesis put forward by a previous study suggests that such cryptic lineages are carried by non-human hosts 19. Cryptic lineages were detected in wastewater for periods ranging from a single week to months of continuous detection. Explanations for these patterns and their variability require further investigation. We observed that the number of unique emerging cryptic lineages was related to massive transmission events. We suspect that specific cryptic lineages were derivatives of the dominant circulating VOC strain in the population, and that as new VOC strains became dominant, the cryptic lineages associated with the previous VOC strain faded out as well. The concept of searching for combinations of mutations inside a viral genome that have never, or rarely, been reported is universal, and Crykey is not limited to finding SARS-CoV-2 cryptic lineages in wastewater. The method could be expanded to multiple pathogens, such as influenza A virus, as long as the pathogens are being monitored in wastewater and a database of known assemblies has been established for such pathogens 33,34. In conclusion, Crykey is a software framework designed to identify cryptic lineages of SARS-CoV-2 from wastewater samples by detecting the co-occurrence of SNVs on the same sequencing reads across the full SARS-CoV-2 genome. We applied this tool to detect numerous novel cryptic lineages present in the population, some persisting for months. We show that for a subset of cryptic lineages, the likely source is intra-host minor variants that are widespread in the population and present at low allele frequencies within an individual infected with SARS-CoV-2. This represents the first study to show that wastewater monitoring can be used to detect population-level intra-host minor variants, which could be used to better understand transmission events at a large scale 35-37.
Methods The workflow of the Crykey pipeline can be divided into 3 steps: database construction, sample processing to find cryptic lineage candidates, and rarity calculations for each of the candidates found in the previous step. The following are the details for each of the steps, as well as the filtering and analysis methods used in this manuscript. SNP Database Construction The database used in Crykey is built from the multiple sequence alignment (MSA) provided by GISAID. We extracted the SNPs for each SARS-CoV-2 genome in the MSA using vdb with the command vdbCreate -N input.msa 38. We then trimmed the list of mutations associated with each genome sequence with the vdb trim command. Combining the lineage assignment of each genome sequence in the metadata, we calculated the prevalence rate of each mutation in each of the SARS-CoV-2 lineages, and built a SNP database containing SNP information for each individual SARS-CoV-2 sequence. Searching for Possible Cryptic Mutation Combinations In order to identify cryptic lineages, Crykey first builds a default mutation lookup table in which each mutation in GISAID is associated with the set of lineages and the specific weeks (based on sample collection date) of its occurrence in GISAID, regardless of prevalence rate. A second mutation lookup table is built at the same time in which only mutations with a prevalence rate greater than 0.5 are stored; this allows us to quickly query whether a set of SNPs belongs to any SARS-CoV-2 genome in a given time period. Then, for a given sample, Crykey takes its associated alignment file (BAM) and variant calling output (VCF) file as input. It first extracts and filters the SNPs from the VCF file with a user-defined minimum depth of coverage (default: 10) and a user-defined minimum allele frequency (default: 0.02). Then, it annotates each SNP with snpEff and removes mutations not in the coding region. For each sample, Crykey searches through the BAM file and extracts read pairs that contain a combination of multiple SNPs. Using the lookup table restricted to mutations with prevalence rate greater than 0.5, Crykey can quickly identify whether the SNP combination from a read may belong to a single lineage in a certain week by set intersection. By the pigeonhole principle, if the intersection is non-empty, the SNP combination is guaranteed to have been reported to the public database. If the intersection is an empty set, we consider the combination a cryptic candidate set. Rarity of Cryptic Mutation Combinations For each cryptic candidate set of mutations, an exact search is performed by querying the SNPs in the set against the default mutation lookup table and the SNP database. Using the default mutation lookup table, the lineages and specific weeks of co-occurrence of all SNPs in the set can be quickly determined, which allows Crykey to minimize the search space when querying the SNP database for exact assemblies that contain such SNPs.
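To make the lookup-table test concrete, the following is a minimal Python sketch of the set-intersection step described above; it is illustrative only, and the data structures and function names are our assumptions rather than Crykey's actual code:

```python
# Illustrative sketch (not Crykey's code) of the intersection test:
# each mutation maps to the set of (lineage, week) pairs in which it
# has been observed with prevalence rate > 0.5.
from typing import Dict, FrozenSet, Set, Tuple

LineageWeek = Tuple[str, str]  # hypothetical key, e.g. ("BA.1", "2022-W03")

def is_cryptic_candidate(snv_combo: FrozenSet[str],
                         lookup: Dict[str, Set[LineageWeek]]) -> bool:
    """True when no single (lineage, week) pair carries every SNV."""
    observed = [lookup.get(snv, set()) for snv in snv_combo]
    if not observed:
        return False
    # Non-empty intersection => some reported genome already carries
    # the full combination, so it is not cryptic.
    return len(set.intersection(*observed)) == 0
```

An empty intersection flags the combination as a cryptic candidate set, which is then passed to the exact search against the SNP database.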
Prevalence of Cryptic Mutations By evaluating the prevalence of the mutations in each of the cryptic lineages, we categorized the cryptic lineages into 3 categories: (1) cryptic lineages containing no signature variants of any known PANGO lineage; (2) cryptic lineages containing signature variants of a single PANGO lineage; and (3) cryptic lineages containing signature variants of multiple different PANGO lineages, where a signature variant is defined as a mutation with a prevalence rate over 0.5 in a known PANGO lineage. As expected, most of the cryptic lineages contained signature variants of a single PANGO lineage. This also indicates that most of the cryptic lineages were associated with the circulating strains in the population, as they incorporate the mutations of known parent strains together with some additional new variants. Additional information on cryptic lineages containing signature variants specific to multiple PANGO lineages can be found in Supplementary section 3. Filtering Results from Houston Wastewater Crykey applies both within-sample and cross-sample filtering on the cryptic candidates. We filtered each candidate mutation combination by keeping the ones with both a number of supporting reads above a minimum threshold (default: 5) and an allele frequency above a minimum threshold (default: 0.01). The cross-sample filtering is based on the minimum number of samples supporting the candidate (default: 2). In the next step, the remaining candidates are queried against the SNP database, and Crykey outputs a complementary report on whether the SNP combinations found in the sample are truly novel, meaning that no sequence in the database supports the combination, or whether the SNP combination is rarely seen in the database. If the SNP combinations can be found in the database, Crykey reports the number of sequences containing such combinations in each of the lineages, as well as the total number of sequences in those lineages. We further removed cryptic candidates with a prevalence rate above 0.0001 in the GISAID database, and performed our analysis on the filtered results. In the final step, we excluded cryptic candidates containing variants located between reference genome positions 1 to 55 and 29,804 to 29,903. In addition, we masked cryptic candidates containing variants located at 25 nucleotide positions between 56 and 29,804, based on suggestions from previous studies, as those locations are highly homoplasic and variants there are likely to be recurrent artifacts 39,40.
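A minimal sketch of the within-sample, cross-sample and rarity filters, using the default thresholds quoted above (the field and function names are illustrative assumptions, not Crykey's API):

```python
# Hedged sketch of the candidate filters with Crykey's stated defaults.
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    snvs: frozenset           # the co-occurring SNVs
    supporting_reads: int     # reads carrying all SNVs at once
    allele_frequency: float   # supporting reads / total reads in region
    gisaid_prevalence: float  # fraction of database genomes with the combo

def passes_within_sample(c: Candidate, min_reads: int = 5,
                         min_af: float = 0.01) -> bool:
    return c.supporting_reads >= min_reads and c.allele_frequency >= min_af

def passes_cross_sample(samples: List[List[Candidate]], combo: frozenset,
                        min_samples: int = 2) -> bool:
    # Count how many samples independently report the same combination.
    hits = sum(any(c.snvs == combo for c in sample) for sample in samples)
    return hits >= min_samples

def is_rare(c: Candidate, max_prevalence: float = 1e-4) -> bool:
    return c.gisaid_prevalence <= max_prevalence
```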
After read mapping of the clinical samples, the BAM files were collected and searched for the cryptic lineages detected in wastewater samples. 20 wastewater cryptic lineages detected during the 8-week sampling period were selected for testing. 10 of the 20 wastewater cryptic lineages occurred in exactly 2 of the 8 weeks, representing cryptic lineages with a short-burst detection pattern; among these, we chose the ones detected in the largest number of wastewater treatment plants. The remaining 10 wastewater cryptic lineages occurred in most weeks of the sampling period, ranging from 4 to 8 weeks of occurrence, representing cryptic lineages with a long-lasting detection pattern. We examined the alignments in the clinical samples, counted the total number of reads spanning the regions where the cryptic lineages are located, and counted the number of reads supporting all mutations of the cryptic lineages at the same time. Five bases towards both ends of the reads were ignored to avoid noise caused by sequencing errors. The allele frequency of a cryptic lineage was calculated as the number of cryptic lineage supporting reads over the total number of reads in the region. During the analysis, we further filtered the results: samples with a cryptic lineage supporting read count of less than 5 were considered cryptic lineage absent. We counted forward and reverse read fragments that do and do not fully support all cryptic mutations, and calculated both the p-value of Fisher's exact test and the strand bias scores described in previous studies 45,46. Samples whose cryptic lineage reads had strand bias scores greater than or equal to 1, or Fisher's exact test p-values less than 0.05, were considered cryptic lineage absent.
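The strand check described above amounts to a 2×2 contingency test on read orientation. The sketch below uses scipy's fisher_exact; it is illustrative only, and the exact strand bias score of refs. 45,46 is not reproduced here:

```python
# Hedged sketch of the per-sample strand-balance filter.
# Rows: reads that do / do not support all cryptic mutations;
# columns: forward / reverse orientation.
from scipy.stats import fisher_exact

def passes_strand_check(fwd_support: int, rev_support: int,
                        fwd_other: int, rev_other: int,
                        alpha: float = 0.05) -> bool:
    """False (lineage treated as absent) on significant strand imbalance."""
    table = [[fwd_support, rev_support],
             [fwd_other, rev_other]]
    _, p_value = fisher_exact(table)
    return p_value >= alpha
```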
Figure 1. Workflow and algorithms of Crykey. a) Crykey constructs a genome-to-SNP database, as well as a set of mutation lookup tables, using GISAID-provided multiple sequence alignments and metadata. b) Crykey searches for multiple variants located on the same read, and uses the mutation lookup table to identify whether the combination of variants is a candidate cryptic lineage. Each candidate cryptic lineage is queried against the genome-to-SNP database to calculate its prevalence rate. c) Algorithm to search candidate cryptic lineages, with an example of a read containing two variants A and B at the same time. d) Algorithm for the exact search once the candidate cryptic lineages are generated. The example shows a candidate cryptic lineage containing variants A and B. Figure 2. Distribution of cryptic lineages found in Houston wastewater on the SARS-CoV-2 genome. In both a) and b), the locations of cryptic lineages (cryptic mutation sets) found in Houston wastewater samples on the SARS-CoV-2 reference genome are shown on the x-axis, with gene annotations at the top of the figures. In panel a), each cryptic lineage is represented by a colored dot; the y-axis indicates its mean allele frequency in the wastewater sample, and the color indicates its rarity, defined as -log10(n+1), where n is the number of genomes supporting the cryptic lineage in the GISAID EpiCoV database. Darker colors indicate that the cryptic lineage has rarely or never been reported. The size of the dot shows the number of weeks the cryptic lineage has been detected; larger dots indicate that the cryptic lineage persisted longer in the community. Panel b) is a histogram showing the count of cryptic lineages found in different regions of the reference genome, dividing the genome into bins of 400 bp. The cryptic lineages containing exclusively non-synonymous mutations are marked in orange, and the cryptic sets containing at least one synonymous mutation are marked in gray. Higher bars indicate that more cryptic mutations are found in the associated region; a higher ratio of orange to gray bars indicates that the associated region tends to have more cryptic lineages containing only non-synonymous mutations. Figure 3. a) Weekly numbers of newly emerging cryptic lineages detected in Houston wastewater, with bin width reflecting breadth of genome coverage: wider bins suggest better wastewater sequencing quality, with a higher proportion of the genome being covered after read alignment. The corresponding normalized viral load in the wastewater samples is shown as a dotted line with values on the right y-axis; the viral load is normalized to that of samples collected on July 6, 2020 in Houston. Panel b) shows the weekly number of SARS-CoV-2 sequences in the GISAID EpiCoV database originating from Texas, USA. The count of sequences is calculated based on sample collection date. The sequences are binned and colored based on their PANGO lineage assignments.
2023-06-21T01:35:00.602Z
2023-06-20T00:00:00.000
{ "year": 2023, "sha1": "0483e0159e15d20f89d15dab7a5fea83f096fbf1", "oa_license": "CCBYNCND", "oa_url": "https://www.medrxiv.org/content/medrxiv/early/2023/06/20/2023.06.16.23291524.full.pdf", "oa_status": "GREEN", "pdf_src": "MedRxiv", "pdf_hash": "0483e0159e15d20f89d15dab7a5fea83f096fbf1", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
49475512
pes2o/s2orc
v3-fos-license
Alzheimer's disease and Type 2 Diabetes Mellitus: Similar Memory and Executive Functions Impairments? Copyright: © 2017 Ballesteros S, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Abstract Alzheimer's disease (AD) accounts for more than half of all the cases of dementia. T2DM is a highly prevalent chronic metabolic condition among older adults, and is considered a risk factor for developing AD and other types of dementia. Currently, the incidence of both AD and type 2 Diabetes Mellitus (T2DM) is a major public health problem in developed countries. Given the similarities between the metabolic and vascular changes occurring in the brains of diabetic patients and of AD patients, a relevant question is whether a series of main cognitive abilities, including episodic memory, working memory and executive functions, are similarly impaired in AD and T2DM patients. Recent research has shown a clear dissociation between implicit and explicit memory. Results have shown intact implicit memory in both clinical groups, similar to that of healthy older adults, and impaired episodic (explicit) memory in both groups of patients, especially in AD patients. At the same time, visuospatial and verbal working memory (updating and maintenance of information assessed with n-back tasks) showed significant declines in AD and T2DM, but larger in AD patients. Executive control assessed with the Wisconsin Card Sorting Test (WCST) showed similar declines in both groups of patients. Neuropsychologists and clinicians need to take into account the decline of long-term episodic memory and executive control processes in T2DM for their negative impact on treatment management. At the same time, the spared implicit memory of AD and T2DM patients could be used to support rehabilitation. Introduction The rapid increase in the number of older adults living in our society is accompanied by an exponential increase in the number of citizens who will suffer cognitive decline and dementia in the next decades. Alzheimer's disease (AD) is the most common senile dementia. This neurodegenerative disease accounts for more than half of all the cases of dementia [1,2]. T2DM is a highly prevalent condition among older adults and has become a major public health concern in many developed countries [3][4][5]. Several studies have focused on AD and T2DM, trying to clarify the interconnections between both diseases in order to put forward prevention actions and more effective treatments. Nowadays, the incidence of both AD and type 2 Diabetes Mellitus (T2DM) is a major public health problem in Europe, the United States and many developed countries [4]. Diabetes mellitus is a metabolic syndrome related to unhealthy diet habits and lack of exercise. This lifelong disorder is characterized by chronic hyperglycaemia resulting from defects in insulin secretion, insulin action, or both. People with diabetes are at risk of suffering cardiovascular and cerebrovascular disease. T2DM is considered a risk factor for developing AD and other types of dementia [6,7]. A wealth of studies suggests shared characteristics between AD and T2DM, including impaired neurogenesis, blood-brain barrier dysfunction, inflammation, hyperglycemia, insulin resistance, vascular dysfunction and cognitive deficits [8,9]. The metabolic and vascular changes appearing in diabetic patients are in certain ways similar to those occurring in the brains of AD patients [10][11][12]. T2DM patients have elevated basal cortisol levels and problems with the hypothalamic-pituitary-adrenocortical (HPA) axis feedback regulation. The hippocampus is also damaged early in the course of diabetes. AD begins with impaired synaptic function, resulting from the accumulation of amyloid-β (Aβ) peptide and causing early memory impairments [4,10].
The pathogenesis of AD begins with impaired synaptic function, which might result from the accumulation of Aβ peptide causing memory loss, especially in the early stages of AD. On the other hand, insulin resistance in peripheral tissues and organs, coupled with relative insulin deficiency, produces T2DM. At the same time, central insulin resistance and reduced brain insulin levels that might have resulted from T2DM produce the accumulation of Aβ and, as a result, AD. Cerebrovascular inflammation, along with accumulation of Aβ, disrupts synaptic function, initiating the AD pathological syndrome [11]. It is worth reconsidering the similarity between AD and T2DM taking into account the possible role of the fatty-acid receptor G protein-coupled receptor 40 (GPR40). GPR40 induces Ca2+ mobilization, with dual effects in the brain, for adult neurogenesis and stroke, as well as in the pancreas, for insulin secretion and T2DM. GPR40 is expressed in neurons of the central nervous system and in β-cells of the pancreas [13,14]. GPR40-deficient β-cells secrete less insulin in response to free fatty acids, suggesting that GPR40 influences insulin secretion. This protein is also related to neurogenesis in the adult primate hippocampus [15,16]. However, as most of the studies on this subject have been conducted in animals, much more research with humans is needed to find out whether GPR40 could help to explain the relations between the brain and pancreas, as well as T2DM as a risk factor for AD [14,17]. At the moment, as noted by Yamashima [14], whether and how GPR40 expression in the central nervous system could have functional effects is not well understood, but the beneficial (e.g. glucose tolerance, brain-lipid sensing, learning and memory) as well as detrimental (e.g. type 2 diabetes mellitus, stroke, retinopathy) roles of this protein must be considered to explain possible divergent results in further human studies. Do T2DM and AD Patients Suffer Similar Cognitive Declines? Given the similarities between the metabolic and vascular changes occurring in the brains of diabetic and AD patients [8][9][10], a relevant question is whether several main cognitive abilities that are critical for independent living, such as episodic memory and executive functions, are similarly impaired in these two chronic conditions. Although there are many individual differences, episodic memory deficits appear early in the course of AD [2,18]. Results from several recent studies have shown impaired episodic (explicit, voluntary) memory and spared implicit (unconscious, involuntary) memory in both clinical conditions [19][20][21][22][23]. Executive control processes are affected early in the course of AD and other dementias [24,25], as well as in T2DM [21][22][23][26][27][28]. Previous studies comparing the cognitive functioning of diabetic patients and cognitively healthy older adults showed conflicting results.
While some researchers did not find impairments in T2DM patients compared to normal aging [29], others reported impairments in attention, executive functions, processing speed, and episodic memory [30]. In view of the conflicting results and the scarcity of studies relating the cognitive functioning of AD patients with that of T2DM patients, Redondo et al. compared the performance of groups of AD patients and patients suffering from T2DM in tasks designed to assess episodic (explicit) and implicit (unconscious) memory [22,28], and in others designed to assess speed of processing, maintenance and updating in verbal and visuospatial working memory, and executive functions, in order to compare their performance with that of cognitively healthy older adults [28]. The results showed a clear dissociation between implicit and explicit memory: intact implicit memory in both clinical groups, similar to that of healthy older adults, and impaired episodic (explicit) memory in both clinical groups, especially in AD patients. There was a negative trend in episodic memory from healthy elders to T2DM patients and from them to AD patients [22]. At the same time, despite the good glycemic control of the T2DM patients, visuospatial and verbal working memory (assessed with n-back tasks) also showed significant impairments in AD and T2DM, but larger in AD patients. Working memory also declined in T2DM, though less than in AD patients. Executive control assessed with the Wisconsin Card Sorting Test (WCST) showed similar declines in both groups of patients. These results were in line with previous AD studies [24,[31][32][33][34] and with other T2DM studies [35][36][37]. Neuropsychologists use the WCST to assess the integrity of frontal lobe functions, and perseveration in the same response is interpreted as a failure of executive control. An increase in the number of perseverative errors suggests shrinking of the prefrontal cortex. Recent research indicates the existence of a shared pathophysiological link between glycemic variability and AD, so routine screening of cognitive functions (especially long-term episodic memory and executive control functions) in T2DM patients seems necessary [17]. Longitudinal studies are necessary to investigate whether the episodic memory and working memory functions of diabetic patients deteriorate over time as much as those of AD patients. In short, T2DM patients, despite their good glycemic control, showed significant working memory declines, though, as occurred with episodic memory, not as large as those shown by AD patients [17,23]. The executive control of the diabetic patients, despite their low glycosylated hemoglobin levels, did not differ from that of the AD patients [24,28]. Conclusion The incidence of AD and T2DM represents a major public health problem in modern societies. The fact that T2DM patients suffer early cognitive deficits similar in certain ways to those suffered by AD patients needs to be taken into account when implementing new prevention and intervention programs especially designed for diabetic elders. On the other hand, more research is needed to better understand the interrelations between these two highly prevalent chronic diseases and the role that factors such as glycemic control and comorbidities play in cognitive functioning. Cognitive dysfunction has not been targeted by current management strategies for T2DM [38].
Neuropsychologists and clinicians need to take into account the decline of long-term episodic memory and executive control processes in T2DM, given their negative impact on treatment management. At the same time, the spared implicit memory of AD and T2DM patients could be used to support rehabilitation programs designed to teach them how to better manage the disease.
2019-05-11T13:06:37.512Z
2017-10-23T00:00:00.000
{ "year": 2017, "sha1": "fd670fce3abca26d92792b4a406d24ea75c4d0e4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4172/2161-0460.1000389", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "10ff94912fe675372cb06b813a526ec8f1901c03", "s2fieldsofstudy": [ "Psychology", "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
51027711
pes2o/s2orc
v3-fos-license
Universal and distortion-free entanglement concentration of multiqubit quantum states in the W class We propose a multipartite extension of Matsumoto and Hayashi's distortion-free entanglement concentration protocol, which takes $n$ copies of a general multipartite state and, via local measurements, produces a maximally-entangled multipartite state between local spaces of dimensions $\sim 2^{n E_i}$, where $E_i$ are the local entropies of the input state. However, the extended protocol is generally not universal, in the sense that for the same measurement outcomes, the output state will still depend on the input state. Our main result is that when specialized to any state in the multiqubit W class, the protocol is also universal, so that as in the bipartite version, the output is a unique, maximally-entangled state for each given set of measurement outcomes. Our analysis brings to the forefront a new and interesting family of maximally-entangled multipartite states, which we term Kronecker states. A recurrence relation to obtain the coefficients of the W-class Kronecker states is also given. I. INTRODUCTION An important problem in quantum information is to determine the extent to which the entanglement of many copies of a quantum state shared by several parties can be concentrated, with minimal loss, into a more compressed, maximally-entangled form, using only local operations and classical communication (LOCC). In the bipartite case, the problem was essentially solved for pure states in the asymptotic limit by Bennett et al. [1,2], who proved that entanglement concentration into copies of a basic entanglement unit, the EPR pair, can be achieved reversibly at an optimal asymptotic rate given by the so-called entanglement entropy. However, a similar approach to optimal entanglement concentration in the multipartite setting has proved to be considerably more challenging. As opposed to the bipartite case, multipartite entangled states cannot in general be reversibly transformed into EPR pairs [2], even asymptotically [3], and the quest for the so-called minimal reversible entanglement generating set (MREGS) [2], which would presumably serve as the multipartite entanglement units, has so far proved elusive [4,5]. Given these difficulties, it may be worth exploring multipartite concentration schemes where the compressed target states are not necessarily tensor copies of fundamental entanglement units. In the bipartite case, one such scheme is the universal distortion-free entanglement concentration protocol of Matsumoto and Hayashi (MH) [6,7]. In contrast to standard concentration schemes, the protocol extracts from $n$ copies of any bipartite state $|\psi\rangle$, with entanglement entropy $E(\psi)$, a single copy of a bipartite state that is always guaranteed to be maximally entangled, of Schmidt rank $\sim 2^{nE(\psi)}$ asymptotically. This makes it attractive to explore a multipartite generalization of the protocol, as concentration now refers to the local ranks of a target maximally-entangled state, a notion that is more portable to the multipartite setting than that of the singlet or, more generally, MREGS yield. In addition, the MH protocol is adapted to a symmetry that is also present in the multipartite case, namely that of the tensor product $|\psi\rangle^{\otimes n}$ under permutations of the copies.
The symmetry, which is made explicit via Schur-Weyl duality, enters the protocol through local projections onto subspaces transforming irreducibly under the symmetric group $S_n$; this guarantees that the protocol is universal, in that no information about the Schmidt basis of the state $|\psi\rangle$ is required, and distortion-free, in that the targets are always the maximally-entangled $S_n$-invariant states residing in tensor products of $S_n$ irreducible modules. Such $S_n$-invariant states and the symmetry-adapted local projections can easily be extended to a multipartite setting, although whether the resulting protocol remains universal is a question that would need to be revisited. The purpose of this paper is to examine the extension of the MH concentration protocol to the multipartite setting, with the question of universality in mind. Using Schur-Weyl duality and $S_n$-symmetry-adapted local measurements, we first show that, as in the bipartite case, it is possible to obtain from $n$ copies of an $N$-party state $|\psi\rangle$ a state residing in the $S_n$-invariant sector of certain tensor products of irreducible $S_n$ modules. Such states, which we term Kronecker states, are maximally entangled in the multipartite sense, as defined by [8], and so in particular are maximally entangled when viewed as bipartite states between a single party and the rest. Also, as in the bipartite protocol, the rate exponents characterizing the asymptotic local ranks of these output states are given by the corresponding marginal entropies of $|\psi\rangle$. However, in contrast to the bipartite case, we also show that generically, there is a residual indeterminacy in the output state that makes the protocol non-universal. Our second and main result identifies a class of multiqubit states for which this residual randomness is absent, namely the class of states that are SLOCC-equivalent to the W state
$$|W\rangle \propto |10\ldots 0\rangle + |01\ldots 0\rangle + \cdots + |00\ldots 1\rangle. \qquad (1)$$
Our main conclusion is therefore that the universality of the bipartite protocol extends to any state in the W class. The proof of our main result is based on a unique simplification that ensues when the set of SLOCC covariants, which can in principle be used to separate SLOCC orbits, is restricted to the W class. For this class, the SLOCC covariants can be computed explicitly. The explicit knowledge of the W-class covariants makes it possible to efficiently compute the coefficients of the corresponding Kronecker vectors. The paper is structured as follows. In Section II, we briefly review the multilocal Schur-Weyl decomposition of the $n$-fold tensor product of multipartite Hilbert spaces. In Section III we introduce the Kronecker states as invariant states in a tensor product of $S_n$ irreducible representations. In Section IV we discuss the multipartite extension of the MH protocol, and show that for general SLOCC classes the protocol is not universal. Theorem 1 in Section V states our main result: the universality of the extended MH protocol when restricted to the W class. Section VI discusses the machinery of SLOCC covariants and, as shown in Theorem 2, their restriction to the W class; Theorem 1 then follows as a straightforward consequence of this second theorem. In Section VII we show how to explicitly compute the various states that appear in the Schur-Weyl decomposition of an $n$-fold tensor product of a given W-class state, according to the results of Theorem 1.
In particular, we introduce the recurrence relation from which the W-class Kronecker state coefficients can be computed efficiently. Some conclusions are given in Section VIII. II. MATHEMATICAL PRELIMINARIES We begin by developing the appropriate symmetry-adapted decomposition for tensor products $|\psi\rangle^{\otimes n}$ of an $N$-partite state $|\psi\rangle \in \mathcal{H}$, where $\mathcal{H} = \bigotimes_{i=1}^N \mathcal{H}^{(i)}$ and $i$ labels the parties; for simplicity we assume that for all $i$, $\mathcal{H}^{(i)} \cong \mathbb{C}^d$ for some $d \geq 2$. For a single copy, reversible local quantum operations are described by elements $g$ of the local group $GL_d^{\times N}$, where $g = \otimes_{i=1}^N g^{(i)}$ and the $g^{(i)}$ are elements of $GL_d$, the complex linear group in $d$ dimensions. Two states $|\psi\rangle, |\phi\rangle \in \mathcal{H}$ are then said to be SLOCC equivalent [2] if $|\phi\rangle = g|\psi\rangle$ for some $g \in GL_d^{\times N}$; as usual, local unitary (LU) equivalence refers to equivalence under $g \in U_d^{\times N}$ ($g^{(i)} \in U_d$). For multiple copies, the corresponding space $\mathcal{H}^{\otimes n}$ can also be viewed as an $N$-partite system, with local spaces $(\mathcal{H}^{(i)})^{\otimes n}$. The action of the local group $GL_d^{\times N}$ can then be extended to $\mathcal{H}^{\otimes n}$, with each $g^{(i)}$ acting as $(g^{(i)})^{\otimes n}$ on its corresponding local space. In addition, there is a natural action of the permutation group $S_n$ on any given local space, which on a product basis is given by $\pi : |e_1 e_2 \cdots e_n\rangle \mapsto |\pi^{-1}(e_1 e_2 \cdots e_n)\rangle$. By Schur-Weyl duality [9], $(\mathbb{C}^d)^{\otimes n}$ decomposes into $GL_d \times S_n$ irreducible representations (irreps) as
$$(\mathbb{C}^d)^{\otimes n} \cong \bigoplus_{\lambda \,\vdash_d\, n} V_\lambda \otimes [\lambda], \qquad (2)$$
where $V_\lambda$ and $[\lambda]$ are the $GL_d$ and $S_n$ irreps respectively, and where both representations are labeled by integer partitions $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_{d'})$ of $n$ with at most $d$ parts (denoted by $\lambda \vdash_d n$), with $\sum_{i=1}^{d'} \lambda_i = n$, $\lambda_i \geq \lambda_{i+1} > 0$, and $d' \leq d$ for $GL_d$. We note that for large $n$ and fixed $d$, $\dim V_\lambda$ grows polynomially in $n$ [10], whereas $\dim[\lambda]$ grows exponentially, with a rate exponent asymptotically approaching the Shannon entropy $H(\bar\lambda)$ of the so-called reduced partition $\bar\lambda = \lambda/n$; more precisely [6],
$$\frac{1}{n} \log \dim[\lambda] \to H(\bar\lambda). \qquad (3)$$
Now, applying Schur-Weyl duality to each of the local spaces in $\mathcal{H}$, we obtain the decomposition
$$\mathcal{H}^{\otimes n} \cong \bigoplus_{\boldsymbol\lambda} \bigotimes_{i=1}^N \big( V_{\lambda^{(i)}} \otimes [\lambda^{(i)}] \big), \qquad (4)$$
where $\boldsymbol\lambda = (\lambda^{(1)}, \cdots, \lambda^{(N)})$ with all $\lambda^{(i)} \vdash_d n$, which achieves a decomposition of $\mathcal{H}^{\otimes n}$ into irreps of $GL_d^{\times N} \times S_n^{\times N}$. However, the tensor product $|\psi\rangle^{\otimes n}$ is invariant when the same permutation is applied to all parties. This means that in fact
$$|\psi\rangle^{\otimes n} \in \bigoplus_{\boldsymbol\lambda} \Big( \bigotimes_{i=1}^N V_{\lambda^{(i)}} \Big) \otimes [\boldsymbol\lambda]^{S_n}, \qquad (5)$$
where $[\boldsymbol\lambda]^{S_n}$ denotes the $S_n$-invariant subspace of $[\boldsymbol\lambda] = [\lambda^{(1)}] \otimes \cdots \otimes [\lambda^{(N)}]$, whose dimension is the generalized Kronecker coefficient
$$k_{\boldsymbol\lambda} = \frac{1}{n!} \sum_{\pi \in S_n} \prod_{i=1}^N \chi_{\lambda^{(i)}}(\pi), \qquad (6)$$
where $\chi_\lambda(\pi)$ are the $S_n$ characters. For $N = 2$, $k_{\lambda\mu} = \delta_{\lambda\mu}$ from $S_n$ character orthogonality, and for $N > 3$, $k_{\boldsymbol\lambda}$ can be expanded in terms of the standard ($N = 3$) Kronecker coefficients $k_{\lambda\mu\nu}$ [11], using the $S_n$ character formula $\chi_\lambda(\pi)\chi_\mu(\pi) = \sum_\nu k_{\lambda\mu\nu}\chi_\nu(\pi)$. For fixed $N$ and $d$, the generalized Kronecker coefficient grows polynomially in $n$. This follows from the asymptotics of the standard Kronecker coefficients [12] and of the number of partitions of $n$ with a fixed number of parts [13], both of which are polynomial in $n$. III. KRONECKER STATES We will henceforth refer to any normalized state $|K_{\boldsymbol\lambda}\rangle \in [\boldsymbol\lambda]^{S_n}$ as a Kronecker state. When considered as entangled states in the $N$-party tensor product space $[\boldsymbol\lambda]$, Kronecker states are the natural distortion-free target states in the multipartite generalization of the MH protocol, as follows from the lemma: Lemma 1.- For any normalized vector $|K_{\boldsymbol\lambda}\rangle \in [\boldsymbol\lambda]^{S_n}$, let $\rho_i(K_{\boldsymbol\lambda}) \in \mathcal{L}([\lambda^{(i)}])$ be the one-party density matrix obtained by tracing $|K_{\boldsymbol\lambda}\rangle\langle K_{\boldsymbol\lambda}|$ over all $[\lambda^{(j)}]$ with $j \neq i$. Then all $\rho_i(K_{\boldsymbol\lambda})$ are multiples of the identity. The lemma follows from the $S_n$ invariance of Kronecker vectors, which extends to the reduced matrices $\rho_i(K_{\boldsymbol\lambda})$, together with Schur's lemma.
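As a quick numerical check of these dimension counts (a standalone sketch using the standard hook length and hook content formulas, not code from the paper), one can verify that $\frac{1}{n}\log_2 \dim[\lambda]$ approaches $H(\bar\lambda)$ while $\dim V_\lambda$ remains small for fixed $d$:

```python
# Standalone check of the Schur-Weyl dimension counts (standard formulas).
from math import factorial, log

def hooks(lam):
    """Yield the hook length of each cell of the partition lam."""
    conj = [sum(1 for row in lam if row > c) for c in range(lam[0])]
    for i, row in enumerate(lam):
        for j in range(row):
            yield (row - j) + (conj[j] - i) - 1

def dim_Sn(lam):
    """dim [lambda]: hook length formula, n! / product of hooks."""
    prod = 1
    for h in hooks(lam):
        prod *= h
    return factorial(sum(lam)) // prod

def dim_GL(lam, d):
    """dim V_lambda for GL_d: hook content formula."""
    num, den = 1, 1
    conj = [sum(1 for row in lam if row > c) for c in range(lam[0])]
    for i, row in enumerate(lam):
        for j in range(row):
            num *= d + j - i
            den *= (row - j) + (conj[j] - i) - 1
    return num // den

lam, n = (60, 40), 100
H = -sum(p / n * log(p / n, 2) for p in lam)
print(log(dim_Sn(lam), 2) / n, H)  # rate exponent vs. Shannon entropy H(0.6, 0.4)
print(dim_GL(lam, 2))              # = lam[0] - lam[1] + 1 = 21 for d = 2
```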
As a consequence of Lemma 1, all Kronecker states share the unique properties that follow from having maximally mixed marginals: from the Kempf-Ness theorem [14,15], any two such states are either LU-equivalent or else SLOCC-inequivalent; they are maximally entangled in the multipartite sense of belonging to the maximally entangled set (MES) of states as defined in [8] (up to LU equivalence); clearly, they are also maximally entangled with respect to any bipartition involving one party and the rest, with entanglement entropy scaling with $n$ as $E_i(K_{\boldsymbol\lambda}) \simeq n H(\bar\lambda^{(i)})$ asymptotically, as follows from (3). IV. GENERALIZED MH PROTOCOL The multipartite extension of the MH protocol is based on equation (5). Choosing an orthonormal basis $\{|K_{\boldsymbol\lambda,s}\rangle\}$ ($s = 1, \cdots, k_{\boldsymbol\lambda}$) for each $[\boldsymbol\lambda]^{S_n}$, the general form for the expansion of $|\psi\rangle^{\otimes n}$ is then
$$|\psi\rangle^{\otimes n} = \sum_{\boldsymbol\lambda} \sum_{s=1}^{k_{\boldsymbol\lambda}} |\Phi_{\boldsymbol\lambda,s}(\psi)\rangle \otimes |K_{\boldsymbol\lambda,s}\rangle, \qquad (7)$$
where the $|\Phi_{\boldsymbol\lambda,s}(\psi)\rangle$ are unnormalized states spanning a subspace of $V_{\boldsymbol\lambda} = \bigotimes_i V_{\lambda^{(i)}}$ of dimension at most $k_{\boldsymbol\lambda}$. As in the bipartite protocol, each party then performs a measurement of the set of projectors $\{P_{\lambda^{(i)}} \mid \lambda^{(i)} \vdash n\}$ onto the subspaces $V_{\lambda^{(i)}} \otimes [\lambda^{(i)}]$ of the local product spaces $(\mathcal{H}^{(i)})^{\otimes n}$. This implements a global measurement of the projectors appearing in (4). Thus, $|\psi\rangle^{\otimes n}$ is projected onto one of the terms in (7),
$$\frac{1}{\sqrt{p(\boldsymbol\lambda|\psi)}} \sum_{s=1}^{k_{\boldsymbol\lambda}} |\Phi_{\boldsymbol\lambda,s}(\psi)\rangle \otimes |K_{\boldsymbol\lambda,s}\rangle, \qquad (8)$$
with probability $p(\boldsymbol\lambda|\psi) = \sum_s \langle\Phi_{\boldsymbol\lambda,s}(\psi)|\Phi_{\boldsymbol\lambda,s}(\psi)\rangle$ (9). While this probability will be hard to compute in general, it suffices by the Keyl-Werner theorem [16] that the marginal probabilities $p(\lambda^{(i)}|\psi)$ exhibit asymptotic concentration of measure around $\bar\lambda^{(i)} \approx r^{(i)}$, where $r^{(i)}$ is the spectrum of the partial density matrix $\rho_i$ of $|\psi\rangle$ for the local Hilbert space $\mathcal{H}^{(i)}$, with the eigenvalues arranged in non-increasing order. Extending the estimation theorem of [10] to the $N$-party case, we then have that for any ball $B_\epsilon(r) = \{r' : |r'^{(i)} - r^{(i)}|_1 < \epsilon, \;\forall i\}$ around the local spectra $r = (r^{(1)}, \cdots, r^{(N)})$, there is an $n_0$ beyond which the reduced partitions fall inside $B_\epsilon(r)$ with probability arbitrarily close to one (10). Consequently, the projection (8) yields $\bar{\boldsymbol\lambda}$ arbitrarily close to $r$ with unit probability as $n \to \infty$. Thus, across any bipartition involving one party and the rest, the per-copy entanglement yields $E_i(K_{\boldsymbol\lambda,s})/n$ of the states $|K_{\boldsymbol\lambda,s}\rangle$ resulting from (8) asymptotically tend to the corresponding bipartite entanglement entropies $E_i(\psi)$ of $|\psi\rangle$. There is a caveat, however. The universality of the bipartite MH protocol rests on the fact that the state resulting from (8) is a separable state $|\Phi_\lambda\rangle \otimes |K_\lambda\rangle$, where the target $|K_\lambda\rangle$ is the maximally entangled state between the spaces $[\lambda^{(1)}] = [\lambda^{(2)}]$. Thus, the target state is readily obtained by simply discarding the $V_{\boldsymbol\lambda}$ space, in which the state $|\Phi_\lambda\rangle$ has $O(\log n)$ entanglement. From the viewpoint of the multipartite protocol, this is due to the fact that the bipartite Kronecker coefficient is $k_{\boldsymbol\lambda} \leq 1$. But more generally, $k_{\boldsymbol\lambda} > 1$ for $N > 2$, so the projection (8) will generally yield a state with residual entanglement between $V_{\boldsymbol\lambda}$ and $[\boldsymbol\lambda]^{S_n}$ of Schmidt rank at most $k_{\boldsymbol\lambda}$, and therefore $O(\log n)$ entanglement entropy. Figure 1 illustrates this fact for the tripartite GHZ class; indeed, we have verified numerically that the residual entanglement has the maximal Schmidt rank $k_{\boldsymbol\lambda}$ for this class of states (see Appendix A for details on the techniques used to obtain these Schmidt coefficients). A pure Kronecker vector can therefore only be obtained by performing an additional set of local measurements on the individual $V_{\lambda^{(i)}}$ spaces in order to break the entanglement between $V_{\boldsymbol\lambda}$ and $[\boldsymbol\lambda]^{S_n}$.
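The residual Schmidt coefficients plotted in Figure 1 can be obtained, for any projected block, from a singular value decomposition of its coefficient matrix. The following numpy sketch (with an arbitrary toy matrix rather than data from the paper) illustrates the diagnostic:

```python
# Illustrative diagnostic: residual Schmidt spectrum between V_lambda
# and the Kronecker sector, from the coefficient matrix c[a, s] of a
# projected state sum_{a,s} c[a, s] |a> (x) |K_s>.
import numpy as np

def residual_schmidt(c: np.ndarray) -> np.ndarray:
    """Squared Schmidt coefficients, sorted in decreasing order."""
    c = c / np.linalg.norm(c)
    s = np.linalg.svd(c, compute_uv=False)
    return np.sort(s ** 2)[::-1]

# Universality of the block requires a single nonzero coefficient;
# this toy rank-2 example has two.
print(residual_schmidt(np.array([[0.8, 0.1],
                                 [0.0, 0.5]])))
```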
For each set of outcomes of these additional local measurements, the resulting state will be a linear superposition of the $|K_{\boldsymbol\lambda,s}\rangle$, with coefficients that will generally depend on $|\psi\rangle$. This means that in general, the protocol is not universal, since we can only produce Kronecker states randomly from an ensemble that depends on $|\psi\rangle$ and the outcomes of the additional measurement. Moreover, as Kronecker states are generically not locally interconvertible, it will generally be impossible to obtain, by local means, a unique target Kronecker state for each set of outcomes in the total measurement sequence. V. UNIVERSALITY IN THE W-CLASS Interestingly, it turns out that for states in certain nontrivial SLOCC classes, the Schmidt rank of the projected state (8) can be one even if $k_{\boldsymbol\lambda} > 1$. Our main result is that this is the case for states in the $N$-qubit W class: Theorem 1.- Let $|\psi\rangle$ be a state in the multipartite W SLOCC class, so that $|\psi\rangle = g|W\rangle$ for some $g \in GL_2^{\times N}$. Then, the multilocal Wedderburn decomposition of $|\psi\rangle^{\otimes n}$ simplifies to the form
$$|\psi\rangle^{\otimes n} = \sum_{\boldsymbol\lambda \in \Lambda_n^{(W)}} |\Phi_{\boldsymbol\lambda}(\psi)\rangle \otimes |K_{\boldsymbol\lambda}^{(W)}\rangle, \qquad (11)$$
where $\Lambda_n^{(W)}$ is the set of $\boldsymbol\lambda$ with all $\lambda^{(i)} \vdash_2 n$ whose reduced second rows $\bar\lambda_2^{(i)}$ satisfy the spectral conditions (12), and each $|K_{\boldsymbol\lambda}^{(W)}\rangle$ is a unique Kronecker vector in $[\boldsymbol\lambda]^{S_n}$ that is common to the whole W class. Figure 1. Ranked Schmidt coefficients $\gamma_i$ of the residual state $\sum_s |\Phi_{\boldsymbol\lambda,s}(\psi)\rangle \otimes |K_{\boldsymbol\lambda,s}\rangle$ as a function of $n$, for typical partitions (here $\lfloor x \rceil$ is the closest integer to $x$). The graph suggests that the largest Schmidt coefficient $\gamma_1$ converges to a numerical value lower than 1, and hence that the protocol is not even approximately universal in the asymptotic limit. Thus, Matsumoto and Hayashi's universal distortion-free protocol extends mutatis mutandis to all multiqubit states in the W class, with a unique target maximally entangled multipartite state $|K_{\boldsymbol\lambda}^{(W)}\rangle$ obtained in the projection (8) (Fig. 2 provides a graphical representation of one such state). Note that the extension encompasses all entangled states in the case $N = 2$, which are SLOCC-equivalent to the two-qubit W state. The conditions in (12) ensure that the support of $p(\boldsymbol\lambda|\psi)$ is compatible with the correspondence between partitions and marginal spectra: replacing the $\bar\lambda^{(i)}$ by the spectra $r^{(i)}$, the leftmost inequality in (12) gives the marginal spectral condition satisfied by all $N$-qubit states [17], while the rightmost inequality is the generalization of an additional spectral condition satisfied by W-class states [18]. Theorem 1 is a corollary of a second theorem we present concerning the restriction to the W class of the ring of so-called multiqubit SLOCC covariants [15], which are closely related to the possible states $|\Phi_{\boldsymbol\lambda,s}(\psi)\rangle$ that can appear in the decomposition (7). Figure 2. Graphical representation of a Kronecker state for $n = 7$ and a triplet for which $k_{\boldsymbol\lambda} = 2$. The labels correspond to the elements of each $[\lambda^{(i)}]$ basis, ordered lexicographically. Each sphere represents the coefficient of the corresponding product basis element, with the radius representing the magnitude and the color representing the sign. VI. W CLASS SLOCC COVARIANTS To establish the connection with the SLOCC covariants, let $|\psi\rangle$ and $|\psi'\rangle$ be unnormalized states such that $|\psi'\rangle = g|\psi\rangle$ for some $g \in SL_2^{\times N}$ (all $g^{(i)}$ with unit determinant). Then, the $|\Phi_{\boldsymbol\lambda,s}(\psi)\rangle$ in (7) satisfy $|\Phi_{\boldsymbol\lambda,s}(\psi')\rangle = D_{\boldsymbol\lambda}(g)|\Phi_{\boldsymbol\lambda,s}(\psi)\rangle$ (13), where $D_{\boldsymbol\lambda}$ is the $GL_2^{\times N}$ representation matrix for $V_{\boldsymbol\lambda}$ and we omit the index $s$.
Now, a $GL_2$ irrep $V_\lambda$, when restricted to $SL_2$, is isomorphic to the space of homogeneous polynomials $P_\nu(x)$ of degree $\nu = \lambda_1 - \lambda_2$ in the indeterminates $x \equiv (x_0, x_1)^T$, with coefficients transforming equivalently to components of $V_\lambda$ vectors under the action $(g, f(x)) \mapsto f(g^T x)$ [11]; specifically, under the correspondence (14), coefficients in the monomial basis $m_{\nu,\omega}(x)$ transform with the same matrix as those of $V_\lambda$ vectors with respect to an $SL_2$ highest-weight basis $|\nu, \omega\rangle$ (the standard angular momentum basis $|j, m\rangle$ with $j = \nu/2$ and $m = \nu/2 - \omega$). Thus we may associate to any $|\Phi_{\boldsymbol\lambda}(\psi)\rangle$ a so-called SLOCC covariant $I_\Phi(\psi, x)$, a multihomogeneous polynomial in $\psi$ and $N$ auxiliary variables $x^{(i)}$ transforming as $(g^{(i)})^T x^{(i)} = x'^{(i)}$ [15]. The covariant is of multidegree $(n, \boldsymbol\nu)$, where $n$ is the degree in $\psi$ and $\boldsymbol\nu$ is the tuple of degrees in the auxiliary variables $x$, with $\nu_i = \lambda_1^{(i)} - \lambda_2^{(i)}$. Now, it is known that the ring of SLOCC covariants is finitely generated, and that a generating set can be obtained in principle using Cayley's Omega process [19], also known as the process of iterated transvectants, adapted to the multiqubit case [15]. The process starts from the base form associated with the state, and iteratively generates new covariants from old ones through their transvectant (17), $(F, G)^{(l)} \propto \Omega^l\, F(x)G(y)\big|_{y=x}$, where the $\Omega$ operator is (18), $\Omega = \frac{\partial^2}{\partial x_0 \partial y_1} - \frac{\partial^2}{\partial x_1 \partial y_0}$. We will show that from the base form corresponding to any state in the W class, the process generates at most one linearly independent covariant for each multidegree $(n, \boldsymbol\nu)$. To this end, we use the fact that any state in the W class is, up to LU transformations, completely specified by its marginal spectra [20], and is LU-equivalent to the state [21]
$$|\psi\rangle = \sqrt{c^{(0)}}\,|\mathbf{0}\rangle + \sum_{i=1}^{N} \sqrt{c^{(i)}}\,|\mathbf{1}_i\rangle, \qquad (19)$$
where $\mathbf{0}$ is the sequence of all zeros, $\mathbf{1}_i$ is the sequence with a "1" at position $i$ and zeros elsewhere, and the $c^{(i)}$ are real with $\sum_{i=0}^N c^{(i)} = 1$. Thus, for states in the class, it suffices to use the base form $A_\psi(x)$ defined in (20). We then have: Theorem 2.- Any non-vanishing covariant of multidegree $(n, \boldsymbol\nu)$ generated from the base form $A_\psi$ through the process of iterated transvectants must satisfy the conditions (21), where $e \equiv (1, 1, \ldots, 1)$. To prove the theorem, we use the fact that if $F(x)$ and $G(x)$ are multihomogeneous of multidegrees $f$ and $g$ in the auxiliary variables, then their transvectant (17) will also be multihomogeneous, of multidegree $f + g - 2l$. Following Olver [19], we can then adapt the transvectant to functions of the projective coordinates associated with multihomogeneous functions. Explicitly, for any $F(x)$ of multidegree $f$, its projective form $F(p)$ is defined by (22), so in particular the base form (20) has the projective form (23). For simplicity, consider a transvectant involving a single pair of auxiliary variables; then following [19], the projective transvectant is given by (24), with combinatorial coefficients $c(l, k, f, g)$, where the proportionality constant is numerical and may be equal to zero. The multivariable generalization of this result is straightforward. Hence, the projective forms of all covariants generated from the base form (20) through the Omega process are always proportional to a power of $A_\psi$. Re-expressing the covariants in their homogeneous forms, we find that any covariant derived from the base form $A_\psi(x)$ must then be of the form (25) for some $l$ and $m \geq l$, where $l = \sum_i l_i$. This can be checked inductively by noting that (20) is of this form, with $(l = 0, m = 1)$, and that the transvectant preserves the form. Equation (21) in Theorem 2 is obtained by setting $m = n$, $\boldsymbol\nu = ne - 2l$, and $w = n - l$ to match the degrees.
Solving for the $l_i$, we obtain $l_i = (n - \nu_i)/2$ and $w$ as defined in the theorem. Finally, condition i) follows from the fact that the $l_i$ are integers, and conditions ii) and iii) from the fact that iterated transvectants starting from $A_\psi$ cannot generate negative powers of $x_0^{(i)}$ or $x_1^{(i)}$. Theorem 1 then follows from the correspondence between the states $|\Phi_{\boldsymbol\lambda}(\psi)\rangle \in V_{\boldsymbol\lambda}$ and covariants of degrees $(n, \boldsymbol\nu)$. The product form (11) is a consequence of there being at most one linearly independent covariant for each $\nu_i = \lambda_1^{(i)} - \lambda_2^{(i)}$. VII. EXPLICIT DECOMPOSITION OF THE W CLASS Having established the decomposition (11) for multiqubit W-class states, in this section we address the question of the explicit form of the states $|\Phi_{\boldsymbol\lambda}(\psi)\rangle$ and the target Kronecker states $|K_{\boldsymbol\lambda}^{(W)}\rangle$. We shall work in the Schur-Weyl basis that arises naturally from the so-called Schur transform [22], which we briefly describe in the next subsection. The coefficients of the state $|\Phi_{\boldsymbol\lambda}(\psi)\rangle$ are readily obtained from the results of Theorem 2, up to normalization (Eq. (33)). This explicit form can then be used to obtain a recurrence relation for the coefficients of $|K_{\boldsymbol\lambda}^{(W)}\rangle$ using the recurrence relations of the Schur transform (Eqs. (39) and (42)). VII.2. The states $|\Phi_{\boldsymbol\lambda}(\psi)\rangle$ As discussed earlier, any state in the W class is LU-equivalent to a state $|\psi\rangle$ of the form (19), where the coefficients $c^{(i)}$ can be regarded as implicit functions of the marginal spectra of the state. Hence, up to LU equivalence, the state $|\Phi_{\boldsymbol\lambda}(\psi)\rangle$ is proportional to the state in correspondence with the covariant $I_\psi^{(n,\boldsymbol\nu)}$ in (21), under the mapping (14) between covariants and $SL_2$ basis elements. The mapping can also be expressed in terms of the $GL_2$ basis elements in the Schur-Weyl basis, by noting that the $SL_2$ weights are obtained by subtracting $\lambda_2$ from the $GL_2$ weights (27). Since $|\Phi_{\boldsymbol\lambda}(\psi)\rangle$ can only be determined from the covariant $I_\psi^{(n,\boldsymbol\nu)}$ up to a normalization, it will be convenient to define a fiducial, also unnormalized, state $|\tilde\Phi_{\boldsymbol\lambda}(\psi)\rangle$ through the correspondence (32), where $I_\psi^{(n,\boldsymbol\nu)}(x)$ is as defined in (21). Using the definition (20) of the base form $A_\psi(x)$, expanding as a polynomial in the auxiliary variables, and recalling that $\nu^{(i)} = \lambda_1^{(i)} - \lambda_2^{(i)}$, we obtain the expansion (33), where the sum is over all weights $\omega^{(i)}$ such that $\sum_{i=0}^N \omega^{(i)} = n$. Note that for the W state, $c^{(0)} = 0$ and $c^{(i)} = 1/\sqrt{N}$, and hence (35), where the sum is over weights such that $\sum_{i=1}^N \omega^{(i)} = n$. To determine the state $|\Phi_{\boldsymbol\lambda}(\psi)\rangle$, it suffices to find the proportionality constant $\eta_{\boldsymbol\lambda}$ such that $|\Phi_{\boldsymbol\lambda}(\psi)\rangle = \eta_{\boldsymbol\lambda}|\tilde\Phi_{\boldsymbol\lambda}(\psi)\rangle$, which is independent of the state $|\psi\rangle$; therefore, the constant can be expressed in terms of quantities involving the W state (36), with $Z_{\boldsymbol\lambda}(\psi) = \||\tilde\Phi_{\boldsymbol\lambda}(\psi)\rangle\|^2$ and $p(\boldsymbol\lambda|\psi) = \||\Phi_{\boldsymbol\lambda}(\psi)\rangle\|^2$. Thus, since $Z_{\boldsymbol\lambda}(\psi)$ and $Z_{\boldsymbol\lambda}(W)$ can be computed from (33) and/or (35), the constant $\eta_{\boldsymbol\lambda}$, and hence the probabilities $p(\boldsymbol\lambda|\psi) = \||\Phi_{\boldsymbol\lambda}(\psi)\rangle\|^2$, can be obtained for a general state once the corresponding probability $p(\boldsymbol\lambda|W)$ for the W state is known. This probability can in principle be obtained from the results of the next subsection. VII.3. The Kronecker states Let $|\Phi_{\boldsymbol\lambda}(W)\rangle = \eta_{\boldsymbol\lambda}|\tilde\Phi_{\boldsymbol\lambda}(W)\rangle$, with $\eta_{\boldsymbol\lambda}$ as defined in (36). Then, the Schur-Weyl decomposition of $|W\rangle^{\otimes n}$ can be written in terms of that of $|W\rangle^{\otimes n-1}$ (38). Letting $|\tilde K_{\boldsymbol\lambda}\rangle = \sum_q \tilde K_{\boldsymbol\lambda,q}|\boldsymbol\lambda, q\rangle$, using the expansion (35) and the recurrence relation (30), we obtain the recurrence relation (39) between the coefficients of $|\tilde K_{\boldsymbol\lambda}\rangle$ for $n$ in terms of those for $n - 1$, where primed and unprimed quantities are related as in (30) and where $B_W = \{\mathbf{1}_1, \cdots, \mathbf{1}_N\}$ is the set of binary sequences in the W state.
Note that this equation must be independent of the weights $\omega$, provided the weights satisfy the condition $\sum_{i=1}^N \omega^{(i)} = n$ of expansion (35). Using (27) and (34), we can show that, for any given party, the ratio (41) holds, where $\lambda' = \lambda - (1 - q_n, q_n)$ and $\omega' = \omega - s_n$; we therefore see that the numerator differs from 1 only when $s_n = 1$. Substituting into (40) and using the facts that $s_n$ runs over all sequences in which only one of the entries has $s_n = 1$ and that $\sum_{i=1}^N \omega^{(i)} = n$, we finally obtain the proportionality constant $F_{\boldsymbol\lambda,q}$ in the recurrence relation (39), given by (42), where it is understood that the coefficient vanishes whenever the denominator vanishes. As expected, this coefficient is independent of the weights. Once $|\tilde K_{\boldsymbol\lambda}\rangle$ is obtained from the recurrence relation (39), $\eta_{\boldsymbol\lambda}$ follows from its norm, and the states $|\Phi_{\boldsymbol\lambda}(\psi)\rangle$ and $|K_{\boldsymbol\lambda}\rangle$ are then completely determined, as are the probabilities $p(\boldsymbol\lambda|\psi)$. An alternative method to compute these probabilities exactly is presented in Appendix B. Figure 2 illustrates the set of coefficients obtained using (39) and (42) for $N = 3$ and $n = 7$. Some explicit coefficient values for $N = 3$ and $N = 4$ and $n \leq 5$ are also given in the supplementary material. VIII. CONCLUSION In summary, we have shown that the multipartite extension of the MH protocol is able to produce maximally entangled multipartite states with exponentially large local ranks, described by asymptotic rates given by the von Neumann entropies of the reduced one-party density matrices of the state. We have also shown that while the multipartite protocol is generally not universal, it remains universal within the class of multiqubit W states. In proving our result, we have obtained the explicit form of all non-vanishing SLOCC covariants for multiqubit states in the W class, which for a given multidegree are unique up to a constant. Our result identifies in the Kronecker states $|K_{\boldsymbol\lambda}^{(W)}\rangle$ a new family of large-rank, maximally entangled multipartite states, the coefficients of which can be recursively computed with a simple algorithm. The interesting entanglement and combinatorial properties of these states may prove useful for quantum information tasks. Our main result establishes the universality of the multipartite MH protocol when restricted to the W class, and provides a way of computing all elements involved in the Schur-Weyl decomposition (11), including the probability $p(\boldsymbol\lambda|\psi) = \langle\Phi_{\boldsymbol\lambda}(\psi)|\Phi_{\boldsymbol\lambda}(\psi)\rangle$. Additionally, we provide in Appendix B an alternative formula to compute the probability $p(\boldsymbol\lambda|W)$, which can then be used to obtain $p(\boldsymbol\lambda|\psi)$ for a general W-class state using relation (36). However, none of these results are practical for further characterizing the asymptotic concentration of measure of $p(\boldsymbol\lambda|\psi)$ beyond what can be inferred from the Keyl-Werner theorem. It therefore remains an open question what the explicit form is of the rate function $R(\bar{\boldsymbol\lambda}|\psi) = \lim_{n\to\infty} \frac{1}{n}\log p(\boldsymbol\lambda|\psi)$ that exactly characterizes this concentration of measure, in the same way that the relative entropy $D(\bar\lambda\|r_\psi)$ does in the bipartite case. Looking further, our results suggest an intriguing connection between the SLOCC class of a general state $|\psi\rangle$ and the residual entanglement of the states $\sum_{s=1}^{k_{\boldsymbol\lambda}} |\Phi_{\boldsymbol\lambda,s}(\psi)\rangle \otimes |K_{\boldsymbol\lambda,s}\rangle$ arising in the MH multipartite protocol. We believe that a better understanding of this connection may shed additional light on the nonlocal properties of different SLOCC classes and their relation to the general problem of asymptotic interconvertibility of multipartite entangled states. ACKNOWLEDGMENTS AB would like to thank M. Christandl for helpful discussions.
Appendix A: Gram matrix and Schmidt coefficients In this appendix we show how to explicitly compute the Schmidt coefficients of the state $\sum_{s=1}^{k_{\boldsymbol\lambda}} |\Phi_{\boldsymbol\lambda,s}(\psi)\rangle \otimes |K_{\boldsymbol\lambda,s}\rangle$ for $N$-qubit GHZ-class states of the form (A1), with $0 < \alpha < 1$. These computations can be used to show that for GHZ states the Schmidt rank is indeed larger than one, as discussed in Section IV, and that the Schmidt coefficients, when arranged in decreasing value, appear to show an exponential decay that is independent of $n$, as shown in Figure 1. We begin with the $n$-th tensor product of $|\psi_{GHZ}\rangle$, which can be expanded in the product basis of the $N$ parties in terms of identical sequences $s$ (A2), where $\omega$ is the Hamming weight of each sequence. Now, performing a multilocal Schur transform, the expansion of the state in the multilocal Schur-Weyl basis becomes (A3), where $|\boldsymbol\lambda, \omega^{\times N}, q\rangle = \bigotimes_{i=1}^N |\lambda^{(i)}, \omega, q^{(i)}\rangle$ and we use the notation $s \sim \omega$ to denote sequences $s$ with Hamming weight $\omega$. This can be written in a manner similar to (7) as (A4), where the $|\boldsymbol\lambda, \omega^{\times N}\rangle$ are the basis vectors of $V_{\boldsymbol\lambda}$ with the same weight $\omega$ in each party, and the $|K^{\boldsymbol\lambda}_\omega\rangle$ are the unnormalized Kronecker states in $[\boldsymbol\lambda]^{S_n}$ relative to each $|\boldsymbol\lambda, \omega\rangle$, with the weight $\omega$ running over the allowed range set by the partitions $\lambda^{(i)}$ (A5). The Schmidt coefficients of $\sum_s |\Phi_{\boldsymbol\lambda,s}(\psi)\rangle \otimes |K_{\boldsymbol\lambda,s}\rangle$ are then the eigenvalues $\gamma_i$ of the Gram matrix $G_{\boldsymbol\lambda}$, whose components are built from the overlaps $\langle K^{\boldsymbol\lambda}_{\omega'}|K^{\boldsymbol\lambda}_\omega\rangle$ (A7). These overlaps can be computed relatively efficiently, as we now show. First, we rewrite (A8) in the form (A9). Under permutations, the matrix elements $B^{\boldsymbol\lambda,\omega,q}_s$ transform with the representation matrices $S^\lambda_{q,q'}(\pi)$ of the irrep (A11). Hence, $C^{\boldsymbol\lambda}_{\omega',\omega}(s, s')$ depends only on the type of the joint sequence $(s, s')^T = ((s_1, s'_1)(s_2, s'_2)\cdots(s_n, s'_n))$, which can be represented by a $2\times2$ joint sequence weight matrix $\Theta$, whose matrix elements $\Theta_{ij}$, $(i, j) \in \{0, 1\}^2$, count the number of times the pair $(i, j)$ appears in the joint sequence $(s, s')^T$. Therefore, replacing the sum over sequences $s, s'$ with a sum over all possible joint sequence weights $\Theta$, (A9) can be expressed as (A13), where the asterisk indicates that the sum is restricted to joint sequence weights satisfying the compatibility conditions (A14)-(A16); the factor $n!/\prod_{ij}\Theta_{ij}!$ counts the number of sequence pairs $(s, s')$ with joint weight $\Theta$. Up to a factor of $n!$, the quantities $C^{\lambda^{(i)}}_{\omega,\omega'}(\Theta)$ are the so-called Louck polynomials [23,24], which are the matrix-valued coefficients in the expansion of the $GL_2$ representation matrix $D_\lambda(X)$ in terms of monomials of the components of $X \in GL_2$ (A17). From the orthogonality and completeness relations of the Schur-Weyl basis, we can also obtain an orthogonality condition (A18) and a completeness condition (A19), where in both cases $\Theta$ is understood to be compatible with the weights $\omega, \omega'$. From the constraints (A14)-(A16), the matrix $\Theta$ has only one independent parameter, which we choose to be $\Theta_{01}$ and henceforth denote as $x$. The Louck polynomials can then be expressed in terms of the so-called Hahn-Eberlein polynomials [25], which are easily programmable on a computer and are defined in (A20). The relation between the Louck and the Hahn-Eberlein polynomials is given in (A21), where $\omega_>$ (resp. $\omega_<$) is the greater (resp. lesser) of $\omega$ and $\omega'$, and $A_{\lambda,\omega}$ is as defined in Eq. (34).
Therefore, for fixed weights $\omega, \omega'$, the sum in (A13) can be taken over $x$, with the constraints on $x$ being that all matrix elements of $\Theta$ are non-negative. The result shown in Figure 1 in the main body of the paper corresponds to the ranked eigenvalues of the Gram matrix $G_{\boldsymbol\lambda}$ for the case $\alpha = 1/3$ and partitions that are typical according to the Keyl-Werner theorem, so that the reduced partitions satisfy $\bar\lambda \approx (2/3, 1/3)$. Another view of this result is provided by Fig. 3, which suggests that the residual Schmidt coefficients exhibit an exponential decay law that appears insensitive to the value of $n$. Appendix B: The probability $p(\boldsymbol\lambda|W)$ In this appendix we give an expression to explicitly calculate the probability $p(\boldsymbol\lambda|W)$, which together with equation (36) allows us to calculate $p(\boldsymbol\lambda|\psi)$ for any $\psi$ in the W class. Given the $|W\rangle$ state (using the notation of Section VI) and expanding in the computational basis, we obtain (B2), where $\omega = (\omega^{(1)}, \ldots, \omega^{(N)})$ is the tuple of Hamming weights, $s = (s^{(1)}, \ldots, s^{(N)})$ the tuple of sequences (with $s^{(i)} \sim \omega^{(i)}$), and the asterisk in the sum represents the constraint that the sequences $s$ must be generated from the $n$-fold tensor product of the W state, i.e., $(s^{(1)}_i s^{(2)}_i \cdots s^{(N)}_i) \in \{(100\cdots0), (010\cdots0), \ldots, (000\cdots1)\}$ for all $i$. Performing a multilocal Schur transform, (B2) becomes (B3), so that the probability $p(\boldsymbol\lambda|W)$ is given by (B4). Using (A10), the probability is expressed in terms of Louck polynomials (B5), with the $\Theta^{(i)}$ being the joint sequence weights of the sequences $s^{(i)}$ and $s'^{(i)}$. Replacing the sums over the isotypical sequences $s, s'$ compatible with the state W by a sum over $\Theta = (\Theta^{(1)}, \ldots, \Theta^{(N)})$, we have (B7), where $Z(\Theta, \omega)$ is a combinatorial factor counting the number of joint sequences $(s^{(1)}, \cdots, s^{(N)}, s'^{(1)}, \cdots, s'^{(N)})$ such that: 1) every pair $s^{(i)}$ and $s'^{(i)}$ is of weight $\omega^{(i)}$ and of compatible joint weight $\Theta^{(i)}$, and 2) the joint sequences $(s^{(1)}, \cdots, s^{(N)})$ and $(s'^{(1)}, \cdots, s'^{(N)})$ are generated from the W state. These constraints can be written in terms of a tensor $Q$ defined by $4N$ equations (B8)-(B9). Thus $Z(\Theta, \omega)$ can be expressed through the $N^2$ components of $Q$ as (B10), where the sum runs over the $N^2 - 3N + 1$ independent components of $Q$ compatible with equations (B8) and (B9). Using the independent parameter $x^{(i)}$ for each $\Theta^{(i)}$, it can be shown that (B10) can be expressed as the constant term identity (B11), where C.T. stands for the term that is constant in all the $z_i$. Using equations (A20) and (A21) to calculate the Louck polynomials efficiently in terms of Hahn-Eberlein polynomials, and (B11) to calculate $Z(\Theta, \omega)$, the probability $p(\boldsymbol\lambda|W)$ can then be explicitly computed using (B7). Appendix C: Tables of Kronecker state coefficients In this supplementary material we present some examples of W-class Kronecker vector coefficients, obtained using the results of Section VII.3 of the main article. For each subspace $[\lambda^{(i)}]$ we label the basis elements of the corresponding Schur transform basis $|\lambda^{(i)}, q\rangle$ by the ordinal index of the binary sequence $q$ when the set of admissible binary sequences for the partition $\lambda^{(i)}$ is ordered lexicographically. For instance, Table I lists the coefficients of the Kronecker state corresponding to the partition $\lambda^{(i)} = (2, 1)$ (for $i = 1, 2, 3$) of $n = 3$ copies of the three-party W state. In this case the possible binary sequences $q$ are 001 and 010, with labels 1 and 2, respectively.
Thus, for example, the multipartite label (1, 2, 1) denotes the coefficient of the term |λ^(1), 001⟩ |λ^(2), 010⟩ |λ^(3), 001⟩.
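As a computational footnote to Appendix A: once the Gram-matrix entries ⟨K_ω|K_ω′⟩ have been assembled from the Louck-polynomial sums, extracting the Schmidt coefficients is a standard Hermitian eigenvalue problem. The sketch below is a minimal illustration with a hypothetical 3×3 Gram matrix standing in for a computed G_λ; the ranking step mirrors the presentation of Figure 1.

```python
import numpy as np

# Hypothetical Gram matrix G_lambda; in practice its entries are the
# overlaps <K_omega | K_omega'> computed from the Louck-polynomial sums
# of Appendix A (the values below are purely illustrative).
G = np.array([[1.00, 0.40, 0.10],
              [0.40, 1.00, 0.40],
              [0.10, 0.40, 1.00]])

# The Gram matrix is Hermitian (real symmetric here), so eigvalsh applies.
gamma = np.linalg.eigvalsh(G)

# Schmidt coefficients ranked in decreasing order, as plotted in Figure 1.
print(np.sort(gamma)[::-1])
```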
One-Loop Self-Dual and N=4 Super Yang-Mills We conjecture a simple relationship between the one-loop maximally helicity violating gluon amplitudes of ordinary QCD (all helicities identical) and those of N=4 supersymmetric Yang-Mills (all but two helicities identical). Because the amplitudes in self-dual Yang Mills have been shown to be the same as the maximally helicity violating ones in QCD, this conjecture implies that they are also related to the maximally helicity violating ones of N=4 supersymmetric Yang-Mills. We have an explicit proof of the relation up to the six-point amplitude; for amplitudes with more external legs, it remains a conjecture. A similar conjecture relates amplitudes in self-dual gravity to maximally helicity violating N=8 supergravity amplitudes. Introduction The development of sophisticated techniques [1] for computing one-loop helicity amplitudes in fourdimensional gauge theories has allowed various workers to obtain explicit expressions for a number of infinite sequences of such amplitudes [2,3,4,5]. In particular, the nonvanishing maximally helicity violating (MHV) one-loop gluon amplitudes in QCD (where all helicities are identical) and in N = 4 supersymmetric Yang-Mills (where all but two helicities are identical) are remarkably simple. This suggests that they may possess an additional symmetry beyond the gauge symmetry. At tree level, Nair [6] has observed that the MHV n-gluon amplitudes [7] (also known as Parke-Taylor amplitudes) may be derived from a free-fermion Wess-Zumino-Witten model which contains an infinite-dimensional symmetry algebra. (The construction was actually for an N = 4 supersymmetric gauge theory, but the superpartners do not contribute at tree level, so the results also apply to ordinary QCD.) Duff and Isham [8], and more recently Bardeen [9], have pointed out that tree-level gluon currents with all identical helicities in ordinary QCD may be obtained from self-dual Yang-Mills. Selivanov has also produced similar results using a different ansatz [10]. Selfdual Yang-Mills is the prototypical integrable model and as such possesses an infinite-dimensional symmetry algebra [11]. In a spacetime of signature (2,2), it arises from the N = 2 string [12]. Recently, Cangemi [13,14] and Chalmers and Siegel [15] showed that a connection between amplitudes in self-dual Yang-Mills and the maximally helicity violating all-plus helicity amplitudes in QCD continues to hold at one-loop. Indeed, the one-loop amplitudes generated by various self-dual Yang-Mills actions [16,17,15] are identical to the QCD all-plus helicity amplitudes. It is intriguing that the action of Chalmers and Siegel leads to a perturbatively solvable theory: the only non-vanishing amplitudes in the perturbative expansion are the known all-plus helicity one-loop amplitudes in QCD! Bardeen has suggested that an anomaly in the symmetry algebra determines the structure of these amplitudes [9]. In this paper we examine the relationship between the one-loop MHV amplitudes in N = 4 supersymmetric Yang-Mills theory and the all-plus helicity QCD amplitudes (i.e., the self-dual Yang-Mills amplitudes). We conjecture a 'dimension shifting' relationship between the two sets of amplitudes, in which the all-plus amplitudes are given essentially by evaluating the loop integration for the N = 4 MHV amplitudes in a dimensions larger by four(D = 8). 
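Schematically (this is a reconstruction for orientation only; the precise statement, including the normalization of the prefactor, is the conjecture (7) presented below), the proposed relation has the form

$$A^{\text{1-loop}}_{n}\big(1^{+},2^{+},\ldots,n^{+}\big)\;\propto\;\epsilon\,(1-\epsilon)\,\frac{A^{N=4}_{n}\big(1^{+},\ldots,i^{-},\ldots,j^{-},\ldots,n^{+}\big)}{\langle i\,j\rangle^{4}}\bigg|_{D\,\to\,D+4}\,,$$

where the dimension shift acts on the loop integrals and the spinor-product prefactor strips the helicity dependence of the MHV amplitude.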
We have explicitly verified the conjecture for amplitudes with up to six external legs, and have evidence that it holds for an arbitrary number of external legs. A similar conjecture can be made to link the one-loop npoint amplitudes of self-dual gravity [18,15] (the all-plus helicity graviton amplitudes), with MHV amplitudes in N = 8 supergravity. We have verified this conjecture for the four-point amplitude. The underlying symmetry responsible for the simplicity of these amplitudes, and their relation to each other, remains to be clarified. Preliminaries We now review two basic tools necessary to present the conjecture, namely color-ordering and the spinor helicity formalism. Further details may be found in review articles [19,1], whose normalizations and conventions we follow. One-loop SU (N c ) gauge theory amplitudes can be written in terms of independent colorordered partial amplitudes multiplied by an associated color structure [20,21]. As a simple example, the decomposition of the four-gluon amplitude (with adjoint particles in the loop) is where we have abbreviated the arguments of the 'partial amplitudes', A n;j , by the labels i of the legs and the T a i are fundamental representation matrices, normalized so that Tr(T a T b ) = δ ab . The ρ and σ permutation sums are over the ones which alter the color trace structure. The structure for any number of legs is similar, with no more than two color traces appearing in each term (at one loop). String theory suggests, and it has been proven in field theory, that the A n;j>1 may be obtained from A n;1 by an appropriate permutation sum [21,4,22]. Thus, we need only consider the A n;1 -they contain the information necessary to reconstruct the full one-loop amplitude, and any identity proven for the A n;1 extends automatically to the full amplitude. The relations we find are for special choices of the external gluon helicities. In the helicity formalism of Xu, Zhang and Chang [23] the gluon polarization vectors are expressed in terms of Weyl spinors |k ± as where k is the gluon momentum and q is an arbitrary null 'reference momentum' which drops out of final gauge-invariant amplitudes. The plus and minus labels on the polarization vectors refer to the gluon helicities and we use the notation ij ≡ k These spinor products are anti-symmetric and satisfy i j When performing a calculation in dimensional regularization [24] it is convenient to choose a scheme which is compatible with the spinor helicity formalism. We use the four-dimensional helicity scheme [25] which is equivalent at one loop to a helicity form of Siegel's dimensional reduction scheme [26]. The conversion to the standard MS scheme is discussed in refs. [25,27]. Previously obtained amplitudes. The simplest one-loop QCD n-gluon helicity amplitude is the one with all identical helicities [2,3], where and the label 'gluon' denotes a gluon circulating in the loop. As indicated by the '+' superscripts on the gluon labels, we have chosen the all-plus helicity configuration; the all-minus helicity configuration is related by parity. This amplitude contains no poles in the dimensional regularization parameter ǫ = (4 − D)/2; it is both ultraviolet and infrared finite. In a supersymmetric theory identical helicity amplitudes vanish by a supersymmetry identity [28]. This implies that the contribution of a massless adjoint representation Weyl fermion or complex scalar circulating in the loop is the same up to a statistics factor [29,2], A scalar n;1 (1 + , 2 + , . . . 
, n + ) = −A fermion n;1 (1 + , 2 + , . . . , n + ) = A gluon n;1 (1 + , 2 + , . . . , n + ) , where the labels 'scalar' and 'fermion' again refer to the particle circulating in the loop. The next simplest amplitude is the N = 4 supersymmetric Yang-Mills MHV amplitude [4], where P i,j = where the K i are the external momenta for the integral, which are in general sums of adjacent external massless momenta k i for the amplitude, as indicated in fig The N = 4 supersymmetric Yang-Mills MHV amplitudes (5) have some features in common with the all-plus helicity QCD amplitudes (3). Neither contains multi-particle poles. The appearance exclusively of two-particle poles is reminiscent of the 'Bethe ansatz' for integrable systems [9]. On the other hand, the N = 4 supersymmetric Yang-Mills amplitudes contain infrared singularities as well as logarithms and dilogarithms which are not found in the all-plus helicity amplitudes. In this paper we will argue that up to an overall prefactor the two amplitudes are actually the same after an appropriate shift of the dimension D appearing in the loop integrals (6). In refs. [13,15] it was shown that self-dual Yang-Mills generates the same amplitudes as the allplus helicity QCD amplitudes. These comparisons were done on the actions and Feynman rules, so that the equivalence holds to all orders of the dimensional regularization parameter, assuming that we are using a form of dimensional regularization that modifies the dimension of the loop momentum [26,25], but preserves the number of physical states to their D = 4 values. With this type of regularization we can define a simple analytic continuation of self dual-Yang-Mills (whose definition contains the four-dimensional Levi-Civita tensor) in the dimensional regularization parameter. The basic relationship we conjecture is, where D = 4 − 2ǫ and the dimension shift on the N = 4 amplitude takes ǫ → ǫ − 2 and I D m → I D+4 m . It leaves the external momenta and helicities invariant (as well as the explicit prefactor of ǫ(1 − ǫ)). One can motivate the conjecture in the ǫ → 0 limit by recognizing that the box integral functions in the N = 4 supersymmetric expression (5) have a common logarithmic ultraviolet divergence as D → 8, I D=8−2ǫ 4:i 1 ,i 2 ∼ 1/6ǫ as ǫ → 0, which is canceled by the explicit ǫ on the right-hand-side of (7). One then uses to see that that these terms generate the 'even' terms in A scalar n;1 (i.e., those terms obtained by neglecting the γ 5 in tr[(1 − γ 5 ) · · ·] in eq. (3)). On the other hand, one cannot check the 'odd' (γ 5 ) terms in this way; we shall see (for n = 5, 6) that they come from O(ǫ) terms in (5) which are promoted to O(ǫ 0 ) through the dimension shift. In other words, because it involves a shift in ǫ, the dimension shift in (7) only makes sense when the amplitudes are expressed to all orders in ǫ. The previously calculated amplitudes (3) and (5) are valid only through O(ǫ 0 ), so we must inspect the terms higher order in ǫ to fully check the conjecture. The conjecture (7) may also be reformulated in terms of the loop momentum integration. The D-dimensional integration in eq. (6) may be broken up into four-and (−2ǫ)-dimensional parts, allowing us to define where µ is the (−2ǫ)-dimensional part of the original loop momentum. (We follow the standard prescription that the (−2ǫ)-dimensional subspace is orthogonal to the four-dimensional one.) 
Explicit evaluation of the (−2ǫ)-dimensional parts of the integrals relates the integrals with powers of µ 2 in the numerator to higher-dimensional integrals (see, for example, appendix A.2 of ref. [31]), With the definition of the integrals (9) we may reformulate the conjecture (7) as A gluon n;1 (1 + , 2 + , . . . , n + ) = 2 where the symbol '[µ 4 ]' indicates that we insert an extra factor of µ 4 into every loop integrand before performing the integrals. Evidence for the conjecture. We shall present evidence for the conjecture (7), but first let us address a seeming puzzle with it. The all-plus helicity amplitude is invariant under a cyclic relabeling of the legs, whereas the cyclic invariance of the N = 4 supersymmetric amplitude is not obvious, because the two negative helicities break the manifest invariance. However, the cyclic symmetry of the N = 4 MHV amplitude, up to the overall prefactor of i j 4 , follows from a supersymmetry identity. To prove this, use standard supersymmetry identities [28] to relate the n-gluon amplitude to the two scalar, (n − 2) gluon amplitude. After interchanging the two scalars, which does not affect the amplitude, use the same supersymmetry identities to obtain an amplitude with the negative helicity gluon in a different position. This argument works for the N = 4 multiplet because the two gluon helicity states are related by supersymmetry (without using a CPT transformation). We have verified the conjecture for the four-, five-and six-point amplitudes by explicitly calculating both sides of eq. (7) to all orders in ǫ. To calculate the all-plus helicity amplitudes we use the unitarity-based method recently reviewed in ref. [1]. In this method the amplitudes are constructed from cut loop momentum integrals, depicted in fig. 2, For practical purposes we may think of µ 2 as a mass that gets integrated over. Using recursive techniques [32,3] we find where the superscript s on ℓ 1 and ℓ 2 indicates that these are the scalar lines. Integrating these tree amplitudes according to eq. (12), and reconstructing the complete analytic form of the loop amplitudes we have A scalar 5;1 (1 + , 2 + , 3 + , 4 + , 5 + ) = A scalar 6;1 (1 + , 2 + , 3 + , 4 + , 5 + , 6 + ) = i 1 2 2 3 3 4 4 5 5 6 6 1 where s ij = (k i + k j ) 2 , the totally antisymmetric symbol is defined by and I (i) n denotes the scalar integral obtained by removing the loop propagator between legs i−1 and i from the (n + 1)-point scalar integral. It is easy to verify that each of the amplitudes (14)- (16) properly reduces to the expression in eq. (3), using values of the integrals in the ǫ → 0 limit, We comment that these amplitudes may be converted to ones with a massive loop simply by performing the shift µ 2 → µ 2 + m 2 [31]. Just as in the massless case, a supersymmetry identity implies that the all-plus helicity amplitude depends only on the number of statistics-weighted states circulating in the loop; thus the above conversion (4) also applies for a massive fermion in the loop. One may convert these amplitudes from QCD to QED simply by summing over permutations of the external legs. The N = 4 four-point amplitude was first calculated by Green, Schwarz and Brink [33] using the low energy limit of superstring theory. We obtained the five-point amplitude by slightly modifying the string-based [25,34] calculation of ref. [35] to keep the terms higher order in ǫ. 
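For reference, the display equation relating the μ²-numerator integrals to dimension-shifted ones did not survive extraction here; in a commonly used normalization (see, e.g., appendix A.2 of ref. [31]; overall factors of 4π depend on the integral normalization), the r = 2 case needed for eq. (10) reads

$$I_n^{D=4-2\epsilon}\big[\mu^{4}\big] \;=\; -\,\epsilon\,(1-\epsilon)\,(4\pi)^{2}\, I_n^{D=8-2\epsilon}\,,$$

which makes the prefactor −ε(1 − ε) appearing in the conjecture (7) manifest.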
For the six-point amplitudes we used a string-motivated diagrammatic approach to ensure manifest supersymmetric cancellations [35,29,36], after which the diagrams were evaluated numerically. (Hexagon integrals I 6 with external momenta restricted to four-dimensions are linear combinations of the six pentagon integrals I (i) 5 [37,30]; therefore we had to reduce the hexagon to pentagons before making any comparison.) In all these cases we find that the dimension-shifting formula (7) is satisfied, thus proving the conjecture up through n = 6. What evidence can we find for an arbitrary number of external legs? We noted above that if we start with the N = 4 supersymmetric amplitudes (5) valid through O(ǫ 0 ), perform the dimension shift, and then take the ǫ → 0 limit, we reproduce all 'even' terms in the all-plus helicity amplitudes After shifting D → D + 4, and multiplying by the prefactor −ǫ(1 − ǫ) this becomes which contributes at order ǫ 0 because the integral is ultraviolet divergent. From the explicit forms of the all-orders-in-ǫ five-and six-point amplitudes, it is clear that the 'odd' terms arise from integral functions not contributing through order ǫ 0 in A N =4 . As a stronger check, we may appeal to the universal behavior [38] of the amplitudes as kinematic invariants vanish. Of particular utility is the behavior of amplitudes as two momenta become collinear [7,19,2,1]. In these limits an n-point amplitude must reduce to sums of (n − 1)-point amplitudes multiplied by 'splitting functions' which are singular in the collinear limit. The constraints of factorization are sufficiently powerful that in many cases one may obtain the correct amplitude simply by finding a function that satisfies the constraints [2]. Since the conjecture (7) holds for up to six-point amplitudes, consistency of the collinear limits suggests that it will continue to hold for higher-point amplitudes. This argument is not a proof either, given the possible appearance of functions which are non-singular in all factorization limits; these limits do not constrain such functions. An example of such a function for the n-point amplitude (if n is even) is tr[123 · · · n] 1 2 2 3 3 4 · · · n 1 I D=n+4−2ǫ n . This function does appear in the six-point (n = 6) amplitude (16), but only at O(ǫ). While collinear factorization does not prove the conjecture for n > 6, it severely constrains terms which violate it. Another way to check the conjecture (7) is to inspect the cuts (to all orders in ǫ) on both sides of the equation. This is convenient since the cut of a one-loop amplitude is a product of two tree amplitudes integrated over phase space. Tree amplitudes are in turn easier to manipulate than loop amplitudes. The cut relationship implied by the conjecture (7) Figure 3. Equality needed for conjecture to be true. In the cut on the left, only scalars cross the cut; in the cut on the right, the entire N = 4 supersymmetry multiplet appears. A proof of this identity would lead directly to a proof of the conjecture (7). We offer no proof to all orders in µ; but as a first step, let us consider this equation to leading order in µ 2 . The leading order on both sides is µ 4 . On the N = 4 side of the equation we can use the amplitudes to zeroth order in µ 2 . These cuts were evaluated (to obtain those terms in the amplitudes which do not vanish as ǫ → 0) in ref. [4] with the result, where we used ℓ 2 = ℓ 1 − P m 1 ,m 2 = ℓ 1 + P m 2 +1,m 1 −1 and ℓ 2 i = 0 on the cut. 
We can now compare this result with the leading order in µ 2 for the all-plus helicity case. Recursive techniques [32,3] lead to the general form of the tree amplitudes for n plus-helicity gluons and two scalars, A tree n (−ℓ s 1 , 1 + , . . . , n + , ℓ s 2 ) = i Using this expression to construct the cuts one reproduces eq. (23), so that eq. (22) is satisfied to leading order in µ 2 . (The overall factor of 2 arises because complex scalars are composed of two states.) The agreement, even before performing the phase-space integrals, suggests that, in general, on the cuts the conjecture holds for the integrands. Gravity. String theory implies that gravity amplitudes are closely related to gauge theory amplitudes. This observation has been used to obtain gravity amplitudes at both tree level [39] and at loop level [40] and suggests that one can find conjectures similar to eq. (7), but for gravity. Using the explicit results for four-graviton amplitudes obtained via string-based calculations [40], extended to all-orders in ǫ, we find where the amplitude on the left is the pure gravity all-plus helicity amplitudes and the one on the right the N = 8 supergravity amplitude. As in the QCD case, the all-plus amplitude is independent of the massless particle types circulating in the loop, but depends only on the number of states in the loop. Following the QCD case, we may conjecture that the relation in eq. (25) continues to hold for an arbitrary number of external legs. For gravity the one-loop amplitudes are not known beyond four external legs. One can, however, argue [15] that the above amplitudes will also correspond to those for self-dual gravity [18]. Speculations. In this paper we have provided evidence that two infinite sequences of maximally helicity violating gauge theory amplitudes, which at first sight seem quite different, are in fact closely related to each other through a "dimension shift". Is this result just a curiosity, or an indication of a deeper relation between a non-supersymmetric theory (self-dual Yang-Mills) and a supersymmetric one (N = 4 super Yang-Mills) ? We cannot yet answer this question directly. It may prove profitable to pursue the connection mentioned in the introduction, between maximal helicity violation and self-dual Yang-Mills theory [9,13,15], since the latter is known to possess an infinite-dimensional symmetry algebra [11]. (See ref. [14] for a review.) In two-dimensional integrable models, which are related to self-dual Yang-Mills theory through dimensional reduction, the extended symmetry algebra is responsible for a lack of multi-particle poles in the scattering amplitude. Bardeen has emphasized that the absence of multi-particle poles in the maximally helicity violating tree-level currents is reminiscent of the Bethe ansatz [9]. Thus it might be worthwhile to examine the other four-dimensional gauge theory amplitudes that lack multi-particle poles. The list of such amplitudes is quite limited. In non-supersymmetric QCD, beyond one loop all amplitudes with six or more legs contain multi-particle poles, as can be verified by checking their factorization onto a product of two one-loop amplitudes. On the other hand, the nonvanishing maximally helicity violating amplitudes in supersymmetric theories (those amplitudes with all but two helicities identical) do not develop multi-particle poles, to all orders of perturbation theory. (The residues of the would-be poles vanish by supersymmetry identities [28].) 
The simplest of the one-loop MHV supersymmetric amplitudes are the N = 4 amplitudes, which is why we chose to investigate their relationship to the self-dual Yang-Mills amplitudes in this letter. (N = 1 partial amplitudes [5] are more complicated and certainly do not possess the cyclic invariance of the N = 4 amplitudes.) Finally, we speculate whether to take seriously the appearance of dimensions shifted upwards by four units (eight units for gravity) in the relations we have found. If we take ǫ → 0 so that the left-hand side of eq. (7) is in D = 4, we find that the self-dual gauge amplitudes are related to the one-loop ultraviolet divergences of an N = 4 supersymmetric gauge theory in D = 8. (N = 4 refers to the number of D = 4 supersymmetries.) Coincidentally, such theories have recently been considered in the context of certain (7 + 1)-brane configurations in string theory (also known as compactifications of F theory on K3) where they describe the low-energy world-volume theory [41,42]. The corresponding theory for the gravity relation (25) would be N = 8 supergravity in D = 12, which happens to be the "critical dimension" for F theory [41]. Perhaps it is also relevant that self-dual theories in four-dimensions (with signature (2, 2)) have been proposed for the worldvolume dynamics of F theory [43]. At this stage, though, it is safest to say that the underlying reason for the relationships (7) and (25) remains to be clarified.
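A numerical footnote to the spinor-helicity preliminaries above: the spinor products ⟨ij⟩ are straightforward to evaluate on explicit massless momenta, which is useful when checking amplitude expressions numerically. The sketch below uses one common convention for the two-component spinors (phase and sign conventions differ between references) and verifies antisymmetry and |⟨ij⟩|² = |s_ij| for real momenta.

```python
import numpy as np

def holomorphic_spinor(k):
    """Two-component spinor lambda for a massless momentum k = (E, kx, ky, kz).

    One common convention (assumes k^+ = E + kz > 0; signs and phases
    vary between papers): lambda = (sqrt(k^+), (kx + i ky)/sqrt(k^+)).
    """
    E, kx, ky, kz = k
    kp = E + kz
    return np.array([np.sqrt(kp + 0j), (kx + 1j * ky) / np.sqrt(kp + 0j)])

def angle(ki, kj):
    """Spinor product <i j>; antisymmetric by construction."""
    li, lj = holomorphic_spinor(ki), holomorphic_spinor(kj)
    return li[0] * lj[1] - li[1] * lj[0]

# Two massless momenta (E = |k|, so k^2 = 0).
k1 = (2.0, 1.0, 1.0, np.sqrt(2.0))
k2 = (3.0, -1.0, 2.0, 2.0)

s12 = 2 * (k1[0]*k2[0] - k1[1]*k2[1] - k1[2]*k2[2] - k1[3]*k2[3])
print(angle(k1, k2), angle(k2, k1))      # antisymmetry: <12> = -<21>
print(abs(angle(k1, k2))**2, abs(s12))   # |<12>|^2 = |s12| for real momenta
```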
Development and Validation of High-Performance Liquid Chromatography Method for Determination of Some Pesticide Residues in Table Grape This study presents the development and validation of a new reversed-phase high-performance liquid chromatography (RP-HPLC) method for simultaneous determination of captan, folpet, and metalaxyl residues in table grape samples with ultraviolet – diode array detection (UV – DAD). Successful separation and quantitative determination of analytes was carried out on LiChrospher 60 RP-select B (250 × 4 mm, 5 μ m) analytical column. Mixture of acetonitrile – 0.1% formic acid in water (65:35, v/v) was used as a mobile phase, with flow rate of 1 mL/min, constant column temperature at 25 °C, and UV detection at 220 nm. The target residues were extracted with acetone by ultrasonication, followed by a cleanup using liquid – liquid extraction (LLE) and solid-phase extraction (SPE). The obtained values for multiple correlation coefficients ( R 2 > 0.90), relative standard deviation (RSD) of retention times, peak areas and heights (RSD ≤ 2.25%), and recoveries ranging from 90.55% to 105.40%, with RSD of 0.02% to 5.37%, revealed that the developed method has a good linearity, precision, and accuracy for all analytes. Hence, it is suitable for routine determination of investigated fungicides in table grape samples. Introduction Viticulture is one of the leading agricultural sectors and has great economic importance for the Republic of Macedonia. Due to the favorable climate, the grapes are characterized by remarkable quality and significant export potential. In Republic of Macedonia, there is a tradition of many years of successful vines cultivation, especially the table grape sorts. The assortment of table grapes includes several classes from very early to very late varieties of table grapes. Besides many other conditions, protecting the vines from diseases is more than necessary to increase the quality grapes production. Due to these reasons, the use of fungicides is inevitable. On the other hand, because fungicides are a potential risk to human health, monitoring of pesticide residues in food especially fruits and vegetables is required. Table grape was а part of the monitoring for pesticide residues in primary agricultural products of plant origin in Republic of Macedonia for 2013 year. Among the most commonly used fungicides to protect the vines from diseases are captan, folpet, and metalaxyl, and therefore, these fungicides have been covered by the monitoring program. To ensure the food safety and consumers' health protection, in most countries, maximum residue levels (MRLs) of pesticides in foodstuff have been established. The MRLs of pesticides contained in table grape were set up by the European Union (EU) Regulation (European Commission [EC]) No. 396/2005 [1], and they were estimated at 0.02 mg/kg for captan and folpet, and 2 mg/kg for metalaxyl. In order to monitor food safety, it is highly necessary to develop and employ reliable methods for determination of pesticide residues. However, the HPLC method for simultaneous determination of captan, folpet, and metalaxyl residues in grape using UV-DAD was not found. Hence, the objective of this paper was to develop method for the simultaneous determination of captan, folpet, and metalaxyl residues in table grape samples using rapid resolution liquid chromatography (RRLC) system coupled with UV-DAD. Experimental Equipment and Materials. 
The chromatographic analysis was performed on an Agilent 1260 Infinity RRLC system equipped with: vacuum degasser (G1322A), binary pump (G1312B), autosampler (G1329B), a thermostatted column compartment (G1316A), UV-VIS diode array detector (G1316B), and ChemStation software. For the better dissolving of the stock solutions and sample preparation, an ultrasonic bath "Elma" was used. The experiments were carried out using LiChrospher 60 RP-select B (125 mm × 4 mm, 5 μm) and LiChrospher 60 RP-select B (250 mm × 4 mm, 5 μm) analytical columns produced by Merck (Germany). Evaporation of samples was enabled with vacuum rotary evaporator Büchi (Switzerland). For the SPE, a vacuum manifold Visiprep (Supelco, Sigma-Aldrich) was employed, and for vortexing of samples, IKA Vortex Genius 3 (Germany) was used. Preparation of Standard Solutions. Stock solutions of captan, folpet, and metalaxyl were prepared by dissolving 0.0100 g, 0.0242 g, and 0.0080 g of the pure analytical standards with acetonitrile in a 25 mL volumetric flask. The solutions were degassed for 15 min in an ultrasonic bath and stored in a refrigerator at 4°C. Stock solutions were used for fortification of table grape samples and for preparation of standard mixture with the following pesticide concentrations: 2 mg/kg for metalaxyl and 0.02 mg/kg for captan and folpet, in 10 mL volumetric flask by dilution with the mixture of acetonitrile-0.1% formic acid in water (65:35, v/v). Extraction procedure. Ten different varieties (5 white, 4 red, and 1 pink) of table grape samples were taken from three vine-growing regions in Republic of Macedonia. Blank samples were prepared from table grape that was not treated with tested pesticides. For determination of linearity, precision, and recovery, spiking samples were prepared by fortifying 100 g homogenized table grape sample with three sets of concentrations: 0.014 mg/kg, 0.02 mg/kg, and 0.024 mg/kg (for captan and folpet), and 1.4 mg/kg, 2 mg/kg, and 2.4 mg/kg (for metalaxyl). Unspiked samples were used for blanks. For each concentration level, five samples (n = 5) were prepared. For determination of a limit of quantification (LOQ), a 100 g homogenized table grape sample was spiked with 0.01 mg/kg of captan and folpet and with 1 mg/kg metalaxyl. One hundered grams of homogenized sample was measured into a conical flask with stopper, and 150 mL acetone was added. The mixture was ultrasonicated for 60 min. After extraction, the mixture was filtered through a Büchner funnel using double filter paper under vacuum. Approximately 20 mL of acetone was used to wash the flask and filter residues. The extract was transferred into round-bottomed flask and concentrated using a rotary evaporator under vacuum to obtain about 5 mL of extract. After that, the extract was decanted into a separating funnel, 100 mL distilled water and 20 g NaCl were added, and extracted twice with 40 mL ethyl acetate. The extracts were dried over sodium sulfate and evaporated to dryness in a rotary evaporator. The obtained residue was dissolved with 10 mL mixture of water and methanol (90:10, v/v) and filtered through a Büchner funnel using double filter paper under vacuum followed by SPE. The SPE procedure was carried out using Supelclean ENVI-18 tubes (6 mL, 0.5 g, produced by Supelco, Sigma-Aldrich, Germany). The conditioning of SPE cartridges was performed with 3 mL of methanol, followed by 3 mL of water at a flow rate of 2 mL/min. 
After that, 9 mL of the sample extract was passed through the cartridges and then washed the tubes with 3 mL of water. Subsequently, the cartridges were dried for 10 min under a vacuum. The retained pesticides were eluted with 3 mL of methanol-ethyl acetate (75:25, v/v). The eluates were evaporated to dryness under the gentle stream of nitrogen. The residues were redissolved with 1 mL of methanol by vortexing for 1 min, then filtered through 0.45 μm Iso-Disc PTFE syringe filters, and transferred into vials for HPLC analysis. The injection volume of each sample was 30 μL. Matrix effect evaluation. The quantitative measurement of matrix effect (ME) was done by comparing the peak areas from standard solutions (n = 3) of the examined pesticides in solvent (acetonitrile-0.1% formic acid in water [65:35, v/v]) with the peak areas obtained from standard solutions of the same pesticides prepared in blank table grape extract, at the following concentrations: 0.02 mg/kg for captan and folpet and 2 mg/kg for metalaxyl. The ME was calculated using the following equation [17]: where X 1 is the average area of the pesticide standard in solvent (acetonitrile-0.1% formic acid in water [65:35, v/v]), at a specific concentration, and X 2 , the average area of the pesticide standard in blank table grape extract, at the same concentration. By using this formula, it was possible to calculate the positive or negative matrix effect, which is an increase or decrease of the detector response. Chromatography Study. In preliminary experiments, two reversed-phase analytical columns with same stationary phases and different length, such as LiChrospher 60 RP-select B (125 mm × 4 mm, 5 μm) and LiChrospher 60 RP-select B (250 mm × 4 mm, 5 μm), were employed. The LiChrospher 60 RP-Select B was chosen because it offers excellent separation properties for basic compounds, but also is suitable for determination of neutral and acidic substances. This sorbent prevents secondary interactions with basic substances, ensures that they are eluted as highly symmetrical peaks, delivers highly reproducible results, and secures the reliability of HPLC method [19]. Also, different mixtures of acetonitrile-water (80%-40% acetonitrile) and acetonitrile-0.1% formic acid in water (80%-40% acetonitrile) as mobile phases in isocratic elution mode were tested. The investigations show that the better results were given on the longer column LiChrospher 60 RP-select B (250 mm × 4 mm, 5 μm), probably due to its higher efficiency as a result of the higher number of theoretical plates. The UV spectra (Figure 2) of examined pesticides show that they have absorption maxima around 220 nm. Hence, the chromatographic analysis for their simultaneous determination was carried out at 220 nm. The best separation of the analytes with symmetrical peak shapes and satisfy purity indexes was achieved under isocratic elution with mobile phase consisted of acetonitrile-0.1% formic acid in water (65:35, v/v) (Figure 3а), flow rate of 1 mL/min, constant column temperature at 25°C, and UV detection at 220 nm. The obtained values for column dead time, retention times of components (t R ), the calculated values for retention factors (k′), separation factors (α), and resolution (Rs) are given in Table 1. 
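The retention parameters collected in Table 1 follow from the standard definitions k′ = (tR − t0)/t0, α = k′₂/k′₁, and Rs = 2(tR,2 − tR,1)/(w₁ + w₂). A small sketch with hypothetical retention data illustrates the computation; the actual values are those reported in Table 1, and the elution order and numbers below are illustrative only.

```python
def retention_factor(t_r, t0):
    """k' = (tR - t0) / t0."""
    return (t_r - t0) / t0

def separation_factor(k1, k2):
    """alpha = k2'/k1', with k2' the later-eluting analyte."""
    return k2 / k1

def resolution(t_r1, t_r2, w1, w2):
    """Rs = 2 (tR2 - tR1) / (w1 + w2), using baseline peak widths."""
    return 2.0 * (t_r2 - t_r1) / (w1 + w2)

# Hypothetical retention times (min) for an assumed dead time t0 = 1.1 min.
t0 = 1.1
tr = {"metalaxyl": 2.4, "captan": 4.1, "folpet": 5.3}

k = {name: retention_factor(t, t0) for name, t in tr.items()}
print(k)                                                   # all k' < 20
print(separation_factor(k["metalaxyl"], k["captan"]))      # alpha > 1.2
print(resolution(tr["captan"], tr["folpet"], 0.15, 0.18))  # Rs from widths
```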
As Table 1 shows, the computed values for the retention factors (k′) were below 20, the upper limit of the optimal range for this parameter; the separation factors (α) were above 1.2; and the resolution (Rs) values were above 7, which implies that, under the stipulated chromatographic conditions, high separation of the investigated pesticides was achieved [20]. Ultrasonication has long been an applied procedure for the extraction of many substances, among them pesticides [21]. The most commonly used solvent for the extraction of pesticide residues is acetone, owing to several advantages, including high volatility and effectiveness, and low toxicity and cost. Acetone is also completely miscible with water, thus allowing good penetration into the aqueous part of the sample [22]. Therefore, the target residues were first extracted with acetone by ultrasonication, followed by a cleanup using LLE and SPE before the analysis. Method validation. Specificity, selectivity, linearity, matrix effect, precision (expressed as repeatability of retention time, peak area, and peak height), and recovery were examined to assess the validity of the developed method in accordance with EU regulations and EU documents [23,24]. Specificity and selectivity. To confirm the specificity of the developed method, UV-DAD was used to check the peak purity and analyte peak identity. The purity index for all analytes was greater than 999 (the maximum value for the peak purity index [PPI] is 1000), which means that the chromatographic peak was not affected by any other compound. In addition, identification of the analytes was done using the values for the retention time and the match factor obtained by overlaying the spectrum of a pure analytical standard (from the spectra library) and the absorption spectrum of the analyte peak. Linearity. The linearity of the developed method was determined for all compounds separately, with triplicate injections (30 μL) of the spiked standards in the table grape sample matrix, in the range from 30% below the MRLs to 20% above them (Table 2). The obtained results for the multiple correlation coefficients (R² ≥ 0.90) suggested that the method has a satisfactory linearity for all analytes (Table 2). Matrix effect. The quantitative determination of the matrix effect was done using Eq. (1). The matrix effect represents the observed increase (enhancement) or decrease in detector response (a positive or negative matrix effect, respectively) for a pesticide present in a matrix extract compared with the same pesticide present in solvent alone [17]. The calculated matrix effect for the investigated pesticides exceeded 39% (Table 3), indicating a significant matrix effect. Captan and metalaxyl showed a significant negative matrix effect, while a significant positive matrix effect was noticed for folpet. When matrix effects are significant (i.e., >20%), calibration should be generated using standards prepared in blank matrix extracts (matrix-matched standards) [23,24]. For these reasons, the calibration was conducted in this way. Limit of quantification. The LOQ for each compound was determined by spiking a table grape sample with 0.01 mg/kg of captan and folpet and with 1 mg/kg of metalaxyl, concentrations corresponding to 50% of the MRL for each compound. The signal-to-noise ratio (S/N) at this concentration level was found to be ≥10 for all examined pesticides.
Therefore, the LOQ was estimated to be ≤0.01 mg/kg for captan and folpet and ≤1 mg/kg for metalaxyl in this study. These results are acceptable for determining the pesticide residues, according to the EU rules [24]. Precision. The precision was expressed as the repeatability of the results obtained from eight successive injections (30 μL) of the spiked table grape samples at the MRLs for each of the analytes (Table 4). The computed values of RSD for retention time, peak area, and peak height indicated an excellent precision of the proposed method. Accuracy. The accuracy of the method was determined by recovery studies in pesticide-free table grape samples spiked with the investigated pesticides at three concentration levels (Table 5). The obtained values for recovery and relative standard deviation were within the following ranges: 90.55%-105.40% and 0.02%-5.37%, respectively. A mean recovery at each fortification level in the range of 70%-120% and a relative standard deviation (RSD) ≤20% per level are acceptable according to EU criteria [24]. Consequently, it can be concluded that the proposed method is convenient for the determination of the target pesticide residues in table grape. The investigations show that only folpet residues were present in the table grape samples. Residues of folpet were found in all samples of red grapes, but also in some white grape samples. As can be seen from Table 6, in four samples, including the rose grape sample, measurable quantities of the target fungicide residues were not detected. The determined concentration of folpet was below the MRL in three samples, just below the MRL in one sample, and equal to the MRL in two samples. Conclusions A new, precise, accurate, and reliable method for the simultaneous determination of metalaxyl, captan, and folpet residues in table grape samples using RP-HPLC with UV-DAD has been developed and validated. Successful separation and quantification were achieved using isocratic elution with a mobile phase consisting of acetonitrile-0.1% formic acid in water (65:35, v/v), a flow rate of 1 mL/min, a constant column temperature of 25°C, and UV detection at 220 nm. The run time of the analysis under the stipulated chromatographic conditions was about 6 min. The results of the method validation revealed that the proposed method has satisfactory linearity (R² > 0.90) and excellent precision of retention times, peak areas, and heights (RSD ≤ 2.25%). The obtained recoveries, ranging from 90.55% to 105.40% with RSD of 0.02%-5.37%, revealed that the proposed method is suitable for routine determination of the investigated fungicides in table grape samples. The method was successfully applied to determine captan, folpet, and metalaxyl residues in table grape samples of ten different varieties (5 white, 4 red, and 1 pink) taken from three vine-growing regions in the Republic of Macedonia. The obtained results show that folpet was the most frequently detected fungicide in the analyzed table grape samples, and the concentrations found were less than or equal to the MRL according to Regulation (EC) No. 396/2005.
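As a closing computational note on the validation arithmetic: the matrix-effect, recovery, and RSD figures quoted above all reduce to simple formulas. The sketch below uses hypothetical replicate data; the ME formula is one common form consistent with the verbal description of Eq. (1), whose display did not survive extraction here.

```python
import statistics as st

def matrix_effect(areas_solvent, areas_matrix):
    """ME(%) = 100 * (X2 - X1) / X1; positive values indicate signal
    enhancement, negative values suppression (one common form)."""
    x1, x2 = st.mean(areas_solvent), st.mean(areas_matrix)
    return 100.0 * (x2 - x1) / x1

def recovery(found_mg_kg, spiked_mg_kg):
    """Recovery (%) = 100 * found / spiked."""
    return 100.0 * found_mg_kg / spiked_mg_kg

def rsd(values):
    """Relative standard deviation (%) of replicate results."""
    return 100.0 * st.stdev(values) / st.mean(values)

# Hypothetical replicates (n = 5) for folpet spiked at 0.02 mg/kg.
found = [0.0191, 0.0195, 0.0198, 0.0189, 0.0193]
recoveries = [recovery(f, 0.02) for f in found]
print(round(st.mean(recoveries), 2), round(rsd(found), 2))

# Illustrative triplicate peak areas: solvent standard vs. blank extract.
print(matrix_effect([1520, 1535, 1510], [2120, 2150, 2135]))  # enhancement
```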
Muscle cramps : A ‘ complication ’ of cirrhosis DEFINITION The general term ‘cramp’ has been used to define a variety of muscle symptoms that involve pain or contraction of a single muscle or muscle group. According to the American Association of Electrodiagnostic Medicine Glossary of Terms (1), this broad definition of ‘cramp’ also encompasses a confusing array of additional terms such as ‘spasm’, ‘myalgia’ and ‘contracture’. A more precise definition for ‘muscle cramp’ is required. A working definition was established to help refine the physiological, electromyographic and clinical characteristics of muscle cramps (2,3). A ‘true muscle cramp’ is characterized clinically as an involuntary, painful, visible or palpable muscle contraction. The onset is abrupt, generally occurring at rest, and often is nocturnal. The pain is intense yet brief. The pain and the contraction resolve spontaneously in seconds to several minutes. The calf is the area most commonly affected, but cramps in the fingers and hands occur in 30% of patients (4,5). True muscle cramps are painful and electromyographically exhibit an increased frequency of motor unit action potentials that spread throughout the muscle group and produce a sustained muscle contraction. This feature helps to differentiate various muscle conditions because involuntary muscle contractions are not observed in disorders such as myositis or myalgia, and pain is not a feature of myotonia (6) (Table 1). A muscle ‘contracture’ is also involuntary; however, it is electrically silent (7). Involuntary muscle con- The focus herein is on true muscle cramps and not on other cramp-like phenomena. PREVALENCE Many benign and pathological conditions are associated with muscle cramps (Table 2).Regardless of the cause, the clinical and electromyographic characteristics are identical, strongly suggesting a common pathophysiology (8)(9)(10). The prevalence of benign muscle cramps has been difficult to determine.Lack of a consistent definition, subjectiveness of the symptom and selection bias taint the few studies that have been performed.A questionnaire study from the Netherlands found that 8% of an unselected adult population reported at least weekly muscle cramps (11).Objective studies using electromyography revealed that symptomatic muscle cramps are present in as much as 16% of the general population (12). Through careful observation, it was recognized that patients with chronic liver disease suffered from these same muscle cramps.The uniformity of these complaints prompted further investigation to establish the prevalence of cramping in this patient population.Konikoff and Theodor (13) were the first to report that 88% in a series of 33 patients with cirrhosis had painful muscle cramps.The symptoms occurred several times per week in more than half of the patients (13).Chao et al (14) reported a prevalence of muscle cramps of 64% in a cohort of Chinese patients with established cirrhosis.A controlled study by Kobayashi et al (15) revealed that 31% of 80 cirrhotic Japanese patients had muscle cramps at least once per week.This prevalence was significantly greater than the prevalence in two control groups -one age-and sex-matched healthy group, and one group with chronic noncirrhotic liver disease; muscle cramps occurred in 7% and 5% of these patients, respectively. 
The prevalence of muscle cramps in patients with underlying cirrhosis in the United States had not been determined until recently.Abrams et al ( 16) compared the reports of painful muscle cramps in patients with cirrhosis with that of two other control groups -patients with chronic liver disease without cirrhosis and patients with congestive heart failure managed with diuretics.The latter group was included because diuretics have been implicated as a cause of muscle cramps (6).In addition, the use of beta-blockers has been associated with muscle cramps; hence, patients requiring these agents were excluded from the study.In this questionnaire study, 52% of patients with cirrhosis described painful muscle cramps, in contrast to 7.5% and 20% of patients in the respective control groups.Only 50% of those with cirrhosis were maintained on constant diuretics compared with 90% of the heart failure group, suggesting that the use of diuretics was not the primary cause of the muscle cramps.Furthermore, weekly cramps were reported in 22% of those with cirrhosis and in only 5% of those in the control groups (P<0.02). Chronic, painful muscle cramps are a common symptom in patients with cirrhosis.The prevalence ranges from 22% to 88%, reflecting the lack of uniform diagnostic criteria.Clinically relevant muscle cramps should be defined by their frequency and severity, and hence should occur at least once per week, affect the patient's quality of life and require analgesia.These features occur in 12% to 42% of patients with cirrhosis (15)(16)(17).The higher prevalence seen in patients with cirrhosis compared with those without cirrhosis suggests that cirrhosis itself is involved in the pathogenesis. PATHOPHYSIOLOGY The pathophysiology of muscle cramps is poorly understood (18).The leading theory is that they have a neural origin.Muscle cramps can be experimentally induced by repetitive 22D Can J Gastroenterol Vol 14 Suppl D November 2000 Marotta et al electrical stimulation of peripheral nerves (19,20).Cramps also persist during spinal anesthesia and in areas distal to transected nerves (5).These data, coupled with the clinical relation of muscle cramps and lower motor neuron diseases (eg, amyotrophic lateral sclerosis, radiculopathies), suggest that the neural origin is localized to the peripheral nervous system (4).The precise abnormality in the peripheral nervous system is unknown.Neurophysiological studies have shown that high frequency bursts of action potentials arise from abnormally excitable terminal motor nerve fibres that spontaneously propagate the impulses to other muscle groups, leading to the clinical manifestation of muscle cramps (6).These intramuscular motor nerve terminals are unmyelinated and possess physiological properties different from those of extramuscular nerve fibres.The hyperexcitability of these motor nerve terminals may be related to an abnormal sensitivity to neurochemical transmitters or to the local electrochemical environment.The varied causes of muscle cramps (dehydration, diuretics, diarrhea, hemodialysis) have in common the potential to alter the intramuscular extracellular electrochemical environment (21,22). 
TREATMENT Treatment of muscle cramps remains rather empirical.Quinine sulphate is the most widely used agent, yet many other treatments have had anecdotal success, including fluoride (23), vitamin B12 (24), vitamin E (25), taurine (26), riboflavin (27), verapamil (28), calcium (29), tocainide (30), hydroquinine (31) and transcutaneous nerve stimulation (32).Quinine sulphate: Quinine sulphate and related derivatives are the most commonly prescribed agents in the United States for the treatment of muscle cramps (33).The pharmacological basis of these agents stems from their ability to increase the refractory period of skeletal muscle cells and to decrease the excitability of the motor nerve terminals (34).Quinine and its derivatives undergo both renal and hepatic metabolism.Indeed, it is suggested that these agents be used with extreme caution in the presence of hepatic and/or renal insufficiency because of the potential for toxic accumulation of active metabolites.Although there is sound pharmacological grounds for the use of these agents, evidence of therapeutic efficacy is poor and is based on uncontrolled studies conducted in the 1930s and 1940s.Recent randomized, controlled clinical trials have been performed, but have been limited by small sample size and the subjectiveness of the outcome measure.Five randomized, controlled trials have addressed the use of quinine for muscle cramps, and the results have been mixed (35)(36)(37)(38)(39).The discrepancy in efficacy may be related to differences in patient populations, concomitant use of diuretics and varied dosages of quinine.The positive studies comprised a combined total of 26 patients.Based on these limited data, quinine remains the most prescribed agent for the treatment of muscle cramps regardless of the cause (40). 
Therapeutic trials in patients with cirrhosis and muscle cramps had not been performed until 1991, at which time Lee et al (41) completed a randomized study using quinidine sulphate, a dextroisomer of quinine, in the treatment of 31 patients with cirrhosis.Patients were enrolled if they reported having had at least two muscle cramps per week for at least one year.Quinidine sulphate was given orally for four weeks (200 mg twice daily).The serum quinidine level was followed and was kept at a level considered subtherapeutic for antiarrhythmic use.There was an impressive benefit -88% of patients in the treatment arm and 13% in the placebo group showed a greater than 50% reduction in the number of reported muscle cramps during the trial period.The decrease in muscle cramps was inversely proportional to the serum quinidine level, and apart from mild diarrhea seen in five patients (31%), quinidine was well tolerated.Based on these results, serum levels of quinidine that are below the recommended therapeutic range for antiarrhythmic action are thought to be effective therapy for muscle cramps in patients with cirrhosis.Higher serum quinidine levels (not observed in this trial) can lead to adverse effects, including nausea, vomiting, tinnitus, headache, rash and hypersensitivity reactions.Serious toxicity, including visual disturbance (42), permanent blindness (43,44), agranulocytosis (45), immune thrombocytopenia (46), renal insufficiency, prolongation of the QT interval, cardiac arrhythmia and sudden cardiac death can occur and must be recognized.It is recommended that serum quinidine levels be monitored and routine electrocardiograms be performed as monitoring procedures during therapy with this agent.Taurine: Taurine is an amino acid that has been shown to reduce skeletal muscle hyperexcitability (26).Intramuscular taurine levels are depleted in cirrhosis, leading to motor nerve terminal hyperexcitability and subsequent muscle cramps (47).It has been shown to be effective for the treatment of muscle cramps in patients with cirrhosis (48); in this study, taurine given orally, 6 g daily for six months resulted in the complete resolution of muscle cramps in 66% of 12 patients with cirrhosis of the liver.This pilot study depicted a safe and seemingly effective therapeutic strategy.Larger, controlled trials are required.Eperisone hydrochloride: Eperisone hydrochloride, an antispastic agent, was used in the treatment of 18 patients with cirrhosis; 11 (61%) patients treated with 150 to 300 mg/day for eight weeks reported complete resolution of muscle cramps, and the remainder experienced a reduction in the frequency of muscle cramps (15).Treatment was discontinued in three (17%) patients -in one because of unrelated issues and in the other two because of adverse gastrointestinal complaints.Tocopherol (vitamin E): Traber et al (49) described that low tocopherol levels in nervous tissue could produce peripheral neuropathy.Thus, vitamin E deficiency may have a role in the pathogenesis of muscle cramps (50,51).Anecdotal reports of improvement in the symptoms of benign, nocturnal leg cramps using oral tocopherol (vitamin E) have led to further evaluation of this agent in individuals with cirrhosis (25).Konikoff et al (17) treated 13 patients with cirrhosis and painful muscle cramps with vitamin E (tocopherol ace-tate) 200 mg three times daily for four weeks.There was significant improvement in the level of pain, and in the frequency and duration of the muscle cramps in all patients, without any adverse 
effects.Human albumin: Angeli et al (52) suggested that the reduced effective circulating plasma volume seen in cirrhosis contributes to the pathogenesis of muscle cramps.A therapeutic trial in which intravenous placebo was administered weekly for four weeks, followed by intravenous human albumin (100 mL of 25% human albumin) weekly for four weeks was performed in 12 patients with cirrhosis.No significant change was reported in the frequency of muscle cramps during the placebo phase, whereas a significant reduction occurred during the albumin infusion phase.Unfortunately, the improvement was transient -noticeable only during the treatment phase. SUMMARY True muscle cramps are characterized by involuntary, painful contractions.The prevalence of such cramping in cirrhotic patients varies from 22% to 88%.Clinically significant muscle cramps -those that occur with great frequency, affect the individual's quality of life or require significant analgesia -occur less frequently (8% to 20%). The pathogenesis of muscle cramps remains unknown but involves the peripheral nervous system, specifically at the level of the intramuscular motor nerve terminals.The hyperexcitability of these unmyelinated fibres is likely due to an abnormal sensitivity to the local extracellular environment.Disturbance to local constituents (electrolytes, minerals, water composition, vitamin levels) occurs in many conditions that feature muscle cramps (ie, dehydration, hemodialysis, diuretic use and cirrhosis). A single pharmacological agent that displays unequivocal benefit has yet to evolve (Table 3).Quinine and its derivatives are the most common agents prescribed for this condition, yet adequate clinical trials have yet to be performed to support this practice fully.Although, anecdotally, these agents are efficacious, further studies are required.The continued use of these derivatives is acceptable, yet their potential toxicities must be appreciated. Oral taurine exhibits a treatment benefit and has minimal toxicity; this agent requires further evaluation.The use of vitamin E is perhaps the most interesting development in the therapy of muscle cramps in patients with cirrhosis.This agent has virtually no toxicity and, hence, is the logical choice for further clinical evaluation.Human albumin infusion improves symptoms yet is invasive, transient and expensive. True muscle cramps should be considered a complication, a symptom or an extrahepatic manifestation of cirrhosis.They occur commonly and can adversely affect an individual's quality of life.Patients with cirrhosis should be questioned specifically for the presence, frequency and severity of muscle cramps.The efficacy and safety of agents such as vitamin E and taurine must be further evaluated before their widespread use is recommended, yet these agents hold sufficient promise.Therapies that are invasive or costly cannot be justified for this troubling but not life-threatening symptom. 24D Can J Gastroenterol Vol 14 Suppl D November 2000 Marotta et al
MRI-visible enlarged perivascular spaces in basal ganglia rather than centrum semiovale were associated with aneurysmal subarachnoid hemorrhage Background The subarachnoid space is continuous with the perivascular compartment in the central nervous system. However, whether the topography and severity of enlarged perivascular spaces (EPVS) correlate with spontaneous subarachnoid hemorrhage (SAH) remains unknown. Based on the underlying arteriopathy distributions, we hypothesized that EPVS in the basal ganglia (BG-EPVS) are more closely associated with aneurysmal subarachnoid hemorrhage (aSAH) than with non-aneurysmal SAH. Methods Magnetic resonance imaging (MRI) scans of 271 consecutive SAH survivors with and without aneurysm were analyzed for EPVS and other imaging markers. In the subgroup analysis, we compared the clinical characteristics and EPVS of SAH participants with and without pre-existing known risk factors (hypertension, diabetes, and smoking history) using multivariable logistic regression. Results Patients with aSAH (n = 195) had a higher severity of BG-EPVS and centrum semiovale EPVS (CSO-EPVS) than those without aneurysm (n = 76). Importantly, a BG-EPVS predominance pattern (BG-EPVS > CSO-EPVS) was present only in aSAH survivors, not in those with non-aneurysmal SAH. In the subgroup analysis, interestingly, we also found that a high degree of BG-EPVS showed an independent relationship with aSAH in patients without pre-existing risk factors (e.g., hypertension). Conclusion In this cohort study, a BG-EPVS predominance pattern was associated with aSAH compared with SAH without aneurysm. Moreover, BG-EPVS remained strongly associated with aSAH in survivors without pre-existing vascular risk factors. Our findings suggest BG-EPVS as a potential MRI-visible characteristic that may shed light on the pathogenesis of glymphatic dysfunction at the skull base in aSAH. Introduction Subarachnoid hemorrhage (SAH) is a devastating disease with a high fatality rate that also causes substantial disability among survivors. Aneurysm rupture is the leading cause of spontaneous SAH, defined as aneurysmal subarachnoid hemorrhage (aSAH) (1). The causes of spontaneous SAH other than aneurysm include arteriovenous malformation (AVM), anticoagulation use, vasculitis, etc. (2). The subarachnoid space is continuous with the perivascular spaces, which are interstitial fluid-filled cavities surrounding the small penetrating vessels (3). Recently, magnetic resonance imaging (MRI)-visible enlarged perivascular spaces (EPVS) emerged as a potential neuroimaging marker indicating different underlying arteriopathies: EPVS in the basal ganglia (BG-EPVS) are associated with hypertensive arteriopathy, while EPVS in the centrum semiovale (CSO-EPVS) correlate with age-related cerebral amyloid angiopathy (CAA) (4). However, few studies have focused on the prevalence and distribution of MRI-visible EPVS in the SAH population. Therefore, we first sought to examine and compare the severity and topography of EPVS in SAH patients with and without aneurysm at 2 specialist stroke centers. Moreover, as a subclinical imaging marker of cerebral small vessel disease, the prevalence and location of EPVS would be expected to correlate with vascular risk factors. Hence, the second goal of this study was to explore whether EPVS correlated with these risk variables in SAH survivors.
Study population
We analyzed data from a prospective cohort study of consecutive spontaneous SAH survivors admitted to Shanghai Tenth People's Hospital and Tongren Hospital from September 2014 to June 2020. Patients were enrolled in this study if they had spontaneous SAH and underwent MRI scans from 2 weeks to 1 month after the onset of SAH. The exclusion criteria were as follows: (1) secondary causes of SAH, such as trauma or hemorrhagic transformation of ischemic infarction; (2) unknown time of SAH onset; (3) contraindication to MRI, such as non-MRI-compatible implants; (4) patients whose condition was too severe, or had deteriorated, such that they could not undergo MRI. Figure 1 shows the flow chart of participant enrollment.

We retrieved baseline clinical and demographic information, including age, sex, educational level, family history of intracranial aneurysm, time from spontaneous SAH onset to MRI scan, previous history (hypertension, diabetes), and the location (anterior or posterior circulation) and size of the aneurysm. The Hunt-Hess and World Federation of Neurosurgical Societies (WFNS) SAH grading scales were used to assess the severity of the admission neurologic grade (5). We used the Hunt-Hess scale as the primary instrument for grading neurologic impairment, with grades 1, 2, and 3 classified as good grade and grades 4 and 5 (stupor and coma, respectively) as poor grade. The amount of blood on the computed tomography (CT) scan was classified according to the Fisher grade. The Glasgow Coma Scale (GCS), Hunt-Hess scale, WFNS scale, and Modified Rankin Scale (mRS) were assessed by trained neurologists. The educational level was divided into illiteracy, primary school, middle school, high school, and college and above, corresponding to 0, 6, 9, 12, and 16 years of education, respectively.

Neuroimaging acquisition and analysis
All patients underwent MRI according to a standardized protocol as part of their routine clinical assessment. The protocol included T1- and T2-weighted, fluid-attenuated inversion recovery (FLAIR), diffusion-weighted imaging (DWI, with 2 b values of 0 and 1,000), and apparent diffusion coefficient (ADC) sequences. All studies were performed using 3.0-T scanners. Sequences typically included 24-30 slices of 5 mm thickness with a matrix size of 128 × 128. Digital subtraction angiography (DSA) was the only qualified measurement for determining the presence or absence of an aneurysm in this study. The diagnosis of aSAH with DSA was evaluated by two senior neurosurgeons.
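As a small illustration of the clinical coding conventions described above, the following minimal sketch (hypothetical helper names, not code from the original study) maps the education categories to years of schooling and dichotomizes the Hunt-Hess grade:

```python
# Coding conventions from the baseline-data description: education
# categories map to years of schooling, and Hunt-Hess grades are
# dichotomized into good (1-3) versus poor (4-5).
EDUCATION_YEARS = {
    "illiteracy": 0,
    "primary school": 6,
    "middle school": 9,
    "high school": 12,
    "college and above": 16,
}

def hunt_hess_group(grade):
    """Grades 1-3 are 'good grade'; grades 4-5 (stupor, coma) are 'poor'."""
    return "good" if grade <= 3 else "poor"

print(EDUCATION_YEARS["middle school"], hunt_hess_group(4))  # -> 9 poor
```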
As reported in previous studies (6, 7), EPVS on axial T2-weighted MRI were rated separately in the basal ganglia and centrum semiovale regions and stratified into 3 groups: <10, 10-20, and >20. The numbers refer to the EPVS on one side of the brain; the side or slice with the highest number of EPVS was used after all relevant slices for each anatomic area had been reviewed. In the presence of confluent white matter hyperintensities (WMH), an estimate was made for the closest EPVS rating category. In cases of large lobar or deep SAH, EPVS were assessed in the contralateral hemisphere, the closest category ipsilateral to the lesion was estimated, and the highest severity was recorded. For this analysis, we defined high BG-EPVS or CSO-EPVS as ≥10. We also defined a composite variable containing three categories by comparing the degree of CSO-EPVS and BG-EPVS burden: a high degree of CSO-EPVS (i.e., CSO-EPVS > BG-EPVS), an equal degree in the two regions (i.e., CSO-EPVS = BG-EPVS), and a high degree of BG-EPVS (i.e., BG-EPVS > CSO-EPVS); this rating logic is illustrated in the sketch further below. The severity of WMH was rated according to the Fazekas scale, as previously described (8). The total Fazekas score was calculated by adding the periventricular and deep white matter lesion scores. All MRI scans were reviewed by two trained image analysts blinded to patients' clinical characteristics, according to the STandards for ReportIng Vascular changes on nEuroimaging (STRIVE) (9). In cases of disagreement, consensus was reached.

Statistical analysis
We divided the patients with SAH into two groups according to the presence or absence of aneurysm and compared the clinical and imaging characteristics using univariate binary logistic regression. To determine whether a specific subgroup was associated with the incidence of aSAH, particularly in female survivors, we performed multivariate regression analysis. Patients were divided based on pre-existing vascular risk factors, including hypertension, diabetes, and smoking history. A p-value ≤0.05 was defined as statistically significant. Statistical analyses were performed using SPSS 24.0 (IBM Corp., Armonk, NY, United States).

Baseline demographic, clinical, and neuroimaging characteristics
Among the 327 spontaneous SAH survivors, 271 patients who underwent head CT and MRI were included in the cohort (Figure 1). The reasons for exclusion (n = 56) were that survivors were too unwell to undergo MRI (n = 43), informed consent issues (n = 6), and missing key variables (n = 7). Patients with spontaneous SAH who were excluded from the study were not significantly different from those included in mean age (61.42 vs. 58.53; p = 0.082) or female sex (50.0% vs. 60.6%; p = 0.115). The median time from SAH onset to MRI was 17.5 days. Patients who were excluded had a higher Hunt-Hess grade (82.8% vs. 13.4%; p < 0.01).
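Returning to the EPVS rating scheme described above, the following is a minimal sketch (hypothetical helper names, not code from the original study) of the stratification and the three-category predominance variable; note that comparing raw counts, rather than rating categories, is an assumption about how the composite was derived:

```python
def epvs_group(count):
    """Stratify a per-hemisphere EPVS count into the three rating groups."""
    if count < 10:
        return "<10"
    return "10-20" if count <= 20 else ">20"

def is_high(count):
    """High EPVS burden (BG or CSO) was defined as a count of >= 10."""
    return count >= 10

def predominance(cso, bg):
    """Three-category composite comparing CSO- and BG-EPVS burden."""
    if bg > cso:
        return "BG-EPVS > CSO-EPVS"
    if cso > bg:
        return "CSO-EPVS > BG-EPVS"
    return "CSO-EPVS = BG-EPVS"

print(epvs_group(14), is_high(14), predominance(cso=8, bg=14))
```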
Finally, 271 patients with SAH were included in this study. Of these, 195 (72%) patients had an aneurysm [mean age, 59.5; 122 (63%) female], and 76 did not [mean age, 54.2; 31 (41%) female]. Supplementary Table S1 summarizes the clinical and radiological data of the SAH cohort obtained using univariate analysis. In comparison to SAH without aneurysm, aSAH survivors were more likely to be older and female and to have hypertension, a higher bleeding amount (Fisher grade), more severe WMH, high CSO-EPVS, and high BG-EPVS (Table 1).

Discussion
In this cohort study, we found that the severity and distribution of EPVS differed according to the cause of SAH. Not only was a high severity of EPVS more common in SAH survivors with aneurysms than in those without, but a BG-EPVS predominant pattern also showed a strong independent association with aSAH survivors. Interestingly, the BG-EPVS predominant feature still existed in aSAH without pre-existing vascular risk factors. Our findings provide new evidence that the topographic pattern of BG-EPVS might be a characteristic of the underlying arteriopathy and glymphatic dysfunction in aSAH.

We found a high degree of EPVS (≥10) in the basal ganglia and centrum semiovale in more than 70 and 60% of aSAH survivors in our cohort, respectively, which is consistent with two recent retrospective studies (5, 10). In the current concept, PVS are identified as the spaces that surround small arterioles and venules in the brain, which have been proved to participate in fluid exchange and the clearing of waste products from the central nervous system (11). A key pathological characteristic of SAH is the accumulation of extravasated blood in the subarachnoid space. As the subarachnoid space is continuous with the paravascular compartment in the brain, the event of SAH may result in occlusion of PVS by clot formation and disturbance of fluid drainage, finally leading to PVS enlargement (12, 13). This was evidenced in several animal studies showing that EPVS were associated with reduced brain clearance of waste products and glymphatic dysfunction after a SAH attack (14, 15). One important finding of our study was the positive association of the BG-EPVS predominant pattern with aSAH, when we used a method of assessing the EPVS predominance pattern, either CSO-EPVS, BG-EPVS, or CSO-EPVS = BG-EPVS, as described by Charidimou et al.
(16). In the present study, the BG-EPVS predominant pattern was found in 53.8% of the survivors with aneurysm, but was detected in only 15.8% of the other SAH patients without aneurysm. A recent breakthrough in the CNS was the discovery of the glymphatic system, identified as a route moving CSF into the brain along perivascular spaces and eventually removing metabolic waste in the CSF to the periphery (17-19). Therefore, our present finding might be explained by a remarkable study demonstrating that basal rather than dorsal meningeal lymphatic vessels are the main route for macromolecule drainage of CSF into the peripheral lymphatics (20). We propose that the BG-EPVS predominant pattern might reflect basal meningeal lymphatic vessels acting as a major exit pathway for extravasated blood after aSAH. Moreover, according to previous literature using the same EPVS rating scale as our study, a high degree of BG-EPVS was associated with hypertensive arteriopathy, whereas a high degree of CSO-EPVS correlated with cerebral amyloid angiopathy (16, 21). Therefore, our present study supports the view that aSAH is more closely related to hypertensive arteriopathy, while other SAH without aneurysm may be linked to non-hypertensive arteriopathy such as cerebral amyloid angiopathy. Notably, due to the absence of a non-SAH control group, a further study is required to directly compare the EPVS predominant pattern, or the mean total EPVS score, with that of healthy controls.

Compared with intracerebral hemorrhage, CAA, and ischemic stroke, aSAH is supposed to be an earlier-onset subtype of spontaneous stroke, implying a higher ratio of younger patients and the absence of vascular risk factors among these patients (22, 23). Age was found to be associated with aSAH in univariate analysis, but without significance in the logistic regression model. A previous study indicated that EPVS is a common occurrence in the aging population (24), but only advanced age seems to be the major risk factor associated with EPVS (25). As the age of the participants in our study mostly ranged from young to middle age (<60 years), we suppose the influence of age on EPVS was mild. To the best of our knowledge, no previous study has explored the relationship between EPVS and aSAH in patients without pre-existing risk factors. As an imaging marker of cerebral small vessel disease, the prevalence and location of EPVS would be expected to be associated with vascular risk factors and aging. Thus, we investigated the association of EPVS with these "low-risk" aSAH survivors to minimize such effects on EPVS. It has been demonstrated that venous insufficiency could cause perivenular edema, subsequently leading to impairment of glymphatic clearance and retention of fluid in the PVS, which facilitates the onset of EPVS (26-28). Because the basal ganglia are drained by deep medullary veins only, while the centrum semiovale can alternatively be drained by cortical veins and superficial medullary veins, Min L and colleagues hypothesized that the glymphatic system in the centrum semiovale could still maintain intact function when the deep medullary veins are impaired, giving the PVS in the centrum semiovale a higher compensatory capacity; this would explain a BG-EPVS but not CSO-EPVS predominant pattern (29). Therefore, in our opinion, a potential cause of the BG-EPVS predominant pattern in these "low-risk" aSAH survivors is that the PVS in the basal ganglia were more vulnerable, owing to a decreased compensatory capacity for drainage of fluid and metabolic waste after an attack of aSAH, which encouraged the occurrence of BG-EPVS.
Strengths of our study include the standardized evaluation of MRI scans for a range of small vessel disease imaging markers, confirmation of the presence or absence of aneurysm by experienced neurosurgeons using DSA, the use of EPVS predominance patterns, and the enrollment of survivors from more than one stroke center. Our study also has several limitations. First, owing to the cross-sectional nature of the prospectively collected data from this cohort, it was difficult to determine the change in the occurrence of EPVS before and after SAH, as well as the causal link between them. We also acknowledge that most of the excluded patients did not undergo MRI because of severe hemorrhage, which might lead to potential selection bias toward mild to moderate aSAH cases and a relatively lower prevalence of high-degree EPVS in our study. Third, the patients without aneurysm did not undergo repeated DSA, which is needed in cases where no aneurysm is found. Although the family history of intracranial aneurysm was captured in this study, another limitation is that the rate of positive history was too low to be significant in both groups. As genetic samples were not routinely collected, future studies should consider the possibility that EPVS is genetic and might modulate the family history of aneurysm.

Conclusion
In conclusion, our study demonstrated that the BG-EPVS predominant pattern is a topographic characteristic of aSAH, along with a positive association of BG-EPVS with aSAH survivors who seem to be at "low risk." This finding establishes the topographic pattern of BG-EPVS as a potential MRI-visible feature, shedding light on the pathogenesis of glymphatic function at the skull base in aSAH.

FIGURE 1 Flowchart of participant enrollment. SAH, subarachnoid hemorrhage; n, number of patients.
FIGURE 2 EPVS predominance patterns in SAH with aneurysm vs. without aneurysm.
TABLE 2 Multivariate analysis showing variables independently associated with aSAH in patients without hypertension.
TABLE 4 Multivariate analysis showing variables independently associated with aSAH in patients without smoking history.
Effects of lean alkanolamine temperature on the performance of CO2 absorption processes using alkanolamine solutions

Acid gas removal from natural gas using alkanolamine processes is the most common technology for sweetening of natural gas. Based on the sour and sweet gas specifications, several alkanolamine solutions can be used for acid gas removal, all through well-developed processes. However, one of the remaining issues is the cost associated with these processes. In this study, DEA, DGA and mixed (MDEA+DEA) processes are designed for sweetening the natural gas produced in one of the gas fields having a high CO2/H2S ratio. For each process, seven scenarios are designed to investigate the effects of the cooler's operating parameters on the performance of the process. In each scenario, the duty of the cooler is varied in order to obtain a specific lean amine temperature entering the absorber. Each scenario is simulated using Aspen HYSYS and economically evaluated using Aspen economic evaluation. Based on the results of this study, the required solution circulation rate increases slightly when the lean amine temperature increases. However, lower process capital costs and lower cooler duty were obtained by operating the DEA and DGA processes at higher values of lean amine temperature. Also, operating at higher lean amine temperatures resulted in lower hydrocarbon pick up in the case of the MDEA+DEA process.

Introduction
Processes using alkanolamine solutions for acid gas removal are the most common processes used for the removal of acid gases from natural gas. The alkanolamine processes are well developed, each of which is suitable for sweetening natural gas with certain sour and sweet gas specifications [1-9]. However, one of the main issues is the large costs associated with these processes [10-12]. Numerous studies have been carried out to reduce these costs. Polasek et al studied alternative flow schemes for natural gas sweetening [11], Bae et al studied a split-flow configuration for the process [13], Warudkar et al studied the effects of stripper operating parameters [10], Cousins et al studied modifications of the process flow sheet [14], Sohbi et al and Fouad et al studied the effects of using mixed alkanolamines [6,7], Kazemi et al and Ghanbarabadi et al performed comparative studies between different processes [15,16], Nuchitprasitichai et al, Øi et al and Mores et al used optimization techniques [12,17,18], Freeman et al proposed using concentrated piperazine mixtures [8], and Banat et al used an energy analysis method [19] for reducing the costs and energy requirements of sweetening processes. For the sweetening of natural gas with certain specifications, several processes might be applicable. One of the questions that arises in these situations is which process is the most economical for sweetening natural gas with these specifications. Also, one of the important parameters affecting the costs associated with a sweetening process is the lean solution temperature entering the absorber. Changing the lean amine temperature might have an impact on the solution flow rate needed to reach the desired sweet gas specifications, which strongly affects the costs associated with natural gas sweetening processes.
On the other hand, the choice of lean amine temperature affects the duty that must be applied to the cooler. Another question is at what temperature the lean solution should enter the absorber in order to achieve the best economic performance. In this study I tried to answer these two questions for the case of the natural gas produced in a gas field having a high CO2/H2S ratio, i.e., relatively high CO2 and low H2S contents, and low pressure. The effects of the cooler's operating parameters are investigated: the DEA, DGA and mixed (MDEA+DEA) processes are designed for sweetening the natural gas produced in such a field, and seven scenarios are designed to investigate the effects of the cooler's operating parameters on the performance of the processes. Each scenario is simulated using Aspen HYSYS and economically evaluated using Aspen economic evaluation. The results of the simulation and economic evaluation are then studied to select the optimum operating conditions for the process cooler. Although there have been some studies on the effects of lean amine parameters on the performance of sweetening processes [20], I could not find comprehensive research studying the suggested target parameters for the selected processes.

Feed gas specifications
All three processes are designed for sweetening the natural gas produced in a gas field having a high CO2/H2S ratio. It can be seen from the sour gas specifications that the natural gas produced in this gas field has a high CO2/H2S ratio, high CO2 content, low H2S content and low pressure. Thus, it is expected that the results of this study would be applicable to sweetening natural gas produced in similar gas fields. In this study, the desired sweet gas specifications are taken to be concentrations lower than 1 mol% CO2 and lower than 4 ppm H2S.

An overview of the three processes
Alkanolamines are widely used for acid gas removal from natural gas [1,15,21-26]; they are classified into primary, secondary and tertiary amines based on the number of alkyl groups bonded to the N atom of the amino group. The most common alkanolamines used are monoethanolamine (primary), diethanolamine (secondary) and methyldiethanolamine (tertiary) [15,25-29]. The selection of an alkanolamine process for sweetening natural gas affects the capital and operating costs, energy requirements, sizing of the equipment and, in some cases, the type of equipment needed for sweetening [25,27]. The alkanolamines absorb the acid gases from natural gas via reactions (1-2) [17,30].

DEA
Diethanolamine, abbreviated as DEA, is a secondary amine whose aqueous solutions are used to absorb hydrogen sulfide and carbon dioxide from natural gas [25,26,31]. Many products such as COS, CS2, SO3 and SO2 can catalyze degradation or deactivation of alkanolamine solutions [2,32]. Due to their low reaction rates with CS2 and COS, DEA and other secondary amines are the better choice for natural gas sweetening when considerable amounts of CS2 and COS are present in the sour gas [25,26]. DEA solutions are rather unselective and can be used for absorption of either H2S or CO2 from natural gas [26]. DEA solutions are industrially used at concentrations between 25-40 wt% [27,33].
The DEA sweetening process is simulated using the Aspen HYSYS simulator and the different simulation cases are economically evaluated using Aspen economic evaluation (Icarus); the results are compared to those of the DGA and MDEA+DEA processes. For the simulation of this process, the DBR-Amine property package has been used. The simulation flow sheet is shown in Figure 1. A tray absorber with 20 theoretical stages was used, and a tray column with 18 theoretical stages was used for modeling the regenerator column. The pressure of the regenerator varies between 27.5 psia (condenser) and 31.5 psia (reboiler). The rich DEA pressure is reduced to 90 psia in the valve, and no pressure drop was assumed in the two-phase separator.

DGA
Diglycolamine (DGA) is a primary amine used for natural gas sweetening. The low vapor pressure of DGA allows using aqueous solutions of this amine at rather high concentrations (40-70 wt%) for natural gas sweetening, which lowers the amine circulation rate required [25,33]. DGA solutions are particularly effective for the treatment of low pressure natural gas. DGA has a tendency to selectively absorb CO2 in the presence of H2S [33]; however, DGA absorbs aromatic compounds, which makes the sulfur recovery unit more complicated [34]. Thus, DGA is a good choice for sweetening natural gas with a relatively high CO2 concentration, and based on these statements it is selected as one of the alternatives for sweetening natural gas with the given specifications. In this study a 65 wt% aqueous solution of DGA is used for sweetening the natural gas. Aspen HYSYS and Aspen economic evaluation have been used for the simulation and economic evaluation of this process, with the DBR-Amine property package. The simulation flow sheet for this process is shown in Figure 2. A tray absorber with 20 theoretical stages was used, and a tray column with 20 theoretical stages was used for modeling the regenerator column. The pressure of the regenerator is set to 24 psia. The rich DGA pressure is reduced to 25 psia in the valve, and no pressure drop was assumed in the two-phase separator.

MDEA+DEA
Methyldiethanolamine (MDEA) is a tertiary amine known to have higher selectivity in absorbing H2S in the presence of CO2 [27]. The reaction of MDEA with H2S is almost instantaneous, while its reaction with CO2 occurs at lower rates. However, numerous studies show that the addition of small amounts of primary or secondary amines to a tertiary amine increases the overall CO2 absorption rate of the process [6,25,27,33,35-37]. Because of the relatively high CO2 content in the sour gas, I decided to add 10 wt% of a secondary amine (DEA) to the solution to increase the CO2 absorption rate of the MDEA process, which can make it a promising process for sweetening the natural gas described in section 2. The other reason for mixing the suggested amine solutions is to combine the reactivity of the secondary amine with the relatively low regeneration energy requirements of the tertiary amine. MDEA's typical concentration in aqueous solutions is 30-50 wt% in industrial applications. In this study an aqueous solution of 40 wt% MDEA and 10 wt% DEA is selected for sweetening the natural gas introduced in section 2, which is one of the cases with the best performance regarding absorption of CO2 [6].
Aspen HYSYS is used for the simulation of this process and Aspen economic evaluation is used for its economic evaluation, with the DBR-Amine property package. The simulation flow sheet is shown in Figure 3. A tray absorber with 20 theoretical stages was used, and a tray column with 20 theoretical stages was used for modeling the regenerator column. The pressure of the regenerator is set to 24 psia. The rich amine pressure is reduced to 25 psia in the valve, and no pressure drop was assumed in the two-phase separator.

Simulation results and operating conditions
For each of the three processes, seven different scenarios have been designed for studying the effects of the cooler's operating parameters on the performance of the sweetening processes. Each scenario shows the characteristics of the system at a certain operating condition of the cooler. The cooler's duty in each scenario is varied until the lean solution temperature reaches the designed value. In each scenario, the process parameters are changed so as to reach concentrations lower than 1 mol% CO2 and lower than 4 ppm H2S in the sweet natural gas. In the simulation of these processes, the minimum temperature approach for all of the heat exchangers has been assumed to be 10 °C and the pump's adiabatic efficiency was set at 75%. After completing the simulation of the three processes, these seven scenarios are applied to each process and the process is economically evaluated using Aspen economic evaluation v7.3.

One of the most important characteristics of a sweetening process is the circulation rate (gpm) of the solution [15,38]. Increasing the solution flow rate causes the capital and operating costs, sizing of equipment and energy requirements of the process to increase [15,25,39]. The solution flow rates of the processes in the different scenarios are shown in Figure 4. It is clear from the data presented in Figure 4 that the amine circulation rate for the mixed amine process is higher than that of DGA and DEA in all seven scenarios. It is also shown in Figure 4 that when the lean amine temperature increases, the solution flow rate needed for each process increases slightly, and the minimum required solution circulation rate is observed at the lowest lean amine temperature. As mentioned earlier, increasing the solution flow rate in a sweetening plant causes the plant's capital and operating costs, along with the energy requirements and sizing of the equipment, to increase. On the other hand, reducing the temperature of the lean solution requires a larger cooler duty. This larger duty could be obtained by increasing the contact area of the heat exchanger or changing the cooling medium; either way, this change will make the plant's operation more expensive. Based on these statements, it seems that there should be an optimum point of operation for the cooler of a sweetening plant, and in this study I tried to find this point for three different sweetening processes. As shown in Figure 4, the solution circulation rate for the mixed process is significantly higher than that of the DEA process. This observation is attributed to the fact that methyldiethanolamine selectively absorbs H2S and has a lower capacity for absorption of CO2 [25,26,40].
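To make this trade-off concrete, the following sketch estimates how the cooler duty scales with the target lean amine temperature through a simple sensible-heat balance, Q = m_dot * cp * (T_in - T_lean); the flow rate, heat capacity and cooler inlet temperature are rough illustrative assumptions, not values taken from the HYSYS models:

```python
# Rough sensible-heat estimate of cooler duty versus target lean amine
# temperature. All three parameters below are illustrative assumptions,
# not outputs of the Aspen HYSYS simulations.
M_DOT = 25.0   # kg/s, assumed lean solution mass flow
CP = 3.8       # kJ/(kg K), assumed solution heat capacity
T_IN = 70.0    # deg C, assumed solution temperature entering the cooler

for t_lean in range(30, 61, 5):   # the seven scenarios, 30-60 deg C
    duty_kw = M_DOT * CP * (T_IN - t_lean)
    print(f"lean amine at {t_lean} C -> cooler duty ~ {duty_kw:,.0f} kW")
```

As expected, the estimated duty falls as the target lean amine temperature rises, which mirrors the trend reported for Figure 8.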
Another important aspect of the operation of sweetening processes is the fraction of hydrocarbons absorbed into the solution in the contactor. Based on previous studies, hydrocarbon co-absorption is mainly a disadvantage of physical and physical-chemical solvents [9,25,27,41,42]; however, I examined this parameter for the three chemical absorption systems to verify the simulation results. As shown in Figure 5, although for the mixed amine process hydrocarbon pick up is enhanced at lower temperatures, the hydrocarbon pick up by the solution remains at a very low rate over the different cooler operating conditions in the three processes. The maximum hydrocarbon pick up by the solution in the 21 simulation scenarios was 0.0004, for the mixed amine process. It is also observed in Figure 5 that at temperatures higher than 45 °C the hydrocarbon pick up by the MDEA+DEA process decreases, whereas the hydrocarbon pick up by the DEA and DGA processes is not affected by the lean amine temperature.

Since the chemical reactions leading to absorption of acid gases into the alkanolamine solutions are exothermic [25,43-45], it is expected that the temperature of the rich amine will be higher than that of the lean amine entering the contactor, and the temperature difference between these streams can be a parameter showing the intensity of the absorption process in the contactor. In Figure 6 and Figure 7 the temperature difference between the rich and lean amine streams, and the rich amine temperatures, are shown. Based on the data shown in Figure 7, the temperature of the rich amine increases when the lean amine temperature entering the contactor is increased. However, for the three processes the temperature difference between the two streams decreases with increasing lean amine temperature. For the DEA process, the rich amine temperature is even lower than the lean amine temperature at lean amine temperatures higher than 35 °C. This observation is attributed to higher heat transfer between the cold feed gas (at 21 °C) and the lean amine, owing to the increased temperature difference between the feed gas and lean amine streams. Another important issue that must be addressed here is that the rich amine temperature directly affects the energy requirements of the system, because the rich amine at the bottom of the contactor needs to be regenerated at high temperatures. Thus, when the rich amine temperature is increased, the system's energy requirements (or the heat exchanger's contact area) will decrease.

Another important characteristic of sweetening processes is the energy requirements. The lean amine temperature directly affects the duty that needs to be applied in the cooler; it also affects the stripper's energy requirements and the heat exchanger duty. Figure 8 shows that when the lean solution temperature decreases, the cooler's duty increases for the three processes, which is expected because the temperature difference around the cooler increases as the outlet temperature decreases. The minimum cooler duty is observed at the highest lean amine temperature, in accordance with the expected trend. It is also shown in Figure 9 that the heat exchanger duty follows a decreasing trend with increasing lean amine temperature. Another important observation in Figure 9 is the considerably lower heat exchanger duty of the DGA process compared to the DEA and MDEA+DEA processes. This is because the temperature of the rich amine in the DGA process is considerably higher than that of the other two processes.
The low cooler and heat exchanger duties of the DEA process are also attributed to the lower solution circulation rate of this process compared to the DGA and MDEA+DEA processes. After completing the simulation of the seven scenarios for each of the processes, each scenario is economically evaluated using Aspen economic evaluation. It has been assumed that the projects are about to be constructed in 2014. The results are obtained in US$ or US$/year for the different scenarios. Parameters such as the complexity of the processes, the start date and the level of instrumentation are taken into account for estimating the capital and operating costs of the processes. As shown in Figure 10, based on the results of the economic evaluation, the capital costs of the MDEA+DEA process pass through a minimum when the lean amine temperature reaches 40 °C. It is also clear that, with increasing lean amine temperature from 30 °C to 60 °C, the capital costs of the DEA and DGA processes follow a decreasing trend. The lowest process capital cost is obtained when the DEA process is used and the lean amine temperature is the maximum examined temperature; the capital costs of the DGA process are slightly higher than those of the DEA process.

The annual operating cost results of the seven scenarios simulated for each of the processes are shown in Figure 11. According to the data shown in Figure 11, the annual operating costs of the three processes are not strong functions of the lean amine temperature. These observations can be justified by examining the data shown in Figure 8 and Figure 9. It was mentioned earlier that the stripper's reboiler duty does not vary with changing lean amine temperature; also, decreasing the lean amine temperature has a positive effect on the heat exchanger's duty and a negative effect on the cooler's duty. Based on this information, it is concluded that the negative and positive effects of this change are not very steep, or that these effects neutralize each other, and this is the reason that no discernible change in utility costs, and subsequently in the annual operating costs of the system, is observed. It is also clear from Figure 11 that the annual operating costs and utility costs of the DEA process are lower than those of the DGA and MDEA+DEA processes. Considering a life cycle of 25 years for operating the three processes, the dominant costs associated with the processes are the annual operating costs and utility costs. From the data shown in Figure 11 it is observed that the annual operating costs and utility costs of the DGA and DEA processes are not affected by the choice of lean amine temperature, so for these two processes the lean amine temperature does not play a crucial part in the costs. However, for the MDEA+DEA process the results are more complicated: the annual operating costs of the process do not follow a simple trend, and the minimum annual operating costs are observed at a lean amine temperature of 30 °C. Considering a life cycle of 25 years, this temperature shows the best economic performance for this process.

Figure 11. Effects of the lean amine temperature on the annual operating costs of the processes.
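As a worked illustration of the 25-year life-cycle argument, the following sketch compares processes by total cost = capital cost + 25 × annual operating cost; the cost figures are made up for illustration, since the actual Aspen economic evaluation outputs are not reproduced here:

```python
LIFE_YEARS = 25  # assumed plant life cycle, as in the text

# Hypothetical cost figures (US$ and US$/year) standing in for the
# Aspen economic evaluation results, which are not reproduced here.
processes = {
    "DEA":      {"capex": 4.0e6, "opex": 1.0e6},
    "DGA":      {"capex": 4.3e6, "opex": 1.2e6},
    "MDEA+DEA": {"capex": 4.8e6, "opex": 1.4e6},
}

for name, cost in processes.items():
    total = cost["capex"] + LIFE_YEARS * cost["opex"]
    print(f"{name}: life-cycle cost ~ US$ {total:,.0f}")
```

The point of the arithmetic is that, over 25 years, the operating-cost term dwarfs the capital term, which is why the lean amine temperature matters mainly where it changes the annual operating costs.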
Based on the aforementioned discussions, there are several advantages to operating the DGA, DEA and MDEA+DEA sweetening processes at higher lean amine temperatures: lower process capital costs, lower rich amine hydrocarbon pick up in the case of the MDEA+DEA process, and lower cooler duty.

Conclusion
The effects of the cooler's operating parameters on the performance of three sweetening processes designed for sweetening the natural gas produced in a gas field having a high CO2/H2S ratio (with the specifications described in section 2) have been investigated. The DEA, DGA and MDEA+DEA processes were selected for sweetening the natural gas produced in this gas field. Each of these processes was designed so as to reach concentrations lower than 1 mol% CO2 and lower than 4 ppm H2S in the sweet gas. Based on the results of this study, for the DEA and DGA processes in the lean amine temperature range of 30-60 °C, operating at higher lean amine temperature exhibits several advantages: lower process capital costs, lower rich amine hydrocarbon pick up in the case of the MDEA+DEA process, and lower cooler duty were obtained at higher values of lean amine temperature. Although the solution circulation rate needed to reach concentrations lower than 1 mol% CO2 and lower than 4 ppm H2S in the sweet gas increased slightly when the lean amine temperature increased, it is recommended to operate the DGA and DEA sweetening processes at higher lean amine temperatures. An improvement to the results of this research would be the investigation of the costs and energy requirements of other suitable processes for sweetening natural gas with specifications close to those of the gas I have considered; this can be the topic of future studies.
A Robust H∞ Controller for a UAV Flight Control System

The objective of this paper is the implementation and validation of a robust H∞ controller for a UAV, to track all types of manoeuvres in the presence of a noisy environment. A robust inner-outer loop strategy is implemented. To design the robust controller in the inner loop, the H∞ control methodology is used. The two controllers that make up the outer loop are designed using the H∞ Loop Shaping technique. The reference vector used in the control architecture, formed by vertical velocity, true airspeed, and heading angle, suggests a nontraditional way to pilot the aircraft. The simulation results show that the proposed control scheme works well despite the presence of noise and uncertainties, so the control system satisfies the requirements.

Introduction
There is considerable interest in using unmanned air vehicles (UAVs) to perform a multitude of tasks [1]. UAVs are gaining more powerful capabilities to accomplish a wide range of missions with high efficiency and accuracy. They are becoming vital warfare and homeland security platforms because they significantly reduce both costs and the risk to human life, and they extend first-responder capabilities. UAVs have many typical applications, such as intervention in industrial plants, natural disaster intervention, cooperation with ground robots in demining operations, aerial mapping, remote environmental research, pollution assessment and monitoring, fire-fighting management, security (for example, border monitoring and law enforcement), scientific missions, agricultural and fisheries applications, oceanography, and communications relays for wideband applications. Given their numerous benefits, it would be desirable to decrease the global cost of this type of aircraft. In this sense, the flight control design problem for low cost UAVs still requires significant effort, and the control and dynamic modeling of UAVs remains an attractive field of research.

The control of UAVs is not an easy task, as the UAV is a multi-input multi-output (MIMO), underactuated, unstable, and highly coupled system. Many traditional control strategies have been used over the years for the control of UAVs, such as the linear quadratic regulator (LQR) [2,3]. Robust techniques have also been applied to design controllers that achieve robust performance and simultaneously guarantee stability when the system deviates from its nominal design condition and/or is subjected to exogenous disturbances. In particular, the robust H∞ control method by Zames [4,5] has been used in flight control systems for both the lateral and longitudinal dynamics of aircraft [6-8]. In this work, an inner-outer loop control architecture applied to the longitudinal and lateral flight motions is implemented, using the H∞ Loop Shaping Design procedure [9,10] to synthesize the inner-loop controller. The technique decouples the longitudinal and lateral dynamics and minimizes the cross effects involved. The feasibility of the controller is analyzed, and the control scheme is implemented on a 6-DOF nonlinear simulation model. Different simulation results are presented to show the robustness of the proposed control architecture. The paper is structured as follows. Section 2 presents the aircraft model and its linearization. Section 3 describes the control problem, presenting the control objectives and the control scheme. Design results are analyzed in Section 4. Flight test results are presented in Section 5.
Aircraft dynamics is described by a full 6-degree-of-freedom (DOF), 13-state, high fidelity UAV nonlinear model. The nonlinear model has been developed in standard body axes centered at the aircraft center of gravity, where x points forward through the aircraft nose, y is directed to the starboard (right) side, and z is directed through the belly of the aircraft. Using the notation given by Stevens and Lewis [11], the flight dynamic model that describes the rigid body motion of the aircraft is given by the following equations.

Force equations:
du/dt = rv - qw + gx + Fx/m
dv/dt = pw - ru + gy + Fy/m    (1)
dw/dt = qu - pv + gz + Fz/m

Moment equations:
J dω/dt = M - ω × (J ω)    (2)

Kinematic equations:
dφ/dt = p + tanθ (q sinφ + r cosφ)
dθ/dt = q cosφ - r sinφ    (3)
dψ/dt = (q sinφ + r cosφ)/cosθ

Navigation equations:
[dpN/dt, dpE/dt, -dh/dt]^T = R_IB^T [u, v, w]^T    (4)

where m is the mass; (u, v, w) are the body-axis velocity states; (p, q, r) are the body-axis rates, collected in ω; φ, θ, ψ are the roll, pitch, and yaw angles, respectively; and (pN, pE, h) are the north, east, and height positions. F = (Fx, Fy, Fz) represents the aerodynamic force vector and M represents the moment vector. J is the aircraft inertia matrix and R_IB is the inertial-to-body transformation matrix. (gx, gy, gz) is the gravity vector, which is the transformation of the (0, 0, g) NED-frame gravity vector to the body-axis frame:

(gx, gy, gz) = (-g sinθ, g cosθ sinφ, g cosθ cosφ).

The resulting model is a thirteenth-order (thirteen-state) model [12]. Due to the complexity and the uncertainty inherent to aerodynamic systems, the dynamic model was identified from a complete set of identification flights through the full envelope. See Stevens and Lewis for details [11].

Linearized Dynamic Model. The nonlinear dynamic model described in Section 2.1 is linearized about certain trimmed operating conditions. This is accomplished by perturbing the state and control variables from steady state. The resulting dynamic system is modeled with the standard continuous, time-invariant state-space formulation

dx/dt = A x + B u,  y = C x + D u,    (8)

where A is a 13 × 13 matrix, B a 13 × 4 matrix, C a 12 × 13 matrix, and D a 12 × 4 matrix. The control vector u(t) is defined by throttle, elevator, aileron, and rudder.
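As a schematic illustration of the linear model (8), the sketch below propagates a placeholder state-space system of the stated dimensions; the matrices are random stand-ins, since the identified UAV matrices are not reproduced in the paper text:

```python
import numpy as np

# Schematic of the linear time-invariant model (8): dx/dt = A x + B u,
# y = C x + D u. The matrices below are random placeholders with the
# stated dimensions; the identified UAV matrices are not reproduced here.
rng = np.random.default_rng(0)
A = rng.standard_normal((13, 13)) - 4.0 * np.eye(13)  # shifted so the stand-in is stable
B = rng.standard_normal((13, 4))    # inputs: throttle, elevator, aileron, rudder
C = rng.standard_normal((12, 13))   # 12 measured outputs
D = np.zeros((12, 4))               # feedthrough (taken as zero here)

x = np.zeros(13)                     # state deviations from trim
u = np.array([0.1, 0.0, 0.0, 0.0])   # small throttle step
dt = 0.01
for _ in range(100):                 # forward-Euler propagation over 1 s
    x = x + dt * (A @ x + B @ u)
print(C @ x + D @ u)                 # output deviations after 1 s
```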
Control Objectives. The main objective is the design of a robust controller to track all types of input commands in a noisy environment. The controller has to be designed as a trade-off between robustness and performance in order to fulfil the specifications described in this section.

Closed Loop Specifications. Stability of the aircraft, minimal overshoot, and a reasonably short settling time are important constraints in the design. Translated into physical design goals, the controller must meet the following specifications:
(i) altitude response: overshoot < 5%, rise time < 5 s, and settling time < 20 s;
(ii) heading angle response: overshoot < 5%, rise time < 3 s, and settling time < 10 s;
(iii) flight path angle response: overshoot < 5%, rise time < 1 s, and settling time < 5 s;
(iv) airspeed response: overshoot < 5%, rise time < 3 s, and settling time < 10 s;
(v) cross coupling between airspeed and altitude: for a step in commanded altitude of 30 m, the peak value of the transient of the absolute error between airspeed and commanded airspeed should be smaller than 0.5 m s−1; conversely, for a step in commanded airspeed of 2 m s−1, the peak value of the transient of the absolute error between altitude and commanded altitude should be smaller than 5 m.

Gust Rejection. The second objective of the control system is to include robustness to gust effects on the aircraft. In this sense, turbulence can be considered a stochastic process defined by its velocity spectra. For an aircraft flying at a cruise speed V, a commonly used velocity spectrum for the turbulence model is the Dryden spectrum [13]:

Φ(ω) = σ² (Lv/(πV)) (1 + 3(Lv ω/V)²) / (1 + (Lv ω/V)²)²,

where ω is the frequency in rad s−1, σ is the turbulence standard deviation, and Lv is the turbulence scale length. The turbulence standard deviation for severe gust conditions is given by [14]:

σ = 0.1 + 0.00733h m/s, 300 < h < 600 m
σ = 3.04 + 0.00244h m/s, 600 < h < 1400 m
σ = 6.45 m/s, 1400 < h < 5800 m

where h is the altitude. Our gust rejection specification is to reject all disturbances below 13 rad s−1 (a numerical sketch of this gust model is given later in this section).

Noise Rejection. Basically, the measured variables for the lateral control are the lateral acceleration and the yaw and roll rates, measured in body-fixed axes. For the selected sensors, the noise is high and concentrated in the frequency range above 30 rad s−1. Thus, the high frequency specification is that all noise spectra, which normally occur above 30 rad s−1, should be rejected.

Robustness Specifications. The designed controller has to be robust against uncertainty in the plant model. The robustness specifications are defined as follows.
(i) Centre of gravity variation: stability and sufficient performance should be maintained for horizontal centre of gravity variations between 15% and 31% cbar (mean aerodynamic chord).
(ii) Vertical centre of gravity: it must not vary and should remain at 0% cbar.
(iii) Mass variations: stability and sufficient performance should be maintained for aircraft mass variations between 18 and 30 kg.
(iv) Time delay: stability and sufficient performance should be maintained for transport delays from 0 to 60 ms.
(v) Speed variations: stability and sufficient performance should be maintained for speed variations from 1.23VS (stall velocity) to 55 m s−1 (200 km/h).

Controller Design. The control architecture is based on that proposed by Tucker and Walker [13]. As Figure 2 shows, it basically consists of two loops: an inner-loop controller to achieve stability and robustness to the expected parameter uncertainty, and an outer loop for tracking reference performance. The design of the inner loop is focused on maintaining the vertical velocity deviation, the heading angle deviation, and the airspeed deviation near zero. Two different controllers make up the outer loop: the altitude controller and the heading angle-lateral deviation controller. Both controllers are synthesized using the H∞ Loop Shaping technique (see [9,15,16]). Figure 3 shows the general framework used in the design process. Figure 4 shows the inner loop architecture. Its main goal is to minimize both the deviation from the desired output and the control effort. r ∈ R3 is the reference input vector, whose components are the vertical speed, airspeed, and roll angle. u ∈ R4 is the control signal.

The Inner Loop Synthesis Procedure. z1 ∈ R3 is the vector of performance outputs and z2 ∈ R2 is the vector of weighted control inputs. The feedback variables are the vertical speed, airspeed, roll angle, pitch rate, yaw rate, roll rate, and sideslip. The total plant Gtotal is formed by the plant (the linearized UAV model), the actuators model, and the corresponding delays. These delays are modelled using first-order Padé approximations. They are used to represent plant uncertainties in the high frequency range, such as modeling errors and neglected actuator dynamics. Four delays of 100 ms are included in the plant model, one in each input, including the throttle.
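Returning to the Dryden gust model introduced above, the following sketch evaluates the severe-turbulence standard deviation and the spectrum numerically; the 500 m scale length and 30 m/s cruise speed are illustrative assumptions, as they are not stated in the text:

```python
import numpy as np

def sigma_severe(h):
    """Piecewise turbulence standard deviation (m/s) vs altitude h (m)."""
    if 300 < h < 600:
        return 0.1 + 0.00733 * h
    if 600 <= h < 1400:          # the pieces join continuously at 600 m
        return 3.04 + 0.00244 * h
    if 1400 <= h < 5800:
        return 6.45
    raise ValueError("altitude outside the tabulated range")

def dryden_psd(omega, sigma, L_v, V):
    """Dryden velocity spectrum Phi(omega) for cruise speed V (m/s)."""
    a = L_v * omega / V
    return sigma**2 * (L_v / (np.pi * V)) * (1 + 3 * a**2) / (1 + a**2) ** 2

# Example: severe turbulence at 1000 m, evaluated at the 13 rad/s gust
# rejection boundary, assuming L_v = 500 m and V = 30 m/s.
print(dryden_psd(omega=13.0, sigma=sigma_severe(1000), L_v=500.0, V=30.0))
```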
The actuator model for the elevator, aileron, and rudder is given by the first-order linear approximation 10/(s + 10), and the engine model is represented by 2/(s + 2). The sensor noise is represented by means of a white noise model. The standard deviations of the sensor noise corresponding to the output vector are 0.1 m s−2 for accelerations, 0.005 rad s−1 for angular velocities, 5 m for positions, and 0.5 m s−1 for velocities.

The controller is designed using the H∞ technique. It must guarantee stability and follow an ideal model, the so-called matching model M. That is, the closed-loop system output z1 is expected to match yM ∈ R3, the output of the ideal model M. The matching model M, which defines the behaviour of the vertical speed, the true airspeed, and the heading angle, consists of three decoupled second-order systems. The matching model is selected to achieve the desired behaviour of the vertical speed, airspeed, and roll angle, so as to meet the closed loop specifications detailed in Section 3.1. The cross coupling terms are zero, thus defining the requirement that the closed loop system be decoupled.

Four weights Wi (i = 1, . . . , 4) are used in the inner loop to satisfy the frequency-dependent specifications on performance and robustness. They are added to maximize disturbance rejection and to minimize the effects of wind gusts and sensor noise. W1 is related to reference tracking, so its elements are selected as low-pass filters; the yaw rate and roll rate channels are selected as band-pass filters. W2 is devoted to minimizing the control effort; this is why it is selected as a high-pass filter, whose gain and bandwidth are chosen to allow low frequency control effort and to minimize high frequency control effort. W3 and W4 are unity matrices; they weight turbulence and output disturbances, respectively. The controller synthesis is accomplished using an iterative procedure: first the weights are selected, then the controller is synthesized, and finally the performance of the resulting system is analysed. After this iterative process, the definitive weights were selected.

The Outer Loop Synthesis Procedure. Two different controllers make up the outer loop: the altitude controller (see Figure 5) and the heading angle-lateral deviation controller (see Figure 6). The two outer-loop controllers are synthesized using the H∞ Loop Shaping technique [16]. Figure 5 shows the first problem to be solved, where K is the controller and W1 and W2 are the weights used to tune the optimization. The simplified models of the plant used to synthesize these controllers are those defined in the matching model. In the design of the altitude controller, an output integrator is used to provide height and vertical velocity outputs, and an input integrator is used to improve the low frequency behaviour. In a similar way, in the heading angle-lateral deviation controller design, an output integrator is used to provide the yaw angle and its derivative as outputs, and an input integrator is used to improve the low frequency behaviour. The gamma values obtained are 3.18 and 2.5 for the altitude controller and the heading angle-lateral deviation controller, respectively. The heading angle and lateral deviation controllers have been built together because of the strong interaction between the variables involved, which led to a tedious iterative process when individual controllers were designed; in this approach, the two controllers are synthesized jointly.
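Since the numerical coefficients of the matching model are not reproduced here, the following sketch builds one representative second-order channel with assumed natural frequency and damping, and checks it against the step-response specifications using the python-control package:

```python
import control as ct

# One channel of the decoupled second-order matching model,
# M(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2). The values of wn and zeta
# are assumptions; the paper's actual coefficients are not given here.
wn, zeta = 2.0, 0.8
M = ct.tf([wn**2], [1, 2 * zeta * wn, wn**2])

info = ct.step_info(M)   # rise time, settling time, overshoot, ...
print("overshoot: %.1f %% (spec: < 5 %%)" % info["Overshoot"])
print("rise time: %.2f s" % info["RiseTime"])
print("settling time: %.2f s" % info["SettlingTime"])
```

With these assumed values the channel overshoots by roughly 1.5%, comfortably inside the 5% specification, which is the kind of check the matching model is meant to encode.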
Design Results
The performance of a system can be represented by the sensitivity function S, and the maximum singular value of S is an important boundary in this case. By using the largest singular value, we are effectively assessing the worst-case scenario. The performance specification means minimizing the sensitivity function as much as possible at low frequencies, while at the same time the control effort should be small in the high frequency range. Figure 7 shows the sensitivity function. It is easy to see that our goal of minimizing the sensitivity at low frequencies has been achieved. At high frequencies the gain is unity, and around the bandwidth there is a peak in the response. This behaviour of the sensitivity enables good reference tracking in the low frequency range, and noise reduction and robustness in the high frequency range. Figure 8 shows the control effort, which is lower in the high frequency range, as expected.

Since the designed H∞ controller produces a 46-state state-space realization, it is necessary to apply controller reduction techniques. A final controller realization of dimension 27 is achieved using the Hankel minimum degree approximation (MDA) without balancing reduction method [17]. This method has been applied iteratively, checking the frequency and time responses at every step to evaluate the performance of the proposed UAV control scheme. One example of the time response at one step of this iterative process is shown in Figure 9. Figure 10 shows the effect of an incorrect order reduction: this behaviour is obtained when an order reduction is forced and the reduced controller is not able to maintain the desired specifications.
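As a hedged illustration of this reduction step (the actual 46-state controller is not available in the text), the sketch below reduces a random stable stand-in system to 27 states with python-control's balred routine (balanced truncation, which requires the slycot backend and is a related but not identical method to the Hankel MDA used in the paper) and checks the frequency-response error, mirroring the iterative verification described above:

```python
import control as ct
import numpy as np

rng = np.random.default_rng(1)
n = 46                                   # order of the full controller
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # force stability
K_full = ct.ss(A, rng.standard_normal((n, 2)),
               rng.standard_normal((2, n)), np.zeros((2, 2)))

K_red = ct.balred(K_full, orders=27)     # reduce to 27 states, as in the paper
w = np.logspace(-2, 3, 200)
err = (ct.frequency_response(K_full, w).fresp
       - ct.frequency_response(K_red, w).fresp)
print("max frequency-response error:", np.abs(err).max())
```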
Simulation Tests Results
In order to validate the designed controller, a set of test cases was developed. Below, an experiment corresponding to a 45-degree heading angle step response is shown. The results allow checking the performance of the aircraft in a noisy environment along this type of manoeuvre. The desired reference for the airplane is illustrated in Figure 11, where the dashed line is the desired trajectory. Figure 12 shows the trajectory tracked in simulation; the dashed line is the desired trajectory and the continuous line is the real one. The controller is able to manage the output adequately and to compute the control vector. The evolutions of the control variables are shown in Figure 13. The throttle varies by around 2%, and the elevator, ailerons, and rudder present smooth behaviour. The aileron and rudder are deflected by the controller to command the 45-degree change of direction. Immediately, the aircraft suffers a loss of lift typical of this type of manoeuvre. To compensate for this tendency, the elevator acts to raise the nose of the aircraft and the throttle increases slightly to maintain the velocity. Figure 13 confirms that the control variables remain far from their saturation values: the power demand is less than 40%, and the demanded elevator, aileron, and rudder deflections are less than 5 degrees. In this case, if the altitude holder is not connected, in 5 s the airplane suffers an altitude loss of 3 m, and then rapidly recovers the desired altitude in about 5 s more. The UAV quickly corrects its heading angle, turning to reduce the error. In about 4.5 s the error is null; however, the airplane continues turning. This is caused by the lateral deviation: if the airplane stopped its turning movement at 4.5 s, it would continue straight ahead along a line parallel to the desired trajectory. To reduce the lateral deviation, it must continue turning and, in a first stage, augmenting the heading angle error. Following this strategy, the controller achieves both its heading angle tracking and its lateral deviation reduction goals.

Flight Test Results
To test the whole system and the performance of the controller in flight, many real tests were carried out. These tests are scheduled to validate, in essence, the physical design of the UAV, the communications equipment, the engine capabilities, and the onboard software. A very important part of the onboard software is the flight control system. To manage the UAV platform, a ground station was developed (see Figure 14). It allows displaying the main variables of the UAV, which are sent through a radio link, and it allows introducing a set of waypoints. The autopilot takes care of both navigation and stability of the plane. The mission is planned via waypoints, placing the position of each waypoint on a geo-referenced map at the beginning of the mission. This mission can easily be modified during its execution by adding, changing, or removing waypoints on the map. The system provides a user-friendly interface used to display the plane position in real time on a map during the mission and to monitor UAV parameters such as battery levels, speed, position and orientation, and the sensor measurements. The system also provides a radio link which allows a continuous exchange of data between the plane and the control station. In an emergency, the aircraft can switch to a PIL (pilot in the loop) mode, in which the plane can be teleoperated from the control station using a control stick while the onboard autopilot remains in sleep mode.

The test selected to illustrate the aptitudes of the designed autopilot is a circuit formed by four waypoints, shown in Figure 15. The tracking reference trajectory is shaped by the waypoints labelled from one to four. The circles around the waypoints determine the instant when the reference input changes to the next waypoint (goal condition); a sketch of this switching rule is given below. The reference is provided to the autopilot as a heading (psi) angle function and is built smoothly using a combination of a step and a ramp; it is shown in Figure 16. Figure 17 shows how the UAV is capable of adequately managing the uncertainties and disturbances introduced by the modelling inaccuracies and the noisy output provided by the sensors. The response of the aircraft does not oscillate, and it reaches the correct trajectory quickly, after covering approximately 600 m, which means 20 s at a mean velocity of 30 m s−1. The entire trajectory takes around 160 s, and the psi angle and lateral deviation errors are minimized satisfactorily. The desired decoupling between lateral and longitudinal dynamics is achieved. Figure 18 shows the noisy acceleration outputs provided by the inertial sensors to the controller. Figure 19 shows the onboard equipment mounted on the UAV, and Figure 20 shows the UAV during the test cases.
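The waypoint goal condition described above (advance to the next waypoint once the aircraft enters the circle around the current one) can be sketched as follows; the acceptance radius and coordinates are assumed values, not figures given in the paper:

```python
import math

ACCEPT_RADIUS = 50.0  # m, assumed goal-condition circle around each waypoint

def next_waypoint_index(pos, waypoints, idx):
    """Advance the active waypoint when the UAV enters its acceptance circle."""
    wx, wy = waypoints[idx]
    if math.hypot(pos[0] - wx, pos[1] - wy) < ACCEPT_RADIUS:
        return (idx + 1) % len(waypoints)   # loop the four-waypoint circuit
    return idx

# Example: a square circuit of four waypoints (coordinates are made up).
wps = [(0.0, 0.0), (600.0, 0.0), (600.0, 600.0), (0.0, 600.0)]
print(next_waypoint_index((590.0, 10.0), wps, idx=1))  # -> 2, circle entered
```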
The reference vector used, formed by vertical velocity, true airspeed, and heading angle, suggests a nontraditional way to pilot the aircraft: the pilot commands the desired reference vector and lets the controller select the throttle position and surface deflections. This kind of pilot-machine interaction appears to be a more intuitive approach. The frequency-domain analyses show that the proposed controller guarantees good performance, attenuating high-frequency noise while supplying suitable control signals. The tracking performance of the UAV is within the desired range, and the control efforts during soft manoeuvres are likewise moderate. The first results obtained with the real UAV and the controller designed proved very satisfactory. The desired behaviour is introduced using a matching model. This architecture allows the desired performance to be modified without changing the controller architecture. The architecture selected to decouple the longitudinal and lateral dynamics provides very good performance. The outer-loop controller gives very good behaviour for step responses, ramp responses, and combinations of these two input types. It is important to note that the outer loop shows a signal derivative at the input, which should be avoided; this signal derivative is not part of the real implementation. In this case, the inputs of the outer loop are provided directly by the GPS (height and vertical velocity). The specifications and robustness performance have been validated by means of simulation and real tests.
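As an illustration of the iterative controller-reduction loop described in this paper, the following is a minimal Python sketch using the python-control library. It uses balanced truncation (balred) as a stand-in for the Hankel minimum degree approximation actually used here, and a random 46-state system (control.rss) as a placeholder for the synthesized H∞ controller; the loop, tolerance, and order range are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import control

# Placeholder for the 46-state H-infinity controller (the real controller
# would come from the H-infinity synthesis step, not reproduced here).
full = control.rss(46, 1, 1)

t = np.linspace(0, 10, 500)
_, y_full = control.step_response(full, t)

# Reduce the order iteratively, checking the time response at every step,
# mirroring the procedure described in the text (frequency-response checks
# would be added inside the same loop).
for order in range(45, 20, -1):
    reduced = control.balred(full, order)          # balanced truncation
    _, y_red = control.step_response(reduced, t)
    if np.max(np.abs(y_red - y_full)) > 0.05 * np.max(np.abs(y_full)):
        print(f"specifications lost below order {order + 1}")
        break
```

In the paper, the incorrectly forced reduction of Figure 10 corresponds to continuing such a loop past the point where the tolerance check fails.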
Endoscopic cystogastrostomy versus surgical cystogastrostomy in the management of acute pancreatic pseudocysts Background: Studies comparing surgical versus endoscopic drainage of pseudocysts customarily include patients with both acute and chronic pseudocysts, and the endoscopic modalities used for drainage are protean. We compared the outcomes following endoscopic cystogastrostomy (ECG) and surgical cystogastrostomy (SCG) in patients with acute pseudocysts. Methods: Seventy-three patients with acute pseudocysts requiring drainage from 2011 to 2014 were analysed (18 patients excluded: transpapillary drainage n = 15; cystojejunostomy n = 3). The remaining 55 patients were divided into two groups, ECG n = 35 and SCG n = 20, and their outcomes (technical success, successful drainage, complication rate and hospital stay) were compared. Results: The technical success (31/35 [89%] vs. 20/20 [100%], P = 0.28), complication rate (10/35 [28.6%] vs. 2/20 [10%]; P = 0.17) and median hospital stay (6.5 days [range 2–12] vs. 5 days [range 3–12]; P = 0.22) were comparable in the two groups, except successful drainage, which was higher in the surgical group (27/35 [78%] vs. 20/20 [100%], P = 0.04). The conversion rate to a surgical procedure was 17%. The location of the cyst towards the tail of the pancreas and the presence of necrosis were the main causes of technical failure and of failure of successful endoscopic drainage, respectively. Conclusion: Surgical drainage albeit remains the gold standard for the management of pseudocyst drainage; endoscopic drainage should be considered a first-line treatment in patients with acute pseudocysts considering its reasonably good success rate. The revised classification of acute pancreatitis differentiates collections with minimal necrosis (previously included as pseudocysts but now considered walled-off necrosis [WON]) from pseudocysts. [1] Only five studies, including a randomised trial, comparing endoscopic with surgical drainage have been reported to date. [2][3][4][5][6] Furthermore, these studies have included pseudocysts secondary to both acute and chronic pancreatitis, with variable information about the amount of necrosis present. Endoscopic retrograde cholangiopancreatography (ERCP) was used to document the site of leak in some of these studies, while the morbidity and mortality related to ERCP have not been accounted for. Endoscopic cystogastrostomy (ECG), a less invasive procedure usually done under sedation, may be considered first-line therapy for pseudocysts if it is shown to be non-inferior to surgical drainage. Accordingly, we retrospectively compared the outcomes of endoscopic and surgical drainage in patients with acute pseudocysts. This study is likely to be archetypal and to form a basis for further studies on this comparison. METHODS The present study was carried out in the Department of Gastrointestinal Surgery and Gastroenterology at Govind Ballabh Pant Institute of Post Graduate Medical Education and Research, New Delhi, India. It is a retrospective study from a prospectively maintained departmental database. The records of all patients who underwent cystogastric or cystoduodenal drainage, either surgically or endoscopically, for pseudocysts from January 2011 to June 2014 were reviewed. As per our institutional protocol, all patients were evaluated by both a surgeon and a gastroenterologist. Those having a pseudocyst in the vicinity of the stomach or duodenum were evaluated as potential candidates for cystogastrostomy/cystoduodenostomy.
Endoscopic drainage was attempted first in those with a good impression on either the stomach or the duodenum and without any gastropathy/gastric varices, while those with subtle impressions, or no impression but a cyst adjacent to the stomach or duodenum, were offered SCG only. All patients who eventually underwent endoscopic or surgical drainage were included in the study. Clinical, imaging, endoscopic, and surgical data were collated and entered. Definition An acute pancreatic pseudocyst is defined as an encapsulated collection of fluid with a well-defined inflammatory wall, usually outside the pancreas, with minimal or no necrosis, which usually occurs more than 4 weeks after the onset of acute pancreatitis. [1,7] Workup All patients were evaluated with standard haematological and biochemical investigations along with imaging studies. Contrast-enhanced computed tomography of the abdomen was done in all cases. Magnetic resonance imaging (MRI) of the abdomen was included in the latter half of the study to differentiate a simple pseudocyst from a pseudocyst with minimal necrosis or WON. Procedure ECG was performed by senior gastroenterologists under fluoroscopy. Under conscious sedation, after administering a pre-procedural antibiotic, the puncture was made with a needle-knife papillotome at the most prominent impression site seen on endoscopy [Figure 1]. A gush of fluid confirmed successful puncture, following which a 0.035-inch guide wire was passed into the cyst cavity. The tract was dilated with a 10–14 mm balloon and a double-pigtail plastic stent was placed in the tract. The patients were allowed oral intake 12–24 h after the procedure. In surgical cystogastrostomy (SCG), an anterior gastrotomy was performed. The pseudocyst contents were aspirated through the posterior gastric wall to confirm the cyst position. A posterior gastrostomy was made to create a communication between the pseudocyst and the stomach, and a part of the pseudocyst wall was routinely taken for biopsy. A running interlocking suture was used for haemostasis and to maintain apposition of the pseudocyst wall to the posterior wall of the stomach. The anterior gastrotomy was then closed. The nasogastric tube was removed on postoperative day 1, and oral liquids were started and progressed to a soft diet. Follow-up After ECG, patients were followed up at 4 weeks and then at 8–12 weeks with ultrasonography of the abdomen to rule out a residual collection. The stent was removed after 2–3 months if investigations revealed no residual collection in asymptomatic patients. Subsequently, patients were followed up in a similar fashion at 6 months and 1 year. After SCG, patients were followed up at 4 weeks, 3 months, 6 months and 1 year with a haemogram, liver function tests and ultrasonography of the abdomen. Baseline characteristics The baseline characteristics, including age, sex, aetiology of pancreatitis, size and number of cysts, haemoglobin, total leucocyte count, serum albumin, presence of portal hypertension and presence of necrosis, were compared between the two groups. Outcome parameters The outcome parameters analysed include technical success, successful drainage, length of hospital stay and occurrence of complications. Technical success was defined as the ability to access the cyst, irrespective of whether successful drainage took place. Successful drainage was defined as complete resolution or a decrease in the size of the collection on ultrasound, along with abatement of symptoms, after the first intervention.
Statistical analysis Continuous data were compared using a two-sample t-test or the Wilcoxon rank-sum test. Categorical data were expressed as frequencies and percentages and were compared using the Chi-square or the Fisher exact test. Statistical significance was set at P ≤ 0.05. All statistical analyses were performed using GraphPad Prism version 7 (GraphPad Software Inc., La Jolla, CA, USA) and SPSS for Windows 22.1 software (SPSS, Chicago, Illinois, USA). RESULTS Seventy-three patients (ECG n = 50, SCG n = 23) with pseudocysts underwent drainage during the study period. Eighteen patients were excluded for various reasons, as shown in Figure 2. A total of 55 patients diagnosed with acute pseudocysts were included in the study. Of these 55 patients, 35 underwent ECG while 20 underwent surgical drainage. The median time to intervention was 3 (range 1.5–24) months in the endoscopic group and 4 (range 2–12) months in the surgical group. The wall thickness of the cyst was <10 mm in patients undergoing endoscopic drainage. Patients in the two groups were comparable with respect to age, sex, type and number of pseudocysts, aetiology of pancreatitis and presence of portal hypertension [Table 1]. The aetiology of pancreatitis was alcohol, gallstone, idiopathic and traumatic in 12, 6, 10 and 7 patients in the ECG group and in 9, 7, 2 and 2 patients in the surgical group, respectively. This difference was not statistically significant (P = 0.18). Outcomes Technical success was achieved in 31 out of 35 (89%) patients in the ECG group and in 20 out of 20 (100%) in the SCG group (P = 0.28) [Table 2]. The cyst was located towards the tail region in two of the four patients with technical failure. Of the four patients who had technical failure, three had inadvertent gastric perforation, treated by immediate surgical repair of the perforation with external drainage of the pseudocyst (n = 2) or cystojejunostomy (n = 1). One patient, who had failure due to slippage of the guide wire, underwent emergency cystogastrostomy. Successful drainage was significantly higher in the surgical group than in the endoscopic group (20/20 [100%] vs. 27/35 [78%]; P = 0.04), whereas the median hospital stay was comparable (P = 0.22). There was no mortality in either group. The complication rate was not statistically different between the two groups (P = 0.17) [Table 2]. Two patients in the surgical group had surgical site infections. The overall failure rate of the ECG group was 22%, compared to none in the surgical group (P = 0.04). Overall, 6/35 patients in the ECG group required surgical intervention (technical failure, n = 4; failure of drainage, n = 2); therefore, the surgical conversion rate was 17% [Table 3]. Six patients in the surgical group underwent additional procedures (cholecystectomy, n = 4; ligation of an incidental pseudoaneurysm, n = 2). At a median follow-up of 24 months, none of the patients had recurrence of the pseudocyst. DISCUSSION Although the first successful ECG was performed by Khawaja and Goldmann and by Kozarek in 1983, it was Beckingham et al. [8] who, in a comprehensive review in 1997, reported on endoscopic drainage of pseudocysts. They concluded that ECG provides a minimally invasive approach to pseudocyst management, with success and recurrence rates similar to surgical drainage and with lower procedure/anaesthesia-related morbidity. They found that pseudocysts with a wall thickness <1 cm bulging into the stomach/duodenum, and those communicating with the main pancreatic duct, were suitable for endoscopic treatment. Many patients had pseudocysts secondary to chronic pancreatitis.
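Returning briefly to the headline comparison in the results above (successful drainage in 27/35 ECG vs. 20/20 SCG patients), the following minimal Python sketch reproduces the Fisher exact test; the table values are taken from the reported results, while the script itself is only illustrative and not the authors' SPSS/GraphPad workflow.

```python
from scipy.stats import fisher_exact

# 2x2 table of successful drainage after the first intervention:
# rows = [ECG, SCG], columns = [success, failure]
table = [[27, 8],   # endoscopic: 27/35 successful
         [20, 0]]   # surgical: 20/20 successful
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher exact P = {p_value:.3f}")  # ~0.04, matching the reported P = 0.04
```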
A head-to-head comparison of endoscopic versus surgical drainage of pseudocysts secondary to chronic pancreatitis is difficult, as the ductal changes in these patients may also necessitate treatment along with the pseudocyst. Surgical management is considered superior and more definitive than endoscopic treatment in terms of pain relief and recurrence rate in such patients. Recent management of pancreatic pseudocysts relies on differentiating acute from chronic pancreatitis and the associated duct abnormalities. [9,10] Therefore, we compared patients with acute pseudocysts in proximity to the stomach or duodenum, which were amenable to either type of drainage, as the duct is often normal in this group of patients. The meta-analysis [11] comparing the outcomes of endoscopic and surgical drainage for pseudocysts included only five published studies. Of these, three are retrospective and two are prospective studies. All except the study by Melman et al. [3] showed comparable outcomes for surgical and endoscopic cyst drainage. Although this analysis recommends endoscopic drainage as the first-line approach for pseudocysts, more than half of the patients had chronic pseudocysts. In our series, more patients underwent endoscopic drainage than surgical drainage (~2:1), similar to many of the other series reported. In contrast to the study by Varadarajulu et al., [2] in which more patients had pseudocysts in the setting of chronic pancreatitis, we included patients with acute pseudocysts alone. The patients in the randomised controlled trial (RCT) needed endoscopic retrograde pancreatography before the intervention for the management of pancreatic duct structural changes due to chronic pancreatitis. In our study, we did not perform a pancreatogram or pancreatic stenting, as we had excluded patients with pseudocysts complicating chronic pancreatitis. It is important to note that patients in the surgical group had significantly larger pseudocysts, with a higher incidence of necrotic debris within the cyst and a raised leucocyte count. Nonetheless, the drainage was successful in all these patients, with uneventful recovery and without any increase in the complication rate. Cystogastrostomy was initially performed via the open method; however, a laparoscopic approach was used in recent cases, with an overall complication rate of ~10%, suggesting that surgical drainage is a safe option. Johnson et al. [4] also found endoscopic drainage comparable to surgical drainage in their retrospective analysis of pseudocyst management. However, half of their patients had chronic pancreatitis, and more than 50% of the patients underwent surgical procedures other than pseudocyst drainage. There was no statistically significant difference in technical success or successful drainage between the two groups in our study, but the overall success was significantly higher in the surgical group (20/20 vs. 27/35; P = 0.04). There were four cases of technical failure in the ECG group (three gastric perforations and one case of a slipped guide wire). Evaluation of the patients who had technical failure revealed that two pseudocysts were located in the pancreatic tail region and the other two had a less prominent endoscopic impression. Endoscopic ultrasonography to guide puncture and drainage, which was not used in most cases of our study, might improve the success rate in such situations.
All patients with technical failure underwent immediate surgical intervention and had an uneventful recovery, indicating that surgery should be undertaken as early as possible in such situations. In another comparative analysis of pseudocyst drainage, Sandulescu et al. [12] reported a success rate of 77% (10/13) using the endoscopic technique. Bleeding at the puncture site, a thick pseudocyst wall and thick contents were the causes of failure in the remaining three patients. Inadequate drainage can be attributed to the presence of necrotic debris, an inadequate cystogastrostomy opening, slippage of stents and the presence of multiple loculations. In our series, the main reason for unsuccessful endoscopic drainage was the presence of necrosis. Four patients who developed sepsis due to inadequate drainage had evidence of necrotic debris on preoperative imaging. Two of them underwent surgical drainage, while the other two could be managed with percutaneous drainage of the collection. Complications after endoscopic drainage can be life-threatening if not managed appropriately. The use of self-expanding metallic stents may further decrease the incidence of such complications, although this is a moot point: a recent meta-analysis suggests no difference in the efficacy of plastic versus metal stents for transmural drainage of pancreatic fluid collections. [11] In the only RCT [2] to date, the length of hospital stay was significantly shorter in the endoscopic group, which is at variance with our observation; our patients were kept in the hospital after ECG for a longer duration, as many of them were from far-flung areas. The location of the cyst towards the tail and the absence of an endoscopic impression are predictors of technical failure, while the presence of necrosis is the main predictor of failure of successful drainage in our study. While managing pseudocysts with the above features, one should have a low threshold for surgical drainage. The need for additional procedures, such as cholecystectomy, or a pseudoaneurysm requiring surgical intervention are other possible indications for surgical drainage. There are a few limitations to this study. First, being a retrospective analysis of a prospectively maintained database, there could be an element of selection bias. Second, MRI was performed only in the latter half of the study. This could have affected the failure rates of ECG, although two of the four patients with failed drainage had an MRI before the procedure. The transmural route of drainage alone was used in both techniques, avoiding any kind of bias. Despite these drawbacks, we believe the study is very useful; to our knowledge, it is the first study to compare drainage in patients with acute pseudocysts alone. CONCLUSION The present study shows that ECG is a viable option as first-line management in patients with acute pseudocysts and should be exercised in institutions having good surgical backup. The technical success, as well as the successful accomplishment of drainage of the cyst cavity, was acceptable though lower than with surgery; however, there were far more complications in the ECG group, leading to a higher but acceptable surgical conversion rate. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
Understanding the COVID-19 Pandemic in Nursing Homes (Aragón, Spain): Sociodemographic and Clinical Factors Associated With Hospitalization and Mortality Old people residing in nursing homes have been a vulnerable group during the coronavirus disease 2019 (COVID-19) pandemic, with high rates of infection and death. Our objective was to describe the profile of institutionalized patients with a confirmed COVID-19 infection and the socioeconomic and morbidity factors associated with hospitalization and death. We conducted a retrospective cohort study including data from subjects aged 65 years or older residing in a nursing home with a confirmed COVID-19 infection from March 2020 to March 2021 (4,632 individuals) in Aragón (Spain). We analyzed their sociodemographic and clinical profiles and the factors related to hospitalization and mortality at 7, 30, and 90 days after COVID-19 diagnosis using logistic regression analyses. We found that the risk of hospitalization and mortality varied according to sociodemographic and morbidity profile. There were inequalities in hospitalization by socioeconomic status and gender: patients with low contributory pensions and women had a lower risk of hospitalization. Diabetes mellitus, heart failure, and chronic kidney disease were associated with a higher risk of hospitalization. In contrast, people with dementia showed the highest risk of mortality with no hospitalization. Patient-specific factors must be considered to develop equitable and effective measures in nursing homes, to be prepared for future health threats. INTRODUCTION In March 2020, the coronavirus disease 2019 (COVID-19) outbreak that began in China was declared a global pandemic (1). Since then, according to the World Health Organization COVID-19 Dashboard (2), more than 315 million confirmed cases had been diagnosed worldwide by January 2022. In Spain, almost 8 million cases have been declared and more than 90,000 people have died (3) in an unprecedented public health crisis. One of the facts that the pandemic has brought to light is its greater impact on vulnerable groups. Inequalities have been observed in the risk of COVID-19 disease, with a higher risk of infection in groups with worse socioeconomic conditions. COVID-19 infection has shown a socioeconomic gradient, which has been linked to the type of job, lower health literacy, and higher exposure rates, among other factors (4)(5)(6)(7). This vulnerability has also been associated with the area of residence, due to household crowding and the existence of chronic stressors (8,9), and individual and area vulnerability mutually potentiate each other (10). These differences are not limited to the risk of infection but also extend to the diagnosis of the disease and the medical attention received by these patients. Access to diagnostic tests (11) and to healthcare (12) seems to be worse for people living in deprived areas, even in universal healthcare systems. This may result in poorer care for the most vulnerable groups, amplifying existing inequalities. The elderly population has been the most affected by the COVID-19 pandemic, especially in terms of mortality. Among the elderly, institutionalized people residing in nursing homes have been a particularly vulnerable group, showing high rates of infection and death in the first months of the pandemic, before the appearance of vaccines (13).
The greater impact of the COVID-19 pandemic on this group has been associated with both physical and psychological vulnerability, as well as with the living conditions related to residing in an institution (14,15). In Spain, this has been particularly serious, as it is an aging country, with an aging index in 2020 of 125.75% (125 people aged over 64 years for every 100 aged under 16 years) (16). In addition, more than 300,000 elderly people live in nursing homes (17), where the effect of the pandemic was devastating: it is estimated that, during the first wave alone, around 20,000 institutionalized people died, and the mortality rate for elderly people living in long-term care (LTC) facilities was 6% (18). These high mortality rates have been associated with high levels of community transmission and deficient nursing-home-related policy responses (19). When analyzing COVID-19 mortality in nursing homes, factors such as the patients' complex chronic conditions and the location or capacity of the center have been analyzed (20,21). Nonetheless, other aspects, such as the determinants of hospital admission or the existence of socioeconomic inequalities, are still unknown. Therefore, gaining a broad view of the factors involved in mortality and in the healthcare received by these patients is an unavoidable task to prevent its recurrence. To this end, the objective of this study was to describe the profile of institutionalized patients with a confirmed COVID-19 infection in Aragón (Spain) and the socioeconomic and morbidity factors associated with hospitalization and death. METHODS Design, Information Sources, and Study Population This retrospective cohort study used data obtained from the Aragón-COVID-19 cohort. This is a health data collection of all individuals undergoing COVID-19 testing in the Spanish region of Aragón, an Autonomous Community in northeastern Spain with a high aging rate (21.7% of people over 64 years of age) (22). The Aragón-COVID-19 cohort includes information gathered from administrative health data sources as well as electronic health records of the Aragón Health Service. The people included in the cohort were tested either when they presented symptoms compatible with COVID-19 or when they had close contact with a confirmed case. All COVID-19 cases were confirmed by polymerase chain reaction (PCR) or COVID-19 antigen testing. Individuals were included in the cohort from 9 March 2020, the first epidemiological week with COVID-19 cases reported in Aragón, to 14 March 2021, the end of the fourth wave in Aragón. By this date, 103,281 people were confirmed COVID-19 cases. For this study, we selected subjects aged 65 years or older residing in a nursing home with a confirmed COVID-19 infection. This information was obtained from the Aragón health service user database (BDU) (Figure 1). The research protocol of this study was approved by the Clinical Research Ethics Committee of Aragón (CEICA) (PI20/184). Variables of the Study We considered the sociodemographic and clinical information of all the institutionalized individuals in the Aragón-COVID-19 cohort with a confirmed COVID-19 infection. Regarding sociodemographic characteristics, we considered sex, age (65–79 years; ≥80 years of age), and socioeconomic level. The socioeconomic level was calculated on the basis of pharmacy copayment levels and social security benefits received, according to the type of user of the Aragón health service.
From the combination of these two variables, five mutually exclusive categories were obtained for institutionalized patients: individuals with a contributory pension < €18,000 per year; individuals with a contributory pension ≥ €18,000 per year; individuals affiliated with the mutual insurance system for civil servants; individuals receiving free medicines (people with minimum integration income or who no longer receive unemployment allowance); and other situations not previously considered. Information related to the patients' clinical status was obtained from the morbidity-adjusted groups (GMA) (23). This source of information considers all medical diagnoses available in primary healthcare and hospitalization (hospital discharge records (CMBD) and the emergency service). We considered GMA information from January 2020 in order to know the health status of the individuals prior to COVID-19 diagnosis. The variables analyzed from the GMA were complexity weight (obtained from the aggregation of the patient's different diagnoses); number of chronic morbidities; and the existence of a medical diagnosis of diabetes mellitus, obesity, hypertension, stroke, ischemic heart disease, heart failure, chronic obstructive pulmonary disease (COPD), chronic kidney disease, depression, or dementia. These medical diagnoses were selected due to their high prevalence in this age group. The outcomes evaluated in patients with a confirmed COVID-19 infection were hospitalization and all-cause mortality. Only hospitalizations occurring within 14 days before or after COVID-19 diagnosis were considered in the study. In addition, since the cause of death was not available, we considered mortality from 3 days before diagnosis (as some patients died before the test results were obtained) to 90 days after. Both variables were obtained from the basic minimum dataset of hospital discharge (CMBDH) of Aragón. Analyses First, we described the sociodemographic and clinical characteristics of all individuals over 64 years of age living in a nursing home in Aragón with a confirmed diagnosis of COVID-19. In addition, a description of the sociodemographic and clinical profiles of the patients according to hospitalization and mortality was conducted. To evaluate possible differences in the factors associated with mortality, this outcome was categorized into three different categories, namely mortality at 7, 30, and 90 days after diagnosis. Categorical variables were described by percentages. Complexity weight and number of diagnoses had a non-normal distribution, so medians and interquartile ranges were used to describe these variables. Statistical differences between categories were assessed using chi-square and Mann-Whitney U-tests. To find out which sociodemographic and clinical characteristics were associated with the risk of hospitalization and death in institutionalized patients, univariate and multivariate explanatory logistic regression models were fitted; a minimal sketch of this type of model is shown after this section. These models were adjusted for the available variables that have been associated with hospitalization and death in the literature. All analyses were performed using the R Statistical Software (the R Foundation for Statistical Computing, Vienna, Austria). RESULTS We identified 4,632 people aged 65 years or older residing in a nursing home with a confirmed COVID-19 infection in Aragón from March 2020 to March 2021.
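The following is a minimal Python sketch of the kind of explanatory logistic model described in the Analyses subsection. The authors used R; statsmodels is used here instead, the variable names are illustrative placeholders rather than the cohort's actual column names, and synthetic data stands in for the non-public cohort data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the cohort (the real data are available on request).
rng = np.random.default_rng(0)
n = 4632
df = pd.DataFrame({
    "hospitalized":  rng.integers(0, 2, n),   # outcome: COVID-19 hospitalization
    "female":        rng.integers(0, 2, n),   # sex
    "age80plus":     rng.integers(0, 2, n),   # >=80 years vs. 65-79 years
    "pension_high":  rng.integers(0, 2, n),   # contributory pension >= 18,000 EUR/year
    "n_chronic":     rng.poisson(4, n),       # number of chronic diagnoses
    "heart_failure": rng.integers(0, 2, n),   # example chronic diagnosis
})

# Multivariate explanatory logistic regression, as in the paper.
model = smf.logit(
    "hospitalized ~ female + age80plus + pension_high + n_chronic + heart_failure",
    data=df,
).fit(disp=0)
print(np.exp(model.params))  # odds ratios, interpreted as in Table 4
```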
The description of the subjects included in the study according to their socioeconomic and clinical conditions, and their differences by sex, can be consulted online in Supplementary Table S1. They were mainly over 80 years of age (Table 1). Hospitalization was slightly more frequent in men than in women (p < 0.001) and in people with a contributory pension of €18,000 per year or more (p < 0.001). Regarding clinical diagnoses, people residing in a nursing home with a diagnosis of diabetes mellitus (DM), ischemic heart disease, heart failure, COPD, or chronic kidney disease showed a higher frequency of hospitalization. No statistical differences were observed by age group or for a diagnosis of obesity, stroke, hypertension, depression, or dementia. In 145 individuals (109 women and 36 men), no previous morbidity was recorded. We evaluated mortality within 7, 30, and 90 days after COVID-19 diagnosis. Table 2 presents the socioeconomic and clinical profiles of both deceased and surviving patients for each cutoff point. A total of 1,458 people aged 65 years or older residing in a nursing home with a confirmed COVID-19 infection in Aragón died from all causes within 90 days of COVID-19 diagnosis (31.5%). Mortality in men and in people aged 80 years or older was higher for all three time intervals considered. Differences by socioeconomic status were observed at 30 and 90 days. Regarding morbidity, mortality increased in people with a high number of diseases and with high complexity for all the time intervals evaluated. Mortality was higher at all three time points for people with heart failure and chronic kidney disease. A higher risk of death at 30 and 90 days after COVID-19 diagnosis was also observed in people with ischemic heart disease, COPD, and dementia. In contrast, people with obesity showed lower mortality at 90 days (p = 0.045). We analyzed those COVID-19 confirmed institutionalized patients who died within 90 days after diagnosis and their probability of having been hospitalized for COVID-19 (Table 3). Of the 1,458 patients who died, 523 (35.8%) were not hospitalized for COVID-19. Differences in hospitalization were observed according to sex: women who died showed a lower prevalence of hospitalization than men (p < 0.001). People who died with a high number of chronic diseases, diabetes mellitus or heart failure were more frequently hospitalized. However, people who died with dementia showed a lower probability of hospitalization (p < 0.001). We conducted multivariate models to analyze the factors associated with the risk of hospitalization for COVID-19 and death at 7, 30, and 90 days in our population (Table 4). Women showed a lower risk of hospitalization and death than men. The risk of hospitalization and death was also higher in people aged 80 years or older than in those aged 65–79 years. Regarding socioeconomic status, people with a contributory pension of €18,000 or more showed a higher risk of hospitalization than those with low contributory pensions (odds ratio (OR): 1.24; 95% CI 1.04–1.48). No differences were observed for death. Finally, the number of chronic diagnoses was associated with a higher risk of hospitalization and of death at 7 and 90 days. High complexity was only associated with a higher risk of death at 30 days (p = 0.004). We observed differences in the risk of hospitalization and mortality according to chronic morbidity. The existence of DM, heart failure, and chronic kidney disease was associated with a higher risk of hospitalization (Figure 2).
Regarding mortality, heart failure was associated with a higher risk of mortality at all the cutoff points considered (OR: 1.62; 95% CI 1.20–2.15 at 7 days). Other diagnoses associated with a higher risk of mortality at 90 days were chronic kidney disease (OR: 1.24; 95% CI 1.08–1.42) and dementia (OR: 1.28; 95% CI 1.12–1.46) (Figure 3). When we analyzed the risk of hospitalization in those who died of any cause within 90 days, multivariate analyses showed that the risk of hospitalization was lower in women than in men (OR: 0.67; 95% CI 0.53–0.84). An increasing number of diseases was associated with a higher risk of hospitalization (OR: 1.07; 95% CI 1.03–1.11). No differences were observed by age, complexity, or socioeconomic position. We also observed differences in hospitalization among patients who died according to chronic morbidity. Diagnoses of DM, obesity, and heart failure were associated with a higher risk of hospitalization. In contrast, a diagnosis of dementia was associated with a lower risk of hospitalization (OR: 0.64; 95% CI 0.51–0.80) (Figure 4). DISCUSSION In Aragón, 38.3% of COVID-19 confirmed patients over 64 years of age residing in a nursing home were hospitalized. The risk of hospitalization varied according to sociodemographic and morbidity profiles: it was higher in men and in older people. Those with a contributory pension equal to or greater than €18,000 per year showed a slightly higher risk of hospitalization than those with lower pensions. People with a diagnosis of DM, heart failure, or chronic kidney disease also showed a higher risk of hospitalization. Of all COVID-19 confirmed patients residing in a nursing home, 31.5% died within 90 days of COVID-19 diagnosis. Mortality was higher in men and in older patients. Heart failure was the diagnosis showing the strongest association with the risk of death. Finally, 35.8% of the residents with a confirmed COVID-19 diagnosis who died had not been hospitalized. Hospitalization in those patients who died was positively associated with being a man and having a diagnosis of DM, obesity, or heart failure. In contrast, patients with dementia showed a higher risk of mortality without hospitalization. COVID-19 has had a devastating impact on old people residing in nursing homes. In Aragón, almost 40% of these patients required hospitalization and one in three died. Some personal factors have been associated with the vulnerability of these subjects: the existence of frailty (24), a low Barthel index, and a high prevalence of comorbidities (25,26). In addition, organizational factors have been involved in this equation. The large number of beds in many LTC facilities, very low staffing ratios, a shortage of qualified professionals, and deficient coordination between social and health services (27) are some of the factors that can explain the high impact of the COVID-19 pandemic in Spanish nursing homes. There is a relationship between the sociodemographic characteristics of patients with a confirmed COVID-19 diagnosis living in a nursing home and their risk of hospitalization and death. Men showed a higher risk of hospitalization than women, as well as a higher risk of death, after adjusting for age, socioeconomic position, number of chronic diseases, and complexity. This fact has already been widely described in the literature (28,29) and has been related to biological, psychosocial, and behavioral factors (30).
However, it is striking that, among those patients who died, women also had a lower risk of being hospitalized. Another study conducted in Spain on the general population (31) found that women presented different symptoms at disease onset, clinical outcomes, and treatment patterns, with differences in hospitalization and intensive care unit admission. Further research is required to explore the factors that could have conditioned this gender bias. We also found differences in hospitalization according to socioeconomic level, but not for mortality risk. Old patients living in a nursing home with a contributory pension equal to or higher than €18,000 per year had a higher risk of hospitalization than those with lower pensions, after taking into account age, sex, and morbidities. When we selected those people who died within 90 days of diagnosis, there were no differences in hospitalization by socioeconomic status, but differences existed when considering mortality at 7 days (OR: 4.5; 95% CI 1.9–12.6). Nonetheless, when analyzing the profile of those patients who survived, people with a contributory pension of €18,000 per year or higher had a higher risk of hospitalization (p = 0.046). The association between low socioeconomic status and a higher risk of hospitalization and death from COVID-19 in the general population has been described (12,32), but, to the best of our knowledge, this is the first study to assess the influence of individual socioeconomic status on the risk of hospitalization and death from COVID-19 in institutionalized patients. A poor individual socioeconomic level may reflect deficient conditions in the nursing home, which could result in poorer care for these patients, but also the existence of few social and support networks. Suffering from certain chronic diseases was associated with hospitalization and death. In this sense, patients with heart failure had the highest risk of hospitalization and death, after controlling for sex, age, and socioeconomic position. Our results are consistent with other studies, in which patients with underlying cardiovascular disease have an increased risk of mechanical ventilation and death from COVID-19 (33,34). COVID-19 infection in patients with heart failure seems to be associated with a significant risk of acute decompensation (35). Patients with heart failure have also shown an increased risk of COVID-19 infection due to reduced immunity, frailty, and a low hemodynamic ability to cope with severe infections (36). In contrast, people with dementia had the highest risk of mortality with no hospitalization. Lockdown and quarantine have had a high impact on patients with dementia living in nursing homes. Changes in their routines and physical inactivity led to a worsening of their functional and cognitive status (37) and increased stress in an already vulnerable population (38), resulting in a high risk of death from COVID-19 (39,40). Some of the reasons proposed to explain this fact were the advanced age of these patients and the existence of comorbidities. Nonetheless, in our study, a high risk of mortality at 90 days in people with dementia was observed even after adjusting for the presence of other comorbidities. Other authors have pointed to the presence of atypical symptoms of infection (41), namely the onset of hypoactive delirium and worsening functional status (42), as the cause of increased mortality in this group.
This atypical presentation could explain the lower risk of hospitalization observed in patients with dementia and COVID-19 who died. This study has several strengths. We analyzed all the individuals residing in a nursing home with a confirmed COVID-19 infection in the population of Aragón, including data from administrative health data sources and electronic health records. Clinical diagnoses were obtained from the GMA. This source of information combines diagnoses from primary healthcare and from hospital admissions, which makes it a high-sensitivity classification. Finally, we used a combination of two different socioeconomic indicators (pharmacy copayment levels and the type of user of the Aragón Health Service) to categorize the socioeconomic level of the individuals. The combination of these two variables has already been used in other analyses of health inequities at the population level (10,43) and provides a good picture of individual socioeconomic position. Nevertheless, this study has some limitations. There are limitations inherent to observational studies, such as data quality and cases with incomplete data. Second, neither the cause of death nor the cause of hospitalization was available. The cause of death could not be identified because the information from the Aragón-COVID-19 cohort could not be matched with the information available in the mortality registry. To address this issue, only deaths occurring up to 90 days after COVID-19 diagnosis were considered. A total of 481 institutionalized patients over 64 years of age died more than 90 days after COVID-19 diagnosis, with a median of 207 days. In addition, we only considered hospital admissions within 14 days before or after COVID-19 diagnosis, as the hospital discharge records (CMBD), where the cause of hospitalization is codified, were not available in the Aragón-COVID-19 cohort. In this case, 218 patients were hospitalized but did not fulfill our criteria, with a median of −21 days. Despite the possible bias, we consider that the established criteria allow us to define plausible ranges for identifying both death and hospitalization due to COVID-19. Finally, some of the patients who were not hospitalized could have been treated in one of the "COVID centers" set up in Aragón during the first waves of the pandemic. This information was not available for consideration. CONCLUSION Many challenges have been faced by nursing homes during this COVID-19 pandemic. The characteristics of their residents and the delay of the measures taken have had a devastating effect in terms of morbidity and mortality. In this study, we found gender and socioeconomic inequalities in the risk of hospitalization of these patients, as well as an increased risk of hospitalization and death for some diagnostic groups. LTC facilities must be prepared for future health threats, and this requires the appropriate implementation of geriatric interventions (44) and taking patient-specific factors into account, in order to develop equitable and effective measures. As we have observed in our analyses, patients with underlying cardiac pathologies may require special attention, given their potential severity. In contrast, people with dementia showed the highest risk of mortality with no hospitalization. In this group of patients, strict medical support and control (39) or the implementation of applications to promote interaction with family members (38) are necessary.
Finally, the professionals involved should be aware of the existence of gender and socioeconomic biases when assessing and caring for patients, in order to avoid adopting measures that contribute to increasing existing inequalities. DATA AVAILABILITY STATEMENT Aragón-COVID-19 data are available upon request to the IACS. Requests to access these datasets should be directed to https://www.iacs.es. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Clinical Research Ethics Committee of Aragón (CEICA). Written informed consent for participation was not required for this study, in accordance with the national legislation and the institutional requirements.
On a multi-scale approach to analyze the joint statistics of longitudinal and transverse increments experimentally in small scale turbulence We analyze the relationship between longitudinal and transverse increment statistics measured in isotropic small-scale turbulence. This is done by means of the theory of Markov processes, leading to a phenomenological Fokker-Planck equation for the two increments, from which a generalized Kármán equation is derived. We discuss the analysis in detail and show that the estimated equation can describe the statistics of the turbulent cascade. A remarkable result is that the main differences between longitudinal and transverse increments can be explained by a simple rescaling symmetry, namely that the cascade speed of the transverse increments is 1.5 times faster than that of the longitudinal increments. Small differences can be found in the skewness and in a higher-order intermittency term. The rescaling symmetry is compatible with the Kolmogorov constants and the Kármán equation and gives new insight into the use of extended self similarity (ESS) for transverse increments. Based on the results, we propose an extended self similarity for the transverse increments (ESST). I. INTRODUCTION Small-scale turbulence is not yet fully understood. A complete theory based on the Navier-Stokes equation has not been achieved yet, so our present understanding relies for the most part on phenomenological and experimental approaches. It is assumed that turbulence forms a universal state which exhibits stationarity, isotropy, and homogeneity in a statistical sense [25,44]. In general, turbulence is driven on large scales, i.e. energy is injected into large-scale motions and is dissipated on small scales, resulting in a net flux of energy from large to small scales [56]. The energy flux results from the inherent instability and the subsequent breakup of vortices into smaller ones. For locally isotropic turbulence the main challenge is to understand spatial correlations. Usually the turbulent field $\mathbf{U}(\mathbf{x}, t)$ is characterized by increments $u_r = \mathbf{e} \cdot [\mathbf{U}(\mathbf{x} + \mathbf{r}, t) - \mathbf{U}(\mathbf{x}, t)]$, where $\mathbf{e}$ denotes a unit vector in a certain direction, $\mathbf{x}$ denotes a reference point and $\mathbf{r}$ a displacement vector. The increments are taken as stochastic variables depending on the scale variable $r = |\mathbf{r}|$, and by varying $r$ correlations on different scales can be studied with the increments. In the following, we denote by $u_r$ the longitudinal increments, for which $\mathbf{e}$ is parallel to $\mathbf{r}$, and by $v_r$ the transverse increments, for which $\mathbf{e}$ is orthogonal to $\mathbf{r}$. For specific lengths $r_i$ we write $u_i$ and $v_i$ for short. The central challenge in turbulence is to explain the statistics of the exceptionally frequent occurrence of large velocity increments on small scales $r$, which cannot be understood with normal statistics. This is the so-called intermittency problem. The work of Kolmogorov [37,38] is still the foundation of small-scale turbulence theory. The starting hypothesis is that the possible symmetries of the Navier-Stokes equation are recovered in a statistical sense for high Reynolds numbers. These symmetries are homogeneity, i.e. the statistics of the increments is independent of the reference point $\mathbf{x}$; isotropy, i.e. the statistics does not change under rotation of the frame of reference; and stationarity, i.e. the statistics is not time dependent. A further hypothesis is scale invariance, i.e., loosely speaking, that structures of different sizes look similar.
Kolmogorov considered the statistics in terms of structure functions (moments of the velocity increments), for which scale invariance reads as $\langle u_r^n \rangle = (r/r_1)^{\xi_n} \langle u_{r_1}^n \rangle$, implying $\langle u_r^n \rangle \propto r^{\xi_n}$, for two different distances $r$ and $r_1$. Using this hypothesis, Kolmogorov furthermore assumed that for high Reynolds numbers the statistics of velocity increments depends only on the energy dissipation $\epsilon$ and the scale $r$, and obtained by dimensional arguments $\langle u_r^n \rangle = C_n \epsilon^{n/3} r^{n/3}$ for $\eta \ll r \ll L$. The constants $C_n$ denote the Kolmogorov constants, $\eta$ is the scale where the dissipation takes place and $L$, the integral length scale, is the largest scale of the flow. However, there are corrections due to the fluctuating dissipation energy, leading to deviations from the exponent $\xi_n = n/3$. This typical property of turbulence is another aspect of the above-mentioned intermittency. According to the refined self similarity hypothesis (RSH) of Kolmogorov, which takes into account the fluctuating energy dissipation $\epsilon_r$ averaged over a volume $V$ of extension $r$, $\epsilon_r = \frac{1}{V} \int_V \epsilon \, d\mathbf{x}$, the scale invariance is given by

$\langle u_r^n \rangle = c_n \langle \epsilon_r^{n/3} \rangle r^{n/3} \propto r^{\xi_n}$.   (1)

The first model for the exponents $\xi_n$ is Kolmogorov's log-normal model, which results in $\xi_n = n/3 - \mu n(n-3)/18$ [39]. If, instead of the structure functions, the probability density functions (pdf) $p(u_r)$ are considered, this non-trivial scaling behaviour corresponds to a change of the form of the pdfs with $r$. To explain intermittency, i.e. the non-trivial scaling behaviour, is still one of the prominent challenges in turbulence. At first, investigations concentrated only on the longitudinal increments. There was hope that the exponents $\xi_n$ could describe the self-similarity of every quantity of the velocity field. But recently, a lot of effort has been devoted to considering also the transverse increments, and they seem to have essentially different properties [1,3,4,9,10,11,13,14,15,16,18,27,28,30,31,33,34,35,40,41,46,47,48,50,58,63,65,69]. This implies that the models, as well as the considerations concerning only the longitudinal increments, are too specific, and a better understanding of turbulence has to include the transverse increments. Longitudinal and transverse increments belong to different geometric/kinematic structures in the flow. The longitudinal increments can be associated with strain-like structures, the transverse increments with vorticity-like structures. Thus it is natural to modify the RSH of equation (1) and to use the locally averaged enstrophy (squared vorticity) rather than the locally averaged dissipation for the transverse increments (refined similarity hypothesis for transverse increments, RSHT, see [11,15]): $\langle v_r^n \rangle = C_{t,n} \langle \Omega_r^{n/3} \rangle r^{n/3} \propto r^{\xi_{t,n}}$. If intermittency results from the fluctuating dissipation and enstrophy, then the deviations from the Kolmogorov 41 theory [37] can be investigated via the scaling of $\langle \epsilon_r^{n/3} \rangle$ and $\langle \Omega_r^{n/3} \rangle$, respectively. For infinitely high Reynolds numbers the scaling of averaged dissipation and enstrophy should be the same [46], but in many experiments and simulations one observes differences between the two exponents at finite Reynolds numbers [9,11,14,18,28,31,33,47,58,64,69]. At least four arguments are used to explain these observations: 1) anisotropy, 2) the effect of the Reynolds number, 3) the influence of boundary conditions, 4) intermittency. Anisotropy typically exists in every flow on large scales, and it can influence the exponents more strongly than intermittency itself [58,69].
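As a quick worked example, the exponents of the log-normal model just quoted can be evaluated numerically. The snippet below uses the intermittency parameter μ = 0.24 that is fitted to the longitudinal exponents later in this paper; the code itself is only illustrative.

```python
import numpy as np

# Log-normal model: xi_n = n/3 - mu * n * (n - 3) / 18
mu = 0.24
n = np.arange(1, 9)
xi = n / 3 - mu * n * (n - 3) / 18
for ni, xi_n in zip(n, xi):
    print(f"xi_{ni} = {xi_n:.3f}")
# e.g. xi_6 = 1.76, matching the measured value xi^l_6 = 1.76 +/- 0.04 reported below
```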
Decreasing the Reynolds number affects the scaling exponents in such a way that the differences between them increase [4,18,58,69]. The structure functions $\langle u_r^n \rangle$ and $\langle v_r^n \rangle$, as well as the scaling properties involved, cannot unambiguously define the turbulent field; for instance, flows with different structures may have the same scaling properties [64]. The same is true for the probability distributions $p(u_r)$ and $p(v_r)$, which are in essence equivalent to the structure functions. The reason is that these quantities are just two-point statistics. Definitely more general and detailed are the multi-point (or multi-scale) distributions $p(u_1, v_1, r_1; \ldots; u_n, v_n, r_n)$ (or multi-scale structure functions $\langle \prod_i u_i^{m_i} \prod_j v_j^{m_j} \rangle$) for different scales $r_i$. These probabilities also consider the joint statistics of longitudinal and transverse increments and thus also make it possible to describe the interaction between them. Furthermore, they describe the simultaneous occurrence of increments $u_i$, $v_i$ on several length scales $r_i$. In this paper we focus on an approach to characterize multi-scale statistics. It has been shown that it is possible to get access to the joint probability distribution $p(u_1, r_1; u_2, r_2; \ldots; u_n, r_n)$ via a Fokker-Planck equation estimated directly from measured data [20,21,43]. This has attracted interest and has been applied to different problems of the complexity of turbulence, like energy dissipation [32,42,45], the universality of turbulence [55], the theoretical derivation of the Fokker-Planck equation from the Navier-Stokes equation in the limit of high Reynolds numbers [17], and the analysis of stochastic time series [23,59,62]. The characteristic of this method is that it is based on pure data analysis, i.e. it is a parameter-free method, and that the few underlying assumptions are verifiable. Thus no special model ideas are interwoven with the procedure. In this article we extend this approach to analyze the joint statistics of longitudinal and transverse increments. A first result concerning the different speeds of the longitudinal and transverse cascades has been published in a preceding letter [60]. The aim of this article is to present the method used, and its extension to longitudinal and transverse increments, in detail, and to discuss the similarities and differences of longitudinal and transverse increments. We start with a short description of the concepts of Markov processes and define the notation. In section III we describe the experiment, and we characterize the data with two-point statistics (i.e. with one scale) in section IV. Next, we analyze longitudinal and transverse increments separately by means of Markov processes with respect to multi-point or multi-scale statistics. Then we go over to a combined analysis of both increments and discuss a new symmetry which connects both increments. An interpretation in section V and a conclusion finish the paper. II. THE MATHEMATICS OF MARKOV PROCESSES The basis of our argumentation, and of the analysis used, is the theory of Markov and diffusion processes. We therefore give a concise description of them, which also serves as a guideline for the analysis of the turbulent signal. The foundations have been known since Kolmogorov [36], but a detailed presentation can also be found in standard textbooks such as [26,29,57].
We restrict ourselves to the case of a two-dimensional stochastic variable, denoted by the stochastic vector $\mathbf{u}_r \equiv (u_r, v_r)$. Usually, a stochastic process is formulated in the time $t$, but one can also imagine different independent scalars; for our purpose of characterizing the turbulent cascade, the scale $r$ is the independent variable. The stochastic process underlying the evolution of $\mathbf{u}_r$ in the scale $r$ is Markovian if the conditional pdf $p(\mathbf{u}_1, r_1 | \mathbf{u}_2, r_2; \ldots; \mathbf{u}_n, r_n)$ with $r_1 \leq r_2 \leq \cdots \leq r_n$ fulfills the relation

$p(\mathbf{u}_1, r_1 | \mathbf{u}_2, r_2; \ldots; \mathbf{u}_n, r_n) = p(\mathbf{u}_1, r_1 | \mathbf{u}_2, r_2)$,   (4)

where $p(\mathbf{u}_1, r_1 | \mathbf{u}_2, r_2; \ldots; \mathbf{u}_n, r_n)$ denotes the probability of finding certain values of $\mathbf{u}_1$ at some scale $r_1$, provided that the values of $\mathbf{u}_i$ at the larger scales $r_i > r_1$, $i > 1$, are known. The condition (4) states that the increment distribution of $\mathbf{u}_1$ on $r_1$ depends only on the increment value on one larger scale $r_2$, and that further scales do not give more information. The Markov property implies a remarkable consequence: the knowledge of the conditional pdf $p(\mathbf{u}_r, r | \mathbf{u}_0, r_0)$ (with $r \leq r_0$) is sufficient to determine any $n$-point pdf,

$p(\mathbf{u}_1, r_1; \ldots; \mathbf{u}_n, r_n) = p(\mathbf{u}_1, r_1 | \mathbf{u}_2, r_2) \cdots p(\mathbf{u}_{n-1}, r_{n-1} | \mathbf{u}_n, r_n)\, p(\mathbf{u}_n, r_n)$,

i.e. the singly conditioned pdfs contain the entire information about the stochastic process. For Markov processes the evolution of the conditional pdf in the scale $r$ can be described by the Kramers-Moyal expansion, a partial differential equation for $p(\mathbf{u}_r, r | \mathbf{u}_0, r_0)$ in the variables $\mathbf{u}_r$ and $r$. According to Pawula's theorem, this expansion truncates after the second term if the fourth-order expansion coefficient vanishes [57]. In this case the Kramers-Moyal expansion reduces to the Fokker-Planck equation (or Kolmogorov equation)

$-r \frac{\partial}{\partial r}\, p(\mathbf{u}, r | \mathbf{u}_0, r_0) = \left[ -\sum_i \frac{\partial}{\partial u_i} D^{(1)}_i(\mathbf{u}, r) + \sum_{i,j} \frac{\partial^2}{\partial u_i \partial u_j} D^{(2)}_{ij}(\mathbf{u}, r) \right] p(\mathbf{u}, r | \mathbf{u}_0, r_0)$.   (6)

Note that we have multiplied both sides of the usually used Fokker-Planck equation by $r$; on the right-hand side we have incorporated this factor in the definition of the Kramers-Moyal coefficients (8). The Fokker-Planck equation describes not only the evolution of the conditional probability $p(\mathbf{u}_r, r | \mathbf{u}_{r_0}, r_0)$ but also the evolution of the pdf $p(\mathbf{u}_r, r)$, as can be seen by integrating (6) over $\mathbf{u}_0$. The drift vector $\mathbf{D}^{(1)}$ and the diffusion matrix $\mathbf{D}^{(2)}$ of (6) are defined via the limit

$D^{(k)}(\mathbf{u}, r) = \lim_{\Delta r \to 0} M^{(k)}(\mathbf{u}, r, \Delta r)$,   (7)

where the coefficients $M^{(k)}$ are given by

$M^{(1)}_i(\mathbf{u}, r, \Delta r) = \frac{r}{\Delta r} \left\langle u_i(r - \Delta r) - u_i(r) \,\middle|\, \mathbf{u}(r) = \mathbf{u} \right\rangle$, $\qquad M^{(2)}_{ij}(\mathbf{u}, r, \Delta r) = \frac{r}{2\,\Delta r} \left\langle [u_i(r - \Delta r) - u_i(r)]\,[u_j(r - \Delta r) - u_j(r)] \,\middle|\, \mathbf{u}(r) = \mathbf{u} \right\rangle$.   (8)

The coefficients $M^{(k)}$ are conditional expectation values $\langle \cdot \,|\, \mathbf{u} \rangle$ on the velocity increment $\mathbf{u}$ and can easily be determined from experimental data. One can find estimates for the $D^{(k)}$ by extrapolating the measured conditional moments $M^{(k)}$, as functions of $\Delta r$, towards $\Delta r = 0$ [54]. Another possibility would be to approximate the drift and diffusion coefficients by the coefficients $M^{(k)}$ for one finite $\Delta r$. The corrections of higher order in $\Delta r$ can then be taken into consideration by correction terms. The improved coefficients can be used to recalculate the corrections, and if one performs this procedure recursively, the result converges towards the drift and diffusion coefficients [53]. Both approaches give similar results, but the approximation of [53] has some pitfalls if the limit is not examined, which is an essential point of this analysis; see also [22]. A solution $p(\mathbf{u}, l)$ of the Fokker-Planck equation can be derived from the Chapman-Kolmogorov equation $p(\mathbf{u}, l) = \int p(\mathbf{u}, l | \mathbf{u}_0, l_0)\, p(\mathbf{u}_0, l_0)\, d\mathbf{u}_0$. Here we use the logarithmic scale $l := \ln(L/r)$ to transform the Fokker-Planck equation into the usual form $\partial p / \partial l = \ldots$ instead of $-r\, \partial p / \partial r = \ldots$.
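The conditional moments of equation (8) translate directly into a data-analysis recipe. Below is a minimal one-dimensional Python sketch of how $M^{(1)}$ and $M^{(2)}$ could be estimated from a velocity record by binning the increments at scale $r$; the bin count, occupancy threshold, and synthetic test signal are illustrative assumptions, and the paper itself works with the full two-dimensional (longitudinal, transverse) version.

```python
import numpy as np

def conditional_moments(x, r, dr, dx, nbins=30, min_count=50):
    """Estimate M^(1)(u, r) and M^(2)(u, r) between scales r and r - dr
    (with the factor r absorbed, as in the text). x is a velocity record
    sampled with spatial step dx (Taylor's hypothesis already applied)."""
    nr, ndr = int(round(r / dx)), int(round((r - dr) / dx))
    u_r  = x[nr:] - x[:-nr]                 # increments at scale r
    u_rd = x[ndr:] - x[:-ndr]               # increments at scale r - dr
    m = min(len(u_r), len(u_rd))
    u_r, u_rd = u_r[:m], u_rd[:m]
    bins = np.linspace(u_r.min(), u_r.max(), nbins + 1)
    idx = np.clip(np.digitize(u_r, bins) - 1, 0, nbins - 1)
    M1, M2 = np.full(nbins, np.nan), np.full(nbins, np.nan)
    for k in range(nbins):
        sel = idx == k
        if sel.sum() < min_count:           # skip poorly populated bins
            continue
        du = u_rd[sel] - u_r[sel]           # conditional increment change
        M1[k] = (r / dr) * du.mean()
        M2[k] = (r / (2 * dr)) * (du ** 2).mean()
    return 0.5 * (bins[:-1] + bins[1:]), M1, M2

# Illustrative call on a synthetic random-walk signal (not real turbulence data):
x = np.cumsum(np.random.default_rng(1).normal(size=200_000)) * 0.01
centers, M1, M2 = conditional_moments(x, r=0.5, dr=0.05, dx=0.001)
```

Repeating this for several $\Delta r$ and extrapolating towards $\Delta r \to 0$ then yields the drift and diffusion estimates, as described above.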
For sufficiently small Δl ≡ l - l_0, such that the D^(i) are constant over Δl, the conditioned probability can be approximated by the Gaussian short-step propagator

p(u, l_0 + Δl | u_0, l_0) ≈ (4πΔl)^(-1) (det D^(2))^(-1/2) exp( -[u - u_0 - Δl D^(1)]ᵀ (D^(2))^(-1) [u - u_0 - Δl D^(1)] / (4Δl) ),   (9)

with D^(1) and D^(2) evaluated at (u_0, l_0). For larger distances l - l_0, a solution can be obtained by iterating the Chapman-Kolmogorov equation with this approximation:

p(u, l | u_0, l_0) = ∫...∫ p(u, l | u_{N-1}, l - Δl) ... p(u_1, l_0 + Δl | u_0, l_0) du_1 ... du_{N-1}.   (10)

Below we use this path-integral approximation to solve the Fokker-Planck equation numerically. The error analysis for a diffusion process in two variables is more complicated than in one dimension. We perform the error analysis according to [61]; here we just sketch the procedure and give the resulting error estimates. An error analysis comprises a quantity to estimate and a stochastic quantity which leads to an uncertainty. The two quantities to estimate are the drift and diffusion coefficients, and it is assumed that the stochastic quantity is given solely by the stochastic nature of the process. Because the randomness is determined by D^(2), the errors of D^(1) and D^(2) can be expressed by D^(2) itself. The following results are valid for small Δr. The error of the drift coefficient is proportional to the diffusion coefficient and decreases with the number N of samples used to estimate D^(1). To calculate the error of the diffusion matrix, one has to transform the diffusion coefficient into diagonal form by a suitable orthogonal transformation matrix B. In the diagonal system the error ΔD_ii^(2) of D_ii^(2) can again be expressed through D^(2) itself, which leads to the error of the initial diffusion matrix by the inverse transformation. Here σ(m, N) enters as an estimator for the average value of a χ²-distributed stochastic variable ζ = (m + Γ)², with a normally distributed stochastic variable Γ obeying ⟨Γ⟩ = 0 and ⟨Γ²⟩ = 1. Thus σ(m, N) defines the 32%-confidence interval of the estimator ζ. In the above presentation the central quantity was the pdf, but very often one is interested in the structure functions ⟨u^m v^n⟩ = ∫∫ u^m v^n p(u, v, r) du dv. From the Fokker-Planck equation a hierarchical equation for the structure functions can be derived by using ⟨u^m v^n f(u, v)⟩ = ∫∫ u^m v^n f(u, v) p(u, v, r) du dv:

-r ∂/∂r ⟨u^m v^n⟩ = m ⟨u^{m-1} v^n D_1^(1)(u, v, r)⟩ + n ⟨u^m v^{n-1} D_2^(1)(u, v, r)⟩
  + m(m-1) ⟨u^{m-2} v^n D_11^(2)(u, v, r)⟩ + 2mn ⟨u^{m-1} v^{n-1} D_12^(2)(u, v, r)⟩ + n(n-1) ⟨u^m v^{n-2} D_22^(2)(u, v, r)⟩.   (16)

This equation allows a direct comparison of the Fokker-Planck equation with structure functions.

III. THE EXPERIMENTAL SETUP

For the subsequent analysis we use two data sets measured in the central region of the wake behind a cylinder. The Reynolds numbers are R_λ = 180 and R_λ = 550, respectively. Unless stated otherwise, the data set with R_λ = 180 is used; the high-Reynolds-number data set is used for comparison. For the first data set the local velocity was measured in the wake 60 diameters behind a cylinder with cross section D = 20 mm. The Reynolds number is 13200, with a Taylor-based Reynolds number of R_λ = 180. We measured the velocity component U in the mean flow direction and the component V orthogonal to the cylinder axis with an X-wire (Dantec's frame 90N10 with Dantec's 55P51 X-wire). We collected 1.25·10^8 samples with a sampling frequency of 25 kHz using a 16-bit A/D converter; high-frequency electronic noise was suppressed with a low-pass filter to prevent aliasing. We use Taylor's hypothesis of frozen turbulence to convert time lags into spatial displacements. With the sampling frequency of 25 kHz and a mean velocity of 9.1 m/s, the spatial resolution of the measurement is not better than 0.36 mm. For the integral length scale L = ∫_0^{r_0} R(r) dr, where r_0 denotes the first zero crossing of the autocorrelation function R(r), we obtain a value of L = 137 mm for the stream-wise component U and L_t = 125 mm for the component V.
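A brief sketch of the integral-scale estimate just described, assuming a uniformly sampled, mean-free record already converted to spatial separations; the autocorrelation handling is deliberately simplified.

```python
import numpy as np

def integral_scale(x, dx):
    """Integral length scale L = int_0^{r0} R(r) dr, with r0 the first
    zero crossing of the normalized autocorrelation R(r).

    Note: np.correlate is O(N^2); for long records an FFT-based
    autocorrelation estimate is preferable. Assumes R does cross zero.
    """
    x = x - x.mean()
    R = np.correlate(x, x, mode="full")[len(x) - 1:]
    R = R / R[0]                       # normalized autocorrelation
    zero = np.argmax(R < 0)            # index of first zero crossing
    return R[:zero].sum() * dx         # simple Riemann sum for the integral
```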
Taylor's microscale λ is calculated using the method proposed by [5]. For isotropic turbulence, λ can be expressed through the small-scale limit of the second-order structure function, λ² ∝ lim_{r→0} σ² r²/⟨u_r²⟩, where σ is the standard deviation of the turbulent fluctuations. The limit has been calculated by fitting a second-order polynomial to the data, resulting in λ = 4.8 mm for the stream-wise component and λ_t = 3.0 mm for the V component. The dissipation scale η cannot be resolved, because it is smaller than the length of the hot wire and smaller than the spatial resolution resulting from the finite sampling frequency and the Taylor hypothesis.

A. Two-Point Statistics

First of all, we characterize our data by two-point statistics (structure functions, pdfs of increments, etc.). In this way, we have a reference to results presented frequently in the literature, and we get a first impression of the differences between longitudinal and transverse increments. A central assumption in many treatments of small-scale turbulence is isotropy. To study isotropy we use the Kármán equation (isotropy relation), because it is the simplest exact relation for structure functions that relies on isotropy. The Kármán equation connects the second-order longitudinal and transverse structure functions:

⟨v_r²⟩ = ⟨u_r²⟩ + (r/2) ∂/∂r ⟨u_r²⟩.   (18)

Taking the validity of the Kármán equation as an indication of isotropy, we see that for small distances r < L the isotropy relation is well fulfilled; the relative error is within 5%. For large distances r > L, isotropy is violated (large-scale anisotropy); the ratio ⟨v_r²⟩/⟨u_r²⟩ is approximately 0.66, which is a typical value in the literature. Next, we analyze the data with respect to scaling properties. Fig. 2a) shows the energy spectrum with Kolmogorov's -5/3 law for comparison. In Fig. 2b) the third-order structure function is plotted in a compensated representation, i.e. ⟨|u_r³|⟩/r is plotted against r, to estimate the scaling range. The maximum lies at 10⁻² m, which according to Kolmogorov's 4/5 law defines the position and the width of the scaling range. Because the scaling range is narrow, we use extended self-similarity (ESS) [7,8], see also appendix VII,

⟨|v_r|^n⟩ ∝ ⟨|u_r|³⟩^{ξ_n^t},   (20)

to estimate the scaling exponents of the structure functions. In Fig. 3 we show the application of ESS to the third-, fourth- and sixth-order structure functions. The third-order structure function is of special interest, because it serves as the reference in ESS. In all three figures it can be seen that the transverse exponent is smaller than the longitudinal one, ξ_n^t < ξ_n^l. This result is well accepted [2,15,18,49,69]. As a consequence, one scaling group is not enough to characterize the turbulent flow, and the statistics is more complex than previously thought. To make this clearer, Fig. 4 shows the exponents ξ_n^l and ξ_n^t up to order 8. For the sixth-order structure function we get ξ_6^l = 1.76 ± 0.04. Fitting Kolmogorov's log-normal model to the values of ξ_n^l yields the intermittency parameter µ = 0.24. Both values are in good accordance with the experimentally expected ones, see for example [19]. The transverse exponents are clearly smaller than the longitudinal exponents for n > 3, i.e. the transverse structures are significantly more intermittent, see Fig. 4. This much-debated point will be discussed in detail at the end of this article. Another quantity which allows one to quantify intermittency is the flatness F_α ≡ ⟨α⁴⟩/⟨α²⟩², where α is u_r or v_r, respectively. For a Gaussian distribution F_α = 3, and for an intermittent distribution F_α > 3. As shown in Fig. 5, for r < L both components are intermittent.
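To make the two-point quantities of this section concrete, a short sketch computing increments at several scales, structure functions, the flatness, and an ESS exponent as the slope of log⟨|v_r|^n⟩ against log⟨|u_r|³⟩. The synthetic random-walk records are stand-ins for the measured X-wire data; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
U = np.cumsum(rng.normal(size=2**20))     # stand-in longitudinal record
V = np.cumsum(rng.normal(size=2**20))     # stand-in transverse record

def incr(x, lag):                          # increments over `lag` samples
    return x[lag:] - x[:-lag]

lags = np.unique(np.logspace(1, 4, 20).astype(int))   # scales r (samples)
Su3 = np.array([np.mean(np.abs(incr(U, l))**3) for l in lags])
Sv6 = np.array([np.mean(np.abs(incr(V, l))**6) for l in lags])

# ESS: xi_6^t is the slope of log <|v_r|^6> against log <|u_r|^3>
xi6_t, _ = np.polyfit(np.log(Su3), np.log(Sv6), 1)
print("ESS exponent xi_6^t:", xi6_t)

# flatness at one scale: 3 for Gaussian statistics, > 3 if intermittent
u_r = incr(U, 100)
print("flatness:", np.mean(u_r**4) / np.mean(u_r**2)**2)
```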
For small r the transverse increments are considerably more intermittent than the longitudinal ones. In Fig. 6 the probability densities of longitudinal and transverse velocity increments are plotted. The velocity distribution on each scale is normalized by the respective standard deviation, i.e. u_σ := u_r/σ_{u,r} and v_σ := v_r/σ_{v,r}, so that only the forms of the curves are compared. In Fig. 7 the longitudinal and transverse probability densities are shown for two different length scales (r = L/5 and r = λ). In both figures, the intermittent character of the statistics can be seen. Whereas for r = L/5 the main difference is seen only for positive increments, i.e. the main difference is the skewness, we find for smaller scales that the distribution of the transverse increments is more intermittent for positive and negative increments alike, see Fig. 7b). Next we briefly present results from the high-Reynolds-number data set. The Kármán equation is well fulfilled, i.e. the data are isotropic to a good approximation, see Fig. 8. Fig. 9a) presents the energy spectrum of the longitudinal increments, which shows a distinct scaling range with the exponent -5/3 in accordance with Kolmogorov's theory. The scaling range is more pronounced for this data set, as can also be seen from the third-order structure function; compare Figs. 9b) and 2b). In this section, we have presented our data with respect to two-point statistics, i.e. regarding only the statistics of increments separately for one fixed scale. The results are comparable to those given in the literature for moderate Reynolds numbers. Differences between longitudinal and transverse increments are clearly visible in pdfs and structure functions, indicating that the transverse increments are more intermittent than the longitudinal ones. In the following we apply a new analysis to study the dependence between different length scales and the interaction between both increments, which has not been studied so far.

B. Multi-Point Statistics: FP-Analysis

In this section we present the analysis of the multi-point statistics separately for the longitudinal and transverse increments, as described in section II. There are essentially three steps. First we show the validity of the Markov properties. Secondly, we calculate the Kramers-Moyal coefficients and show that the data obey a diffusion process. Finally, as a verification, we integrate the resulting Fokker-Planck equation for the simple and for the conditioned probability distribution and show that the estimated Fokker-Planck equation describes the data correctly. A comment on the increment definition: differing from the usual convention, we define increments for the multi-point examinations as centered increments, u_r := e · [U(x + r/2) - U(x - r/2)]. In appendix VIII we compare both definitions.

Markov Properties

The foundation of the following analysis is the validity of the Markov properties. They can be tested directly on the data via their definition (4). Because of the finite number of measured data points we restrict ourselves to the verification of p(u_1, r_1 | u_2, r_2) = p(u_1, r_1 | u_2, r_2; u_3, r_3). Fig. 10 shows both sides of this equation for the longitudinal increments for the three length scales r_1 = Δr, r_2 = 2Δr and r_3 = 3Δr, with Δr = 68.3 mm ≈ L/2. It can be seen that both distributions coincide. For length scales below a certain threshold, the Markov properties are not fulfilled, see Fig. 11.
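The comparison of singly and doubly conditioned pdfs underlying Figs. 10 and 11 can be sketched as follows, together with the Wilcoxon-type measure t introduced in the next paragraph; condition windows and thresholds are illustrative, and the names are ours.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def conditioned_samples(u1, u2, u3, c2, c3, width):
    """Samples of u1 under a single (u2 ~ c2) and a double
    (u2 ~ c2, u3 ~ c3) condition; scales r1 < r2 < r3, same positions."""
    single = np.abs(u2 - c2) < width
    double = single & (np.abs(u3 - c3) < width)
    return u1[single], u1[double]    # Markovian if both share one pdf

def wilcoxon_measure(a, b):
    """t = |Q - mn/2| / sigma(m, n); close to 1 if both samples come
    from the same distribution (cf. the definition in the text)."""
    m, n = len(a), len(b)
    Q = mannwhitneyu(a, b, alternative="two-sided").statistic  # inversions
    sigma = np.sqrt(m * n * (m + n + 1) / 12.0)
    return abs(Q - m * n / 2.0) / sigma
```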
We call the associated length scale the 'Markov coherence length', or 'Markov length' l_m for short; above l_m the Markov properties are fulfilled, below l_m they are not [24]. To quantify the results and to obtain a more objective and systematic measure of the Markov properties and the Markov length, we perform a Wilcoxon test, which compares two random samples of sizes m and n (see [54,67]). For the Wilcoxon test, one has to count the number of inversions of two samples, here for the singly and doubly conditioned variables u_1|_{u_2} and u_1|_{u_2,u_3}. We calculate t := |Q - Q_{p=p̃}|/σ(m, n), where Q is the number of inversions calculated from the experimental data for the variables u_1|_{u_2} and u_1|_{u_2,u_3}; Q_{p=p̃} = mn/2 and σ(m, n) = √(mn(m + n + 1)/12) are the number of inversions and the standard deviation, respectively, under the assumption that both variables have the same distribution. Thus t ≈ 1 if both samples come from the same universe, i.e. have the same distribution. Fig. 12 shows this measure as a function of Δr. For small Δr the Markov properties are not fulfilled, whereas for larger distances the deviations are no longer significant. We identify the distance Δr = l_m where t drops to 1 as the Markov length [70]. This value can be estimated by fitting an exponential function to the values and interpreting the passage through the value 1 as the Markov length, as presented in Fig. 12. The Markov properties are also fulfilled for the transverse increments, but with a smaller Markov length, see Fig. 13. The Markov length varies within 20% with respect to the condition u_2 but remains roughly constant with respect to r. For the longitudinal increments the Markov length lies in the range 7.4 mm < l_{m,l} < 9.6 mm; for the transverse increments it lies in the range 5.5 mm < l_{m,t} < 6.8 mm. The ratio is l_{m,l}/l_{m,t} ≈ 1.4, as is known for the Taylor length [51]. Note that, consistent with earlier findings, l_m ≈ λ [54]. The analysis of the R_λ = 550 data gives in principle analogous results. We conclude from these results that the 'cascade' of longitudinal and transverse increments can be described by a Markov process for step sizes larger than the Markov length l_m, which becomes important again for the estimation of the Kramers-Moyal coefficients, as will be seen below.

Kramers-Moyal Coefficients

The drift coefficients D^(1) and the diffusion coefficients D^(2) are calculated according to Eq. (7) directly from the measured data, following the procedure described in [22,54]. The crucial point is the estimation of the limit lim_{Δr→0} M^(i).

FIG. 14: The coefficients M^(1) and M^(2) as functions of the step size Δr for r = L/2. The two upper panels show the limit of the drift coefficient for the longitudinal and transverse increments, respectively; the two lower panels show the limit of the diffusion coefficient for the longitudinal and transverse increments. The dotted lines mark the Markov length. Circles: first-order polynomial used to extrapolate the limit Δr → 0. The linear dependence is the first-order approximation of the limit, see [22].

The resulting drift coefficients D^(1) and diffusion coefficients D^(2) are shown in Fig. 15 for the length scale r = L/2. The drift coefficient can be approximated by a straight line with negative slope. Small deviations from this behavior are visible for the transverse component. For the diffusion coefficient, qualitative differences between the two increments are visible.
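The extrapolation shown in Fig. 14 and the subsequent parametrization of the resulting coefficients can be sketched as follows; the M-arrays are assumed to be precomputed as in the earlier listing, and the coefficient naming follows Eq. (21) as reconstructed below.

```python
import numpy as np

def km_limit(drs, M, l_m):
    """Linear extrapolation of M(dr) towards dr -> 0, using only step
    sizes above the Markov length l_m (first-order approximation of the
    limit, cf. Fig. 14)."""
    sel = drs > l_m
    slope, intercept = np.polyfit(drs[sel], M[sel], 1)
    return intercept                    # estimate of D at dr = 0

def fit_diffusion_polynomial(u_bins, D2):
    """Second-order polynomial fit D2(u) = d1 + d2*u + d3*u^2; the
    linear term d2 encodes the skewness of the longitudinal cascade."""
    ok = np.isfinite(D2)
    d3, d2, d1 = np.polyfit(u_bins[ok], D2[ok], 2)
    return d1, d2, d3
```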
In contrast to the transverse coefficient, the longitudinal coefficient is not symmetric under the reflection u → -u. This is compatible with Kolmogorov's 4/5 law, which states that the longitudinal distributions are skewed. The diffusion coefficient can be approximated by a second-order polynomial, so we have

D^(2)(α, r) = d_1^α(r) + d_2^α(r) α + d_3^α(r) α²,   (21)

where α = u, v. Due to the reflection symmetry v → -v of the transverse increments, d_2^v(r) ≡ 0. The drift and diffusion coefficients are the first two coefficients of the Kramers-Moyal expansion. According to the Pawula theorem all higher coefficients vanish if the fourth-order coefficient is zero, and the expansion then simplifies to a Fokker-Planck equation. In Fig. 16 this fourth-order coefficient is plotted for r = L/2. The coefficient D^(4) for the longitudinal increments vanishes within the error bars. The corresponding transverse coefficient has a value slightly above zero, but one can estimate with the Kramers-Moyal expansion that the contribution of this coefficient is less than one hundredth of that of the diffusion coefficient. Therefore, we assume a vanishing fourth-order coefficient in the following. This assumption is additionally justified below by showing that the Fokker-Planck equation describes the increment statistics well.

FIG. 16: The fourth-order Kramers-Moyal coefficient D^(4)(v, r) for r = L/2. Within the error bars the longitudinal coefficient is zero. The transverse coefficient is slightly above zero, but its contribution to the Kramers-Moyal expansion is small.

Next, the r-dependence in Eq. (21) is investigated. It can be estimated by fitting the approximation (21) to the numerical Kramers-Moyal coefficients. The result is depicted in Fig. 17 for the longitudinal (black squares) and transverse increments (white squares). The form is remarkably simple; it can be approximated by simple functions of r (Eq. (22)). Here, we denote by X_l and X_t the longitudinal and transverse quantities, respectively.

FIG. 17: The r-dependence of the coefficients of Eq. (21) for the longitudinal (black squares; α = u) and transverse increments (white squares; α = v).

If the notation is unambiguous, we omit the index l or t.

Integration of the Fokker-Planck Equation

Next we want to demonstrate that the Fokker-Planck equation describes the statistics of the turbulent field correctly. If the estimated drift and diffusion coefficients (22) are used to solve the Fokker-Planck equation numerically, the resulting distributions can be compared with the distributions estimated directly from the data. First, the numerical estimation of the single distributions p(u, r) and p(v, r) as functions of r is discussed, for which the distributions at the integral scale, p(u, r = L) and p(v, r = L), are used as initial conditions. In Fig. 18, p(u, r) and p(v, r) are shown for several length scales. The curves are in good agreement with the data. It is important to stress that the intermittency effects and the skewness are also described well. A similar calculation has been done for the conditional distributions p(u, r|u_0, r_0) and p(v, r|v_0, r_0), starting at the integral scale with Dirac's delta function, p(u, r = L|u_0, r_0 = L) = δ(u - u_0) and p(v, r = L|v_0, r_0 = L) = δ(v - v_0), as initial condition. The solutions down to r = L/2 are shown in Figs. 19 and 20 as functions of the initial values u_0 and v_0. Comparing the conditional probability distributions in Fig. 19 with those in Fig. 20, it is evident that the transverse statistics relaxes faster (the contour lines are more horizontal); a numerical sketch of this integration scheme is given below.
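The numerical solution follows the path-integral approximation (10): the pdf is propagated step by step with the short-step Gaussian propagator (9). The drift and diffusion functions below are illustrative placeholders; the actual integration would use the estimated coefficients (22).

```python
import numpy as np

u = np.linspace(-8, 8, 401)                 # increment grid
du = u[1] - u[0]

def D1(u, l): return -0.5 * u              # illustrative drift
def D2(u, l): return 0.1 + 0.05 * u**2     # illustrative diffusion

def short_step_propagator(u_to, u_from, l, dl):
    """Gaussian propagator p(u_to, l+dl | u_from, l) for small dl,
    with mean u_from + D1*dl and variance 2*D2*dl (1D version of (9))."""
    mean = u_from + D1(u_from, l) * dl
    var = 2.0 * D2(u_from, l) * dl
    return np.exp(-(u_to[:, None] - mean[None, :])**2 / (2 * var[None, :])) \
           / np.sqrt(2 * np.pi * var[None, :])

# start at the integral scale with a Gaussian pdf and iterate downwards
p = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)
l, dl = 0.0, 0.01
for _ in range(200):
    P = short_step_propagator(u, u, l, dl)  # matrix p(u_i | u_j)
    p = P @ (p * du)                        # one Chapman-Kolmogorov step
    l += dl
print("normalization:", (p * du).sum())     # should stay close to 1
```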
On the basis of these results we conclude that the increment statistics can be well described by a Fokker-Planck equation whose drift and diffusion coefficients are given by (22). Thus, we can examine the increment statistics by means of the drift coefficient D^(1) and the diffusion coefficient D^(2). A closer look at the d-coefficients of Eq. (22), shown in Fig. 17, gives insight into the differences between the longitudinal and transverse increment statistics. It can be seen in Fig. 17a), b) and d) that the absolute values of the transverse coefficients are larger than the longitudinal ones. This means that the transverse cascade is in a sense 'faster' and more noisy. This behavior can be stated more precisely by taking into account a remarkable symmetry: if the r-dependence of the transverse increments is rescaled by the factor 2/3, i.e. r → 2r/3, the dominating terms fall on top of each other, see Fig. 21. Thus the main statistical differences between longitudinal and transverse increments vanish under this rescaling. Differences remain only in the diffusion term, see Fig. 22. In section V we discuss this result in further detail.

C. Joint multi-point statistics of longitudinal and transverse increments

In the preceding section we analyzed the statistics of longitudinal and transverse increments separately. But this separation is restrictive, because the dynamics of both components originate from one velocity field and are therefore connected. They cannot be separated, due to the nonlinear advection term in the Navier-Stokes equation. Therefore we extend the above analysis and examine the joint stochastic properties. We choose both increments as one state with the scale parameter r, see Eq. (3). The aim is to estimate a Fokker-Planck equation (6) in these two variables. We proceed similarly to the one-dimensional analysis. The main differences are that one needs much more data and has to estimate many more coefficients, because the drift coefficient is now a vector and the diffusion coefficient a matrix. The direct verification of the Markov properties for two variables is questionable, because one would have to estimate the doubly conditioned probability function for a two-dimensional process, i.e. a six-dimensional function, from a finite number of data points. One needs approximately 10^4 times more data points than in the one-dimensional case if the results are to be similarly significant. The limiting factors are the duration of the measurement and the amount of data. But we know separately for both components that the Markov properties are valid. In general, if two variables have Markov properties, the joint statistics also have Markov properties (the reverse is not true in general [57]). Therefore we assume that the combined process is Markovian as well. The next step is to estimate the Kramers-Moyal coefficients. First, one has to calculate the approximations M_i^(1)(u, v, r, Δr) and M_ij^(2)(u, v, r, Δr) of the drift vector and diffusion matrix as functions of Δr, see Eq. (8). As in the one-dimensional case, we approximate the drift and diffusion coefficients by fitting a linear polynomial to these functions of Δr above the Markov length. It can be seen that the diagonal diffusion coefficients are not constant but have a parabolic form, which is more pronounced for the transverse component D_22^(2). To summarize, we can approximate the drift and diffusion coefficients by low-order polynomials in u and v (Eq. (23)), with the d-coefficients containing the r-dependence; a sketch of the corresponding two-dimensional moment estimation follows below.
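For the joint analysis, the conditional moments become a vector and a matrix; a compact sketch of their estimation on a (u, v) grid, using the same prefactor conventions as the one-dimensional listing above (binning choices are ours).

```python
import numpy as np

def km_coefficients_2d(u_r, v_r, u_s, v_s, r, dr, nbins=31, umax=6.0):
    """Approximate drift vector M^(1) and diffusion matrix M^(2) on a grid.

    (u_r, v_r): joint increments on scale r; (u_s, v_s): on scale r - dr,
    evaluated at the same positions.
    """
    edges = np.linspace(-umax, umax, nbins + 1)
    iu = np.digitize(u_r, edges) - 1
    iv = np.digitize(v_r, edges) - 1
    M1 = np.full((nbins, nbins, 2), np.nan)
    M2 = np.full((nbins, nbins, 2, 2), np.nan)
    for i in range(nbins):
        for j in range(nbins):
            sel = (iu == i) & (iv == j)
            if sel.sum() < 200:          # require decent statistics per cell
                continue
            d = np.stack([u_s[sel] - u_r[sel], v_s[sel] - v_r[sel]])
            M1[i, j] = (r / dr) * d.mean(axis=1)
            M2[i, j] = (r / (2 * dr)) * (d @ d.T) / sel.sum()
    return M1, M2
```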
The lower index of the d-coefficients labels the associated Kramers-Moyal coefficient; the upper index gives the order of the coefficient with respect to u and v. In the Fokker-Planck equation, the coefficients appear symmetrically with respect to the reflection v → -v; thus a reflection v → -v changes nothing, whereas this symmetry is violated for the longitudinal increment. Of course, the Kramers-Moyal coefficients could be approximated better using higher orders in u and v, but their contributions are small and their values are not well defined at the edges of the available data range, because large velocities are too rare to ensure good statistics. Note that the significance of higher-order terms in D^(1) and D^(2) is important for the closure of Eq. (16), but their investigation is beyond the scope of this article. In order to describe the statistics with the Fokker-Planck equation completely, the r-dependence of the d-coefficients has to be estimated, see Fig. 25; it can be approximated by simple functions of r (Eq. (24)). Before we interpret the results, it has to be shown that the Fokker-Planck equation together with the coefficients (23) and (24) can reproduce the statistics of the measured data. There are in principle two ways to verify the estimated drift and diffusion coefficients. One way is to solve the Fokker-Planck equation; the other is to calculate the structure functions, which can be done, for example, with the hierarchical equation for the structure functions (16). In both cases the results can be compared with the corresponding quantities estimated directly from the data. In Fig. 26 the solution of the hierarchical structure-function equation (16) is shown for n = 0 and m = 1, ..., 6. It is in good agreement with the structure functions. To show that the joint probability distributions can also be reproduced, we solve the Fokker-Planck equation by calculating the path integral (10) numerically. We use the estimated Kramers-Moyal coefficients, start the simulation at the integral scale r = L with a Gaussian distribution for p(u_0, r = L), and integrate down to r = 2λ. Figs. 27 and 28 show the results. Because of the good agreement we conclude that the d-coefficients can be used to characterize the statistics of longitudinal and transverse increments. The number of coefficients in (24) can be reduced by a remarkable symmetry: if one multiplies the r-dependence of the transverse increments by a factor 2/3, the related coefficients of longitudinal and transverse increments coincide, i.e. the number of independent coefficients is halved. This symmetry will be examined in more detail below.

V. INTERPRETATION OF THE RESULTS

From the above analysis we know that the drift and diffusion coefficients contain the information about the small-scale statistics of the turbulence. More specifically, they contain the joint statistics of longitudinal and transverse increments as well as the statistics on multiple scales. Thus we can study the structure of small-scale turbulence with the aid of the drift and diffusion coefficients. First we argue that the hierarchical equation (16) is a kind of generalized Kármán equation, and we discuss the similarities and differences between these two equations. Then we consider the 2/3-rescaling symmetry in more detail, show that it is consistent with known results, and find that the coupling of longitudinal and transverse increments is important for intermittency.
Finally, we show that the scaling exponents of the transverse structure functions can take on spuriously small values if the frequently used extended self-similarity (ESS) is applied to estimate the transverse scaling exponent without taking the rescaling into account. This is an important result for a frequently discussed issue in the analysis of fully developed turbulence: the problem of possible differences in the scaling properties of longitudinal and transverse velocity increments in isotropic small-scale turbulence.

A. Generalized Kármán equation

The hierarchical equation (16), evaluated for m = 2 and n = 0 (Eq. (27)), can be compared with the Kármán equation (18) rewritten as an equation for the derivative of ⟨u²⟩ (Eq. (26)). Although the Kármán equation holds for Re → ∞ and our result is obtained for a quite moderate Reynolds number, the structure of both equations is remarkably similar, except for the additive term d_11. Fig. 29a) shows how well these equations reproduce -∂_r⟨u²⟩. The agreement of (27) with the data is better than that of the Kármán equation, because the flow seems to be slightly anisotropic. Thus we can also analyze anisotropy effects with the Fokker-Planck equation. We want to look at the differences in more detail by comparing the terms associated with the same structure functions on the right-hand sides of the two equations, see Fig. 29b). It can be seen that the pre-factors of the structure functions differ between equations (26) and (27): for example, the pre-factor combination (2d_1^u + d_11^uu) of ⟨u²⟩ in (27) does not equal the pre-factor -2 of ⟨u²⟩ in (26). This reflects the different meaning of the two equations. Note that the anisotropy is more clearly visible in Fig. 29a) than in Fig. 1, because in Fig. 29a) the difference of two quantities of similar magnitude, the right-hand side of Eq. (26), is plotted, whereby the deviation from the isotropic case is more pronounced. Whereas the Kármán equation connects only the second-order structure functions, the Fokker-Planck equation is an equation also for higher orders. Furthermore, its solution not only gives the relation between the structure functions but can even reproduce the structure functions themselves. This information is contained in the d-coefficients. An additional difference is that the Fokker-Planck equation also includes the information on the multi-scale statistics. We mention that the second Kármán equation [44] for -r ∂_r⟨u³⟩ also has a structure similar to the corresponding hierarchical equation of the Fokker-Planck equation (Eq. (27) for m = 3 and n = 0).

B. Different 'cascade speed'

In Fig. 25 we have recognized from the drift and diffusion coefficients that the dominating differences between longitudinal and transverse increments can be traced to a 3/2-times faster 'cascade speed' of the transverse increments, which we have interpreted as a rescaling symmetry. This has some consequences for the increment statistics. First we examine how this symmetry becomes apparent in the structure functions. Then we discuss small differences between longitudinal and transverse increments which exist beyond the difference in cascade speed. Finally we show that the rescaling is compatible with known results, supporting our findings. The rescaling symmetry of the cascade speeds can also be observed directly in the structure functions ⟨u_r^m⟩ and ⟨v_r^m⟩. This can be studied with the aid of the hierarchical equation (16) and the coefficients in (23). We apply the rescaling to the transverse hierarchical equation and label with a tilde all functions whose argument is multiplied by a factor 2/3.
Then the equations for the longitudinal and the rescaled transverse structure functions take the forms (30) and (32). Due to the 2/3-rescaling symmetry, the coefficients d̃_2^v, d̃_22, d̃_11^u and d̃_11^vv can be replaced by the corresponding coefficients of the longitudinal equation; the only exception is the doubly underlined coefficient. Equations (30) and (32) would have the same solution without the underlined terms, as can be seen by comparison. In other words, without these terms the longitudinal and transverse structure functions would also obey the 2/3-rescaling symmetry, ⟨|v^m(r)|⟩ = ⟨|u^m(3r/2)|⟩, where we set u_r ≡ u(r) and v_r ≡ v(r). To focus on the deviations from the 2/3-rescaling symmetry, we subtract (32) from equation (30), i.e. we examine the differences of ⟨u^m(r)⟩ and ⟨ṽ^m(r)⟩ ≡ ⟨v^m(2r/3)⟩ (Eq. (33)). This is a differential equation for the differences ⟨u_r^m⟩ - ⟨ṽ_r^m⟩ of the longitudinal and transverse structure functions. If the initial condition of this equation is zero, the solution would remain zero without the underlined terms. Furthermore, because m d_1^u + m(m-1) d_11^uu/2 < 0 for all orders m of interest, this equation has stable solutions, and deviations in the initial conditions decrease quickly, converging to the 2/3-rescaling apart from the discussed terms. The singly underlined term violates the 2/3-symmetry because d_11^uu(r) ≠ d_22^vv(2r/3); the doubly underlined term violates it because the coefficient d_11^u does not occur symmetrically in the equation. For even m the last term is the smallest one, because of the odd moment and because of the small pre-factor, which is of the same magnitude as the quadratic term. The influence of the two last terms becomes larger with increasing order of the moments. Although the deviations from the 2/3-rescaling are small, they have a crucial meaning for small-scale turbulence: without the terms d_11^uu and d_22^vv there would be no intermittency, and without the term d_11^u there would be no skewness of the longitudinal increments. To study how the form of the Kramers-Moyal coefficients leads to increasing intermittency with decreasing scale, we use the flatness F_u ≡ ⟨u_r^4⟩/⟨u_r^2⟩². For a Gaussian distributed function the flatness has the value 3, and deviations from this value can be interpreted as intermittency. If we differentiate the flatness with respect to r, we can relate the Fokker-Planck equation to the flatness, Eq. (34) (we abbreviate S_mn := ⟨u_r^m v_r^n⟩). The similar structure of Eq. (34) and of the skewness relation (36) given below allows us to discuss both equations together. The direction of the cascade is from large to smaller scales r, so that ∂_r F_u < 0 means increasing intermittency towards smaller scales. The first term is the dominating negative term. The second term vanishes for large scales, because of the approximately Gaussian form of the distribution, and is positive if intermittency increases. The third term simplifies to 12 d_11^vv σ_12²/(S_20)² because of the approximately Gaussian character at large scales, with the covariance σ_12². This term describes the coupling between both components and is larger than zero, i.e. it acts against intermittency. The fourth term belongs to higher-order corrections and describes the influence of the skewness on the intermittency. To summarize, only the two quadratic terms d_11^uu and d_22^vv significantly produce intermittency of the longitudinal and the transverse component, respectively.
Similar considerations can be made for the skewness S_u := ⟨u_r³⟩/⟨u_r²⟩^{3/2}, which can also be expressed through the Kramers-Moyal coefficients (Eq. (36)); note that S_v ≡ 0 because of reflection symmetry. A positive derivative ∂_r S_u > 0 means an increasing skewness for decreasing scales. The first term on the right-hand side of Eq. (36) is always negative. The second term is positive and is therefore necessary for a skewed distribution. The third term amplifies an existing skewness. Therefore the coefficient d_11^u is essential for the skewness, but the 'intermittency' term d_11^uu also leads to an increasing skewness. We can summarize the results in short form: if the r-dependence of the transverse component is rescaled by a factor of 2/3, then the only remaining differences are found in the intermittency contributions and the skewness.

C. Compatibility with known results

To support our findings on the rescaling symmetry, we show that it is compatible with the Kármán equation and with the ratio of the Kolmogorov constants. These two aspects were already presented in [60], but here we go into more detail and add some new aspects. The Kármán equation (18) is consistent with the 2/3-rescaling symmetry, as one can see by interpreting the Kármán equation as a first-order Taylor expansion with the 'small' quantity r/2:

⟨v²(r)⟩ = ⟨u²(r)⟩ + (r/2) ∂/∂r ⟨u²(r)⟩ ≈ ⟨u²(3r/2)⟩.   (37)

This equation corresponds to the 2/3-rescaling symmetry of the second-order structure functions, except for the Lagrange remainder [12] R = r² ∂_ρρ⟨u²(ρ)⟩/8 with r < ρ < 3r/2. If we assume Kolmogorov scaling [37], ⟨u²⟩ ∝ r^{2/3} (intermittency effects for the second-order structure function are negligible), the remainder R = r² ξ_2(ξ_2 - 1) ρ^{ξ_2-2}/8 is below 2.1% of ⟨u²(3r/2)⟩ for all r. An exact calculation yields that the relative error R/⟨u²(3r/2)⟩ is independent of r, and R is ≈ 1.7% of ⟨u²(3r/2)⟩, a value below the typical errors of ⟨u_r²⟩ for data from hot-wire anemometry. For the data used here, the approximation of ⟨u²(r)⟩ + (r/2) ∂_r⟨u²(r)⟩ by ⟨u²(3r/2)⟩ is valid within 4% for all scales, see Fig. 30a). Thus the 2/3-rescaling is a very good approximation of the Kármán equation. Because the only assumptions behind this equation are a solenoidal and isotropic field, the 2/3-rescaling is a property of isotropic turbulence, which should also hold in the limit Re → ∞. It is important to discuss the quality of the approximation (37) with respect to scaling exponents. The remainder R = (r²/8) ξ_2(ξ_2 - 1) ρ^{ξ_2-2} vanishes identically only for structure functions linear in r (ξ_2 = 1). But as one can see from ⟨u²(r)⟩ + (r/2) ∂_r⟨u²(r)⟩ ∝ r^{ξ_2} and ⟨u²(3r/2)⟩ ∝ r^{ξ_2}, the exponent is neither changed by the above approximation for pure scaling, nor is the local exponent changed significantly for real data, as one can see from Fig. 30b). The reason is that the validity of the approximation depends only weakly on the exponent. Conversely, this means that the 2/3-rescaling does not distort the exponent strongly. The 2/3-rescaling can be related to the ratio of the Kolmogorov constants. Let us suppose that the structure functions scale with power laws, ⟨v^n(r)⟩ = c_t^n r^{ξ_n^t} and ⟨u^n(r)⟩ = c_l^n r^{ξ_n^l}, even though our measured structure functions are still far from showing ideal scaling behavior [55]. Note that the c^n constants are related to the Kolmogorov constants but include the energy dissipation ⟨ε_r^{n/3}⟩ and the enstrophy ⟨Ω_r^{n/3}⟩.
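Returning briefly to the Taylor-expansion argument above: the quoted ≈ 1.7% relative remainder can be verified in a few lines for K41 scaling ⟨u²(r)⟩ = C r^{2/3} (the constant C drops out).

```python
# Relative error of the first-order Taylor approximation underlying the
# Karman equation, for <u^2(r)> = C * r**(2/3) (K41 scaling).
xi2 = 2.0 / 3.0
approx = 1.0 + xi2 / 2.0        # [<u^2(r)> + (r/2) d/dr <u^2(r)>] / (C r^xi2)
exact = 1.5**xi2                # <u^2(3r/2)> / (C r^xi2)
print((exact - approx) / exact) # ~ -0.0175, i.e. |R| ~ 1.7% of <u^2(3r/2)>
```

The result is independent of r, as stated in the text, because both numerator and denominator scale as r^{ξ_2}.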
If we neglect the differences in intermittency and skewness, we can relate the structure functions according to the above-mentioned rescaling: ⟨v^n(r)⟩ = ⟨u^n(3r/2)⟩, i.e. c_t^n r^{ξ_n^t} = c_l^n (3r/2)^{ξ_n^l}. We end up with the relations ξ_n^l = ξ_n^t and c_t^n/c_l^n = (3/2)^{ξ_n^l}, consistent with the ratios c_t²/c_l² = 4/3 and c_t⁴/c_l⁴ = 16/9 given in [1]. Thus one needs only the one constant 2/3 to explain the two ratios c_t²/c_l² and c_t⁴/c_l⁴. We conclude that the 2/3-rescaling is the underlying relation between the longitudinal and transverse Kolmogorov constants and therefore also gives a prediction for the Kolmogorov constants of higher orders. For finite Reynolds numbers the differences between the ratio of the Kolmogorov constants and the 2/3-rescaling become more obvious. The ratio ⟨v²⟩/⟨u²⟩ = (c_t^n r^{ξ_n^t})/(c_l^n r^{ξ_n^l}) = c_t/c_l = 4/3 is no longer fulfilled, see Fig. 31a). Whereas for high Reynolds numbers this quantity can be interpreted as a ratio of amplitudes of the structure functions, this meaning is lost for finite Reynolds numbers. But if we treat the structure function as the independent variable and r as the dependent variable, i.e. r is a function of the structure functions, and calculate the ratio r(⟨v²⟩ = x)/r'(⟨u²⟩ = x), we get an almost constant value close to 2/3, see Fig. 31b). This is a remarkable property: the constant ratio of the Kolmogorov constants is just a special case of the 2/3-rescaling. Whereas the former is valid only for high Reynolds numbers, the 2/3-rescaling is also a property of moderate-Reynolds-number flows. A further consequence concerns the discussion of whether the Kolmogorov constants are universal or not [6,52,68]. Even if the Kolmogorov constants are not universal, the ratio r(⟨|v^m|⟩ = x)/r'(⟨|u^m|⟩ = x) seems to be universal if the turbulence is isotropic.

D. Intermittency and the coupling of longitudinal and transverse increments

From the form of the Fokker-Planck equation it can be seen that intermittency is closely related to the interaction of longitudinal and transverse increments. More precisely, a necessary condition for intermittency is the mutual dependence of longitudinal and transverse increments. This can be deduced as follows. First, we neglect the small differences in the intermittency and the skewness. If the 2/3-rescaling is applied, the form of the Fokker-Planck equation becomes symmetric with respect to the exchange of u_r and v_r. Moreover, because of the linear drift term and the rotationally symmetric diffusion term, the Fokker-Planck equation becomes rotationally symmetric, and one can use polar coordinates with radius ρ² = u_r² + v_r². Thus rotationally symmetric functions F(ρ² = u_r² + v_r²) solve the Fokker-Planck equation. To address the question of whether u_r and v_r are coupled, we construct a contradiction by assuming that u_r and v_r are independent, in the sense that F(u_r², v_r²) = F(u_r²)F(v_r²). For a solution F this means that the relation F(ρ²) = F(u_r² + v_r²) = F(u_r²)F(v_r²) holds. But this equation is fulfilled only by Gaussian distributions F, which contradicts the fact that the turbulent flow is intermittent. Thus we conclude that the two increments u_r and v_r depend on each other. Intermittency models have to take this coupling between the two increments into account, and the one-dimensional intermittency models have to be extended.

E. Transverse scaling exponents
In this section we focus on the long-standing debate about possible differences between the scaling exponents of longitudinal and transverse structure functions, and consider the implications of the rescaling symmetry for it. In Fig. 4 we have shown the accepted result that, using ESS, the transverse exponents come out smaller than the longitudinal ones. Nevertheless, we use a new ansatz to reconsider this result and combine the different longitudinal and transverse cascade speeds with ESS. Because the assumption for the longitudinal structure functions in ESS is ⟨|u(r)|^n⟩ ∝ (r f_l(r))^{ξ_n^l}, see (40), the corresponding relation for the transverse increments has to be ⟨|v(r)|^n⟩ ∝ (r f_t(r))^{ξ_n^t} with a function f_t different from f_l. Notice that the implicit assumption in ESS was f_t ≡ f_l, see (41) and (42). Using for the second-order structure function the 3/2-rescaling found from the Fokker-Planck equation, and assuming that the differences in skewness and intermittency are small at this order, we find a relation between the two functions: starting with ⟨|v²(r)|⟩ = ⟨|u²(3r/2)|⟩ we get r f_t(r) = (3r f_l(3r/2)/2)^{ξ_2^l/ξ_2^t} (we have chosen the proportionality constant in ESS, without loss of generality, such that the equality sign holds). Because the intermittency corrections are small at second order, ξ_2^l ≈ ξ_2^t, and we get f_t(r) = 3 f_l(3r/2)/2. For arbitrary structure functions it then holds that

⟨|v(r)|^n⟩ ∝ ⟨|u(3r/2)|³⟩^{ξ_n^t},

which we call in the following ESST, extended self-similarity for the transverse structure functions. Fig. 32 shows the application of ESST to the transverse structure functions. As a remarkable result, the differences between the two exponents vanish. Thus one scaling group is sufficient to characterize both increments. Notice that the differences between the exponents found with ESS are due to the non-existing scaling behavior of the structure functions with r. It is evident that our rescaling does not change the exponents in the case of pure scaling behavior, ⟨|v(r)|^n⟩ ∝ (3r/2)^{ξ_n^t} ∝ r^{ξ_n^t}, which is expected to be valid as the Reynolds number goes to infinity (see also [55]). Next we perform the analogous analysis for our second data set with the higher Reynolds number R_λ = 550. In Fig. 33a) the sixth-order structure function is plotted using ESS, and in Fig. 33b) using ESST. Again we find that the differences in the scaling exponents vanish with ESST (ESS: ξ_6^l = 1.74 ± 0.03, ξ_6^t = 1.60 ± 0.03; ESST: ξ_6^l = 1.74 ± 0.03, ξ_6^t = 1.75 ± 0.03). In Fig. 34 the exponents ξ_n^l and ξ_n^t are plotted up to order 8. The differences between the ESS exponents vanish within the uncertainties when ESST is applied instead of ESS. To see the influence of ESST on the exponents in more detail, Fig. 35 shows the local exponents ξ_n^α(r) = ∂ log⟨α^n⟩/∂ log⟨|u|³⟩ (α = u, v; n = 4, 6). The use of ESS for the longitudinal exponents results in an almost constant local exponent. The transverse exponents show oscillations, which have been explained as log-periodic oscillations. If ESST is applied, the differences between longitudinal and transverse exponents vanish within these oscillations.

FIG. 35: The local exponent ξ_n^α = ∂ log⟨α^n⟩/∂ log⟨|β|³⟩ with α = u(r) and β = u(r) (straight line); α = v(r) and β = u(r) (black squares); α = v(r) and β = u(3r/2) (white squares). a) Local exponent of the fourth-order structure function and b) of the sixth-order structure function.

VI. CONCLUSION

We have analyzed the differences and similarities between longitudinal and transverse increments.
Differences in their structure functions and probability distributions were found, as already observed by other groups. To look at these differences in more detail, we have used a method to estimate a phenomenological Fokker-Planck equation by pure data analysis and have extended this analysis to both longitudinal and transverse increments. We have carefully checked the mathematical prerequisites of this method and found that they are well fulfilled for our data. We then estimated the drift and diffusion coefficients of the Fokker-Planck equation directly from the data. The solution of the Fokker-Planck equation with these coefficients reproduces the probability distributions and the structure functions well. Thus, the statistics of the joint probability of longitudinal and transverse increments is encoded in the drift and diffusion coefficients. These new quantities for analyzing the turbulent flow determine a stochastic process in r, which can be interpreted as a 'cascading process'. The statistics of both increments are decoupled in the deterministic drift vector but are coupled via the nonlinear and non-diagonal diffusion matrix. A remarkable result is that, to a first approximation, the longitudinal and transverse drift and diffusion coefficients can be transformed into each other by a simple rescaling, namely by multiplying the scale of the longitudinal increments by the factor 3/2. With this rescaling, the frequently discussed issue of the differences between longitudinal and transverse structure functions can be explained. In this context the extended self-similarity method (ESS) has to be modified for the transverse structure functions according to the rescaling symmetry, leading to an extended self-similarity for the transverse increments (ESST). From it, we obtain the remarkable result that the previously found differences in the scaling exponents vanish. Accordingly, the seeming differences between the exponents were a result of the non-scaling behavior. Thus it is evident that with increasing Reynolds number the differences in the scaling exponents diminish, as more and more ideal scaling behavior occurs. It is interesting to note that the 3/2-rescaling, which we interpret as different speeds of the longitudinal and transverse cascades, is compatible with the Kármán equation. This leads to the proposal that the 3/2-rescaling remains valid for large Reynolds numbers, Re → ∞. Beyond the features of structure functions, our method of analyzing the statistics by means of a Fokker-Planck equation provides further insights into the complexity of the cascading process. It gives access to stochastic properties on multiple scales and to the interaction between both increments. In this closer look, principal differences between longitudinal and transverse increments can be identified which are not captured by the investigation with structure functions. The physical meaning of these differences has to be clarified for an extended understanding of small-scale turbulence. We acknowledge fruitful discussions with R. Friedrich and A. Naert, and teamwork with S. Lück and F. Durst. This work was supported by the DFG grant Pe 478/9.

VII. APPENDIX A: EXTENDED SELF SIMILARITY

Benzi et al. [7,8] noticed that structure functions show extended self-similarity (ESS) and that this property can be used to estimate scaling exponents at moderate Reynolds numbers, for which a scaling regime is not well developed.
The basic assumption of ESS is that the structure functions of different orders have a similar shape, which can be written as

⟨|u_r|^n⟩ ∝ (r f(r))^{ξ_n^l}   (40)

with a common function f(r). Additionally, it was shown that ⟨|u³|⟩ and ⟨u³⟩ have the same scaling properties, and if one chooses ξ_3^l = 1 according to Kolmogorov's 4/5 law, one can write ESS in the form

⟨|u_r|^n⟩ ∝ ⟨|u_r|³⟩^{ξ_n^l}.   (41)

Using this expression, the scaling range extends down to 3η-5η and the exponent is universal, i.e. it does not depend on the Reynolds number. For the transverse structure functions, ESS has been applied in two different ways. Some groups have plotted ⟨v_r^n⟩ ∝ ⟨|v_r|³⟩^{ξ_n^t} [13,48]. But more recently it has been argued that

⟨|v_r|^n⟩ ∝ ⟨|u_r|³⟩^{ξ_n^t}   (42)

is theoretically better justified, because Kolmogorov's 4/5 law gives an exact prediction for the longitudinal third-order structure function, and therefore ⟨|u_r|³⟩ is a good point of reference [2,15,18,49,69].

VIII. APPENDIX B: LEFT-BOUNDED INCREMENTS

In the case of multi-point statistics with n increments on different scales r_1, ..., r_n, one has to define the relative positions of the increments. A general definition is u_r(α) = e · [U(x + αr) - U(x - (1 - α)r)] with α ∈ [0, 1]. We call the case α = 1 left-bounded increments and the case α = 1/2 centered increments. Left-bounded increments are the usual way to define the relative position of several increments, see for example [37]. However, for this increment definition the Markov properties are violated, as one can see in Fig. 36a). For u_3 = 0 the Markov properties are fulfilled. For u_3 = σ we get Markov properties around the expected Markov length, but for larger scales the Markov properties are violated. Nevertheless, the violation of the Markov properties does not affect the Kramers-Moyal coefficients very much; even the deviations in d_2^u do not affect the total Fokker-Planck equation much, because this term is small compared to the others, see Fig. 37. The violated Markov properties become more obvious if the conditioned moments estimated from the Fokker-Planck equation are compared with those from the data, see Fig. 36b). It can be seen that the left-bounded increments do not describe the conditional moments. Therefore we use centered increments in our analysis. For a discussion of the background of random numbers and random walks see Ref. [66].
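As a final illustration of the ESS/ESST comparison of section V E, a sketch computing the local exponent of a transverse structure function once against ⟨|u(r)|³⟩ (ESS) and once against ⟨|u(3r/2)|³⟩ (ESST). Structure-function arrays on a common grid of scales are assumed to be precomputed; names are ours.

```python
import numpy as np

def local_exponent(S, Sref):
    """Local exponent d log(S) / d log(Sref) along the scale axis."""
    return np.gradient(np.log(S)) / np.gradient(np.log(Sref))

def esst_local_exponent(r, Svn, Su3):
    """ESST: reference the transverse structure function to <|u(3r/2)|^3>,
    obtained here by interpolation (end points are held at the boundary
    values, so the upper end of the range should be treated with care)."""
    Su3_shifted = np.interp(1.5 * r, r, Su3)
    return local_exponent(Svn, Su3_shifted)
```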
Experimental study of the $^{11}\text{B}(p,3\alpha)\gamma$ reaction at $E_p = 0.5-2.7$ MeV

Our understanding of the low-lying resonance structure in $^{12}$C remains incomplete. We have used the $^{11}\text{B}(p,3\alpha)\gamma$ reaction at proton energies of $E_p=0.5-2.7$ MeV as a selective probe of the excitation region above the $3\alpha$ threshold in $^{12}$C. Transitions to individual levels in $^{12}$C were identified by measuring the 3$\alpha$ final state with a compact array of charged-particle detectors. Previously identified transitions to narrow levels were confirmed and new transitions to broader levels were observed for the first time. Here, we report cross sections, deduce partial $\gamma$-decay widths and discuss the relative importance of direct and resonant capture mechanisms.

Introduction

The p + ¹¹B reaction has been extensively used to study the excitation structure of the ¹²C nucleus. This includes measurements of proton widths Γ_p, the partial γ widths Γ_γ0 and Γ_γ1 to the two lowest levels in ¹²C, and the partial α widths Γ_α0 and Γ_α1 to the two lowest levels in ⁸Be [1,2,3]. The focus of the present work is the two isospin T = 1 resonances occurring at proton energies of E_p = 2.00 MeV and 2.64 MeV, which correspond to the levels 17.76, 0⁺ and 18.35, 3⁻. The γ decay of these levels to lower-lying, unbound levels in ¹²C was studied by Hanna et al. [4], who identified two rather strong transitions feeding two narrow levels above the 3α threshold: 17.76, 0⁺ → 12.71, 1⁺ and 18.35, 3⁻ → 9.64, 3⁻. Using the conventional approach of detecting the γ transitions with a large scintillator, Hanna et al. could not have identified weak transitions or transitions to broad levels. Recently, such transitions have been studied using a technique where the final level is identified by measuring the momenta of the three α particles resulting from its breakup [5,6,7]. Here we wish to explore, first, whether γ transitions from the levels 17.76, 0⁺ and 18.35, 3⁻ to broad, lower-lying levels, similar to those observed in Ref. [7], can be identified, and second, whether the strengths of the transitions already observed by Hanna et al. can be confirmed with this indirect detection method.

Experiment

The experiment was performed at the 5 MV Van de Graaff accelerator at the Department of Physics and Astronomy at Aarhus University. The proton beam was directed onto the target using electrostatic deflection plates and a magnetic bending stage. The beam size was defined by two variable apertures placed after the magnet, both set to 2 mm and placed 0.5 m apart. The ion energy was adjusted by means of a generating voltmeter, which was calibrated on an absolute scale using the ²⁷Al(p, α)²⁴Mg and ²⁷Al(p, γ)²⁸Si reactions. The energy spread of the beam was less than 1 keV. Beam intensities of several hundred nA can be delivered by the accelerator, but only beams of less than 1 nA were used for the experiment discussed here. The beam current was measured by a Faraday cup placed in a 1 m long beam pipe downstream of the target chamber, specially designed to reduce the amount of beam back-scattered from the Faraday cup to the detector setup. Long measurements were performed at proton energies of E_p = 2.00 MeV and 2.64 MeV. At the lower energy, a total of 295 µC was directed onto the target over a period of 211 hours, which corresponds to an average current of 0.39 nA. For the higher energy setting, the corresponding numbers are 124 µC, 77 hours, and 0.45 nA.
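The quoted average currents follow directly from the integrated charge and the run time; as a quick arithmetic check for the lower-energy run:

```python
# Average beam current from integrated charge and run time
charge_C = 295e-6          # 295 uC on target
time_s = 211 * 3600.0      # 211 hours
print(charge_C / time_s)   # ~3.9e-10 A, i.e. ~0.39 nA as quoted
```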
Additionally, multiple short measurements were performed across the energy range 0.5-3.5 MeV, as reported in Ref. [8]. The target consisted of a layer of 12.6(1.2) µg/cm² isotope-enriched ¹¹B deposited on a 4 µg/cm² carbon backing [8]. The target was placed in the middle of a compact array of double-sided Si strip detectors (DSSDs), at an angle of 45° with respect to the axis defined by the beam, as shown in Fig. 1. Annular DSSDs with 24 ring strips and 32 annular strips were placed upstream and downstream of the target, and two square DSSDs with 16 horizontal strips and 16 vertical strips were placed on either side of the target, orthogonal to the beam axis. The electronics and data acquisition consisted of a VME-based system with ADCs and TDCs fed by signals from a chain of preamplifiers and amplifiers. The dead time was around 10% at trigger rates of several kHz.

Event selection

The data are analyzed following an approach similar to that of Laursen et al. [7]. Particle energies and hit positions on the DSSDs are determined by requiring an energy difference of at most ±50 keV between front and back strips. Energy conservation cannot be used as a condition to reduce unwanted background, because we are searching for events where some of the energy is carried away by a γ ray. However, the momentum carried away by the γ ray is sufficiently small that we can require momentum conservation of the three α particles. Hence, 3α events are identified as triple-coincidence events fulfilling both a TDC cut of ±15 ns and momentum conservation, but not necessarily energy conservation. Additional kinematic cuts were applied to further suppress unwanted background due to random coincidences involving p + ¹¹B and p + p elastic scattering. Figures 2 and 3 show scatter plots of the total momentum in the centre-of-mass (c.m.) frame versus the ¹²C excitation energy calculated from the triple-coincidence events. (In Figs. 2 and 3 the x axis is the excitation energy in ¹²C and the y axis is the total momentum in the c.m. frame, both determined from the energies and positions of the three detected particles; the events enclosed by the red contour fulfill momentum conservation, but not energy conservation, and are therefore interpreted as γ-delayed 3α emissions from ¹²C.) The intense groups of events just below and just above E_x = 18 MeV in the two figures correspond to 3α decays directly from the levels 17.76, 0⁺ and 18.35, 3⁻, respectively. These events fulfill both energy and momentum conservation. The events further to the left of these intense regions are interpreted as events where some of the energy is carried away by a γ ray, and they are therefore the events of interest. Figures 4 and 5 focus specifically on those events fulfilling momentum conservation but not energy conservation. The upper panels show scatter plots of the excitation energy in ¹²C versus the individual energies of the three α particles in the ¹²C rest frame. These scatter plots reveal the different 3α breakup mechanisms of the levels in ¹²C populated in the γ decays. The diagonal lines running from the lower left to the upper right represent breakups that proceed by α decay to the ground state of ⁸Be. Owing to parity and angular momentum conservation, this decay mechanism is only allowed for natural-parity levels in ¹²C. The two α particles from the subsequent breakup of ⁸Be, detected in coincidence with the primary α particle, form a broad band running from left to right with half the slope of the upper diagonal.
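Stepping back to the event selection described above, the logic can be summarized in a few lines: triple coincidences are kept if the summed c.m. momentum is small, while energy conservation is deliberately not required, so that events with missing γ-ray energy survive. Cut values and array names are illustrative, not the actual analysis code.

```python
import numpy as np

def select_3alpha(p_cm, E_x, t, p_cut=40.0, t_cut=15.0):
    """Select gamma-delayed 3-alpha candidates.

    p_cm : (N, 3) summed c.m. momentum components of the three alphas [MeV/c]
    E_x  : (N,) excitation energy reconstructed from the three alphas [MeV]
    t    : (N, 3) TDC times of the three hits [ns]
    """
    coinc = np.ptp(t, axis=1) < t_cut             # prompt triple coincidence
    mom_ok = np.linalg.norm(p_cm, axis=1) < p_cut # momentum conservation
    # energy is NOT required to be conserved: events with E_x below the
    # beam-defined compound energy are the gamma-delayed candidates
    return coinc & mom_ok
```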
The positions of known levels in ¹²C are indicated on the scatter plots. The lower panels of Figures 4 and 5 show the projections of the scatter plots onto the excitation-energy axes, with the shaded histograms providing the projection selectively for the events on the diagonals, which fulfill the condition E_2α < 210 keV for at least one pair of α particles, E_2α being the relative kinetic energy of the pair. The coloured curves on these plots will be discussed later. From the trigger rate and the width of the TDC gate we estimate the number of random coincidences to be 6 events in Figure 4 and 9 events in Figure 5.

Cross sections

We determine the capture cross section, σ_γ, from the number of observed events in each excitation-energy bin, taking into account the triple-α detection efficiency, the target thickness, the integrated current on target, and the dead time of the data-acquisition system. The cross sections thus obtained at E_p = 2.00 MeV and 2.64 MeV are summarized in Tables 1 and 2, respectively. The detection efficiency depends on the 3α breakup mechanism and differs significantly between breakups that do and do not proceed via the ground state (g.s.) of ⁸Be.

TABLE 1: ¹¹B(p, 3α)γ cross section at E_p = 2.00 MeV. E_x is the ¹²C excitation energy inferred from the momenta of the three α particles; σ_γ is the cross section and is subject to an additional 10% systematic uncertainty from the target thickness. The events are divided into two groups: those that correspond to breakups proceeding via the ⁸Be ground state (gs) and those that do not (exc). The first energy bin (I) is not included, since all the events in this bin are attributed to p + ¹⁰B.

The green and magenta (short- and long-dashed) curves in the lower panels of Figures 4 and 5 show the detection efficiencies determined from Monte Carlo simulations. For the excited channel, phase-space (Φ) simulations were used to estimate the detection efficiency in all excitation-energy bins, except the bins containing the 11.83, 2⁻ and 12.71, 1⁺ levels, where more accurate models [10] were used. The error resulting from adopting the phase-space approximation is estimated to be at most ∼15%, which we include as an additional uncertainty on the detection efficiency for those energy bins where phase-space simulations were used. For the other bins, and for the g.s. channel where the angular distributions of Ref. [8] were used, we adopt a 5% model uncertainty. We note that the ratio of triple-coincidence events to single events predicted by the simulation for the g.s. channel is 15% below the experimental ratio. We ascribe this to inaccuracies in the representation of the beam-target-detector geometry in the simulation, and account for it by including an additional 15% uncertainty on our efficiency estimate. We find the detection efficiency to be insensitive to uncertainties in the ADC thresholds, except for the lowest excitation-energy bin (E_x < 9.2 MeV), where the ADC thresholds contribute an estimated 8% to the overall uncertainty. These uncertainty contributions are added in quadrature, and finally added linearly to the statistical counting uncertainty to obtain the overall uncertainty on the cross section in each excitation-energy bin.

Deduced γ-ray widths

The excitation functions of the γ rays to the 9.64, 3⁻ and 12.71, 1⁺ levels have been measured in considerable detail in the energy range E_p = 1.8-3.0 MeV by Hanna et al. [4] by means of conventional γ-ray spectroscopy.
Deduced γ-ray widths

The excitation functions of the γ rays to the 9.64, 3− and 12.71, 1+ levels have been measured in considerable detail in the energy range Ep = 1.8-3.0 MeV by Hanna et al. [4] by means of conventional γ-ray spectroscopy. Both excitation functions were found to be resonant, allowing the authors to attribute the γ rays to the transitions 17.76, 0+ → 12.71, 1+ and 18.35, 3− → 9.64, 3−, respectively. One drawback of the indirect experimental approach adopted in the present work, which involves detecting the three α particles rather than the γ ray, is the reduced event rate compared to conventional γ-ray spectroscopy. Therefore, excitation functions could not be obtained in a reasonable amount of time and measurements were limited to a few selected beam energies. In the absence of excitation functions to support a resonant interpretation of the measured cross sections, we rely on the findings of Hanna et al. [4] concerning the resonant character of the γ rays to the 9.64, 3− and 12.71, 1+ levels, as well as theoretical estimates of the direct-capture cross section, to justify a resonant interpretation of the new γ rays observed in this work. The theoretical estimates of the direct-capture cross section will be discussed next.

Direct capture

For the purpose of estimating the (E1) direct-capture cross section, we adopt the model of Rolfs [11], which approximates the many-nucleon problem by a two-body problem in which the projectile and target are treated as inert cores and their interaction is described by a square-well potential with the depth adjusted to reproduce the binding energy of the final state. This simple model was found to yield accurate results for the capture reaction 16O(p, γ) to the two bound levels in 17F, both of which are well described by simple single-particle configurations involving only a single orbital [11]. Here, we apply the model to capture transitions to levels in 12C which are not well described by single-particle configurations and which are also unbound with respect to decay to the 3α final state. Therefore, we do not expect the model to be very accurate and will use its predictions merely as order-of-magnitude estimates, accurate only within a factor of 2-3 or so. Estimates of the direct-capture cross section to four known levels in 12C, computed with the model of Rolfs using the parameters listed in Table 3, are shown in Fig. 6. The computed cross sections are proportional to the assumed spectroscopic factor, which is not predicted by the model itself. For the 12.71, 1+ level we take the spectroscopic factor from Ref. [12]. For the remaining levels we use the average values of the spectroscopic factors compiled in Ref. [13], noting that there is a substantial spread (∼50%) in the spectroscopic factors obtained by different authors. In all cases, we assume a single-orbital configuration, with ℓi = 1 for the 12.71, 1+ level and ℓi = 2 for the remaining levels. The channel radius was taken to be 4.38 fm.

Table 3: Parameters used for estimating the cross section for direct capture to four levels in 12C based on the model of Ref. [11]. ℓi are the orbital angular momenta in the entrance channel, ℓf is the orbital angular momentum assumed for the final state, and S is the spectroscopic factor. The spectroscopic factor of the 12.71, 1+ level was taken from Ref. [12]; for the remaining levels we use the average values of the spectroscopic factors reported in Ref. [13]. The channel radius was taken to be 4.38 fm.

Figure 6: Estimates of the cross section for p + 11B direct capture to four selected levels in 12C based on the model of Ref. [11].

The excitation functions measured by Hanna et al. [4] (at 90°) indicate that direct capture contributes at most ∼
10% to the total capture cross section to the 12.71, 1+ level at Ep = 2.00 MeV, corresponding to 1.4 µb, which is within a factor of two of the cross section predicted by the model (2.6 µb). Similarly, the direct-capture contribution to the cross section to the 9.64, 3− level can be estimated to be at most ∼15% of the total capture cross section at 2.64 MeV, corresponding to 0.4 µb, a factor of four below the model prediction (1.6 µb). Thus, we conclude that our rather crude model provides reasonable estimates of the direct-capture cross section, with a tendency to overestimate the actual cross section by a factor of two to four. Comparing the predicted direct-capture cross sections (Fig. 6) to the measured total capture cross sections (Tables 1 and 2), we conclude that resonant capture is likely to be the dominant mechanism in most energy bins, but with a substantial contribution from direct capture.

Resonant capture

The goal of the analysis is to calculate the partial γ widths of the levels in 12C mediating the observed (resonant) capture transitions. For this we use the resonant cross section formula,

σγ,R(E) = (π/k²) ω ΓpΓγ / [(E − ER)² + Γ²/4],   (1)

where ω = (1/8)(2J + 1) is the spin statistical factor appropriate for p + 11B, k is the wavenumber of the relative p-11B motion, and ER is the resonance energy. Using this equation, the partial γ-decay widths can be determined from the measured cross sections, provided the partial proton decay widths (Γp) and the total widths (Γ) are known. In Table 4, we list known levels in the excitation region Ex = 16.5-18.5 MeV which can mediate resonant captures to lower-lying levels at the beam energies investigated in this work. The levels and their properties are obtained from the most recent TUNL compilation [13] with a few exceptions, as discussed below. Fig. 7 gives a schematic representation of the levels listed in Table 4. The quantity y, shown on the abscissa, is calculated from the expression

y(E) = (π/k²) ω Γ / [(E − ER)² + Γ²/4] × Pℓ(E)/Pℓ(ER),

where the resonance shape is approximated as a Breit-Wigner distribution multiplied by the penetrability, Pℓ, for the lowest possible relative orbital angular momentum. We note that on resonance, σγ,R = yΓγΓp/Γ.
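A minimal sketch of how Eq. (1) can be evaluated and inverted for the partial γ width. The non-relativistic c.m. wavenumber and the physical constants are standard; the energy dependence of the partial widths and the penetrability correction entering y(E) are neglected here, so this represents only the bare Breit-Wigner form.

```python
import numpy as np

HBARC = 197.327                        # MeV fm
AMU = 931.494                          # MeV/c^2
M_P, M_B11 = 1.007825 * AMU, 11.009305 * AMU
MU = M_P * M_B11 / (M_P + M_B11)       # reduced mass of p + 11B

def k_cm(E_cm):
    """Non-relativistic c.m. wavenumber (fm^-1) at c.m. energy E_cm (MeV)."""
    return np.sqrt(2.0 * MU * E_cm) / HBARC

def sigma_gamma_R(E_cm, E_R, J, Gp, Gg, Gtot):
    """Eq. (1): Breit-Wigner resonant capture cross section in fm^2
    (1 fm^2 = 10 mb); all widths in MeV."""
    omega = (2 * J + 1) / 8.0
    return (np.pi / k_cm(E_cm) ** 2) * omega * Gp * Gg / (
        (E_cm - E_R) ** 2 + Gtot ** 2 / 4.0)

def gamma_width(sigma_meas, E_cm, E_R, J, Gp, Gtot):
    """Partial gamma width implied by a measured cross section (fm^2),
    obtained by inverting Eq. (1)."""
    return sigma_meas / sigma_gamma_R(E_cm, E_R, J, Gp, 1.0, Gtot)
```

Setting `Gp = Gtot` in `gamma_width` reproduces the lower-limit treatment used below for levels whose proton width is unknown.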
The energies (Êx) and total widths (Γ) of the levels listed in Table 4 are generally well constrained, whereas proton widths (Γp) are either missing or quoted without uncertainties. Proton widths have typically been determined by subtracting the α widths (Γα0, Γα1) from the total width. In particular, Γα1 has been poorly constrained in previous experiments due to the complex 3α correlations in this channel [2], and therefore the proton widths should be used with some caution. Also, the possibility should not be discounted that the excitation region 16.5-18.5 MeV contains broad T = 0 levels with large α widths (Γα > 1 MeV) which have not been clearly resolved in previous studies. We proceed by briefly reviewing the available data for each of the levels in Table 4. Unless otherwise stated, the data are taken directly from the most recent TUNL compilation [13].

16.62, 2−. The properties of this level are well established, although the precision of Γp is unclear. The level is clearly observed in (p, p), (p, α1), and (p, γ1), as established already in the 1950s and 1960s, e.g., Refs. [14,2]. There is also compelling evidence for smaller γ branches to the ground state and the 12.71, 1+ and 15.11, 1+ levels [15], but since the excitation functions were not measured the evidence is not conclusive.

17.76, 0+. The level is seen very clearly in (p, p), (p, α0), and (p, γ12.71). The total width has been determined rather accurately by Hanna et al. and the proton width appears reliable. The level energy of 17.768 MeV was determined from the centroid of the resonance peak in the (p, α0) spectrum of Ref. [8].

18.13, 1+. Evidence for the existence of this level comes from a single study of (p, γ15.11) [16]. There are no constraints on the proton width, and the spin-parity and isospin assignments are not conclusive.

18.16, 2−. Evidence for the existence of this level also comes from a single study, in this case of (p, d) [17]. There are no constraints on the proton width, and the spin-parity and isospin assignments are not conclusive. It was suggested in Ref. [17] that the 18.16, 2− and 18.13, 1+ levels might be one and the same level. Indeed, a spin-parity assignment of 2− appears compatible with the data of Ref. [16]. In the TUNL compilation [13], the two levels are assumed to be one and the same, with the 1+ spin-parity assignment of Ref. [16] preferred, while the level energy and width are taken from Ref. [17]. However, the very different widths reported in the two studies contradict a single-level interpretation. Therefore, we assume the resonances reported in Refs. [16,17] to correspond to distinct levels.

18.35, 3− & 2−, 2+. A multitude of experimental probes provide evidence for the existence of at least two, if not three, levels at 18.35 MeV, cf. the discussion in Ref. [18]. One of these levels, which is observed both in the spectra of (p, α0) and (p, α1) and in the excitation curves of (p, γ0), (p, γ1), and (p, γ9.64), has been firmly assigned as 3− with isospin T = 1, with additional evidence to support this assignment coming from (e, e′) and 11B(d, nα0) data [18]. On the other hand, (p, p′) and (π, π′) data provide substantial evidence for the presence of an isospin-mixed 2− level at 18.35 MeV with a width similar to that of the 3− level, while (α, α′) data suggest a 2+ level at this energy with isospin T = 0 [19]. γ rays to the 12.71, 1+ and 15.11, 1+ levels have also been observed at this energy [15], but in the absence of yield-curve measurements they cannot be attributed to the 18.35-MeV level(s) with certainty. Given the complicated situation, with two or possibly three overlapping levels, the widths quoted in Table 4 should be used with some caution.

18.39, 0−. The level has only been observed in (p, p′). Its spin-parity assignment appears firm, although it is based solely on cross-section arguments [2], while the isospin remains unknown. The total width and proton width both appear reliable.

Partial γ widths

In the following, we provide a resonant interpretation of the observed capture cross sections that ignores the subdominant direct-capture component, i.e., σγ ≈ σγ,R. With this approximation, partial γ-decay widths can be deduced directly from Eq. (1). For those levels where the proton width is unknown, we adopt Γp = Γ. This effectively renders the γ-ray widths deduced for these levels lower limits. For the purpose of estimating off-resonance contributions, we adopt the resonance shapes shown in Fig. 7, taking into account the energy dependence of the γ-ray transition rate. We discuss the energy bins I-VII separately, starting with the lowest-energy bin. The deduced γ-ray widths are summarized in Table 5.

I). The yield in the lowest-energy bin is attributed entirely to p + 10B → 2α + 3He, as confirmed by separate measurements performed on an isotope-enriched 10B target.

II). At Ep = 2.64 MeV, the 9.64, 3− level is observed very clearly in the 8Be gs channel.
The inferred cross section is somewhat smaller than that of Hanna et al. [4], but consistent within uncertainties. The cross section may be accounted for by isovector M1 transitions from the negative-parity levels at 18.35 MeV. An isoscalar E1 transition from the 2+ level cannot by itself account for the full cross section, as this would require a strength of 0.0055(15) W.u., exceeding the upper limit of 0.002 W.u. recommended for such transitions [20]. However, it was noted by Hanna et al. that the angular distribution of the γ ray to the 9.64, 3− level is suggestive of mixing between two opposite-parity levels, which provides some evidence for a sub-dominant contribution from the 2+ level. The two events observed in the 8Be exc channel may be attributed to α decay of the 9.64, 3− level via the ghost of the 8Be ground state, which has been estimated to account for 2% of the α-decay intensity [21]. In Table 5, we give the widths required for each of the two candidate transitions to produce the full observed cross section. The width of 4.7(12) eV obtained for the 18.35, 3− → 9.64, 3− transition agrees within uncertainties with the less precise width of 5.7(23) eV reported by Hanna et al. [4]. Another estimate of this width can be obtained by combining Γγ1 = 3.2(10) eV from Ref. [2] with the intensity ratio Iγ9.64/Iγ1 = 0.68 from Ref. [15], measured at θ = 55°. This yields ∼2.2 eV, in reasonable agreement with our value and that of Hanna et al. Finally, we note that the cross section at Ep = 2.00 MeV is consistent with feeding of the 9.64, 3− level via the low-energy tails of the 18.35-MeV levels.

III). Feeding to the 10.84, 1− level is observed both at Ep = 2.00 MeV and 2.64 MeV. At the lower proton energy, where the level is seen very clearly, the cross section is most readily accounted for by an isovector E1 transition from the 17.76, 0+ level with a strength of 0.0128(25) W.u., which is typical for such transitions in light nuclei [20]. An isovector M1 transition from the broad 17.23, 1− level is also a possibility, although the short measurements performed at Ep ∼ 1.4 MeV and 2.37 MeV indicate that such a transition could not be the dominant contribution at Ep = 2.00 MeV. Had this been the case, we would expect to observe 3.0-3.5 events at Ep ∼ 1.4 MeV whereas only one event was observed (1.8σ discrepancy [22]), and 2.5-3.0 events at Ep = 2.37 MeV whereas only one event was observed (1.3σ discrepancy). (We note that there is a slight mismatch between the energies of the two observed events, Ex = 11.15 MeV and 11.25 MeV, respectively, and the energy of the 10.84, 1− level, leading to some uncertainty in their interpretation.) The feeding observed in the 8Be exc channel may be attributed to α decay of the 10.84, 1− level via the ghost of the 8Be ground state, which has been estimated to account for 8% of the α-decay intensity [21]. At Ep = 2.64 MeV, where the feeding of the 10.84, 1− level is less pronounced, the cross section is consistent with isovector M1 transitions from the 18.35, 2− level or the 18.39, 0− level, while an isoscalar E1 transition from the 18.35, 2+ level is ruled out because the required strength exceeds the recommended upper limit for such transitions [20].
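The transition strengths quoted throughout this discussion are in Weisskopf units (W.u.). A small sketch of the conversion, using the textbook single-particle (Weisskopf) estimates; the paper may adopt slightly different conventions, so treat the formulas below as assumptions.

```python
def weisskopf_width_eV(multipole, E_gamma_MeV, A=12):
    """Standard Weisskopf single-particle width estimates (eV) for mass
    number A, with the gamma-ray energy in MeV."""
    if multipole == "E1":
        return 6.75e-2 * A ** (2.0 / 3.0) * E_gamma_MeV ** 3
    if multipole == "M1":
        return 2.07e-2 * E_gamma_MeV ** 3
    raise ValueError(multipole)

def strength_wu(gamma_eV, multipole, E_gamma_MeV, A=12):
    """Transition strength in Weisskopf units."""
    return gamma_eV / weisskopf_width_eV(multipole, E_gamma_MeV, A)

# Illustration only: the 4.7 eV width deduced above for the
# 18.35 -> 9.64 MeV transition (E_gamma ~ 8.71 MeV) as an M1 strength.
print(strength_wu(4.7, "M1", 8.71))   # ~0.34 W.u.
```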
Finally, we note that in the short measurement performed at Ep ∼ 0.65 MeV, a single event was detected in the 8Be gs channel. This event had an energy consistent with that of the 10.84, 1− level and could be accounted for by an isovector M1 transition from the 16.62, 2− level with a strength of 0.070-1.00 W.u., which is typical for transitions of this kind in light nuclei [20].

IV). At Ep = 2.00 MeV, a peak occurs in the cross section at Ex ∼ 11.8 MeV in both the 8Be gs and 8Be exc channels. While the 11.83, 2− level provides a natural explanation for the peak in the 8Be exc channel, this level cannot account for the peak in the 8Be gs channel, which requires a level of natural parity. At Ep = 2.64 MeV, strength is observed in both channels, but there is no clear indication of a peak at 11.8 MeV, suggesting that the 11.83, 2− level only makes a minor contribution to the cross section at this proton energy. The feeding to the 11.83, 2− level at Ep = 2.00 MeV is most naturally accounted for by an isovector M1 transition from the 17.23, 1− level with a strength of 1.4(4) W.u. An E1 transition from the 18.13, 1+ level could also contribute, but cannot account for the entire feeding; if it did, we would expect to observe 43(7) events at Ep = 2.64 MeV, whereas only 19 events were observed in the 8Be exc channel (3.0σ discrepancy). An isovector M1 transition from the 16.62, 2− level provides yet another potential feeding mechanism, inconsistent only at the level of 2.2σ with the low-statistics data collected at Ep = 0.65 MeV, but it requires a rather large strength of 5.4 W.u. to account for the entire cross section. We now turn to the observation of a peak-like structure at Ex ∼ 11.8 MeV in the 8Be gs channel at Ep = 2.00 MeV, which is intriguing since no narrow levels with natural parity are known to exist at this energy in 12C. Unfortunately, the data provide few constraints on the quantum numbers of the level, only ruling out spins J ≥ 4: feeding of a 0+ level can be accounted for by an M1 transition from the 17.23, 1− level; feeding of a 1− level by M1 transitions from the 16.62, 2− and 17.23, 1− levels or an E1 transition from the 17.76, 0+ level; feeding of a 2+ level by E1 transitions from the 16.62, 2− and 17.23, 1− levels; and feeding of a 3− level by an M1 transition from the 16.62, 2− level. In all cases, the required strengths are within expectations for light nuclei [20] and consistent with the cross sections measured at the other beam energies. The cross section measured at Ep = 2.64 MeV also provides limited insight into the properties of the final level: any of the spin-parities 1−, 2+, 3−, and 4+ can be accounted for by more than one transition. Only a 0+ assignment seems improbable, as it requires an isoscalar E2 transition from the 18.35, 2+ level with a rather large strength of 18(6) W.u.
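The σ discrepancies quoted above compare predicted and observed event counts. One plausible convention (the paper follows the prescription of Ref. [22], which may differ) adds the uncertainty of the prediction and the Poisson variance of the observation in quadrature:

```python
import math

def deficit_significance(n_obs, mu, mu_err=0.0):
    """Gaussian-equivalent significance of observing fewer counts than
    predicted: prediction uncertainty and Poisson variance of the
    observation combined in quadrature.  Only an assumed convention."""
    return (mu - n_obs) / math.sqrt(mu_err ** 2 + n_obs)

# The bin-IV case: 43(7) events expected, 19 observed.
print(deficit_significance(19, 43.0, 7.0))   # ~2.9, close to the quoted 3.0 sigma
```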
V). Feeding to the 12.71, 1+ level is observed very clearly at Ep = 2.00 MeV, and also, albeit less clearly, at Ep = 2.64 MeV. Some cross section is also observed in the 8Be gs channel, which cannot be accounted for by the 12.71, 1+ level. The cross section obtained at the lower proton energy is about two times larger than that of Hanna et al. Even considering the substantial uncertainty on the value of Hanna et al., the discrepancy is significant. However, we note that Hanna et al. relied on the γ1 yield reported by Segel et al. [2] for normalizing their data, and this yield disagrees with other measurements by up to 50%, as discussed in Ref. [2]. Also, the (p, α0) cross section reported by Segel et al. has recently been found to be underestimated by a factor of 1.50 (+0.15/−0.11) [8]. Taken together, these observations cast doubt on the accuracy of the normalization of the measurements of Hanna et al., indicating a potential ∼50% underestimation. As already noted by Hanna et al., the feeding to the 12.71, 1+ level at Ep = 2.00 MeV can be accounted for by a rather strong isovector M1 transition from the 17.76, 0+ level; indeed, the feeding cannot be accounted for in any other way. Adopting our larger cross section, the required strength is 4.4(9)(+2.2/−1.1) W.u., making the transition one of the strongest of its kind [20]. The feeding observed at Ep = 2.64 MeV cannot be accounted for by the high-energy tail of the 17.76, 0+ level, but requires an isovector E1 transition from either the 18.35, 2− or the 18.39, 0− level. While the cross section observed in the 8Be gs channel is relatively small, it is of substantial interest, since no natural-parity levels are known at Ex ∼ 12.7 MeV. There is, however, evidence for a broad (Γ = 1.7 MeV) level at Ex = 13.3 MeV with spin-parity 4+, the low-energy tail of which could potentially account for the observed cross section. This possibility is explored further below.

VI). The excitation region Ex = 13-14.5 MeV is known to contain a 4− level at 13.32 MeV, which decays entirely via the 8Be exc channel, and a 4+ level at 14.08 MeV, which decays predominantly (78%) via the 8Be exc channel. Recently, evidence has been found for a very broad (Γ = 1.7 MeV) 4+ level at 13.3 MeV. We observe relatively little feeding into this region, consistent with the expected inhibition of γ transitions that require large changes in spin. We note that the factor of ∼4 enhancement of the cross section in the 8Be exc channel compared to the 8Be gs channel appears consistent with the known decay properties of the known levels, especially if the broad 13.3-MeV level is assumed to have a substantial decay component to the 8Be ground state. Only isovector E1/M1 transitions from the 18.35, 3− level can account for the feeding to the 4± levels. However, this mechanism should produce a factor of ∼15 enhancement of the cross section at Ep = 2.64 MeV relative to 2.00 MeV, which is not observed. The discrepancy could potentially be reduced somewhat if the asymmetric shape of the 18.35-MeV level were taken into account, but it seems unlikely that this can fully explain the discrepancy. This suggests two possibilities: some of the cross section observed at the lower proton energy is to be attributed to (i) feeding to an unknown natural-parity level with Ex ∼ 13-14 MeV and J ≤ 2, or (ii) feeding from an unknown level with Ex ∼ 17-18 MeV and J ≥ 2.

VII). At both Ep = 2.00 MeV and 2.64 MeV, we observe substantial feeding to the excitation region above 14.5 MeV, especially in the 8Be exc channel. It seems natural to ascribe the majority of this cross section to the broad level at 15.44 MeV, tentatively assigned as 2+ although 0+ has also been proposed, but the feeding to this level is problematic: adopting the 2+ assignment, the cross section observed at Ep = 2.00 MeV can only be accounted for by an isovector E1 transition from the 17.23, 1− level, but we dismiss this possibility because the required strength of 2.5(8) W.u. exceeds the recommended upper limit of 0.5 W.u. [20] by a factor of five. (Transitions from the levels above 18 MeV can be dismissed because they overpredict the cross section at Ep = 2.64 MeV.)
Adopting instead the 0+ assignment, the conclusion is the same: no transition from any of the known levels can account for the observed feeding while conforming to the recommended upper limits of Ref. [20]. At Ep = 2.64 MeV, the feeding can be accounted for by a rather strong M1 transition from the 18.35, 2− level with a strength of (at least) 0.29(9) W.u., but only if the 2+ assignment is adopted for the 15.44-MeV level.

Summary and conclusions

We summarize our findings as follows. The 11B(p, 3α)γ cross sections measured at Ep = 2.00 MeV and 2.64 MeV give clear evidence of feeding to the four known levels 9.64, 3−, 10.84, 1−, 11.83, 2−, and 12.71, 1+, but by themselves these levels cannot fully account for the observed cross sections. In particular, we find evidence for feeding to a natural-parity level near Ex ∼ 11.8 MeV. Evidence for natural-parity strength in this region was also found in a previous study of the γ de-excitations of the 16.11, 2+ level [7] and in studies of the β decays of 12B and 12N [23]. The feeding to the 9.64, 3−, 10.84, 1−, 11.83, 2−, and 12.71, 1+ levels can be explained in terms of isovector M1 and E1 transitions from the known levels above the p + 11B threshold. The transitions proposed to account for the feeding to the 10.84, 1− and 12.71, 1+ levels at Ep = 2.64 MeV are of some interest, as they provide evidence for significant T = 1 admixture in the 18.35, 2− level and/or the 18.39, 0− level. It is also worth noting that the larger and more precise width obtained for the 17.76, 0+ → 12.71, 1+ transition makes this one of the strongest M1 transitions in any nucleus [20]. Higher-statistics measurements at Ep = 0.65 MeV and 1.4 MeV would be highly desirable to confirm the tentative observation of M1 transitions from the 16.62, 2− and 17.23, 1− levels, both feeding into the 10.84, 1− level. Such measurements would also yield improved constraints on the spin-parity of the natural-parity level observed at Ex ∼ 11.8 MeV. For these studies, it could prove advantageous to adopt a detector geometry similar to that of Ref. [7], which allows significantly larger beam currents at the cost of a substantial reduction in the detection efficiency in the 8Be exc channel. The interpretation of the feeding observed in the excitation region above 13 MeV remains unclear, especially at Ep = 2.00 MeV, where the measured cross section could not be explained in terms of transitions between known levels. Here, too, additional measurements would be desirable. An analysis of new complete-kinematics data on the 11B(p, 3α) reaction, currently in progress, will provide an improved understanding of the α1 channel. This, together with a multi-channel R-matrix analysis that includes recent data on (p, p) and (p, α0) as well as existing data on other channels, should lead to an improved understanding of the excitation region Ex ∼ 16-18 MeV, and may require revision of some of the conclusions drawn from the present study. While theoretical estimates suggest the resonant capture component to be dominant, the direct capture component is not negligible and could in some instances make a substantial contribution to the observed cross section. Such direct contributions were not considered in the derivation of the partial γ-ray widths given in Table 5. Improved theoretical calculations of the direct component would be of significant interest. Theoretical calculations of the radiative widths deduced in this work would also be of interest.
Finally, we remark that the 2+ → 0+ and 4+ → 2+ transitions in 8Be contribute only at the sub-nb level to the cross sections measured in this work, and hence can be safely ignored.
An Empirical Study on the Distinctive Teaching Mode and Practice of International Business Innovation Class in GDUFS

Accommodating to the development of globalization, China witnesses a mounting demand for international business talents who are proficient in both foreign languages and business knowledge and adept at international cooperation and competition in business contexts. In order to meet this need, Guangdong University of Foreign Studies (GDUFS), taking advantage of its resources in foreign languages and its outstanding professional teachers, made a breakthrough in multidisciplinary teaching reform and took the initiative to set up the program of the International Business Innovation Class in autumn 2010. In July 2014, it delivered its first batch of graduates. Hence, it is of great significance to conduct timely investigations of the teaching mode and practice so as to obtain data for assessing its teaching effectiveness and students' satisfaction. Based on an empirical study, this paper evaluates three factors: learners' recognition of the program, its features, and its problems. The aim is to identify existing problems in practical teaching, improve teaching quality and students' satisfaction, and propose feasible suggestions to address them. In a broader context, it attempts to provide a quotable paradigm for similar innovation programs or for similar institutions' internationalization.

Introduction

Economic globalization not only propels human resources, capital, goods, services and information to achieve cross-border flows, but also promotes the internationalization of higher education, whose ultimate goal is to internationalize talent training. Training a large number of internationally minded, creative talents is a requirement of developing society and the knowledge-based economy and of seizing the historic opportunity (Wang & Zhu, 2004). Accommodating to the development of globalization and the transformation of regional economic development in China, there is an unabated and increasing demand for versatile international business talents with a good command of foreign languages, professional knowledge of business and trade, and skills of competition and cooperation. To meet this demand, many colleges and universities have carried out exploration and practice in reforming the business talent training mode. As southern China's largest gathering place of foreign-language talents, Guangdong University of Foreign Studies, taking full advantage of its strengths in foreign languages and its outstanding professional teachers, has made a material breakthrough in interdisciplinary education reform. In practical teaching, GDUFS took the initiative to establish an international business innovation class in the autumn of 2010 so as to achieve in-depth integration of the international business major and the business English major. Students of this program are selected from the freshmen through a second round of examinations at the beginning of the first semester. The program has been favored by the majority of students and parents. In July 2014, it delivered its first batch of graduates. Therefore, it is of great practical significance to conduct studies on the teaching mode and practice of this reform at this critical moment so as to assess its strengths and weaknesses.
Talent training is realized in the process of teaching, while the quality of education is embodied in the curriculum structure and teaching content. Cultivating international business professionals cannot be achieved without the support and implementation of an international teaching curriculum, teaching management, teaching team, teaching methods, teaching practice and teaching concepts. Hence, based on an empirical study, this paper focuses on evaluating and analyzing learners' satisfaction with, and the features and problems of, the distinctive teaching mode and practice of the international business innovation class in GDUFS, so as to identify existing problems in practical teaching, improve teaching quality and students' satisfaction, and propose innovative and feasible suggestions to address them. In a broader context, it attempts to provide a quotable paradigm for similar innovation programs or for similar institutions' internationalization.

To realize the research purposes, three research questions were formulated as follows:

Research Question 1: To what extent are students from the international business innovation class in GDUFS satisfied with the effectiveness of the current teaching mode and practice?

Research Question 2: What do they think are the features and problems of this program?

Research Question 3: What solutions should be taken to solve the problems identified in the program?

Definition of Teaching Mode

Currently, scholars still hold controversial views on the definition of teaching mode, but most of them cite the following definition: a teaching mode is a relatively stable, systematic and theoretical teaching paradigm that is guided by certain teaching ideas and organized around a certain topic of teaching activities. A teaching mode includes the following components: teaching ideas, teaching objectives, teaching methods, teaching process, teaching environment, teaching criteria and teaching evaluation (Wang, 2008). Meanwhile, the practice of a teaching mode requires the support of a teaching team.

To be specific, the international business innovation class in GDUFS is guided by the following teaching idea and objective: an emphasis on the integration of professional English proficiency with skillful competence in business communication and cooperation. With regard to teaching methods, it implements the form of "English immersion". In terms of the teaching process, it stimulates both students' self-regulated learning and their research capability, which contributes to transforming students into independent business talents with critical thinking. Besides, in order to provide students with international business experience, the program offers them plenty of internship opportunities. Moreover, the program also features need-based credit management, an effective evaluation system and double bachelor's degrees. In brief, all these components construct the teaching mode and practice of the international business innovation class in GDUFS and are also included as variables in the investigation.
Relevant Researches

With the accelerating pace of the internationalization of higher education, the competition among colleges and universities is increasingly fierce. They are pushed to explore and reform existing teaching modes so as to keep pace with the internationalization of higher education. In such a context, international teaching modes have also been given great attention in academia. Existing research from home and abroad provides supporting evidence for the background and necessity of the internationalization of higher education, while some scholars focus on the innovation of teaching modes, since the quality of education is embodied in the teaching mode. Johnnchan (2015) mentioned that higher education systems around the world were undergoing fundamental change and reform due to external pressures, including the internationalization of higher education. Craft, Hall, and Costello (2014) suggested that teachers' passion for their courses is a significant engine driving the innovation of teaching modes. Deng (2006) conducted a qualitative study on teaching modes at home and abroad and held the view that it was essential to internationalize teaching objectives, teaching content and teaching methods and to cultivate innovative, internationalized talents with high-tech knowledge and skills, since such practice contributes to China's accommodation to the tendency of higher education's internationalization. Xiao and Wang (2007) analyzed the advantages and problems of current English teaching modes applied by colleges and universities at home and abroad.

With the rapid development of business English programs, more and more scholars focus on the study of their teaching modes. Hu (2002) divided international business English teaching modes into three types: "English + International Business", "International Business + English" and "International Business ∩ English". Zhu (2005) conducted an empirical study on the English immersion teaching model in the business English major and put forward some feasible suggestions to improve this model in practice. Cao (2008) discussed the distinctive features of business English teaching and found drawbacks in the traditional teaching model; on this basis, she proposed a new model of teaching business English, the scenario-simulation teaching model. Dong and Liu (2015) emphasized the importance of cross-cultural communication in the internationalization of the business English teaching mode.

Besides, there is research focusing on international business programs. Guo (2008), taking an international business program as an example, summarized and evaluated the English immersion teaching model in practice. Tang, Huang and Cai (2012) explored the internationalization of curriculum construction in "international business (bilingual)" in terms of teaching content, teaching resources, teaching approaches and methods, teaching team and teaching research. However, no research on the teaching mode of GDUFS's international business innovation class can be found. The teaching mode of this program is similar to the "International Business ∩ English" model but distinguishes itself from both the business English major and the traditional international business major. To be specific, international business courses are given in the form of English immersion, which makes it possible for students to acquire a good command of both business knowledge and English proficiency. If the teaching plan is reasonably arranged, students can be awarded double bachelor's degrees: a Bachelor of Arts and a Bachelor of Management.
Based on the literature review, it can be found that there is a paucity of academic research in the following two aspects: 1) studies on the internationalization of teaching modes in higher education are mainly conducted by qualitative methods and lack the empirical investigation that can provide a paradigm for the establishment, assessment and modification of internationalized teaching modes; 2) although there is already plenty of research on the teaching modes of the international business major and the business English major, there is still a gap to be filled in research directly related to the theory and practice of the teaching mode of the "international business innovation class", since such a program distinguishes itself from the traditional international business major and the business English major. Therefore, this study takes the international business innovation class in GDUFS as an example to conduct a quantitative study evaluating the effectiveness of its teaching mode in practical teaching in terms of the following six dimensions: international curriculum construction, teaching team, teaching methods, teaching management, teaching practice and teaching mode.

Research Paradigm

According to the different stances behind ontology, epistemology and methodology, the paradigms of social research can be divided into four categories: positivism, post-positivism, critical theory and constructivism. Each has its own advantages and disadvantages. According to its research purpose, this paper adopts positivism.

Sampling

The subjects are students of the "international business innovation class" in GDUFS. One hundred and eleven subjects completed the questionnaire. The effective rate of the questionnaires was 74%. Among them, one subject volunteered to participate in an in-depth interview. Although only 111 subjects were included, the results are representative, since there are only about 224 students in the international business innovation class, with 56 students in each grade, which means that this study sampled approximately 50% of the population. After data collection, the demographic information of the samples is as follows. As can be seen from Table 1, 83.8% (n = 93) of the participants were female while 16.2% (n = 18) were male. The subjects were freshmen (39.6%), sophomores (34.2%) and juniors (26.1%); none were seniors. Furthermore, the ratio of male to female subjects is 6:31, which is related to the fact that GDUFS belongs to the category of foreign language universities.

Data Analysis

Research materials were obtained by questionnaire investigation, structured interview and literature analysis. SPSS 20.0 statistical software was used for data analysis. Frequency analysis, cross-table analysis and chi-square tests were conducted so as to identify and interpret the problems of this program. Besides, in-depth interviews were also implemented to address the existing problems.
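As a transparent stand-in for the SPSS workflow described above, the chi-square test of independence can be reproduced with scipy; the contingency table below is invented purely for illustration (the real crosstabs are given in Tables 2 and 3).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical gender x feature contingency table (counts invented for
# illustration only).  Rows: female, male; columns: three features.
observed = np.array([
    [61, 52, 41],
    [11,  4,  7],
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"Pearson chi-square = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```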
Results and Discussion

Based on the results of SPSS 20.0, this section analyzes and evaluates students' satisfaction with the teaching model and practice of this program. Besides, referring to the answers to the open-ended questions and the in-depth interview, it discusses the distinctive features of the teaching model. Finally, it reveals the existing problems of this program and proposes feasible suggestions. The following tables and figures present the results of the frequency analysis, cross-table analysis and chi-square tests.

Frequency Analysis of Each Variable

In this section, frequency analysis is conducted to present the subjects' satisfaction with the distinctive teaching mode of the international business innovation class in GDUFS (see Figure 1) and the corresponding features (see Figure 2) and problems (see Figure 3) of this mode.

Figure 1. Subjects' satisfaction with the teaching mode

As is illustrated in Figure 1, 69.4% of subjects are satisfied with the current teaching model and practice while 30.6% of them hold the opposite view, which indicates that the teaching model and practice are generally recognized by the students of this program but some issues still need to be improved. The questionnaires and the interview reveal that the factor contributing most to satisfaction is the outstanding and excellent teaching staff. When asked the question "What aspects of practical teaching make you most satisfied?", the interviewee answered as illustrated in Extract 1.

Extract 1: I think that I am lucky to be in this program. Here I make friends who share common interest in international business. What impressed me most, I would like to say, is my exceptional teachers. Most of them are both proficient in English and business knowledge. Moreover, they are good at lead us to think from a global perspective. Anyway, I think my teachers help a lot in my learning process.

Other factors also play an important role in influencing subjects' satisfaction, including the "English immersion teaching model", "favorable university policies for this program", "relatively flexible credit and length-of-education management system (for example, optional courses can be based on students' interest rather than compulsory categories)", "introduction of textbooks in the original version", and "internationalized courses and distinctive courses (such as golf courses, lectures about red wine and so on)". In addition, the reasons why subjects are dissatisfied with the teaching model lie in the following aspects: no essential differences between this program and other programs apart from more stressful courses and an overwhelming study burden; lack of internship opportunities; and lack of guidance from supervisors. However, supervisors play a significant role in a program whose objective is to cultivate intellectual elites. Undergraduate mentorship helps to embody the quality of elite education and is an important model and system for cultivating versatile, personalized and innovative talents (Wu, 2010). Hence, to promote the effectiveness of this program, a harmonious mentorship can be a quite important factor.

Figure 2.
Frequency descriptions of the distinctive features and advantages of this teaching model

As is illustrated in Figure 2, the distinctive features and advantages are ranked in order of prominence: teaching in the form of English immersion (64.9%), delivering double bachelor's degrees (51.4%), outstanding teaching staff (41.4%), a curriculum system focusing on both professional knowledge and language (31.5%), internationalization (27.9%), emphasis on cultivating both autonomy and research capability (26.1%), reasonable internationalized courses (12.6%), internationalized internship opportunities (12.6%), and students' need-based credit management and teaching evaluation system (5.4%). It can be found that teaching in the form of English immersion is this program's most distinctive feature, which conforms to the university-running idea of fostering business talents with a global view and intercultural communication skills. Besides, delivering double bachelor's degrees is another important feature of this teaching model. For one thing, it is one of the driving forces that attract freshmen; for another, it makes possible the implementation of the "International Business ∩ English" model. In the meanwhile, the outstanding teaching staff also provides support for fostering innovative business professionals with global views. The remaining five features listed in Figure 2 are far less distinctive as advantages of this teaching model.

Moreover, an issue that should be given special attention is the puzzle of the undergraduates of grade 2012. They are concerned about whether they will be awarded double bachelor's degrees, owing to a change in national education policy. On this issue, therefore, timely and precise communication between the undergraduates and the university is needed so as to consolidate the effectiveness of education.

Figure 3. Rank of the problems in the teaching mode and practice

As is illustrated in Figure 3, the problems of the teaching mode and practice are ranked in order of seriousness: unreasonable curriculum design and overwhelming study pressure (73%), no significant differences from other majors in the same school (65.8%), imbalance between internship opportunities and students' needs (55.9%), inflexible credit management system (48.6%), ineffective teaching evaluation system (37.8%) and imbalance between teaching staff structure and teaching demand (32.4%). No significant differences from other majors in the same school, together with overwhelming study pressure and a lack of high-quality internship opportunities, are the most obvious problems that make students strongly dissatisfied. Overwhelming study pressure restricts students' individual arrangement of learning time, which puts them in a passive position and restricts their opportunities to foster innovative thinking and discernment. Moreover, they are dissatisfied with the quantity and quality of the internship opportunities offered by the university. What they desire is the opportunity to gain a deeper insight into enterprises' culture and management, rather than internships within the campus or visits to the Canton Fair.
Cross-table Analysis and Chi-Square Tests

The output from SPSS shows that there are significant differences in the perception of this teaching mode's distinctive features and advantages in terms of gender and grade. However, there is no significant difference in subjects' attitudes toward the problems of the teaching model in terms of grade (Pearson chi-square = 10.787, sig. = 0.547) or gender (Pearson chi-square = 5.209, sig. = 0.517). For reasons of space, only the variables with significant differences are reported here.

As is illustrated in Table 2, the value of the Pearson chi-square is 47.406 and the significance is 0.000, which indicates that there is a significant difference between female and male subjects' attitudes toward the features of this teaching model. To be specific, female subjects regard "teaching in the form of English immersion" (65.5%), "delivering double bachelor's degrees" (55.9%) and "outstanding staff" (44.1%) as the main features and advantages of this mode, while male subjects think that "internationalization" (66.7%), "teaching in the form of English immersion" (61.1%) and "emphasis on cultivating both autonomy and research capability" (38.9%) are the distinctive features of the teaching model. This phenomenon may be explained by female and male subjects' different learning needs. As is illustrated in Table 3, the value of the Pearson chi-square is 31.553 and the significance is 0.011, which indicates that there is a significant difference in subjects' attitudes toward the features of the teaching model in terms of grade. The views of freshmen and sophomores on the advantages of the teaching model are similar: they put "teaching in the form of English immersion", "delivering double bachelor's degrees" and "outstanding staff" in first place. Juniors, by contrast, emphasize "internationalization" rather than "outstanding staff", which may be explained by the different needs of students in higher and lower grades: lower-grade students may need more guidance from the teaching staff, while higher-grade students need more opportunities to broaden their horizons. (Note on Table 3: 8 cells (29.6%) have an expected count of less than 5; the minimum expected count is 0.99.)

Conclusions

Under the pressure of economic globalization and instability, many colleges and universities are urged to promote the internationalization of higher education. Guangdong University of Foreign Studies, taking full advantage of its strength in foreign languages and its prominent teaching staff, has established an international business innovation class that is representative in its international teaching mode. Given the existing deficiency of empirical studies on the teaching mode and practice of the international business innovation class, which is a distinctive paradigm of a university's efforts to realize the internationalization of higher education, it is of great practical significance to conduct quantitative research evaluating and analyzing learners' satisfaction and the features and problems of the distinctive teaching mode and practice of the international business innovation class in GDUFS, guided by three research questions. The study quantitatively evaluates students' satisfaction and identifies the features and problems of this teaching mode, which contributes to the improvement and modification of the teaching mode in this program of GDUFS and in similar programs at other institutions. In a broader sense, this study presents a quotable paradigm for the internationalization of higher education.
In reference to the results of the questionnaire and interview, this study provides insights into the distinctive teaching model and practice of the "international business innovation class" in at least three ways. First, students of this program are generally satisfied with its teaching model and practice (Research Question 1). Second, the most distinctive features of this program lie in "teaching in the form of English immersion", "delivering double bachelor's degrees" and "outstanding teaching staff"; moreover, there are significant differences in the perception of these distinctive features and advantages in terms of gender and grade. Third, the most obvious problems in the teaching model lie in "unreasonable curriculum design and overwhelming study pressure", "no significant differences from other majors in the same school" and "imbalance between internship opportunities and students' needs" (Research Question 2).

Finally, this paper proposes some feasible suggestions for reference (Research Question 3). 1) The differences between this program and other majors should lie not only in the number of credits but also in the structure of the curriculum and credit design. For example, the classical arts module should be optional rather than compulsory, and other professional courses should be given more autonomy. In this way, students' learning pressure can be reduced and they can be offered more time for critical thinking and innovative learning; it also helps to enhance the breadth and depth of students' knowledge, which is important for realizing innovation education. 2) The university should improve the quality of internship opportunities that allow students to put their theories into practice; the embedded learning system is far from enough. 3) The proportion of teaching staff with both good language skills and professional knowledge should be increased by helping existing language teachers transform into business English teachers, which can be realized through teaching development. 4) As for the English immersion model, its application should be based on students' needs and capabilities and on the difficulty of the courses. For example, when the terminology of a course is too difficult, some Chinese can be used in class for effective teaching. 5) The systems of supervisor management and teaching evaluation should be improved so as to address existing problems in a timely manner. A harmonious mentorship enhances the interaction between students and supervisors, which is a good way for teachers to understand students' specific needs by gender and grade, while a scientific evaluation system helps to monitor the effectiveness of the teaching mode. Although the suggestions above are all based on GDUFS's experience, they can provide some enlightenment for similar innovation classes in the university and for other institutions with similar programs.

However, the research still has some limitations. First, samples from seniors or graduates of this program were not included in the investigation; including them might help to link the effectiveness of the internationalization of the teaching mode with the market demand for international business talents, and later studies may include these two groups. Second, all the findings and suggestions are limited to the teaching mode of the international business innovation class in GDUFS. Hence, for further study, comparative studies on teaching modes in similar programs in GDUFS or other universities could be conducted.

Table 1. Demographic data of the samples

Table 2. Features of the teaching model * gender crosstabulation

Table 3. Features of the distinctive teaching model * grade crosstabulation
Establishment and validation of a model to determine the progression risk of low grade intraepithelial neoplasia

Objective To establish and validate a model to determine the progression risk of gastric low-grade intraepithelial neoplasia (LGIN). Methods A total of 705 patients with gastric LGIN seen at the endoscopy center of Jiangsu Provincial People's Hospital between January 2010 and August 2017 were retrospectively reviewed. Basic clinical and pathological information was recorded. According to the time sequence of the initial examination, the first 605 patients were enrolled in the derivation group, and the remaining 100 patients were used as the validation group. SPSS 19 software was used for statistical analysis to determine independent risk factors for the progression of gastric LGIN and to establish a risk model. The receiver operating characteristic (ROC) curve was used to verify the applicability of the predictive model. Results Univariate and multivariate analyses suggested that sex, multiple location, congestion, ulceration and form were independent risk factors for prolonged or advanced progression in patients with LGIN. Based on this, a predictive model was constructed: P = e^X/(1 + e^X), where X = −10.399 + 0.922 × Sex + 1.934 × Multiple Location + 1.382 × Congestion + 0.797 × Ulceration + 0.525 × Form. A higher P value means a higher risk of progression. The AUC of the derivation group and the validation group were 0.784 and 0.766, respectively. Conclusion Sex, multiple sites, hyperemia, ulceration and morphology are independent risk factors for prolongation or progression in patients with gastric LGIN. These factors are objective, and the data are easy to obtain. The predictive model constructed on this basis can be used in the management of patients: it can identify, among patients with LGIN, the high-risk groups that may progress to gastric cancer. Strengthening follow-up or offering endoscopic treatment for these patients can improve the detection rate of early cancer or reduce the incidence of gastric cancer, providing a reliable basis for the treatment of LGIN.

Endoscopic treatment options for gastric LGIN include high-frequency electrocoagulation, argon plasma coagulation, radiofrequency ablation, holmium laser treatment, microwave coagulation therapy, etc. The European guidelines [4] recommend that gastric LGIN found at endoscopy be resected, which also allows a more accurate pathological examination. It is worth noting that the guideline emphasizes that the disappearance of LGIN assessed by endoscopic follow-up and biopsy still does not rule out the possibility of progression to invasive cancer. According to the guidelines of the American Gastroenterological Association (AGA) and the British Society of Gastroenterology (BSG) [5,6], endoscopic resection is recommended regardless of the size of the adenoma and whether there is accompanying dysplasia. The American Society for Gastrointestinal Endoscopy (ASGE) guidelines [5] recommend that LGIN lesions still found after one year of follow-up be treated with endoscopic resection. In summary, different treatment principles are provided in these clinical guidelines because there is currently no suitable solution for the evaluation and management of LGIN worldwide. In China, the WHO/Vienna classification is recommended for the treatment of gastric LGIN, with options such as drug treatment, follow-up or endoscopic treatment. However, there are no accepted uniform criteria to clarify which clinical cases are suitable for drug treatment or follow-up and which require endoscopic treatment.
Endoscopic treatment may not be necessary for some patients with low-grade intraepithelial neoplasia. On the other hand, ignoring the risk of low-grade intraepithelial neoplasia may result in missed diagnosis or misdiagnosis. Furthermore, excessive endoscopic follow-up may increase the risk of examination and the cost of treatment for patients at a low-risk stage. Therefore, it is necessary, and of real clinical value, to find new methods that can predict and evaluate the likelihood of progression of LGIN. Here, to analyze the factors associated with the progression of low-grade intraepithelial neoplasia, we constructed an LGIN progression risk model in a retrospective study and validated the model. The model is used to predict the prognosis of patients with low-grade intraepithelial neoplasia in order to effectively identify high-risk groups that may progress to gastric cancer. By strengthening monitoring and active treatment, it can reduce the incidence of gastric cancer and avoid excessive examination and waste of medical resources.

Research object

We retrospectively reviewed 1011 patients over 18 years of age who underwent gastroscopy at the endoscopy center of Jiangsu Provincial People's Hospital between January 2010 and August 2017 and were diagnosed with low-grade intraepithelial neoplasia by pathology. The diagnostic criteria refer to the WHO pathological diagnostic criteria for tumors of the digestive system [7]. Fifty-four patients whose endoscopic and pathological follow-up records were absent for more than half a year were excluded. To observe the natural course of low-grade intraepithelial neoplasia, 126 patients who underwent endoscopic ESD or EMR and 17 patients who underwent gastric surgery were excluded. At the same time, 109 patients whose endoscopy and pathology suggested high-grade intraepithelial neoplasia or gastric cancer within six months were excluded.

Data collection

Observation indicators: basic clinical and pathological information was recorded, including name, age, gender, outpatient or hospital number, duration of disease, history of gastric surgery, endoscopic findings of the lesions (including location, size, morphology, color and phenotype), postoperative histopathological diagnosis and later progression. End points: 1. pathological examination suggesting that the low-grade intraepithelial neoplasia had improved or been upgraded; 2. pathological examination at the end point of the study suggesting prolonged intraepithelial neoplasia. The risk factors recorded for prolonged course or progression were: time course, age, gender, lesion location, multiple sites, lesion size, lesion morphology, lesion color, lesion phenotype and intestinal metaplasia on postoperative histopathological diagnosis. This study was approved by the Ethics Committee of Jiangsu Provincial People's Hospital (ethical approval number 2018-SR-276).

Statistical methods

To build and validate the risk model, we divided the study samples into two groups. According to the time sequence of the initial examination, the first 605 patients were enrolled in the derivation group, and the remaining 100 patients were used as the validation group. Statistical analysis was performed using SPSS 19 software. Continuous variables were expressed as mean ± standard deviation (SD) and compared by t test. Categorical variables were analyzed using Fisher's exact test or the chi-square test. Multivariate analysis was performed using a logistic regression model to determine independent risk factors for the progression of gastric low-grade intraepithelial neoplasia. The receiver operating characteristic (ROC) curve was used to verify the applicability of the predictive model. A P value < 0.05 was considered statistically significant.
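A sketch of the derivation/validation workflow described above, using scikit-learn in place of SPSS. The data here are random placeholders, and the numeric coding of the predictors is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# X holds candidate predictors (e.g. sex, multiple location, congestion,
# ulceration, form) coded numerically; y = 1 for prolonged/progressed LGIN,
# 0 for improved LGIN.  Random placeholder data for illustration only.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(705, 5)).astype(float)
y = rng.integers(0, 2, size=705)

X_train, y_train = X[:605], y[:605]   # derivation group (first 605 patients)
X_test, y_test = X[605:], y[605:]     # validation group (remaining 100)

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"validation AUC = {auc:.3f}")
```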
Multivariate analysis was performed using a logistic regression model to determine independent risk factors for progression of low-grade intraepithelial neoplasia in the stomach. The receiver operating characteristic (ROC) curve was used to verify the application value of the predictive model. A P value < 0.05 was considered statistically significant. Derived risk model A total of 605 patients with low-grade intraepithelial neoplasia in the derivation group were analyzed, with an average age of 58.5 ± 10.82 years. The clinical data are shown in Table 1. There were 102 cases without improvement, including 21 cases of advanced progression and 81 prolonged cases. The average time of progression to high-grade intraepithelial neoplasia or gastric cancer was 3.29 ± 1.92 years (longest 7.28 years, shortest 0.95 years). Univariate and multivariate analysis suggested that sex, multiple location, congestion, ulceration and form were independent risk factors for prolonged or advanced progression in patients with low-grade intraepithelial neoplasia, as shown in Table 2. Based on this, a predictive model was built: P = e^X/(1 + e^X), where X = −10.399 + 0.922 × Sex + 1.934 × Multiple Location + 1.382 × Congestion + 0.797 × Ulceration + 0.525 × Form. The higher the P value, the higher the risk. Based on this, the risk model nomogram was drawn, as shown in Fig. 2. The model was constructed based on the clinical data of 605 patients, and the area under the receiver operating characteristic (ROC) curve (AUC) was 0.784, as shown in Fig. 3. Verification of the risk model The clinical data of the remaining 100 patients were used for validation. The clinical data of the LGIN patients in the validation group are shown in Table 3. The area under the ROC curve for the validation group was 0.766. Discussion Endoscopic submucosal dissection (ESD) is recommended without exception for pathologically diagnosed HGIN in domestic and international guidelines. However, the principle of LGIN treatment has been controversial. It is worth noting that LGIN is a precancerous lesion. Although the chance of progression to cancer is low, a progression rate to HGIN of 0-23% has been reported, and the annual rate of progression from LGIN to gastric cancer is around 0.6% [9]. Another 10-year follow-up study indicated that 49.4% of LGIN cases reversed, 18.5% of patients with LGIN remained unchanged for a long time, and 32.1% of patients with LGIN progressed, of whom approximately 17.3% developed advanced gastric cancer [10]. Therefore, the risk of LGIN progressing to gastric cancer cannot be ignored. The progression of LGIN is slow; the average time for LGIN to progress to cancer is 10 months to 4 years [11,12]. Although the treatment of LGIN is inconsistent, long-term follow-up is recommended by almost all guidelines. This recommendation exacerbates the patient's financial burden and potential medical risks. Therefore, it is extremely important to establish an LGIN progression risk model. Using this model, the LGIN population with low risk of cancer (no need for regular checkups), the middle-risk LGIN population (requiring regular checkups), and the high-risk LGIN population (requiring short-term treatment) can be identified. We found that there is no significant correlation between LGIN progression and age, which is consistent with previous studies [13][14][15]. Male sex is an independent risk factor for prolongation or progression of low-grade intraepithelial neoplasia, which is consistent with the higher incidence of gastric cancer in men [16][17][18].
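For readers who want to apply the published score, the model is easy to reproduce; a minimal sketch in Python is given below. The intercept and coefficients are those reported above (Table 2); the 0/1 coding of the predictors and the function name are illustrative assumptions, not taken from the source.

import math

# Coefficients reported in the paper (Table 2). How each predictor is
# coded (e.g. Sex = 1 for male, 0 for female) is an assumption made here
# for illustration; the original SPSS coding is not given in the text.
INTERCEPT = -10.399
COEF = {"sex": 0.922, "multiple_location": 1.934,
        "congestion": 1.382, "ulceration": 0.797, "form": 0.525}

def lgin_progression_risk(sex, multiple_location, congestion, ulceration, form):
    """Return P = e^X / (1 + e^X) for one patient (logistic model)."""
    x = (INTERCEPT
         + COEF["sex"] * sex
         + COEF["multiple_location"] * multiple_location
         + COEF["congestion"] * congestion
         + COEF["ulceration"] * ulceration
         + COEF["form"] * form)
    return math.exp(x) / (1.0 + math.exp(x))

# Hypothetical patient: male, multifocal, congested, no ulcer, flat lesion
print(round(lgin_progression_risk(1, 1, 1, 0, 0), 4))  # ~0.0021

Because the intercept is strongly negative, the absolute probabilities are small; in practice the score would be used with a cutoff to stratify patients into risk groups rather than read as a literal probability.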
Interestingly, multiple-site onset as an independent risk factor for LGIN progression is contrary to previous reports. One study [19] reported that LGIN with surface hyperemia and surface ulceration may progress to high-grade intraepithelial neoplasia or early cancer. Another earlier study [20] also confirmed that the structure of the gastric mucosa changes as LGIN lesions progress; a central depression or nodular surface is associated with progression of LGIN lesions. In this study, univariate analysis found that LGIN prolongation or progression was associated with surface redness, lesion size, erosion, morphology (flat, bulging, central depression) and ulceration. Multivariate analysis, however, showed no significant correlation between the prolongation or progression of LGIN and the size of the lesion or a nodular surface; the prolongation or progression of LGIN remained associated only with factors such as a reddish surface, morphology (flat, bulging, central depression) and ulceration. This may be because cases with lesions > 2 cm or a nodular surface frequently undergo endoscopic treatment owing to inconsistent pathological diagnoses, reducing the number of such cases in our series. A large body of data shows that LGIN diagnoses from gastroscopic biopsy differ from those based on large (resection) specimens. It has been reported [21] that the rate of pathological discrepancy between endoscopic forceps biopsy and surgical resection is 20.1%; a lesion diameter over 1 cm, surface redness and a nodular surface are significant risk factors for such discrepancy. Ryu et al. [22] showed that a central depression, a nodular surface and surface redness were significantly associated with early gastric cancer (EGC) in low-grade dysplasia lesions. Therefore, the reduction in LGIN cases may be due to inconsistent diagnoses. Large size is considered a common feature of malignant tumors [23]; as the size of the lesion increases, the risk of progression of LGIN increases. Another study showed [24] that the prognosis of gastric mucosal LGIN is related to the morphology of the endoscopic lesions: the multiple proliferative lesions had the highest rate of regression, while the ulcerated lesions had a higher rate of progression. About 25.42% of the ulcerated lesions progressed to HGIN and gastric cancer, which is consistent with our results. OLGA/OLGIM staging is currently a method for assessing the severity of gastric mucosal atrophy/intestinal metaplasia; OLGA/OLGIM stages III and IV indicate patients at high risk of gastric cancer [8,25]. Compared with OLGA, the OLGIM staging system has a higher interobserver diagnostic agreement rate, but lower sensitivity [26,27]. In this study, we found that intestinal metaplasia was not an independent risk factor for progression of LGIN, which may be related to how intestinal metaplasia was assessed. Wu et al. [13] found that the rate of intestinal metaplasia in LGIN at the stomach angle is significantly higher than that at the cardia, whereas the cancer rate at the stomach angle is much lower than that at the cardia. This suggests that intestinal metaplasia is not necessarily related to the progression of LGIN, which is consistent with our results. The development of gastric cancer is a stepwise process, including gastric mucosal inflammation, mild dysplasia, moderate dysplasia, severe dysplasia, and early cancer [1]. Low-grade intraepithelial neoplasia is a state in this transformation process.
However, as a retrospective study, this work reviewed the cases that were followed up at the Endoscopy Center of Jiangsu Provincial People's Hospital between January 2010 and August 2017. The follow-up time was less than 7.67 years. A 10-year follow-up study indicated that 18.5% of patients with LGIN remained unchanged for a long time [10]. We recorded deferred and progressed cases as the non-improved group. By the end of the study, more than two-thirds of the patients in the non-improved group were still in a prolonged state, and we could not determine their final outcome. Therefore, the cutoff value calculated from these data may be inaccurate, and further follow-up may be needed to obtain a more accurate result. Today, global research on gastric cancer is focused on the identification and management of HGIN, but there are few studies on LGIN. Here, we focused on the study of early gastric cancer in the target population of LGIN. The duration of disease, age, gender, lesion location, multiple sites, lesion size, lesion morphology, lesion color, lesion phenotype, and postoperative histopathological diagnosis of intestinal metaplasia were selected as candidate risk factors for the prolongation or progression of LGIN. These factors are objective and easily accessible, making them highly feasible for clinical use. In this study, patients who underwent endoscopy and pathology for HGIN or gastric cancer within six months were excluded. Therefore, cases in which the pathological diagnosis of the biopsy was inconsistent with the pathological diagnosis of the gross specimen were excluded. Patients who underwent endoscopic treatment were also excluded. This allowed us to observe the natural course of LGIN development. To our knowledge, such a study has not been reported previously. Based on this, we established an LGIN progression risk model to identify high-risk groups that may progress to gastric cancer. Strengthening follow-up or endoscopic treatment of these groups can improve the detection rate of early cancer or reduce the incidence of gastric cancer, and provide a reliable basis for the treatment of LGIN. However, this study is a retrospective study, and there will inevitably be selection bias in the sample. In addition, in view of China's national conditions, a large number of patients with LGIN have not been re-examined, which may also have an impact on the results of the study. Compliance with ethical standards Conflict of interest Guoxin Zhang, Yuqian Chen, Yini Dang, Huaiming Sang, Xiaoyong Wang, Meihong Chen and Daiwei Lu have no conflicts of interest or financial ties to disclose. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2020-05-19T14:41:14.588Z
2020-05-18T00:00:00.000
{ "year": 2020, "sha1": "1ae79e8d7771644dc751f135e5241fb9e689c8c5", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00464-020-07531-6.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1ae79e8d7771644dc751f135e5241fb9e689c8c5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231985780
pes2o/s2orc
v3-fos-license
A Python Framework for Fast Modelling and Simulation of Cellular Nonlinear Networks and other Finite-difference Time-domain Systems This paper introduces and evaluates a freely available cellular nonlinear network simulator optimized for the effective use of GPUs, to achieve fast modelling and simulations. Its relevance is demonstrated for several applications in nonlinear complex dynamical systems, such as slow-growth phenomena, as well as for various image processing applications such as edge detection. The simulator is designed as a Jupyter notebook written in Python and functionally tested and optimized to run on the freely available cloud platform Google Collaboratory. Although the simulator, in its actual form, is designed to model the FitzHugh Nagumo Reaction-Diffusion cellular nonlinear network, it can be easily adapted for any other type of finite-difference time-domain model. Four implementation versions are considered, namely using the PyCUDA, NUMBA and CUPY libraries (all three supporting GPU computations) as well as a NUMPY-based implementation to be used when a GPU is not available. The specificities and performances of each of the four implementations are analyzed, concluding that the PyCUDA implementation ensures a very good performance, being capable of running up to 14,000 Mega-cells per second (each cell referring to the basic nonlinear dynamic system composing the cellular nonlinear network). I. INTRODUCTION In fractal analysis and nonlinear dynamics, the use of cellular nonlinear networks (CNNs) [1] and, more generally, finite-difference time-domain (FDTD) models [2] is widespread. For instance, highly relevant phenomena such as Turing patterns, spiral waves, other slow-growth phenomena, tumor modelling, wave propagation and many others may be studied in cellular nonlinear network frameworks. Particularly interesting, the Reaction-Diffusion CNNs exhibit various emergent dynamics phenomena and life-like behaviors [3] [4]. For such networks, a viable theory for locating emergent behaviors in the parameter space (or gene space), called local activity theory [5], was proposed and successfully tested [4]. Fluid dynamics, sound propagation, and many other physical phenomena can be modeled in FDTD frameworks such as cellular automata and Lattice Boltzmann Machines [6][7] [8]. Such models need convenient software implementations (modelling and simulation frameworks, MSF), and in recent years various commercial or non-commercial solutions have been offered, most struggling to offer GPU support and high performance (short simulation times for wide arrays of cells). For instance, [9] is one of the most interesting non-commercial solutions for RD-CNNs, but for fast simulations it requires the use of a personal platform (PC) equipped with a graphical processing unit (GPU). On the other hand, the concept of reproducible research [10] gains more and more interest among research communities; it basically relies on some instruments, often cloud computing platforms, where one can run the same code and verify results as well as add one's own contributions. For instance, one can develop Jupyter notebooks [11] capable of running on a free or commercial cloud platform. The MSF discussed herein is conceived from the reproducible-research perspective and offers four implementation variants for an RD-CNN model that can be easily adapted to a more general FDTD model.
Section II describes the mathematical model and how it is translated into four different implementations (based on the PyCUDA, NUMBA, CUPY and NUMPY libraries), all capable of running with the GPU support freely offered via the Google Collaboratory platform [12]; the same section gives an in-depth analysis of the dynamic performances (simulation times) and specific issues for each of the four implementations, and for some of the GPU processors available on the cloud platform. Several applications, including fast search of the parameter space in order to identify meaningful dynamics, are given in Section III, with an emphasis on slow-growth phenomena (needing good speed to observe dynamics in large arrays, i.e. 4096x4096 cells for 100000 iterations) and meaningful image processing tasks such as edge detection. Concluding remarks are given in Section IV, summarizing that our tool offers interesting perspectives for open research in nonlinear complex systems. Other models can be adapted straightforwardly, for instance the Cellular Neural Network (CeNN) [13], which, in the context of emerging artificial intelligence (AI) applications, can be regarded as a recurrent convolutional neural network (CNN). Following guidelines in [14] and [15], where a NUMBA implementation was first proposed, and using the proposed MSF as a template, one can easily adapt it for the CeNN or other models of interest. II. THE RD-CNN MODEL AND ITS FOUR IMPLEMENTATIONS A. The mathematical model and the general framework for the modelling and simulation process The following discrete-time model is considered in this paper (here exemplified for the FitzHugh Nagumo model) [14]. The equations defining the RD-CNN model are of the form
u_{i,j}[t+1] = u_{i,j}[t] + Δt (f(u_{i,j}[t], v_{i,j}[t]) + D_u ∇²u_{i,j}[t]),
v_{i,j}[t+1] = v_{i,j}[t] + Δt (g(u_{i,j}[t], v_{i,j}[t]) + D_v ∇²v_{i,j}[t]),   (1)
where ∇² denotes the discrete (five-point) Laplacian over the nearest neighbours. The two layers of the RD-CNN are given by the u_{i,j} and v_{i,j} state variables, corresponding to the nonlinear cells. The pair (i, j) ∈ {1, .., NN} × {1, .., NM} represents the spatial index of the cell located in a 2D array of NN x NM size. The entire group of cells in the grid is associated with arrays A (comprising u values) and B (comprising v values). Initial state values can be programmed with some specific image files (typ=3) or with some randomly generated arrays (typ=2, where all cells are randomly generated; typ=1, where only an 11x11 square in the middle of the array has random values). For external images, a scaling factor a_k may be considered. D_u and D_v are the diffusion coefficients associated with the two layers. The nonlinear functions f(·) and g(·) for the above model, given in equations (2)-(3), implement the FitzHugh-Nagumo kinetics and include some of the gene parameters. The entire set of the RD-CNN model parameters is called a gene and is allocated to a single variable Pars, as shown next. In the next subsections, details on implementing the above model using the four different approaches are given. For all these implementations there is a unique function get_initial_state() to read the parameters and construct the initial state arrays and the display buffers (A_show, B_show), and another function for displaying the result. These functions are defined in "CELL1" of the notebook, which needs to be run only once. The next figure displays the arguments and returns of the input function, which should always be called before running the model. One can identify the parameters discussed above. If another FDTD or cellular nonlinear model is considered, a new specific function should be defined using the above as a starting template.
Another function, disp_simul(), necessary to display and save the simulation result, is shown in the next figure with its arguments and returns (the latest values of the A, B arrays after running the model for iter_max iterations). The nnsp argument is an integer representing the number of snapshots to be displayed (for instance nnsp=5 snapshots, these images being stored into the A_show and B_show tensors). The last argument, implement, is a string identifying the type of simulator (one of the four to be described next); it is displayed on the simulation plot, as shown in Figure 1. The dynamic performance is expressed here in "ns/cell and iteration", which is the reciprocal of Mega-cells/second. In the above case, 2.06 ns corresponds to 1000/2.06 ≈ 485 Mega-cells/second. Running the next cell in the notebook [16] allows visualization of the latest A (or B) array (in the above case, the one after 200 iterations), as seen in Figure 2, in this case representing an edge extraction example (from the initial state figure displayed at iteration 0). The simulator is available in [16] and is constructed as a Jupyter notebook composed of a series of cells. The first three cells should be run only once, the first reporting on the specific GPU available in the current Google Collab runtime (one can try to get access to a better GPU using the "Factory reset runtime" option in the "Runtime" tab). The second cell allows installing the PyCUDA library on Google Collab platforms. Running it may not be necessary on other platforms where PyCUDA is installed by default (e.g. on Kaggle [17] platforms). The third cell implements the functions discussed previously to input the model parameters and display the results. Then, four cells, each implementing the simulator using a different library, can be used to effectively run the simulation. A brief description of each one is given next, while the detailed implementation is available in [16]. B. PyCUDA implementation The PyCUDA library [18] can be regarded as a "wrapper" for the NVCC compiler, thus giving Pythonic access to the basic NVIDIA tools for programming GPUs. It offers simple functions to load and get variables on/from the GPU and then do computation on the GPU. Except for the CUDA kernel definition, all remaining code is written in Python. The kernel definition for the model in Eq. (1)-(3) is given in Figure 3. As seen, it should follow the NVIDIA C syntax, since it is compiled by the NVIDIA nvcc compiler. The full code is available in [16], and an excerpt emphasizing the kernel definition is given in the next figure. As seen, the model is entirely described in lines 64-65, after some useful definitions of the positional indices in lines 61-62. The same positional definitions, giving the localization of the thread in lines 49-50, can be reused when other cellular or FDTD models need to be implemented. The remainder of the code given in the PyCUDA cell does the following: i) reads the simulation parameters; ii) defines the CUDA blockdim and griddim sizes (here some optimization may improve the speed performance); iii) prepares the variables (the A, Anew, B, Bnew and Pars arrays) and stores them in the GPU global memory; iv) runs the simulation by calling the ca_core() function, which executes the above kernel in parallel on the GPU and then swaps Anew with A (and Bnew with B), preparing the process for the next iteration. Note that Anew represents the state of the A array after one iteration (when A was applied as the initial state).
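Since the kernel listing appears only as a figure, a minimal self-contained sketch of what such a PyCUDA implementation could look like is given below. It is an illustration under stated assumptions, not the authors' exact code: the FitzHugh-Nagumo right-hand side, the layout of the Pars array and the numerical parameter values are assumptions made here, while the function and array names (ca_core, A, Anew, B, Bnew, Pars) follow the text.

import numpy as np
import pycuda.autoinit            # creates a CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void ca_core(float *A, float *Anew, float *B, float *Bnew,
                        float *Pars, int NN, int NM)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y;   /* row index */
    int j = blockIdx.x * blockDim.x + threadIdx.x;   /* column index */
    if (i >= NN || j >= NM) return;

    /* toroidal boundary conditions via modular index arithmetic */
    int c  = i * NM + j;
    int up = ((i - 1 + NN) % NN) * NM + j;
    int dn = ((i + 1) % NN) * NM + j;
    int lf = i * NM + (j - 1 + NM) % NM;
    int rt = i * NM + (j + 1) % NM;

    float Du = Pars[0], Dv = Pars[1], dt = Pars[2];
    float a  = Pars[3], b  = Pars[4];

    float lapA = A[up] + A[dn] + A[lf] + A[rt] - 4.0f * A[c];
    float lapB = B[up] + B[dn] + B[lf] + B[rt] - 4.0f * B[c];

    /* assumed FitzHugh-Nagumo-type kinetics, for illustration only */
    float f = A[c] - A[c] * A[c] * A[c] - B[c];
    float g = a * (A[c] - b * B[c]);

    Anew[c] = A[c] + dt * (f + Du * lapA);
    Bnew[c] = B[c] + dt * (g + Dv * lapB);
}
""")
ca_core = mod.get_function("ca_core")

NN = NM = 512
A = np.random.rand(NN, NM).astype(np.float32)
B = np.random.rand(NN, NM).astype(np.float32)
pars = np.array([0.10, 0.05, 0.10, 0.01, 0.50], dtype=np.float32)
A_g, An_g = drv.to_device(A), drv.to_device(A)    # state and scratch buffer
B_g, Bn_g = drv.to_device(B), drv.to_device(B)
p_g = drv.to_device(pars)
block = (16, 16, 1)
grid = ((NM + 15) // 16, (NN + 15) // 16)
for _ in range(10000):
    ca_core(A_g, An_g, B_g, Bn_g, p_g, np.int32(NN), np.int32(NM),
            block=block, grid=grid)
    A_g, An_g = An_g, A_g      # swap buffer handles instead of copying
    B_g, Bn_g = Bn_g, B_g
A = drv.from_device(A_g, A.shape, A.dtype)

Only the final result is copied back to the host; keeping all intermediate states in GPU global memory and merely swapping the buffer handles is what makes step iv) above cheap.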
All these specific steps are also implemented, each in its own manner, in the other three approaches (NUMBA, CUPY, NUMPY). The PYCUDA main loop is represented in the next figure. Using the default parameters of the get_init_state() function, running the PYCUDA implementation will exhibit a slow-growth phenomenon, as seen in Figure 4. As seen, only 2 seconds suffice to run the simulation of a 512x512 array for 10000 iterations. In this case, the available GPU was not one of the best. C. NUMBA implementation The NUMBA library [19] is intended to offer powerful compilers (JIT and CUDA-JIT) for both GPU and CPU units. Unlike PyCUDA, the description of CUDA kernels is entirely done in Python, thus giving more portability. However, as seen in the performance comparison (subsection F), the GPU device is not used as efficiently as in the case of the PYCUDA library. As in the case of the PYCUDA library, the cell used to implement the simulator has the same four components: i) reads simulation parameters; ii) defines CUDA block and grid dimensions; iii) prepares the specific variables and moves them to the GPU device; iv) runs the simulation by iteratively calling the GPU-executed and compiled function ca_core(). An excerpt of the code (fully provided in [16]) is given in the next figure, emphasizing the kernel definition. Unlike in the case of PyCUDA, 2-dimensional indices are directly used here, with the toroidal boundary condition implemented using the "modulo" operator in Python (%) applied to the positional indices. The entire model is implemented in lines 42-45 and is obviously easy to change given another mathematical description of the FDTD model. As in the PyCUDA case, the simulation requires calling the kernel in a main loop and storing the snapshot arrays at given moments. When running on the Google Collaboratory platforms, the first run of the NUMBA-based implementation may produce the error "Numba cannot operate on non-primary CUDA context …". In our experience, waiting for some time (up to 10 minutes) and repeating the run will end successfully. As seen in Table I (subsection F), the speed of the NUMBA implementation is lower, although there is the advantage of Python portability. On the other hand, installing NUMBA on various personal platforms is easier and less cumbersome than installing PYCUDA. D. NUMPY implementation In many circumstances, it is possible that a GPU device is not available. For such circumstances, a NUMPY-based implementation is provided as a separate cell. The excerpt from the code in [16] implementing the ca_core() function and its main loop is given in the next figure. NUMPY is a widely used library which implements efficient computing when arrays are used. Various linear algebra functions are efficiently implemented, thus allowing high performance without the need for an additional compiler (array methods are already pre-compiled). The main model is now implemented in lines 18-21, and, as seen, it is quite easy to replace it with a more general FDTD model. Also, it is quite straightforward to pass from one model implementation library to another, given the templates shown in Figures 3, 5 and 6. It is important to note that here one operates with arrays directly (the specific NUMPY philosophy), while the neighborhoods and frontier conditions are implemented using the np.roll() method. A simulation using the same parameters as in Figure 4 is given in Figure 7 (Fig. 7: the simulation of the RD-CNN model on CPU, using the NUMPY library). The result of the simulation is, as expected, similar, but the simulation time is larger.
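As with the PyCUDA excerpt, the NUMPY listing is only available as a figure in the paper; a minimal sketch of an np.roll()-based ca_core(), under the same assumed FitzHugh-Nagumo kinetics and parameter values as in the previous example, could look as follows.

import numpy as np

def laplacian(M):
    # five-point Laplacian with toroidal (wrap-around) boundaries
    return (np.roll(M, 1, axis=0) + np.roll(M, -1, axis=0) +
            np.roll(M, 1, axis=1) + np.roll(M, -1, axis=1) - 4.0 * M)

def ca_core(A, B, Du, Dv, dt, a, b):
    # one whole-grid update step of Eq. (1); kinetics assumed as above
    Anew = A + dt * ((A - A**3 - B) + Du * laplacian(A))
    Bnew = B + dt * (a * (A - b * B) + Dv * laplacian(B))
    return Anew, Bnew

A = np.random.rand(512, 512)
B = np.random.rand(512, 512)
for _ in range(10000):
    A, B = ca_core(A, B, Du=0.10, Dv=0.05, dt=0.10, a=0.01, b=0.50)

Here np.roll() shifts the whole array with wrap-around, which directly implements the toroidal boundary condition described in the text without any explicit index arithmetic.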
As seen, the running time is larger than in the case of GPU usage; however, when only restricted (CPU-only) platforms are available, using this implementation is a good option. As detailed in [15], CPUs can sometimes be exploited more effectively using the JIT compiler from the NUMBA library. The ca_core() model in this case is simply a rewrite of the GPU NUMBA model, where @jit replaces @cuda.jit and for loops replace the if directives (the thread-bound checks). E. CUPY implementation of the RD-CNN model The CUPY library [20] was proposed as an alternative to NUMPY (many functions are similar) with GPU support. Consequently, code written for NUMPY can be readily transformed into CUPY-based code (with the advantage of GPU support) using some very simple rules: i) np.method() is replaced with cp.method(), where cp is the alias for cupy just as np is the alias for numpy; ii) in order to use GPUs, variables must be declared as GPU variables and transferred between GPU and CPU using the specific methods: Ag = cp.array(A) for copying the CPU array A into the GPU array Ag, and A = cp.asnumpy(Ag) for the reverse operation. As a result, the ca_core() definition given in Figure 8 is very similar to the NUMPY definition, yet now the simulator benefits from GPU support, if a GPU is available. Running the simulator for the same set of parameters gives, as expected, the same result, with some acceleration, though not as efficient as in the case of the PYCUDA or NUMBA implementations. F. Performance comparison among implementations In the following table, a comparison is given between all the above-mentioned implementations. One important factor influencing the speed is the array size N. The GPU device was a Tesla P100-PCIE and the CPU (NUMPY case) was an Intel(R) Xeon(R) CPU @ 2.30GHz. Both devices were allocated by the Google Colab platform. It is likely that performance may slightly differ for other specific device allocations on the cloud platform. In order to compare performance with other simulators, the table reports the speed in Mcells/second, i.e. how many CA cells (or, in general, FDTD cells) are computed per second. The same model (parameters and a running time of 10000 iterations) as in the previously reported simulation was considered (it corresponds to the default get_init_state() function). Clearly, the most efficient GPU usage is given by the PYCUDA implementation; particularly when large arrays are considered, the performance reaches up to 14,000 Mcells/second. The NUMBA implementation on the same GPU ensures a lower speed, 3-4 times less. For small array sizes, using the CPU can give satisfactory performance, while CUPY becomes effective (with respect to NUMPY) for arrays with N > 512, but is still not as effective as NUMBA or PYCUDA. In fact, this result demonstrates that for the problem considered here, namely the cellular nonlinear model implementation, the GPU-based linear algebra library supported by CUPY is not exploited very well when compared with a user-defined CUDA kernel definition. Although CUPY offers the possibility of user-defined CUDA kernels, we did not implement one, since PYCUDA already offers this approach and demonstrated a very good performance. The next table indicates how computing speed is influenced by the choice of the GPU device (on Google Collab platforms, various GPUs are assigned upon their availability).
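To make the np-to-cp conversion rules described in subsection E concrete, the following minimal sketch ports the NUMPY ca_core() from the previous example to CUPY. The cp.array()/cp.asnumpy() transfers are the ones named in the text; the kinetics and parameter values remain the illustrative assumptions used earlier.

import numpy as np
import cupy as cp

def ca_core(A, B, Du, Dv, dt, a, b, xp):
    # identical array code for both backends: xp is either numpy or cupy
    lapA = (xp.roll(A, 1, 0) + xp.roll(A, -1, 0) +
            xp.roll(A, 1, 1) + xp.roll(A, -1, 1) - 4.0 * A)
    lapB = (xp.roll(B, 1, 0) + xp.roll(B, -1, 0) +
            xp.roll(B, 1, 1) + xp.roll(B, -1, 1) - 4.0 * B)
    Anew = A + dt * ((A - A**3 - B) + Du * lapA)
    Bnew = B + dt * (a * (A - b * B) + Dv * lapB)
    return Anew, Bnew

A = np.random.rand(512, 512)
B = np.random.rand(512, 512)
Ag, Bg = cp.array(A), cp.array(B)          # copy the CPU arrays to the GPU
for _ in range(10000):
    Ag, Bg = ca_core(Ag, Bg, 0.10, 0.05, 0.10, 0.01, 0.50, cp)
A = cp.asnumpy(Ag)                         # copy the result back to the CPU

Passing the array module (numpy or cupy) as a parameter is a common way of keeping a single code path for both backends, which is exactly the portability advantage of CUPY described above.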
III. APPLICATIONS Having a fast CNN simulator opens interesting possibilities, particularly when various complex dynamic behaviors must be associated with specific locations in the parameter space. In [4], an analytic method dubbed "edge of chaos" and based on local activity theory [5] was considered to roughly locate sets of parameters leading to meaningful and interesting dynamic behaviors. Although the local activity method is rather fast, since it does not require RD-CNN simulation to detect the "edge of chaos" profile, the theory does not include the diffusion coefficients, and consequently more simulations are needed to finely locate the specific parameters leading to interesting dynamics. Such interesting dynamic phenomena include slow-growth phenomena mimicking various natural processes (life, tumor formation and evolution, pandemics, etc.), where a large number of iterations (and consequently good simulator speeds) is a must. Other relevant dynamics are associated with image processing, such as edge extraction and other feature extractors. There is still a lot of unexplored potential for cellular nonlinear networks in artificial intelligence, and we believe that using such fast simulators can reveal interesting applications. It is worth mentioning that a cellular nonlinear network acts as a convolutional layer with recurrence, i.e. applied on itself hundreds of times (as specified by the number of iterations), being thus equivalent to a very deep convolutional network (the depth being equivalent to the number of iterations). Such a recurrent implementation is an economical alternative to the usual feed-forward convolutional neural network, not to mention that CeNNs (cellular nonlinear networks) are more plausible brain models. In the following, we give two examples of complex behaviors that can be easily modelled and simulated with the proposed code. A. Fast identification of emergent behaviors (slow growth) Starting from results in [14], one can use our simulator to rapidly explore some regions in the (Du, Dv) diffusion parameter space. As seen in Figure 9, a large variety of behaviors can be detected using mid-array random square patterns as initial states or fully random initial states (as in the rightmost row of Figure 9). Figure 10 represents an example of slow growth, while Figure 11 gives the final pattern (after 10000 iterations) in four random initial state cases. Fig. 10. A simulation of the RD-CNN for a specific case of "slow growth". Fig. 11. Four different "slow-growth" patterns emerging from different mid-array random square initial states. B. Image processing applications (edge detection) Another interesting application of emergent dynamics, with potential relevance in artificial intelligence, is the detection of features from initial state images. Running a fast simulator allows fast identification of meaningful processing behaviors, as shown in Figure 12, where some sets of (a, b) parameters can be identified in relationship with meaningful behaviors. Other sets of parameters revealing meaningful behaviors relevant to the edge detection task were already exposed in the simulations shown in Figures 1 and 2. IV. CONCLUDING REMARKS A freely available modelling and simulation framework (MSF) for fast simulation of cellular nonlinear networks is proposed [16]. It can be easily extended to any FDTD (finite-difference time-domain) model, while it was designed for reproducible research and education using the readily available resources from Google Collaboratory.
Four implementation variants using four Python libraries were considered. The fastest version relies on the PyCUDA library, capable of achieving as much as 14,000 Mcells/second using the best GPU platform available on the cloud platform (Tesla P100-PCIE). As a comparison, the freely available Ready package [9], which can be installed on a personal computer, ensures no more than 2300 Mcells/second for a grid size of 1024x1024 cells. Of course, the performance here was limited by the local GPU, in this case a GeForce GTX950. Installing a better GPU on a local computer to run Ready or another similar simulator can be rather costly in comparison to the advantage of our cloud-computing-based MSF, which can freely exploit some of the better available GPUs. Another important advantage of our solution is the possibility to easily embed its basic functionality in more sophisticated applications using the widely available Python libraries. For instance, our further research will focus on combining the MSF (the patterns generated by it) with deep learning solutions to automatically classify various types of emergent behaviors, thus providing a more accurate alternative to the local activity theory for the identification of interesting dynamic behaviors while searching the parameter space and performing rapid simulations of the implemented cellular nonlinear model.
2021-02-23T02:15:35.807Z
2021-02-20T00:00:00.000
{ "year": 2021, "sha1": "7a069dcac7e91e9dae21e0df417263989cc07878", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2102.10340", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7a069dcac7e91e9dae21e0df417263989cc07878", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
235819640
pes2o/s2orc
v3-fos-license
Spatially explicit assessment of forest road suitability for timber extraction and hauling in Switzerland Efficient forest management, and wood production in particular, requires a forest road network of appropriate density and bearing capacity. The road network affects the choice of a suitable extraction method and the length of the transport route from the forest, while the road standard defines the truck type that can be used. We evaluate the forest road network's economic suitability for harvesting operations in the entire Swiss forest, an area of about 13,000 km² covering a range of topographies, based on the Swiss National Forest Inventory's (NFI) forest road dataset. This dataset is based on information from an interview survey with the local forest services and includes all forest roads in Switzerland capable of carrying trucks. Extraction options and hauling routes are analysed together; thus, the entire logging process is examined. Model results include maps of the most suitable extraction method; extraction costs; hauling costs; and a suitability map based on a combination of the results. While the larger part of the Swiss forest is classified as "suitable" for economic harvesting operations, significant portions also fall into the "limited suitability" and "not suitable" categories. Our analysis provides an objective, country-wide, spatially explicit assessment of timber accessibility. The resulting suitability map helps identify areas where timber harvesting is economic using the current forest road network, and where it is not. The model results can be used in road network planning and management, for example by comparing road-network re-design scenarios, and can be compared to the spatial distribution of available wood volume. Introduction Efficient forest management and operations are a prerequisite for many ecosystem services (e.g. timber production, protective forest, recreation, biodiversity) as well as for making timber production profitable. To enable efficient management, it is essential that forests are well accessible by forest roads and that state-of-the-art extraction technology can be used, as extraction and hauling account for a large share of the total costs of a forestry operation in most forestry settings (about 60% in Switzerland (Bundesamt für Umwelt BAFU 2017)). Many forestry enterprises are struggling to stay profitable and need to keep costs for off- and on-road wood transportation low. A dense network of forest roads can reduce both extraction costs and hauling costs, by decreasing the extraction distances from the stand to the nearest road, and by shortening on-road hauling distances, respectively. In addition, roads with high bearing capacities can be used by heavy trucks and thus lead to a smaller number of trips necessary to transport a certain volume of wood, further reducing hauling costs. However, constructing and maintaining forest roads suitable for wood extraction and hauling is expensive. In the future, costs for the construction and maintenance of forest roads might rise due to the effects of climate change, because in many regions more intense precipitation events are expected (Croci-Maspoli et al. 2018) and might lead to higher erosion rates. At the same time, shorter soil freezing periods could limit ground trafficability and therefore ground-based wood extraction methods (Henry 2008).
In addition, a road network that is too dense consumes a part of the forest area and can have negative ecological effects such as increasing erosion (Spellerberg 1998). For these reasons, it is important to carefully evaluate the forest road network and allocate the (often scarce) resources available for the construction and maintenance of forest roads wisely. An informed evaluation of the current situation requires detailed information about the existing road network and the resulting forest accessibility. The assessment of the economic efficiency of a forest road network should consider the cost contributions from extraction (off-road) and on-road transportation in combination. While expert opinion (and local knowledge) often plays an important role in the evaluation of forest road networks, it is desirable to have a method that is objective and reproducible and can be applied to large areas using the same criteria everywhere. Computer-aided approaches using such a method have been applied in the design of new forest road networks (Stückelberger et al. 2006), but without taking existing infrastructure into account. Before an existing forest road system can be improved, however, areas with insufficient access first have to be identified. The calculation of road density (or, alternatively, road spacing), if calculated for a road network of irregular shape, gives a general estimate of the efficiency of the road network in a given area; but since road density is an average for this area, it does not allow differentiating between easily accessible zones and zones with limited access. Hayati et al. (2012) demonstrate that high road density alone, and even high relative openness, does not necessarily result in overall short hauling distances if there is large overlap of areas served by more than one road. Other evaluations focus on the extraction distance to the nearest road; this is a quick method yielding a spatially explicit result, but it looks at the extraction process in a simplified way, even if the straight-line distance to the nearest road is complemented with the vertical distance, ignoring soil properties and local steepness (Caliskan and Karahalil 2017; Laschi et al. 2016). Determining optimal extraction methods at sample plots allows studying the characteristics of each plot in detail, but the results cannot be transferred to the surrounding area if the terrain is variable and the road network irregular (Alberdi 2020). A number of studies have analysed either the extraction method (Caliskan and Karahalil 2017; Kuhmaier and Stampfer 2010; Suvinen 2006; Zambelli et al. 2012) or the hauling route (Akay and Kakol 2014), but not both at the same time. An approach to design a new road network that minimises on- and off-road transport (cable-yarding) at the same time was presented by Bont et al. (2015); however, this study did not evaluate an existing road network. Dupire et al. (2015) developed a method to determine forest accessibility based on three different extraction methods in a spatially explicit way for small areas, but did not consider on-road transportation. Havimo et al. (2017) present a model that takes into account road construction and timber transportation costs, as well as information on available stock, but consider only flat terrain and therefore do not differentiate between different extraction methods.
In areas with heterogeneous topography, comprising steep as well as level terrain, these approaches are not sufficient to evaluate wood accessibility, because different extraction methods are applied depending on the local topography, ground trafficability and potential obstacles to ground-based extraction and cable-yarding, and because the forest road density is often highly variable due to the heterogeneous topography. Moreover, in many countries, forest roads serve a range of purposes in forest management, wildfire control and recreation and are not designed solely for wood extraction purposes; thus, their suitability for heavy wood transport vehicles varies. Our study area is the country of Switzerland, situated in Central Europe, along the Alpine arc (Fig. 1). About 30% of Switzerland is covered by forest (Abegg et al. 2014). Most forests in Switzerland are managed to fulfil multiple functions, with a combination of protective, recreational, ecological and economic uses. The dominant silvicultural regimes are based on selective cutting in even-aged or uneven-aged forests; clear cuts are prohibited by law. The predominant extraction methods are cable-based and ground-based (e.g. forest tractors), whereas helicopters are used occasionally. The overall objective of our study is to provide fundamentals for decisions on the prioritisation of investments in road upgrades or construction, considering the topographical situation, for an existing forest road network. The following specific research questions are addressed: 1. What is the distribution of different extraction methods (per area or per stock volume) for a particular region, and 2. where is the existing forest road network suitable for economic wood extraction, and where do gaps lead to insufficient accessibility? Answering these questions requires a method which can assess the suitability of the forest road network in a spatially explicit way. In our study, we apply the approach described by Bont et al. (2018), as it corresponds best with our objective. It includes the modelling of extraction as well as hauling costs, and analyses the suitability of the terrain for ground-based and cable-based extraction methods. Using the resulting suitability map, any areas with inadequate forest accessibility can easily be identified. We adapt the method for use in a large and topographically diverse area; it has already been successfully tested in steep terrain. In addition, we compare our method to the analysis of sample plots in the Swiss National Forest Inventory (NFI). Materials and methods We use a reproducible, consistent way of assessing forest accessibility in a large area (ca. 40′000 km²) of heterogeneous terrain, taking both wood extraction and hauling into account and assuming that the most cost-efficient combination is used. The criteria that are used are clearly defined and can be adapted to different study areas or different users' needs. Our analysis is based on information from the Swiss NFI, which provides data on the forest cover as well as the position and dimensions of all forest roads. Since collecting information on terrain morphology, obstacles and soil properties in the field is expensive, we use existing area-wide data for our analysis, based on, for instance, lidar data (digital terrain model (DTM), topographic landscape model (TLM)) and aerial images (TLM, forest cover map). This allows us to produce a spatially explicit result for the entire country.
Our results are compared to plot-based information from the Swiss National Forest Inventory (NFI) and evaluated by two local forestry services. We analyse the existing road network, but the method also allows comparing different scenarios, e.g. the effect of newly built or upgraded roads. Accessibility analysis In our study, we analysed the costs of both extraction (off-road transportation) and hauling (on-road transportation), according to the method described by Bont et al. (2018). The methodology relies on studies that have optimised a cable road layout (Bont and Heinimann 2012; Bont et al. 2019; Heinimann 1986; Pestal 1961; Zweifel 1960) and studies that have analysed extraction and hauling together (Bont et al. 2015). In a first step, potential landings were defined on a given forest road network at 30 m intervals. These were the transshipment points from off- to on-road transportation. Starting from these landings, radial lines of a length corresponding to the maximum yarding distance were drawn at 11.25° intervals; 11.25° intervals correspond to 32 radial lines. Theoretically, a line at 11.25° or 22.5° might not be feasible (e.g. because of obstacles), but an intermediate line at 16° might be. However, since the radial lines were tested again from the next landing 30 m away, it can be assumed that forest parcels that cannot be reached from one landing because of this gap between lines can usually be reached from a neighbouring landing. Along each straight radial line, the technical feasibility of cable roads was checked by finding the maximum feasible distance for a cable road with a maximum of five supports, using the approach of Pestal (1961) and a digital terrain model (DTM), an obstacle map and the forest cover map as input data (Fig. 2). Even unfavourable cable roads were tested for technical feasibility, e.g. perpendicular to the maximum slope, as in some cases no better alternative to access certain forest parcels could be found. Cable roads of two maximum lengths were differentiated, and up- and down-hill yarding directions were determined by comparing the altitude difference between the start and the end of the cable road. Forest parcels were regarded as accessible if they were within 30 m of a cable road. Next, areas where ground-based extraction is possible were identified, starting from all forest road pixels. Here, basically two extraction methods were considered: skidder and forwarder. We assumed that trafficable terrain is the same for both machines and consists of slopes of up to 35% on highly load-bearing ground, for example soils with a high skeletal content (Eichrodt and Heinimann 2001), or slopes of up to 25% on soils susceptible to compaction. Further, it was checked whether those areas were within the maximum skidding distance from the landings and whether they were connected to a road by trafficable area. For skidders, the accessible area can be extended by the use of a winch for pulling timber over 50 m in the downhill direction, or 100 m in the uphill direction. We did not take into consideration traction-assist winches on machinery, assuming they are used to make wood extraction safer and to avoid rutting, and not to work on steeper slopes. Of course, a given forest pixel may be reached from more than one landing, and using more than one extraction method. Next, harvesting costs were estimated for every forest parcel of 10 × 10 m, based on the extraction method and the hauling distance (Table 1).
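The per-pixel choice of the cheapest feasible extraction method described above can be sketched as follows. This is an illustration, not the authors' MATLAB implementation: the cost values, grid size and encodings are made up for the example; only the logic (minimum cost over feasible methods per 10 × 10 m pixel, with helicopter-only pixels left without a cost) follows the text.

import numpy as np

methods = ["ground-based", "tower yarder uphill", "tower yarder downhill",
           "long-distance uphill", "long-distance downhill"]
rng = np.random.default_rng(0)
# one cost layer (CHF per m^3) per method; NaN marks infeasible pixels
costs = rng.uniform(40.0, 80.0, size=(len(methods), 200, 200))
costs[rng.random(costs.shape) < 0.4] = np.nan

filled = np.where(np.isnan(costs), np.inf, costs)
best = np.argmin(filled, axis=0)           # index of the cheapest method
best_cost = np.min(filled, axis=0)

# pixels where no method is feasible: "unsuitable terrain" (helicopter),
# encoded here as -1 and given no cost, as in the paper
unsuitable = np.isinf(best_cost)
best = np.where(unsuitable, -1, best)
best_cost = np.where(unsuitable, np.nan, best_cost)

In the actual model, each cost layer would come from the cable-road feasibility and ground-trafficability analyses described above, combined with the hauling cost for the landing that serves the pixel.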
Where neither ground-based extraction nor cable-yarding was possible, extraction by helicopter was assumed and the area was designated as "unsuitable terrain". No cost was assigned to these pixels, since helicopter-based extraction, although regularly applied in Switzerland, is usually not economic. A recent study in the Italian Alps mentions helicopter logging costs (per m³) that were four times the logging costs of a cable-crane system (Manzone and Balsari 2017), in spite of much shorter operation times. Finally, the most economic extraction method was selected and assigned to each forest pixel. In a second step, the potential routes from the landings out of the forest were analysed by network analysis. The endpoints of the transportation route, referred to as "connection points", were defined as the points where the route reaches the superordinate road network, that is, a road that can be used by the largest truck type considered in the study at any time (5-axle trucks with 40 t total weight), as determined in the interview survey of the Swiss NFI. Since the use of a larger truck reduces the number of trips necessary to transport a certain volume of wood (Table 2), and therefore hauling costs, we minimised the distance between a landing and the nearest connection point while maximising truck size. Travel time or speed was not explicitly considered. This seems justified, since we only analyse the route through the forest and from the forest to the nearest higher-order road, so the larger part of the distance travelled consists of low-speed forest roads or small overland roads. The method was originally developed for a study area with predominantly steep terrain (Bont et al. 2018) and was thoroughly tested and evaluated there. We now applied it to an entire country, comprising steep as well as hilly and flat topography. Some elements of the code were adapted; for instance, very steep (> 45°) slopes were considered unsuitable for the operation of a cable-yarding system and were thus labelled as "unsuitable terrain". The analysis focuses on the economic efficiency of the extraction and hauling process; other factors, such as ecological aspects, are not included in the model. The calculations were done in MATLAB® (The MathWorks Inc. 2016). To make dealing with large raster datasets possible and to keep calculation times within reasonable limits, the input raster datasets were tiled and the code adapted to allow parallel computations. Study area We carried out our study for the entire forest area in Switzerland (ca. 13,000 km²). Switzerland is a country of diverse landscape, from flat areas and rolling hills in the Plateau area to steeper hills and mountains of up to 1600 m in the Jura and Pre-Alps regions to mountainous topography in the Alps. About 30% of Switzerland is covered by forest, reaching altitudes of about 2000 m. At lower altitudes, beech and spruce forests dominate, while fir, spruce and larch dominate at higher altitudes. The forests in Switzerland are multifunctional, providing ecosystem services ranging from protection against natural hazards to timber production and recreation. The density of the existing forest road network is generally high, but spatially highly variable: on average, the country-wide density of roads capable of carrying 4-axle (32 t) trucks is 22 m ha⁻¹, ranging from 0 m ha⁻¹ to 84 m ha⁻¹ in individual forest management districts (Brändli et al. 2016). This depends mainly on the topography.
In comparison, average forest road densities in other mountainous European countries range from 7.9 m ha⁻¹ in Bulgaria (Yonov and Velichkov 2004) to 45 m ha⁻¹ in Austria (Ghaffariyan et al. 2010). Forest road standards in Switzerland also vary widely (Fig. 3), with some areas relying on roads that only carry 2-axle trucks of less than 20 tons total weight and others where most roads are dimensioned for 5- or 6-axle trucks (40-ton total weight). Especially roads in steep areas often do not fulfil best-practice requirements and can only be used by small trucks. On the one hand, a large proportion of the roads were built in the 1960s and 1970s, when smaller trucks were used in forestry, and are now approaching the end of their expected lifespan. On the other hand, forest roads often serve other purposes besides wood extraction and hauling, such as access to agricultural land or recreational uses. To allow the use of heavier trucks and improve the competitiveness of the Swiss forestry sector, a goal declared by the federal government (Bundesamt für Umwelt BAFU 2013), lowering the expenses for timber extraction and hauling is a key factor. In order to identify gaps in the forest road network and to close them efficiently, an objective assessment of the existing forest road infrastructure is necessary. The dominant wood extraction methods are ground-based (skidders, forwarders) in flat areas (about 80% of the forest area) and cable-based in steep areas (about 15%), while helicopters are sometimes used in areas where neither ground-based nor cable-based extraction is possible, but forest management is still necessary (e.g. protective forest; about 5%) (Brändli et al. 2020). In the Swiss NFI of 2009-2017, the growing stock in the Swiss forest was estimated at about 350 m³ ha⁻¹, considerably higher than in surrounding countries, where values ranged between about 150 m³ ha⁻¹ (France, Italy) and 300-320 m³ ha⁻¹ (Austria, Germany) (Brändli et al. 2020). In the nine years between NFI3 and NFI4, net growth was an average of 7.6 m³ ha⁻¹ a⁻¹ (or 9.1 million m³ a⁻¹), with large differences between regions, ranging from 4.5 m³ ha⁻¹ a⁻¹ in the southern parts of the country to 12.0 m³ ha⁻¹ a⁻¹ in the Plateau region (Brändli et al. 2020). In this time period, 6.5 m³ ha⁻¹ a⁻¹ of wood was used in all of Switzerland, i.e. about 85% of the net growth.
Fig. 3 Left: forest road for 26-ton trucks; right: forest road for 40-ton trucks, according to NFI data, Plateau region, Switzerland.
Estimations of the annual potential wood supply from the Swiss forests range from 6 to about 8 million m³ (Hofer et al. 2011, Stadelmann et al. 2016). However, regionally only a fraction of the harvesting potential is actually harvested (Fischer and Camin 2015); not surprisingly, considering low wood prices (an average of 70-90 CHF m⁻³ between 2015 and 2019 (Wald Schweiz)) and relatively high production costs (40-60 CHF m⁻³ between 1990 and 2014 on average (Murbach 2016), but over 100 CHF m⁻³ on a third of the forest area (Brändli et al. 2020)). (1 CHF = 0.94 EUR, or 1.04 USD.) Extraction and hauling cost estimations (Table 1, Table 2; Bont et al. (2018)) are based on the timber harvesting productivity model HeProMo (Erni 2003; Fischer and Stadelmann 2019; Frutig et al. 2009; Holm et al. 2020). The costs for ground-based extraction were assumed to be 40 CHF m⁻³ consistently.
This is a comparatively high value, which stands in relation to relatively high Swiss salaries; it may be lower in favourable cases and depending on the machinery that is used. However, whether a skidder or a forwarder is used depends on other factors, such as the wood assortment, too, and is difficult to assess in our model. At the same time, costs of 40 CHF m⁻³ are still lower than those for a cable yarder, so we do not expect this to introduce a systematic error in favour of cable yarding. With respect to common extraction procedures in Switzerland, the maximum yarding distance was set to 1500 m and the maximum skidding distance to 400 m. For the model, transshipment points were set at intervals of 30 m along the forest roads. Input data The national swissTLM 3D road dataset provides accurate road geometries for the entire country, including forests. However, this dataset currently does not contain information on road width, bearing capacity and surface material relevant to usability for forestry operations. This information has been collected by the NFI in an interview survey with foresters of all forestry districts in every inventory cycle since 1983 (Brassel and Lischke 2001; Fraefel and Fischer 2019; Müller et al. 2016). The resulting dataset consists of the swissTLM 3D geometries and additional attributes from the survey. It includes all forest roads that can be used for wood extraction and hauling and feature a minimum width of 2.5 m and a minimum bearing capacity of a 10-ton axle load. The dataset from the latest (4th) inventory also features information on road width and bearing capacity and the main roads connecting the larger forest roads to the road network that can be used by trucks of up to 40 tons all year round ('higher-order road network'). The interview survey also produced a "connection point" dataset of the locations where the forest roads or connection roads meet the higher-order road network. For this study, all roads that can be used by a truck of 26 tons total weight or more, and are at least 3 m wide, were included in the road dataset (83% of the total road length). This is based on the assumption that the installation of a tower yarder (TY) requires a road width of at least 3 m and trafficability for a 3-axle truck to access the landing. Input data in raster format were: (1) the 2-m digital terrain model swissALTI 3D (swisstopo 2018) and, derived from it, a 2-m slope raster; (2) the NFI's forest cover map (1 m resolution; Waser et al. (2015)); (3) the Swiss soil suitability map (Swiss Federal Statistical Office (FSO) 2000; originally in vector format); and (4) a map of obstacles to the installation of cable roads, created from elements (buildings, cables, railway lines and major roads) of the topographic landscape model swissTLM 3D (swisstopo 2012). All raster data were resampled to a resolution of 10 m. This resolution was chosen as a compromise between an accurate representation of the real world on the one hand and too much noise from microtopography and exceedingly long computing times on the other hand. Vector datasets included the NFI forest road dataset, with information indicating the type of truck that can be used on each road segment, and a point dataset giving connection points where the roads from the forest connect to the superordinate road network (Fig. 4; Müller et al. (2016)). Results The distribution of extraction methods chosen by our model is spatially highly variable.
Results

The distribution of extraction methods chosen by our model is spatially highly variable. Table 3 gives the resulting area for each extraction method in all of Switzerland as well as in five sub-regions of the country. We used the "production regions" defined in the Swiss NFI based on relatively homogeneous growth and wood production conditions (climate, topography, soil): Jura, Plateau, Pre-Alps, Alps and Southern Alps (Fig. 5; Fischer and Traub (2019)). Ground-based extraction, up-hill yarding, down-hill yarding and unsuitable terrain (helicopter) were assigned to similar proportions of the total forest area in Switzerland (21-25% each). The remaining 8% of the area was assigned long-distance up-hill yarding and long-distance down-hill yarding. The areas assigned to a certain extraction method were, however, not evenly distributed among the regions: for instance, ground-based extraction was assigned to 40 and 61% of the forest area in the Jura and Plateau, respectively, but only to 3-9% of the forest area in the Pre-Alps, Alps and Southern Alps. In the Plateau, where flat areas and low slope gradients dominate, the dominant extraction method is ground-based, followed by unsuitable terrain, tower yarding and long-distance yarding. The Jura and Pre-Alps have larger proportions of all cable-yarder classes than the Plateau, but lower proportions of ground-based extraction. The largest difference in extraction methods assigned to the Jura and Pre-Alps regions is found for ground-based extraction, the proportion of which is significantly larger in the Jura than in the Pre-Alps. This might be due to the larger proportion of flat areas or the overall higher forest road density in the Jura. The Alps and Southern Alps regions are characterised by a very small proportion of the area to which ground-based extraction was assigned (4 and 3%, respectively). The values for long-distance yarding are similar in the Alps and the Southern Alps (10 versus 11%), but in the Alps, the proportion of tower yarding is much larger than in the Southern Alps (58 versus 26%), while in the Southern Alps, the proportion of unsuitable terrain is much larger than in the Alps (61 versus 28%). Long-distance yarding (up- and down-hill combined) plays a minor role in all regions, with the largest proportions occurring in the Alps (10%) and Southern Alps (11%). We compared the resulting most suitable extraction method to the growing stock in the respective forest areas. We used the growing stock map compiled by Ginzler et al. (2019) for the entire Swiss forest and analysed it for all of Switzerland as well as for the five production regions, as presented in Table 4. Here, too, the analysed regions show large differences. For example, in the Jura region, more than 80% of the growing stock is found in forests to which ground-based or cable-yarding extraction methods were assigned. This is the case for only 39% of the growing stock in the Southern Alps. Generally, the percentages of growing stock closely resemble the percentages of area as presented in Table 3. The resulting extraction-cost estimations range from 40 to 80 CHF m⁻³, with a mean of 54 CHF m⁻³. The estimated hauling costs range from 0 to 19 CHF m⁻³, with a mean of 2 CHF m⁻³. Figure 6 illustrates the resulting maps showing extraction method, extraction cost, hauling cost and total cost. In order to assess the overall suitability of the Swiss forest for economic wood extraction and hauling, we applied the criteria defined in Table 6 to the entire forest area. This classification gives the highest rating ("suitable") to all forest areas which were assigned ground-based or tower-yarding extraction, in combination with hauling on roads that allow the use of trucks of at least 28 t. The second-highest rating ("limited suitability") was given to forest areas which were assigned long-distance yarding, or a hauling route on smaller roads. All forest areas which were declared "unsuitable terrain" fell into the lowest class ("not suitable", Fig. 7).
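A minimal sketch of this three-class rating, as we read the criteria of Table 6 (the function name and string labels are ours, not from the paper):

def suitability(method, truck_limit_t):
    """Rate one forest area given its assigned extraction method and the
    heaviest truck allowed on its hauling route (in tons)."""
    if method == "helicopter":                     # "unsuitable terrain"
        return "not suitable"
    if method in ("ground-based", "tower yarding") and truck_limit_t >= 28:
        return "suitable"
    # long-distance yarding, or hauling only possible on smaller roads
    return "limited suitability"

For example, suitability("tower yarding", 26) returns "limited suitability" because the hauling route does not admit 28-t trucks.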
Using these criteria, 75% of the forest area falls into the "suitable" and "limited suitability" classes (Table 5). The percentage of area falling into the lowest, "not suitable" category closely corresponds to the values given in Table 3 for unsuitable terrain (25%), indicating that the entire forest area in the "not suitable" category consists of forests with air-based extraction assigned. These classes are, of course, highly arbitrary and need to be adapted to the problem that is to be addressed.

Discussion

With our approach, we have achieved a consistent and spatially explicit evaluation of forest accessibility in a large area of heterogeneous terrain, based on an existing forest road network. In contrast to other forest accessibility analyses, our method combines the evaluation of the most suitable extraction method and the best on-road transport route; it also integrates the roads' varying bearing capacities into the estimation of hauling costs. The results show that wood accessibility is roughly correlated with topographic steepness, i.e. wood extraction and hauling costs are lower in areas with lower slopes. This can be explained by the applicability of cheaper extraction methods as well as higher road densities in the flatter areas. The most frequently assigned extraction method was cable-yarding: taking all four cable-yarding classes together (tower yarding and long-distance yarding, both up-hill and down-hill), this method was assigned to 52% of the Swiss forest area. This is not surprising given the large proportion of the forest area located in steep terrain. Similarly, Kuhmaier and Stampfer (2010) found that in steep terrain, cable-yarding was the dominant extraction method assigned by their model. About 65% of the forest area in the study by Caliskan and Karahalil (2017) was assigned cable-crane extraction methods, for a study area of partly steep terrain. More surprising is the relatively high proportion of forest area labelled as "unsuitable terrain" and assigned air-based extraction (helicopter). This category occurred most often in the Southern Alps region (61%) and quite often in the Alps (28%), but also considerably often in the Pre-Alps (17%) and Jura (11%) regions. Even in the relatively flat Plateau area, 16% of the forest area was assigned air-based extraction. First, in some cases this can be explained by the fact that even though a large road is present in a forest area, it might not be connected to the higher-order road network by suitable connection roads, e.g. due to obstacles like underpasses or bridges, so in our model the road is not used at all. This leads to the classification of a substantial area as "unsuitable terrain" where in reality wood is harvested using cable- or ground-based methods and transported in smaller trucks of less than 26 t total weight. Second, even in the relatively flat terrain of the Plateau area, steep forested slopes occur, e.g. along creeks, as do areas that are separated from the nearest road by obstacles such as power lines or railway lines.
Third, even in areas with low relief and generally gentle slopes, some forests are connected only by small roads carrying vehicles of up to 25 t total weight, which are not considered in our model. Finally, a missing connection point on the higher-order road network, or an existing road segment that was not recorded in the survey, would also lead to an area being classified as unsuitable terrain, even though a connection via sufficiently large roads is present in reality. Ground-based extraction was assigned to 21% of the forest area across Switzerland. According to our model, it is almost negligible in the Southern Alps (3% of the forest area), the Alps (4%) and the Pre-Alps (9%), but accounts for a large proportion of the forest area in the Jura (40%) and the Plateau (61%). Accordingly, less than 10% of the growing stock would be extracted using ground-based methods in the Southern Alps (2%), Alps (3%) and Pre-Alps (8%). These values are comparable to a study by Zambelli et al. (2012) in a predominantly mountainous area (Trentino, Italy), where about 4-16% of the estimated timber harvest volume could be extracted by ground-based methods. Ground-based extraction is assigned to stands with small distances to the nearest forest road, low slopes and good soil trafficability. Therefore, the reliability of this classification heavily depends on the input data for terrain and soil trafficability. While a digital terrain model of high resolution and accuracy was available as input for our model, information on soil characteristics was much more difficult to obtain. Better soil property maps would significantly improve the reliability of the assessment of trafficable forest areas in our model. In addition, the area where ground-based extraction is used may be underestimated by our model because it does not take into account the use of skid roads, which in reality are often used instead of cable lines in steep terrain. Roads missing in the road dataset will also reduce the forest area assigned to ground-based extraction. Our results show that, using the criteria given in Table 6, the Swiss forest road network is suitable for economic wood extraction in about 80% of the forest area in the Jura and Plateau regions; this percentage is considerably lower in the Alps and Southern Alps (30 and 16%, respectively), while the Pre-Alps region falls into the middle with 61%. In the Plateau and Jura regions, forest areas deemed "not suitable" are often limited to small areas with steep topography and forest patches served only by smaller roads. In the regions with more mountainous topography, larger roads can sometimes only be found along the valley bottom, and correspondingly, large areas fall into the "not suitable" category. We compared the resulting extraction methods to data from the 4th Swiss NFI interview survey on the extraction method that was used or might be used on the NFI sample plots (1.41-km grid). In forests where harvesting actually took place in the time period since the previous survey, the dominant extraction method was "ground-based" (80% ± 1 of the area), followed by "tower-yarding" (15% ± 1) and "helicopter" (5% ± 0) according to NFI data. The most probable extraction method for the forests that were not harvested in the same time period was "ground-based" (41% ± 1), followed by "tower-yarding" (30% ± 1) and "helicopter" (29% ± 1) (WSL 2020). This makes sense because a part of the area where expensive extraction methods would have to be applied is, in reality, not harvested at all.
For the entire forest area, harvested or not, ground-based extraction was indicated for 53% of the area, tower-yarding for 25% and helicopter for 21% according to NFI data. The higher percentage of ground-based extraction can be explained by the fact that in the NFI survey, ground-based extraction includes winching. In addition, the extraction method often depends on the machinery available in a forest enterprise. We also presented our results (the extraction method map and the general suitability map) to the local forest services in two cantons, Basel and Bern, located in different parts of Switzerland, for feedback. The evaluated area covers the Jura, Plateau, Pre-Alps and Alps regions and represents the entire range of topographies in Switzerland. While our contact persons generally considered the results to be accurate, there were also some minor points of disagreement (S. Blatter, M. Opiasa, personal communication, 2019): first, the areas allocated to a certain extraction method were sometimes very small, in some cases down to individual 10 × 10 m pixels. This is due to the combination of different input data with different spatial granularity and cannot be completely excluded, although applying different extraction methods to very small parcels does not make sense in practice. We stress that the resulting extraction-method map is suitable for giving a general overview, not detailed instructions. Second, in practice, wood extraction systems were used in the cantons that were not considered in our model (winching in areas with a slope of more than 35%). Still, we chose not to include any additional methods in the model in order to keep model complexity within limits. Third, the forest services reported additional roads that were missing in our dataset, sometimes but not always because they had been constructed after the forest road survey. This highlights the importance of the completeness and currency of the road input dataset. Fourth, cable-yarding was deemed unrealistic in some areas because no anchor trees were available or the distances were too short, or impossible because of very steep slopes (rock faces). We changed the model to exclude cable-yarding across very steep slopes. Fifth, in some cases the soil trafficability was reported to be incorrect, for example, where waterlogging occurred. While a better soil property map for all of Switzerland was not available, this stresses the relevance of detailed soil information for the assessment of wood extraction methods. When using the model for smaller areas where better soil information is available, it should be included in the input data. The accuracy of the results depends strongly on the reliability of the road data, i.e. on how correct and how up to date they are. It is thus very important to use a road dataset that provides the necessary information on bearing capacities. This information might have been documented during road construction, could be obtained from the owner of the road, requested from forestry services (as in the Swiss NFI) or collected in the field, but in any case it should be consistent throughout the study area. The effect of different raster resolutions was not analysed in this study, but it would be important to investigate. We speculate that a higher resolution of the input data might improve results in some areas, but also introduce noise (i.e. too much detail) in other areas and, in general, lead to more small-scale effects in the results (i.e. very small areas with the same assigned extraction method).
This effect might limit their practical value, but might be negligible in the case of a general evaluation of a large region. Dupire et al. (2015) demonstrated that in their study area the DTM resolution had an effect on the extraction distance for ground-based as well as cable-road extraction. Our model uses a set of three different extraction methods (with different lengths for the cable-yarding), excluding other methods that may be applied in some regions. Another critical factor is the soil suitability map, which has limited resolution. A better representation of soil trafficability, describing vulnerability to compaction (e.g. areas prone to waterlogging, or skeletal fraction) and terrain roughness, might significantly improve the assignment of a suitable extraction method. The model could also be improved by better incorporating the analysis of obstacles to ground-based extraction. Our model focuses on the economic efficiency of wood extraction and hauling. Depending on the problem to be solved, other factors such as ecological or safety considerations have to be taken into account. Extraction costs are difficult to estimate and vary with time; further investigation would be required to analyse the model's sensitivity to changing cost assumptions. While absolute cost estimations should be used with care, given the difficulty of assigning accurate costs to wood extraction procedures and hauling, our model is better suited to giving a reasonable relative classification over a large area, for example, by using a suitability index such as the rating shown in Table 6. From the point of view of planning improvements of the forest road network to increase economic efficiency, it would also be important to identify areas where road densities are higher than necessary.

Conclusions

In this paper, we have presented an approach that allows the economic efficiency of the forest road network to be assessed in the heterogeneous terrain of Switzerland. Our results show that the model developed by Bont et al. (2018) gives a reasonable assessment of the relative accessibility of wood in the forests of a fairly large region of variable terrain; however, a few changes were introduced to more realistically include areas with flat terrain or gentle slopes. The results of the assessment, together with more detailed information on forest composition, protection function and local soil conditions, for example, can be used to identify areas of insufficient efficiency. This is important because it allows limited means for road construction and improvement to be allocated in a more economically efficient way. The main advantage of our model compared to assessment by expert opinion is that a large area can be assessed objectively, using the same criteria everywhere, thus providing comparable results. This comes at the cost of ignoring local detail. In contrast to many classical approaches, such as road density analysis, our model gives spatially explicit results. The main advantage of our model compared to other area-wide (spatially explicit) methods is that off- and on-road wood transport are considered simultaneously. Our results show that the suitability of the Swiss forest road network for economic wood extraction varies strongly between regions, with the best results for the Jura and Plateau regions. The lowest percentage of forest area with a suitable forest road network is found in the Southern Alps.
Ground-based extraction only plays a significant role in the Jura and Plateau regions, whereas in the Pre-Alps and Alps regions, cable-yarding is assigned to a larger percentage of the forest area. In the Southern Alps, the largest share of the forest area falls into the category of "unsuitable terrain". The results are not intended as a precise recommendation of which extraction method should be used at a certain place, but as an indication of which areas are currently not covered by a road network of sufficient density and bearing capacity for wood to be harvested economically. The power of the method lies in making the assessment of different areas comparable, whereas the absolute numbers of the cost estimates strongly depend on the cost assumptions in the model, for which reliable numbers are often difficult to obtain. Individual forest managers might arrive at different conclusions when considering the available machinery and preferred vehicle types in their forest districts, as well as the wood volume to be harvested in a given harvesting operation. Note that protective forests are included in the analysed forest area but do not need to fulfil the same efficiency criteria as other forests, since wood harvesting will be conducted there even if it is not economical. Apart from the assessment of the status quo of the forest road network in a given area, the model could be applied to specific case studies such as the evaluation of the wood potential for suggested saw mill locations, or the analysis of the effects of road upgrades or removals.
Low-frequency magnetic field fluctuations in Venus' solar wind interaction region: Venus Express observations

We investigate wave properties of low-frequency magnetic field fluctuations in Venus' solar wind interaction region based on the measurements made on board the Venus Express spacecraft. The orbit geometry is very suitable for investigating the fluctuations in Venus' low-altitude magnetosheath and mid-magnetotail and provides an opportunity for a comparative study of low-frequency waves at Venus and Mars. The spatial distributions of the wave properties, in particular in the dayside and nightside magnetosheath as well as in the tail and mantle region, are similar to observations at Mars. As both planets do not have a global magnetic field, the interaction process of the solar wind with both planets is similar and leads to similar instabilities and wave structures. We focus on the spatial distribution of the wave intensity of the fluctuating magnetic field and detect an enhancement of the intensity in the dayside magnetosheath and a strong decrease towards the terminator. For a detailed investigation of the intensity distribution we adopt an analytical streamline model to describe the plasma flow around Venus. This allows us to display the evolution of the intensity along different streamlines. It is assumed that the waves are generated in the vicinity of the bow shock and are convected downstream with the turbulent magnetosheath flow. However, probably neither the different Mach numbers upstream and downstream of the bow shock nor the variation of the cross-sectional area and the flow velocity along the streamlines play an important role in explaining the observed concentration of wave intensity in the dayside magnetosheath and the decay towards the nightside magnetosheath. But the concept of freely evolving or decaying turbulence is in good qualitative agreement with the observations, as we observe a power-law decay of the intensity along the streamlines. The observations support the assumption of wave convection through the magnetosheath, but reveal at the same time that wave sources may exist not only at the bow shock, but also in the magnetosheath.

Correspondence to: L. Guicking (l.guicking@tu-bs.de)

Introduction

Waves in plasmas, which are generally considered as fluctuations in the electric field, the magnetic field, the density, and the temperature, play an important role in the interaction processes of the solar wind with planets and other solar system bodies. Because the particle densities in space plasmas surrounding the obstacles are low, collisions between particles occur rarely and the transfer of momentum and energy can only be accomplished by waves. Hence, it is important to study wave characteristics, their origins, and generation mechanisms in order to improve our understanding of the complex interaction processes. Many observations of waves in the ultra-low-frequency and low-frequency range at various planets have been reported (e.g. Glassmeier and Espley, 2006). Above all, it is interesting to study the plasma environment of Venus, because it does not possess an intrinsic magnetic field and the solar wind interaction is similar to that at Mars (e.g. Cloutier et al., 1999), which also lacks a global planetary magnetic field (e.g. Acuña et al., 1998). A statistical study of low-frequency magnetic field oscillations in the Martian plasma environment is presented by Espley et al. (2004).
Since the early 1960s, Venus has been an object of exploration by more than 20 spacecraft missions from the United States and the former Soviet Union. However, most of the current understanding of the solar wind interaction with Venus comes from the Pioneer Venus Orbiter (PVO), due to its long-lasting mission from 1978 to 1992 (Russell, 1991). At Venus the solar wind interacts directly with the planet's upper atmosphere, creating various boundaries and regions in the Venusian plasma environment (e.g. Luhmann, 1986). Observations of plasma waves, mainly detected by the Orbiter Electric Field Detector (OEFD) on board the PVO at frequencies from 100 Hz to 30 kHz, are summarised by Huba and Strangeway (1997) and Strangeway (1991) and are compared to observations at Mars by Strangeway (2004). The temporal resolution of the PVO magnetometer samples reaches 12 Hz (Russell et al., 1980) and thus allows magnetic field fluctuations up to a few Hz to be investigated. Waves upstream of the Venusian bow shock have been studied by Orlowski et al. (1994) and Strangeway and Crawford (1995). Downstream of the bow shock, Brace et al. (1983) detected ionospheric wave structures nightward of the terminator which they call post-terminator ionospheric waves. Large magnetic field fluctuations in the magnetosheath of Venus with periods from 10 to 40 s have been observed by Luhmann et al. (1983). They supposed that these waves are most likely generated in the vicinity of the quasi-parallel bow shock and are convected downstream in the magnetosheath with the solar wind plasma. Their possible origin has been studied by means of numerical simulations by Winske (1986), suggesting that plasma instabilities can generate these waves either directly by the interaction of the solar wind with the quasi-parallel bow shock or with the oxygen ions of planetary origin (pickup ions). Luhmann (1995) investigated magnetic field fluctuations in the low-altitude subsolar magnetosheath and found predominantly linearly polarised waves of transverse character with regard to the background magnetic field, which have also been observed at Earth. Grebowsky et al. (2004) observed ultra-low-frequency waves in the vicinity of detected ion pickup regions, which suggests an association of wave generation with the pickup ions. The current knowledge about Venus and its space environment is now further completed by the Venus Express spacecraft, the first European mission to the planet Venus (Titov et al., 2006). Launched in November 2005, the spacecraft arrived at Venus in April 2006 and was inserted into a polar orbit with a period of 24 h. The orbit geometry of Venus Express allows magnetic field measurements in the low-altitude region near the terminator and the mid-magnetotail region, which were not covered by the PVO (Zhang et al., 2006). The better coverage of the Venusian dayside and nightside magnetosheath as well as the magnetotail provides an opportunity to study magnetic field fluctuations in these regions statistically and in more detail. Recently, proton cyclotron waves upstream of the Venusian bow shock (Delva et al., 2008a,b,c) and mirror mode structures in the magnetosheath (Volwerk et al., 2008a,b) have been detected using the magnetic field measurements of the fluxgate magnetometer on board the Venus Express spacecraft (Zhang et al., 2006). Vörös et al. (2008a,b) studied properties of magnetic field fluctuations in the Venusian magnetosheath and wake and observed varying scaling features of the fluctuations in the different regions.
In this work we present a statistical study of magnetic field fluctuations near Venus in the low-frequency regime. We adopt the definition given by Espley et al. (2004), who refer to frequencies near or below the proton gyrofrequency as the low-frequency range and to frequencies below the lowest local ion gyrofrequency as the ultra-low-frequency range. As Espley et al. (2004) provide a statistical study of low-frequency magnetic field oscillations at Mars, including the magnetosheath, the magnetic pileup region, and the tail, our study also provides the basis for a comparative wave study between Venus and Mars.

Data and analysis methods

In this study we use a Venus Express magnetometer data set which includes 567 orbits between April 2006 and December 2007 with a temporal resolution of 1 s. The data are corrected for perturbations caused by the spacecraft, so that they have an accuracy of about 1 nT for the absolute field and an accuracy better than 0.1 nT for the variable field. This accuracy has been achieved by a novel software procedure applied to the dual-sensor measurements of the Venus Express magnetometer (Zhang et al., 2008a,c). The data set is given in the Venus solar orbital (VSO) coordinate system, in which the x-axis points towards the sun, the y-axis is in the opposite direction to the planetary orbital motion and the z-axis completes the right-handed coordinate system, pointing northward from the orbital plane. In order to take into account the orbital velocity of Venus with respect to the mean solar wind velocity, we adopt the aberrated VSO coordinate system (x', y', z') suggested by Martinecz et al. (2008) for the presentation of the results. The transformation to the aberrated VSO coordinate system (VSO' coordinate system) is realised by a constant rotation of 5° around the z-axis. Figure 1 shows the spatial distribution of the observed magnetic field strength in a cylindrical coordinate system in which the magnetic field strength is averaged over the directions around the x'-axis. Thus, cylindrical symmetry is assumed with regard to the x'-axis, leading to a two-dimensional picture with the axes x_cyl = x' and y_cyl = √(y'² + z'²). This means that the x_cyl-axis represents the apparent solar wind direction and the y_cyl-axis the distance from the x_cyl-axis (Martinecz et al., 2008). This representation is convenient for locating regions and boundaries at Venus.
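As an illustration of the coordinate handling just described, the following minimal sketch rotates VSO coordinates by the 5° aberration angle and projects them into the cylindrical system used for the maps. The sign convention of the rotation and the function names are our assumptions.

import numpy as np

R_V = 6051.8  # Venus radius in km

def vso_to_cylindrical(x, y, z, aberration_deg=5.0):
    """VSO -> aberrated VSO' (rotation about z) -> cylindrical map coordinates."""
    a = np.radians(aberration_deg)
    xp = np.cos(a) * x + np.sin(a) * y    # rotate about the z-axis
    yp = -np.sin(a) * x + np.cos(a) * y
    x_cyl = xp                            # apparent solar wind direction
    y_cyl = np.hypot(yp, z)               # distance from the x'-axis
    return x_cyl / R_V, y_cyl / R_V       # in Venus radii, ready for 0.05 R_V binning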
In Fig. 1 the data are presented in the range −4 R_V ≤ x_cyl ≤ 2 R_V and y_cyl ≤ 4 R_V, and each colour-coded bin has a size of 0.05 R_V × 0.05 R_V (R_V: Venus radius). The locations of the bow shock (BS), the upper mantle boundary (UMB), and the ion composition boundary (ICB), determined by the models of Martinecz et al. (2009), are also plotted for orientation and for distinguishing the different solar wind interaction regions. The magnetic field strength becomes enhanced downstream of the bow shock. This region downstream of the bow shock, between the bow shock and the UMB, is the so-called magnetosheath and is characterised by slowed-down and heated plasma with respect to the solar wind plasma upstream of the bow shock. The field piles up on the dayside of the planet and forms the magnetic barrier (Zhang et al., 2008b). The UMB and the ICB confine the mantle region, which is characterised by a mixture of solar wind ions and ions of planetary origin. Below the ICB the solar wind protons disappear. Finally, the draping of the magnetic field lines around the planet leads to the formation of the magnetotail, in which the magnetic field strength is also slightly enhanced compared to the strength upstream of the bow shock. For our statistical study, intervals with a length of 100 s are selected, with a shift of 1 s from one interval to the next. Gaps in the data set greater than 1.5 s are excluded from the analysis. The interval length is chosen as a trade-off between the temporal and spatial resolution as well as the occurrence of data gaps. With respect to the bin size, this means that almost every bin is well covered by observations, as shown in Fig. 2. The statistical analysis is performed in the frequency range 30 to 300 mHz, as we focus on low-frequency magnetic field fluctuations. The lower boundary of this frequency range acts as a band-pass filter such that oscillations with periods longer than 33 s do not contribute to the statistical results. The (angular) gyrofrequency is defined as ω = qB/m, where B is the magnetic field strength and q and m are the electric charge and the mass of the ion species. The upper boundary of the analysed frequency range, 300 mHz, corresponds to the proton gyrofrequency at a field strength of about 20 nT, a typical magnetic field strength in the Venusian magnetosheath (Fig. 1); therefore the frequency range covers the low-frequency range in Venus' solar wind interaction region well. In spite of that, we realise that it would also be attractive to expand or reduce the frequency band depending on the conditions in specific regions (e.g. with respect to the gyrofrequency), which would allow a comparison of different frequency bands. In particular, studying the ultra-low-frequency range in more detail would be worthwhile, but we mention again that increasing the frequency resolution comes at the expense of the spatial resolution and is thus always a balancing issue.
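The correspondence between the 300 mHz band limit and the proton gyrofrequency at 20 nT can be verified with a one-line calculation (f = ω/2π = qB/(2πm); constants rounded):

import math

q, m_p = 1.602e-19, 1.673e-27        # proton charge (C) and mass (kg)
B = 20e-9                            # typical magnetosheath field strength (T)
f_gyro = q * B / (2 * math.pi * m_p)
print(f"f_gyro = {f_gyro * 1e3:.0f} mHz")   # ~305 mHz, close to the 300 mHz used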
Spectral analysis is widely used to determine wave properties such as the wave power, ellipticity, polarisation, and propagation direction. The wave analysis methods are well developed (e.g. Arthur et al., 1976; Samson, 1973; McPherron et al., 1972; Means, 1972; Sonnerup and Cahill, 1967) and are frequently applied to space time series data. The methods are based on several assumptions and there are also limitations. First of all, the analysis method is based on the assumption of plane waves, and there is an ambiguity of ±180° in determining the propagation direction of the waves, as one determines the minimum variance direction only. This comes from the fact that the magnetic field is measured by a single spacecraft. Furthermore, from single-spacecraft measurements one can only determine wave parameters in the spacecraft frame of reference, which is subject to the Doppler shift of frequencies when the measurements are made in a flow. Finally, the analysis of a certain frequency band leads to averaged wave parameters for this frequency band. The wave analysis is performed as follows: (1) The magnetic field components of each time interval are transformed to a mean-field-aligned (MFA) coordinate system in which the principal direction (z-axis) is defined as the direction of the mean magnetic field in the interval, the second direction (x-axis) is perpendicular to the plane spanned by the vector pointing into the new z-direction and the spacecraft position vector in VSO coordinates, and the third direction (y-axis) completes the right-handed coordinate system. This transformation allows us to distinguish between the transverse and the compressional power of the fluctuations with respect to the mean magnetic field. (2) The data in the MFA system are Fourier transformed into the frequency domain. (3) With the Fourier transform B(ω) we calculate the power spectral density matrix P for the selected frequency range,

P_ij(ω) = B_i(ω) B_j*(ω),   (1)

where i and j are the three components of the magnetic field (i, j = 1, 2, 3) and the asterisk denotes the complex conjugate. The power spectral density matrix is a 3 × 3 complex matrix and can therefore be written as the sum of its real and imaginary parts (P_ij = Re(P_ij) + i Im(P_ij)). In the MFA system two diagonal elements, P_11 and P_22, represent an estimate of the transverse power, whereas P_33 corresponds to the compressional power. (4) Finally, the principal axis analysis is applied in order to determine the minimum and maximum variance directions. In particular, diagonalisation of the real part of the power spectral matrix yields three eigenvectors (ξ_1, ξ_2, ξ_3) and three eigenvalues (λ_1, λ_2, λ_3) for the maximum, intermediate, and minimum variance directions, respectively. The matrix of the three eigenvectors, T, can then be used to transform the entire spectral matrix P into the principal axis (PA) system via P' = T⁻¹PT. Various wave properties are derived from the spectral matrix, the eigenvectors, and the eigenvalues, and the results are presented in the following section. Here one should note that the propagation direction of the wave is well determined only if the intermediate eigenvalue is sufficiently larger than the minimum eigenvalue; otherwise the fluctuations are isotropic and the polarisation plane is not clearly determined.
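A minimal numerical sketch of steps (1) to (4), assuming a single 100 s interval sampled at 1 s; the normalisation convention and all names are our choices, not the authors' code:

import numpy as np

def wave_analysis(b_xyz, pos, dt=1.0, band=(0.03, 0.3)):
    """b_xyz: (N, 3) magnetic field in VSO; pos: spacecraft position vector."""
    # (1) mean-field-aligned system: z along <B>, x perpendicular to plane(z, pos)
    b0 = b_xyz.mean(axis=0)
    ez = b0 / np.linalg.norm(b0)
    ex = np.cross(ez, pos)
    ex /= np.linalg.norm(ex)
    ey = np.cross(ez, ex)                       # completes the right-handed triad
    b_mfa = b_xyz @ np.column_stack((ex, ey, ez))
    # (2) Fourier transform of the detrended components
    Bf = np.fft.rfft(b_mfa - b_mfa.mean(axis=0), axis=0)
    f = np.fft.rfftfreq(b_mfa.shape[0], dt)
    sel = (f >= band[0]) & (f <= band[1])
    # (3) band-averaged spectral matrix P_ij = <B_i B_j*> (one-sided PSD units)
    norm = 2.0 * dt / b_mfa.shape[0]
    P = norm * np.einsum('fi,fj->ij', Bf[sel], Bf[sel].conj()) / sel.sum()
    # (4) principal axes of Re(P); sort eigenvalues lam1 >= lam2 >= lam3
    lam, T = np.linalg.eigh(P.real)
    return P, lam[::-1], T[:, ::-1]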
Results

The analysis procedure of the previous section is demonstrated for one interval (25 May 2006, 01:27:54 UTC-01:29:34 UTC). In this time interval Venus Express is located in the high-latitude dayside magnetosheath. Figure 3 shows the magnetic field components in VSO coordinates (first to third panel in Fig. 3a) and MFA coordinates (fourth to sixth panel in Fig. 3a) as well as the magnetic field strength (bottom panel in Fig. 3a), the orbit trajectory in the cylindrical VSO coordinate system (Fig. 3b) and the power spectrum (Fig. 3c). Figure 3c shows the power spectra of the magnetic field data in the frequency range from 30 to 500 mHz for the total power (P_total) as well as the transverse (P_⊥) and the compressional (P_∥) components (a window function is applied to the spectra in order to increase the significance of the structures, but the spectra do not exhibit spectral peaks larger than the 95% confidence interval in this single spectrum); the dotted straight line indicates a linear fit in the logarithmically scaled spectrum, revealing the spectral index α = −1.56. The mean magnetic field is 38 nT, corresponding to the proton gyrofrequency f_gyro = 580 mHz. The field components in the MFA coordinate system are Fourier transformed, and the transverse power P_⊥ and the compressional power P_∥ are determined from the power spectral density matrix. We furthermore define the quantity ζ as

ζ = (P_⊥ − P_∥) / (P_⊥ + P_∥),   (2)

which is positive in the case of dominating transverse power and negative in the case of dominating compressional power. In the example interval we obtain the value ζ = 0.31; therefore the transverse power exceeds the compressional one. This can also be seen in the power spectrum plot (Fig. 3c), in which the dashed line represents the transverse power spectral density P_⊥ and the dashed-dotted line the compressional power P_∥. After rotation into the PA coordinate system, we obtain further parameters describing the wave properties, namely the ellipticity, the sense of polarisation, the propagation direction, and the intensity. The absolute value of the ellipticity |ε| can be determined from the eigenvalues and is defined as (Song and Russell, 1999)

|ε| = √((λ_2 − λ_3)/(λ_1 − λ_3)),   (3)

assuming isotropic noise, which means that λ_3 corresponds to the noise in the k-direction. The sign of ε is the same as the sign of Im(P_12). The average sense of polarisation over the frequency band yields the value ε = −0.72, which indicates a left-handed polarisation for the selected frequency band. Furthermore, the angle between the minimum variance direction and the mean magnetic field is about 90°, so the wave vector direction is practically perpendicular to the mean magnetic field. Finally, the intensity of the fluctuations I is defined as (Song and Russell, 1999)

I = P_11 + P_22 + P_33,   (4)

averaged over the selected frequency band. The intensity is a mean spectral density of the chosen frequency band, and we obtain I = 157.2 nT²/Hz. The intensity is an estimate of the total magnetic energy density in the frequency range 30 to 300 mHz. In summary, the analysis of the wave properties in this example exhibits dominating transverse, slightly left-handed polarised fluctuations with a large propagation angle relative to the mean field and enhanced wave intensity. We apply this analysis procedure to all available time intervals of the data set. The spatial distributions of the various wave parameters are presented in the same fashion as in Fig. 1.
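Continuing the sketch above, the band-averaged quantities ζ, ε, and I of Eqs. (2) to (4) follow directly from the spectral matrix and its eigen-decomposition (again, names and conventions are ours):

import numpy as np

def wave_properties(P, lam, T):
    """P: band-averaged 3x3 spectral matrix in MFA; lam: sorted eigenvalues of Re(P)."""
    p_perp = P[0, 0].real + P[1, 1].real            # transverse power (MFA x, y)
    p_par = P[2, 2].real                            # compressional power (MFA z)
    zeta = (p_perp - p_par) / (p_perp + p_par)      # Eq. (2)
    ellip = np.sqrt((lam[1] - lam[2]) / (lam[0] - lam[2]))  # Eq. (3), |eps|
    ellip *= np.sign(P[0, 1].imag)                  # sign from Im(P_12)
    intensity = lam.sum()                           # Eq. (4): trace of Re(P)
    # angle between minimum variance direction and mean field (z-axis in MFA)
    theta_kb = np.degrees(np.arccos(abs(T[2, 2])))
    return zeta, ellip, intensity, theta_kb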
Transverse vs. compressional power

Figure 4 displays the spatial distribution of the parameter ζ, the difference between the transverse and compressional power relative to the total power. Some features can be observed here: in the vicinity of the dayside bow shock the transverse and compressional power are almost equal, whereas the compressional power tends to be slightly greater in the vicinity of the nightside bow shock, because of the increased occurrence of red-coloured bins. However, the compressional power by no means clearly dominates this region. In the dayside magnetosheath as well as in the mantle and tail region the compressional power is slightly higher than that in the nightside magnetosheath, but the transverse power still exceeds the compressional power. In the solar wind (upstream of the bow shock) the transverse power dominates over the compressional one.

Ellipticity

The spatial distribution of the ellipticity is presented in Fig. 5. On average, the absolute value of the ellipticity |ε| reaches higher values in the magnetosheath (around 0.6) with respect to the upstream solar wind and the tail and mantle region. The waves are thus only moderately polarised, rarely exceeding the value 0.6, and a few regions also exist where the ellipticity is less than 0.6. In the upstream solar wind and in the mantle and tail region the ellipticity values are rather mixed and no clear tendency can be observed.

Polarisation

Figure 6 shows the spatial distribution of the average sense of polarisation of the low-frequency magnetic field fluctuations. The distribution does not show any clearly distinct regions of either dominating left-handed or right-handed polarisation close to Venus. At larger distances from the planet there are areas of enhanced polarisation. However, neither the magnetosheath nor the tail and mantle region shows large-scale connected areas with a preferred sense of polarisation. It should also be noted that for almost linear polarisation it is not meaningful to discuss the sense of polarisation, due to the error in the determination of the ellipticity; the results should be treated with care. Furthermore, direct comparisons to polarisations derived from theoretical studies can lead to misinterpretations, as the Doppler shift from the plasma frame to the spacecraft frame of reference may reverse the sense of polarisation. Waves which propagate downstream from the bow shock do not change their polarisation; a reversal only occurs if the waves propagate in the direction opposite to the solar wind flow with a phase velocity lower than the solar wind flow velocity (e.g. Hoppe and Russell, 1983). In this context one should note that, on the one hand, averaging over the absolute values of the ellipticity may shift the average of a bin away from zero, but on the other hand a reversal of the polarisation between the plasma frame and the spacecraft frame of reference could also act conversely. Therefore, some caution has to be exercised in discussing the observations displayed in Figs. 5 and 6. With a single spacecraft we are not able to correct for the Doppler shift.

Wave vector direction

As already mentioned in Sect. 2, the determination of the propagation direction or the wave vector direction is valid only when the ratio of the intermediate to the minimum eigenvalue (λ_2/λ_3) is large enough.
Espley et al. (2004) used a ratio of 2 as a criterion in order to determine the wave vector direction of low-frequency magnetic field oscillations in the Martian plasma environment. We note that, according to Song and Russell (1999), high ratios are favoured, because the propagation direction is better determined for higher ratios of the eigenvalues. Therefore, we use a more stringent condition than Espley et al. (2004) and consider only cases with an intermediate-to-minimum eigenvalue ratio greater than 5. This is a compromise, because otherwise we would lose a significant portion of the data set in the analysis. We accept possible limitations insofar as we give a statistical result. Figure 7 shows the spatial distribution of the angle between the wave vector and the mean magnetic field in Venus' solar wind interaction region for the cases satisfying the specified eigenvalue ratio criterion. In Fig. 7 one can see that in the low-altitude dayside magnetosheath the wave vector directions are almost perpendicular to the mean magnetic field. Also in the mantle and tail region a majority of the angles reaches values greater than 45°, whereas in the nightside magnetosheath a tendency towards smaller angles can be observed; in particular, events with an angle below 45° occur quite frequently. Data gaps in the magnetosheath are due to the selection criterion on the intermediate-to-minimum eigenvalue ratio. In the upstream solar wind the results are fairly mixed, with a slight predominance of angles greater than 45°.

Intensity

The spatial distribution of the wave intensity I about the mean field in the plasma environment surrounding Venus is presented in Fig. 8. The intensity is largest in the entire dayside magnetosheath and decreases rapidly towards the terminator region. In the nightside magnetosheath the intensity is still moderately enhanced in the vicinity of the bow shock, but further downstream and in the mantle and tail region only very small wave intensities can be observed. We would like to note that the lower end of the colour bar displaying the intensity in Fig. 8 lies above the threshold set by the data accuracy. Almost all intensities are above this threshold, and therefore the artificial contribution to our results is expected to be very small. In regions with lower intensities, though, the wave property results may be subject to a somewhat increased uncertainty compared to regions with larger intensities. The wave intensity distribution is further investigated and discussed in more detail in the next section, as it may give a hint about how rapidly turbulence evolves in the magnetosheath. This is of great interest in fundamental plasma physics.

Spectral indices

The power spectra of both the transverse and the compressional fluctuations exhibit a power-law form in the analysed frequency range. From our statistical analysis we have determined the average spectral indices of the total, transverse, and compressional power in the magnetosheath, mantle, and tail region (Table 1), providing an estimate of the different turbulent states. The spectral indices of the total power in the dayside and nightside magnetosheath are slightly flatter than in the mantle and tail region. Furthermore, in the dayside and nightside magnetosheath as well as in the mantle the spectral indices of the compressional component are steeper than the transverse ones. In the tail the decay of the transverse component is steepest.
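The spectral indices of Table 1 amount to linear fits of log power versus log frequency; a minimal sketch (our own illustration, not the authors' code):

import numpy as np

def spectral_index(f, psd, band=(0.03, 0.3)):
    """Fit psd ~ f^alpha over the band and return the slope alpha."""
    sel = (f >= band[0]) & (f <= band[1]) & (psd > 0)
    alpha, _ = np.polyfit(np.log10(f[sel]), np.log10(psd[sel]), 1)
    return alpha   # e.g. about -1.56 in the example interval of Fig. 3c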
Discussion

The statistical analysis reveals the wave properties of the low-frequency magnetic field fluctuations in the various Venusian solar wind interaction regions. The observations are summarised and compared to the wave properties observed at Mars. We also investigate the spatial distribution of the intensity in more detail and discuss possible mechanisms and processes which may lead to the observed features.

Wave properties at Venus and Mars

In the dayside magnetosheath and the vicinity of the bow shock the compressional and transverse power are approximately equal and the waves are moderately elliptically polarised with changing senses of polarisation. At larger distances from the planet the respective polarisation increases; closer to the planet the senses of polarisation are less pronounced. The wave vector has the largest angles with respect to the mean magnetic field in the low-altitude magnetosheath and smaller angles at higher altitudes. The wave intensity is enhanced in the entire dayside magnetosheath and drops rapidly at the terminator. Towards the nightside magnetosheath the transverse power increases and exceeds the compressional power in the majority of cases, except for the regions close to the bow shock. The ellipticity is still moderate, but in a few areas the ellipticity becomes lower, and left-handed or right-handed senses of polarisation can locally be observed. A significant part of the angles between the propagation direction and the mean magnetic field has values below 45°, but they increase near the bow shock and the UMB. The wave intensity is significantly lower compared to the dayside magnetosheath, but in the vicinity of the bow shock areas of higher intensity are still present. The transverse and compressional power in the mantle and tail region are fairly mixed, with a slight majority of areas in which the transverse power dominates. The ellipticity varies over a wider range compared to the magnetosheath, and areas of locally dominating left-handed or right-handed polarisation also occur, but rather at larger distances from the planet. The wave vector directions have in most cases angles greater than 45° with respect to the mean magnetic field. The intensity is at a very low level in the entire mantle and tail region. In contrast, the upstream solar wind region is characterised by dominating transverse power with a broad spectrum of ellipticity values and various angles between the propagation direction of the waves and the mean field, with no clear areas of enhanced wave intensity. The spectral indices (Table 1) indicate a turbulent behaviour of the magnetic field fluctuations at Venus. In particular, the magnetosheath reveals slightly flatter spectra than expected for hydrodynamic (α = −5/3) and magnetohydrodynamic (MHD) turbulence (α = −3/2), whereas the mantle and tail region shows steeper spectra, meaning that the spectral power decreases more rapidly there.
The spectral slopes observed by Vörös et al. (2008a,b) for a case study along the Venus Express trajectory and a statistical analysis of 20 days are α ≈ −1 in the magnetosheath (not close to boundaries; termed noisy fluctuations), α ≈ −2.5 near the terminator and in the nightside near-planet wake (termed wavy structures), and α ≈ −1.6 close to boundaries (termed turbulent regions). Hence, our observations, covering a much longer time interval, are rather similar in the tail and mantle region, but differ in the magnetosheath. The different behaviour of the transverse and compressional indices indicates that an anisotropy develops with increasing frequency, which tends to be most pronounced in the dayside magnetosheath. Espley et al. (2004) presented a statistical study of low-frequency magnetic field oscillations in the Martian magnetosheath, the magnetic pileup region, and the tail using observations of the magnetometer/electron reflectometer (MAG/ER) experiment on board Mars Global Surveyor (MGS). Below the local proton gyrofrequency they found waves in the dayside magnetosheath which are predominantly compressional and elliptically polarised, with wave vectors that have large angles relative to the mean magnetic field. These oscillations were identified as mirror mode fluctuations. In the nightside magnetosheath they observed waves which are predominantly transverse and elliptical, propagating at smaller angles relative to the mean field. The waves were associated with ion/ion-resonant instabilities arising from counter-streaming plasma populations such as solar wind pickup ions of planetary origin. Waves in the Martian magnetic pileup region and tail have considerably smaller amplitudes, with linear polarisation and oblique propagation directions. They may be a mixture of different wave modes. The intensity of the oscillations was not presented in their study. Our dayside observations at Venus reveal similar results in comparison to the wave properties in the Martian magnetosheath, except for the fact that the compressional fluctuations are not dominating at Venus. Espley et al. (2004) interpret their observed fluctuations as mirror mode waves, but they also admit that this interpretation is in conflict with the observation of a moderate elliptical polarisation of the waves, as theoretical studies suggest that mirror modes are linearly polarised. But they argue that elliptically polarised mirror mode waves have also been observed in Earth's magnetosheath. Volwerk et al. (2008a,b) detected mirror mode-like structures in Venus' dayside magnetosheath, so the properties of mirror modes may contribute to our statistical results.
On the other hand, Luhmann et al. (1983) suggested that magnetic field fluctuations may be generated in the vicinity of the quasi-parallel portion of the bow shock by plasma instabilities and are convected downstream through the magnetosheath. Indeed, this is in agreement with the results of numerical simulations by Winske (1986), who studied the origin of large magnetic fluctuations in the magnetosheath of Venus and concluded that the most likely source of these waves is the bow shock itself. But the numerical simulations have also shown that waves could be generated by direct interaction of the solar wind with oxygen ions of planetary origin, which tends to generate right-handed polarised waves, whereas the bow shock related processes would generate left-handed polarised waves. However, the simulation by Winske (1986) is performed in an idealised situation, and our results for the sense of polarisation can be subject to an uncertainty due to the Doppler shift. As already mentioned in Sect. 3.3, upward propagating waves could reverse their sense of polarisation. Assuming that the waves are mainly bow shock generated and propagate downstream, one may conclude that right-handed and left-handed wave generation mechanisms are balanced, as our observations show that the senses of polarisation are rather mixed. At Venus the average Parker spiral angle is about 35° (Luhmann et al., 1997). Thus, we estimate that the most developed quasi-parallel bow shock geometry in the ecliptic plane (dusk sector), with angles between the bow shock normal n and the interplanetary magnetic field (IMF) B of less than 10°, appears at moderate solar zenith angles (SZAs; the SZA is the angle between the x-axis and the line connecting the point of origin with a point on the bow shock in the VSO coordinate system) of about 30° to 70°. In our cylindrical coordinate system the wave intensity related to quasi-parallel bow shock processes would occur in the same angle range, but due to the averaging the wave intensity may be reduced. We notice a localisation of the majority of red-coloured bins in Fig. 8 between 30° and 70° SZA. Altogether, the enhanced wave intensity downstream of the bow shock is remarkable. Therefore, we attribute a substantial part of the wave activity to bow shock related processes and interpret that, in particular, the wave generation could be associated with the quasi-parallel shock itself, as Luhmann et al. (1983) also suggested. However, the bins showing the largest wave intensity at low SZAs are closer to the UMB than to the bow shock, suggesting that wave sources may also exist in the magnetosheath and in the vicinity of the UMB, respectively. The generation of mirror mode waves in Earth's magnetosheath is believed to occur not only at the bow shock, but also within the magnetosheath (Tátrallyay and Erdős, 2002). The origin and evolution of the mirror mode structures observed at Venus (Volwerk et al., 2008a,b) may be similar, and mirror modes are thus a candidate for waves generated downstream of the bow shock. A differentiation between the quasi-parallel and quasi-perpendicular bow shock regions based on Venus Express measurements of the IMF direction is beyond the scope of this paper and will be a subject of future work. A first case study has recently been presented by Du et al. (2009).
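The geometric estimate above can be illustrated with a back-of-the-envelope sketch. Here the shock normal direction is crudely approximated by the SZA itself (i.e. a sphere-like shock surface centred on the planet); the flaring of the real bow shock rotates the normals and shifts the resulting window tailward, toward the 30° to 70° quoted above. All numbers except the 35° Parker angle and the 10° quasi-parallel criterion are our simplifications.

import numpy as np

parker = 35.0                        # deg, average Parker spiral angle at Venus
sza = np.linspace(0.0, 110.0, 1101)  # solar zenith angle, deg
theta_bn = np.abs(sza - parker)      # crude theta_Bn for a sphere-like shock
qpar = sza[theta_bn < 10.0]          # quasi-parallel: theta_Bn < 10 deg
print(f"quasi-parallel shock for SZA of roughly {qpar.min():.0f}-{qpar.max():.0f} deg")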
The wave properties of the nightside magnetosheath at Venus and Mars also show similarities: at both planets the power, or amplitude, is more in the transverse direction than in the compressional one, the oscillations tend to be elliptical, and the propagation directions are below 45° relative to the mean field. Espley et al. (2004) attribute the observations at Mars to ion/ion-resonant instabilities. Finally, the tail and mantle regions of Venus show a wider range of observations. This is also similar to the Martian case, where a mixture of wave modes is believed to exist. Boundary-related processes and current systems (e.g. the ionospheric current system) may become more important there, and instabilities may cause a variety of wave modes. The presented statistical analysis of the low-frequency magnetic field fluctuations reveals first-order results for the wave characteristics in specific regions. They show only the dominating wave properties in the different interaction regions and provide a general picture of possible wave modes in Venus' solar wind interaction region. While much of our discussion assumes MHD wave modes, we note that more complex kinetic plasma models take account of temperature anisotropies and non-Maxwellian velocity distributions, which lead to various micro-instabilities and additional wave modes. For anisotropic proton-electron plasmas, Gary et al. (1993) discuss several low-frequency plasma instabilities. For non-Maxwellian plasma distributions, Gary (1991, 1993) also presents possible wave generation mechanisms. Case studies give a detailed view of the wave modes and will be an interesting issue for future work using Venus Express data. In this paper we restrict our analysis to the statistical point of view, as it is motivated by a comparative study with Mars. The observations of the wave properties generally show results which emphasise the similarities of the interaction of the two planets with the solar wind, assuming that the observed wave properties are attributed to the same generation processes. Concerning the post-terminator waves observed by Brace et al. (1983) at Venus, we do not see a relation to our observations at the moment, since these waves are observed below 200 km altitude, while the lowest altitude of the Venus Express spacecraft was about 250 km in the data set we analysed. But Brace et al. (1983) discuss turbulence at higher altitudes as one possible source of these waves, and thus it will be interesting to see whether this phenomenon and our observations can be connected once data from lower altitudes are available.
Wave intensity and Alfvénic/magnetosonic boundary

One possibility to explain the localisation of enhanced wave activity in the dayside magnetosheath may be spatial variations of the Mach numbers in Venus' solar wind interaction region. The solar wind upstream of the bow shock is characterised by supersonic, super-Alfvénic, and supermagnetosonic velocities (e.g. Luhmann et al., 1997). The bow shock is a fast magnetosonic shock wave (e.g. Phillips and McComas, 1991), and thus the solar wind flow is slowed down to submagnetosonic velocities when crossing the shock wave. Further downstream of the shock the flow should again reach supermagnetosonic velocities at a "magnetosonic" line, similar to the so-called sonic line inferred from numerical simulations by Spreiter and Stahara (1980). A similar behaviour may occur for the Alfvén Mach number in MHD. Downstream of the bow shock the Alfvén Mach number may also become less than 1, because of the varying magnetic field strength, density, and temperature. In a sub-Alfvénic/submagnetosonic regime waves are not only convected by the flow, but can also propagate upstream and populate the entire region between the bow shock and an Alfvén or a magnetosonic line. In a super-Alfvénic/supermagnetosonic regime the wave energy transport is only in the direction of the background flow. Thus, the transition region from sub-Alfvénic/submagnetosonic to super-Alfvénic/supermagnetosonic flows could represent a reasonable boundary in the wave activity level. Without further knowledge of parameters like the plasma temperature and density, the location of such a boundary and the spatial distributions of the Mach numbers remain speculative for the moment. This can be investigated in more detail with a comprehensive plasma moment data set, which is not yet provided by the Analyser of Space Plasmas and Energetic Atoms (ASPERA-4) on board the Venus Express spacecraft. However, we determined an estimate of the Alfvén Mach number on the basis of preliminary ASPERA-4 density data and the magnetic field observations. It indicates an Alfvén Mach number greater than 1 in Venus' magnetosheath, and thus waves with velocities up to the Alfvén velocity should be convected with the plasma flow.
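The Mach number estimate just described reduces to simple arithmetic; the following is a minimal sketch, where the density, field strength, and flow speed are illustrative placeholders for magnetosheath conditions, not the preliminary ASPERA-4 values themselves.

```python
import numpy as np

# Rough Alfvén Mach number estimate, M_A = v / v_A with v_A = B / sqrt(mu0*rho).
mu0 = 4e-7 * np.pi          # Vs/(Am)
m_p = 1.67262192e-27        # proton mass, kg

n = 20.0e6                  # proton density, m^-3 (20 cm^-3, assumed)
B = 30e-9                   # magnetic field strength, T (30 nT, assumed)
v = 200e3                   # bulk flow speed, m/s (assumed)

v_A = B / np.sqrt(mu0 * n * m_p)
M_A = v / v_A
print(f"v_A = {v_A/1e3:.0f} km/s, M_A = {M_A:.1f}")   # M_A > 1 for these values
```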
Wave intensity along streamlines

As a further possibility to interpret the observed wave intensity distribution, we discuss the geometric effect on the decay of the waves, with the hypothesis that a spatial variation of the wave intensity is due to varying distances between streamlines and the change of the flow velocity. Testing this hypothesis requires knowledge of the magnetosheath flow pattern. For this, we apply an analytical streamline model to describe the flow in the magnetosheath, which allows us to trace the evolution of the intensity along different streamlines in the statistical sense. We use the model of a hydrodynamic, irrotational flow (e.g. Vallentine, 1967) past a cylinder for the dayside magnetosheath, continued by a flow parallel to a straight line for the nightside. This model was already used successfully by Luhmann et al. (1983) for tracing magnetic field fluctuations along streamlines back to the quasi-parallel portion of the bow shock. They also pointed out that the model streamlines are in good agreement with those of the gasdynamic model of Spreiter and Stahara (1980) for the solar wind interaction with Venus. We note here that the nightside mantle boundaries of Martinecz et al. (2009) are modelled in a different way than the dayside boundaries, and a continuous transition is lacking at the terminator, which cannot be reproduced by the analytical streamline model. As our focus lies on the dayside magnetosheath and the intensity on the nightside is generally much lower, no major deviations are expected by considering a straight streamline parallel to the x′-axis. The streamline functions are given as

ψ_d = v y′ (1 − r²/(x′² + y′²))
ψ_n = v y′

for the dayside (d) and the nightside (n), respectively. v is the nominal velocity (100 km/s), r is the radius of the obstacle, and x′, y′, z′ are the Cartesian coordinates. The velocity vector is tangential to the streamline at all points (which is the definition of a streamline). Furthermore, we define the velocity potential functions given as

φ_d = v x′ (1 + r²/(x′² + y′²))
φ_n = v x′

for the dayside and the nightside, respectively. The velocity potential is defined such that, when differentiated with respect to distance in any particular direction, it yields the velocity in that direction. The velocity potential lines are perpendicular to the streamlines.

Figure 9 shows the spatial distribution of the wave intensity connected to the streamlines. The intensity is averaged over two neighbouring streamlines and velocity potential lines.

Considering now a geometric effect on the wave intensity decay along a streamline, I(s), due to the varying velocity and cross sectional area, the continuity equation of a stationary flow as a function of the distance s along the streamline is given as

I(s) v(s) A(s) = I₀ v₀ A₀ = const.,  (9)

where v(s) is the velocity and A(s) the cross sectional area of the flow line. Here, we assume that there is no wave source or sink in the magnetosheath. It is also assumed that the waves are generated in the bow shock region only; wave damping is restricted to frequencies close to the proton cyclotron frequency, which is why damping is not considered here. Equation (9) determines theoretically the evolution of the wave intensity, and this can be compared to the observations. Here, the flow velocity v(s) and the cross sectional area A(s) can be estimated using the proposed flow model.

Figure 10 shows the evolution of the intensity along the different streamlines; the observed intensity generally decreases with increasing distance from the bow shock. At first, the intensity increases slightly and only then the decrease begins (streamlines 2 to 4; at streamline 1 no observations are available close to the bow shock), which may also indicate that a wave source exists further downstream of the bow shock. Generally, Fig. 10 suggests that the variations of the velocity and the cross sectional area along the streamlines are too small to account for the observed spatial decay of the wave intensity. Since the estimated intensities are almost constant, the changing velocity and cross sectional area play only a minor role in the evolution, even if one were to assume that a wave source is located in the magnetosheath. In that case the initial value of the estimated intensity in Fig. 10 would change, but the almost constant curve shape would not change significantly. For this reason, we rule out the effect of velocity and cross section variations to explain the rapid spatial decay of the wave intensity in the magnetosheath.
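A minimal sketch of this bookkeeping, assuming the planar cylinder-flow model above: two neighbouring streamlines are traced numerically, and their same-time separation is used as a rough proxy for the tube cross section A(s); in a planar incompressible flow v·A stays constant along the tube, which is why Eq. (9) then predicts a nearly constant I(s). The start points and units are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Planar irrotational flow past a cylinder: complex potential w(z) = v0*(z + r^2/z),
# so dw/dz = u - i*v. Units (speed in km/s, lengths in obstacle radii) are illustrative.
v0, r = 100.0, 1.0

def velocity(x, y):
    dwdz = v0 * (1.0 - r**2 / complex(x, y)**2)
    return dwdz.real, -dwdz.imag

def rhs(t, p):                              # dx/dt = u, dy/dt = v
    return velocity(*p)

# Two neighbouring streamlines entering the model magnetosheath
t_eval = np.linspace(0.0, 0.06, 50)
lines = [solve_ivp(rhs, [0.0, 0.06], [-3.0, y0], t_eval=t_eval, max_step=1e-3).y
         for y0 in (0.5, 0.6)]

# Along the tube: speed v(s) and width A(s) ~ separation of the two streamlines.
for k in (0, 25, 49):
    x, y = lines[0][:, k]
    u, w = velocity(x, y)
    speed = np.hypot(u, w)
    width = np.hypot(*(lines[1][:, k] - lines[0][:, k]))
    print(f"point {k}: v={speed:6.1f}, A~{width:.4f}, v*A={speed * width:.3f}")
```

The near-constant v·A (and hence I) mirrors the flat model curves in Fig. 10.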
Turbulence and wave energy transport

Magnetic field fluctuations in the magnetosheath are often interpreted to be in a turbulent state. We finally discuss wave energy transport due to turbulence in the magnetosheath, as suggested by Luhmann et al. (1983) and Winske (1986). The hypothesis to be tested is whether the energy loss of the fluctuations due to dissipation, while they are convected with the flow through the magnetosheath, is large enough to explain the spatial decay of the wave energy along magnetosheath streamlines.

The energy-decay laws for different types of turbulence, known as freely evolving or decaying turbulence, are discussed comprehensively in the literature by Biskamp (2003) and Davidson (2004). A hydrodynamical example is wind-tunnel turbulence (Davidson, 2004), which is generated by an air stream passing through a grid. The interaction of the flow with the obstacle results in turbulence which is carried downstream by the mean flow. This situation is similar to that at Venus, as we relate the wave generation mainly to the bow shock. Decaying turbulence predicts that the time evolution of the fluctuation energy behaves like a power law,

E ∝ t^(−λ),  (10)

where the exponent λ is characteristic for the turbulent system. According to Kolmogorov (1941) one can derive an exponent of λ = 10/7, while for MHD turbulence the exponent is λ = 2/3 (Biskamp, 2003). These exponents are derived under the assumption of self-similar behaviour of the turbulence. More complex models, which consider various ratios of kinetic to magnetic energy, are also possible, and they provide further solutions of Eq. (10). Note that the exact solution of Eq. (10) is E = E₀(t − t₀)^(−λ), where the constant t₀ is of the order of the initial eddy-turnover time, which is why the power law decay becomes visible only at sufficiently large times t ≫ t₀ (Biskamp, 2003).

Since the flow velocity in the magnetosheath is supposed to be super-Alfvénic and the fluctuations are more Alfvénic (transverse), we relate the distance s along a streamline to the elapsed time t since the bow shock crossing by using Taylor's hypothesis,

t(s) = ∫ ds′/v(s′).

We note that this is a reasonably good approximation for non-propagating waves like mirror modes or for downstream propagating waves in the plasma frame of reference, but the time scale would increase in the case of upward propagating waves, and Taylor's hypothesis should be used with caution. The measured intensity I is proportional to the magnetic energy density E of the fluctuations in the frequency range 30 to 300 mHz if normalised by a constant C such that E = C·I, with C = N·Δf·(2µ₀)⁻¹ (N: number of frequency samples over which has been averaged, Δf: frequency resolution of the spectra, µ₀: permeability of free space).

Figure 11 shows the evolution of the intensity with time after the bow shock crossing. A power law fit to the decaying part of the observed intensity evolution gives a good agreement with the data. Some parts of the measurements show an approximately constant behaviour (the data points closest to the bow shock, particularly in the third panel); these may be related to the constant t₀, which is of the order of a few tens of seconds.
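To illustrate the fitting step, the following is a minimal sketch using synthetic intensities of the assumed form I(t) = I₀ (t − t₀)^(−λ); all numbers are placeholders, not Venus Express data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic decaying intensity with lognormal scatter (placeholder data)
rng = np.random.default_rng(1)
t0_true, lam_true, I0_true = 30.0, 2.5, 5.0e4
t = np.linspace(60.0, 600.0, 40)                 # s after the bow shock crossing
I = I0_true * (t - t0_true)**(-lam_true) * rng.lognormal(0.0, 0.2, t.size)

def decay(t, I0, t0, lam):
    return I0 * (t - t0)**(-lam)

# Bound t0 below the first data point so (t - t0) stays positive during the fit
popt, _ = curve_fit(decay, t, I, p0=[1e4, 10.0, 2.0],
                    bounds=([1e-3, 0.0, 0.1], [1e9, 59.0, 10.0]))
print(f"fitted exponent -lambda = {-popt[2]:.2f} (text reports -1.7 to -3.9)")
```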
The observed exponents vary from −1.7 to −3.9 and are steeper than the exponents predicted by the theoretical models of turbulence; nevertheless, the observations exhibit the power law decay, and this is suggestive of turbulent decay. Considering wave sources in the magnetosheath, the t = 0 point does not necessarily have to be located at the bow shock, but can be located further downstream. This would shift the data in Fig. 11 to the left, leading to a lower exponent of the power law. Then the power law exponents would probably be more consistent with the theoretical exponents, and therefore one may consider further wave sources possible. Of course, the theoretical models describe an idealised picture of turbulence (spatially unbounded, etc.), and a realistic model would be needed for a quantitative study (considering e.g. the magnetosheath geometry, the bow shock shape, and the solar wind conditions). We note in this context that for future studies performing a more detailed discussion of turbulent processes, it is also worthwhile to consider the spatial distribution and the evolution of the relative amplitude δB/B.

Conclusions

We performed a statistical analysis of low-frequency magnetic field fluctuations in the frequency range 30 to 300 mHz in the Venusian solar wind interaction region. The magnetic field data set was obtained by the fluxgate magnetometer on board the Venus Express spacecraft. The observations cover the low-altitude magnetosheath as well as the mid-magnetotail. The observations reveal similar wave properties in the magnetosheaths of Venus and Mars as well as in their tail regions, suggesting a similar solar wind interaction at the two planets. However, only a global picture of the dominating wave properties is provided, which is suitable for comparing the observations at both planets, but we do not resolve individual wave modes. This is left to future studies. We also note that the wave propagation directions and the wave properties in the plasma frame of reference are not determined in our measurements, which makes it difficult to obtain clearer results. This is due to the availability of measurements from a single spacecraft only.
The wave intensity reaches a maximum in the dayside magnetosheath and drops rapidly towards the terminator region. The remaining regions do not show significant wave intensities. Different hypotheses have been considered in order to explain this observation. The influence of varying Mach numbers cannot be evaluated accurately at the moment; it has to be studied in more detail in the future with a comprehensive plasma moment data set. A geometric effect plays only a minor role. A reasonable explanation is freely evolving or decaying turbulence, because of the power law behaviour. But the quantitative agreement with the freely decaying turbulence model is poor, which may probably be improved by taking wave generation in the magnetosheath into account. On the other hand, further mechanisms like damping could be responsible for the loss of energy. Perhaps the decrease in the magnetic field fluctuations is compensated by a relative increase in the electric field fluctuations, as for high-frequency phenomena in the solar wind (Bale et al., 2005). Unfortunately, this cannot be verified by Venus Express, since the spacecraft is not equipped with an electric field detector. Also, a dissipation process could take place, like Landau or cyclotron damping. This will be studied using plasma moment data once they become available.

In summary, we conclude that the observations suggest the convection of waves by the plasma flow. Doubtless, waves are generated at the bow shock or in its vicinity. But the observations also indicate wave sources within the magnetosheath, and thus waves may be generated not only in the bow shock region but also within the magnetosheath and in the vicinity of the UMB, respectively.

Fig. 1. Spatial distribution of the magnetic field strength in the plasma environment surrounding Venus. The measurements are averaged around the axis of the apparent solar wind direction (x_cyl). Downstream of the bow shock (BS) the magnetic field strength is enhanced relative to the solar wind region upstream of the bow shock and piles up towards the terminator in the low-altitude dayside magnetosheath, forming the magnetic barrier. Draping of the magnetic field around the planet leads to the formation of the magnetotail, which is characterised by a slightly enhanced magnetic field strength in the planet's wake, too.

Fig. 2. Observational coverage. Almost every bin is well covered by observations, but the spatial distribution is not homogeneous. The polar orbit of Venus Express provides relatively more observations in the vicinity of the pericentre.
Fig. 3. Example of the analysis in the interval from 01:27:54 UTC to 01:29:34 UTC on 25 May 2006: (a) displays the three components of the magnetic field in VSO and MFA coordinates as well as the magnetic field strength. The mean magnetic field strength is 38 nT, corresponding to the proton gyrofrequency f_gyro = 580 mHz. (b) shows a sketch of the orbit in the dayside plasma environment close to Venus in cylindrical VSO coordinates. The black rectangle denotes the analysed time interval. (c) shows the power spectra of the magnetic field data in the frequency range from 30 to 500 mHz of the total power (P_total) as well as the transverse (P_⊥) and the compressional (P_∥) component (here a window function is applied to the spectra in order to increase the significance of the structures, but the spectra do not exhibit spectral peaks larger than the 95% confidence interval in this single spectrum). The dotted straight line indicates a linear fit in the logarithmically scaled spectrum, revealing the spectral index α = −1.56.

Fig. 4. Difference between the transverse and compressional power, normalised to the total power about the mean field. The coordinate system is the same as in Fig. 1. Blue regions indicate regions where the transverse power dominates (ζ = +1 purely transverse) and red regions where the compressional power dominates (ζ = −1 purely compressional). In the vicinity of the dayside bow shock the magnitudes of the transverse and compressional power are of the same size, whereas in the vicinity of the nightside bow shock the compressional portion increases. In the dayside magnetosheath as well as in the mantle and tail region the compressional power is slightly higher than in the nightside magnetosheath. In the solar wind upstream of the bow shock the transverse portion of the power dominates over the compressional portion.

Fig. 5. Ellipticity of the waves in the space plasma environment surrounding Venus. The value ε = 0 denotes a pure linear polarisation and ε = 1 a pure circular polarisation.

Fig. 6. Average sense of polarisation of the low-frequency magnetic field fluctuations in Venus' space plasma environment. Blue colours indicate regions of dominating right-handed polarisation and red colours regions of dominating left-handed polarisation. Close to the planet a tendency to less distinct regions of either preferentially left-handed or right-handed polarised waves is observable. The other regions are locally dominated by right-handed or left-handed polarisation, but no connected areas of larger size with a preferential sense of polarisation are visible.

Fig. 7. Spatial distribution of the angles of the wave vector direction with respect to the mean magnetic field. Data gaps in the figure are due to the selection criterion of a ratio of the intermediate to the minimum eigenvalue greater than 5. The low-altitude dayside magnetosheath is characterised by the occurrence of large angles between the wave vector direction and the mean magnetic field. The tail and mantle region is dominated by angles greater than 45°, whereas in the nightside magnetosheath smaller angles also occur. In the upstream solar wind a wider distribution of angles is observed.

Fig. 8. Spatial distribution of the wave intensity. The intensity is enhanced in the entire dayside magnetosheath and drops rapidly towards the terminator. In the nightside magnetosheath the intensity is at a reduced level. No significant intensity occurs in the upstream solar wind or in the mantle and tail region.
Fig. 9. Intensity along streamlines. The intensities connected to the different positions along any streamline are averaged over the neighbouring streamlines and velocity potential lines. Solid lines display magnetosheath streamlines, which are also numbered.

Fig. 10. Evolution of the intensity along different streamlines (the numbers (1) to (4) in the panels correspond to the streamlines in Fig. 9). The solid line denotes the calculated intensity function derived from the continuity equation (Eq. 9), while the asterisk symbols denote the observed intensities.

Fig. 11. Evolution of the intensity with time after the bow shock crossing (the numbers (1) to (4) in the panels correspond to the streamlines in Fig. 9). The x-axis is relabelled from Fig. 10 using Taylor's hypothesis. The asterisk symbols denote the observed intensities. The decaying part is fitted by a power law (dashed line).

Table 1. Observed spectral indices with their standard deviations of the mean in Venus' solar wind interaction region. The values in brackets represent the sample standard deviations.
Completion of the gut microbial epi-bile acid pathway

ABSTRACT
Bile acids are detergent molecules that solubilize dietary lipids and lipid-soluble vitamins. Humans synthesize bile acids with α-orientation hydroxyl groups, which can be biotransformed by gut microbiota to toxic, hydrophobic bile acids, such as deoxycholic acid (DCA). Gut microbiota can also convert hydroxyl groups from the α-orientation through an oxo-intermediate to the β-orientation, resulting in more hydrophilic, less toxic bile acids. This interconversion is catalyzed by regio- (C-3 vs. C-7) and stereospecific (α vs. β) hydroxysteroid dehydrogenases (HSDHs). So far, genes encoding the urso- (7α-HSDH & 7β-HSDH) and iso- (3α-HSDH & 3β-HSDH) bile acid pathways have been described. Recently, multiple human gut clostridia were reported to encode 12α-HSDH, which interconverts DCA and 12-oxolithocholic acid (12-oxoLCA). 12β-HSDH completes the epi-bile acid pathway by converting 12-oxoLCA to the 12β-bile acid denoted epiDCA; however, a gene(s) encoding this enzyme has yet to be identified. We confirmed 12β-HSDH activity in cultures of Clostridium paraputrificum ATCC 25780. From six candidate C. paraputrificum ATCC 25780 oxidoreductase genes, we discovered the first gene (DR024_RS09610) encoding bile acid 12β-HSDH. Phylogenetic analysis revealed unforeseen diversity for 12β-HSDH, leading to validation of two additional bile acid 12β-HSDHs through a synthetic biology approach. By comparison to a previous phylogenetic analysis of 12α-HSDH, we identified the first potential C-12 epimerizing strains: Collinsella tanakaei YIT 12063 and Collinsella stercoris DSM 13279. A Hidden Markov Model search against human gut metagenomes located putative 12β-HSDH genes in about 30% of subjects within the cohorts analyzed, indicating this gene is relevant in the human gut microbiome.

Introduction
The human liver produces all 14 enzymes necessary to convert cholesterol into the dihydroxy bile acid chenodeoxycholic acid (3α,7α-dihydroxy-5β-cholan-24-oic acid; CDCA) and the trihydroxy bile acid cholic acid (3α,7α,12α-trihydroxy-5β-cholan-24-oic acid; CA). 1 These bile acids are conjugated to taurine or glycine in the liver, helping to lower the pK_a, maintain solubility and impermeability to cell membranes, and lower the critical micellar concentration, allowing for efficient emulsification of dietary lipids and lipid-soluble vitamins. 2 Bile acids are effective detergents owing to the α-orientation of the hydroxyl groups, which produces a hydrophilic face above the plane of the cyclopentanophenanthrene steroid nucleus and a hydrophobic face below the plane of the hydrocarbon rings. 1 Conjugated bile acids emulsify lipids throughout the duodenum, jejunum, and ileum. Once bile acids reach the terminal ileum, high-affinity transporters (intestinal bile acid transporter, IBAT) actively transport both conjugated and unconjugated bile acids from the intestinal lumen into ileocytes, where they are bound to ileal bile acid binding protein (IBABP) and exported across the basolateral membrane into portal circulation and returned to the liver. 3 This process of recycling bile acids is known as enterohepatic circulation (EHC) and is responsible for recirculating the ~2 g bile acid pool 8-10 times daily. While ~95% efficient, roughly 600-800 mg of bile acids escape active transport and enter the large intestine. 2 Anaerobic bacteria adapted to inhabiting the large intestine have evolved enzymes to modify the structure of host bile acids. 2
Conjugated bile acids are hydrolyzed, releasing the amino acids, by bile salt hydrolases (BSH) in diverse gut bacteria representing the major phyla, including Bacteroidetes, Firmicutes, and Actinobacteria, as well as the domain Archaea. 4 In contrast, the unconjugated primary bile acids CA and CDCA are 7α-dehydroxylated by a select few species of gram-positive Firmicutes, mostly in the genus Clostridium, forming deoxycholic acid (3α,12α-dihydroxy-5β-cholan-24-oic acid; DCA) and lithocholic acid (3α-hydroxy-5β-cholan-24-oic acid; LCA), respectively. 1,5 The secondary bile acids DCA and LCA have increased hydrophobicity relative to their primary counterparts, which is associated with elevated toxicity. 6 DCA and LCA have been causally linked to cancers of the colon, 7 liver, 8 and esophagus. 9 Importantly, gut microbiota can also produce less toxic oxo-bile acids and β-hydroxy bile acids. 6 Bile acid 3α-, 7α-, and 12α-hydroxyl groups can be reversibly oxidized and epimerized to the β-orientation by pyridine nucleotide-dependent hydroxysteroid dehydrogenases (HSDHs) distributed across the major phyla, including Firmicutes, Bacteroidetes, Actinobacteria, and Proteobacteria, as well as methanogenic archaea. 1,10 HSDH enzymes that recognize bile acids are regio- (C-3 vs. C-7) and stereospecific (α vs. β) for hydroxyl groups decorating the steroid nucleus. Thus, bile acid 12α-HSDH reversibly converts the C-12 position of bile acids from the α-orientation, such as on DCA, to 12-oxo bile acids, such as 12-oxolithocholic acid (12-oxoLCA). 11-14 Bile acid 12β-HSDH completes the epimerization by interconverting 12-oxo bile acids to the 12β-configuration, forming epi-bile acids. We recently identified and characterized NAD(H)- and NADP(H)-dependent 12α-HSDHs from Eggerthella sp. CAG:298, 15 Clostridium scindens, C. hylemonae, and Peptacetobacter hiranonis (formerly Clostridium hiranonis). 10 In addition to these recently reported 12α-HSDHs, multiple genes encoding enzymes in the urso- (7α- & 7β-HSDH) and iso- (3α- & 3β-HSDH) bile acid pathways have been described to date (Figure 1). 5 However, a gene encoding 12β-HSDH to complete the epi-bile acid pathway has not yet been reported.

The first indication that gut bacteria may encode 12β-HSDH was the detection of 12β-hydroxy bile acids in human feces. 16-18 Edenharder and Schneider (1985) reported 12β-dehydrogenation of bile acids by Clostridium paraputrificum, and epimerization of DCA by co-culture of E. lenta and C. paraputrificum. 19 Thereafter, Edenharder and Pfützner (1988) characterized crude NADP(H)-dependent 12β-HSDH activity from C. paraputrificum D 762-06. 20

Figure 1. A gene encoding 12β-HSDH completes the gut microbial epi-bile acid pathway. Cholic acid (CA) is converted to the oxo-intermediate, 7-oxodeoxycholic acid, and further to ursoCA through the urso-bile acid pathway catalyzed by NAD(P)-dependent 7α- and 7β-HSDH. The secondary bile acid deoxycholic acid (DCA) is formed through the multi-step 7α-dehydroxylation of CA. DCA is biotransformed to 3-oxoDCA by 3α-HSDH and to isoDCA by 3β-HSDH in the iso-bile acid pathway. DCA can be converted to 12-oxolithocholic acid (12-oxoLCA) by 12α-HSDH and from 12-oxoLCA to epiDCA by 12β-HSDH. Examples of bacteria expressing each HSDH are shown below the reaction, followed by corresponding gene annotations. Prior to this study, a gene encoding 12β-HSDH had not been identified.
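To keep the C-12 conversions of Figure 1 straight, the following is a minimal sketch representing them as a small reaction table; the substrate/product/enzyme pairings follow the figure, while the data structure and helper are purely illustrative.

```python
# Each entry: (substrate, product, enzyme). Pairings follow Figure 1.
EPI_PATHWAY = [
    ("CA",        "DCA",       "7alpha-dehydroxylation (multi-step)"),
    ("DCA",       "12-oxoLCA", "12alpha-HSDH (NAD(P)+-dependent oxidation)"),
    ("12-oxoLCA", "epiDCA",    "12beta-HSDH (NADPH-dependent reduction)"),
]

def route(start, end, reactions):
    """Chain reactions from start to end, returning the enzyme steps used."""
    steps, current = [], start
    while current != end:
        nxt = next((r for r in reactions if r[0] == current), None)
        if nxt is None:
            raise ValueError(f"no reaction from {current!r}")
        steps.append(nxt[2])
        current = nxt[1]
    return steps

print(" -> ".join(route("CA", "epiDCA", EPI_PATHWAY)))
```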
However, little is known about the potential diversity of gut bacteria capable of forming 12β-hydroxy bile acids that molecular analysis is predicted to yield. Here, we report the identification of a gene encoding NADP(H)-dependent 12β-HSDH from C. paraputrificum ATCC 25780 and the characterization of the recombinant gene product purified after heterologous expression in E. coli. We also identify novel taxa encoding bile acid 12β-HSDH by phylogenetic analysis, confirmed by a synthetic biology approach.

C. paraputrificum ATCC 25780 possesses bile acid 12β-HSDH activity

We first investigated the bile acid metabolizing capability of C. paraputrificum ATCC 25780 because previous studies reported bile acid 12β-HSDH activity in other C. paraputrificum strains, but did not identify the gene(s) responsible. 20 The epi-bile acid pathway of DCA involves the reversible conversion of DCA (3α,12α) to 12-oxoLCA (3α,12-oxo) through the action of 12α-HSDH, and of 12-oxoLCA to epiDCA (3α,12β) by 12β-HSDH (Figure 1). C. paraputrificum ATCC 25780 was incubated with two potential substrates of 12β-HSDH, 12-oxoLCA and epiDCA, along with DCA as a control. In order to contrast the product formed by bile acid 12β-HSDH with that formed by bile acid 12α-HSDH activity, Clostridium scindens ATCC 35704, which is known to express 12α-HSDH, was incubated with the same substrates. When 12-oxoLCA was incubated in cultures of C. paraputrificum ATCC 25780, the primary product eluted at 13.97 min with 391.28 m/z in negative ion mode (Figure 2). This is consistent with the elution time of the epiDCA standard and its 392.57 amu formula weight. With epiDCA as substrate, the culture produced a major peak of 391.28 m/z.

Identification of a gene encoding bile acid 12β-HSDH

After bile acid 12β-HSDH activity was confirmed in C. paraputrificum ATCC 25780, its genome was searched for genes encoding proteins annotated as oxidoreductases within the NCBI database. HSDHs are NAD(P)-dependent and often members of the large and diverse SDR (short-chain dehydrogenase/reductase) family. 21 Five SDR family oxidoreductase proteins and one aldo/keto reductase were identified as 12β-HSDH candidates in the C. paraputrificum ATCC 25780 genome and pursued for further study. These six genes were amplified from genomic DNA of C. paraputrificum ATCC 25780, cloned into the pET-28a(+) vector, and overexpressed in E. coli (Table S1). The N-terminal His6-tagged recombinant proteins were purified by metal-affinity chromatography and resolved by SDS-PAGE (Figure 3a). Two of the six recombinant proteins (WP_027096909.1, WP_027099631.1) were not soluble, and bands at the expected molecular masses were apparent in the membrane fraction by SDS-PAGE. These proteins were not explored further. The other four 12β-HSDH candidates (WP_027099077.1, WP_027098355.1, WP_027097937.1, WP_027098604.1) were soluble and visualized by SDS-PAGE. The four soluble recombinant proteins were then screened for pyridine nucleotide-dependent bile acid 12β-HSDH activity by TLC and spectrophotometric assay. Screening reactions were prepared with 12-oxoLCA and NADPH, or epiDCA and NADP⁺, in pH 7.0 phosphate buffer. Only WP_027099077.1 exhibited 12β-HSDH activity by TLC and spectrophotometric assay, which was also confirmed by LC-MS (Figure 3b). Reaction products of WP_027099077.1 with 12-oxoLCA and NADPH, epiDCA and NADP⁺, DCA and NADP⁺, and a no-substrate control were subjected to LC-MS.
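As a quick consistency check of the kind used in these peak assignments, the following is a minimal sketch relating the quoted formula weights to expected [M−H]⁻ m/z values; the loose tolerance is an arbitrary choice that absorbs the small offset between average formula weights and the measured (essentially monoisotopic) m/z.

```python
# Check observed negative-mode peaks against expected [M-H]- values.
H = 1.008  # average mass of hydrogen

bile_acids = {
    "DCA / epiDCA": 392.57,   # amu, C24H40O4 (values from the text)
    "12-oxoLCA":    390.56,   # amu, C24H38O4
}
observed = {"13.12 min": 391.28, "13.40 min": 389.27}

for rt, mz in observed.items():
    for name, mw in bile_acids.items():
        # Average vs. monoisotopic masses differ by a few tenths, hence 0.4
        if abs((mw - H) - mz) < 0.4:
            print(f"{rt}: {mz} m/z ~ [M-H]- of {name} (expected {mw - H:.2f})")
```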
In the presence of purified recombinant WP_027099077.1 and NADPH, 12-oxoLCA was reduced quantitatively (addition of two hydrogens) to a product that eluted at 13.12 min with 391.28 m/z in negative ion mode. This is consistent with the 392.57 amu formula weight and elution time for epiDCA based on the substrate standard. Additionally, epiDCA was oxidized to a product with an elution time of 13.40 min at 389.27 m/z, agreeing with the retention time and formula weight of 390.56 amu for authentic 12-oxoLCA. DCA (392.57 amu) was not converted by WP_027099077.1, as the sole peak observed matched the DCA standard at 14.60 min and 391.29 m/z. The interconversion of 12-oxoLCA and epiDCA, but lack of activity with DCA, indicates stereospecificity for the 12β-hydroxy position. Thus, DR024_RS09610 has been identified as the first reported gene encoding bile acid 12β-HSDH (WP_027099077.1). Recombinant C. paraputrificum WP_027099077.1, hereafter referred to as Cp12β-HSDH, had a theoretical subunit molecular mass of 27.4 kDa. The observed subunit molecular mass was 26.4 ± 0.5 kDa by SDS-PAGE, calculated from three independent protein gels. WP_027099077.1 is predicted by TMHMM v. 2.0 to be a cytosolic protein that is not membrane-associated. 22

Biochemical characterization of recombinant Cp12β-HSDH

The approximate native molecular mass of Cp12β-HSDH was determined by size-exclusion chromatography. Cp12β-HSDH exhibited an elution volume of 15.04 ± 0.02 mL, corresponding to a 54.67 ± 0.79 kDa molecular mass relative to protein standards (Figure 4a). The size-exclusion data, along with the theoretical subunit molecular mass of 27.4 kDa, suggest that Cp12β-HSDH assembles into a homodimeric quaternary structure in solution. In order to optimize the enzymatic activity of Cp12β-HSDH, the conversion of pyridine nucleotides at 340 nm was measured in buffers between pH 6.0 and 8.0 by spectrophotometric assay (Figure 4b). The optimum pH for Cp12β-HSDH in the oxidative direction, with epiDCA as the substrate and NADP⁺ as co-substrate, was pH 7.5. In the reductive direction, where 12-oxoLCA was the substrate and NADPH the co-substrate, the optimum pH was 7.0. Michaelis-Menten kinetics were performed at the pH optimum for each direction. In the reductive direction, Cp12β-HSDH displayed a K_m value for 12-oxoLCA of 18.76 ± 0.40 µM, which was similar to that for NADPH (Table 1; Figure S1; values represent the mean ± SD of three or more replicates). The K_m value in the oxidative direction with epiDCA as substrate was about twice the K_m determined for 12-oxoLCA. The K_m for NADP⁺ was 36.84 ± 0.55 µM. The V_max and k_cat were greater in the oxidative than in the reductive direction. However, the catalytic efficiency (k_cat/K_m) with 12-oxoLCA as substrate was greater than in the oxidative direction with epiDCA as substrate.

Phylogenetic analysis of Cp12β-HSDH

The Cp12β-HSDH sequence from C. paraputrificum ATCC 25780 (WP_027099077.1) was used in a BLASTP search against the NCBI non-redundant protein database in order to determine its prevalence across bacteria. A maximum likelihood phylogeny of 5,000 sequences was constructed, revealing that many sequences most similar to Cp12β-HSDH are found in Firmicutes and Actinobacteria (Figure S2). Within the 5,000-member phylogeny, a subtree (highlighted gray) of the proteins most closely related to Cp12β-HSDH was selected for closer inspection (Figure 5). Cp12β-HSDH clustered most closely with other C. paraputrificum sequences.
The subtree also contains many sequences from Actinobacteria, among them the genera Collinsella and Olsenella. Collinsella species are of interest because C. aerofaciens expresses BSH and various HSDHs recognizing sterols, 25 including bile acid 12α-HSDH. 26 To determine whether a member of the Actinobacteria encodes a bile acid 12β-HSDH, a sequence more distantly related to Cp12β-HSDH, Olsenella sp. GAM18 WP_120179297.1, was chosen for gene synthesis and protein overexpression because it had not previously been shown to metabolize bile acids (Figure 5). 12-oxoLCA, epiDCA, and DCA were tested as substrates, and conversion was measured by spectrophotometric assay. Recombinant WP_120179297.1 displayed activity with 12-oxoLCA at 128% relative to Cp12β-HSDH, 69% relative activity with epiDCA, and no reaction with DCA (Table 2). These data confirm that the more distantly related WP_120179297.1 has bile acid 12β-HSDH activity. Within the extended subtree are various Novosphingobium species. These Alphaproteobacteria deserve mention due to their ability to biodegrade aromatic compounds, such as phenanthrene 27 and estrogen. 28 To test whether this cluster has bile acid 12β-HSDH activity, WP_007678535.1 from Novosphingobium sp. AP12 was synthesized, cloned, overexpressed, and purified (Figure 5). The potential 12β-HSDH activity of WP_007678535.1 was screened using 12-oxoLCA, epiDCA, and DCA as substrates. WP_007678535.1 exhibited no activity with these bile acid substrates (Table 2). Because Novosphingobium strains are frequently plant-associated or isolated from aquatic environments, 29 this enzyme may be specific for other substrates. The genomic context of the 12β-HSDH genes from C. paraputrificum ATCC 25780, Eisenbergiella sp. OF01-20, and Olsenella sp. GAM18 was explored (Figure S3). The three 12β-HSDH genes did not appear to be organized within an operon, nor was the genomic context conserved across these organisms. Two organisms present in the 12β-HSDH subtree, Collinsella tanakaei and Collinsella stercoris (Figure 5, asterisks), were also found in a previous phylogenetic analysis of putative 12α-HSDHs. 10 Due to strain variation within species, we inspected the sequences further in the NCBI database and determined that the pairs of HSDHs are encoded by the same strain within each species. Collinsella tanakaei YIT 12063 12α-HSDH (WP_009141301.1) and 12β-HSDH (WP_009140706.1) are encoded by the genes HMPREF9452_RS06335 and HMPREF9452_RS03390, respectively. Collinsella stercoris DSM 13279 also contains both a putative 12α-HSDH (WP_040360544.1; COLSTE_RS02900) and a 12β-HSDH (WP_006720039.1; COLSTE_RS01465). 10 While the paired 12α/12β-HSDH activity has not been tested in culture, these organisms may be novel epi-bile acid epimerizing strains that convert bile acid 12α-hydroxyl groups to the epi-configuration. To our knowledge, these are the first strains identified with C-12 epimerizing ability.

Hidden Markov Model search for putative 12β-HSDH genes in human gut metagenomes

To understand the distribution of potential 12β-HSDH genes in the human colonic microbiome, a Hidden Markov Model (HMM) search was performed against metagenome-assembled genomes (MAGs) from four publicly available cohorts 30-33 using reference sequences from the 12β-HSDHs characterized in this paper (Figure 5). Putative 12β-HSDH genes inferred by the HMM search were found in ~30% of the subjects (198/666) (Figure 6a). Twenty-two subjects harbored two different organisms containing the gene.
This gene was found in healthy subjects as well as in subjects with the following disease states: colorectal cancer, colorectal adenoma, fatty liver, hypertension, and type 2 diabetes. Two hundred twenty microbial genomes contained putative 12β-HSDH genes among 16,936 total available genomes. Putative 12β-HSDH genes were most often identified in the phylum Firmicutes, dominated by genes in Lachnospira eligens (formerly Eubacterium eligens) (Figure 6b). The gene from L. eligens was widespread across subjects in each of the four cohorts. This large proportion of hits from L. eligens may reflect its higher relative abundance, allowing it to be assembled better into genomes. Sequences from this organism also appeared multiple times in the 12β-HSDH subtree (Figure 5). Lachnospira eligens is a pectin degrader capable of promoting anti-inflammatory cytokine IL-10 production in vitro 34 and has been proposed as a probiotic for atherosclerosis. 35 The gene was also present in C. paraputrificum along with other unidentified Clostridium sp. and Eubacterium sp. Actinobacteria had few members with the gene, represented by Collinsella intestinalis, Collinsella tanakaei, and Olsenella sp. Phocaeicola coprocola (formerly Bacteroides coprocola) was the only member of the phylum Bacteroidetes with the gene.

Phylogenetic analysis of regio- and stereospecific HSDHs

Next, the phylogenetic relationship between Cp12β-HSDH (WP_027099077.1) and other regio- and stereospecific HSDHs was explored. To accomplish this, we updated the HSDH phylogeny presented by Mythen et al. (2018) by including additional bacterial and archaeal HSDH sequences of known or putative function along with representative eukaryotic sequences (Figure 7; Table S2). 15 The sequences included span the known HSDH functional capacities, with some recognizing bile acids and others recognizing steroids like cortisol or progesterone. Most members of each HSDH class cluster together, which is apparent in that each highlight color encompasses more than one HSDH of the same known function. Furthermore, most bacterial HSDHs grouped separately from their eukaryotic counterparts, with prokaryotic sequences interspersed among the eukaryotic ones only as exceptions to grouping by HSDH function. Cp12β-HSDH, the two other confirmed bile acid 12β-HSDHs (WP_118677302.1, WP_120179297.1), and additional similar sequences from across our bile acid 12β-HSDH subtree formed their own cluster. These sequences shared a branch with bacterial bile acid 12α-HSDHs as well as eukaryotic 3β-HSD/Δ5→Δ4-isomerases. Bile acid 12α-HSDH sequences included various clostridia (EDS06338.1, EEG75500.1, EEA85268.1, ERJ00208.1) 10,36 and Eggerthella (CDD59475.1). 15 Collinsella aerofaciens (EBA39192.1), which has been reported to express bile acid 12α-HSDH activity, 25 grouped with the known bile acid 12α-HSDHs along with two human gut archaeal sequences from Methanosphaera stadtmanae and Methanobrevibacter smithii.

Discussion

Microbial bile acid HSDHs have been studied since the early 1970s, with much of the original work focusing on 3α- and 7α-HSDHs. 14,49 Thereafter, 3β-, 7β-, and 12α-HSDH activity was observed in cultures of various microbiota, 1 including Eggerthella lenta (formerly Eubacterium lentum), which is capable of oxidizing CA and DCA at C-12 and epimerizing bile acids at C-3. 50 In the mid-1980s, C. paraputrificum, C. tertium, and Clostridioides difficile, each in binary culture with E. lenta, were shown to epimerize DCA via a 12-oxo-intermediate to epiDCA. 18
Since then, HSDH genes encoding the iso- and urso-bile acid pathways and 12α-HSDH were identified, but not 12β-HSDH. 5 In this work, we identified the first bile acid 12β-HSDH gene, completing the microbial epi-bile acid pathway.

Edenharder and Pfützner (1988) initially characterized NADP(H)-dependent 12β-HSDH from crude extracts of the fecal isolate C. paraputrificum strain D 762-06, with results differing from our findings. 19 Gel filtration analysis of the crude extract from C. paraputrificum strain D 762-06 suggested a molecular mass of 126 kDa, whereas our current work with Cp12β-HSDH from ATCC 25780 estimates 54.6 kDa by gel filtration chromatography. The strain used in this study, C. paraputrificum ATCC 25780, was also isolated from feces. 51 It is possible that these are the same NADP(H)-dependent enzymes by sequence from two different strains of C. paraputrificum and that the recombinant protein quaternary structure is unstable, resulting in a dimeric form in our study. Alternatively, these bacterial strains may have distinct versions of 12β-HSDH with different amino acid sequences, as we have shown previously for 12α-HSDH from Eggerthella lenta. 35,52 Indeed, the 12β-HSDH from C. paraputrificum strain D 762-06 was reported to be partially membrane-associated, whereas hydropathy prediction by TMHMM v. 2.0 found no evidence of transmembrane domains in Cp12β-HSDH. In addition, the pH optima for the conversion of 12-oxoLCA by Cp12β-HSDH (7.0) and by the native 12β-HSDH from strain D 762-06 (10.0) differed. Oxidation of epiDCA was optimal at pH 7.5 for Cp12β-HSDH, and was reported as pH 7.8 for the crude native enzyme from strain D 762-06. 19 Further work will be needed to determine whether distinct bile acid 12β-HSDHs are present in C. paraputrificum strains.

Cp12β-HSDH exhibited a dimeric quaternary structure by size-exclusion chromatography under our experimental conditions. Although future crystallization of Cp12β-HSDH would better illustrate its true oligomeric state, HSDHs are often either tetrameric 42,53 or dimeric. 54,55 Cp12β-HSDH was more specific for bile acids lacking a 7-hydroxyl group, epiDCA and 12-oxoLCA, over epiCA and 12-oxoCDCA. Cp12β-HSDH also had lower activity with 3,12-dioxoLCA than with 12-oxoLCA. This indicates that both the 7-hydroxyl and 3-oxo groups hinder the ability of Cp12β-HSDH to convert the substrate. An x-ray crystal structure of Cp12β-HSDH may shed light on why this apparent steric hindrance occurs.

Phylogenetic analysis of Cp12β-HSDH coupled with synthetic biological "sampling" and validation at different points along the branches revealed shared 12β-HSDH function in Eisenbergiella sp. OF01-20 and Olsenella sp. GAM18, lending functional credibility to sequences throughout the subtree (Figure 5; Table 2). Eisenbergiella sp. OF01-20 was originally sequenced as part of a human gut microbiota cultivation project (Integrated Microbial Genomes [IMG] Genome ID: 2840324701). Eisenbergiella spp. are often present at relative abundances of less than 0.1% in human fecal samples. 56,57 Olsenella sp. GAM18 was initially isolated from humans (IMG Genome ID: 2841219092). The relative abundance of Olsenella has been shown to be about 2% within the gut microbiome of some individuals. 58 Our subtree also includes more abundant gut taxa such as Ruminococcus (relative abundance ~5%) 59,60 and Collinsella (relative abundance ~8%). 59
Due to limitations in 16S rDNA sequencing depth, it is difficult to conclude whether the species in our subtree are found at relevant levels in the human gut or whether 12β-HSDH genes are present. Therefore, we performed an HMM search to assess the relative prevalence of 12β-HSDH genes. About 30% of subjects had putative 12β-HSDH genes, indicating the relevance of this gene in the human gut microbiome. The HMM search revealed that 220 microbial genomes out of 16,936 total contained putative 12β-HSDH genes. While concrete prevalence is difficult to establish, putative 12β-HSDH genes are less widespread than the ubiquitous bile acid-metabolizing gene, bile salt hydrolase, 4 which was present in 2,456 of the 16,936 total genomes in these cohorts. These data expand the limited metagenomic work that has focused on bile acid HSDH genes in the human gut. 61

Two organisms from our 12β-HSDH subtree were also identified in a previous 12α-HSDH phylogeny from Doden et al. (2018): 10 Collinsella tanakaei YIT 12063, encoding putative 12α-HSDH (WP_009141301.1), and Collinsella stercoris DSM 13279, encoding putative 12α-HSDH (WP_040360544.1; COLSTE_RS02900). 10 Although the dual 12α/12β-HSDH activity is untested in culture, we predict these strains are novel C-12 epimerizers. Epimerizing strains have been identified for the C-3 19,41 and C-7 hydroxyl 1,62 positions; however, this is the first indication of bacteria capable of C-12 epimerization. Indeed, this function joins the vast repertoire of HSDHs already studied in many Firmicutes and Actinobacteria. 1 Bile acid 12α-HSDH activity has been detected in Eggerthella species 35,52 in the phylum Actinobacteria and in various clostridia 10-12,36 in the phylum Firmicutes. Similarly, 3α- and 3β-HSDH are widespread among Firmicutes, 1,65 and 3α-HSDH has also been reported in Eggerthella species. 13,35,41 7α- and 7β-HSDH have been shown in numerous Firmicutes 14,37,65 and in the Actinobacterium Collinsella aerofaciens. 24 Along with these bile acid-specific HSDHs, the glucocorticoid 20α- and 20β-HSDHs are evident in both Firmicutes 43,45 and Actinobacteria such as Bifidobacterium adolescentis. 42 Until this study, there were no reports of genes encoding 12β-HSDH, and the activity had only been shown in C. paraputrificum, C. tertium, and C. difficile. 18 Thus, our phylogenetic analysis revealed hitherto unknown diversity for bile acid 12β-HSDHs within the Firmicutes and Actinobacteria. Bacteroidetes sequences were notably absent from our 12β-HSDH subtree, and only one sequence was identified in our HMM search, although Bacteroidetes have been shown to encode multiple other HSDHs. 1,49 Interestingly, C. tertium and C. difficile enzymes were also not present in our phylogenetic analysis even though this activity has been reported for strains of these clostridia, 18 indicating that genes encoding other forms of bile acid 12β-HSDH are present in the gut microbiome.

The distribution pattern of microbial HSDHs is becoming increasingly clear (Figures 5 & 7), although in many cases the evolutionary pressures on gut microbes for encoding particular regio- and stereospecific HSDH enzymes are not. As observed with BSH enzymes, the functional importance of HSDHs may be strain-dependent. In some strains, the mere ability to acquire or dispose of reducing equivalents may be important, and the class of enzyme unimportant. Bile acid hydroxylation patterns affect the binding and activation/inhibition of host nuclear receptors. 66 HSDH enzymes may thus act in interkingdom signaling, a hypothesis that has recent support based on the effect of oxidized and epimerized bile acids on the function of regulatory T cells. 67,68
The concerted action of pairs of HSDHs results in bile acid products with reduced toxicity for the microbes expressing the HSDH(s) or for an important inter-species partner, which was likely a factor in the evolution of these enzymes. Examples of strains capable of epimerizing bile acid hydroxyl groups are found in the literature, and the physicochemical properties and reduced toxicity of β-hydroxy bile acids are known, providing hypotheses for physiological function. Clostridium limosum (now Hathewaya limosa) expresses both bile acid-inducible NADP-dependent 7α- and 7β-HSDH, capable of converting CDCA to UDCA. 69 CDCA is more hydrophobic and more toxic to bacteria than UDCA. 6,70 Indeed, treatment with UDCA increases the hydrophilicity of the biliary pool, reducing cellular toxicity and improving biliary disorders. 71 Similarly, strains of Eggerthella lenta 15,41 and Ruminococcus gnavus 41 express both NADPH-dependent 3α- and 3β-HSDHs capable of forming 3β-bile acids (iso-bile acids). Iso-bile acids are also more hydrophilic and less toxic to bacteria than the α-hydroxy isomers. 41 At least some strains of R. gnavus also express NADPH-dependent 7β-HSDH, contributing to the epimerization of CDCA to UDCA. 39 It may be speculated that R. gnavus HSDHs function in the detoxification of hydrophobic bile acids such as CDCA and DCA; however, further work is needed. Analogous to E. lenta and R. gnavus, C. paraputrificum is another example of a strain encoding multiple HSDHs that favor formation of β-hydroxy bile acids. 19 C. paraputrificum strains encode the iso-bile acid pathway as well as NADPH-dependent 12β-HSDH. 18,19 While little is known about the biological effects of 12β-bile acids (epi-bile acids), their physicochemical properties relative to 12α-hydroxy bile acids should approximate those of iso- and urso-derivatives. 6,41,70 An important question emerging from these observations is whether one particular epimeric product rather than another has important consequences for the fitness of the bacterium generating it, or whether the increased hydrophilicity and reduced toxicity are the key factors.

Since the initial detection of epi-bile acids by Eneroth et al. and Ali et al., 15-17 the measurement of bile acid metabolomes in clinical samples has become commonplace, 72 yet few studies measure or report epi-bile acids. Recently, 12β-hydroxy and 12-oxo-bile acids have been quantified in human feces by Franco et al. (2019). 12-oxoLCA was the most abundant oxo-bile acid in feces, at concentrations about one half that of DCA in stool. While epiDCA itself was not measured, 3-oxo-12β-hydroxy-CDCA was found at 12 ± 4 µg/g wet feces. 73 Additionally, epiDCA has been reported in the biliary bile of angelfish, likely of bacterial origin, so the 12β-HSDH gene may be widespread among the resident microbiota of diverse vertebrate taxa. 74 A critical limitation to the study of epi-bile acids is the absence of commercially available standards, although methods for chemical synthesis are available. 75,76 The newly identified bile acid 12β-HSDHs could be employed for the enzymatic production of epi-bile acid standards from oxo-intermediates.

The physiological effects of epi-bile acids are poorly characterized, particularly in the GI tract. Borgström and colleagues compared infusions of CA, ursoCA, and epiCA with respect to bile flow, lipid secretion, bile acid synthesis, and bile micellar formation. In contrast to ursoCA and CA, epiCA was secreted into bile in an unconjugated form.
The 12β-hydroxyl group may hinder the enzyme responsible for conjugation. Additionally, epiCA infusion increased the rate of secretion of newly synthesized bile salts. 77 Another study reported increased 12-oxoLCA levels in rats with high tumor incidence when they were fed a high safflower oil or corn oil diet. 78 While the toxicity of epi-bile acids has not yet been tested relative to the secondary bile acids DCA or LCA, both 12-oxoLCA and epiDCA are less hydrophobic than DCA by LC-MS (Figures 2 & 3). Due to the involvement of DCA in cancers of the liver and colon, 7,8 bile acid 12β-HSDH may be of therapeutic importance in modulating the bile acid pool in favor of epiDCA over toxic DCA. Future studies with animal models will be imperative to determine the effects of epi-bile acids on host physiology.

Whole cell bile acid conversion assay

C. paraputrificum ATCC 25780 and C. scindens ATCC 35704 were cultivated in anaerobic brain heart infusion (BHI) broth for 24 h. Two mL of anaerobic BHI was inoculated with a 1:10 dilution of either organism along with 50 μM bile acid substrate and incubated at 37°C for 24 h. The bacterial cultures were centrifuged at 10,000 × g for 5 min to remove bacterial cells, and the conditioned medium was adjusted to pH 3.0. Solid phase extraction was used to extract bile acid products from the bacterial culture. Waters tC18 vacuum cartridges (3 cc) (Milford, MA, USA) were preconditioned with 6 mL 100% hexanes, 3 mL 100% acetone, 6 mL 100% methanol, and 6 mL water (pH 3.0). The conditioned medium was added to the cartridges and vacuum was applied to pull the medium through dropwise. Cartridges were washed with 6 mL water (pH 3.0) and 40% methanol. Bile acid products were eluted with 3 mL 100% methanol. Eluates were then evaporated under nitrogen gas and the residues dissolved in 200 μL 100% methanol for LC-MS analysis.

Liquid chromatography-mass spectrometry

LC-MS analysis for all samples was performed using a Waters Acquity UPLC system coupled to a Waters SYNAPT G2-Si ESI mass spectrometer (Milford, MA, USA). LC was performed with a Waters Acquity UPLC HSS T3 C18 column (1.8 μm particle size, 2.1 mm × 100 mm) at a column temperature of 40°C. Samples were injected at 1 μL. Mobile phase A was water and B was acetonitrile. The mobile phase gradient was as follows: 0 min 100% mobile phase A, 0.5 min 100% A, 25 min 0% A, 25.1 min 100% A, 28 min 100% A. The flow rate was 0.5 mL/min. MS was carried out in negative ion mode with a desolvation temperature of 300°C and a desolvation gas flow of 700 L/h. The capillary voltage was 3,000 V. The source temperature was 100°C and the cone voltage was 30 V. Chromatograms and mass spectrometry data were analyzed using Waters MassLynx software (Milford, MA, USA).

Isolation of genomic DNA

Genomic DNA was extracted from C. paraputrificum ATCC 25780 using the Fast DNA isolation kit from Mo-Bio (Carlsbad, CA, USA) according to the manufacturer's protocol for polymerase chain reaction and molecular cloning applications.

Heterologous expression of potential 12β-HSDH proteins

The pET-28a(+) and pET-46 Ek/LIC vectors were obtained from Novagen (San Diego, CA, USA). Restriction enzymes were purchased from NEB (Ipswich, MA, USA). Inserts were generated by PCR amplification, with cloning primers from Integrated DNA Technologies (Coralville, IA, USA), of C. paraputrificum ATCC 25780 genomic DNA or of genes synthesized in E. coli K12 codon usage (IDT, Coralville, IA, USA). Cloning primers and genes created by gene synthesis are listed in Table S1.
Inserts were amplified using Phusion High Fidelity Polymerase (Stratagene, La Jolla, CA, USA) and cloned into pET-28a(+) after the insert and vector were double digested with the appropriate restriction endonucleases and treated with DNA ligase, or annealed into pET-46 Ek/LIC after treatment with T4 DNA polymerase. Recombinant plasmids were transformed via the heat shock method, plated, and grown overnight at 37°C on lysogeny broth (LB) agar plates supplemented with antibiotic (50 µg/mL kanamycin or 100 µg/mL ampicillin). Vectors were either transformed into chemically competent E. coli DH5α cells and grown with kanamycin (pET-28a(+)) or transformed into NovaBlue GigaSingles™ competent cells and grown with ampicillin (pET-46 Ek/LIC). A single colony from each transformation was inoculated into LB medium (5 mL) containing the corresponding antibiotic and grown to saturation. Recombinant plasmids were extracted from cell pellets using the QIAprep Spin Miniprep Kit.

For protein expression, the extracted recombinant plasmids were transformed into E. coli BL-21 CodonPlus (DE3) RIPL chemically competent cells by the heat shock method and cultured overnight at 37°C on LB agar plates supplemented with ampicillin or kanamycin (100 µg/mL; 50 µg/mL) and chloramphenicol (50 µg/mL). Selected colonies were inoculated into 10 mL of LB medium supplemented with antibiotics and grown at 37°C for 6 h with vigorous aeration. The pre-cultures were added to fresh LB medium (1 L), supplemented with antibiotics, and aerated at 37°C until reaching an OD600 of 0.3. IPTG was added to each culture at a final concentration of 0.1 mM to induce expression, and the temperature was decreased to 16°C for a 16-h incubation. Cells were pelleted and resuspended in binding buffer (20 mM Tris-HCl, 300 mM NaCl, 10 mM 2-mercaptoethanol, pH 7.9). The cells were subjected to five passages through an EmulsiFlex C-3 cell homogenizer (Avestin, Ottawa, Canada), and the cell debris was separated by centrifugation. The recombinant protein in the soluble fraction was then purified using TALON® Metal Affinity Resin (Clontech Laboratories, Mountain View, CA, USA) per the manufacturer's protocol. The recombinant protein was eluted using an elution buffer composed of 20 mM Tris-HCl, 300 mM NaCl, 10 mM 2-mercaptoethanol, and 250 mM imidazole at pH 7.9. The resulting purified protein was analyzed using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). The observed subunit mass for each protein was calculated from the migration distance of the purified protein relative to standard proteins in ImageJ (https://imagej.nih.gov/ij/docs/faqs.html). TMHMM v. 2.0 was used to predict transmembrane helices. 21

Enzyme assays

Pure recombinant 12β-HSDH reaction mixtures contained 50 μM substrate, 150 μM cofactor, and 10 nM enzyme in 150 mM NaCl, 50 mM sodium phosphate buffer at the pH optimum of 7.0 or 7.5. Reactions were monitored by spectrophotometric assay, measuring the oxidation or reduction of NADP(H) aerobically at 340 nm (ε = 6,220 M⁻¹ cm⁻¹) continuously for 1.5 min on a NanoDrop 2000c UV-Vis spectrophotometer (Fisher Scientific, Pittsburgh, PA, USA) using a 10 mm quartz cuvette (Starna Cells, Atascadero, CA, USA). Additional reactions were incubated overnight at room temperature and extracted by vortexing twice with two volumes of ethyl acetate. The organic layer was recovered and evaporated under nitrogen gas. The products were dissolved in 50 μL methanol, and LC-MS was performed as described above or the products were used for thin layer chromatography.
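To make the 340 nm assay arithmetic concrete, the following is a minimal sketch with hypothetical absorbance slopes: initial rates are obtained from ΔA340 via Beer-Lambert (ε = 6,220 M⁻¹ cm⁻¹, 1 cm path) and then fitted to the Michaelis-Menten equation, analogous to the nonlinear regression step described below. All numbers are illustrative, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

EPS_340 = 6220.0   # M^-1 cm^-1, NADP(H) at 340 nm
PATH_CM = 1.0      # 10 mm cuvette

def initial_rate(dA_per_min):
    """Convert an initial slope (A340/min) to velocity in uM/min."""
    return dA_per_min / (EPS_340 * PATH_CM) * 1e6

# Hypothetical substrate series (uM) and initial slopes (A340/min)
S = np.array([5, 10, 20, 40, 80, 160], dtype=float)
slopes = np.array([0.021, 0.035, 0.052, 0.065, 0.074, 0.078])
v = initial_rate(slopes)

def mm(S, Vmax, Km):
    return Vmax * S / (Km + S)

(Vmax, Km), _ = curve_fit(mm, S, v, p0=[v.max(), 20.0])
print(f"Vmax = {Vmax:.2f} uM/min, Km = {Km:.1f} uM")
```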
The buffers for investigation of the optimal pH of recombinant 12β-HSDH contained 150 mM NaCl and one of the following buffering agents: 50 mM sodium acetate (pH 6.0), 50 mM sodium phosphate (pH 6.5 to 7.5), and 50 mM Tris-Cl (pH 8.0). Substrate specificity was tested according to the above reaction conditions at the optimal pH. The reaction mixtures for kinetic analysis in the reductive direction contained 10 nM enzyme, sodium phosphate buffer (pH 7.0), and 150 µM NADPH for varying concentrations of 12-oxoLCA, or 80 µM 12-oxoLCA for varying NADPH concentrations. The oxidative reaction mixture contained 10 nM enzyme, sodium phosphate buffer (pH 7.5), and 300 µM NADP+ when epiDCA concentrations were varied, or 100 µM epiDCA when NADP+ was varied. Kinetic parameters were estimated with GraphPad Prism (GraphPad Software, La Jolla, CA, USA) by fitting the data using nonlinear regression to the Michaelis-Menten equation (cf. the fitting sketch above).

Thin layer chromatography

Reaction mixtures were made using 50 μM substrate, 150 μM cofactor and 10 nM enzyme in 150 mM NaCl, 50 mM sodium phosphate buffer at pH 7.0. Reactions were incubated overnight at room temperature and extracted twice by vortexing with two volumes of ethyl acetate. The organic layer was recovered and evaporated under nitrogen gas. The products were dissolved in 50 μL methanol and spotted on a TLC plate (silica gel IB2-F flexible TLC sheet, 20 × 20 cm, 250 μm analytical layer; J. T. Baker, Avantor Performance Materials, LLC, PA, USA). The steroids were separated with a 70:20:2 toluene-1,4-dioxane-acetic acid mobile phase and visualized by spraying with 10% phosphomolybdic acid in ethanol and heating for 15 min at 100°C. 79

Native molecular weight determination

Size-exclusion chromatography was performed using a Superose 6 10/300 GL analytical column (GE Healthcare, Piscataway, NJ, USA) connected to an ÄKTAxpress chromatography system (GE Healthcare, Piscataway, NJ, USA) at 4°C. The column was equilibrated with 50 mM Tris-Cl and 150 mM NaCl at a pH of 7.5. The purified protein was loaded onto the analytical column at a concentration of 10 mg/mL and eluted at a flow rate of 0.3 mL/min. The native molecular mass of 12β-HSDH was determined by comparing its elution volume to that of Gel Filtration Standard proteins (Bio-Rad, Hercules, CA, USA): thyroglobulin, γ-globulin, ovalbumin, myoglobin, and vitamin B12.

Phylogenetic Analysis

The sequence of the C. paraputrificum 12β-HSDH protein (accession number WP_027099077.1) was used as a query for a similarity search against the NCBI non-redundant protein database by BLASTP, 80 with a maximum E-value threshold of 1e-10 and a limit of 5,000 results. Retrieved sequences were aligned with Muscle v. 3.8.1551 81 and analyzed by maximum likelihood with RAxML v. 8.2.11. 82 Selection of the best-fitting amino acid substitution model and the number of bootstrap pseudoreplicates were performed automatically, and substitution rate heterogeneity was modeled with gamma-distributed rate categories. The resulting phylogenetic tree was formatted with Dendroscope v. 3.5.10 83 and further cosmetic modifications were performed with the vector editor Inkscape, v. 0.92.4 (https://inkscape.org). For closer analysis of the phylogenetic affiliation of C. paraputrificum ATCC 25780 12β-HSDH, sequences from the well-supported subtree in which this sequence is located in the 5,000-sequence tree, plus an outgroup, were reanalyzed to confirm the relative placement of all sequences nearest to Cp12β-HSDH.
The methods used were the same as described above for the full tree. A maximum-likelihood tree of representative HSDH sequences was inferred by selecting sequences from each HSDH subfamily, based on the tree from Mythen et al. (2018), 35 with the addition of eukaryotic, archaeal, and other bacterial sequences deposited in the public databases. Phylogenetic inference methods were the same as described above.

Hidden Markov Model Search

A Hidden Markov Model (HMM) search was performed using a custom HMM profile against a concatenated file of metagenome assembled genomes (MAGs) from four publicly available cohorts. [29][30][31][32] MAGs were filtered for genome completeness, quality, and contamination as described. 84 For generation of the custom 12β-HSDH profile, reference sequences from the 12β-HSDHs characterized in this paper were aligned with MAFFT, manually trimmed, and constructed using hmmscan. 85 The MAG database was searched using HMMSearch version 3.3.0, 85 with an individually identified score cutoff of 350.00. Resulting hits were then filtered to remove results of less than 70% completeness, and the closest matched species were recorded. The HMM search file is publicly available at https://github.com/AnantharamanLab/doden_et_al_2021.
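A minimal Python sketch of such a profile search is shown below. It is not the authors' pipeline: it assumes HMMER 3.3 is installed on the system, uses hypothetical file names ("12bHSDH.hmm", "mags_proteins.faa"), and omits the MAG completeness filtering step, which requires genome metadata not shown here.

```python
# Sketch: run hmmsearch against a MAG protein database and keep strong hits.
import subprocess

SCORE_CUTOFF = 350.0  # bit-score cutoff reported in the text

# -T reports only sequences scoring at or above the threshold
subprocess.run(
    ["hmmsearch", "--tblout", "hits.tbl", "-T", str(SCORE_CUTOFF),
     "12bHSDH.hmm", "mags_proteins.faa"],
    check=True,
)

# Parse the tabular output; the full-sequence bit score is the sixth column.
hits = []
with open("hits.tbl") as fh:
    for line in fh:
        if line.startswith("#"):
            continue
        fields = line.split()
        target, score = fields[0], float(fields[5])
        if score >= SCORE_CUTOFF:
            hits.append((target, score))

print(f"{len(hits)} candidate 12b-HSDH sequences above cutoff")
```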
Frequency-specific activation of the peripheral auditory system using optoacoustic laser stimulation

Hearing impairment is one of the most common sensory deficits in humans. Hearing aids are helpful to patients but can have poor sound quality or transmission due to insufficient output or acoustic feedback, such as for high frequencies. Implantable devices partially overcome these issues but require surgery with limited locations for device attachment. Here, we investigate a new optoacoustic approach to vibrate the hearing organ with laser stimulation to improve frequency bandwidth, not requiring attachment to specific vibratory structures, and potentially reduce acoustic feedback. We developed a laser pulse modulation strategy and simulated its response at the umbo (1-10 kHz) based on a convolution-based model. We achieved frequency-specific activation in which non-contact laser stimulation of the umbo, as well as within the middle ear at the round window and otic capsule, induced precise shifts in the maximal vibratory response of the umbo and neural activation within the inferior colliculus of guinea pigs, corresponding to the targeted, modelled and then stimulated frequency. There was also no acoustic feedback detected from laser stimulation with our experimental setup. These findings open up the potential for using a convolution-based optoacoustic approach as a new type of laser hearing aid or middle ear implant.

There are about 360 million individuals worldwide who struggle with hearing impairment and have difficulties communicating on a daily basis 1 . Many of these hearing impaired individuals do not seek help for their hearing loss and can become isolated from society or even lose their jobs. Of the hearing impaired patients who obtain clinical support and receive a hearing aid, a notable proportion of them (up to at least 24%) do not continue to wear and use their hearing devices 2,3 . This lack of use, despite remarkable improvements in the technology of the hearing devices, is in part due to poor performance or discomfort for a portion of these patients 2,3 .
Several factors contribute to the unsatisfactory outcomes of conventional hearing aids, including insufficient frequency specificity or bandwidth, acoustic feedback issues, and discomfort due to the occlusion effect (i.e., an earpiece placed into the ear canal to improve sound transmission and minimize acoustic feedback issues when increasing sound amplification) with recurrent auditory canal inflammation 2,3 . In addition to the conventional hearing aids described above that present acoustic amplification to the ear, there are also hearing devices that directly contact and transmit vibrations to the head or hearing structures 4 . Several types of these devices can be considered implantable or partially implantable auditory prostheses, including middle ear implants and various bone-anchored hearing devices. The advantages of these implantable hearing devices are that they can transmit greater energy and a broader bandwidth of information to the hearing system compared to conventional hearing aids, especially for patients with severe hearing loss who cannot comfortably wear an earpiece that occludes the ear 5 . The disadvantage is that they require surgery for placement, with possible complications. For example, passive percutaneous bone conduction hearing devices (skin penetrating and bone anchored) such as the Baha® (Cochlear AG, Sydney, Australia) can have good amplification and sound transmission; however, wound infection around the skin penetrating area 6 decreases their acceptance in the hearing impaired population. There are also transcutaneous passive bone conduction hearing devices such as the Baha® Attract System (Cochlear AG, Sydney, Australia) 7 and the Sophono™ 8 , or active transcutaneous devices such as the BONEBRIDGE (MED-EL GmbH, Innsbruck, Austria) 9 . However, they have lower sound amplification and modulation capacity due to the transcutaneous interface and can induce pressure issues of the stressed skin between the magnets used to secure the device in its proper location. In cases where bone conduction hearing devices are insufficient, active middle ear implants such as the Vibrant Soundbridge 10 or Carina® (Cochlear AG, Sydney, Australia) 11 provide another option that can lead to satisfying results, but the coupling of the actuator to the middle ear bone structures is challenging in middle ears with pathologies 12 . Alternative stimulation strategies are therefore needed for the therapy of hearing deficits and pathologies that are still not sufficiently addressed with current hearing technology. For example, there are patients with chronic inflammation that induces structural changes of the ear drum and middle ear (e.g., cholesteatoma), which can cause chronic leaking and pain within and around the ear as well as damage to structures involved in sound transmission to the inner ear (e.g., ear drum and ossicular chain) 13 . Acoustic hearing aids with an earpiece that occludes the ear could potentially transmit sufficient sound information for these types of patients, but occluding the inflamed ear can lead to significant discomfort 5 . Current middle ear implants may not require ear occlusion but cannot be sufficiently attached or adapted to these damaged structures, while conventional bone conduction devices are not well tolerated by patients with a tendency for chronic inflammation around the anchor of the bone conduction system 14 .
Considering these various limitations experienced by many hearing impaired patients, laser stimulation is emerging as a new form of energy that can be sharply focused to activate specific structures or biological tissue and can be applied without the need for directly contacting the targeted vibratory structures [15][16][17] . Laser stimulation could potentially be used to vibrate the ear drum with reduced acoustic feedback, since the energy transmitted is light instead of sound and does not require occlusion of the ear canal. If activation of middle ear structures is required due to peripheral ear damage or inflammation, a laser fiber or bundle of fibers could be inserted into the middle ear cavity and directed towards different mechanical structures without requiring attachment to specific locations that may be damaged or inaccessible in some patients, as is encountered for middle ear implants. Therefore, the use of optical energy as a non-contact stimulation method for the activation of the peripheral hearing organ is a promising solution for hearing impaired patients with these types of limitations, and needs to be further explored. When developing or implementing a device to improve hearing, one needs to keep in mind the fundamental organizing principle of the hearing organ, which exhibits a spatially ordered frequency coding representation across the cochlea (inner ear) that is maintained up through multiple ascending auditory nuclei within the brain 18,19 . Different locations along the cochlea, and different neurons within each auditory nucleus, are sensitive to specific sound frequencies. The hearing organ and the brain are naturally designed to sense and extract different frequency components of sound over time to encode and elicit intelligible hearing perception. This frequency-specific transmission and extraction of sound first begins through the outer ear canal to the vibratory structures of the middle ear through mechanical vibrations. From the middle ear, the sound pressure waves are transmitted to the inner ear through fluid vibrations within the cochlea that activate inner hair cells along the tonotopically organized cochlea, ultimately reaching the auditory brain through the auditory nerve fibers. From the perspective of these physical characteristics of the transmission of sound energy from the outer ear to the brain, there are several options for introducing light into the system for hearing purposes: (1) transform the light into mechanical energy (optoacoustic) through very short laser pulses that lead to mechanical rather than thermal perturbations, in order to directly vibrate outer, middle or inner ear structures 17,20-23 ; (2) activate neuronal structures, such as inner hair cells or auditory nerve fibers, directly with light, which is possible with Infrared Neural Stimulation (INS) 15,24 ; or (3) transfect the neuronal structures with light-sensitive ion channels to make them sensitive to laser stimulation, a technique known as optogenetics 5,25,26 . Of these, INS and optogenetics are appropriate for direct stimulation of the inner ear and the auditory neurons in severely to profoundly hearing impaired patients. However, individuals having conductive, sensorineural or combined hearing loss with residual hearing that can still be activated mechanically constitute a much larger population, and despite all the improvements in the technology of auditory prostheses, not all of these patients obtain sufficient or useful hearing.
Currently, there are laser-driven hearing devices in which laser light is transmitted to a transducer placed at the tympanic membrane (TM), known as the Earlens device 27 , or at the round window 28 to convert the optical signal to mechanical vibrations in those structures. Both devices demonstrate sufficient vibration amplitudes needed by patients with severe hearing impairment for frequencies up to at least 10 kHz. However, these laser devices require placement of a transducer onto a hearing structure, which may not be preferred by some patients or clinicians. The only commercially available laser-based hearing device, the Earlens hearing aid, also requires an intact middle ear. Therefore, for those individuals not sufficiently benefiting from commercially available hearing devices, a non-contact optoacoustic-based activation approach of the hearing system may offer another hearing option. The first results related to optoacoustic activation of the hearing system were described as artifacts within the inner ear induced through monochrome laser pulses by Friedberger and Ren in 2006 29 . In 2009, Wenzel et al. demonstrated that controlled direct optoacoustic stimulation 20,21 of the inner ear is possible, which was followed up with further experiments demonstrating the ability to directly vibrate structures from the ear drum up to the inner ear without the need for direct contact with the hearing structures 21 . There still remain questions on what type of laser pulse patterns to use for optoacoustic stimulation and which peripheral structures to target for sufficient transmission of sound information. For coding speech and other complex signals such as music, one critical question that needs to be addressed, based on the fundamental organization of the hearing system described above, is whether optoacoustic stimulation can induce frequency-specific activation of peripheral hearing structures. This study seeks to answer that question through simulations and experiments in a guinea pig model to demonstrate that laser pulse stimulation of the ear drum and middle ear structures can achieve precise and predictable frequency-specific activation of the auditory system. We were inspired by the amplitude modulation (AM) concept that is well known in radio-frequency engineering, in which a constant carrier frequency could potentially induce vibration waves in the targeted structure that follow the various modulating frequencies (Fig. 1). The activation effects of our laser stimulation paradigm were assessed in three ways: (1) computational modelling with a convolution based model to predict the vibration effects; (2) vibration measurements from the TM performed in extracted peripheral ear specimens in response to laser stimulation; and (3) neural recordings in the central nucleus of the inferior colliculus (ICC) in anesthetized animals to characterize activation across the well-defined frequency or tonotopic gradient of the ICC in response to laser stimulation. Overall, our results validate the ability to systematically vibrate outer ear and middle ear structures using amplitude modulated laser pulses without requiring contact with the vibratory structures. Frequency-specific activation was also possible with minimal acoustic feedback.
Therefore, optoacoustic stimulation offers a valuable tool for contact-free induction of focused mechanical vibrations in targeted biological structures, which can potentially be used for a new generation of hearing aid devices as well as for research purposes.

Results

Stimulation principle. We amplitude modulated a predetermined laser pulse rate (LPR, as the carrier frequency) with different laser modulation rates (LMRs), as shown in Fig. 1. In our set of experiments, the LPR was either 32 kHz or 50 kHz, and the LMRs were 1 kHz, 2 kHz, 4 kHz, 8 kHz or 10 kHz, presented individually, to activate audible frequency regions in guinea pigs. Laser Doppler vibrometer (LDV) measurements at the central point of the ear drum (umbo) in explanted specimens (Fig. 2a; see Methods) and neurophysiological recordings in the ICC in anesthetized guinea pigs (Fig. 2b; see Methods) were collected in response to stimulation with these different laser patterns presented to the ear drum or middle ear structures. The averaged maximal power of laser stimulation ranged between 20 and 500 mW; even at the highest level, which caused extensive activity of the auditory system, our calibration microphone could not detect any laser-induced acoustic energy down to the recording noise floor of 30 dB SPL (Fig. 2a). For our experimental setup, we positioned a microphone (Brüel & Kjaer free-field microphone with Type 2670 preamplifier, TYPE 4939-A-011, 2850 Naerum, Denmark) near the ear canal opening, recorded sound signals generated by laser stimulation of the umbo, and did not detect any acoustic feedback. Since the form of energy used to stimulate the hearing system is light instead of sound, we expect minimal acoustic feedback. A further explanation could be that our experimental setup was not sensitive or small enough to detect the low sound pressures within the ear canal and thus to assess whether there is any acoustic feedback at the optical fiber tip or generated by vibrating structures within the ear canal. This open question will be investigated in future experiments for determining the optimal placement of the hearing aid microphone.

[Figure 1 caption: As a consequence, the number of pulses and the total energy per sinusoid period varied depending on the LMR, as presented in Table 2. Increasing the LMR from 1 kHz to 8 kHz leads to a decreased number of pulses per sinusoid period. Increasing the LPR from 32 kHz to 50 kHz leads to an increased number of pulses per sinusoid and an increased number of pulses per stimulation unit.]

Computational modeling of umbo vibrations. As the first step of our modeling procedure, we recorded the velocity at the umbo in response to stimulation with a 50 µJ laser pulse (mathematically represented as equation (1)) at the same position. We then calculated the displacement using these velocity data, which is represented as the impulse response of the system; convolving this impulse response with the modulated laser pulse input yields the model output y[n] (see Methods). We calculated the single-sided displacement spectrum of y[n] and normalized the data by the value of the first peak (fundamental frequency f0; Fig. 3a,b). As expected, the f0 value consistently aligned with the presented LMR, regardless of the LPR, in which the peak could be shifted to 1 kHz, 2 kHz, 4 kHz, 8 kHz and 10 kHz (1 kHz and 8 kHz are shown in Fig. 3a,b for an LPR of 50 kHz; other LMR and LPR examples are shown in Supplementary Fig. S2). The second dominant peak in the modeled spectrum appeared at the LPR, as shown in Fig. 3a,b and in Supplementary Fig. S2.
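To make the convolution-based model concrete, the following minimal Python sketch builds an amplitude-modulated pulse train (carrier = LPR, modulator = LMR), convolves it with an impulse response, and checks that the low-frequency spectral peak lands at the LMR. The measured 50 µJ impulse response is not reproduced here, so a hypothetical damped oscillator stands in for h[n]; the sampling rate and all numerical values are assumptions, not the authors' parameters.

```python
import numpy as np

FS = 1_000_000            # Hz, simulation sampling rate (assumed)
LPR, LMR = 50_000, 8_000  # Hz, carrier (laser pulse rate) and modulation rate
DUR = 0.1                 # s, 100 ms stimulation unit

t = np.arange(0, DUR, 1 / FS)

# Pulse train at the LPR: one-sample pulses at the carrier rate
x = np.zeros_like(t)
pulse_idx = (np.arange(int(DUR * LPR)) * FS // LPR).astype(int)
x[pulse_idx] = 1.0

# Sinusoidal amplitude modulation at the LMR (offset keeps pulse energy >= 0)
x *= 0.5 * (1.0 + np.sin(2 * np.pi * LMR * t))

# Stand-in impulse response h[n]: exponentially damped oscillation (hypothetical)
th = np.arange(0, 0.005, 1 / FS)
h = np.exp(-th / 1e-3) * np.sin(2 * np.pi * 2500 * th)

# Convolution-based model output and its single-sided amplitude spectrum
y = np.convolve(x, h)[: len(t)]
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / FS)

band = (freqs > 500) & (freqs < 20_000)
f0 = freqs[band][np.argmax(spec[band])]
print(f"dominant low-frequency peak: {f0:.0f} Hz (target LMR = {LMR} Hz)")
```

Running such a sketch should reproduce the qualitative picture of Fig. 3: a fundamental peak at the LMR, a peak at the LPR flanked by sidebands spaced by the LMR, and harmonic components.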
The additional frequency peaks in the signal, above and below the LPR peak (i.e., the LPR sidebands), confirmed the amplitude modulation characteristic of our stimulation strategy 30 , in which the frequency difference between the LPR peak and the sidebands is equal to the LMR. This could be demonstrated across all the LMR/LPR combinations tested (Supplementary Fig. S2). The harmonic components of the LPR could be identified as well.

Vibration measurements at the umbo in extracted specimens. We validated the modeled frequency-specific vibrations at the ear drum level induced through our proposed paradigm by directly recording these vibrations at the umbo with the LDV, in response to stimulation at the umbo with the actual amplitude modulated laser pulse sequence. These optoacoustically induced vibrations at the umbo were dependent on the applied power as well as the LPR. Using our stimulation paradigm, the fundamental frequency f0 was clearly present in the spectrum and matched the targeted modulation frequency (LMR) (Fig. 3c,d). Consistent with the modeled data, we demonstrated that our novel stimulation paradigm induces a controlled shift of the fundamental frequency f0 from 1 kHz to 2 kHz, 4 kHz, 8 kHz or 10 kHz for both LPRs (Fig. 3c, Fig. S2).

[Figure 2 caption: Experimental set up. The trigger-signals for the recordings as well as the sinusoids for the laser stimulation were generated on a PC. They had an onset that was synchronized to laser stimulation. The stimulation laser was operated with a pre-determined laser pulse rate (LPR) of either 32 kHz or 50 kHz. The sinusoid signals were generated with a specific laser modulation rate (LMR) (Fig. 1). The signal (duration of 100 ms with 0.5 ms rise/fall ramp time) was transferred to the input of the acousto-optic modulator (AOM) using the laser fiber (Ø 365 µm) that was connected to the AOM. The distance between the fiber and the tympanic membrane was less than 1 mm. (a) For the vibration recordings in response to the laser stimulation in extracted specimens, we placed a scanning laser Doppler vibrometer (LDV) at a distance of 20 cm from the TM. Using the built-in camera of the LDV, the TM could be displayed on the monitor, the measured points were visualized, and the recordings could be monitored and controlled. The acoustic feedback signal was measured at a distance of 1 cm from the ear drum after optical stimulation with 1 kHz and 8 kHz LMR and 50 kHz LPR. (b) For the in vivo recordings of spike activity in the ICC, a multi-site electrode array with 16 channels was connected via a custom-made head stage to a biosignal amplifier (g.USBamp). The reference was displayed on channel 1 to check the noise level. The raw data was saved for each channel unfiltered for offline analysis on the PC.]

The consistency of our results across experiments was analyzed by pooling the modeling data (7 cases) and the measured data (8 animals). These data demonstrate that the frequency of the first peak in the spectrum was the same as the applied LMR in all cases (Fig. 4a). Besides the fundamental frequency f0, the recorded vibration spectra revealed two other main frequency peaks: at the 2nd harmonic (h2, with f_h2 = 2 × f0; Fig. 3c,d) and at the LPR. As reported for the modelled data, additional frequency peaks in the signal above and below the LPR peak (the sidebands) confirmed the amplitude modulation characteristic of our stimulation strategy across all LMR/LPR combinations.
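Before turning to the control conditions, note how the distortion metrics used below are computed: each spectral peak (f0, h2, LPR) is compared against the no-stimulus noise spectrum at the same frequency and expressed in dB. A minimal sketch follows (hypothetical helper, not the authors' analysis code; the 20*log10 amplitude convention is an assumption consistent with the Methods).

```python
import numpy as np

def snr_db(spec, noise_spec, freqs, target_hz):
    """SNR in dB at the frequency bin closest to target_hz (f0, h2, or LPR)."""
    i = int(np.argmin(np.abs(freqs - target_hz)))
    return 20.0 * np.log10(spec[i] / noise_spec[i])

# Usage with hypothetical spectra for LMR = 8 kHz, LPR = 50 kHz:
# snr_f0  = snr_db(spec, noise, freqs, 8_000)
# snr_h2  = snr_db(spec, noise, freqs, 16_000)
# snr_lpr = snr_db(spec, noise, freqs, 50_000)
# Measurements with snr_f0 < 6 dB were discarded (see Methods).
```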
As control conditions, we assessed the displacement spectrum of the umbo after acoustic stimulation with pure tones at 60 dB SPL (Fig. 3e,f). These control conditions demonstrated that the fundamental frequency f0 peak in response to optical stimulation is consistent with that elicited by acoustic stimulation, however without the additional h2 and LPR distortion peaks associated with the laser pulses and the modulation paradigm. The extent of the distortion caused by laser stimulation was further assessed for the different LPR/LMR combinations tested by comparing the signal to noise ratio (SNR) of the fundamental frequency peak (SNR-f0) with the peak at the second harmonic (SNR-h2) and the peak at the LPR (SNR-LPR) (Fig. 4b,c). SNR-f0 was greater than SNR-h2, usually by 7 to 20 dB (statistical comparisons across all combinations are presented in Table 1), which may be sufficient to minimize sound distortions produced perceptually by those harmonics. Future psychophysical studies will be needed to confirm the perceptual effects of these distortion components. The SNR-LPR values were typically lower than the SNR-f0 values. The SNR-h2 values were even lower than the SNR-LPR values. Additionally, the LPR frequency components are located much higher than the typical audible frequency range for humans and may not be perceptible. When comparing the variability of the different peaks (f0-SNR, h2-SNR and LPR-SNR), the fundamental frequency f0-SNR also demonstrated the least variation, having the highest SNR of 27 dB.

[Figure 3 caption: The peak at the fundamental frequency f0 (solid arrow) could be shifted across the applied LMR from 1 kHz to 8 kHz. An additional peak appeared at the LPR (dotted arrow) with characteristic sidebands, proving our applied modulation method. The normalized single-side displacement spectra after optical stimulation with 50 kHz LPR and 1 kHz (c) or 8 kHz (d) LMR demonstrate a similar frequency pattern as the recorded displacement spectra. An additional peak appeared at the second harmonic (h2). As a control, normalized displacement spectra after acoustic stimulation with 60 dB SPL acoustic pure tones at the frequencies 1 kHz and 8 kHz are displayed in (e) and (f), respectively. The fundamental frequency f0 peak is present and consistent with the laser pulse spectra; however, the harmonic components and the LPR peaks caused by the laser pulses are not present.]

Overall, these results demonstrate the greater detectability and reliability of the fundamental frequency f0 peak compared to the other peaks in the spectrum, and the ability to systematically shift the fundamental frequency f0 peak with our modulated laser pulse approach, based on vibration measurements at the umbo.

Validation of frequency-specific activation measured in the ICC in response to stimulation at the umbo. To further assess the capability of achieving frequency-specific activation of the peripheral hearing system with our developed laser pulse sequences applied at the umbo, an assessment of the activation effects occurring directly in the auditory brain of a live animal was needed. For this validation, we positioned multi-site electrode arrays within the ICC of anesthetized guinea pigs and recorded neural spiking activity in response to laser pulse stimulation at the umbo using different stimulation parameters. Examples of post stimulus time histograms (PSTHs) are shown in Fig. 5a,b.
Consistent with the findings from vibration measurements in explanted specimens, optoacoustic stimulation at the umbo with low LMRs elicited activation of the ICC regions most sensitive to low frequencies, whereas optical stimulation with high LMRs elicited activation of the ICC regions most sensitive to high frequencies. We were able to observe a shift of the activated frequency region of the ICC within each animal in accordance with the LMR that was predicted, modeled, and tested in explanted specimens (Fig. 5a,b). The channel with the strongest activation represented the targeted acoustic BF site at threshold corresponding to the LMR. To demonstrate the reliability of our data regarding frequency-specific activation with optoacoustic stimulation, we analyzed the correlation between the measured BF and the targeted modulation frequency (LMR) at the optical threshold. Across 12 animals (Fig. 6a), an almost linear mapping from the optical LMR to the acoustic BF of the best activated channel could be observed. The calculated Pearson correlation analyses for 32 kHz LPR and 50 kHz LPR were significant, with a strong positive effect between the BF at threshold and the LMR (p < 0.001, r = 0.958 for 32 kHz; p < 0.001, r = 0.987 for 50 kHz; Fig. 6a). The optical thresholds were also determined from the PSTH data. Depending on the auditory sensitivity of each animal, we observed some variability in the power threshold levels across the tested LMRs, particularly for the 32 kHz LPR (Fig. 6c,d), suggesting that a higher LPR may be more reliable for future implementation. Additionally, our data demonstrated that laser pulse stimulation with 32 kHz LPR (Fig. 6c) required higher power levels than with 50 kHz LPR (Fig. 6d), suggesting the use of higher LPRs may enable safer levels of stimulation in addition to greater reliability of activation (for acoustic control conditions, see Supplementary Fig. S3).

[Figure 4 caption: Vibration measurements results - pooled. The mean frequency value of the first peak in each spectrum versus the LMR demonstrates a clear linear mapping for the vibration recordings in extracted specimens (8 animals) as well as for the modeled data across 7 computations (a). The first frequency peak corresponded to the LMR in all cases. We calculated the signal to noise ratio (SNR) at the peaks by dividing the peak frequency (f0, h2, LPR) through the same frequency of the noise signal (i.e., no stimulus condition). The SNR at the LMR peak (black), the h2 peak (grey) and the LPR peak (white) at 32 kHz (b) and 50 kHz (c) are plotted for comparison in dB (stimulated at 200 mW average peak power). Asterisks correspond to a statistical alpha-level of 0.05, with Bonferroni corrected post-hoc tests.]

In order to provide an estimate of the acoustic level equivalent for optical stimulation, we analyzed the equivalent acoustic sound pressure level (air conduction) that is comparable to optical stimulation with each LPR-LMR combination at 80 mW (Fig. 6e). The maximal driven spike rate (DSR; see Methods) was achieved for 1 kHz LMR and 50 kHz LPR, resulting in an acoustic SPL equivalent of 47.8 dB (±4.7 dB SE).

Frequency-specific activation for laser stimulation of middle ear structures measured in the ICC.
A further advantage of our laser pulse amplitude modulation paradigm is its wide applicability for vibrating different structures in the peripheral auditory system, including the ear drum and structures within the middle ear, without needing to be anchored onto or in direct contact with those targeted vibratory structures. Consistent with the results for stimulation at the umbo (ear drum), the applied LMR correlated with the activated BF channels of our ICC electrode for stimulation at the round window (at the basal end of the cochlea, Fig. 5c) and for stimulation at the otic capsule (the bony encasement of the cochlea, Fig. 5d), both locations being accessible in the bulla (the middle ear in rodents). Encouragingly, the BF at threshold plotted versus the LMR demonstrated a strong positive linear correlation for stimulation at the round window (p < 0.001, r = 0.983, Fig. 6b, circles) and at the otic capsule (p < 0.001, r = 0.994, Fig. 6b, diamonds), similar to what was observed for stimulation of the umbo (Fig. 6b in the same animals, consistent with Fig. 6a from different animals). These results confirm our hypothesis that optoacoustic frequency-specific activation is also possible by presenting laser pulse stimulation to different middle ear structures.

[Figure 5 caption (fragment): The maximal activity is clearly shifting from one channel to the other in accordance with the LMR value. The optical stimulation at the round window (c) led to a similar activation pattern. The optical stimulation at the otic capsule also demonstrated the ability to achieve frequency-specific activation, however it required more energy (e.g., 120 mW) (d). The BF of each site is labeled above each set of PSTH plots. PSTHs are based on 100 trials with 1-ms time bins.]

Discussion

There are about 360 million people worldwide suffering from disabling hearing impairment, with at least moderate hearing loss in the better hearing ear, and 32 million of these people are children. This number is rising due to a growing global population and longer life expectancies, as well as increasing noise exposure from the environment, at the workplace and through recreational activities 31 . Although technical and scientific progress has improved the performance level of conventional hearing devices, there are still many hearing impaired people who need alternative hearing solutions.

[Figure 6 caption (fragment): ...mapping was demonstrated between the best frequency (BF) at threshold for the most activated site and the LMR in response to optical stimulation at the umbo (32 kHz LPR and 50 kHz LPR). There was strong positive correlation between the two variables LMR and BF at threshold for 32 kHz LPR (p < 0.001, r = 0.958) and 50 kHz LPR (p < 0.001, r = 0.987). (b) Comparison of the data recorded at the umbo with the two additional stimulation sites (the round window membrane and the otic capsule) at 50 kHz LPR. The correlation between the two variables LMR and BF at threshold was calculated (p < 0.001, r = 0.983 at the round window membrane and p < 0.001, r = 0.994 at the otic capsule) (n = 4). The optical threshold in peak power at the umbo is displayed for 32 kHz LPR in (c) and for 50 kHz LPR in (d). The equivalent levels for acoustic stimulation with pure tones in comparison to optical stimulation with 80 mW, using the same frequency as the LMR, is plotted for the different LMR-LPR combinations in (e). The data demonstrated mean levels between 34.6 dB SPL (10 kHz LMR, 32 kHz LPR) and 47.8 dB SPL.]
For example, patients with hearing aids struggle in noisy environments, and many patients are unable to amplify the sound inputs sufficiently to activate enough frequency bands of information without acoustic feedback distortion, especially those who cannot occlude their ear with an earpiece due to discomfort or inflammation 3 . Middle ear implants and partially implantable bone conduction devices could provide greater amplification for certain hearing-impaired patients without requiring ear occlusion. However, these devices require surgery and anchoring of device components to specific structures. This anchoring, as well as the targeted structure, can suffer from recurrent inflammation disturbing its vibratory function and/or making device attachment difficult in some patients 12 . Therefore, laser stimulation could serve as a new type of stimulation modality that could potentially enable greater hearing performance across a wider patient population by offering a non-contact vibratory input with minimal or no acoustic feedback issues. The first non-ablative therapeutic use of laser light in the inner ear, the cochlea, was reported in 2004 32 , with the purpose of modulating cochlear mechanics. Infrared laser light as a stimulation method for peripheral nerve activation (INS) 15,24 as well as the optogenetic stimulation approach 16,26 have been reported, and both target patients with severe to profound sensorineural hearing loss (i.e., significant loss of functional hair cells). In addition to INS, "optophonic" stimulation of the inner ear with near infrared laser light has been observed 23,33 , in which sound could be generated by the laser stimulus that vibrated and activated intact auditory sensory cells (hair cells). Another laser approach described by Wenzel et al. 17 demonstrated the activation of the inner ear with green pulsed laser light (532 nm) using the optoacoustic effect, by delivering very short pulses that induced direct vibrations of the cochlear basilar membrane. In the current study, we have extended this previous work by demonstrating that optoacoustic stimulation can also vibrate structures within the outer and middle ear without requiring direct contact with those targeted structures. By presenting amplitude modulated pulse patterns using a convolution-based method, optoacoustic stimulation can achieve controllable frequency-specific activation at the ear drum and more centrally in the ICC, which is a critical requirement for a hearing device. One advantage over other laser-based hearing devices designed for the outer or middle ear, such as the Earlens, is that the optoacoustic approach would not require a sensor-actuator component to be attached to the hearing system (e.g., the ear drum). Similar to the Earlens 27,34 , since the transmitted energy is light instead of sound, acoustic feedback can be greatly reduced for an optoacoustic-based hearing aid, potentially providing greater frequency bandwidth and sound quality to patients without requiring full occlusion of the ear canal. In cases where there is damage to certain regions of the middle ear pathway (e.g., the ossicular chain and/or a round window niche that is not sufficiently accessible), in which the Earlens or conventional middle ear implants may not be suitable, an optoacoustic hearing device could be implanted into the middle ear cavity with a fiber or fiber bundle targeting bone regions beyond the damaged area, especially in cases where there is no reliable location to attach the actuator of a middle ear implant.
There are still several challenges that need to be addressed before an optoacoustic hearing device can be translated into patients. First, an optical fiber or fiber bundle needs to be positioned safely and stably into the ear canal or into the middle ear cavity. One option for the outer ear canal would be to embed the fiber(s) into a vented earpiece that can be inserted into the ear canal or into the inner portions of the concha. The microphone should be positioned as close as possible to the entrance of the ear canal to take advantage of the high-frequency pinna diffraction, possibly embedding it in the same earpiece, assuming that acoustic feedback within the ear canal caused by the vibrating ear drum is minimal 34 ; this needs to be characterized in a future study. Within the middle ear cavity, the fiber(s) could be embedded in a miniaturized and biocompatible guide tube or casing that is fixed to the healthy bony wall within the mastoid, and the fiber(s) can be oriented towards different portions of the ossicular chain or a bony area in close proximity to the inner ear, round window or surrounding bony structures (e.g., the promontorium). Second, a stimulation strategy is needed that can transmit complex signal patterns, such as speech and music, to the outer and middle ear structures. The fact that we were able to reliably predict the measured vibratory response at the ear drum level using an LTI modelling approach (i.e., based on a linear, time-invariant assumption) provides optimism that the results observed in our study with simple pure tones and sinusoid-modulated laser pulse sequences may reasonably translate to more complex inputs, in which those complex signals can be modelled as a sum of pure tones in an LTI framework. The brain is undoubtedly a non-linear system, and thus further experiments will still need to be pursued to assess how the responses in the ICC and auditory cortex, as well as hearing perception, actually compare between complex acoustic inputs and the laser pulse sequences modulated by those complex inputs (e.g., by the envelope and/or fine structure features of those inputs). Third, we need to determine the safe range of levels of optoacoustic stimulation within the outer and middle ear, and whether the maximum level is sufficient to provide a wide dynamic range for useful hearing. Finally, we need to calculate the total energy required to drive an optoacoustic hearing device and to minimize overall consumption to ensure the device can last at least a full day with overnight charging. In summary, we have developed and evaluated a laser-based pulse amplitude modulation approach that can provide a versatile and non-contact method to precisely vibrate structures within the peripheral hearing system. This new type of technology could provide an alternative solution for implantable and non-implantable hearing aids by replacing the speaker or the sound transducer (e.g., the force mass transducer in a middle ear implant) with a non-contact and focused energy transfer modality via laser pulses, to be used in hearing impaired patients who are not sufficiently satisfied or aided with current hearing aid technologies.

Methods

Acoustic stimuli and trigger signals. We generated the acoustic and trigger signals on a PC (Hewlett-Packard Company/HP Inc., Palo Alto, CA, USA) in the control room (Fig. 2).
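The stimulus waveform itself is simple to reproduce. A minimal sketch of the 100 ms sinusoid with 0.5 ms rise/fall ramps described next is given below, in Python as a stand-in for the MATLAB generation used by the authors; the sample rate is an assumption.

```python
import numpy as np

FS = 500_000                # Hz, assumed sample rate of the arbitrary waveform
LMR = 4_000                 # Hz, laser modulation rate
DUR, RAMP = 0.100, 0.0005   # s, signal duration and rise/fall ramp time

t = np.arange(0, DUR, 1 / FS)
signal = np.sin(2 * np.pi * LMR * t)

# Linear onset/offset ramps to avoid spectral splatter at the stimulus edges
n_ramp = int(RAMP * FS)
env = np.ones_like(signal)
env[:n_ramp] = np.linspace(0.0, 1.0, n_ramp)
env[-n_ramp:] = np.linspace(1.0, 0.0, n_ramp)
signal *= env
```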
We transferred the MATLAB® (R2014a, The MathWorks Inc., Natick, USA) generated sinusoid (duration of 100 ms with 0.5 ms rise/fall ramp time) to a waveform generator (33500b Waveform Generator, Agilent Technologies, Santa Clara, USA) as an arbitrary file via the Virtual Instrument Software Architecture (VISA) interface. We regulated the sound pressure level through a programmable attenuator (g.PAH, g.tec medical engineering GmbH, Schiedlberg, Austria) and transmitted the acoustic signal through a free-field loudspeaker. The speaker was positioned and calibrated at a 10-cm distance to the left ear. The trigger signal for the recordings lasted 100 µs and had an onset that was synchronized to the laser or acoustic stimulation signal.

Laser. We used a 532 nm pulsed Neodymium-doped Yttrium Orthovanadate (Nd:YVO4) laser system (INCA, Xiton Photonics GmbH, Kaiserslautern, Germany). The stimulation laser was operated with a pre-determined laser pulse rate (LPR) of either 32 kHz or 50 kHz. We generated the sinusoid signal with a specific laser modulation rate (LMR) (Fig. 1) as described for the acoustic signals. The signal (duration of 100 ms with 0.5 ms rise/fall ramp time) was transferred to the input of the acousto-optic modulator (AOM) (Xiton Photonics GmbH; Fig. 2a). The laser pulses were then delivered to the target structure (ear drum, otic capsule, round window) using the laser fiber (Ø 365 µm) that was connected to the AOM. To keep the number of pulses per stimulation unit constant over the different modulated frequencies tested, the number of pulses under the envelope had to vary from one modulated frequency to the other, as presented in Table 2. We changed the amplitude of the laser pulses corresponding to the LMR at 1 kHz, 2 kHz, 4 kHz, 8 kHz and 10 kHz, with an averaged maximal peak power between 20 and 500 mW (Table 3).

For the comparisons shown in Fig. 4b,c, we conducted one-factorial ANOVAs after checking the data for normal distribution and variance homogeneity. For the comparison between the individual groups, we performed post-hoc Bonferroni tests. The reported alpha level was 0.05.

Vibration measurements in explanted specimens. Surgical technique and laser Doppler vibrometer setup. We explanted the temporal bones, removed the cartilaginous outer ear canal and exposed the tympanic membrane (TM). To improve the signal to noise ratio (SNR), we placed micron-sized glass beads (Ø 50 µm/bead) on the TM. With a custom-made holder we fixed the head, inserted the laser fiber into the outer ear canal and directed it towards the TM. For the vibration recordings in response to the laser stimulation, we placed a scanning laser Doppler vibrometer PSV 500 (Polytec GmbH, Waldbronn, Germany) (LDV) at a distance of 20 cm from the TM. Using the built-in camera of the LDV, the TM could be displayed on the monitor, the measured points were visualized, and the recordings could be monitored and controlled (Fig. 2a). In order to obtain an adequate SNR, the scan points had to be set exactly on the reflective glass beads.

Stimuli and procedure. For the acoustic control measurements at the beginning of the LDV recordings in each experiment, we applied pure tones (1 kHz, 2 kHz, 4 kHz, 8 kHz and 10 kHz) at 0 dB SPL, 60 dB SPL, 70 dB SPL and 80 dB SPL (Supplementary Fig. S4). These data enabled comparisons between the acoustically and the optically induced vibrations of the ear drum.
The averaged maximal power for laser stimulation www.nature.com/scientificreports www.nature.com/scientificreports/ was between 20 and 500 mW ( Table 3). The acoustic and optical stimulation rate was 5 stimulation units/s. A minimum of 17 runs were recorded per frequency/level combination and the results were analyzed offline. Measurements. We connected the vibrometer with the external trigger signal of the waveform generator (33500b Waveform Generator, Agilent Technologies, Santa Clara, USA) and with the laser scanning head that was connected to the laptop running the Polytec GmbH control software (Fig. 2a). We performed the velocity measurements in the time domain and saved these for offline analysis with Polytec File Access Software in combination with MATLAB ® . After the 2D-adjustment, the scan points were set. Data analysis. Offline processing. To suppress artifacts, we used a bandpass filter between 300 and 12000 Hz and averaged a minimum of 17 runs. We calculated the displacement from the measured velocity data through numerical integration, and determined the single sided amplitude spectra using Fourier transformation. The fundamental frequency as well as the distortions were analyzed. Signal to noise ratio (SNR). We calculated the SNR at the peaks by dividing the peak frequency (f0, h2, LPR) by the same frequency of the noise signal (i.e., no stimulus condition). We then converted the SNR to a dB scale using equation (4). We discarded measurements if the f0-SNR was below 6 dB. If the h2-SNR or the LPR-SNR was below 0 dB it was set to 0 dB. Convolution based computational modeling. As described in signal processing theory 35 , the output of a linear time invariant (LTI) system can be calculated by convoluting the impulse response of the system with an input signal. Based on this idea, we implemented a convolution based model in MATLAB ® : We recorded the LDV vibration velocity at the umbo in response to optical stimulation with a 50 µJ laser pulse, which is as an approximation to the Dirac impulse (equation (1)). We considered the LDV velocity recording in response to stimulation with a 50 µJ laser pulse that can be represented as the impulse response of the system using this equation: = δ h[n] T{ [n]}. The convolution of these impulse response h[n] with an input function representing the laser pulses (equation (2)) leads to the model function y[n] (equation (3)). We compared this modeled function with the displacement measurements made at the umbo in response to the actual laser modulated pulse stimulation ( Supplementary Fig. S1). Neural recordings in the ICC in anesthetized animals. Animal model, anesthesia and surgical procedure. For the neural recordings in animals, we extended the initial anesthesia described above with additional 1/4-1/2 of the initial dosage every 30 to 60 minutes to maintain an areflexive state. We injected 50 ml/kg bodyweight saline subcutaneous per day divided in dosages every 1-2 hours. The body temperature was stabilized at 38 °C during the experiment using a DC heating pad. We removed the left auricle and exposed the meatus acusticus externus to attain access to the TM. Additionally, through a retoauricular incision we opened the middle ear (the bulla) and exposed the cochlea. We fixed the head with a custom-made holder, opened the skull and exposed the inferior colliculus. 
We inserted an A1x16 single shank electrode (NeuroNexus Technologies, Ann Arbor, MI, USA) at an angle of 45° to the horizontal line using a micromanipulator (MN-151, Narishige International Limited, London, UK) with a custom-built angle adjusting system, in order to position the shank along the tonotopic gradient of the ICC 36,37 . We placed platinum subdermal needles as reference (vertex) and ground (neck) electrodes. We then performed the acoustic stimulation to determine the final position of the electrode (for details see Stimuli and recordings) and covered the brain with agar to reduce its swelling, pulsations and drying. We inserted the laser fiber into the outer ear canal and directed it towards the TM using a second micromanipulator (Mk1 Manipulator, Singer Instruments, Roadwater, UK) (n = 12 animals). For other stimulation locations, we directed the fiber towards the round window membrane or the otic capsule at the level of the basal turn (n = 4 animals). The LPR was 50 kHz in those sets of experiments.

Acoustic and optical stimuli and recordings. The A1x16 electrode had an inter-channel spacing of 100 µm and a site area of 703 µm², and was connected via a custom-made head stage to the biosignal amplifier (g.USBamp, g.tec medical engineering GmbH, Schiedlberg, Austria) (Fig. 2b). The reference was displayed on channel 1 to check the noise level. The recording software was implemented in Simulink® (The MathWorks Inc., Natick, USA). We saved the raw data for each channel unfiltered for offline analysis. To check for the correct insertion depth and position inside the ICC, we performed online analysis (local field potentials (LFP) for each channel, spike activity and total spike rate (TSR)) in response to acoustic stimulation with pure tones (1 kHz, 2 kHz, 4 kHz, 8 kHz and 10 kHz) at different levels (40-60 dB SPL in 10 dB steps). The stimulation rate for all acoustic and optical stimulation sets in vivo was 1.86 stimulation units/s. The spike activity was bandpass filtered between 300 and 3000 Hz as described by Lim et al. 38 , and the spike trains were displayed 10 ms before and 250 ms after the stimulation trigger signal for each channel. We calculated the LFP by averaging the neural signal for each channel over 20 trials after filtering between 20 and 3000 Hz. We calculated the TSR by dividing the number of spikes in the 5 to 105 ms interval after the trigger onset by 100 ms. For 3 animals, each level-rate combination of our optical stimulation paradigm was performed twice, with the second run performed in reverse order (backwards) from the first one, to verify the reliability of our recordings.

Data Analysis. We performed the data analysis in response to the acoustic and the optical stimulation using a spike detection algorithm. Spikes were defined as signal peaks exceeding 3.5 standard deviations above the background noise. The time at which the spike was detected (timestamp) corresponded to the largest negative peak of that spike (Supplementary Fig. S5).

Best frequency (BF) and driven spike rate (DSR).
To determine the BF for each electrode channel, we applied acoustic pure tone stimulation in random order for six frequencies per octave between 2 kHz and 32 kHz (high frequency electrode position) or between 1 and 22 kHz (low frequency electrode position), depending on the probe position of each experiment (n = 5 animals for high frequency experiments, n = 11 animals for low frequency experiments). We generated the acoustic frequency response map (FRM) online with at least 7 runs per frequency and level pair (Supplementary Fig. S6a). To determine the DSR for a set time window, we subtracted the spontaneous spike rate (SSR) from the TSR for each channel. We then normalized the DSR by the maximum spike rate per channel across the presented stimuli and plotted the normalized spike rates as an FRM (Supplementary Fig. S6a). We determined the BF as the frequency with the maximum activation at 10 dB SPL above the visually determined acoustic threshold per channel. The BF versus site number plot demonstrated the tonotopy of our insertion (Supplementary Fig. S6b).

Post Stimulus Time Histograms (PSTHs). We used PSTHs with timestamps across at least 100 runs (1-ms bins) to analyze the data and determine the neural response on each channel after acoustic or optical stimulation. The corresponding acoustic BFs were mapped to each histogram channel for further assessment (Supplementary Fig. S7).

Equivalent Level Estimation between Acoustic and Optical Stimulation. For the estimation of the acoustic sound pressure level equivalent of pure tone stimulation in comparison to optical stimulation, the ICC spike activity in response to optical stimulation with 80 mW for each LPR-LMR combination was recorded. The DSR was compared to the acoustic level at the same electrode channel after stimulation at the best frequency. We discarded values if the BF channel at maximal acoustic stimulation differed by more than 1 channel from the BF channel at threshold.

Code availability. Custom code for recording and analysis is available from the corresponding authors on reasonable request.

Data Availability. The datasets generated during and/or analyzed during the current study are available from the corresponding authors on reasonable request.
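To illustrate the spike analysis described above, a minimal Python sketch is given below. It is not the authors' code (which is available on request, per the statement above): the filtered trace, trigger times and sampling rate are hypothetical inputs, and np.std of the whole trace is used as a stand-in for the background-noise standard deviation.

```python
import numpy as np

FS = 25_000  # Hz, assumed sampling rate of the band-passed (0.3-3 kHz) trace

def detect_spike_times(trace, fs=FS):
    """Spike timestamps (s), each at the largest negative peak of an event
    exceeding 3.5 standard deviations of the (noise) trace."""
    thresh = 3.5 * np.std(trace)
    idx = np.flatnonzero(trace < -thresh)
    if idx.size == 0:
        return np.array([])
    # group contiguous threshold crossings into single spike events
    runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
    return np.array([r[np.argmin(trace[r])] for r in runs]) / fs

def psth(spike_times, trigger_times, window=(-0.010, 0.250), bin_s=0.001):
    """1-ms PSTH of spike times relative to each stimulation trigger."""
    rel = np.concatenate([spike_times - t0 for t0 in trigger_times])
    bins = np.arange(window[0], window[1] + bin_s, bin_s)
    counts, _ = np.histogram(rel, bins=bins)
    return bins[:-1], counts

def driven_spike_rate(spike_times, trigger_times, ssr):
    """DSR (spikes/s) = rate in the 5-105 ms post-trigger window (TSR)
    minus the spontaneous spike rate (SSR)."""
    n = sum(((spike_times > t0 + 0.005) & (spike_times <= t0 + 0.105)).sum()
            for t0 in trigger_times)
    tsr = n / (0.100 * len(trigger_times))
    return tsr - ssr
```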
Jets, Bulk Matter, and their Interaction in Heavy Ion Collisions at Several TeV

We discuss a theoretical scheme that accounts for bulk matter, jets, and the interaction between the two. The aim is a complete description of particle production at all transverse momentum ($p_{t}$) scales. In this picture, the hard initial scatterings result in mainly longitudinal flux tubes, with transversely moving pieces carrying the $p_{t}$ of the partons from hard scatterings. These flux tubes constitute eventually both bulk matter (which thermalizes and flows) and jets. We introduce a criterion based on parton energy loss to decide whether a given string segment contributes to the bulk or leaves the matter to end up as a jet of hadrons. Essentially low $p_{t}$ segments from inside the volume will constitute the bulk, high $p_{t}$ segments (or segments very close to the surface) contribute to the jets. The latter ones appear after the usual flux tube breaking via q-qbar production (Schwinger mechanism). Interesting is the transition region: intermediate $p_{t}$ segments produced inside the matter close to the surface, but having enough energy to escape, are supposed to pick up q-qbar pairs from the thermal matter rather than creating them via the Schwinger mechanism. This represents a communication between jets and the flowing bulk matter (fluid-jet interaction). Also very important is the interaction between jet hadrons and the soft hadrons from the fluid freeze-out. We employ the new picture to investigate Pb-Pb collisions at 2.76 TeV. We discuss the centrality and $p_{t}$ dependence of particle production and long range dihadron correlations at small and large $p_{t}$.

I. INTRODUCTION

Traditionally the physics of ultrarelativistic heavy ion collisions is discussed in terms of different categories like collective dynamics, parton-jet physics, and fluctuation-correlation studies, although these different topics are highly correlated. In this article, a complete dynamical picture of particle production at all $p_t$ scales will be presented, which accounts for the production and evolution of bulk matter and jets, and the very important interaction between the two components (which is not only the well known parton energy loss). The consequences of these interactions can be nicely seen in long range dihadron correlations. The physical picture of our approach is the following: initial hard scatterings result in mainly longitudinal flux tubes, with transversely moving pieces carrying the $p_t$ of the partons from hard scatterings. These flux tubes constitute eventually both bulk matter (which thermalizes, flows, and finally hadronizes) and jets, according to criteria based on partonic energy loss. We will consider a sharp fluid freeze-out hypersurface, defined by a constant temperature. Freeze-out here simply means the end of the fluid phase, but the hadrons still interact. High energy flux tube segments will leave the fluid, providing jet hadrons via the usual Schwinger mechanism of flux-tube breaking caused by quark-antiquark production. But the jets may also be produced at the freeze-out surface. Here we assume that the quark-antiquark pair needed for the flux tube breaking is provided by the fluid, with properties (momentum, flavor) determined by the fluid rather than by the Schwinger mechanism. Considering transverse fluid velocities up to 0.7c, and thermal parton momentum distributions, one may get a "push" of a couple of GeV added to the transverse momentum of the string segment. This will be a crucial effect for intermediate $p_t$ jet hadrons. There is another important issue. Even for hadrons with transverse momenta of 10-20 GeV, there is a large probability of jet hadron formation before the hadron enters the dense hadronic medium. This means a significant probability of scatterings between jet hadrons and soft hadrons (from freeze-out), having essentially two consequences: an increase of low $p_t$ particle production, and a reduction of yields at high $p_t$. In addition there are of course the well known hadronic interactions between the soft hadrons. We have discussed different processes which all affect $p_t$ spectra. It is, however, possible to disentangle the different contributions by looking at dihadron correlations. These are extremely useful tools heavily used by experimental groups at the RHIC and the LHC [1][2][3][4][5]. Recently, the CMS and ALICE collaborations published results on such correlations in Pb-Pb collisions at 2.76 TeV and different centralities, over a more or less broad range in relative pseudorapidity ($\Delta\eta$) and full coverage of the relative azimuthal angle ($\Delta\phi$) [4,5]. Different combinations of transverse momenta $p_t^{assoc}$ and $p_t^{trigg}$ of associated and trigger particles in the range between 0.25 GeV/c and 15 GeV/c are investigated. Considering long range correlations ($|\Delta\eta| > A$, $A \geq 0.8$), the coefficients $V_{n\Delta}$ of the harmonic decomposition factorize as $V_{n\Delta} = v(p_t^{assoc})\, v(p_t^{trigg})$; this holds not only for small transverse momenta but also, for example, for large $p_t^{trigg}$ and small $p_t^{assoc}$. For small momenta the situation seems to be clear: the correlation is flow dominated. But factorization does not necessarily mean that both hadrons carry the flow from the fluid! This can in particular not be the answer for the observed correlations at large $p_t^{trigg}$; here we have to deal with an interaction between the flowing bulk and jets, which makes the observed correlations very interesting, in particular as a test of our ideas concerning bulk-jet separation and interaction. Another challenge: the ATLAS collaboration recently showed results [6] on elliptical flow of charged particles with respect to an event plane in the opposite $\eta$ hemisphere (also a kind of long range correlation). The $v_2$ values are quite large, up to values of $p_t = 20$ GeV/c, for eight different centrality ranges. Can we understand this in a quantitative fashion? The heavy ion results shown in this paper are based on 2,000,000 events simulated with EPOS2.17v3. A central (0-5%) Pb-Pb event takes on average around 2 HS06 hours of CPU time, using machines with an average scaling factor of 8.7 [7]. Six events always share the same parton configuration and hydrodynamic evolution, with only particle production and hadronic rescattering being redone (to gain statistics). This is taken care of when considering mixed events in correlation studies.

II. THE BASIS: FLUX TUBES FROM A MULTIPLE SCATTERING APPROACH

The starting point is a multiple scattering approach corresponding to a marriage of Gribov-Regge theory and perturbative QCD (pQCD), see Fig. 1. An elementary scattering corresponds to a parton ladder, containing a hard scattering calculable based on pQCD, including initial and final state radiation (for details see [8]). These ladders are identified with flux tubes (see Fig. 2), which are mainly longitudinal objects with transversely moving parts, carrying the transverse momenta of the hard scatterings. These objects are also referred to as kinky strings.
This will be a crucial effect for intermediate p_t jet hadrons. There is another important issue: even for hadrons with transverse momenta of 10-20 GeV/c, there is a large probability that a jet hadron forms before it enters the dense hadronic medium. This means a significant probability of scatterings between jet hadrons and soft hadrons (from freeze-out), with essentially two consequences: an increase of low-p_t particle production, and a reduction of yields at high p_t. In addition, there are of course the well-known hadronic interactions among the soft hadrons. We have discussed different processes which all affect p_t spectra. It is, however, possible to disentangle the different contributions by looking at dihadron correlations. These are extremely useful tools heavily used by experimental groups at the RHIC and the LHC [1-5]. Recently, the CMS and ALICE collaborations published results on such correlations in Pb-Pb collisions at 2.76 TeV and different centralities, over a more or less broad range in relative pseudorapidity (∆η) and full coverage of the relative azimuthal angle (∆φ) [4,5]. Different combinations of the transverse momenta p_t^assoc and p_t^trigg of associated and trigger particles in the range between 0.25 GeV/c and 15 GeV/c are investigated. Considering long-range correlations (|∆η| > A, A ≥ 0.8), the coefficients V_n∆ of the harmonic decomposition factorize as V_n∆ = v_n(p_t^assoc) v_n(p_t^trigg), not only for small transverse momenta but also, for example, for large p_t^trigg and small p_t^assoc. For small momenta the situation seems clear: the correlation is flow dominated. But factorization does not necessarily mean that both hadrons carry the flow from the fluid! This can in particular not be the answer for the observed correlations at large p_t^trigg: here we have to deal with an interaction between the flowing bulk and jets, which makes the observed correlations very interesting, in particular as a test of our ideas concerning bulk-jet separation and interaction. Another challenge: the ATLAS collaboration recently showed results [6] on elliptical flow of charged particles with respect to an event plane in the opposite η hemisphere (also a kind of long-range correlation). The v_2 values are quite large, up to values of p_t = 20 GeV/c, for eight different centrality ranges. Can we understand this in a quantitative fashion? The heavy ion results shown in this paper are based on 2,000,000 events simulated with EPOS2.17v3. A central (0-5%) Pb-Pb event takes on average around 2 HS06 hours of CPU time, using machines with an average scaling factor of 8.7 [7]. Six events always share the same parton configuration and hydrodynamic evolution, with only particle production and hadronic rescattering being redone (to gain statistics). This is taken care of when considering mixed events in correlation studies.

II. THE BASIS: FLUX TUBES FROM A MULTIPLE SCATTERING APPROACH

The starting point is a multiple scattering approach corresponding to a marriage of Gribov-Regge theory and perturbative QCD (pQCD), see Fig. 1. An elementary scattering corresponds to a parton ladder, containing a hard scattering calculable within pQCD, including initial and final state radiation (for details see [8]). These ladders are identified with flux tubes (see Fig. 2), which are mainly longitudinal objects with transversely moving parts carrying the transverse momenta of the hard scatterings. These objects are also referred to as kinky strings.
One should note that multiple scattering here does not mean just a rescattering of hard partons; it rather means a multiple exchange of complete parton ladders, leading to many flux tubes. In this case, the energy sharing between the different scatterings will be very important, to be discussed later. The consistent quantum mechanical treatment of the multiple scattering is quite involved; it is based on cutting-rule techniques to obtain partial cross sections, which are then simulated with the help of Markov chain techniques [9]. As said before, the final-state partonic systems corresponding to elementary parton ladders are identified with flux tubes. The relativistic string picture [10-12] is very attractive, because its dynamics is essentially derived from general principles such as covariance and gauge invariance. The simplest possible string is a surface X(α, β) in 3+1 dimensional space-time, with piecewise constant initial velocities ∂X/∂β. These velocities are identified with parton velocities, which provides a one-to-one mapping from partons to strings. For details see [8,9]. The high transverse momentum (p_t) partons will show up as transversely moving string pieces, see Fig. 3(a). Despite the fact that in the TeV energy range most processes are hard, and despite the theoretical importance of very high p_t partons, it should not be forgotten that the latter processes are rare: most kinks carry only a few GeV of transverse momentum, and the energy is nevertheless essentially longitudinal. In the case of elementary reactions, the strings will break (see Fig. 3(b)) via the production of quark-antiquark pairs, which screen the color field (Schwinger mechanism), according to the so-called area law [8,9,13,14]. The string segments are identified with final hadrons and resonances. This picture has been very successful in describing particle production in electron-positron annihilation or in proton-proton scattering at very high energies. In the latter case, not only are low p_t particles described correctly, for example for pp scattering at 7 TeV [15,16], but jet production is covered as well. As discussed earlier, the high transverse momenta of the hard partons show up as kinks, i.e. transversely moving string regions. After string breaking, the string pieces from these transversely moving areas represent the jets of particles associated with the hard partons. To demonstrate that this picture also works quantitatively, we compute the inclusive p_t distribution of jets, reconstructed with the anti-kt algorithm [17], and compare with data [18], see Fig. 4. In Fig. 5, we compare our p_t distribution of partons with a parton model calculation based on CTEQ6 parton distribution functions [19], in both cases at leading order with a K-factor of 2.

III. JET-BULK SEPARATION

In heavy ion collisions, and also in high multiplicity events in proton-proton scattering at very high energies, the density of strings will be so high that the strings cannot decay independently as described above. Here we have to modify the procedure, as discussed in the following. The starting point are still the flux tubes (kinky strings) originating from elementary collisions. These flux tubes will constitute both bulk matter, which thermalizes and expands collectively, and jets. The criterion which decides whether a string piece ends up as bulk or jet will be based on energy loss.
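Before turning to the in-medium case, here is a minimal sketch of the vacuum flux-tube breaking referenced above. It only encodes the generic Schwinger tunneling suppression exp(-π m²/κ) and the corresponding Gaussian transverse momentum of the produced pair; the string tension and effective quark masses below are illustrative assumptions, not the EPOS tune.

```python
import math
import random

# Vacuum string breaking (Schwinger mechanism), schematic:
# pair creation per flavor is suppressed as exp(-pi * m_q^2 / kappa),
# and the pair's pt follows exp(-pi * pt^2 / kappa).
KAPPA = 0.2                                # string tension, GeV^2 (~1 GeV/fm), assumed
M_EFF = {"u": 0.3, "d": 0.3, "s": 0.5}     # effective quark masses, GeV, assumed

def flavor_weights(kappa=KAPPA):
    """Relative q-qbar production weights from the Schwinger suppression."""
    return {f: math.exp(-math.pi * m * m / kappa) for f, m in M_EFF.items()}

def sample_breakup(rng=random):
    """Draw one flavor and the transverse momentum of the produced pair."""
    w = flavor_weights()
    r, acc = rng.random() * sum(w.values()), 0.0
    flavor = None
    for f, wf in w.items():
        acc += wf
        if r <= acc:
            flavor = f
            break
    # pt^2 is exponentially distributed with mean kappa/pi
    pt = math.sqrt(rng.expovariate(math.pi / KAPPA))
    return flavor, pt

if __name__ == "__main__":
    print(flavor_weights())   # strangeness comes out suppressed
    print(sample_breakup())
```

Note how strangeness is naturally suppressed in vacuum breaking; this is the baseline against which the fluid-assisted breaking discussed later enhances strangeness.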
In the following we consider a flux tube in matter, where "matter" first means the presence of a high density of other flux tubes, which then thermalize. A more quantitative discussion will follow. Three possibilities may occur, referred to as A, B, C, see Fig. 6:

A: String segments far from the surface and/or being slow will simply constitute matter; they lose their character as individual strings. This matter will evolve hydrodynamically and finally hadronize ("soft hadrons").

B: Some string pieces (like those close to transversely moving kinks) will be formed outside the matter; they will escape and constitute jets ("jet hadrons").

C: There are finally also string pieces produced inside the matter or at the surface, but having enough energy to escape and show up as jets ("jet hadrons"). They are affected by the flowing matter ("fluid-jet interaction").

Let us discuss how the above ideas are realized. In principle, the formation and expansion of matter and the interaction of partons with matter is a dynamical process. However, the initial distribution of energy density and the knowledge of the initial momenta of partons (or string segments) already allow an estimate of the fate of the string segments. By "initial time" we mean some early proper time τ0, which is a parameter of the model. Strictly speaking, energy loss concerns partons, eventually modifying the kink momenta in our picture, so that the momenta of the string segments after breaking are reduced. We therefore base our discussion of energy loss on string segments. Inspired by [20], we estimate the energy loss ∆E of a string segment as an integral of a length element dL along its trajectory, involving ρ, the density of string segments at the initial proper time τ0; V0, an elementary volume cell size (technical parameter, taken to be 0.147 fm³); L0, a (technical) length scale (taken to be 1 fm); E, the energy of the segment in the "Bjorken frame" moving with a rapidity y equal to the space-time rapidity η_s; and the parameters k_Eloss and E0. We introduce the energy cutoff E0 to have sufficient energy loss for slowly moving segments. A string segment will contribute to the bulk (type A segment) when its energy loss is bigger than its energy, i.e. ∆E ≥ E; a sketch of this criterion is given below. All the other segments are allowed to leave the bulk (type B or C segments). Only the bulk segments are used to determine the initial conditions for hydrodynamics, following the same procedure as explained in [8] (with some new elements, as discussed in the next section). Starting from this initial condition, the bulk matter evolves according to the equations of ideal hydrodynamics till "hadronization", which occurs at some "hadronization temperature" T_H [8]. Hadronization means that we change from a matter description to a particle description, but the hadrons still interact among each other, realized via a hadronic cascade procedure [21], already discussed in [8]. After having performed the hydrodynamic expansion, we have to come back to the string segments which escape the bulk because their energy is bigger than their energy loss. We employ a formation time: the string segments are formed at times t distributed as exp(−t/γτ_form), with a parameter τ_form which is taken to be 1 fm/c. If the formation time is such that the segment is produced outside the "hadronization surface" defined by T_H, the segment escapes as it is (type B segment).
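The following minimal sketch makes the bulk/jet decision concrete. The criterion ∆E ≥ E, the parameters (k_Eloss, E0) and the technical constants (V0, L0) are taken from the text; the specific integrand (local segment density times max(E, E0), integrated along a straight path) is an assumed functional form for illustration, not the exact EPOS expression.

```python
import numpy as np

V0 = 0.147   # fm^3, elementary volume cell size (from the text)
L0 = 1.0     # fm, technical length scale (from the text)

def energy_loss(start, direction, rho, E, k_eloss, E0, dL=0.1):
    """Integrate an ASSUMED loss density
        dE = k_eloss * rho(x) * V0 * max(E, E0) / L0 * dL
    along a straight path through the matter profile rho (fm^-3)."""
    x = np.asarray(start, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    dE, L = 0.0, 0.0
    while rho(x) > 0.0 and L < 50.0:          # stop once outside the matter
        dE += k_eloss * rho(x) * V0 * max(E, E0) / L0 * dL
        x, L = x + d * dL, L + dL
    return dE

def is_bulk(start, direction, rho, E, k_eloss=1.0, E0=1.0):
    """Type A (bulk) if the estimated loss exceeds the segment energy."""
    return energy_loss(start, direction, rho, E, k_eloss, E0) >= E

# toy matter profile: uniform disk of radius 5 fm in the transverse plane
rho_disk = lambda x: 2.0 if np.linalg.norm(x) < 5.0 else 0.0
print(is_bulk((0.0, 0.0), (1.0, 0.0), rho_disk, E=2.0))    # slow, central -> bulk
print(is_bulk((4.5, 0.0), (1.0, 0.0), rho_disk, E=50.0))   # fast, near surface -> jet
```

The default values k_eloss = 1 and E0 = 1 GeV are placeholders; in the model they are fitted parameters.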
Most interesting are the segments which are formed inside but still escape, because they have E > ∆E. These are type C segments. They escape, but their properties change. Such a segment actually leaves the "matter" at the hadronization surface at a particular space-time point x, which is characterized by some collective flow velocity v(x). We assume that the string breaking in this case is modified such that the quark and antiquark (or diquark) necessary for the string breaking are taken from the flowing fluid rather than being produced via the Schwinger mechanism. So the new string segment is composed of a quark and an antiquark (diquark) carrying the flow velocity, and the string piece in between, which has not been changed. This string piece may or may not carry large momentum, depending on whether it is close to a kink or not; the former possibility is shown in Fig. 7. In any case, due to the fluid-jet interaction, the properties of this segment change drastically compared to normal fragmentation:

• The quark and antiquark (or diquark) from the fluid provide a push in the direction of the moving fluid.

• The quark (antiquark) flavors are determined by the thermal statistics of the fluid, with more strangeness production compared to the Schwinger mechanism.

• The probability p_diq to have a diquark rather than an antiquark will be bigger compared to the highly suppressed diquark-antidiquark breakup in the Schwinger picture (p_diq is a parameter).

Our procedure has four parameters: k_Eloss, E0, τ_form, and p_diq. It allows us to cover in a single scheme the production of jets, of bulk, and the interaction between the two.

IV. FORMATION TIMES

A crucial ingredient of the mechanism of fluid-jet interaction is the formation time of jet hadrons (the hadrons which leave the fluid). The probability distribution of the formation times t of jet hadrons with gamma factor γ is given as

P(t) = (1/γτ_form) exp(−t/γτ_form),   (3)

where we use τ_form = 1 fm/c. The probability of having a formation point inside the fluid is obtained as an integral over eq. (3),

P = ∫₀^{t_max} P(t) dt,   (4)

with t_max being the time corresponding to a formation at the fluid surface. Rather than making a simulation, we are going to present a very simple formula providing a rough estimate of the p_t dependence of this probability. For a collision of two Pb nuclei in some centrality interval, characterized by the mean impact parameter b, we use ct_max = r_Pb − b/2, where r_Pb is the radius parameter used in the Woods-Saxon distribution of nucleons. Considering transversely moving hadrons of mass m, we have γ ≈ p_t/mc. The estimate P_inside of the probability to form (pre)hadrons inside the fluid is then

P_inside = 1 − exp(−(r_Pb − b/2)/(γ cτ_form)).   (5)

In Fig. 8, we show the result for the 0-5% and the 20-30% most central events in Pb-Pb collisions at 2.76 TeV, using cτ_form = 1 fm, mc² = 1 GeV, r_Pb = 6.5 fm, and the average impact parameters b = 1.8 fm (0-5%) and b = 7.8 fm (20-30%). By construction, the probability P_fluid-jet of having a fluid-jet interaction is equal to the probability of forming (pre)hadrons inside the fluid, so its estimate is given by P_inside. From Fig. 8 we see that this probability is quite large for intermediate values of p_t, and that even large values (50 GeV/c) are significantly affected. Whether the effect of the interaction can be seen in some observable is a different question and will be discussed later. Several authors have already discussed "in-medium hadronization", see for example Ref. [22], where one also finds an overview of earlier models on this subject.
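As a check of eq. (5), a few lines of Python reproduce the estimate with the quoted parameters. The formula and parameter values are those given in the text; the script itself is merely illustrative.

```python
import math

C_TAU_FORM = 1.0   # fm (c * tau_form)
MC2 = 1.0          # GeV, hadron mass scale used in the text
R_PB = 6.5         # fm, Woods-Saxon radius parameter

def p_inside(pt, b):
    """Eq. (5): probability to form a (pre)hadron inside the fluid."""
    gamma = pt / MC2                  # gamma ~ pt / (m c)
    ct_max = R_PB - b / 2.0           # distance to the fluid surface, fm
    return 1.0 - math.exp(-ct_max / (gamma * C_TAU_FORM))

for pt in (2.0, 5.0, 10.0, 20.0, 50.0):           # GeV/c
    print(pt, round(p_inside(pt, b=1.8), 3),      # 0-5% central
          round(p_inside(pt, b=7.8), 3))          # 20-30% central
```

For central collisions this gives P_inside of roughly 0.4 at 10 GeV/c and still about 0.1 at 50 GeV/c, consistent with the statement that even very high p_t hadrons are significantly affected.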
V. HYDRODYNAMICS

The bulk matter extracted as described above provides the initial condition for a hydrodynamic evolution. As explained in [8], we compute the energy-momentum tensor and the flavor flow vector at some position x (at τ = τ0) from the four-momenta of the bulk string segments. The time τ = τ0 is also taken to be the initial time for the hydrodynamic evolution. This seems to be a drastic simplification; the justification is as follows: we imagine a purely longitudinal scenario (described by flux tubes) till some proper time τ_flux < τ0. During this stage there is practically no transverse expansion, and the energy per unit of space-time rapidity does not change. This property should not change drastically beyond τ_flux, so we assume it continues to hold during thermalization between τ_flux and τ0. So although we cannot say anything about the precise mechanism which leads to thermalization, and therefore cannot compute the real T^μν, we expect at least the elements T^00 and T^0i to stay close to the flux-tube values, and we can use the flux-tube results to compute the energy density. Only T^ij will change considerably, but this does not affect our calculation much. We employ three-dimensional ideal hydrodynamics as described in [8], with some modifications to be discussed in the following. As in [8], we construct the equation of state as an interpolation

p = λ p_Q + (1 − λ) p_H,

where p_H is the pressure of a resonance gas, and p_Q the pressure of an ideal quark-gluon plasma, including bag pressure. We use an updated interpolation function λ, which provides an equation of state in agreement with recent lattice data [23], see Fig. 9 (a numerical sketch of the interpolation is given below). Apart from the new equation of state, we use the same procedure to obtain energy density and pressure from the string segments as described in [8]. However:

• Doing the calculation for Pb-Pb collisions at 2.76 TeV, we get too much elliptical flow (by 20-30%), a hint that one should include viscosity. Taking the usual small radii of the elementary flux tubes, we get extremely strongly fluctuating energy densities (in the transverse plane). Viscosity would quickly reduce these strong fluctuations.

• We try to mimic viscous effects by taking artificially large values of the flux-tube radii (we take 1 fm), in order to get smoother initial conditions. This has the effect of reducing the elliptical flow by 20-30%, as needed.

In Fig. 10, we show an example of such an initial energy density.
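For illustration, a minimal numerical sketch of such a two-phase interpolation. Everything below the interpolation formula is assumed: the smooth-switch profile for λ(ε), the bag constant, and the toy hadronic pressure are arbitrary choices, not the EPOS parametrization fitted to lattice data [23].

```python
import numpy as np

def eos_pressure(eps, p_Q, p_H, eps_c=1.0, width=0.3):
    """Interpolated pressure p = lambda*p_Q + (1-lambda)*p_H.
    ASSUMPTION: lambda(eps) is a smooth switch around a crossover
    energy density eps_c (GeV/fm^3)."""
    lam = 0.5 * (1.0 + np.tanh((eps - eps_c) / width))
    return lam * p_Q(eps) + (1.0 - lam) * p_H(eps)

B = 0.35                                  # GeV/fm^3, illustrative bag pressure
p_qgp = lambda eps: (eps - 4.0 * B) / 3.0  # ideal QGP with bag constant
p_had = lambda eps: eps / 6.0              # soft toy hadronic phase

for e in (0.2, 1.0, 5.0):                  # GeV/fm^3
    print(e, round(float(eos_pressure(e, p_qgp, p_had)), 4))
```

The point of the construction is only that the pressure smoothly turns from the soft hadronic branch into the stiff plasma branch; the actual shape of λ is what is tuned to the lattice results.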
VI. CENTRALITY DEPENDENCE OF THE MULTIPLICITY AND FAKE SCALING LAWS

As a very first check of the new approach, we consider the centrality dependence of the charged particle yield. Although very basic, there is quite some confusion about this quantity. Whereas hard processes scale roughly with the number of binary nucleon-nucleon collisions (in a simple geometrical picture), the centrality dependence of the charged particle yield (dominated by low p_t) is very different: it looks more like a scaling with the number of participating nucleons. This reminds us of the good old "wounded nucleon model", which has a physical meaning at low energies: the projectile and target nucleons are excited ("wounded"), and this is the main source of particle production. Amazingly, this approximate participant scaling holds also at higher energies; the centrality dependence at the LHC is almost identical to the one at the RHIC [24]. This is quite strange, since one might believe that at higher energies hard processes dominate, so one could expect more binary scaling. But this is not the case. What do we get in the multiple scattering approach? In Fig. 11, we plot the yield per participant, dn/dη(0)/N_part, as a function of centrality, where N_part is the number of participating nucleons. As in the data [25], we obtain a moderately increasing yield per participant. How can this happen? How can one get something like a wounded-nucleon result at the LHC? In the model we can of course easily check the relative contribution of particle production from remnant decays. In Fig. 12, we plot the relative fraction of particle production from remnants as a function of rapidity. As expected, remnant particle production is important at large rapidities, but the contribution at mid-rapidity is close to zero. So the physical mechanism of soft particle production is not a wounded-nucleon picture. In our approach, the source of particle production is the flux tubes, originating from elementary scatterings, which are in principle proportional to the number of binary nucleon-nucleon collisions. But there are important effects due to energy conservation and shadowing, discussed in detail in [8,9]. In our multiple scattering approach (which determines the initial conditions), the complete AA scattering amplitude is expressed in terms of elementary contributions, which are parton ladders, later showing up as strings. Each parton ladder is characterized by the light-cone momentum fractions x⁺_k and x⁻_k of the "ladder ends", which are the outer partons of the ladder, see Fig. 13 (transverse momenta are also considered, but not discussed here). It is a unique feature of our approach that we do a precise bookkeeping of energy and momentum: for each nucleon i (projectile or target), the initial energy-momentum has to be shared by all the ladders connected to this nucleon and the nucleon remnant, i.e.

Σ_k x±_k + x±_remn,i = 1,

where the sum runs over all ladders k connected to nucleon i, and x±_remn,i is the momentum fraction of the nucleon remnant i (a sketch of a sampling respecting this constraint is given below). These are very strong conditions, which affect the results substantially, see [9]. The most important consequence relevant for our discussion here is the fact that parton ladders leading to low p_t particles are suppressed compared to what is expected from binary scaling. We get a nuclear modification factor which is less than one at low p_t, as shown in Fig. 14 for central Pb-Pb collisions at 2.76 TeV: the figure shows the breaking of binary scaling at low p_t due to energy conservation, and the resulting "approximate participant scaling at low p_t" is a pure coincidence. So although particle production at central rapidities in very high energy collisions is dominated by binary scattering (providing the initial energy density), particle production does not increase proportionally to the number of binary scatterings, due to energy conservation. It is absolutely necessary that binary scaling is broken at low p_t, because this is simply an experimental fact. The usual explanation is a two-component picture: hard scattering at high p_t, which shows binary scaling, and a soft component which scales with the number of participants. In our picture, binary collisions determine everything, but certain binary collisions are suppressed due to energy conservation, leading to a deviation from R_AA = 1. We will discuss the p_t dependence of R_AA in the next section. Here we present for completeness the pseudorapidity distributions of charged particles for different centralities, see Fig. 15, where we compare our calculation with data from ATLAS [25].
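To illustrate the bookkeeping constraint, here is a minimal sketch; the sampling distribution is an arbitrary assumption (the real EPOS procedure uses quantum-mechanical weights and Markov chain sampling, as noted above), and only the sum rule itself is taken from the text.

```python
import random

def share_momentum(n_ladders, rng=random, alpha=0.5):
    """Toy sampling of light-cone momentum fractions x_k for the ladders
    attached to one nucleon, enforcing sum_k x_k + x_remn = 1.
    ASSUMPTION: each fraction is drawn from the remaining momentum with a
    bias toward small x; EPOS weights differ."""
    x, remaining = [], 1.0
    for _ in range(n_ladders):
        xk = remaining * rng.random() ** (1.0 / alpha)  # biased toward small x
        x.append(xk)
        remaining -= xk
    x_remn = remaining                                  # remnant takes the rest
    assert abs(sum(x) + x_remn - 1.0) < 1e-12
    return x, x_remn

fracs, remn = share_momentum(4)
print([round(f, 3) for f in fracs], round(remn, 3))
```

The key point is visible even in this toy: the more ladders a nucleon feeds, the smaller each fraction must be, which is precisely how energy conservation suppresses the contribution of additional binary collisions.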
VII. TRANSVERSE MOMENTUM DEPENDENCE OF PARTICLE YIELDS: IMPORTANCE OF HADRONIC RESCATTERING OF SOFT AND JET HADRONS

We first investigate particle production at low transverse momenta. In Figs. 16, 17, and 18, we show transverse momentum distributions of pions, kaons, and protons, for central and semi-peripheral Pb-Pb collisions at 2.76 TeV. We compare the full calculation, including hydrodynamic evolution and hadronic final-state cascade (solid lines), with the calculation without cascade (dashed lines) and with data from ALICE [26]. In order to understand the results, one has to recall that not only the "soft" particles produced from the fluid may interact; the jet particles having enough energy to escape the fluid may also interact with these soft particles. In particular, intermediate-p_t jet particles are candidates, because their formation time will produce them just in the high-density hadronic region. Let us discuss the consequences of these interactions, by comparing the solid and dashed curves in the figures:

• We see, in particular in Fig. 16, a strong reduction of protons at low p_t due to hadronic rescattering, which can be attributed to proton-antiproton annihilation among the soft hadrons.

• We also see a sizable increase of pion production at low p_t, which is due to inelastic rescatterings of jet hadrons with soft ones.

In Figs. 16, 17, and 18, we only show results up to 3 GeV/c, because this is the range where data on protons, pions, and kaons are available. It is nevertheless interesting to know the effect of jet-soft scattering beyond 3 GeV/c. We therefore plot in Fig. 19 the ratio of the full calculation to the one without hadronic cascade, for the p_t spectra of charged particles (dominated by pions) in central Pb-Pb collisions at 2.76 TeV, up to 20 GeV/c. There is a big effect at intermediate values of p_t, up to 20 GeV/c! In other words, jet-soft rescattering is very important in this range. Similar observations have already been made in [27] for AuAu collisions at the RHIC. The big effect of the jet-soft interaction can be understood by plotting 1 − R, with R being the ratio (with / without cascade) plotted in Fig. 19, together with the probability estimate P_inside to produce a jet (pre)hadron inside the fluid, see Fig. 20. These early produced hadrons go through the dense hadronic phase (of soft hadrons), and P_inside is therefore also a measure of the probability of having a jet-soft interaction. We see indeed that 1 − R follows P_inside at large p_t (in absolute terms, without any adjustment factors). Even though we are running out of statistics, it is clear from the above discussion that the effect goes well beyond 20 GeV/c. To compare the p_t spectra with experimental data, one often uses the so-called nuclear modification factor R_AA = (dn_AA/dp_t) / (N_coll dn_pp/dp_t), i.e. the ratio of the inclusive transverse momentum spectrum of particles in nucleus-nucleus scatterings over the proton-proton one, normalized by the number N_coll of binary collisions. Doing this, we obtain the curves shown in Fig. 21, where we plot our simulation results for charged particle production together with the data from ALICE [28].
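The R_AA used for Fig. 21 is the standard per-bin ratio just defined; a minimal sketch of its computation from binned spectra (the histogram inputs and the toy numbers are assumptions):

```python
import numpy as np

def r_aa(dn_aa, dn_pp, n_coll):
    """Nuclear modification factor per pt bin:
    R_AA = (dN_AA/dpt) / (N_coll * dN_pp/dpt)."""
    dn_aa = np.asarray(dn_aa, float)
    dn_pp = np.asarray(dn_pp, float)
    return dn_aa / (n_coll * dn_pp)

# toy spectra (arbitrary units) for a central event class
pt_bins = np.array([0.5, 1.0, 2.0, 5.0, 10.0])              # GeV/c
print(r_aa([400, 300, 120, 8, 1.0],                          # AA yields
           [0.9, 0.8, 0.5, 0.05, 0.008],                     # pp yields
           n_coll=1600))
```

In this language, the "fake participant scaling" of the previous section is simply the statement that R_AA < 1 already at low p_t.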
We show the full model, including hydrodynamic evolution and final-state hadronic cascade [21], and its jet contribution from the string segments which escaped from the bulk and did not rescatter. We also show the bulk contribution (originating from the hadronized fluid) from the calculation without final-state hadronic cascade. The two latter curves do not add up to give the full result; the difference is due to the "secondary interactions" discussed earlier:

• Fluid-jet interaction, pushing the jet hadrons at intermediate p_t to higher transverse momenta.

• Jet-soft interactions between jet hadrons and soft ones from fluid freeze-out.

There are also soft-soft interactions (among soft hadrons from fluid freeze-out), which are important for baryon yields, but not so much for the charged-particle results. From the above discussion it is clear that even for elementary quantities such as charged particle yields, it is difficult to make any quantitative analysis without considering these "secondary interactions". We sketch the different interactions in Fig. 22.

VIII. DIHADRON CORRELATIONS IN PB-PB AT 2.76 TEV

Our prescription for bulk-jet separation and interaction should also strongly affect dihadron correlations, which provide much more information than simple spectra. With all parameters (k_Eloss, E0, τ_form, p_diq) fixed from the considerations in the last section, we now compute dihadron correlation functions defined as

C(∆η, ∆φ) = S(∆η, ∆φ) / M(∆η, ∆φ),

where S is the number of pairs in real events, and M the number of pairs in mixed events. As an example, we show in Fig. 23 the correlation function for the p_t of the trigger particle (p_t^trigg) in the interval 4.5-5.5 GeV/c and the p_t of the associated particle (p_t^assoc) in the range 2-2.5 GeV/c, in the 0-10% most central Pb-Pb collisions at 2.76 TeV. Besides the jet peak at ∆φ = 0 and ∆η = 0, we clearly identify a completely flat ridge over the full range in ∆η at ∆φ = 0. The reason for the ridge structure is the azimuthal asymmetry of the initial energy density (see Fig. 10). Although the energy density is biggest around space-time rapidity η_s = 0 and drops fast towards forward and backward η_s, the shape of the asymmetry is preserved. This finally leads to an asymmetric flow, again very similar at different values of η_s, and this "makes" the long-range correlation at ∆φ = 0. The smooth η_s dependence of the energy density in our approach (see Fig. 24) is due to the fact that the energy density is calculated from flux tubes, and these flux tubes have to be treated correctly as continuous longitudinal objects (as we do). In an earlier version, we treated flux tubes via randomly (in η_s) distributed flux-tube segments, obtained from a string fragmentation procedure. This gives a bumpy structure in η_s: the ridge is not flat any more but has a Gaussian shape! So the flux-tube basis is an essential ingredient for obtaining a perfect ridge shape, as observed in the data.
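For completeness, a minimal sketch of how the correlation function C = S/M defined above is built from particle lists; the binning choices and the pair-list interface are assumptions, not the analysis code of the experiments.

```python
import numpy as np

def corr2d(same_pairs, mixed_pairs, n_eta=25, n_phi=24, eta_max=2.0):
    """C(d_eta, d_phi) = S/M from pair histograms.
    Each pair is ((eta1, phi1), (eta2, phi2)); 'mixed_pairs' combines
    particles from different events and serves as acceptance reference."""
    edges_eta = np.linspace(-eta_max, eta_max, n_eta + 1)
    edges_phi = np.linspace(-np.pi / 2, 3 * np.pi / 2, n_phi + 1)

    def fill(pairs):
        h = np.zeros((n_eta, n_phi))
        for (eta1, phi1), (eta2, phi2) in pairs:
            deta = eta1 - eta2
            # wrap d_phi into [-pi/2, 3*pi/2)
            dphi = (phi1 - phi2 + np.pi / 2) % (2 * np.pi) - np.pi / 2
            i = np.searchsorted(edges_eta, deta) - 1
            j = np.searchsorted(edges_phi, dphi) - 1
            if 0 <= i < n_eta and 0 <= j < n_phi:
                h[i, j] += 1
        return h

    S = fill(same_pairs)    # same-event pairs
    M = fill(mixed_pairs)   # mixed-event pairs
    return np.where(M > 0, S / M, 0.0)
```

Dividing by the mixed-event histogram removes trivial acceptance structure, so that flat ridges and harmonic modulations in ∆φ can be read off directly.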
In Fig. 25, we show a correlation function for p_t^trigg in the interval 5.5-8.0 GeV/c and p_t^assoc in the range 2-2.5 GeV/c, again in the 0-10% most central Pb-Pb collisions at 2.76 TeV. Although the trigger p_t is too large to originate from freeze-out (from the flowing fluid), one still observes a ridge structure, which is due to the fluid-jet interaction. Let us consider again the situation of an initial azimuthal anisotropy in the energy density which is transported into a corresponding anisotropy in the flow, as discussed earlier. We sketch in Fig. 26 the (somewhat exaggerated) situation of a triangular transverse flow pattern with maximal flow around φ = 0°, 120°, and 240° (with respect to the y-axis). The flow maxima are indicated by blue arrows. Again it is very important that this flow pattern is (not necessarily in magnitude, but in shape) very similar at different longitudinal positions, indicated in the figure by the two transverse planes P and P′, corresponding to two different space-time rapidities η_s and η′_s. A soft hadron (S) produced at η_s at the fluid surface close to the position of maximal flow (for example at φ = 0°) will be boosted by the latter and therefore carry information about this flow. A jet hadron (J) produced at η′_s at the same angle (φ = 0°) close to the surface will pick up a quark and an antiquark, both carrying flow, which adds the corresponding transverse momentum to the p_t of the string segment (red element in the figure). It is the same flow which affects the jet hadron at η′_s and the soft hadron at η_s, and this creates the dihadron correlation at ∆φ = 0, the "ridge". The correlation remains visible even when the flow contribution to the jet hadron is only 10%; this is why the correlation is still present even for trigger transverse momenta beyond 10 GeV/c. We will now discuss some examples of semi-peripheral Pb-Pb collisions at 2.76 TeV. In Fig. 27, we show a correlation function for p_t^trigg up to 4.5 GeV/c and p_t^assoc in the range 1-1.5 GeV/c, in the 40-50% most central Pb-Pb collisions at 2.76 TeV. It can be clearly seen from the figure that the elliptical flow (∼ cos(2∆φ)) is dominant, besides the jet peak at ∆η = 0, ∆φ = 0. But also here higher-order harmonics (∼ cos(n∆φ)) contribute, as we will discuss later. In the jet peak, we clearly see a very similar elliptical flow structure as in the previous example. The correlation functions are essentially flat as a function of ∆η, for large ∆η. One therefore gets complete information about the long-range correlations by integrating over ∆η,

C(∆φ) ∝ ∫_{A ≤ |∆η| ≤ B} C(∆η, ∆φ) d∆η,

where we use A = 0.8 and the maximum B = 2. This function agrees perfectly with its Fourier decomposition, using the first five terms. This is very convenient, because it allows us to discuss the features of the correlation functions for different choices of p_t^trigg and p_t^assoc by simply considering the Fourier coefficients. In Figs. 29 and 30, we plot some coefficients V_n∆ as a function of p_t^trigg for different intervals of p_t^assoc; the value of p_t^trigg is actually the mean value in a certain interval, the largest interval being 8-15 GeV/c. We compare our simulation (stars) with the results from ALICE [5] (circles). In the semi-peripheral collisions of Fig. 29, we clearly see the dominance of elliptical flow: the n = 2 coefficients are by far the largest. Nevertheless, the higher harmonics contribute as well. We see in all cases an increase of the coefficients with p_t^trigg up to 2-3 GeV/c, beyond which the flow correlation from the fluid dies out. But V_2∆ does not at all drop to zero at high p_t, because here the correlations between soft and jet particles come into play: the jet particles which suffered a push by the fluid, as discussed earlier (fluid-jet interaction). The fluid transfers at most a few GeV/c of transverse momentum to the jet, but this is easily visible in the correlation (even at 20 GeV/c). The results for semi-peripheral collisions are very robust and depend little on model parameters. The most important ingredient is the elliptical initial shape of the energy density, given by the nuclear geometry. The effects depend of course on the flow velocity at freeze-out, but this is not a parameter but itself a robust result (with a maximum around 0.7c). Finally, the results depend on the jet formation time, which should be around 1 fm/c (the value we actually took, without really attempting a fine-tuning).
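A minimal sketch of extracting the V_n∆ coefficients from the projected correlation, and of the factorization check V_n∆ = v_n(p_t^trigg) v_n(p_t^assoc) mentioned in the introduction; the array layout is an assumption.

```python
import numpy as np

def v_n_delta(c_dphi, dphi_centers, n_max=5):
    """Fourier coefficients V_nD = <cos(n * d_phi)>, weighted by C(d_phi)."""
    w = c_dphi / c_dphi.sum()
    return np.array([np.sum(w * np.cos(n * dphi_centers))
                     for n in range(1, n_max + 1)])

def factorized_vn(Vnd, i_ref=0):
    """Single-particle v_n(pt) under the factorization hypothesis:
    v_n(pt) = V_nD(pt, pt_ref) / sqrt(V_nD(pt_ref, pt_ref)).
    Vnd is assumed to be a matrix over (pt_ref bins, pt bins) for one n."""
    return Vnd[i_ref, :] / np.sqrt(Vnd[i_ref, i_ref])
```

If the extracted v_n(p_t) is independent of the chosen reference bin, the factorization holds; the point made in the text is that this can be true even when the high-p_t particle is a fluid-pushed jet hadron rather than a fluid hadron.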
The V_2∆ coefficients for central Pb-Pb collisions (Fig. 30) are first of all smaller compared to the semi-peripheral ones (note the different scales in Figs. 29 and 30), simply because the dominant effect of a large initial ellipticity from the geometry is absent; here the ellipticity is purely random. Apart from this, we observe the same features as for the semi-peripheral collisions: an increase with transverse momentum up to 2-3 GeV/c, then a decrease. A big difference in central Pb-Pb collisions compared to semi-peripheral ones is the fact that the higher harmonics, and in particular V_3∆, contribute substantially, because here both the elliptical and the triangular initial shapes are of random origin (and therefore comparable), whereas for more peripheral collisions the geometrical elliptical shape dominates everything else. In Fig. 30 it seems that our calculation underestimates V_2∆, in particular for the largest p_t^assoc range (2-2.5 GeV/c). Fortunately, similar data exist from CMS [4], for p_t^assoc in the range 2-4 GeV/c in the 0-5% most central Pb-Pb collisions. In Fig. 31, we plot the corresponding coefficients V_2∆ and V_3∆. Here we slightly overpredict the data!

IX. V2 AND FORMATION TIMES

Whereas dihadron correlations provide the most complete information about particle production, in particular concerning the role of the "flowing" fluid, one may get the essential information by considering the elliptical flow coefficient v_2 of single-particle production, defined as

v_2 = ⟨cos[2(φ − φ_Ref)]⟩,

where φ is the azimuthal angle of a particle and φ_Ref some reference plane. In [6], for the particles in the forward (backward) η hemisphere, the reference plane is the event-plane angle φ_backward (φ_forward), obtained from counting all particles in the opposite hemisphere. The angles are obtained from

φ_forward/backward = (1/2) arctan( ⟨sin 2φ⟩ / ⟨cos 2φ⟩ ),

where the average is done in the forward / backward η hemisphere within 3.2 < |η| < 4.8. The v_2 coefficient is then computed as

v_2 = ⟨cos[2(φ − φ_backward/forward)]⟩.

The resolution correction is taken care of by dividing this expression by R = √⟨cos(2∆φ)⟩, with ∆φ = φ_backward − φ_forward, as in Ref. [6]. Relating particles to the event plane of the opposite hemisphere, we have a kind of long-range correlation, though less clean than using dihadron correlations with a ∆η > A requirement. But as mentioned before, the essential features can be seen as well. In Fig. 32, we plot v_2 as a function of the transverse momentum for different centralities in Pb-Pb collisions at 2.76 TeV. The magnitude of the elliptical flow coefficient increases at low p_t, reaches a maximum around 2-3.5 GeV/c, and then drops slowly at large p_t. The behavior at high p_t is the most interesting aspect: even at 10 GeV/c there is a significant amount of elliptical flow, due to the fluid-jet interaction, which pushes jet particles in the direction of the collective flow at the freeze-out surface (and this effect will continue up to even higher p_t, but we are simply running out of statistics). The high-p_t behavior is closely related to the formation-time discussion we had earlier. The non-vanishing v_2 at high p_t is mainly due to fluid-jet interactions, so the values should be related to the estimated probability P_inside of forming the jet hadron inside the fluid, which is equivalent to the fluid-jet interaction probability. In Fig. 33, we show P_inside (multiplied by an arbitrary factor), together with the calculated and experimental v_2 already shown in Fig. 32; v_2 is indeed proportional to the fluid-jet interaction probability. To compute P_inside according to eq. (5), we use cτ_form = 1 fm, mc² = 1 GeV, r_Pb = 6.5 fm, and b = 11.5 fm (50-60%).
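A minimal sketch of the event-plane v_2 just described; it implements the standard subevent method, with the caveat that in a real analysis the resolution factor is averaged over many events before taking the square root (here a single event is shown for brevity).

```python
import numpy as np

def event_plane_angle(phis):
    """Second-order event-plane angle from a set of azimuthal angles."""
    return 0.5 * np.arctan2(np.mean(np.sin(2 * phis)),
                            np.mean(np.cos(2 * phis)))

def v2_event_plane(phi_mid, phi_fwd, phi_bwd):
    """v2 of 'mid' particles w.r.t. the opposite-hemisphere event plane,
    with the two-subevent resolution R = sqrt(<cos 2(Psi_b - Psi_f)>)."""
    psi_fwd = event_plane_angle(np.asarray(phi_fwd))
    psi_bwd = event_plane_angle(np.asarray(phi_bwd))
    R = np.sqrt(abs(np.cos(2 * (psi_bwd - psi_fwd))))   # event-averaged in practice
    return np.mean(np.cos(2 * (np.asarray(phi_mid) - psi_bwd))) / R
```

Because the reference plane sits in the opposite η hemisphere, any non-flow short-range contribution is suppressed, which is why this observable probes the same long-range physics as the dihadron V_2∆.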
Having a finite P_inside is a necessary condition to get a substantial v_2 at some large value of p_t, but not a sufficient one. Let us therefore estimate the effect of the fluid-jet interaction on v_2. One may consider a toy model, with soft particle emission due to flow into some preferred azimuthal direction, say φ_flow. Let us assume a jet hadron gets pushed by the fluid in the direction of φ_flow, which corresponds to adding some p_t^soft to the "hard" transverse momentum p_t^hard of the flux-tube segment. Without loss of generality, we may set φ_flow = 0. The total transverse momentum of the jet hadron is then

p_t (cos ψ, sin ψ) = p_t^hard (cos φ, sin φ) + p_t^soft (1, 0),

with φ the azimuthal angle of the hard segment and ψ that of the resulting hadron. Assuming a flat φ distribution, the probability distribution for ψ is (2π)⁻¹ dφ/dψ, which in the case p_t^soft ≪ p_t^hard is given as

f(ψ) ≈ (2π)⁻¹ (1 + (p_t^soft/p_t^hard) cos ψ),

which should only be considered for −π/2 < ψ < π/2, since ψ and φ_flow have to correspond to the same hemisphere. We get anisotropies of the order of p_t^soft/p_t^hard, which means that at transverse momenta around 10 GeV/c, a soft push as little as 1 GeV/c can produce anisotropies of the order of 10%.

X. COALESCENCE

For many years, different models have been employed to treat particle production at different transverse momentum scales. The so-called intermediate range from 2 to 6 GeV/c has been the domain of coalescence models [29-33], where hadrons are produced by recombining quarks from the plasma, to be distinguished from "fragmentation" of partons. In our picture, there are certain aspects which give similar results as coalescence, but it is not a coalescence approach. Already the notion of "intermediate p_t" extends to, say, 20 GeV/c and not 6. The corresponding transverse momentum of the hadrons does not originate from plasma quarks and antiquarks; the main part comes from the original flux tube. Whereas usual flux-tube breaking in vacuum creates quark-antiquark pairs via a tunneling process, the fluid-jet interaction amounts to replacing these quark-antiquark pairs by partons from the plasma. So our jet hadrons finally carry "some" transverse momentum from fluid partons, but only a small fraction. This is, however, enough to create for example anisotropies in dihadron correlations. It will also strongly affect baryon-to-meson ratios, as we are going to discuss in a separate publication.

XI. SUMMARY

We presented a theoretical scheme which accounts for bulk matter, jets, and the interaction between the two. The criterion for bulk-jet separation is based on parton energy loss. But in addition to the latter mechanism, there are very important new phenomena which have not been discussed so far: the interaction between jet hadrons and soft ones (from fluid freeze-out), and the interaction between the fluid and jet hadrons at the moment of the creation of the latter. Particle production between zero and (at least) 20 GeV/c is affected. We understand quantitatively the azimuthal anisotropies in single-particle production and in dihadron (long-range) correlations at large values of p_t.
Fluorescence in situ hybridization in combination with the comet assay and micronucleus test in genetic toxicology

The comet assay and the micronucleus (MN) test are widely applied in genotoxicity testing and biomonitoring. While the comet assay permits measuring the direct DNA-strand-breaking capacity of a tested agent, the MN test allows estimating the induced amount of chromosome and/or genome mutations. The potential of these two methods can be enhanced by combination with fluorescence in situ hybridization (FISH) techniques. FISH plus comet assay allows the direct recognition of the targets of DNA damage and repair. FISH combined with the MN test is able to characterize the occurrence of different chromosomes in MN and to identify potential chromosomal targets of mutagenic substances. Thus, the combination of FISH with the comet assay or the MN test has proved to be a promising technique for evaluating the distribution of DNA and chromosome damage across the entire genome of individual cells. The FISH technique also permits studying comet and MN formation, which is necessary for the correct application of these methods. This paper reviews the relevant literature on the advantages and limitations of the application of the Comet-FISH and MN-FISH assays in genetic toxicology.

Introduction

A considerable battery of assays exists for the detection of different genotoxic effects of compounds in experimental systems in vitro, or for investigations of exposure to genotoxic agents in vivo. Single cell gel electrophoresis, called shortly the 'comet assay', as well as the micronucleus (MN) test, are broadly applied test systems to check for genotoxic effects. In addition to classical cytogenetic methods for scoring chromosomal aberrations, fluorescence in situ hybridization (FISH) is used in genetic toxicology for the analysis of chromosome damage, with increased efficiency and specificity in identifying certain kinds of chromosomal aberrations. The comet assay, the MN test and FISH are presented in the International Programme on Chemical Safety (IPCS) guidelines among the most often studied genotoxicity endpoints for the monitoring of genotoxic effects of carcinogens in humans [1]. Recently, the FISH technique was successfully combined with the comet and MN assays for simultaneously measuring the overall level of DNA and chromosome damage and localizing specific genome domains within an individual cell.

Principles and application of the comet assay

The comet assay is a rapid and very sensitive fluorescence microscopy-based method for measuring DNA damage, protection and repair at the level of individual cells [2-7]. In this assay, cells are embedded in agarose, lysed and then electrophoresed. Negatively charged broken DNA strands exit from the lysed cell under the electric field and form a comet with a "head" and a "tail". The amount of DNA in the tail, relative to the head, is proportional to the amount of strand breaks. The limit of sensitivity of the comet assay is approximately 50 strand breaks per diploid mammalian cell [8]. It permits revealing mainly early, still repairable, moderate DNA damage and can be used in virtually any eukaryotic cell. In order to achieve various objectives, different modifications of the comet assay have been developed. In its alkaline version, which is the one mainly used, DNA single-strand breaks, DNA double-strand breaks, alkali-labile sites, and single-strand breaks associated with incomplete excision repair sites cause increased DNA migration [9].
In the neutral variant, the DNA molecule itself is preserved as a double-stranded structure, which enables uncovering double-stranded DNA breaks [10,11]. DNA-DNA or DNA-protein crosslinkage, leading to decreased DNA migration, can be identified by the failure to detect single-strand breaks that are known to be present [12]. Oxidized purines and pyrimidines can be revealed by incubating lysed cells with base damage-specific endonucleases before electrophoresis [13]. The comet assay has manifold applications in fundamental research on DNA damage and repair, in genotoxicity testing, in human biomonitoring and molecular epidemiology, and in ecotoxicology [5,14,15].

Principles and application of MN test

The MN test is one of the preferred methods for assessing DNA damage at the chromosome level. It permits measuring both chromosome loss and chromosome breakage [16,17]. Metaphase analysis provides the most detailed analysis of numerical and structural chromosome aberrations; however, it is very time consuming and needs highly skilled personnel. The MN assay was developed as a simpler short-term screening test and is now accepted as a valid alternative to the chromosome aberration assay. In this method, chromosome aberrations are detected indirectly via chromatin loss from the nucleus, leading to MN in the cytoplasm of the cell [18,19]. MN are expressed only in dividing cells. Adding cytochalasin-B, an inhibitor of actin polymerization that prevents cytokinesis, to cell cultures permits recognizing cells that have completed one nuclear division by their binucleated appearance [20,21]. The cytokinesis-block micronucleus (CBMN) assay allows higher precision, because the data obtained are not affected by altered cell division kinetics [22]. Recently the CBMN assay has in fact evolved into a "cytome" method for measuring chromosomal instability, DNA repair capacity, nuclear division rate, mitogenic response, and the occurrence of necrotic and apoptotic cells [23]. The MN test has become one of the most commonly used methods in genotoxicity testing and in the biomonitoring of populations at risk [15,24,25]. This test has been recommended for monitoring in product development and in regulatory tests of new drugs [26].

Principles and application of FISH technique

FISH is a powerful technique for the localization of specific DNA sequences within interphase chromatin and metaphase chromosomes, and for the identification of both structural and numerical chromosome changes. The detection of nucleotide sequences on the examined DNA molecule consists in hybridizing a DNA probe to its complementary sequence on chromosomal preparations. Probes are labeled either directly, by incorporation of fluorescent nucleotides, or indirectly, by incorporation of reporter molecules that are subsequently detected by fluorescent antibodies. Probes and targets are finally visualized in situ by microscopy analysis. Protocols for the FISH technique and the wide variety of current applications of FISH technology are presented in [27-33]. Structural and numerical chromosomal aberrations are considered important biological endpoints in genotoxicity studies. FISH with chromosome-specific DNA probes has increased the sensitivity and ease of detecting chromosomal aberrations, especially stable ones. FISH is now being increasingly utilized in genetic toxicology for the detection of chromosome damage induced in vitro and in vivo by chemical and physical agents [34-37].
Overcoming of limitations of comet and MN assays by FISH

Compared with other assays, the analysis of comets and MN brings several advantages, including speed and ease of analysis, no requirement for metaphase cells in the MN test, and no need for dividing cells in the comet assay. However, results from the comet assay alone reflect only the level of overall DNA damage in single cells. Similarly, the MN test alone does not even permit distinguishing MN containing whole chromosomes from MN containing chromosome fragments. The introduction of FISH [27] into the comet and MN assays has added new capabilities and enhanced the resolution and validity of these two methods. FISH supplements the comet assay with the ability to recognize genome regions of interest in comet images. Thus, Comet-FISH is applied for the analysis of damage and repair of different genes, chromosomes and chromosome regions compared to whole genomic DNA within the comet, for the visualization of genomic loci in the three-dimensional organization of chromatin, and for the elucidation of the mechanisms of comet formation and DNA organization in comets. With the MN test combined with FISH, the genetic contents of the MN can be characterized. The application of FISH probes allows distinguishing MN originating either from chromosome loss or from breakage, and determining the involvement of specific chromosomes and chromosome fragments in MN formation. Using MN-FISH, the clastogenic or aneugenic action of different factors, the chromosomal origin of spontaneous and mutagen-induced MN, and the relative contribution of all chromosomes to MN formation can be studied. Therefore, FISH has been recognized as a valuable addition to the comet and MN assays [38,39]. The simultaneous use of these methodologies enables a higher sensitivity for the adequate hazard assessment of mutagens and will lead to a better understanding of the biological mechanisms involved. Literature data concerning the combined application of FISH with the comet and MN assays in genetic toxicology are discussed in the following.

Methodological aspects of Comet-FISH

Comet-FISH was first applied in human cells to compare the localization of specific chromosomal domains in native interphase nuclei with their distribution in comet head and tail after electrophoresis [40]. As the heat denaturation necessary for FISH is impossible within a comet fixed in low melting point agarose, chemical denaturation of the DNA with alkali solutions was introduced [40] and applied in human leukocytes and the cell line HT1376 [41]. Soon thereafter the term "Comet-FISH" was introduced [42]. Two versions of Comet-FISH, one based on the alkaline and one on the neutral version of the comet assay, were developed subsequently [38,43]. The reliability of Comet-FISH was confirmed in several experiments. It could be shown that Comet-FISH yields results comparable to metaphase-based molecular cytogenetic approaches with respect to hybridization sensitivity and reproducibility [44], and that the proportion of DNA elements from specific chromosomal domains in comet heads and tails corresponded to the expected localization based on the distribution of cleavage sites for specific endonucleases [45]. Various DNA probes have been successfully applied with the comet assay for the analysis of damage and repair of specific genome loci (genes, chromosomes and chromosome regions). The size of the region of interest investigated by Comet-FISH varies from a gene [46] to a whole chromosome [44].
In the comet assay, 10 to 800 kb fragments are analyzed, and fragments smaller than 10 kb might get lost in the agarose gel [44]. However, DNA probes smaller than 10 kb cannot be used in routine FISH either. The results of the application of various FISH probes with the comet assay are summarized in Table 1. Microscopic evaluation of Comet-FISH images includes recording the number of probe signals and their localization on the comet. The position of the fluorescence signals indicates whether the sequence of interest lies within an undamaged (head) region of DNA, or within or in the vicinity of a damaged (tail) region. Repositioning of gene-specific signals from tail to head over the incubation period provides evidence for repair of all the lesions within and around the locus of interest [47]. The level of DNA damage and repair in specific domains can be expressed as the percentage of FISH signals present in the head vs. the tail [48]. However, doing FISH on comet assay preparations differs from routine FISH mainly in that it is performed not on flattened interphase nuclei fixed to a glass slide but on three-dimensionally (3-D) preserved ones. This 3-D state reflects to some degree the real organization of chromatin in the living cell. On the other hand, the 3-D shape also leads to serious difficulties in the visualization and scoring of the signals [47]. Thus, the analysis of Comet-FISH images is not easily automated, since individual analysis of each comet is necessary to determine the distribution of signals between head and tail. Nevertheless, it is expected that automated systems for scoring at least certain kinds of FISH signals might be elaborated in the near future [47].

FISH in elucidation of comet formation

Although the comet method is very popular, there is still no agreement on how the comet tail is formed. Understanding comet formation and the factors influencing this process is necessary for the correct application of the comet assay in genetic toxicology. In the comet assay, cells are electrophoresed in such a way that fragmented and relaxed DNA migrates towards the anode further than intact DNA, producing a shape resembling a comet. Relaxation of DNA loops was proposed to be the primary basis for comet formation under neutral [11,49] and alkaline [4] conditions. The comet tails obtained after neutral electrophoresis seem to consist of DNA loops which are attached to structures in the nucleus, since the DNA cannot move in the second direction after two-dimensional electrophoresis. Under alkaline electrophoresis conditions, however, the entire comet tail moves in the new electrophoresis direction. Thus, it appears that the alkaline comet tails consist of free DNA fragments [50]. The application of FISH has allowed explaining further aspects of comet formation. An important question is whether the complementary strands within a loop migrate into the tail independently or together upon alkaline denaturation and electrophoresis. Comet-FISH with a probe for the p53 gene was applied to cells that had been damaged by ionizing radiation. The results obtained favored the idea that both strands in a loop migrate into the tail, but separately, even in cases in which one strand is broken and the complementary strand is intact [51].
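As an illustration of the head-versus-tail scoring described in the methodological section above, a minimal sketch follows. The record format (signal positions along the electrophoresis axis) and the head boundary are assumed inputs; in practice scoring is done by eye or with dedicated image-analysis software.

```python
def head_tail_fractions(signal_positions, head_radius, head_center=0.0):
    """Score FISH signals on a comet image projected onto the
    electrophoresis axis: a signal counts as 'head' if it lies within
    head_radius of the head center, otherwise as 'tail'.
    Returns the percentage of signals in head and in tail."""
    head = sum(1 for x in signal_positions
               if abs(x - head_center) <= head_radius)
    tail = len(signal_positions) - head
    total = max(len(signal_positions), 1)          # guard against empty input
    return 100.0 * head / total, 100.0 * tail / total

# e.g. four locus-specific signals per comet, positions in micrometers
print(head_tail_fractions([2.0, 5.0, 38.0, 55.0], head_radius=15.0))
```

Comparing these percentages between treatment time points is the basis for the repair statements cited above (signals moving back from tail to head).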
Before the era of Comet-FISH it was generally accepted that, when single cell gel electrophoresis is performed, undamaged DNA remains in the comet head and the fraction of damaged DNA moves to the comet tail. FISH experiments indicated, however, that besides the presence of breaks there are other factors determining the ability of particular DNA sequences to migrate into the tail. These include the nature of the damage and the organization of chromatin [47]. DNA from regions closely and extensively associated with the nuclear matrix, such as replicating DNA, does not move into the tail in the standard alkaline comet assay [52]. Furthermore, fragments of gene-poor chromosomes were found more frequently in the comet tail of UV-A-irradiated lymphocytes than fragments of gene-rich ones. It was suggested that chromosomes with high gene density are more resistant to DNA-damaging agents [44]. An alternative explanation would be the association of gene-rich regions with sites of transcription, which are located on the surface of the nuclear matrix, i.e. in the head of the comet [44]. Similarly, the inability of the DHFR gene and of one of the ends of the MGMT gene to leave the comet head in CHO cells, even when the DNA is released from its supercoiling, was explained by their attachment to a 'matrix-associated region' [46]. Thus, the data obtained with Comet-FISH have contributed to the understanding of comet formation and to the correct interpretation of comet assay results.

Comet-FISH applications in genetic toxicology

The combined application of FISH with the comet assay offers the unique possibility to evaluate gene or chromosome damage and repair relative to the overall genome, and to compare the repair rates of individual genes. This methodology permits detecting mutagen-induced site-specific breaks in DNA regions that are relevant for the development of various diseases, and recognizing genome targets of the action of environmental genotoxic agents. Recognition of the sites of damage promotes the interpretation of genotoxic effects induced in vitro and in vivo, and the understanding of their biological impact.

FISH in study of gene damage and repair

Comet-FISH was found to be suitable for the detection of DNA damage induced by genotoxic compounds, e.g. in colon cancer-relevant genes (TP53, KRAS, APC) in primary human colon and colon adenoma LT97 cells; here this approach greatly facilitated studies on the effects of nutrition-related carcinogens [53-56]. Comet-FISH revealed that strand breaks in the human tumor suppressor gene p53 are repaired very quickly compared with total DNA in RT4 and RT112 bladder carcinoma cells after γ-irradiation [57] and in mitomycin C-treated RT4 cells [51]. Preferential repair of the p53 locus was also shown in a panel of malignant breast cancer cell lines (MCF-7, MDA-MB-468 and CRL-2336) [58] and in normal lymphocytes [46] following genotoxic treatment. Comet-FISH is also an effective alternative for measuring transcription-coupled repair (TCR), since the comet assay constitutes an extremely sensitive test for the detection of DNA damage and repair by genotoxic agents at subtoxic, physiologically relevant exposures. The application of Comet-FISH for the analysis of TCR is discussed elsewhere [48]. The localization and repair rates of the DHFR and MGMT genes in CHO cells, and of the p53 gene in human cells, treated with H2O2 or a photosensitizer plus light to induce oxidative damage, were monitored using Comet-FISH with oligonucleotide probes for the 5' and 3' regions of the genes investigated. CHO cells showed preferential repair of oxidative damage in the MGMT gene. Strand breaks in the human p53 gene were repaired more rapidly than total DNA. This approach can be applied to other genes treated with a range of damaging agents [46]. It has been shown that damage of specific genes can be applied as a biomarker of genotoxic exposure.

Table 1. Applications of various FISH probes with the comet assay:
- Gene-specific probes (e.g. Ret, Abl, Trp53): analysis of damage of the Ret, Abl and Trp53 genes as a biomarker of X-radiation exposure in vivo [59]; spatial distribution of chromosome-specific DNA sequences [40].
- Chromosome locus-specific probes (centromere-, telomere- and region-specific), detecting DNA damage within the vicinity of the locus of interest: analysis of leukaemia-specific chromosome damage [61]; analysis of the sensitivity of telomeres toward anticancer drugs [63-65].
- Whole chromosome painting (WCP) probes, detecting DNA damage within the chromosome of interest: distribution of DNA damage in the genome [44,60]; genetic alterations in carcinogenesis of the upper aerodigestive tract [62].
- Selected probes for different genomic regions: transcription-coupled DNA repair [48]; elucidation of comet formation [44-46,51,52].

Fragmentation of the Ret, Abl and Trp53 genes in the Comet-FISH
It has been shown, that damage of specific genes can be applied as biomarkers of genotoxic exposure. Ret, Ab1 and Trp53 genes fragmentation in Comet-FISH Analysis of damage of genes Ret, Ab1 and Trp53 as biomarker of X-radiation exposure in vivo [59] Spatial distribution of chromosome-specific DNA sequences [40] Chromosome locus-specific (centromere-, telomere-and regionspecific) DNA damage within the vicinity of the locus of interest Analysis of leukaemia-specific chromosome damage [61] Analysis of sensitivity of telomeres toward anticancer drugs [63][64][65] WCP DNA damage within the chromosome of interest Distribution of DNA damage in genome [44,60] Genetic alterations in carcinogenesis of the upper aerodigestive tract [62] Selected probes Different genomic regions Transcription-coupled DNA repair [48] Elucidation of comet formation [44][45][46]51,52] Hovhannisyan Molecular Cytogenetics 2010, 3:17 http://www.molecularcytogenetics.org/content/3/1/17 assay was proposed as in vivo biomarkers of X-radiation exposure in C57BL/6 and CBA/J mice. At the same time the comet assay alone, when applied to the same specimens, produced no significant results because of interindividual variability [59]. FISH in study of chromosome damage Comet-FISH in UV-A-irradiated human lymphocytes with whole chromosome painting (wcp), centromere-, telomere-and region-specific probes demonstrated comparably high sensitivity of chromosomes X and 8 towards UV-A-induced DNA damage [60]. Studying 12 human chromosomes with wcp probes an inverse correlation between chromosomes gene density and their sensitivity towards UV-A-radiation was revealed [44]. Leukaemia-specific chromosome damage (breakage at 5q31 and 11q23) in TK6 lymphoblastoid cells exposed to melphalan, etoposide or hydroquinone was studied using Comet-FISH [61]. Comet-FISH analysis of selected genetic alterations, related with risk factors in carcinogenesis of the upper aerodigestive tract revealed significantly higher benzo[a]pyrene-diolepoxide-induced damage levels in chromosomes 3, 5 and 8 compared with chromosome 1 in epithelia cells of patients with squamous cell carcinoma [62]. In our experiments Comet-FISH with telomere-specific peptide nucleic acid (PNA) probes was applied for measuring telomeric DNA sensitivity toward drugs used in cancer therapy in normal human leukocytes [63,64] and in tumor cell lines CCRF-CEM, CHO and HT1080 [65]. Distribution of telomere signals in head and tail of comet, obtained from BLM-treated human leukocytes is presented in Figure 1. Human leukocytes showed preferential cisplatin-DNA crosslinks formation in telomeres and telomere-related regions. Telomeres in CHO and CCRF-CEM cells were about 2-3 times more sensitive towards BLM than global DNA, while in HT1080 telomeres were less fragile than total DNA. The higher fragility of telomeres compared to the total DNA in non treated human leukocytes [64] reflects findings about concentration of telomeres mainly near the nuclear membrane [40]. MN-FISH Methodological aspects of MN-FISH FISH analysis of MN is based on the achievements of interphase FISH [66]. Commercial FISH probes for selective painting of individual chromosomes and specific DNA sequences and software's for image analysis are also suitable for description of MN composition. A major condition of the quantitative accuracy of the MN assay is integrity of cell membrane and preservation of the cytoplasm during the cell harvesting [67] while interphase FISH technique allows the destruction of cellular membrane. 
The MN test has been successfully combined with different kinds of DNA probes which recognize centromeres, other chromosome-specific regions, and whole chromosomes inside micronuclei and main nuclei. The analysis of MN combined with centromeric DNA probes for all chromosomes allows discrimination between centromere-negative MN, i.e. MN originating from chromosomal breakage (clastogenic effect), and centromere-positive MN, i.e. MN containing whole chromosomes (aneugenic effect). Centromere detection can be expected to be more accurate in distinguishing the two main types of MN than anti-kinetochore antibody staining [68], because MN can be formed from entire chromosomes with a disrupted kinetochore [69-71] and thus show no kinetochore signal. The application of chromosome-specific centromeric probes permits evaluation of the sensitivity of different chromosomes toward genotoxic agents. This approach also allows detection of non-disjunctional events (i.e., unequal distribution of homologous chromosomes between daughter nuclei) in binucleated cells [72]. The application of other chromosome region-specific and wcp probes permits evaluation of their participation in the formation of spontaneous [73] and induced [74,75] MN. Wcp probes target the euchromatic parts of a chromosome and thereby reveal both whole chromosomes and acentric fragments in MN [76]. However, they fail to distinguish between an entire chromosome and material from large chromosomal fragments in a particular MN. MN containing whole chromosomes can be discriminated using chromosome-specific centromeric probes on the same cells [74]. Nevertheless, the identification of the chromosome-specific contents of MN is still very incomplete, owing to a lack of methods by which the DNA within the MN could be fully investigated. Moreover, the absolute number of fragments enclosed in an MN cannot be quantified precisely. To our knowledge, there have so far been no successful attempts to apply interphase chromosome-specific multicolor banding (ICS-MCB) [27,77,78] to the analysis of MN contents. Description of the chromosomal contents of MN has also been limited by the number of colors applied simultaneously and the number of chromosomes evaluated per study. There are only a few studies analysing the participation of all human chromosomes in MN formation. The frequency of the presence of all 24 chromosomes in MN was analyzed by a dual-color FISH technique [79]. However, since only two probes were applied per slide, this study was time consuming. The approach was also limited in its ability to detect MN that might contain DNA from multiple chromosomes. Spectral karyotyping (SKY) technology [80] offers the unique possibility of simultaneous classification of all 24 human chromosomes [81], but this technology is expensive and of limited accessibility for MN analysis. Wider introduction of SKY into research would provide a promising opportunity to develop our knowledge of the chromosomal contents of MN. The results of applying various FISH probes with the MN test are summarized in Table 2.
FISH in the elucidation of MN formation
MN can arise after mitosis from acentric chromosomal fragments or whole chromosomes that are not included in either daughter nucleus. Therefore, in the MN test, chromosome aberrations are detected indirectly, via the loss of DNA from the nucleus leading to MN in the cytoplasm of the cell. The FISH technique permits identification of the chromosomal origin of MN and thus improves our understanding of the mechanisms of MN formation.
Anaphase aberrations and MN formation in women's lymphocytes were compared using pancentromeric and X-chromosome painting probes. It was shown that micronucleation of the X chromosome in women's lymphocytes is probably the result of the frequent lagging of the X chromosome during anaphase [82]. FISH combined with the MN test makes it possible to reveal the involvement of different genes in the induction of MN and aneuploidy. The CBMN assay with centromere-specific probes in XPD-defective human fibroblast cells demonstrated that the XPD gene product plays a role in the events which protect human cells from the aneugenic effects of chemicals [72]. Data on the contents of MN in blood cells of workers exposed to welding fumes indicated that subjects positive for the detoxification gene GSTM1 showed an increased frequency of centromere-negative MN, while GSTT1-null subjects showed an elevated frequency of centromere-positive MN [83]. Thus, MN-FISH combined with the analysis of anaphase aberrations and genetic polymorphisms has contributed to the understanding of the processes that accompany the formation of MN.
MN-FISH applications in genetic toxicology
FISH analysis of MN with different kinds of DNA probes offers the possibility of specifying the nature of the genotoxic effects revealed in the MN test. The application of MN-FISH in many studies has revealed the occurrence of different chromosomes in MN (Table 2).
[Table 2. Results of application of various FISH probes with the MN test.]
Clastogenic and aneugenic effects detection and analysis of mechanisms of aneuploidy by the MN test with centromeric probes
The distinction between clastogenic and aneugenic effects (leading to structural and numerical chromosome alterations, respectively), by identifying the origin of MN, is important for genotoxicity testing and for biomonitoring of genotoxic exposure and effect. This approach, applying centromeric and chromosome-specific centromeric probes, has proven useful and is widely applied in various studies. Centromere-specific FISH analysis of MN was applied for in vitro genotoxicity testing in studies of the phytoplankton toxins domoic acid (DA) and okadaic acid (OA) in the human intestinal cell line Caco-2 [84], of Al, Cd, Hg, and Sb salts in human blood cells [85], of the industrial chemical acrylamide and the traditional Chinese medicine Tripterygium hypoglaucum (Levl.) Hutch in mouse NIH 3T3 fibroblasts [86], and of the antitumor agent cycloplatam and its parent drugs cisplatin and carboplatin in human lymphocytes [87]. MN-FISH was applied for the analysis of in vivo genotoxicity of exposure to nitrous oxide in lymphocytes of operating-room nurses [88] and of the antihypertensive drug nimodipine in lymphocytes of treated patients [89]. Using FISH analysis with a mouse satellite DNA probe, it could be shown that nicotine is a clastogen [90], while the antitumor drugs topotecan and irinotecan [91] and the antibiotic rifampicin [90] are aneugens as well as clastogens in somatic cells in vivo. The results obtained are useful for understanding possible side effects of these medicines. Environmental lead exposure increases both centromere-positive and centromere-negative MN in blood lymphocytes of children; however, the contribution of centromere-positive MN was significantly higher than in the controls [92]. The correlation between centromeric and acentromeric MN frequencies in chronically irradiated human populations and the rate of exposure allows discussion of the possible application of centromere-specific FISH with CBMN analysis in biodosimetry [93].
The application of FISH with the MN test allows not only the distinction between clastogenic and aneugenic effects, but also discrimination between two mechanisms of aneuploidy induction: chromosome loss into MN, and chromosome non-disjunction, whereby one daughter cell becomes trisomic and the other monosomic [19,72]. It was shown that impairment of chromosome migration would lead to an increased frequency of MN containing a single centromere, whereas centrosome amplification would induce MN with three or more centromeric signals [94]. Studies with chromosome-specific centromeric probes support the observation that chromosome non-disjunction is the major mechanism of aneuploidy production, whether spontaneous [72,95] or induced by diethylstilboestrol [72], vincristine and demecolcine [96], or ionizing radiation [97]. Chromosome loss is the main mechanism of okadaic acid-induced aneuploidy [98].
Detection of MN contents with chromosome-specific centromeric probes
The well-known non-random distribution of chromosome damage, whether arising spontaneously [99,100] or after exposure to chemicals [101,102] or radiation [103,104], can be successfully investigated by evaluating the relative rates of micronucleation of different chromosomes or chromosome fragments [39,81,105]. With the application of chromosome-specific centromeric probes, it was shown that both the X and the Y chromosome are overrepresented in lymphocyte MN of men, but that the Y chromosome is overrepresented only in older subjects [106]. The occurrence of acrocentric chromosomes in spontaneous MN is neither overrepresented nor influenced by age or sex [107]. Treatment of lymphocytes with the aneuploidogens vincristine and demecolcine in vitro increased the frequency of micronucleation and malsegregation of chromosomes X and 8 in different age groups of women [96]. Aneuploidy of chromosome 8 was more frequent than aneuploidy of chromosome 7 in human lymphocytes treated with 1,2,4-benzenetriol in vitro, probably because only cells with non-lethal chromosome aberrations could survive to be detected [108]. Non-disjunction and micronucleation of the X chromosome were revealed in vitro in human lymphocytes treated with the chemotherapeutic agents melphalan, chlorambucil and p-N,N-bis(2-chloroethyl)aminophenylacetic acid [109]. The reasons for the preferential inclusion of some chromosomes in spontaneous and induced MN require further investigation. It is known, however, that chromosome-specific aneuploidies play key roles in the development and progression of cancers. Thus, the precise identification of the specific chromosomes and chromosome regions involved in the observed alterations should remain an important area for future research.
Detection of MN contents with probes for chromosome regions and wcp
Analysis of the chromosomal contents of spontaneous MN in normal women's lymphocytes using SKY [80] and FISH technologies demonstrated that the vast majority of MN appear to be derived from a single chromosome as a result of chromosome lagging. SKY analysis showed that all 23 chromosomes could be present in MN; overall, the X chromosome was seen most frequently [81]. In spontaneously arising MN of blood cells of patients with immunodeficiency, centromeric instability, and facial anomalies (ICF) syndrome, chromosome 1 appeared to be present in a higher proportion compared with chromosomes 9 and 16. Chromosome 18, which is not associated with the ICF syndrome, showed no signal in any of the MN observed [73].
FISH analysis of MN contents in human lymphocytes has shown a lack of repair of ethyl methanesulfonate-induced damage in chromosome 1 heterochromatin. This result clarifies the frequent involvement of band 1q12 in chromosome 1 rearrangements in human cancer cells [75]. We studied the involvement of chromosomes 7, 18 and X in mitomycin C (MMC)-induced MN using wcp and chromosome-specific centromere probes. X-chromosomal material was overrepresented in female-derived and underrepresented in male-derived MN. MN with centromeric and wcp signals from chromosome X in MMC-treated human leukocytes are presented in Figure 2. We speculated about a preferred inclusion of the inactive female X chromosome into MN [74]. The contribution of different chromosomes to MN induced by the clastogen MMC and the aneugen diethylstilboestrol (DES) was analysed in human lymphocytes using painting probes for all chromosomes. FISH analysis showed that DNA from chromosomes 9 and 1 was overrepresented in MMC-induced MN. The occurrence of chromosomes in DES-induced MN is more random than in MMC-induced MN [79]. The results of applying wcp probes for chromosomes 1, 7, 11, 14, 17 and 21 together with pancentromeric probes to MN induced by ionizing radiation in human lymphocytes support a random model of radiation-induced cytogenetic damage for the six chromosomes studied [105]. Until now, FISH has not been widely applied in plant mutagenesis, because the DNA probes required for the chromosomes of particular plant species are very limited. The study in [110] is a rare example of a detailed identification of the specific chromosomes or chromosome fragments involved in mutagen-induced MN in barley cells.
Conclusions
In summary, the combined application of the FISH technique with the comet and MN assays improves the capabilities of these widely used genotoxicity tests. Tests for the estimation of genotoxicity belong to the rapid screening methods and should be easy and quick to apply. These advantages entail certain limitations, namely the inability to recognize damage to specific genomic loci. The MN and comet assays combined with different kinds of FISH probes offer the unique possibility of detecting, on the same specimen, total DNA and chromosome damage while also evaluating damage to specific regions of the genome. MN and comets arise through the loss of DNA material from the nucleus into micronuclei and into the comet tail, respectively. Therefore, both methods reflect secondary rather than primary effects of DNA damage. FISH analysis of the origin and contents of MN and comets promotes a better understanding of the mechanisms of their formation, which is necessary for the correct application of these methods. Special modifications for the concurrent application of FISH with the comet and MN assays have been elaborated. It has been confirmed that data obtained with FISH on MN and comets are comparable with the results of metaphase and interphase FISH. The Comet-FISH technique yields valuable and reliable information, particularly about DNA damage and repair in general and in relation to the organization of the nucleus. Several questions concerning the behaviour and organization of DNA within the comet have been clarified using the FISH technique. Comet-FISH has been applied for the detection of DNA damage and repair in cancer relevant genes, for measuring transcription-coupled repair, and for the identification of genomic targets of various genotoxic agents, including antitumour preparations. MN-FISH permits discrimination between aneugenic and clastogenic effects in MN and recognition of the contents of MN.
MN-FISH has been applied in various studies to elucidate mechanisms of genomic instability, the distribution of chromosome breaks in the genome and, to some extent, the etiology of certain human maladies. Indeed, the available data demonstrate that the FISH technique can substantially enhance genotoxicity assessment using the comet assay and the MN test.
A Two-Week Treatment with Plant Extracts Changes Gut Microbiota, Caecum Metabolome, and Markers of Lipid Metabolism in ob/ob Mice
Scope: Targeting gut microbiota dysbiosis with prebiotics is effective, though side effects such as abdominal bloating and flatulence may arise following high prebiotic consumption over weeks. The aim is therefore to optimize the current protocol for prebiotic use.
Methods and results: To examine the prebiotic properties of plant extracts, two independent studies were conducted in ob/ob mice over two weeks. In the first study, Porphyra umbilicalis and Melissa officinalis L. extracts were evaluated; in the second study, a high vs low dose of an Emblica officinalis Gaertn extract was assessed. These plant extracts affect the gut microbiota and caecum metabolome, and induce significantly lower plasma triacylglycerols (TG) following treatment with P. umbilicalis and significantly higher plasma free fatty acids (FFA) following treatment with the low dose of E. officinalis Gaertn. Glucose- and insulin-tolerance are not affected, but white adipose tissue and liver gene expression are modified. In the first study, IL-6 hepatic gene expression is significantly (adjusted p = 0.0015) and positively (r = 0.80) correlated with the bacterial order Clostridiales in all mice.
Conclusion: The data show that a two-week treatment with plant extracts affects the dysbiotic gut microbiota and changes both the caecum metabolome and markers of lipid metabolism in ob/ob mice.
Introduction
The gut microbiota is recognized as a major actor in host pathophysiology, with functions extending beyond those related to digestion. [1] Both taxonomic (relative (%) abundance of bacterial groups) and functional (microbial metabolic pathways) alterations of the gut microbiota, termed dysbioses, have been associated with several pathologies, in particular metabolic diseases such as type 2 diabetes and obesity. [2-4] Thus, multiple strategies targeting gut microbiota dysbiosis may be effective in restoring physiological conditions. One of these strategies is the use of prebiotics, originally defined as a "non-digestible food ingredient that beneficially affects the host by selectively stimulating the growth and/or activity of one or a limited number of bacteria already resident in the colon." [5] In several experimental models of dysbiotic gut microbiota, including those of obesity induced by a fat-rich diet, prebiotics dampened the effects of the diet and intestinal inflammation by acting on the gut microbiota. [6] Nevertheless, beyond being merely beneficial, prebiotics may also induce side effects such as gut bloating and/or increased flatulence. [7,8] These effects are related both to an excessive production of gas (via fermentation of prebiotics by gut bacteria [9]) and to the substantial amounts (grams per day per patient) of prebiotics sometimes needed to achieve beneficial effects on health. In the interest of alleviating these undesirable effects, increasing attention has been paid to the potentially prebiotic effects of substrates other than oligosaccharides. The International Scientific Association for Probiotics and Prebiotics (ISAPP) proposed the following update of the prebiotic definition: "a substrate selectively utilized by host microorganisms conferring a health benefit." [10] This new definition expands the concept of prebiotic to polyunsaturated fatty acids, phytochemicals, and phenolics.
Recent studies have suggested that plant extracts rich in polyphenols might modulate a dysbiotic gut microbiota in various animal models, including mice fed a fat-rich diet. [6,11] Based on this evidence, we conducted a proof-of-concept study in ob/ob mice, a well-known model of gut microbiota dysbiosis (increased Firmicutes to Bacteroidetes ratio) and metabolic disease, [12] to evaluate the ability of Porphyra umbilicalis and Melissa officinalis L. extracts to induce changes in a dysbiotic gut microbiota, and also to determine whether these changes may be associated with some amelioration of lipid and glucose metabolism. It is noteworthy that in clinical practice prebiotics are proposed to patients suffering from gastrointestinal problems possibly due to a gut microbiota dysbiosis. Thus, a good indicator for treatment by a particular prebiotic is its capacity to change an already dysbiotic gut microbiota. Porphyra umbilicalis is a potential functional food with a high content of dietary fibers, minerals, and trace elements as well as proteins and lipids; it can exert multiple biological activities and has a high antioxidant capacity. [13] Melissa officinalis L. is known to have multiple pharmacological effects, including anxiolytic, antiviral, and antispasmodic activities, as well as an impact on mood, memory, and cognition. [14] We then performed a second independent study in which ob/ob mice were treated with two doses (high vs low) of an extract of Emblica officinalis Gaertn (also known as amla or Phyllanthus emblica Linn). The fruit of E. officinalis Gaertn is a rich source of vitamin C and tannins and has strong antioxidant activity. [15,16] It has also been shown to improve glucose and lipid metabolism in both normal subjects and type 2 diabetic patients, though at doses as high as two to three grams per day for three weeks. [17] Gut (fecal) microbiota was analyzed both at baseline (before treatment) and after two weeks of treatment. The overall caecum metabolome, various markers of lipid and glucose metabolism, and the expression of key metabolic and inflammatory genes in white adipose tissue and liver were also studied.
Animal Model and Tissue Collection
The two independent studies described above were conducted in 12-week-old C57Bl/6J male ob/ob mice (Charles River, L'Arbresle, France) fed a normal chow (NC) and then treated with plant extracts dissolved in sterile water, or with sterile water only (control group), as described below. Mice were group-housed (five mice per cage) in a specific pathogen-free controlled environment (12-h daylight cycle, lights off at 7:00 p.m.). At the end of the study, mice were sacrificed by cervical dislocation in a fed state (in the morning), to avoid a fasting-induced change in the composition of the gut microbiota, and tissues were collected and snap-frozen in liquid nitrogen. All experimental procedures involving animals were approved by the local ethical committee and performed in accordance with relevant guidelines and regulations (APAFIS#8111-2016120716262061 v10). Nutritional analysis of the three plant extracts (Capinov SAS, Landerneau, France; Table 1) showed that the extract of P. umbilicalis contained a high concentration of proteins and fibers compared with the extracts of M. officinalis L. and E. officinalis Gaertn. The extract of E. officinalis Gaertn had the highest content of carbohydrates (15 times more carbohydrates than the extract of P. umbilicalis). With regard to lipids, the P. umbilicalis extract (which is a marine plant extract) has the highest concentrations of palmitic and oleic acids, whereas the M. officinalis L. and E. officinalis Gaertn extracts (terrestrial plants) contain linolenic and linoleic acids, which were not detected in the extract of P. umbilicalis.
The E. officinalis Gaertn extract, the only fruit extract tested, showed the highest concentration of linoleic acid.
Dosage Information/Dosage Regimen
Plant extracts were dissolved in sterile water (vehicle, abbreviated veh) and provided in special bottles (each with a metal bead preventing waste of the plant extract solution used to fill it) for the mice to drink for two weeks (solutions changed every two days), at the following doses (quantity of plant extract powder per day per mouse): P. umbilicalis (Pum), 500 mg/day/mouse; M. officinalis L. (Mel), 500 mg/day/mouse; E. officinalis Gaertn: high dose (Eof-H), 125 mg per day per mouse, and low dose (ten times lower, Eof-L), 13 mg per day per mouse. Importantly, before treatment, water intake was measured for each group over two days, and the bottles were then filled accordingly so as to provide the doses reported above. For the dose-effect study, E. officinalis Gaertn was chosen based on a clinical trial that evaluated the effect of 1000 mg of E. officinalis Gaertn on endothelial function in patients with type 2 diabetes. [18] This dose, equivalent to a dose of 250 mg kg−1 body weight per day calculated as the Human Equivalent Dose, [19] proved most effective in that trial (a worked sketch of this conversion is given at the end of this subsection).
Taxonomic and Predicted Functional Analysis of the Gut Microbiota
Total DNA was extracted from feces both at baseline and after treatment with the plant extracts, as previously described. [20] The V3-V4 regions of the bacterial 16S DNA were targeted by the 357wf-785R primers and analyzed by MiSeq (RTLGenomics, http://rtlgenomics.com/, Texas, USA). An average of 21,600 sequences was generated per sample in the first study and 23,234 in the second study. A complete description of the bioinformatic filters applied is available at http://www.rtlgenomics.com/docs/Data_Analysis_Methodology.pdf. The cladograms in Figures 1A, 2A, 3A, and 4A, as well as the LDA scores in Figures 1C, 2C, 3C, and 4C, were drawn using the Huttenhower Galaxy web application (http://huttenhower.sph.harvard.edu/galaxy/) via the LEfSe algorithm. [21] Diversity indices were calculated using the software Past 3.23 (Hammer, Ø., Harper, D.A.T., and Ryan, P.D., 2001. PAST: Paleontological Statistics Software Package for Education and Data Analysis. Palaeontologia Electronica 4(1): 9 pp). The predictive functional analysis of the gut microbiota was performed via PICRUSt. [22]
Metabolomic Analysis of the Intestinal (Caecum) Content
The metabolomic (total metabolites) analysis of the caecum content was performed as previously described. [23]
[Figure 1. A: cladogram (taxonomic levels are represented by rings, with phyla at the innermost and genera at the outermost ring; each circle is a bacterial member within that level). B: indices of gut microbiota diversity. C: LDA scores for the predictive microbial pathways identified via PICRUSt [22]. D: principal component analysis (PCA) and histogram of the overall metabolites in the caecum content; n = 5. ***p < 0.001, ****p < 0.0001, two-way ANOVA followed by the two-stage linear step-up procedure of Benjamini, Krieger and Yekutieli to correct for multiple comparisons by controlling the false discovery rate (<0.05); for D, a one-way PERMANOVA with Bonferroni correction was applied. Figures 2-4 follow the same panel layout; groups absent from a cladogram had no bacterial taxon significantly enriched at p < 0.01 (in particular, the vehicle induced no significant changes in the gut microbiota of the control group).]
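The dose conversion flagged above can be reproduced with the standard FDA body-surface-area method. The following is a minimal sketch, assuming the usual Km factors (3 for mouse, 37 for human) and reference body weights of roughly 50 kg for a human and 50 g for a mouse; none of these values is stated in the text, so the output is illustrative only.

```python
# Hypothetical sketch of the FDA body-surface-area dose conversion.
# Km factors and body weights below are assumptions, not values
# reported in the study.

KM_HUMAN = 37.0  # standard Km factor (adult human)
KM_MOUSE = 3.0   # standard Km factor (mouse)

def human_to_mouse_dose(human_dose_mg_per_kg: float) -> float:
    """Convert a human dose (mg/kg) to its mouse equivalent (mg/kg)."""
    return human_dose_mg_per_kg * (KM_HUMAN / KM_MOUSE)

human_daily_dose_mg = 1000.0      # E. officinalis dose in the cited trial
assumed_human_weight_kg = 50.0    # assumption: reference body weight
assumed_mouse_weight_kg = 0.050   # assumption: ~50 g ob/ob mouse

human_mg_per_kg = human_daily_dose_mg / assumed_human_weight_kg  # 20 mg/kg
mouse_mg_per_kg = human_to_mouse_dose(human_mg_per_kg)           # ~247 mg/kg
mouse_mg_per_day = mouse_mg_per_kg * assumed_mouse_weight_kg     # ~12 mg/day

print(f"mouse dose: {mouse_mg_per_kg:.0f} mg/kg/day "
      f"(~{mouse_mg_per_day:.0f} mg/day per mouse)")
```

Under these assumptions the conversion lands close to the 250 mg kg−1 per day stated in the text and to the ~13 mg per day per mouse used as the low dose (Eof-L).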
Fed Blood Glucose Measurement, Oral Glucose-Tolerance Test (OGTT), and Intraperitoneal Insulin-Tolerance Test (IPITT)
Fed blood glucose was measured, and the OGTT and IPITT were performed, at week 3, immediately after the two-week treatment, as described elsewhere. [23] The IPITT was performed four days after the OGTT; mice were fasted for 3 h and then injected with 5 U kg−1 insulin. The area under the curve (AUC), shown in mmol/L x min as an inset for the OGTTs, was calculated by the trapezoidal rule [24] using GraphPad Prism version 7.00 for Windows Vista (GraphPad Software, San Diego, CA).
RNA Extraction, Reverse Transcription, and qPCR
Total RNA was extracted from homogenized tissues (epididymal white adipose tissue or liver) using Tri Reagent solution (Euromedex, Souffelweyersheim, France) following the manufacturer's protocol. RNA purity was assessed using a Nanodrop (Thermo, Evry, France). For reverse transcription, 1 µg of RNA was used with 1 U of M-MLV reverse transcriptase (Thermo), 15 ng random hexamers, 10 mM DTT, and 1 mM dNTPs. After 1 h at 37°C, the reverse transcriptase was inactivated by heating for 10 min at 65°C. cDNAs were diluted 1:5 with ultrapure water. For qPCR reactions, 2.5 µL of cDNA were mixed with 6.25 µL of Taqman UNIV PCR Master mix 2X, 0.625 µL of Taqman assay 20X (Thermo), and 3.125 µL of ultrapure water. Amplification was performed in a Stratagene Mx 3005P thermocycler (Agilent, Les Ulis, France) using the following temperature conditions: 2 min at 50°C, 10 min at 95°C, and 40 cycles alternating 15 s at 95°C and 1 min at 60°C. For each condition, expression was quantified in duplicate, and 18S rRNA was used as the endogenous control in the comparative cycle threshold (CT) method. The Taqman assay identification number for each primer pair used to study the related gene is available upon request.
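The two calculations just described — the trapezoidal AUC for the OGTT curves and the comparative CT quantification with 18S rRNA as the endogenous control — are simple enough to sketch. The snippet below is a minimal illustration with made-up time points and Ct values; it mirrors the calculations performed in GraphPad Prism rather than the study's actual data.

```python
# Minimal sketches of two calculations from the Methods:
# (1) area under an OGTT curve by the trapezoidal rule,
# (2) relative gene expression by the comparative CT (2^-ddCt) method.
# All numeric values below are illustrative, not study data.

def trapezoid_auc(times_min, glycemia_mmol_per_l):
    """AUC in mmol/L x min, summing trapezoids between successive points."""
    auc = 0.0
    for i in range(len(times_min) - 1):
        dt = times_min[i + 1] - times_min[i]
        auc += dt * (glycemia_mmol_per_l[i] + glycemia_mmol_per_l[i + 1]) / 2.0
    return auc

# Hypothetical OGTT: blood glucose at 0, 15, 30, 60, 90, 120 min
times = [0, 15, 30, 60, 90, 120]
glucose = [8.0, 15.0, 18.0, 14.0, 11.0, 9.0]
print(f"OGTT AUC = {trapezoid_auc(times, glucose):.0f} mmol/L x min")  # 1575

def rel_expression(ct_target, ct_18s, ct_target_ctrl, ct_18s_ctrl):
    """Comparative CT method: fold change = 2^-(dCt_treated - dCt_control),
    with 18S rRNA as the endogenous control."""
    d_ct_treated = ct_target - ct_18s
    d_ct_control = ct_target_ctrl - ct_18s_ctrl
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values for a target gene vs 18S in treated/control samples
print(f"fold change = {rel_expression(24.1, 9.8, 25.3, 9.9):.2f}")  # 2.14
```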
Statistical Analysis
The results are presented as mean ± SEM, n = 5. Statistical analyses were performed by two-way ANOVA followed by the two-stage linear step-up procedure of Benjamini, Krieger, and Yekutieli to correct for multiple comparisons by controlling the false discovery rate (<0.05), or by the Kruskal-Wallis test with the same two-stage Benjamini-Krieger-Yekutieli correction, or by the Mann-Whitney test, as indicated in the figure legends, using GraphPad Prism version 7.00 for Windows Vista (GraphPad Software, San Diego, CA). Values were considered significant starting at p < 0.05, or as reported after corrections. For the taxonomic and predictive functional analyses of the gut microbiota, values were considered significant starting at p < 0.01 (Figures 1-3). For the correlation between IL-6 and the order Clostridiales, the p-value was corrected according to the Benjamini-Hochberg procedure for multiple comparisons, with a false discovery rate <0.05; n = 15 (all mice from the first study). Principal component analysis graphs were drawn, and the related statistical analyses performed, using Past 3.23 with one-way PERMANOVA and Bonferroni's correction.
Analysis of Gut Microbiota in ob/ob Mice after a Two-Week Treatment with Plant Extracts
The aim of this study was to evaluate the putative prebiotic properties of plant extracts on the gut microbiota during a short treatment period of two weeks. As our objective was to investigate the impact of these extracts on a dysbiotic gut microbiota, the experiments were conducted in ob/ob mice, a known murine model of gut microbiota dysbiosis. [12] We compared the gut microbiota before (baseline point, B) and after the treatment (final point, F) for each group of mice. To strengthen our results and avoid their over-interpretation, we also performed the analysis described above on a control group of mice (one group per study) treated with the vehicle (sterile water) in which the plant extracts were dissolved. Treatment with Porphyra umbilicalis (Pum) induced a significant increase in bacteria belonging to the class Gammaproteobacteria and to the families Enterobacteriaceae and Porphyromonadaceae, as well as the genus Barnesiella (the last two groups belonging to the phylum Bacteroidetes); by contrast, the Pum treatment appeared to reduce the abundance of the bacterial order Coriobacteriales and its component taxon Olsenella, and of the bacterial family Ruminococcaceae (Figure 1A). In terms of overall microbial diversity, treatment with Pum decreased the Chao-1 diversity index, which reflects the diversity related to rare species (Figure 1B). We then analysed the microbial pathways that could be affected by Pum treatment by performing a PICRUSt-based predictive analysis of the microbiome. [22] The analysis showed that Pum treatment increased the microbial pathway related to glyoxylate and dicarboxylate metabolism (Figure 1C).
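Two of the quantities used in this analysis lend themselves to a brief illustration: the Chao-1 richness estimator, which is driven by singleton and doubleton counts and therefore reflects rare taxa, and an FDR-corrected Spearman correlation of the kind applied to the IL-6-Clostridiales association described under Statistical Analysis. The sketch below runs on synthetic inputs; the study itself used Past 3.23 and GraphPad Prism, and the taxa names here are only placeholders.

```python
# Minimal sketch: (1) bias-corrected Chao-1 richness estimate,
# (2) Spearman correlations with Benjamini-Hochberg FDR correction.
# Inputs are synthetic; the study's pipeline used Past 3.23 / GraphPad.
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

def chao1(counts):
    """Bias-corrected Chao-1: S_obs + F1*(F1-1) / (2*(F2+1)),
    where F1/F2 are the numbers of singleton/doubleton taxa."""
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)
    f1 = np.sum(counts == 1)
    f2 = np.sum(counts == 2)
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

otu_counts = [120, 43, 7, 1, 1, 1, 2, 2, 0, 5]  # reads per OTU (synthetic)
print(f"Chao-1 = {chao1(otu_counts):.1f}")

# Correlate one gene's expression with several bacterial orders (n = 15 mice)
rng = np.random.default_rng(0)
il6_expr = rng.normal(size=15)
taxa = {name: rng.normal(size=15) for name in ["Clostridiales",
                                               "Bacteroidales",
                                               "Coriobacteriales"]}
pvals = [spearmanr(il6_expr, abund).pvalue for abund in taxa.values()]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for name, p in zip(taxa, p_adj):
    print(f"{name}: adjusted p = {p:.3f}")
```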
Mice receiving the Melissa officinalis L. (Mel) extract showed a significant increase in the bacterial family Porphyromonadaceae (Figure 2A); in terms of overall microbial diversity, treatment with Mel increased the Chao-1 diversity index (Figure 2B). With respect to the predicted microbiome, treatment with Mel increased the microbial pathway related to fatty acid metabolism and appeared to reduce that related to sphingolipid metabolism, which was found to be enriched at baseline (Figure 2C). Treatment with the high dose of Emblica officinalis Gaertn (Eof-H) increased the abundance of the genus Eubacterium, belonging to the family Eubacteriaceae (phylum Firmicutes; Figure 3A), without affecting either the overall microbial diversity (Figure 3B) or the predicted microbiome (Figure 3C). By contrast, the changes induced by the low dose of E. officinalis Gaertn (Eof-L) were modest and the taxa implicated were not identifiable (unknown taxa), though at baseline these mice displayed a higher abundance of the genus Paraeggerthella, which was thus likely decreased by Eof-L (Figure 4A). These data were associated with no change in overall microbial diversity (Figure 4B), but with an increase in the naphthalene degradation microbial pathway (Figure 4C). Overall, these data show that the plant extracts tested are capable of changing the dysbiotic gut microbiota of ob/ob mice over a short period of two weeks, Eof-H being more effective than Eof-L.
Metabolomic Analysis of the Intestinal (Caecum) Content of ob/ob Mice Treated with Plant Extracts for Two Weeks
To measure the metabolization of the plant extracts by gut bacteria, in terms of the production of both short-chain fatty acids (SCFAs) (key molecules of microbial origin involved in the modulation of host metabolism [25-27]) and other metabolites, we performed a metabolomic analysis of the intestinal (caecum) content. [28-30] The overall caecal metabolomic profiles of control mice differed to a highly significant extent from those of mice treated with Pum or Mel extracts, as evidenced by an independent cluster for each (Figures 1D and 2D, left panels). In detail, mice treated with Pum extract showed significantly higher levels of propionate, acetate, and glutamate (Figure 1D, right panel); mice treated with Mel extract showed significantly higher levels of butyrate, propionate, and ethanol, and a significantly lower level of lactate (Figure 2D, right panel). Mice treated with Eof-H showed an overall caecal metabolomic profile significantly dissimilar from that of the control group, with significantly higher levels of both acetate and ethanol (Figure 3D, left and right panels, respectively); by contrast, mice treated with Eof-L showed an overall caecum content profile similar to that of control mice, despite a significantly lower level of butyrate and a significantly higher level of glutamate (Figure 4D, left and right panels, respectively). Overall, these data show that a two-week treatment with P. umbilicalis and M. officinalis L. extracts affects the caecum metabolome in a murine model of genetically induced obesity.
Analysis of Markers of Lipid Metabolism in the Plasma of ob/ob Mice Treated with Plant Extracts for Two Weeks
Since we observed changes in both the gut microbiota and the caecum metabolome, and it is known that the gut microbiota can affect lipid homeostasis, [26,31-33] we analyzed certain key markers of lipid metabolism.
In the first study, ob/ob mice treated with Pum extract had significantly lower plasma TG levels compared with control mice (Table 2); by contrast, in the second study, ob/ob mice treated with Eof-L showed significantly higher plasma FFA levels compared with control mice. Overall, these data show that a short two-week treatment with P. umbilicalis extract and with a low dose of E. officinalis Gaertn extract can affect key markers of lipid metabolism in a murine model of genetically induced obesity.
Analysis of Markers of Glucose Metabolism and Survey of Body Weight in ob/ob Mice Treated with Plant Extracts for Two Weeks
Given the link between lipid and glucose metabolism and gut microbiota activity, [32,34] we analyzed markers of glucose metabolism in vivo and also monitored body weight. Treatment with Pum and Mel extracts did not significantly affect either fed blood glucose or body weight (Figure 5A,B, respectively). Then, to determine whether these plant extracts might affect blood glucose on a dynamic basis, we performed an OGTT and an IPITT. No significant changes were observed, irrespective of the parameter and the group (Figure 5C,D). Treatment with Eof-H and Eof-L extracts did not significantly affect either fed blood glucose or body weight; the control group showed a significant reduction of body weight over the two weeks (Figure 6A,B). Neither the OGTT nor the IPITT was significantly affected, irrespective of the group (Figure 6C,D). Overall, these data show that, in a murine model of genetically induced obesity, a two-week treatment with the tested plant extracts did not significantly affect markers of glucose metabolism, despite the changes observed in the gut microbiota, caecum metabolome, and lipid metabolism.
Analysis of Gene Expression in White Adipose Tissue and Liver of ob/ob Mice Treated with Plant Extracts for Two Weeks
A targeted analysis of the expression of genes related to lipid, glucose, and energy metabolism as well as inflammation was performed in white adipose tissue (WAT) and liver. Mice treated with Mel extract showed a significantly higher expression of the gene Acaca (acetyl-CoA carboxylase alpha, related to lipogenesis) in the WAT (Figure 5E, upper panel). However, the expression profile across all genes analyzed in the WAT was not dissimilar to that of control mice, whatever the treatment (Figure 5E, lower panel). By contrast, in the liver, significantly higher expression of the genes Pck1 (phosphoenolpyruvate carboxykinase 1, related to glucose metabolism) and IL-6 (interleukin-6, related to inflammation) was observed in mice treated with Pum and Mel extracts, respectively (Figure 5F, upper panel). Moreover, mice treated with Pum, but not those treated with Mel, displayed a significantly different expression profile across all hepatic genes analysed compared with control mice (Figure 5F, lower panel). The change in the hepatic expression of IL-6 was not associated with a change in IL-6 plasma levels in any group (Figure 5G). The Eof-H extract did not significantly affect WAT gene expression, whereas mice treated with Eof-L extract showed a significantly lower expression of the IL-6 gene in the WAT (Figure 6E, upper panel), which was not associated with a change in IL-6 plasma levels, whatever the group (Figure 6F). The expression profile across all genes analysed in the WAT was not dissimilar to that of control mice, whatever the treatment (Figure 6E, lower panel).
In the liver, significantly lower expression was observed for the genes Nr1H2 (nuclear receptor subfamily 1, group H, member 2, related to energy metabolism), TNF-α, and Ccl5 (tumor necrosis factor-α and chemokine (C-C motif) ligand 5, respectively, both related to inflammation) in mice treated with Eof-H; by contrast, mice treated with Eof-L showed significantly lower expression of the Ccl5 gene only (Figure 6G, upper panel). Interestingly, both mice treated with Eof-H and those treated with Eof-L displayed a significantly different expression profile across all hepatic genes analysed compared with control mice (Figure 6G, lower panel). Overall, these results suggest a plant extract-dependent and tissue-specific effect on the expression of the genes analysed in the WAT and liver of ob/ob mice. We also investigated whether the expression of the genes significantly modulated in both studies and tissues might be correlated with specific bacterial taxa and microbial pathways of the gut microbiota, together with caecum metabolites and IL-6 plasma levels. We did not identify any significant correlation between the expression of genes in the WAT and bacterial taxa in either study (data not shown). By contrast, in the first study, the hepatic expression of IL-6 (but not IL-6 plasma levels) was found to be significantly (adjusted p = 0.0015) and positively (r = 0.80, Spearman correlation) correlated with the bacterial order Clostridiales.
Discussion
The aim of this study was to evaluate whether extracts of the three plants Porphyra umbilicalis, Melissa officinalis L., and Emblica officinalis Gaertn might have prebiotic properties when administered for two weeks to ob/ob mice. Our criteria for defining prebiotic properties were: i) an impact on the gut microbiota, ii) generation of SCFAs, and iii) improvement in lipid and/or glucose metabolism. Since in current medical practice prebiotics are proposed to patients experiencing gut discomfort, we chose ob/ob mice, a known model of metabolic disease and gut microbiota dysbiosis. [12] Targeting the gut microbiota of ob/ob mice has already proven effective in ameliorating glucose metabolism, [35] supporting our rationale. Moreover, prebiotics are usually administered for a long period (at least four weeks) and at high doses (up to 10 g per day per patient). Thus, some patients may experience undesirable effects such as gut bloating and/or increased flatulence [7,8] (parameters that could not be taken into account in our study), due to the fermentation of prebiotics by gut bacteria. [9] Based on this evidence, we opted for a short period of two weeks and the use of polyphenol extracts. Importantly, intestinal discomfort (bloating and flatulence) has not been reported in clinical studies after the administration of such extracts. [36-38] The modifications in the gut microbiota induced by the Pum and Mel extracts were associated with a significant shift of the overall metabolome of the caecum content. In particular, mice treated with Pum and those treated with Mel displayed higher caecal levels of propionate than control mice. It is also noteworthy that Pum induced a substantial reduction in plasma TG, which could be mechanistically linked to the observed increase in propionate, as previously reported. [39] The increase in propionate and the reduction in TG were associated with a change in hepatic gene expression, but not with a modulation of glucose metabolism.
Certain prebiotics, such as oligofructoses, have been shown to be capable of increasing glucose tolerance, improving intestinal physiology, and reducing fat mass and inflammation in the same murine model as the one we used. [40] However, in that study, the ob/ob mice were younger (10 weeks old) and the treatment with oligofructoses [41] (added to a control diet in a proportion of 9:1 [weight of control diet:weight of fibers]) was longer (5 weeks) than in the protocol we employed. These differences could explain the inconsistency between those previously reported data and our findings. With regard to the modulation of the gut microbiota, treatment with Pum extract indirectly reduced the abundance of the genus Ruminococcus, which was found to be higher at baseline. The genus Ruminococcus has been positively associated with better lipid utilization in mice, via the production of the bacteriocin Albusin B, a toxin capable of enhancing lipid utilization in BALB/c mice. [42] These data contrast with ours, since we show that reduced plasma TG is instead associated with reduced levels of Ruminococcus. This discrepancy suggests that bacteria belonging to the Ruminococcus genus may have different metabolic effects depending on both the pathophysiology and the genetic background of the animal model used (BALB/c vs C57Bl/6 ob/ob mice). The modifications of the gut microbiota induced by Eof-H were associated with a significant shift of the overall metabolome of the caecum content. Specifically, mice treated with Eof-H displayed higher levels of both acetate and ethanol. These changes were associated with a modulation of gene expression in the liver. In mice treated with Eof-L, we observed a modest impact on the gut microbiota (related to unknown taxa) and the predicted microbial pathways. Moreover, this treatment did not induce a significant shift of the overall metabolome, despite the observed lower levels of butyrate and higher levels of glutamate. Butyrate levels were also modulated by Mel in the first study, though toward higher levels. The actual impact of butyrate could be controversial, since its increase could favor infections by certain pathogenic E. coli strains such as O157:H7, as shown by Zumbrun et al. [43] Modulations of butyrate levels may therefore represent either a positive or a negative outcome, depending on a very specific context. [44] In terms of metabolic modulations, mice treated with Eof-L exhibited higher plasma levels of FFA. This finding could reflect either increased lipolysis or reduced lipid storage in the adipose tissue [32] and was associated with lower IL-6 gene expression in the WAT. Increased FFA plasma levels are generally associated with a detrimental metabolic condition. However, elevated FFA plasma levels have been demonstrated in axenic mice fed a high-fat diet, which are known to resist diet-induced obesity. [32] Thus, based on this study reported by Bäckhed et al., the increase in FFA we observed in mice treated with Eof-L could be interpreted as a reduction in energy storage in the WAT of ob/ob mice, which could represent a favorable metabolic condition. In conclusion, a two-week treatment with Porphyra umbilicalis, Melissa officinalis L., and E. officinalis Gaertn extracts can significantly affect a dysbiotic gut microbiota, the caecum metabolome, and markers of lipid metabolism in ob/ob mice. We also observed a change in the expression of certain genes involved in glucose, lipid, and energy metabolism, as well as inflammation, in both WAT and liver.
The extent of these changes depended on both the nature of the plant extract administered and the dose. With regard to the dose, the cohousing of mice during treatment may have introduced variation in the dosing of individual animals. However, we did not observe aberrant intragroup variances for any of the studied parameters, suggesting that any variation in consumption occurred only to a small extent. Nevertheless, these changes were associated neither with modulations of body weight nor with modulations of glucose metabolism, and in general cannot be identified as metabolic improvements. Thus, in terms of validating the tested plant extracts as prebiotics, apart from the changes induced in the gut microbiota, we face a dichotomy between the effects observed on the gut microbiota and lipid metabolism and those observed on glucose metabolism. Although our intent was not to compare the plant extracts with each other but rather to assess metabolic intra-group effects, the baseline differences in body weight among the groups of mice might limit further metabolic interpretation. Moreover, for the translational relevance of our results, more data from patients are required to assess whether both a limited concentration of prebiotics and a reduced treatment time (two weeks) could be effective while limiting side effects. Our data may stimulate a debate on how any substrate with putative prebiotic properties should be validated, and in particular on whether this validation should target a specific parameter or a wider panel of metabolic parameters, regardless of changes in the gut microbiota and/or microbiome.
Minimize the competence gap between ferrous foundry small firms and vocational high school in Indonesia readiness industry 4.0
The purpose of this paper is to identify the competences that should be taught in vocational high schools, with attention to the skills needed for the future. The research method used is descriptive qualitative. The competences required of vocational high school students by the ferrous metal casting industry were obtained from two sides. The first side was observation and interviews in ferrous foundries to identify the required competences. The second side was interviews and observations at vocational high schools to identify differences in the competences taught. The competences that should be taught are: the foundations of metal casting; pattern-making techniques with PowerPoint or Prezi; engineering drawing and AutoCAD; manual and modern casting techniques; SolidWorks and SolidCast; mold and core making techniques; and ferrous metal casting techniques with industry.
Introduction
Politics, economics, education, culture, and society are changing drastically due to the development of technology and information. The development of technology and information has an impact on the political policies of government [6]. The economy is shifting from a door-to-door marketing system to start-up marketing [5]. Curricula, learning subjects, learning environments, teachers, and students must follow this technological development [10]. Culture and society change along with an environment that is itself transformed by the development of technology and information [4]. Indonesia faces the challenge of setting up its industry and workforce in accordance with industry 4.0, driven by this development of technology and information. In preparation for the industry 4.0 revolution, Indonesia has drawn up a roadmap to industry 4.0, one element of which is a set of 10 national priorities. Indonesia will pursue 10 national priorities in the "Making Indonesia 4.0" initiative: 1) improving the flow of goods and materials; 2) re-designing industrial zones; 3) accommodating sustainability standards; 4) empowering SMEs; 5) building a national digital infrastructure; 6) attracting foreign investment; 7) improving the quality of human resources; 8) developing an innovation ecosystem; 9) providing incentives for technology investment; 10) harmonizing rules and policies [6]. Two of these national priorities are the focus of this paper: empowering the micro and SME industry, and improving the quality of human resources. The micro and SME industry needs to be empowered because 62% of Indonesian workers work in micro-enterprises and SMEs with low productivity [6]. Micro and SME productivity is still low due to a lack of applied technology, poorly trained workers, poor production management, low demand for goods, poor marketing, products that are still inferior to foreign products, and expensive raw materials. Micro-enterprises and SMEs can be empowered through financial incentives, technical assistance, improvement of employee human resources, product standardization, management standardization, product development, and product marketing. One of the things that can improve workers' human resources, product standardization, and product development is the teaching factory [12][13]. A teaching factory is a concept that integrates the corporate or industrial environment with the classroom.
The teaching factory concept was originally derived from medical education, in which schools and hospitals run in parallel and go hand in hand. The purpose of running in parallel and coherently is to integrate the learning and work environments; once this integration occurs, real and relevant learning experience increases [12]. The teaching factory grew rapidly in companies and industry in America. Computer hardware and software companies used the teaching factory concept, and it then spread to manufacturing companies, where students are involved in designing and developing new products needed by the market [12]. The manufacturing industry includes the processes of machining, joining, forming, and casting [3]. Metal casting is a manufacturing process that can produce very complex shapes [1]. The focus of this paper is the metal casting manufacturing process. Metal casting is divided into two branches, namely ferrous and non-ferrous metal casting; this paper focuses on ferrous metal casting. The micro and small-scale ferrous metal foundry industry in Indonesia is spread out but concentrated in the Klaten area. A teaching factory will run well only if micro and small ferrous foundry enterprises and vocational high schools with the metal casting competence go hand in hand. The vocational high schools with metal casting expertise in Central Java are SMK N 2 Klaten and SMK Batur Jaya Ceper 1 Klaten. The concept of the learning model is taken from the Klaten area because Klaten is the district in Central Java that is the center of the micro and SME ferrous foundry industry. The purpose of this paper is therefore to identify the competences that should be taught in vocational high schools, with attention to the skills needed for the future. These competences should help to build and improve good cooperation between schools and ferrous foundries.
Method
The research method used is descriptive qualitative. The competences required of vocational high school students by the ferrous metal casting industry were obtained from two sides. The first side was observation and interviews in ferrous foundries to identify the required competences. The second side was interviews and observations at vocational high schools to identify differences in the competences taught.
Ferrous foundry analysis
3.1.1 Tacit knowledge
Knowledge is divided into two kinds, namely tacit knowledge and explicit knowledge [14][15]. Explicit knowledge is knowledge that can be explained through sentences and can be described with pictures and writing. Tacit knowledge comprises the actions, work procedures, work routines, commitment, idealism, and emotions of the worker that the worker cannot readily explain [9]. Tacit knowledge can be shared through interviews with someone the worker trusts [14][17]. Therefore, before conducting interviews, trust must first be built with the workers through frequent contact.
Vocational high school analysis
Based on the KKNI (the Indonesian qualification framework), vocational high school graduates enter at level 2, which means they: (1) are able to perform a specific task, using commonly used tools and information and standard work procedures, and to demonstrate performance of measurable quality under the direct supervision of their superiors; (2) have basic operational knowledge and factual knowledge of specific work areas, so as to be able to choose from available solutions to common problems.
(3) are responsible for their own work and can be given responsibility for guiding others. Based on the in-depth interviews conducted at the vocational high schools, the learning carried out there uses project-based learning. However, the learning only uses aluminum materials, so the furnaces used in the ferrous metal foundry industry are completely different from those in the schools. The consideration is that a furnace for ferrous metal casting requires more electrical energy, which of course makes practice costly. Therefore, schools rely only on on-the-job training (OJT). However, OJT usually cannot be used to teach ferrous metal casting: because industry focuses only on production, students tend only to help with the existing jobs in the plant. Based on the existing national curriculum for the vocational high school metal casting technology competence, there are four main subjects, namely pattern-making techniques, mold and core making techniques, manual casting techniques, and machine casting techniques. Of these four subjects, the ones that schools can really teach are pattern-making techniques, mold and core making techniques, and manual casting techniques. Pattern-making techniques are still taught manually, and tend to be limited to producing wood and resin patterns. However, the small-firm foundry industry in Klaten already uses aluminum patterns to facilitate sand casting. This means there is a gap between the pattern materials used in schools and those used in industry. In addition, the foundry industry needs people who are able to communicate with customers, but at school students have not been taught to communicate well. Schools also still do not apply computer programs that can help in planning good foundry patterns and gating systems. Mold and core making techniques are still taught manually. The molding technique still uses manual cope and drag by hand. This mold making technique is in accordance with practice in the small-firm ferrous metal casting industry in Klaten. However, schools must also develop competences to adapt to sophisticated foundry industries: students should be familiar with material handling by conveyors, monorails, cranes and hoists, and Automated Guided Vehicles. The core manufacturing technique is also still taught manually. Core making as taught in schools matches practice in the small-firm metal casting industry in Klaten; core manufacture still uses heat from gas fuel to harden the core sand to be used. However, schools should also start teaching students about core making using CNC machines, which means that students should become familiar with CNC machines. The manual casting technique taught at school is the sand casting technique. Manual casting in schools still uses a crucible furnace that can only melt aluminum, so students only acquire non-ferrous metal casting competence. The competence of ferrous metal casting has not been taught, owing to the absence of a furnace suited to the small-firm metal foundry industry in Klaten.
Teaching Factories to Minimize the Competence Gap Between Vocational High Schools and Ferrous Foundry Small Firms in Klaten
Vocational high schools and the ferrous metal casting firms in Klaten must cooperate to develop both the industry and the vocational high school metal casting competences. Micro and SME foundries in Klaten usually only try to increase production and improve marketing.
Therefore, to start the cooperation, vocational high schools should establish mutually profitable partnerships with industry, built around the competencies to be improved (Figure 1).

Figure 1. Collaboration as a win-win solution.

Vocational high school in Indonesia lasts three years. In the first year the school should provide material on the foundations of metal casting, pattern making techniques, basic PowerPoint or Prezi, engineering drawing, and AutoCAD [12]. The second year covers simple manual casting techniques, professional PowerPoint or Prezi, 3D AutoCAD, and SolidCast; mold and core making are taught within the manual casting subject. In the third year the school teaches ferrous casting techniques together with industry. The learning method used is research-based learning. The first step is for the school to establish cooperation with a partner industry. Once the school is working with a foundry, the students obtain a drawing of the workpiece to be developed through discussion between the teacher and the industry partner; the teacher then gives an overview of the workpiece to be made. Students design 3D working drawings in AutoCAD [20] and transfer them to SolidCast [2] to plan the pattern to be created and the gating system to be used. The students then make a good aluminum prototype and prepare a PowerPoint or Prezi presentation for the industry partner. Once the industry approves the product to be developed, the students are invited to the ferrous casting plant. On the shop floor, the students make the molds and gating systems planned earlier and then observe the iron smelting process directly, from furnace maintenance (knocking out the furnace and ladle linings, relining the furnace or ladles, and turning the furnace on and off according to SOP) through charging, melting, slag removal, refining, and tapping. After smelting is complete, the students move the molten ferrous metal from the ladle to the mold. After cooling, they disassemble the mold in the manner commonly used by the workers. Once the workpiece is finished, the casting results are checked visually and spectrographically. After the analysis is complete, the students prepare a presentation of the results for the industry partner.

Conclusion
The competencies that should be taught in vocational high school are: material on the foundations of metal casting; pattern making techniques; basic PowerPoint or Prezi; engineering drawing and AutoCAD; manual casting techniques; professional PowerPoint or Prezi for presentations to industry; SolidWorks and SolidCast; mold and core making techniques; and ferrous casting techniques taught together with industry.
2019-11-14T17:13:29.486Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "d8dd992f96c8b52151d8b2032859a0d16e83a246", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1273/1/012051", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "3c3090022d160cd643d6c2da485cd074cdd6ac50", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business", "Physics" ] }
252878652
pes2o/s2orc
v3-fos-license
Evaluation of penehyclidine for prevention of postoperative nausea and vomiting in patients undergoing total thyroidectomy under total intravenous anaesthesia with propofol-remifentanil

Background: Postoperative nausea and vomiting (PONV) is one of the most common complications after total thyroidectomy under general anesthesia. Total intravenous anesthesia (TIVA) has been documented to prevent PONV in patients undergoing total thyroidectomy. Penehyclidine, an anticholinergic agent with an elimination half-life of over 10 h, is widely used as premedication to reduce glandular secretion. This study aimed to compare the preventative effect on PONV of penehyclidine combined with propofol-remifentanil TIVA against TIVA alone in patients undergoing total thyroidectomy. Methods: A total of 100 patients scheduled for total thyroidectomy were randomly assigned to either the penehyclidine group (n = 50) or the TIVA group (n = 50). Propofol and remifentanil were used for TIVA in all patients, and no patient received premedication. Soon after anesthesia induction, patients were administered either 5 ml of normal saline or 0.5 mg of penehyclidine. The incidence of nausea and vomiting, the severity of nausea, the requirement for rescue antiemetics, and adverse effects were investigated during the first 24 h, in two time periods (0–2 h and 2–24 h). Results: The overall PONV incidence during the 24 h after surgery was significantly lower in the penehyclidine group than in the TIVA group (12% vs 36%, P < 0.005). In addition, the incidences of nausea and of vomiting were significantly lower in the penehyclidine group at 2–24 h after surgery, whereas there was no significant difference between the two groups at 0–2 h. Conclusions: Administration of penehyclidine under TIVA with propofol-remifentanil is more effective for prevention of PONV than TIVA alone, especially 2–24 h after total thyroidectomy. Trial registration: https://www.chictr.org.cn/edit.aspx?pid=132463&htm=4 (Ref: ChiCTR2100050278; date of first registration: 25/08/2021).

Patients are prone to various complications after thyroid surgery, among which postoperative nausea and vomiting (PONV) is the most common. The occurrence of PONV after thyroid surgery is associated with many risk factors, the most common being female sex, non-smoking status, a history of PONV or motion sickness, and the use of opioids [1]. PONV increases the risk of aspiration of gastric contents, suture dehiscence, postoperative bleeding, and airway obstruction by hematoma, which may affect surgical treatment and postoperative recovery time [2]. The incidence of PONV after thyroid surgery is reported to be 60-80% when no prophylactic antiemetic is administered [3,4]. TIVA has been documented to prevent PONV after various surgeries [3] and has been recommended by recent guidelines as an intervention for PONV prevention equivalent to a single antiemetic [4]. However, in a previous study, TIVA combined with single-drug pharmacological prophylaxis such as a 5-HT3 antagonist did not decrease PONV sufficiently [5]. Many drugs have been tried for the prevention of PONV, and anticholinergics have been shown to be effective in this regard [6-8]; the recommended anticholinergic agent is the transdermal scopolamine patch [9,10].
Other anticholinergic drugs for preventing PONV, such as glycopyrrolate and atropine, have been shown to be ineffective [11]. Recently, penehyclidine, a new anticholinergic agent with a long elimination half-life, has been shown to mitigate PONV in patients after strabismus surgery [12]. However, no data are available on penehyclidine as an antiemetic against PONV in patients undergoing thyroid surgery under TIVA. This study compared the preventative effects on PONV of penehyclidine combined with propofol-remifentanil TIVA against TIVA alone in patients undergoing total thyroidectomy.

Methods
The study was approved by the Review Board of the First Affiliated Hospital with Nanjing Medical University (number 2019-SR-238), and the trial was registered at https://www.chictr.org.cn/edit.aspx?pid=132463&htm=4 (Ref: ChiCTR2100050278; date of first registration: 25/08/2021). Written informed consent was obtained from all subjects or their legal guardians. A total of 181 subjects of American Society of Anesthesiologists (ASA) physical status I-II, aged 24-64 years and scheduled for total thyroidectomy with central compartment node dissection, were screened. Exclusion criteria were a body mass index of more than 30 kg/m², smoking history, history of PONV or motion sickness, severe cardiopulmonary disease, history of hepatic or renal disease, medication with steroids, or cognitive impairment. Subjects requiring radical neck dissection were excluded because their operation time would be longer than that of simple total thyroidectomy. All subjects were in a euthyroid state at the time of surgery, and the same surgeon performed the thyroid surgery using similar techniques. The patients were randomly allocated to the TIVA group or the penehyclidine group by computer-generated randomization in a 1:1 ratio. No patient received premedication before surgery. Each patient was monitored with electrocardiography, non-invasive blood pressure monitoring, and pulse oximetry. General anesthesia was induced with propofol (Corden Pharma S.P.A, Caponago, Italy) 1.5-2.5 mg/kg and fentanyl (Humanwell Healthcare CO., LTD., China) 2 μg/kg, and orotracheal intubation was performed after administration of cisatracurium (Jiangsu Hengrui Medicine CO., LTD., China) 0.15 mg/kg. Anaesthesia was maintained with a propofol infusion at 60-200 μg·kg⁻¹·min⁻¹ and a remifentanil (Humanwell Healthcare CO., LTD., China) infusion at 0.1-0.15 μg·kg⁻¹·min⁻¹, without inhalational anaesthetics. Lactated Ringer's solution was infused at 10-15 ml/kg/h throughout the surgery. Mechanical ventilation used a tidal volume of 6-8 ml/kg and a frequency of 10-12 breaths per minute to keep end-tidal CO₂ at 35-45 mmHg. Fresh gas was set to 1 L oxygen to 1 L air, giving an oxygen concentration of about 60%. In the PACU, residual muscle relaxation was not antagonized with neostigmine and atropine. The anesthesia nurse who prepared the drug/placebo mixtures according to the group assignment was not otherwise involved in this study. After anesthesia induction, 0.5 mg of penehyclidine (Avanc Pharmaceutical CO., LTD., China) in 5 ml, or an equal volume of 0.9% normal saline (Shanghai Baxter Medical Supplies CO., LTD., China), was administered immediately in the penehyclidine and TIVA groups, respectively.
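As a quick aid to reading the weight-based maintenance rates above, the following sketch converts them into pump settings in mL/h. The drug concentrations are assumptions made purely for illustration (1% propofol at 10 mg/mL; remifentanil diluted to 50 μg/mL); the paper does not state the dilutions used, and this is not dosing guidance.

```python
# Convert weight-based infusion rates into pump settings (mL/h).
# Concentrations are assumptions for illustration only:
# 1% propofol = 10 mg/mL; remifentanil diluted to 50 ug/mL.

def infusion_rate_ml_per_h(dose_per_kg_min, weight_kg, conc_per_ml):
    """dose in mass units per kg per minute; conc in the same mass units per mL."""
    return dose_per_kg_min * weight_kg * 60 / conc_per_ml

weight = 70  # kg, hypothetical patient

# propofol 60-200 ug/kg/min, expressed as mg for a 10 mg/mL solution
low = infusion_rate_ml_per_h(60e-3, weight, 10)    # ~25 mL/h
high = infusion_rate_ml_per_h(200e-3, weight, 10)  # ~84 mL/h
print(f"propofol: {low:.0f}-{high:.0f} mL/h")

# remifentanil 0.1-0.15 ug/kg/min at an assumed 50 ug/mL
r_low = infusion_rate_ml_per_h(0.1, weight, 50)    # ~8.4 mL/h
r_high = infusion_rate_ml_per_h(0.15, weight, 50)  # ~12.6 mL/h
print(f"remifentanil: {r_low:.1f}-{r_high:.1f} mL/h")
```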
A resident blinded to the treatment evaluated nausea and its severity, vomiting, postoperative pain, the requirement for rescue antiemetics, the use of additional analgesics, and side effects at 2 and 24 h after surgery; patients were instructed before the operation. The intensity of nausea was rated on a 10-point numerical rating scale (NRS: 0 = no nausea at all to 10 = the most severe nausea), and its severity was categorized by NRS score (mild 1-3, moderate 4-6, severe 7-10). The severity of pain was measured on a 10-point visual analog scale (VAS) (0 = no pain; 10 = most severe pain) [13]. Patients who complained of severe nausea and/or vomiting were rescued with 3 mg of granisetron (Shandong Shenglu Pharmaceutical CO., LTD., China), and severe pain (VAS score of more than 5) was treated with 40 mg of parecoxib (Pharmacia & Upjohn Company LLC, U.S.A).

The sample size was calculated based on the 40% incidence of PONV with TIVA reported in the literature [5,14], assuming that a 30% reduction in the incidence of PONV in the penehyclidine group would be clinically significant, with α = 0.05 and a power (1−β) of 0.8. A total of 36 patients per group were required. All values are expressed as mean ± standard deviation or number (percentage). Continuous variables were compared using Student's t-test or the Mann-Whitney U test according to normality. Categorical variables were compared using the Chi-square test or Fisher's exact test, as appropriate. Ranked data were compared using the Mann-Whitney U test. A P-value < 0.05 was considered statistically significant. SPSS software for Windows version 25.0 (IBM Corp., Armonk, NY, USA) was used.

Results
Of the 181 patients enrolled, 66 dropped out because they did not meet the inclusion criteria and 15 declined to participate; 100 patients (50 per group) completed the protocol between December 2019 and January 2021 and were analyzed (Fig. 1). Patient characteristics (including age, gender, and body weight), operation data, and fentanyl consumption were statistically similar between the two groups (Table 1). The overall PONV incidence during the 24 h after surgery was significantly lower in the penehyclidine group than in the TIVA group (12% vs 36%, P = 0.005; Fig. 3). The incidence of nausea (10% vs. 32%, P = 0.007) and the incidence of vomiting (4% vs. 24%, P = 0.009; Fig. 2) were also significantly lower in the penehyclidine group at 2-24 h after surgery, whereas there was no significant difference between the groups at 0-2 h. The proportion of patients who required rescue antiemetic treatment (6% vs. 24%, P = 0.025) and the severity of nausea (P = 0.001) over the 24 h after surgery were likewise significantly lower in the penehyclidine group (Fig. 3). There were no significant differences in total fentanyl consumption, VAS pain scores, or rescue analgesic requirements during the study period, nor in the incidences of dry mouth, headache, and dizziness between the two groups (Table 2).
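The two headline computations in this passage, the a priori sample size and the primary chi-square comparison, can be reproduced approximately as below. This is a hedged sketch using the classic pooled two-proportion formulas; the published figure of 36 per group presumably reflects the particular formula or software the authors used (e.g., a continuity correction), so small discrepancies are expected.

```python
# Reproduce the reported sample size and primary result approximately.
from scipy import stats

# --- sample size: detect a drop in PONV from 40% to 10% (alpha 0.05, power 0.8)
p1, p2, alpha, power = 0.40, 0.10, 0.05, 0.80
z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
p_bar = (p1 + p2) / 2
n = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
      + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
     / (p1 - p2) ** 2)
print(f"uncorrected n per group: {n:.1f}")  # ~31.5, rounded up in practice

# --- primary result: overall PONV 12% vs 36% with n = 50 per group
table = [[6, 44],    # penehyclidine: PONV yes / no
         [18, 32]]   # TIVA:          PONV yes / no
chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, P = {p:.4f}")  # ~7.89, P ~ 0.005, matching the text
```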
Discussion
PONV is one of the most common complications and the most unpleasant aspect of thyroid surgery under general anesthesia. This complication can delay patient discharge from the hospital and increase the cost of care [15,16]. Thyroid surgery is specifically associated with a high incidence of PONV. The main cause is not thoroughly clear, but it is thought to result from hyperextension of the neck and strong vagal stimulation [17]. A hyperextended neck posture may disturb cerebral blood flow, which can cause central nausea and vomiting [18], and strong vagal stimulation from the surgical handling of neck structures may exacerbate PONV [19,20].

Muscarinic receptors are involved in PONV through various mechanisms [21,22]. Golding et al. [23] reported that blockade of the M3 and M5 acetylcholine receptors reduces motion sickness, a risk factor for PONV. The vestibular system is densely packed with M1 receptors, and anticholinergics block cholinergic transmission from the vestibular nuclei to central nervous system centers and from the medullary reticular formation to the vomiting center. Additionally, in thyroid surgery, surgical handling of neck structures strongly stimulates the vagus nerve in the neck [24]. Anticholinergics have been shown to be effective in preventing PONV, and the recommended anticholinergic drug is scopolamine [9,11]; owing to its short half-life, scopolamine is applied as a transdermal patch before surgery. Penehyclidine (2-hydroxyl-2-cyclopentyl-2-phenylethoxy) is a new long-acting anticholinergic drug with anti-muscarinic and anti-nicotinic activities. It is widely used as a pharmacologic agent for organophosphorus poisoning and as preoperative medication, but its effect on PONV has been unclear. Penehyclidine has greater selectivity for the muscarinic 1 (M1) and muscarinic 3 (M3) subtypes of acetylcholine receptors but no effect on the muscarinic 2 (M2) subtype [25]. Given this mechanism of action, an effect on PONV is to be expected. Previous reports showed that penehyclidine mitigated the incidence of PONV in patients after strabismus surgery [12] and gynecological laparoscopic surgery [26]; in our study, penehyclidine likewise reduced PONV in patients undergoing thyroid surgery. In these surgeries, traction on tissues is routine, which may be related to the higher incidence of PONV.

Previous studies have demonstrated that propofol prevents PONV during the early (0-2 h) postoperative period rather than the late period [5,27], which is consistent with our results: patients receiving TIVA alone had a higher incidence of PONV in the late postoperative phase, starting 2 h after surgery. TIVA has been documented to prevent PONV after thyroid surgery, and Apfel et al. [28] suggested that the risk factors for early PONV (<2 h) and late PONV (2-24 h) differ, with the choice between inhalational anesthesia and TIVA not being a risk factor for late PONV. A longer-acting antiemetic drug may therefore be necessary to prevent late PONV after TIVA [27,29]. Penehyclidine has a longer elimination half-life (10.4 ± 1.22 h) than ondansetron (3.5 h), granisetron (4.9 h), or ramosetron (9 h) [30,31]. Our study suggests that penehyclidine reduced late PONV (2-24 h) more effectively than early PONV (0-2 h) in patients under TIVA.
According to a previous study, TIVA with single-drug pharmacological prophylaxis did not decrease PONV sufficiently [5]. In our study, however, TIVA combined with penehyclidine decreased PONV sufficiently and mitigated the severity of nausea after thyroid surgery. Administration of penehyclidine after anesthesia induction could therefore be widely used as a pharmacologic agent against PONV in patients undergoing thyroid surgery. The main side effects of penehyclidine are dry mouth, headache, and central anticholinergic syndrome. In the present investigation, none of the patients presented with central anticholinergic syndrome, and there was no difference between the two groups in the incidence of dry mouth or headache; this may be explained by the limited dose of 0.5 mg of penehyclidine. Potential risk factors contributing to PONV, such as etomidate and neostigmine, were not administered during the thyroid surgery [32]. Most of the patients were female, consistent with previous reports (female-to-male ratio 2-4:1) [33]. In addition, we strictly performed randomization and double blinding throughout the study.

A limitation of the current study should be noted. We anticipated a reduction of about 30% between the two groups, whereas the actual reduction in overall PONV incidence was 24% (36% in the TIVA group vs 12% in the penehyclidine group, P = 0.005) during the 24 h after surgery. Since a relative reduction of 30-40% is considered clinically relevant in general PONV studies, the relative risk reduction of 67% achieved in our study can be considered clinically significant [24,34]. However, all operations were performed under TIVA with propofol-remifentanil infusion; how large the effect would be with inhalational agents is unknown. Further studies are needed to investigate penehyclidine in more patients, in more diverse surgical settings, and with different anesthetic techniques.

Conclusions
In conclusion, administration of penehyclidine under total intravenous anaesthesia with propofol-remifentanil significantly reduces the incidence of PONV, especially 2-24 h after thyroidectomy. Penehyclidine, a widely used preoperative anticholinergic agent, can be considered an effective antiemetic in patients undergoing thyroid surgery.
2022-10-14T14:55:05.467Z
2022-10-14T00:00:00.000
{ "year": 2022, "sha1": "6b26b607318062abdc278df272e0415bf63cec69", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "6b26b607318062abdc278df272e0415bf63cec69", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270834718
pes2o/s2orc
v3-fos-license
Higher homocysteine and fibrinogen are associated with early-onset post-stroke depression in patients with acute ischemic stroke

Background: Post-stroke depression (PSD) is a well-established psychiatric complication following stroke. Nevertheless, the relationship between early-onset PSD and homocysteine (Hcy) or fibrinogen remains uncertain. Methods: Acute ischemic stroke (AIS) patients who met the established criteria were enrolled in this study. Early-onset PSD was diagnosed two weeks after the stroke. The severity of depressive symptoms was assessed with the 17-item Hamilton Depression Scale (HAMD-17), with patients scoring ≥7 assigned to the early-onset PSD group. Spearman rank correlation analysis was employed to evaluate the associations between Hcy, fibrinogen, and HAMD scores across all patients. Logistic regression analysis was conducted to investigate the relationship between Hcy, fibrinogen, and early-onset PSD. Receiver operating characteristic (ROC) curve analysis was also performed to assess the ability of Hcy and fibrinogen to predict early-onset PSD. Results: Among the 380 recruited patients, 106 (27.89%) were diagnosed with early-onset PSD. Univariate analysis suggested that patients in the PSD group had a higher admission National Institutes of Health Stroke Scale (NIHSS) score, modified Rankin Scale (mRS) score, and Hcy and fibrinogen levels than patients in the non-PSD group (P<0.05). The logistic regression model indicated that Hcy (odds ratio [OR], 1.344; 95% confidence interval [CI] 1.209-1.494, P<0.001) and fibrinogen (OR, 1.576; 95% CI 1.302-1.985, P<0.001) were independently related to early-onset PSD. The area under the curve (AUC) of Hcy, fibrinogen, and Hcy combined with fibrinogen for predicting early-onset PSD was 0.754, 0.698, and 0.803, respectively. Conclusion: This study indicates that Hcy and fibrinogen may be independent risk factors for early-onset PSD and can be used as predictive indicators for early-onset PSD.

Introduction
Post-stroke depression (PSD) is a severe and frequent psychiatric complication of stroke, with an estimated prevalence ranging from 18% to 33% (1, 2). The clinical manifestations of PSD are characterized by low mood, loss of interest, and even suicidal tendencies (3). PSD is associated with increased death rates and places additional burdens on both the affected individuals' families and society as a whole (4). PSD can occur at different stages after stroke; early-onset PSD refers to depression appearing within two weeks after acute stroke onset (5, 6). Compared with late-onset PSD, early-onset PSD is characterized by a greater prevalence of depressive symptoms and is substantially associated with a greater chance of poor outcomes (7). It is therefore of great value to identify predictive factors for early-onset PSD.
Inflammation plays a crucial role in the pathophysiology of PSD (8). Several studies have reported a strong correlation between Hcy levels and inflammation (9, 10), and fibrinogen is a common biomarker of inflammation (11, 12). Studies have shown that serum Hcy levels are elevated in patients with depression and correlate with the severity of depressive symptoms (13), and reducing Hcy might help alleviate anxiety and depression (14). In addition, previous studies have shown that elevated Hcy at admission is associated with PSD at 3 months after stroke (15), and high-sensitivity C-reactive protein combined with Hcy can more accurately predict PSD at three months and one year after stroke (16, 17). Genetic factors may affect serum Hcy levels (18). C677T is a nonsynonymous variant that reduces the activity of methylenetetrahydrofolate reductase (MTHFR) and folate levels and elevates serum Hcy levels (19, 20). MTHFR C677T mutations and folate deficiency can increase the risk of coronary heart disease and ischemic stroke in later life (21). A previous study found that the MTHFR C677T AG genotype and A allele increased the risk of PSD (22), and a recent study showed that MTHFR C677T may exert an effect on PSD by mediating Hcy levels (18). So far, however, the relationship between Hcy and early-onset PSD is still unclear. Based on the above research, we speculated that Hcy may be closely related to the occurrence and development of early-onset PSD.

Previous studies have found that patients with acute coronary syndrome show high levels of fibrinogen, which are significantly higher among patients with anxiety and depressive disorders (23). In healthy individuals, psychological distress is also correlated with hypercoagulability (24). Both coagulation and fibrinolysis increase from baseline in response to acute stress, but coagulation increases more than fibrinolysis (25). Meanwhile, a high level of fibrinogen has been associated with depression 1 or 3 months after stroke onset (26, 27). The underlying mechanism by which fibrinogen causes PSD remains unclear; some researchers have proposed that fibrinogen might mediate between stroke severity and the inflammatory response, the latter leading to depressive symptoms (26, 28). To date, there have been few studies on the relationship between fibrinogen and early-onset PSD.

At present, numerous risk factors, including gender, age, inflammatory cytokines, lesion localization, stroke severity, a history of depression, and symptomatic plaque enhancement, have been identified as being associated with PSD (2, 29). However, the relationship between Hcy and early-onset PSD remains unknown, and few studies have addressed fibrinogen and early-onset PSD. In this study, we aimed to investigate the association of Hcy and fibrinogen with early-onset PSD and to explore whether Hcy and fibrinogen can serve as predictive indicators for early-onset PSD.
Study design and participants
Acute ischemic stroke (AIS) patients were prospectively recruited from the Affiliated Changsha Central Hospital, Hengyang Medical School, University of South China, between January and August 2023. The study was approved by the Ethics Committee of the Affiliated Changsha Central Hospital, Hengyang Medical School, University of South China. Eligible participants were enrolled in the final analysis if they met the following criteria. Inclusion criteria: (1) patients who fulfilled the diagnostic criteria for ischemic stroke as outlined in the Chinese guidelines for diagnosis and treatment of acute ischemic stroke 2018 (30), with AIS confirmed by computed tomography (CT) or magnetic resonance imaging (MRI) within 24 hours after admission; (2) patients aged between 18 and 85 years; and (3) patients admitted to the hospital within 72 hours after the onset of stroke. Exclusion criteria: (1) patients with severe aphasia or dysarthria or a disorder of consciousness that prevented them from completing evaluations and questionnaires; (2) patients with a pre-existing diagnosis of dementia or significant cognitive impairment prior to the stroke; (3) patients with severe cardiac, hepatic, or renal insufficiency; (4) patients who self-reported any psychiatric illness, including depression, or who were using psychotropic drugs prior to stroke onset; (5) patients with medical histories of other central nervous system diseases such as Parkinson's disease and epilepsy; (6) patients with malignant tumors that might cause metabolic abnormalities; and (7) patients with nutritional disorders. In total, 380 AIS patients were recruited between January and August 2023 (Figure 1).

Clinical characterization
All participants underwent standard evaluation of age, sex, body mass index, and vascular risk factors (hypertension, diabetes mellitus, atrial fibrillation, coronary artery disease, and current drinking and smoking), along with laboratory data, on the day of admission. Smoking was defined as having smoked continuously for 5 years, at least 10 cigarettes per day; alcohol consumption was defined as having drunk continuously for 5 years, at least 20 g of ethanol per day. The National Institutes of Health Stroke Scale (NIHSS) was used by trained neurologists to evaluate stroke severity at admission, with NIHSS scores assessed within 24 h of admission. Barthel Index (BI) scores were assessed at discharge. In addition, functional outcome was assessed with the modified Rankin Scale (mRS) at follow-up after 1 month. The lesion site and stroke subtype were determined by computed tomography, magnetic resonance imaging, electrocardiography, echocardiography, carotid ultrasonography, and transcranial Doppler.
Clinical assessment and subject grouping
According to the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-V) (31), trained neurologists and psychiatrists diagnosed PSD 2 weeks after stroke onset. The severity of depressive symptoms was evaluated with the 17-item Hamilton Depression Scale (HAMD-17) (32). Following the standard recommendation, HAMD-17 scores <7 denote a normal condition (33), and patients with such scores were enrolled in the non-PSD group; patients with HAMD-17 scores of 7 or greater were included in the PSD group. Scores of 7-17, 18-23, and 24 or more indicate mild, moderate, and severe depression, respectively (33-35), and patients were classified into mild, moderate, or severe PSD groups accordingly.

Blood biomarker examination
Blood samples were collected from all patients (each admitted within 72 hours after stroke onset) at 6-7 a.m. on the day after admission, after fasting for at least 8 h. Two milliliters of EDTA-anticoagulated whole blood were used for routine blood tests (automated hematology analyzer, BZ6800, China), including white blood cell (WBC), neutrophil, and lymphocyte counts. Five milliliters of blood containing coagulant were used for common biochemical examinations (automatic analyzer, HITACHI 7600, Japan), including creatinine (Cr), uric acid (UA), triglycerides (TG), total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), homocysteine (Hcy), and fibrinogen. All indicators were tested using commercial kits operated by qualified professionals in accordance with the specifications.

Figure 1. The study flow diagram. AIS, acute ischemic stroke; PSD, post-stroke depression.

Statistical analysis
Data analysis was performed using SPSS 25.0 (IBM SPSS Statistics software, Version 25.0). Categorical variables are expressed as n (%); normally distributed continuous variables are expressed as means (standard deviation, SD), and otherwise as medians (interquartile range). Differences in baseline characteristics between groups were analyzed using the one-way ANOVA test or Mann-Whitney U test for continuous variables, and the chi-squared test or Fisher's exact test for categorical variables, as appropriate. A scatter plot was used to show the distribution of serum Hcy and fibrinogen levels across PSD of different severities. Spearman rank correlation analyses were carried out to investigate the correlations between Hcy, fibrinogen, and HAMD scores in all patients, and binary logistic regression analysis was used to assess risk factors for PSD. The MedCalc 15.6.0 software package (MedCalc Software, Acacialaan 22, B-8400 Ostend, Belgium) was used to obtain receiver operating characteristic (ROC) curves to test the overall discriminative ability of Hcy and fibrinogen for PSD. A two-tailed value of P<0.05 was considered significant.
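As a sanity check on the grouping just described, the thresholds map cleanly onto a small function. This is a minimal sketch of the study's stated cutoffs only, not a clinical instrument; the 0-52 bound is the conventional maximum total for the 17-item scale.

```python
# Minimal encoding of the HAMD-17 grouping rules described above.
def classify_hamd17(score: int) -> str:
    """Map a HAMD-17 total score to the study's PSD categories."""
    if not 0 <= score <= 52:  # conventional maximum total for 17 items
        raise ValueError("HAMD-17 total must be within 0-52")
    if score < 7:
        return "non-PSD"
    if score <= 17:
        return "mild PSD"
    if score <= 23:
        return "moderate PSD"
    return "severe PSD"

assert classify_hamd17(5) == "non-PSD"
assert classify_hamd17(7) == "mild PSD"
assert classify_hamd17(24) == "severe PSD"
```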
Logistic regression analysis for risk factors with early-onset PSD
Logistic regression models were used to examine the risk factors associated with early-onset PSD; Table 2 shows the results of the crude models. The crude models showed that gender, current smoking, NIHSS score, mRS score, BI score, Hcy, and fibrinogen were associated with early-onset PSD (P<0.05). In addition, because Hcy levels increase with age, we classified age as a binary categorical variable. After adjustment for all potential confounders, fibrinogen (OR, 1.576; 95% CI 1.302-1.985, P<0.001) and Hcy (OR, 1.344; 95% CI 1.209-1.494, P<0.001) were identified as independent factors for early-onset PSD (Figure 4).

ROC curves for Hcy and fibrinogen were used to test the overall discriminative ability for early-onset PSD
We employed ROC curves to test the overall discriminatory ability of Hcy and fibrinogen for early-onset PSD (Figure 5). The area under the curve (AUC) of the Hcy level for discriminating early-onset PSD was 0.754 (95% CI, 0.708-0.797, P<0.001); the optimal cutoff was 10.55, with sensitivity and specificity of 67.7% and 73.5%, respectively. The AUC of fibrinogen was 0.698 (95% CI, 0.649-0.744, P<0.001); the cutoff was 2.95, with sensitivity and specificity of 47.2% and 86.7%, respectively. In addition, we conducted an ROC analysis for the combination of Hcy and fibrinogen levels in discriminating between non-PSD and early-onset PSD: the AUC was 0.803 (95% CI: 0.759-0.842, P<0.001), the optimal cutoff was 0.69, and the sensitivity and specificity were 77.3% and 71.4%, respectively.
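Cutoffs with paired sensitivity/specificity values like those above are typically read off the ROC curve via Youden's J statistic, and the combined-marker cutoff of 0.69 is most naturally interpreted as a predicted probability from a logistic model. The sketch below illustrates that procedure with synthetic stand-in data, since the patient-level data are not available here; MedCalc's exact cutoff criterion may differ slightly from Youden's J.

```python
# Illustration of deriving an "optimal cutoff" from an ROC curve (Youden's J)
# and combining two markers via logistic regression. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 380
psd = rng.random(n) < 0.28                 # ~27.9% early-onset PSD
hcy = rng.normal(9, 2, n) + 3.0 * psd      # hypothetical umol/L values
fib = rng.normal(2.6, 0.5, n) + 0.5 * psd  # hypothetical g/L values

def best_cutoff(score, y):
    fpr, tpr, thr = roc_curve(y, score)
    j = tpr - fpr                          # Youden's J statistic
    k = j.argmax()
    return thr[k], tpr[k], 1 - fpr[k]      # cutoff, sensitivity, specificity

cut, sens, spec = best_cutoff(hcy, psd)
print(f"Hcy: AUC={roc_auc_score(psd, hcy):.3f}, "
      f"cutoff={cut:.2f}, sens={sens:.1%}, spec={spec:.1%}")

# combined marker: predicted probability from a logistic model
X = np.column_stack([hcy, fib])
prob = LogisticRegression().fit(X, psd).predict_proba(X)[:, 1]
print(f"Hcy+fibrinogen: AUC={roc_auc_score(psd, prob):.3f}")
```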
Discussion
Our findings provide several novel observations. First, patients in the early-onset PSD group had higher Hcy, fibrinogen, NIHSS scores, and mRS scores, and a lower BI score, than patients in the non-PSD group. Second, the logistic regression model indicated that Hcy and fibrinogen were independent factors for early-onset PSD. Finally, in the ROC analysis, Hcy combined with fibrinogen exhibited respectable power to discriminate early-onset PSD. Together, these data support the hypothesis that Hcy and fibrinogen are associated with early-onset PSD.

In our study, 27.89% of the patients were diagnosed with early-onset PSD at 2 weeks after stroke onset. Similarly, a meta-analysis reported that approximately 31% of stroke survivors develop depression at some time-point up to 5 years after stroke (36). We found that the early-onset PSD group had higher NIHSS and mRS scores and a lower BI score than the non-PSD group, indicating greater severity of disability and stroke in the early-onset PSD group; adverse physical conditions may be stressors for developing psychological problems such as depression (37). Previous studies have shown that Hcy levels increase with age (38). We classified age as a binary categorical variable and, after adjustment for all potential confounders, found that serum Hcy levels were independently related to early-onset PSD. This indicates that high serum Hcy in the acute phase of ischemic stroke may be a risk factor for early-onset PSD. Elevated serum Hcy levels at admission have been associated with depression at 2 weeks and 6 months after stroke onset (18, 39), consistent with our findings, and Hcy has been associated with depressive symptoms 1 year after stroke in older Swedish adults (40). Previous studies have shown that stroke patients with higher Hcy and lower folate levels may be more susceptible to PSD (41), and a meta-analysis reports that serum Hcy can be used as a biomarker to predict the risk of early-onset PSD (42). In our study, Hcy discriminated early-onset PSD with an AUC of 0.754, consistent with previous studies (43).

Previous studies have proposed several factors that influence Hcy levels, such as age, sex, BMI, smoking, coffee drinking, poor nutrition, vitamin intake, folate, and ultraviolet radiation (44, 45). In our study, there was no significant correlation between age and Hcy. In addition, genetic variants in the folate and methionine cycles affect males and menopausal women differently (46); however, there was no significant relationship between Hcy concentration and sex in our study. We believe that differences in the ethnicity of the research populations, small sample sizes, medication status, and the severity of the condition may account for the discrepancies between studies.
The role of Hcy in early-onset PSD may involve multiple mechanisms. First, an increase in Hcy levels can reduce S-adenosylmethionine synthesis in the methionine cycle, which leads to depression (47). Second, hyperhomocysteinemia is toxic to nerve cells and endothelial cells: high serum Hcy levels cause mitochondrial dysfunction, leading to neuronal damage in the ischemic cerebral cortex and hippocampus of rats (48), and Hcy can aggravate depression-like disorders in post-stroke rats (49). Hcy also exerts a neurotoxic effect on hippocampal neuronal cells by regulating ionotropic glutamate receptors and inducing apoptosis in hippocampal neurons (50), which promotes the occurrence and development of depression. Third, elevated Hcy levels lead to vascular endothelial dysfunction (51) and an inflammatory response (52). Depression has been associated with the emergence and progression of proinflammatory cytokines, vascular endothelial cell injury and death, and disruption of the blood-brain barrier (53, 54). Furthermore, a strong association between Hcy levels and inflammation has been reported in various studies (9, 10), and inflammatory processes have been implicated in the pathophysiology of depression (55).

As a plasmatic coagulation factor and inflammatory marker, fibrinogen is commonly increased in ischemic stroke patients and is associated with psychological distress (26). In our research, fibrinogen levels were positively correlated with early-onset PSD severity. Furthermore, after adjustment for confounding factors, serum fibrinogen levels were independently associated with early-onset PSD, consistent with previous findings (26, 28, 43). In this study, the AUC of fibrinogen was 0.698, so it may serve as a predictive indicator for early-onset PSD. Although the mechanism by which elevated fibrinogen causes post-stroke depressive symptomatology remains inconclusive, accumulating evidence supports a role for fibrinogen in the inflammatory response (56). Fibrinogen can increase during any inflammatory event and serves to control systemic inflammatory signals (57, 58). According to one study, fibrinogen can accumulate in inflammatory foci, and extravascular deposits worsen inflammation (56); in vitro experiments have revealed the important role of fibrinogen in driving inflammation and identified the mechanism by which fibrinogen controls leukocyte function (56). Additionally, several studies have suggested that fibrinogen might be involved in the expression of the proinflammatory cytokines IL-6, IL-1b, TNF-a, and IFN-g (59, 60), and these inflammatory factors are involved in the pathogenesis of PSD (8).

In summary, our study indicates that Hcy and fibrinogen are associated with early-onset PSD. Our ROC curve analysis showed that Hcy and fibrinogen levels had appropriate sensitivity and specificity to discriminate early-onset PSD. Clearly, Hcy is more discriminating than fibrinogen, suggesting that the Hcy level at admission may be a useful tool to predict early-onset PSD. Finally, the combination of Hcy and fibrinogen discriminated early-onset PSD better, with an AUC of 0.803, suggesting that the combination of these two markers has greater value in predicting early-onset PSD.
This study has the following limitations: (1) it was a single-center study with a relatively small sample size, so the findings still need to be confirmed by multi-center, large-sample clinical studies; (2) patients with severe aphasia, unconsciousness, or dementia during hospitalization were excluded, which may have biased the estimated prevalence of early-onset PSD; (3) the study did not include several variables that may affect depressive episodes, such as social support, educational background, and increasing life stress; (4) PSD was observed only 2 weeks after stroke onset, and conclusions from a short-term observational study may not be thorough enough; (5) patients were not tested for vitamin B and folate levels, and future research needs to consider the effect of vitamin B and folate on Hcy levels; and (6) homocysteine metabolism genes were not collected, and taking genetic factors into account will be important in future research. To fully understand how Hcy and fibrinogen levels affect the incidence of early-onset PSD, more large-scale clinical research with long-term intervention and follow-up is required.

Figure 4. Binary logistic analysis of independent variables associated with early-onset PSD.

Figure 5. Based on ROC analysis, Hcy, fibrinogen, and Hcy combined with fibrinogen discriminated early-onset PSD with AUC values of 0.754, 0.698, and 0.803, respectively.

Conclusion
In conclusion, our study indicated that elevated serum levels of Hcy and fibrinogen may be independent risk factors for early-onset PSD and can be used as predictive indicators for early-onset PSD. The combination of Hcy and fibrinogen may provide greater predictive value. Therapies that reduce serum Hcy and fibrinogen levels may be potential targets for intervention and prevention of early-onset PSD.

Table 1. Characteristics of patients in the non-PSD and PSD groups. NIHSS, National Institutes of Health Stroke Scale; mRS, modified Rankin Scale; BI, Barthel Index; HAMD-17, Hamilton Depression Scale, 17 items; WBC, white blood cell; Hcy, homocysteine; HDL-C, high-density lipoprotein cholesterol; IQR, interquartile range; LDL-C, low-density lipoprotein cholesterol; TC, total cholesterol; TG, triglyceride; Cr, creatinine; UA, uric acid. Values are shown as number (percentage), median (IQR), or mean (SD). Differences between groups were analyzed using the one-way ANOVA test or Mann-Whitney U test for continuous variables and the chi-squared test or Fisher's exact test for categorical variables. A two-tailed value of P<0.05 was considered significant.

Table 2. Logistic regression analysis for risk factors associated with early-onset PSD.
2024-06-30T15:20:48.030Z
2024-06-28T00:00:00.000
{ "year": 2024, "sha1": "f6458141e7b817810bfb05905f5a355643b58a1d", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "fcd270d6c747c47d6b91b6ac7a5f648026bb3f60", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13561087
pes2o/s2orc
v3-fos-license
The Outcome of Agitation in Poisoned Patients in an Iranian Tertiary Care University Hospital

Introduction. This study was conducted to evaluate and document the frequency and causes of agitation in intoxications, the symptoms accompanying this condition, the relationship between the agitation score on admission and different variables, and the outcome of therapy in a tertiary care referral poisoning center in Iran. Methods. In this prospective observational study, conducted in 2012, 3010 patients were screened for agitation at the time of admission using the Richmond Agitation Sedation Scale. Demographic data, including age, gender, and the drug ingested, were also recorded. The patients' outcomes were categorized as recovery without complications, recovery with complications (hyperthermia, renal failure, and other causes), or death. Results. Agitation was observed in 56 patients (males, n = 41), mostly aged 19-40 years (n = 38), and more frequently in abusers of illegal substances (stimulants, opioids, and alcohol). The agitation score was not significantly related to age, gender, or a previous history of psychiatric disorders. Forty-nine patients recovered without any complication; the need for mechanical ventilation was the most frequent complication, and none of the patients died. Conclusion. Drug abuse is an etiology that must be considered in patients presenting with acute agitation, and morbidity and mortality can be low in agitated poisoning cases if prompt supportive care is provided.

Introduction
Agitation is defined as restlessness accompanied by excessive and aimless motor or cognitive activity, usually accompanied by tension and anxiety [1,2]. The underlying causes of agitation fall into five prevalent categories: neurological diseases, drug intoxication or withdrawal, psychological disorders, metabolic diseases, and infections [3,4]. Clinically, agitation in intoxicated patients presents as frequent movements of the head and limbs and attempts at self-extubation despite staff efforts to calm the patient [5]. Agitation can lead to complications such as malignant hyperthermia, rhabdomyolysis, renal failure, and even death [6]. Several agents and conditions can cause agitation after poisoning or overdose [7,8], including the abuse of drugs such as cocaine, amphetamines, and hallucinogens; withdrawal from alcohol, sedatives, and opioids; and intoxication with anticholinergics, antihistamines, tricyclic antidepressants, neuroleptics, monoamine oxidase inhibitors, and salicylic acid. Previous studies of antihistamine and methamphetamine intoxication have shown that only some of these patients experience agitation [9,10]. Moreover, agitation has been reported as an unusual presentation in intoxication with drugs such as baclofen, olanzapine, phenytoin, risperidone, aripiprazole, and adrenaline, and in the abuse of dextromethorphan [11-17]. Studies of agitated critically ill patients hospitalized in intensive care units (ICUs) have shown that agitation is associated with prolonged ICU stays, a higher prevalence of nosocomial infections, unplanned extubation, higher morbidity rates, and higher hospital costs [3,4,18].
Jaber and colleagues previously found that abuse of sedative drugs, neglected hyperthermia, untreated hyponatremia or hypernatremia, alcohol abuse, and underlying psychiatric disorders are all risk factors for agitation; in their study, patients' age was not a risk factor for this clinical condition [4]. Severe agitation may also cause rhabdomyolysis, renal failure, and hyperthermia [1]. Considering the importance of the outcome of agitation for treatment in poisoned patients and the necessity of early diagnosis to reduce the morbidity and mortality caused by this condition, the current study was conducted to evaluate and document the frequency and causes of agitation in intoxications, the accompanying symptoms, the relationship between the agitation score on admission and different variables, and the outcome of therapy in a tertiary care referral poisoning center in Iran.

Material and Methods
In this prospective observational study, conducted in the department of toxicological emergencies of the Noor and Ali-Asghar (PBUH) University hospital during the year 2009, all patients poisoned either intentionally or accidentally who were referred to the poisoning emergency department and had clinical signs of agitation [19] at the time of admission were included. The study protocol was approved by the institutional board of human studies at Isfahan University of Medical Sciences. After the study design was accurately explained to each patient, informed consent was obtained; if a patient was unable to consent or lacked the capacity for decision making, informed consent was obtained from the patient's first-degree relatives. We used a nonprobability sampling method, and patients were enrolled consecutively. Patients who were discharged at their own request and those whose agitation was due to reasons other than acute intoxication (based on a documented neurology consultation and brain computed tomography scan) were excluded from the study.

After primary clinical evaluation and basic supportive care, the agitation score was determined by a medical toxicologist using the Richmond Agitation Sedation Scale (RASS) in patients who met the inclusion criteria. RASS was evaluated within 30-60 seconds using three phases: observation, response to verbal stimulation, and response to physical stimulation. RASS is a 10-point scale ranging from combative (+4) to unarousable (−5); scores +1 to +4 estimate agitation and anxiety, 0 denotes the calm and alert state, and −1 to −5 denote sedation [20]. Demographic data, including age, gender, and the drug ingested, were also recorded. After documentation of the data and the RASS score, all patients underwent the routine treatment protocol for agitation control according to our institutional guidelines: intravenous midazolam (0.1-0.3 mg/kg) and restraint of the agitated patient. The bolus dose was repeated or followed by a midazolam infusion (0.05-0.3 mg/kg). If a patient did not respond to this measure, an intravenous dose of sodium thiopental (1-2 mg/kg) followed by a 1 mg/kg infusion was started. The airway, respiration, and blood pressure were checked closely, and patients were transferred to the poisoning ICU for closer monitoring if needed.
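The RASS banding and the weight-based bolus range from the protocol above translate directly into a few lines of code. This is a minimal sketch for illustration only, not dosing guidance.

```python
# Sketch of the RASS banding and the weight-based midazolam bolus range
# from the institutional protocol described above.
def rass_category(score: int) -> str:
    """Band a Richmond Agitation Sedation Scale score."""
    if not -5 <= score <= 4:
        raise ValueError("RASS is defined from -5 to +4")
    if score >= 1:
        return "agitation/anxiety"
    if score == 0:
        return "calm and alert"
    return "sedation"

def midazolam_bolus_mg(weight_kg: float) -> tuple[float, float]:
    """0.1-0.3 mg/kg IV bolus range used in the study protocol."""
    return 0.1 * weight_kg, 0.3 * weight_kg

assert rass_category(4) == "agitation/anxiety"
assert rass_category(-5) == "sedation"
print(midazolam_bolus_mg(70))  # (7.0, 21.0) mg for a 70 kg patient
```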
The patients' outcomes were categorized as recovery without complications, recovery with complications (hyperthermia, renal failure, and other causes), or death. Collected data were analyzed using SPSS software version 13.0 (SPSS Inc., Chicago, IL, USA). Means were compared using the independent Student's t-test. The possible statistical relationship between the agitation score and the outcome was assessed using Spearman's correlation test. Median values of ordinal variables were compared using the Mann-Whitney test, and the frequency distribution of agitation with regard to different factors was compared using the Chi-square test. A P value less than 0.05 was considered significant.

Results
Among the 3010 poisoned patients admitted to our poisoning referral center during the study period, agitation was observed in 56 patients at the time of admission. The highest prevalence of agitation was in the 19-40 age group, comprising 67.9% of all cases. There was no statistical difference in the presence of agitation between patients with a positive past medical history of psychiatric disorders (n = 12) and patients without it (n = 40) or with an unknown history (n = 4) (P = 0.24). Agitation was more common in men (73.2%). Comparison of the median agitation score on admission indicated that the groups were not significantly different in this respect (P = 0.114). The mean agitation scores in patients with positive and negative histories of psychiatric disorders were 1.9 ± 0.90 and 2.3 ± 0.93 (P value < 0.05), respectively; the median agitation score was 2 for both groups (P = 0.245). Agitation was observed in 33.4% of the patients following illegal substance abuse (stimulants, alcohol, and opioids) (Table 1). The highest mean agitation score was 3, observed in opioid intoxications (tramadol intoxication and patients who received naloxone after opioid intoxication). The results regarding clinical symptoms and paraclinical evaluations are shown in Tables 2 and 3. The agitation score was not significantly related to age, gender, or previous history of psychiatric disorders (P > 0.05). Length of hospital stay was between 2 and 24 hours. Forty-nine patients recovered without any complication; the need for mechanical ventilation was the most frequent complication in our agitated patients (Table 4).

Discussion
This study was performed to evaluate the causes and outcome of agitation in poisoned patients and to determine the relationship between the agitation score on admission and different variables. Our results showed that the highest prevalence of intoxication with agitation was in the 19-40 age range, which is not consistent with a previous study that reported a lower age range [21]. Based on our personal experience from many discharge interviews with these patients, we think this high prevalence of intoxication with agitation in young adults may be attributable to identity issues, the gap between children's values and their parents', the high economic inflation rate, and unemployment. A study performed in an eighteen-bed MICU of a tertiary care center also found that age was not a risk factor for the occurrence of agitation [22].
It should be mentioned that most of the patients referred to our center were male and that the underlying cause of most cases of agitation was opioid intoxication treated with naloxone, which can be explained by the higher prevalence of opioid addiction in men [23,24]. Although agitation has not been reported in opioid intoxication itself, addicts may experience agitation after receiving excessive doses of naloxone. In the current study, seven patients received naloxone from the emergency ambulance services before referral, and three patients were agitated following oral doses of naltrexone. Tramadol intoxication may also cause agitation in some patients. Anticonvulsants, antipsychotics, and TCAs through their anticholinergic effects, amphetamines through their sympathomimetic effects, diphenoxylate (an opioid) through its atropine ingredient, pesticides, and antihypertensives can all cause agitation, as shown in this study and by others [25-28]. Most of the patients had normal vital signs on admission, and 62.5% had an agitation score of less than 2; few patients had tachycardia, as would be expected in agitated patients. The low median agitation score may be due to the small ingested doses of drugs. Some patients had some degree of decreased consciousness; all of them recovered without complications, which is consistent with the agitation scores of less than 2 observed in most of our patients. Complications were seen in 7 patients, of whom 3 had an agitation score of more than 2. No mortality was observed. The low morbidity and mortality rates might reflect the low toxicity level, as indicated by the low agitation scores in most patients. A limitation of our study is that we did not evaluate the severity of poisoning according to the Poisoning Severity Score (PSS). We recommend further studies with a larger sample size to determine the cutoff agitation score for predicting tachycardia and complications.

Conclusions
Drug abuse is an etiology that must be considered in patients presenting with acute agitation, and morbidity and mortality can be low in agitated poisoning cases if prompt supportive care is provided.

Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.

Authors' Contribution
Nastaran Eizadi-Mood, Ahmad Yaraghi, and Ali Mohammad Sabzghabaee contributed to designing and conducting the study. Elham Khalilidehkordi collected the data. Seyyed Mohammad Mahdy Mirhosseini and Elham Beheshtian helped with data analysis. Nastaran Eizadi-Mood and Ali Mohammad Sabzghabaee rechecked the statistical analysis and prepared the paper. All authors assisted in preparing the paper, have read and approved its content, and are accountable for all aspects of the work. In the tables, results are presented as number of patients (%).
2016-05-12T22:15:10.714Z
2014-12-04T00:00:00.000
{ "year": 2014, "sha1": "18dc45b18203b10b6f083d4ebceb3e0216182e99", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/nri/2014/275064.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cb703921c1e8ce5c3a0fb324a3fc5d0fecb9ace9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231614713
pes2o/s2orc
v3-fos-license
A Practical Guide to Sparse k-Means Clustering for Studying Molecular Development of the Human Brain

Studying the molecular development of the human brain presents unique challenges for selecting a data analysis approach. The rare and valuable nature of human postmortem brain tissue, especially for developmental studies, means the sample sizes are small (n), but the use of high throughput genomic and proteomic methods measure the expression levels for hundreds or thousands of variables [e.g., genes or proteins (p)] for each sample. This leads to a data structure that is high dimensional (p ≫ n) and introduces the curse of dimensionality, which poses a challenge for traditional statistical approaches. In contrast, high dimensional analyses, especially cluster analyses developed for sparse data, have worked well for analyzing genomic datasets where p ≫ n. Here we explore applying a lasso-based clustering method developed for high dimensional genomic data with small sample sizes. Using protein and gene data from the developing human visual cortex, we compared clustering methods. We identified an application of sparse k-means clustering [robust sparse k-means clustering (RSKC)] that partitioned samples into age-related clusters that reflect lifespan stages from birth to aging. RSKC adaptively selects a subset of the genes or proteins contributing to partitioning samples into age-related clusters that progress across the lifespan. This approach addresses a problem in current studies that could not identify multiple postnatal clusters. Moreover, clusters encompassed a range of ages like a series of overlapping waves illustrating that chronological- and brain-age have a complex relationship. In addition, a recently developed workflow to create plasticity phenotypes (Balsor et al., 2020) was applied to the clusters and revealed neurobiologically relevant features that identified how the human visual cortex changes across the lifespan. These methods can help address the growing demand for multimodal integration, from molecular machinery to brain imaging signals, to understand the human brain's development.

INTRODUCTION
As molecular tools have become integrated with human neuroscience, there has been a renewed interest in mapping human brain development. Many studies have compared molecular changes among age groups (Law et al., 2003; Duncan et al., 2010; Pinto et al., 2010; Kang et al., 2011; Siu et al., 2015, 2017; Zhu et al., 2018) using distinct life-span stages that developmentalists have described based on physical, cognitive, and psychosocial maturation (Sigelman and Rider, 2017). However, age-binning assumes that those stages are a good fit for molecular development of the brain. In contrast, other areas of human neuroscience are applying data-driven approaches such as principal component analysis (PCA) (Bray, 2017) or unsupervised clustering (Lebenberg et al., 2018) to identify age-related changes in brain development. Applying cluster analysis to studying the molecular development of the human brain is challenging because of the limited availability of developmental postmortem tissue samples. Nevertheless, clustering algorithms have been developed for high dimensional biological datasets that have a small sample size (n) but measurements from many molecular features (p) (e.g., genes or proteins).
Here we apply one of those approaches, sparse k-means clustering (Witten and Tibshirani, 2010; Kondo et al., 2016), to illustrate a data-driven approach for studying brain development that uses the expression of many genes or proteins to partition samples into age-related clusters. Then we show that clustering can identify aspects of human visual cortex development that are not apparent in typical developmental ontologies.

Cellular and molecular findings from postmortem brain tissue are used as benchmarks for linking age-related changes in non-invasive brain imaging signals with the underlying neurobiology. For example, many imaging studies reference synaptic development measurements (Huttenlocher and Dabholkar, 1997) to account for rapid changes in cerebral cortex MRI signals during the first few years of life. More recently, gene expression databases have been used to identify candidate cellular and molecular features, such as those underlying cortical thinning throughout the life-span (Vidal-Pineiro et al., 2020) or testosterone-related structural properties of the adolescent cerebral cortex (Liao et al., 2021). However, the rare and valuable nature of human postmortem brain samples means that gene expression studies have small sample sizes, especially compared to modern MRI studies that use a population neuroscience approach and aggregate data from hundreds or thousands of subjects (Paus, 2016). The issue of sample size is especially critical for brain development, as even well-established tissue banks (e.g., NIH NeuroBioBank) have fewer than 250 samples for most age groups and fewer than 50 for key ages of child development. Finally, the labor-intensive nature of molecular techniques means that studies can only use a subset of the available samples [e.g., (Pinto et al., 2010) n = 28; (Kang et al., 2011) n = 57; (Siu et al., 2015, 2017) n = 30; (Zhu et al., 2018) n = 26]. Nevertheless, the high dimensional data collected by molecular studies provide a wealth of information about how the brain changes across the life-span.

Although MRI and postmortem studies of human brain development face different methodological challenges, they share many analytical approaches. Both rely on analyses from the high dimensional toolbox to uncover information relevant to the complexities of brain development. Differences in experimental design, however, place distinct constraints on those analyses. High throughput molecular tools have significantly increased the amount of information obtained from each postmortem sample, generating long lists of gene or protein expression values. Those values represent a vector that describes where each sample exists in a high dimensional space that captures the molecular complexity of human brain development. However, the large number of measurements but small number of samples means that the high dimensional space is sparse, with points spread virtually equidistantly across the space. The challenge is to determine how samples cluster together in that sparse space and if those data-driven clusters reflect stages of human development. Cluster analysis is not new in biology (Eisen et al., 1998; Tamayo et al., 1999; Hastie et al., 2000, 2001), but applying it to postmortem studies of human brain development presents unique problems because of the small sample sizes of those studies.
When standard clustering techniques have been used to study gene expression changes in human brain development, clusters are found for regional and prenatal versus postnatal groups, but distinct postnatal clusters matching developmental stages have not been reported (Colantuoni et al., 2011; Kang et al., 2011; Carlyle et al., 2017; Li et al., 2018; Zhu et al., 2018; Disorder et al., 2021). Accordingly, it has been challenging to link cognitive, perceptual, or social-emotional stages and prolonged development found using brain imaging with the underlying maturation of molecular mechanisms in the human brain.

Here we provide a practical guide to sparse clustering that focuses on overcoming the small sample size problem to reveal postnatal patterns of molecular development in the human brain. We introduce sparsity-based clustering, and one approach in particular, sparse k-means clustering, developed to address the problem of datasets with a large number of observations from proteins or genes (p) but a small number of samples (n), resulting in a data structure that is p ≫ n (Witten and Tibshirani, 2010). Finally, we illustrate the value of applying clustering by interrogating the neurobiological features of the clusters to reveal new aspects of the developing human visual cortex.

Challenges Clustering Small Sample Sizes

Currently, transcriptomic, proteomic, and other omics datasets of human brain development include measurements of many molecular features from a small number of samples. The combinatorial nature of those data makes it challenging to use traditional statistical comparisons to understand the many molecular changes that occur in the developing brain. Instead, high dimensional analyses that use all of the data are needed to classify the biological features that differentiate the human brain across the lifespan. However, even when clustering is used, the complexity of the findings can still be challenging to interpret, and studies may need to group the data into predefined age categories to describe the spatiotemporal dynamics of the developing brain.

In the mathematical notation used for clustering algorithms, the genes or proteins are called features or observations and are represented by p, while the number of samples is represented by n. Most human brain development datasets are either p ≈ n or p > n and are best described as high dimensional datasets with more features than samples. When clustering those data, algorithms can borrow strength from the large number of features that represent each sample in high dimensional space. However, if only a subset of the features contributes to partitioning the samples into clusters, then the analyses may run into the curse of dimensionality (Bellman, 1983). For brain development, this means that developmentally relevant features may become obscured as more and more genes or proteins that do not contribute to developmental changes are included in the dataset. A central problem in analyzing these p > n datasets is to identify the molecular features associated with age-related clusters from a very large set of candidate genes. Two approaches for focusing on relevant features include either preprocessing the data using dimension reduction methods (e.g., PCA, tSNE) or using sparsity-based clustering algorithms that retain all of the features but subset or reweight them during clustering (see Supplementary Material).
Some of the common approaches to unsupervised dimension reduction and clustering often used in neuroscience, like PCA and tSNE, can effectively separate data points into clusters in low-dimensional space, especially if there are large differences in features that fall on orthogonal sets of dimensions. For example, tSNE analysis of transcriptomic data identified separate clusters for cortical and cerebellar development (Kang et al., 2011; Carlyle et al., 2017), and PCA has shown that age can explain a large fraction of the variation in protein expression during cortical development (Pinto et al., 2015; Breen et al., 2018). Some of these approaches represent linear combinations of genes or proteins, and focus on reducing dimensionality by identifying correlated features. Problems arise when the features that differentiate clusters are not orthogonal, which may cause linear methods like PCA to break down and reduce the data onto inappropriate dimensions (Chang, 1983). Thus, traditional dimension reduction and clustering methods are prone to pruning off too much information and, thereby, may miss subtle but significant changes in the human brain's molecular development. In contrast, sparsity-based clustering methods follow a different approach that keeps all of the features and reweights them in a dissimilarity matrix.

Approaches to Sparsity-Based Clustering

Because traditional dimension reduction methods may prune off too much information or miss more subtle changes in the human brain's molecular development, we tested a set of sparsity-based clustering algorithms. Here, sparsity refers to the idea that not all 30,000 genes play a role in brain development, and irrelevant dimensions may mask clusters. Furthermore, as more and more features are included, observations become increasingly spread out until they are virtually equidistant. Sparsity-based clustering is a useful approach for analyzing those high dimensional data because the algorithms are not distance-based and can identify a smaller number of molecular features that reflect the spatiotemporal dynamics of neurodevelopment. In this section, we introduce and compare four clustering methods designed to handle data sparsity, but it is not an exhaustive review of sparsity-based clustering.

The agglomerative approach of CLIQUE (Agrawal et al., 1998) finds grids or subspaces in high dimensional data by assigning the desired number of equal length intervals (xi) to the grid and a global density value (tau) as input parameters. Notably, CLIQUE does not specify the number of clusters in the arguments, but instead compares how many points are in each rectangle of the grid with the overall density parameter and continues to partition the subspaces until the density is less than tau. A rectangle in the grid is considered to be dense if the proportion of points in it exceeds the tau parameter. CLIQUE then identifies a cluster as the maximal set of dense units in a subspace. For example, using an interval (xi) of 2, each dimension of the data is partitioned into two non-overlapping rectangles (units), and dense units are identified for further partitioning if they contain a greater proportion of the total number of points than the input value for tau. This approach does not strictly partition points into unique clusters and usually results in data points being assigned to more than one cluster. CLIQUE is also prone to classifying points as outliers and excluding them from the analysis.
The divisive clustering of PROCLUS (Aggarwal et al., 1999) is based on medoids and uses a three-step top-down approach to projected clustering. The steps involve (1) initializing the number of clusters (k) and the number of dimensions to consider in the subspace search, (2) iteratively assigning medoids to find the best clusters for the local dimensions, and (3) a final pass to refine the clusters. Typically, PROCLUS has better accuracy than CLIQUE in partitioning points into clusters, but the a priori selection of the number of clusters (k) is not easy and demands an iterative approach to finding clusters. Furthermore, by restricting the subspace search size, some essential features may be omitted from the analysis.

Both CLIQUE and PROCLUS were developed for datasets with many more samples (n), often 2-3 orders of magnitude larger than most datasets of human brain development. Although those algorithms are accurate for large datasets with thousands of samples, they are less well suited for discovering clusters in small sample sizes. So we needed to test sparsity-based clustering designed for small datasets, and this criterion led us to select two more approaches to sparse clustering, SPARCL and robust and sparse k-means clustering (RSKC) (Kondo, 2016; Witten and Tibshirani, 2018).

SPARCL was developed by Witten and Tibshirani (2010) to adaptively select and reweight the subset of features during clustering, thus eliminating the need for data reduction preprocessing. The algorithm uses a lasso-type penalty to address the challenge of clustering samples that differ on a small number of features. The reweighted variables then become the input to k-means clustering. The adaptive feature selection of SPARCL focuses on the subset of genes or proteins that underlie differences among clusters, so this process is similar to removing noise from the data. Thus, SPARCL simultaneously clusters the samples and identifies the dominant features, thereby making it easier to determine the subset of proteins or genes responsible for partitioning samples into different clusters. SPARCL has many strengths for analyzing datasets with p ≈ n or p > n; however, it can form clusters containing just one observation (Witten and Tibshirani, 2010).

A more recent extension of the algorithm, RSKC, addresses small clusters by assuming that outlier observations cause this problem. RSKC uses the same clustering framework as SPARCL, except that it is "robust" to outliers (Kondo et al., 2016). RSKC iteratively identifies clusters in the data, then identifies clusters with a small number of data points (e.g., n = 1) and flags those data points as potential outliers. The outliers are temporarily removed from the analysis, and clustering proceeds as outlined above for SPARCL. Once all clusters have been identified, the outliers are re-inserted in the high-dimensional space and grouped with the nearest neighbor cluster. Thus, RSKC identifies clusters in the data and includes all of the data points.

Datasets

Our lab has been studying the development of human visual cortex (V1) by quantifying expression of synaptic and other neural proteins using a library of postmortem tissue samples (n = 31, age range 21 days-79 years, male/female = 18/13) (Supplementary Table 1). In addition, genome-wide exon-level transcriptomic data collected by Kang et al. (2011) were used, and the postnatal V1 data were extracted (n = 48, age range 4 months-82 years, male/female = 27/21) (Supplementary Table 2).
The transcriptomic data were used to test the reproducibility and scalability of the sparsity-based clustering. The preprocessed exon array data from Kang et al. (2011) were downloaded from the Gene Expression Omnibus (GSE25219). The exon-summarized expression data for 17,656 probes were extracted, and probe identifiers were matched to genes. If a gene was matched by two or more probes and the probes were highly correlated as determined by Kang et al. (2011) (Pearson correlation, r ≥ 0.9), then the expression values were averaged for a total of 17,237 genes.

The clustering methods were tested using three groups of protein or gene data. The first group of protein data was from a series of studies using the Murphy lab postmortem samples to examine the development of molecular mechanisms that regulate experience-dependent plasticity in human V1 (Murphy et al., 2005; Pinto et al., 2010, 2015; Williams et al., 2010; Siu et al., 2015, 2017). Western blotting was run using each sample (2-5 times) to probe for 23 different proteins (Supplementary Table 3). The tissue preparation and Western blotting methods have been described in detail previously (Siu et al., 2017). The initial clustering tests used a subset of seven proteins (GluN1, GluN2A, GluN2B, GluA2, GABA A α1, GABA A α3, and Synapsin) to explore age-related clustering with AGNES, PROCLUS, CLIQUE, SPARCL, and RSKC. Next, the sparsity-based clustering using RSKC was explored using all 23 proteins to determine how adding more features changed the age-based clustering. Then the reliability of the age-related clustering was explored by running 100 iterations of RSKC with the 23 proteins. A heatmap illustrating the number of times each sample was partitioned into a cluster was made to visualize the reliability.

The scalability of RSKC was tested using a larger protein database and the much larger gene database. These tests included clustering a matrix with 95 proteins collected from the Murphy lab postmortem tissue samples. This time the samples were probed with a high density ELISA array (RayBiotech Quantibody Human Cytokine Array 4000), and an additional 72 proteins were measured for a total of 95 proteins (p = 95) (Supplementary Table 4). Finally, RSKC clustering was done using the genes in the Kang database by selecting those listed in the SynGO ontology (n = 988) (Koopmans et al., 2019) and also the full set of genes (n = 17,237).

The Basic Steps to Sparsity-Based Clustering Using R

Here we describe four sparsity-based high-dimensional clustering approaches (PROCLUS, CLIQUE, SPARCL, and RSKC) for analyzing the development of human V1 using 7 or 23 proteins. Then we explore the scalability of the RSKC method using two larger datasets with 95, 988, or 17,237 proteins or genes. All of the analyses were done in the R programming language using the integrated development environment RStudio (version 1.3.1093). The basic steps in the workflow used to examine each of the clustering methods are illustrated in Figure 1. The text refers to the R packages that were used, and R Markdowns with code and figures are included in Supplementary Material.

Figure 1 illustrates the steps that were used for testing various sparsity-based clustering methods to examine if they produce an age-related progression in the median age of clusters. The data were prepared in an n × p matrix with each sample forming a row and the features, either proteins or genes, arranged in columns. Those data were used as the input to the clustering algorithms.
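To make the preparation step concrete, a minimal R sketch is given below. The file name ("protein_expression.csv") and its column labels are hypothetical placeholders for illustration, not the files used in the study, and the probe-averaging line assumes a hypothetical probe-to-gene mapping table.

# Assemble the n x p input matrix: one row per sample, one column per feature.
# "protein_expression.csv" is a hypothetical file with sample_id and age columns
# followed by one column per measured protein.
dat  <- read.csv("protein_expression.csv")
ages <- dat$age
X    <- scale(as.matrix(dat[, setdiff(names(dat), c("sample_id", "age"))]))

# For exon-array data, probes mapping to the same gene can be averaged.
# probe_map is a hypothetical data.frame with one row per probe and a gene column;
# probe_expr is the probes x samples expression matrix.
# gene_expr <- rowsum(probe_expr, probe_map$gene) / as.vector(table(probe_map$gene))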
Here the sparsity-based algorithms tested were PROCLUS and CLIQUE from the subspace package (Hassani, 2015), SPARCL (Witten and Tibshirani, 2018), and RSKC (Kondo, 2016). For the algorithms, a range of k or xi values from 2 to 9 was tested to explore the types of clusters produced. The results of a tSNE dimension reduction were used to visualize clusters for all of the methods tested. However, clustering was not done on the tSNE data itself even though that is a commonly used approach. We used tSNE strictly as a visualization tool because it does a good job of projecting points from high dimensional space onto 2D so that neighboring points reflect their similarity. The Elbow method was used to determine the number of clusters. Finally, the quality of the age-related clustering of the samples was evaluated by making a boxplot to visualize the progression of the median ages.

FIGURE 1 | The workflow for studying age-related molecular development of the brain. First, arrange the data into an n × p matrix, where features (p) are represented as columns and samples (n) as rows. Then, select the desired sparse clustering algorithm (e.g., CLIQUE, PROCLUS, SPARCL, and RSKC) and test its performance along a range of clusters (k). Lastly, determine the optimal k value using the elbow method and compare the median age of clusters with boxplots.

This workflow was used for all of the clustering methods described in the next section, and an example R Markdown of the analysis is included in Supplementary Material.

Evaluating Sparsity-Based Clustering for Finding Age-Related Clusters

First, we evaluated the data by exploring if simply visualizing the samples using tSNE produced an age-related organization. The human V1 samples with seven proteins and all of the WB runs were used as the input to the tsne package (Donaldson and Donaldson, 2010; Figure 2A). Color-coding the samples by their age showed a global progression in the ages, with younger samples mapped to the bottom right and older to the top left in the 2D tSNE space.

Next, we applied a commonly used agglomerative hierarchical clustering algorithm, AGNES in the cluster package (Maechler, 2019), to test if this clustering approach would reveal age-related groupings of the samples. This algorithm uses the dissimilarity matrix to merge nodes in the tree, and it partitioned these data into clusters that suggest an age-related progression (Figure 2B). However, groups of 2 or 3 adjacent clusters had very similar median ages, indicating poor age-related separation of the samples. A major weakness of this hierarchical clustering approach is that incorrect branching can never be undone. Nevertheless, these findings show that even distance-based hierarchical clustering of human V1 postnatal samples can find some age-related progression of postnatal samples.

Next, we tested the two density projection sparsity-based clustering methods that use either top-down (PROCLUS) or bottom-up (CLIQUE) clustering with all of the observations (n = 31) and seven of the proteins from the human visual cortex development dataset. The outputs were visualized in 2D using tSNE, and the data points were color-coded according to the clusters identified by each method. Finally, to determine if the clusters represented developmental changes in the dataset, we plotted boxplots showing the median age of the samples in the cluster.
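A minimal sketch of these two baseline steps follows, assuming the X matrix and ages vector from the preparation sketch above; the perplexity value, color palette, and number of clusters cut from the tree are illustrative choices.

library(tsne)    # tsne() from the package cited in the text
library(cluster) # agnes()

set.seed(1)
Y <- tsne(X, k = 2, perplexity = 10)  # 2D projection, used for visualization only

# color samples from young (gold) to old (dark blue)
pal <- colorRampPalette(c("gold", "darkblue"))(100)
plot(Y, pch = 19, col = pal[cut(ages, 100, labels = FALSE)],
     xlab = "tSNE 1", ylab = "tSNE 2")

# baseline agglomerative hierarchical clustering (AGNES)
ag <- agnes(X, method = "average")
cl <- cutree(as.hclust(ag), k = 6)
boxplot(ages ~ cl, xlab = "cluster", ylab = "age (years)")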
PROCLUS

The PROCLUS clustering method was implemented in RStudio using the ProClus function in the subspace package version 1.0.4 (Hassani, 2015). We explored clusters between k = 2-9, and Figure 3 shows the results for 2, 4, 6, and 8 clusters for the human V1 data with seven proteins and all runs included. Visualizing the clusters found with PROCLUS (Figure 3A) showed a mixing of the samples, but the boxplots illustrating the ages of the samples in the clusters suggested an age-related progression, especially for 4 or 6 clusters (Figure 3B). The PROCLUS clusters' age progression was somewhat better than the hierarchical clusters but still had clusters with very similar median ages. More importantly, some clusters had only one or two data points, and many samples were tagged as outliers (small gray dots) and excluded from the clusters. Thus, PROCLUS's iterative top-down feature identification and cluster border adjustments performed poorly for identifying age-related clusters of human V1 development.

CLIQUE

The bottom-up clustering method CLIQUE was tested to determine how well this iterative approach to building clusters performed using seven proteins to group the human V1 samples into age-related clusters. The CLIQUE function from the subspace package (Hassani, 2015) was used to test clustering. CLIQUE requires an input value for the interval setting because the intervals divide each dimension into equal-width bins that are searched for dense regions of data points. Here we tested a range of input interval values (xi = 2-8), and those resulted in 4-9 clusters (Figures 3C,D). CLIQUE allows data points to be in more than one cluster, so to visualize the multi-cluster identities, we plotted the data points using concentric color-coded rings. CLIQUE placed all of the data into multiple overlapping clusters, which was true for all interval settings (xi = 2-8). The poor partitioning of samples resulted in no progression in the clusters' median age (Figure 3D). Thus, the iterative bottom-up clustering of CLIQUE performed poorly for clustering the samples into age-related groups.

Comparing these top-down PROCLUS and bottom-up CLIQUE density methods for sparsity clustering showed that neither algorithm was a good fit for producing age-related clustering of the samples. PROCLUS performed somewhat better because some of the parameters resulted in clusters with a progression in the median cluster age; but the number of data points treated as outliers was unacceptably high.

SPARCL

Next, we tested a sparsity-based clustering algorithm, sparse k-means clustering, optimized for small sample sizes (Witten and Tibshirani, 2010). The SPARCL package (version 1.0.4) (Witten and Tibshirani, 2018) was used to cluster the human V1 samples with data from 7 proteins. This approach adaptively finds subsets of variables that capture the different dimensions and includes all samples in the clusters. SPARCL searches across multiple dimensions in the data and adjusts each variable's weight based on the contribution to the clustering. Thus, the term "sparse" in this method refers to selecting different subsets of proteins to define each cluster. To implement sparse k-means clustering, we used the KMeansSparseCluster function in the SPARCL package (Witten and Tibshirani, 2018). We explored a range of k clusters between k = 2-9. The SPARCL package also includes a function to help determine other input variables, such as the boundaries for reweighting the variables (wbounds), to produce optimal clustering.
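The three calls can be sketched as follows. The parameter values shown (d, xi, tau, and the wbounds grid) are illustrative rather than the study's settings; the subspace package has been archived on CRAN and may need to be installed from source, and KMeansSparseCluster.permute is the package's permutation-based helper for choosing the l1 bound.

library(subspace) # ProClus() and CLIQUE()
library(sparcl)   # KMeansSparseCluster() and KMeansSparseCluster.permute()

# top-down projected clustering; d is the average subspace dimensionality
pc <- ProClus(X, k = 4, d = 3)

# bottom-up grid clustering; xi intervals per dimension, tau density threshold
cq <- CLIQUE(X, xi = 4, tau = 0.2)

# sparse k-means: tune the l1 bound (wbounds) by permutation, then cluster
perm <- KMeansSparseCluster.permute(X, K = 6, wbounds = seq(1.2, 4, by = 0.4))
sk   <- KMeansSparseCluster(X, K = 6, wbounds = perm$bestw)
sk[[1]]$ws # adaptive feature weights
sk[[1]]$Cs # cluster labels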
Visualizing the clusters created by SPARCL showed useful partitioning of the samples into clusters (Figure 3E) that moved from the bottom right to the top left in the tSNE plot. Also, the boxplots illustrate a good progression of the median cluster age for 4 and 6 clusters. However, SPARCL is prone to making clusters with only 1 sample, and that was the case in this example for k = 4-9 clusters. To address that problem, we tested another sparse k-means cluster algorithm that is robust to making clusters of n = 1.

FIGURE 3 | Comparison of various sparse clustering methods. Top-down PROCLUS subspace method across a range of cluster numbers (2, 4, 6, and 8). The clusters are visualized in tSNE 2D scatter plots of the data by color-coding each data point with its cluster identity (A) and in boxplots showing the median age of the samples in each cluster (B). (C,D) Bottom-up CLIQUE subspace clustering method for a range of "intervals." Different clusters are visualized as colored dots in a tSNE representation of the data (C) and as box plots depicting the mean age of the samples (D). (E,F) Sparse clustering after varying the inputted k cluster number (2, 4, 6, and 8). Different clusters are visualized as colored dots in a tSNE representation of the data (E) and as box plots depicting the mean age of the samples (F). The colors in scatter plots and boxplots represent the cluster designation for all plots.

Robust Sparse k-Means Clustering

Finally, we tested a modified version of the SPARCL algorithm called RSKC (Kondo et al., 2016). The RSKC algorithm was designed to be robust to the influence of outliers that can drive other algorithms to create clusters of n = 1. RSKC operates by iteratively omitting outliers from cluster analysis, assigning all remaining samples to clusters, and then reinserting outliers to the analysis by grouping them into the nearest-neighboring cluster. Using the RSKC package in R (Kondo, 2016), we explored clustering for a range of k values (k = 2-9) using the human V1 dataset with 7 proteins and all runs (Figure 4). The visualization of the clusters on the tSNE plot showed good grouping of the samples into spatially separated clusters. The boxplots illustrate good progression in the median ages of the clusters, especially for 4 or 6 clusters (Figure 4B). In addition, the algorithm adaptively reweighted the proteins to identify the most robust clusters, and we plotted the weights for each of the 7 proteins (Figure 4C). This component of RSKC identified the lifespan variations in GluN2B, Synapsin, and GluN2A as having the greatest impact on the clustering of the samples.

FIGURE 4 | Age-related clustering of seven synaptic proteins for a single iteration. Expression data from seven synaptic proteins were input into RSKC and used to identify k = 2, 4, 6, and 8 case clusters. For each k value, three plots were constructed: (A) 2D tSNE scatter plots showing samples color-coded by their cluster designations, (B) box plots displaying the distribution of ages for each cluster, and (C) a bar graph representing the RSKC weights for all seven proteins.

Next, the scalability of RSKC was explored using the full dataset of 23 proteins measured for the human V1 samples (Murphy lab) (Figure 5). In this example, the average expression value for each protein was used, and the elbow plot method identified six clusters. Figure 5 shows the results of three separate runs of RSKC on the 23 protein dataset.
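The core RSKC call behind these runs can be sketched as below; the alpha (trimming proportion) and L1 (l1 bound) values are illustrative defaults, not the study's settings.

library(RSKC)

# ncl = number of clusters; alpha = proportion of cases trimmed as potential
# outliers during the iterations; L1 = l1 bound that controls weight sparsity.
res <- RSKC(X, ncl = 6, alpha = 0.1, L1 = 1.5)

res$labels                            # cluster label for every sample
sort(res$weights, decreasing = TRUE)  # adaptive feature weights, largest first
boxplot(ages ~ res$labels, xlab = "cluster", ylab = "age (years)")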
All three runs resulted in similar clustering (Figures 5A-C), with a tight progression of age-related clustering from cluster A with the youngest median age to cluster F with the oldest age. The addition of more proteins to the RSKC clustering provided greater precision for identifying the subtle changes that represent the temporal dynamics of human V1 development. The weights for the 23 proteins (Figures 5D-F) showed that all of the proteins contributed to this high dimensional clustering. Comparing the feature weights among the three runs showed some reordering in the weight of individual proteins, suggesting that care is needed when using weights from a single run. These weights were used to improve the visualization of the clusters in a tSNE plot. The protein expression values for each sample were transformed by multiplying with the corresponding weight, and those transformed data were visualized using tSNE (Figures 5G-I). Those plots showed the separation of the clusters in the 2D tSNE space.

FIGURE 5 | Age-related clustering of 23 synaptic proteins for three single iterations. (A-C) Expression data from 23 synaptic proteins was used to identify six case clusters. Boxplots of cluster age ordered from youngest (red) to oldest (dark blue) median age. (D-F) Bar plot visualizing RSKC feature weights for proteins. (G-I) tSNE plot of the protein data scaled by RSKC weights and color-coded by RSKC cluster. In both (A-C) and (G-I), sample ages were reduced to sample averages to reduce crowding.

Since the starting conditions for clustering can affect which samples end up in a cluster, we tested how robust RSKC clusters were by running the algorithm 100 times with different starting conditions. We then plotted the results of 100 iterations in a boxplot showing the age-related clusters and a heatmap showing the number of times each sample fell into the different clusters (Figures 6A,B). This analysis showed that the progression in the age of the clusters was robust to the starting conditions (Supplementary Table 5). Furthermore, the heatmap showed that clusters B and C were the least stable, but the other clusters had strong consistency for which samples were partitioned into those clusters. The Jaccard similarity was calculated for all cluster pairs to determine the proportion of samples shared between the clusters. Cases were counted as shared when the cases were partitioned to the cluster 10 or more times because the metric is sensitive to small sample sizes. The similarity indices ranged from 0 to 22% (adjacent pairs: A-B 12%, B-C 22%, C-D 11%, and D-E 20%), with cluster C having the most cases shared with other clusters. In addition, the average feature weight for each of the 23 proteins was calculated from the 100 runs (Figure 6C) and illustrated the gradual progression of feature weights.

Testing Robust Sparse k-Means Clustering With Larger Numbers of Proteins or Genes

So far, we have shown that RSKC does a good job of partitioning samples into age-related clusters with datasets that have fewer than 25 proteins. Here we examine if RSKC scales to larger datasets with 2-3 orders of magnitude more features. We ran the RSKC clustering using data collected from the Murphy lab human V1 samples with measurements for 95 proteins (Supplementary Table 4; Figure 7A). Once again, 100 iterations of RSKC clustering were used to ensure that the clusters were robust to the starting condition.
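The iteration bookkeeping can be sketched as follows. Relabeling the clusters on each run by their median age is an assumption made here to handle the arbitrary label order of k-means-type algorithms, and the 10-run membership cutoff follows the text.

# run RSKC from 100 random starts and align labels by median cluster age
set.seed(2021)
relabel <- function(lab) match(lab, order(tapply(ages, lab, median)))
runs <- replicate(100, relabel(RSKC(X, ncl = 6, alpha = 0.1, L1 = 1.5)$labels))

# counts[i, k]: how often sample i landed in (age-ordered) cluster k
counts <- t(apply(runs, 1, tabulate, nbins = 6))
heatmap(counts, Rowv = NA, Colv = NA, scale = "none")

# Jaccard similarity between cluster pairs, counting a case as a member of a
# cluster when it was assigned there in 10 or more of the 100 runs
members <- function(k) which(counts[, k] >= 10)
jaccard <- function(a, b) length(intersect(a, b)) / length(union(a, b))
jaccard(members(1), members(2))  # e.g., adjacent clusters A and B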
This analysis found strong age-related clustering of the samples, showing six well-defined clusters that stepped across the lifespan. We tested if the progression of cluster ages could arise by chance by rerunning the clustering with the age of the samples randomized on each iteration. As expected, randomizing the ages resulted in clusters with a very broad range of ages and no progression in the mean cluster age (Supplementary Figure 1).

Next, RSKC clustering was extended to the transcriptomic dataset from Kang et al. (2011; Supplementary Table 2). First, RSKC was run using the 88 genes that matched the proteins in Figure 7A. Even though the two datasets used different samples, it was possible to compare the ages of the clusters because the range of ages and number of samples were similar. The progression of age-related clusters for the gene data (Figure 7B) was similar to the protein clusters, and there was a strong correlation (r = 0.81) between the median ages of the six cluster pairs. The strong correlation between the protein- and gene-cluster ages was particularly interesting because previous studies have shown that the correlation between large sets of protein and gene expression values is notoriously low (e.g., r ∼ 0.2) (Gry et al., 2009). To assess if the datasets used here simply had an unusually strong similarity between the lifespan changes in the expression values for each protein and gene pair, we calculated those correlations. To facilitate this analysis, the protein and gene expression values were normalized by calculating z-scores, and the normalized values were partitioned into six age-bins (<1, 1-5, 5-12, 12-20, 20-55, and >55 years) (Supplementary Figure 2). The correlation coefficient was then calculated for the 88 protein-gene pairs using the mean gene and protein expression values from the six age bins (Supplementary Figure 3A). The mean correlation between the 88 protein-gene pairs was r = 0.15, and the median correlation was only slightly higher (median r = 0.21, 95% CI 0.04-0.25) (Supplementary Figure 3B). Thus, it is unlikely that the strong correlation found between the ages of the protein- and gene-clusters arose from a simple linear relationship between those two types of molecular measurements. Instead, the common cluster ages for these different omics datasets suggest similar high dimensional patterns that RSKC uses to partition the samples into the series of age-related clusters.
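The per-pair calculation can be sketched as below for one hypothetical protein-gene pair; prot, gene, and the two age vectors are placeholders for the matched measurements from the two datasets.

# z-score one protein and its matching gene, bin both by age, then correlate
# the six bin means; prot/gene and prot_age/gene_age are hypothetical vectors.
bins <- c(0, 1, 5, 12, 20, 55, Inf)
bin_means <- function(x, age) {
  tapply(as.vector(scale(x)), cut(age, bins, right = FALSE), mean, na.rm = TRUE)
}
cor(bin_means(prot, prot_age), bin_means(gene, gene_age))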
Finally, we examined how well RSKC performed on datasets with measurements of hundreds to thousands of genes using 988 genes that overlap with the SynGO database of synaptic genes (Koopmans et al., 2019) and then with all 17,237 genes in the Kang dataset. The SynGO genes were analyzed to assess if a large set of functionally related genes might reveal a different pattern of clusters from the full set of genes. The analysis of synaptic genes showed an age-related progression of the median age of the clusters (Figure 7C). Compared with the protein clusters (Figure 7A), the median age of the SynGO clusters jumped between clusters B and C (Figure 7C), and a very similar pattern of age-related clusters was found when all 17,237 genes in the Kang dataset were used (Figure 7D). Thus, RSKC cluster analysis of 95 proteins revealed the tightest age-related clusters, but the gene data also resulted in the partitioning of samples into age-related clusters. This finding contrasts with the hierarchical clustering used by Kang et al. (2011; Supplementary Figure 8), which did not partition postnatal samples into age-related clusters. Thus, the optimization of sparse k-means cluster analysis (RSKC) for small sample sizes provides another approach for analyzing the human brain's molecular development that is sensitive to the subtle molecular changes that occur across the postnatal lifespan.

A Note About Selecting the Number of Clusters

An essential step in k-means clustering is selecting k, which denotes the number of groups to classify observations into. The correct choice of k is often ambiguous, as there are many different approaches for making this decision. Intuitively, an optimal k lies in between maximum generalization of the data using a single cluster and maximum accuracy by assigning each observation to its own cluster. One of the most common heuristics for determining k is the elbow plot method, where the sum of squared distances of observations to the nearest cluster center is plotted for various values of k. As k increases, the sum of squared distances tends toward zero. The "elbow" occurs at the point of diminishing returns for minimizing the sum of squared distances, and the k value at this point is selected as the optimal number of clusters.

To tailor the selection of k to RSKC, we applied the elbow method to the Weighted Within Sum of Squares (WWSS), the within-cluster objective minimized by the algorithm. WWSS was calculated for various values of k and averaged over 100 iterations. The elbow can be identified using the elbowPoint function in the akmedoids package (version 0.1.5) (Adepeju et al., 2020), which uses a Savitzky-Golay filter to smooth the curve and identify the x-value where the curvature is maximized. This method found that k = 6 was the optimal number of clusters for all of the applications of RSKC used in this paper.

There are more than 30 methods to determine the optimal value for k, and a large number of journal papers (e.g., Tibshirani et al., 2001) and web resources (e.g., Cluster Validation Essentials) can be used to learn more. The R packages NbClust (Charrad et al., 2015) and optCluster (Sekula, 2020) are particularly helpful tools for choosing the number of clusters because they test various methods for selecting k (Charrad et al., 2014).
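A sketch of that procedure follows. It assumes the object returned by RSKC exposes the weighted within sum of squares (taken here as the last element of a possibly per-iteration WWSS component), and the number of repeats is reduced from 100 for illustration.

library(akmedoids)

ks <- 2:9
wwss <- sapply(ks, function(k) {
  mean(replicate(20, {                 # the text averages over 100 runs; 20 here
    fit <- RSKC(X, ncl = k, alpha = 0.1, L1 = 1.5)
    tail(fit$WWSS, 1)                  # final weighted within sum of squares
  }))
})
plot(ks, wwss, type = "b", xlab = "k", ylab = "WWSS")
elbowPoint(ks, wwss)  # Savitzky-Golay smoothing, point of maximum curvature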
APPLICATION OF ROBUST SPARSE k-MEANS CLUSTERS TO STUDY HUMAN VISUAL CORTEX DEVELOPMENT

Previous studies using the datasets analyzed here (Murphy et al., 2005; Pinto et al., 2010, 2015; Williams et al., 2010; Kang et al., 2011; Siu et al., 2015, 2017) have examined molecular development by assigning samples into age-bins that approximate the lifespan stages defined by developmentalists. In contrast, the previous section describes a data-driven approach to partitioning samples into age-related clusters using sparse k-means clustering (RSKC). This use of unsupervised clustering raises the possibility that it might reveal aspects of human visual cortex molecular development that have escaped previous analyses. This section explores some of the information about human visual cortex development that can be revealed by examining the content of age-related clusters.

First, we compare partitioning of the samples into predefined age-bins versus data-driven clustering of the 23 proteins for post-mortem intervals (PMIs), the proportion of cases, and the biological sex of the cases (Supplementary Figure 4). The distribution of PMIs was similar between the two methods of partitioning the lifespan, as was the proportion of samples and the balance of females and males in the bins. The progression of cluster ages was apparent when the age bins were color-coded to reflect the cluster identity (Supplementary Figure 4G). That histogram illustrated an interesting aspect of cortical development during young childhood (1-4 years), where samples in that age-bin were partitioned into five different clusters. This is similar to previous studies that observed heightened childhood heterogeneity with waves of inter-individual variability that peak between 1 and 3 years (Pinto et al., 2015; Siu et al., 2017). The findings here suggest that the relationship between chronological and brain age varies across the lifespan.

FIGURE 7 | Age-related clusters for large numbers of proteins and genes. Expression data from (A) 95 synaptic and immune-related proteins, (B) 88 genes that correspond with the proteins in (A), (C) 988 synaptic genes that correspond with the SynGO gene list, and (D) 17,237 protein-coding genes was used to identify six age-related clusters. The cluster designation of each sample over 100 iterations of RSKC was used to visualize the distribution of sample ages. Boxplots denote the median and interquartile range of ages in each cluster, and points denote outliers.

The developmental trajectories of the 23 proteins were plotted using LOESS fits (95% CI) to the expression values (normalized to control), and each sample was color-coded by their cluster assignment. The LOESS curves were ordered based on similar trajectories to illustrate the range of developmental patterns, with some increasing (e.g., GABA A α1) or decreasing (e.g., GABA A α2) monotonically across the lifespan while others followed an inverted-U (e.g., gephyrin), an undulating pattern (e.g., VGAT), or remained relatively unchanged (e.g., GABA A α3) (Figure 8A). The range of trajectories highlights the need for high-dimensional analyses to capture the complexity of this development.

To help describe when the expression level of a protein in a cluster was above or below the overall mean, we implemented the over-representation analysis (ORA_phenotype function) described previously (Balsor et al., 2020; Figure 8B). Briefly, for each protein, a normal distribution was simulated using the mean and standard deviation of the expression values for all samples. Then the boxplots were color-coded by comparing the expression values for each cluster with the simulated distribution. Here, the box for a cluster was coded as over-represented (red) if the 25th percentile was above 95% of the simulated distribution and under-represented if the 75th percentile was below 5% of the simulated distribution. Of course, other cutoff values for the ORA can be implemented to be more stringent or lenient for the color-coding (e.g., Supplementary Figure 5), or other methods such as estimation statistics (Bernard, 2019) can be used for this step depending on the nature of the question. Here, the ORA identified a range of over- or under-represented proteins in each cluster, from a high of 12 proteins in cluster C to 5 proteins in cluster F (A - 6 proteins, B - 11 proteins, C - 12 proteins, D - 8 proteins, E - 7 proteins, and F - 5 proteins).
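A minimal sketch of that color-coding rule follows, assuming expr is a numeric vector for one protein and res$labels holds the RSKC cluster assignments; the column name is a hypothetical placeholder and the cutoffs follow the text.

# ORA color rule: simulate a normal distribution from all samples for one
# protein, then compare each cluster's quartiles against its tails.
ora_color <- function(x_cluster, x_all, n_sim = 1e5) {
  sim <- rnorm(n_sim, mean(x_all, na.rm = TRUE), sd(x_all, na.rm = TRUE))
  q <- quantile(x_cluster, c(0.25, 0.75), na.rm = TRUE)
  if (q[1] > quantile(sim, 0.95)) "red"        # over-represented
  else if (q[2] < quantile(sim, 0.05)) "blue"  # under-represented
  else "gray"
}

expr <- X[, "GluN2B"]  # hypothetical column name
sapply(split(expr, res$labels), ora_color, x_all = expr)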
These LOESS curves and boxplots for the expression of each protein help to describe development, but it is challenging to synthesize an overall pattern for human V1 development when confronted with making hundreds of pairwise comparisons. To address that problem, we implemented a series of visualizations and analyses aimed at representing the high-dimensional nature of these data.

The first step in addressing the high-dimensional patterns of protein expression captured by the age-related clusters was to plot a bubble chart illustrating the expression levels of all 23 proteins for the 6 clusters. That visualization ordered the proteins by their RSKC weight and color-coded each bubble with the normalized mean protein expression, with blue representing low and red high expression levels (Figure 9). The visualization helped identify that cluster D has high expression levels for many proteins. That cluster represents older children and the transition to adolescence (mean cluster age = 10.3 years, CI 9.6-11.1 years), when rapid changes in cortical microstructure have been found (Norbom et al., 2021). In addition, groups of proteins with either high or low expression can be identified in a cluster, such as the higher expression of Golli-MBP, GFAP, CB1, and NR2B in cluster B. Thus, this visualization shows the mean expression for the 23 proteins in the 6 clusters, but it is still challenging to derive what differentiates the clusters. To address this, we applied our recently developed workflow (Balsor et al., 2019, 2020) that includes dimension reduction, identification of features, and the construction of a plasticity phenotype visualization to characterize the development of the human visual cortex. This workflow is described in detail in a previous publication (Balsor et al., 2020).

Adding Principal Component Analysis for Dimension Reduction

This part of the workflow aims to reduce the dimensionality of the data by identifying combinations of functionally related proteins that we call features and using those features to capture the high dimensional pattern of brain development. The first step involves using PCA, a standard approach for reducing dimensionality when studying brain development (Jones et al., 2007; Beston et al., 2010; Bray, 2017). The scree plot showed that the first three dimensions capture ∼60% of the variance in the data (Supplementary Figure 6), and the correlation matrix identified the strength of the relationship between each protein and the 23 dimensions. For example, the expression of Gephyrin and PSD95 was strongly correlated with Dim 1, while VGAT, GABA A α2, CB1, and GluN1 were strongly correlated with Dim 2. In addition, the quality of the representation for each protein on the first three dimensions was assessed using the cos² metric. The cos² (squared cosine, squared coordinates) conveys the quality of the representation of that variable using the projection angle onto each PC dimension. The closer that cos² is to 1, the better the quality of that variable's projection onto the dimension. The biplots illustrate the quality of the representation of each protein on Dim 1, 2, and 3 (Figures 10A,B) and show that some aspects of the RSKC-defined clusters are apparent when the samples are plotted in the PC space (Figures 10C,D). However, clustering of the samples in PC space was less distinct than illustrated in Figure 5, where tSNE plots were used to visualize clusters in the RSKC-weight transformed data. We examined which proteins were well represented by the first three dimensions by plotting the cos² values for Dim 1, 2, and 3 (Figures 10E-G) and the sum of the cos² for those dimensions (Figure 10H).
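The cos² bookkeeping can be sketched directly from a prcomp fit; for standardized variables, the squared variable coordinates on each component are the cos² values and sum to 1 across all components.

pca <- prcomp(X, scale. = TRUE)

# variable coordinates = loadings scaled by component standard deviations;
# their squares are the cos2 (quality of representation) values
coord <- sweep(pca$rotation, 2, pca$sdev, `*`)
cos2  <- coord^2
cos2_dim123 <- rowSums(cos2[, 1:3])   # summed quality on Dim 1-3

summary(pca)$importance[3, 3]         # cumulative variance captured by Dim 1-3
sort(cos2_dim123, decreasing = TRUE)  # proteins best represented by Dim 1-3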
The matrix of cos² values illustrated that only two of the proteins (GABA A α3 and Drebrin) were weakly represented by the first three dimensions (Figure 10I). The remaining steps focus on Dim 1, 2, and 3 because they captured a large amount of the variance and had high-quality representations for most proteins.

Comparing Principal Component Analysis and Robust Sparse k-Means Clustering

We compared RSKC and PCA by assessing the similarity of RSKC weights and PCA cos² values (sum of Dim 1, 2, and 3) for the 23 proteins (Figure 11A). There was a strong correlation (ρ = 0.72) between the two approaches; however, some proteins fell away from the line of best fit. Next, the differences between RSKC weights and PCA cos² of the proteins were assessed using a Bland-Altman analysis (Giavarina, 2015). This used the calculated differences between the normalized measures and plotted those as the difference score for each protein. Also, interval bands were plotted to represent no difference between the RSKC and PCA measurements (blue band), when RSKC measurements were greater (positive red band), and when PCA measurements were greater (negative red band) (Figure 11B). The blue band was slightly offset from zero, indicating a bias for the normalized PCA cos² values to be greater than the RSKC weights. The plot identified key proteins, such as the Gephyrin and PSD95 homogenates, and Ube3A, which were more strongly represented by the RSKC weights.

Figure caption fragment (ORA color-coding) | Over-represented clusters were colored red if the 25th percentile of the RSKC cluster was greater than the 95th percentile of a simulated normal distribution. Under-represented clusters were colored blue if the 75th percentile of the RSKC cluster was less than the 5th percentile of a simulated normal distribution. Boxes that fell within the middle 90% of the simulated normal distribution were left gray.

FIGURE 9 | Bubble plot of mean protein expression across each cluster. Proteins are ordered by their corresponding RSKC weight with the highest weighted protein arranged at the top, and the lowest weighted protein at the bottom, and ordered by developmental cluster from left to right. The color of the dot represents standardized protein expression for each cluster, while the size of the dot represents the RSKC weight (see legend).

All three of those proteins are essential molecular components that regulate the experience-dependent development of the visual cortex. For example, Ube3A is involved in the experience-dependent cycling of AMPA receptors (Greer et al., 2010), is required for ocular dominance plasticity (Yashiro et al., 2009; Sato and Stryker, 2010), and is selectively lost during aging of the human visual cortex (Williams et al., 2010).
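The comparison can be sketched as follows, assuming the RSKC weights and summed cos² values are in the same protein order; each measure is scaled to its maximum, as in the text.

# Bland-Altman style comparison of RSKC weights and summed PCA cos2
w  <- res$weights / max(res$weights)
c2 <- cos2_dim123 / max(cos2_dim123)

d  <- w - c2                 # difference score per protein
md <- mean(d); s <- sd(d)
plot((w + c2) / 2, d, pch = 19,
     xlab = "mean of normalized measures", ylab = "RSKC - PCA difference")
abline(h = md, col = "blue")                      # bias
abline(h = md + c(-1.96, 1.96) * s, col = "red")  # limits of agreement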
Using Principal Component Analysis Basis Vectors to Identify Candidate Plasticity Features

The proteins in the dataset are known to regulate experience-dependent plasticity in the visual cortex (e.g., Quinlan et al., 1999a,b; Fagiolini et al., 2003, 2004; Hensch, 2004, 2005; Hensch and Fagiolini, 2005; McGee et al., 2005; Philpot et al., 2007; Yashiro and Philpot, 2008; Cho et al., 2009; Gainey et al., 2009; Smith et al., 2009; Kubota and Kitajima, 2010; Larsen et al., 2010; Levelt and Hübener, 2012; Lambo and Turrigiano, 2013; Cooke and Bear, 2014; Guo et al., 2017; Turrigiano, 2017; Hensch and Quinlan, 2018). We took advantage of that a priori knowledge and the output from PCA to identify a new set of features that could be used to probe the neurobiology of the RSKC clusters.

Although the RSKC weights reflect the contribution of individual proteins for partitioning the samples into clusters, the weights do not provide insights into combinations and balances of proteins that regulate plasticity. Thus, it is necessary to add another analysis that can help to identify those networks and balances of proteins that regulate experience-dependent plasticity. This step is a semi-supervised approach to select combinations of proteins using the PCA output (cos² values and the basis vectors) and the known functions of the proteins. These steps generate a list of candidate plasticity features that are combined to construct an extended phenotype (Dawkins, 1982). We call the collection of features a plasticity phenotype, and it can be used to infer the plasticity state of the visual cortex. The approach is described in detail in Balsor et al. (2020) and briefly outlined here.

FIGURE 10 (caption, partially recovered) | ... 1 and 3 (B,D). The strength of the representation (cos²) for a protein on the given set of dimensions is reflected by the length of the vector, and only proteins with cos² > 0.5 are shown. The color of each point corresponds to their cluster, matching the original cluster colors in Figure 6. Bar plots represent the quality of representation of each protein with each dimension (E-G), as well as the summed quality of representation across all three dimensions (H). The dashed line represents the cos² cutoff value for representation of 0.5. (I) Matrix illustrating the quality of representation for each protein with each PCA dimension, representing the strength (circle size) and direction (zero = white, positive = red) of cos².

FIGURE 11 | Exploring the relationship between PCA and RSKC feature identification. (A) Scatter plot showing the PCA quality of representation (cos²) for the first three dimensions and the RSKC weights. The dashed line represents the line of best fit, and rho is Spearman's rank correlation coefficient. (B) Bland-Altman plot comparing PCA cos² for the first three dimensions and RSKC weights for 23 proteins. The cos² and RSKC weights were each computed as proportions of their respective maximum values. The dashed blue line represents the mean difference, with the 95% confidence intervals shown as the blue shaded area. The top dashed red line represents the upper limit of agreement (+1.96 SD) and the bottom dashed red line is the lower limit of agreement (-1.96 SD), with corresponding 95% confidence intervals shown as red shaded areas.

Two heuristics were applied to identify combinations and balances among the proteins, using proteins that met the cos² cutoff shown in Figure 10H. First, the a priori knowledge about the function of the proteins in plasticity and development of the visual cortex was used to guide the inspection of the three basis vectors (Figures 12A-C). Second, the amplitude and direction of each protein on the basis vector were used to select candidate features to sum or use in a relative difference index. For example, on PC1, we noted that four highly conserved synaptic markers (Pinto et al., 2015), the pre-synaptic proteins synapsin and synaptophysin and the post-synaptic proteins PSD95 and gephyrin, had large positive amplitudes, so they were summed to create one of the candidate features (PGSS). On PC2, the receptor subunits GABA A α1 and GABA A α2 had opposite directions, so these were used for an index (GABA A α1:GABA A α2) because the balance between those subunits is developmentally regulated and governs the kinetics of the GABA A receptor (Gingrich et al., 1995; Bosman et al., 2002; Heinen et al., 2004; Hashimoto et al., 2009). Finally, on PC3, we noted that GFAP and integrin had the largest amplitudes, and they were in opposite directions. Those two proteins are expressed by astrocytes, and the expression of integrin receptors is increased on reactive astrocytes (Lagos-Cabré et al., 2020), so an index was calculated (GFAP:Integrin). Applying the heuristics resulted in 13 candidate features, including 5 protein sums identified using the basis vector for PC1 and 8 indices from PC2 and PC3 (Figure 12D). The features were validated by calculating each feature using the expression values for the 23 proteins (Supplementary Table 6) and correlating those with the eigenvalues for the three PC dimensions (Figure 12D).
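A sketch of the two feature types follows. Here E stands for a hypothetical matrix of normalized (to control) expression values with illustrative column names, and the (A - B)/(A + B) form is one common way to write a relative difference index, used here as an assumption about the exact formula.

# E: hypothetical matrix of normalized protein expression (rows = samples)
# protein sum identified from the PC1 basis vector
PGSS <- rowSums(E[, c("PSD95", "Gephyrin", "Synapsin", "Synaptophysin")])

# relative difference index for the GABAA alpha1:alpha2 balance (assumed form)
gabaa_idx <- (E[, "GABAAa1"] - E[, "GABAAa2"]) /
             (E[, "GABAAa1"] + E[, "GABAAa2"])

# validate a candidate feature against the PCA sample scores on Dim 1
cor(PGSS, pca$x[, 1])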
LOESS curves and boxplots were made for all of the candidate features to illustrate how they changed across the lifespan and to identify if features were over- or under-represented in a cluster (Figures 13A,B and Supplementary Figure 7). One aspect of development apparent in the boxplots was the over-representation of the protein sums in cluster D. That cluster has a mean age of 10.3 years (SD 8.4 years), which corresponds with the end of the critical period for developing amblyopia in children (Lewis and Maurer, 2005; Birch, 2013) and a stage of human cortex development often described by synaptic exuberance, growth, and a changing state of plasticity.

FIGURE 13 (caption, partially recovered) | ... Points are colored corresponding to the clusters in Figure 8, and the 95% confidence intervals around each curve are colored in gray. (B) Boxplots show the expression of each feature for the six clusters. A simulated normal distribution was sampled to obtain 5th and 95th percentile values. Boxes were colored red (i.e., over-represented) if the 25th percentile of the feature cluster was greater than the 95th percentile of the normal distribution. Boxes were colored blue (i.e., under-represented) if the 75th percentile of the feature cluster was less than the 5th percentile of the simulated distribution. Otherwise, boxes were colored gray.

FIGURE 14 | Associations between selected features and feature phenotype by cluster. (A) Correlation heat map between protein sums and indices, with strength and direction of Pearson's R correlation represented by the color (negative = blue, zero = white, positive = red), and arranged by similar pairwise correlations using a wrapped dendrogram. Features were selected if they were significantly correlated with any of the first three PCA dimensions. (B) The plasticity phenotype was visualized using color-coded horizontal bars representing the median expression of selected features across clusters. For protein sums, the color ranges from white (zero) to gray (midpoint) to black (maximum protein sum across all features). For the protein indices, the color ranges from green (favoring the first protein in the index) to yellow (balance of the two proteins) to red (favoring the second protein in the index). Asterisks indicate features that were found to be either over- or under-represented. The features are arranged according to the same dendrogram generated in (A).
Furthermore, animal research has shown that excess excitation (Fagiolini and Hensch, 2000; Fagiolini et al., 2004) and expression of proteins regulating that activity, especially PSD95 (Huang et al., 2015), can close the critical period.

Analyzing Plasticity Phenotypes for the Robust Sparse k-Means Clusters

Finally, the 11 features with significant correlations were used to construct a plasticity phenotype that was combined with the six clusters. A correlation matrix was made using the values for the features calculated from the protein expression for each sample (Supplementary Table 1). The matrix and surrounding dendrogram showed that the protein sums and indices were separated into different tree branches. The order of the features in the correlation matrix was used for the bands in the plasticity phenotype visualization. In the phenotype, the median of each feature was represented as a color-coded band for the six clusters (Figure 14B). Together, the 66 color-coded feature bands captured the high dimensional pattern of neurobiological changes across the lifespan. The protein sums represented by gray levels convey a pattern with specific groups of proteins that are highly expressed early in development (clusters A and B) and a broad wave of expression in older childhood (cluster D). The indices reflect the multiple timescales of molecular development that are the hallmarks of the human visual cortex. However, even with undulating features and different timescales, all appear to arrive at a similar level of maturation in cluster E. Combining the features and clusters into a visualization simplified this complex dataset and facilitated linking the clusters with sets of neurobiologically meaningful features.

The asterisks on the feature bands indicate the ones identified as over- or under-represented in Figure 13B. Each cluster had a unique group of features that deviate from the average, and those represent the neurobiological mechanisms that differentiate the age-related clusters. For example, the set of 4 red bands for the young visual cortex (cluster A) was unique and showed that the indices were dominated by the NMDA receptor subunits NR2B and GluN1, the Golli family of myelin basic protein (MBP), and the GABA A α2 receptor subunit. In contrast, the older visual cortex (cluster F) was distinguished by a set of 3 light gray protein sum bands, a red band indicating more GFAP, and a green band indicating more GABA A α1. Finally, the overall appearance of the protein sums and indices for cluster D gives the impression of a transition stage in the development of the visual cortex when exuberant protein expression (dark gray bands) (Huang et al., 2015) and the shift in protein balances (green bands) (e.g., Quinlan et al., 1999a,b; Fagiolini and Hensch, 2000; Chen et al., 2001; Philpot et al., 2001; Fagiolini et al., 2003, 2004; Hensch, 2005; Hall and Ghosh, 2008) signals the end of the critical period.

A raincloud plot of the samples in the 6 clusters shows the range of ages that correspond with the plasticity phenotypes (Figure 15). The distribution of sample ages in the clusters appears like a series of overlapping waves extending beyond the ages of the traditional pre-defined age-bins, included in Figure 15 as vertical lines. For example, for cluster D, the wave's peak falls into the age-bin associated with the end of the period for developing amblyopia (5-12 years).
However, cluster D also includes younger and older samples suggesting that the end of the sensitive period may not occur uniformly among individuals. Furthermore, other clusters overlap the 5-to 12-year-old age-bin suggesting that multiple phenotypes can be found during certain age-bins. Thus, the cluster analysis helped reveal aspects of visual cortex development that are obscured by using pre-defined age bins, which is that chronological-and brain-age often diverge (Cole et al., 2019). DISCUSSION The current study shows that the application of sparse clustering leverages the high dimensional nature of proteomic and transcriptomic data from human brain development to find agerelated clusters that are spread across the lifespan. In particular, RSKC using measurements of proteins or genes from the human visual cortex partitioned samples into clusters that progressed from neonates to older adults. The iterative reweighting of the measurements to focus on the proteins or genes that carry the most information about lifespan changes led to robust age-related clustering of the data. Furthermore, especially for the datasets focusing on 95 proteins or genes, the clusters represented early development, young childhood, older childhood, adolescence, and adulthood. Thus, sparse clustering provides a robust approach for identifying proteomic or transcriptomic defined brain ages that overlap with behavioral and brain imaging findings of gradual and prolonged human brain maturation. Many factors come into play when selecting an appropriate clustering algorithm for a study. Here, we considered the goal of the study (to resolve sometimes subtle age-related changes in molecular mechanism), the structure of the dataset (p ∼ n to p >> n), and the output of the algorithm (is it just the clusters or is feature selection included). Sparse K-means clustering was selected because it fit all of those considerations. We know from previous studies of the molecular development of the human brain that there can be subtle differences between age groups (Murphy et al., 2005;Pinto et al., 2010Pinto et al., , 2015Williams et al., 2010;Siu et al., 2015Siu et al., , 2017, and yet even small changes in protein or gene expression will alter neural function. Therefore, we looked for algorithms designed for omics datasets where subtle changes in a subset of the genes or proteins would identify important characteristics of the data. The development of sparse K-means clustering by Witten and Tibshirani (2010) was partially inspired by the need to better cluster a breast cancer dataset. In that dataset, subtle differences in gene expression significantly impacted patient outcomes, but standard clustering approaches did not pick those up. In addition, sparse clustering was developed to address datasets, like ours and the breast cancer data where the structure is p ∼ n to p >> n. Sparse K-means clustering is a good fit for those high dimensional structures because it minimizes the within-cluster sum of squares with a dissimilarity measure while maximizing the between-cluster sum of squares by iteratively reweighting the measures. Finally, and most importantly, sparse K-means clustering performs feature selection. The examples in this paper show the reweighted proteins and those distributions identifying how much each protein contributes to partitioning the samples into clusters. That matrix is sparse, with unimportant proteins having near-zero weights and important ones having non-zero weights. 
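To make the reweighting concrete, the following is a minimal sketch of the weight-update step from Witten and Tibshirani's sparse K-means, assuming the per-feature between-cluster sums of squares have already been computed from the current cluster assignment (names, the bisection count, and the tolerance at the constraint boundary are ours):

```python
import numpy as np

def sparse_kmeans_weights(bcss, s):
    """Weight update of sparse K-means (Witten & Tibshirani, 2010).

    bcss: per-feature between-cluster sum of squares under the current
          cluster assignment (non-negative array).
    s:    L1 bound on the weights (must be >= 1 for a solution to exist).

    Returns w maximizing sum(w * bcss) subject to ||w||_2 <= 1 and
    ||w||_1 <= s. The solution soft-thresholds bcss, so uninformative
    features receive exactly zero weight.
    """
    def normalized(x):
        return x / np.linalg.norm(x)

    w = normalized(np.maximum(bcss, 0.0))
    if np.abs(w).sum() <= s:          # L1 constraint inactive: no thresholding
        return w

    lo, hi = 0.0, bcss.max()          # bisect the soft-threshold delta
    for _ in range(60):
        delta = 0.5 * (lo + hi)
        w = np.maximum(bcss - delta, 0.0)
        if not w.any() or np.abs(normalized(w)).sum() < s:
            hi = delta                # thresholded too hard: shrink delta
        else:
            lo = delta
    return normalized(np.maximum(bcss - lo, 0.0))
```

Soft-thresholding is what drives unimportant features to exactly zero weight, which is the feature-selection behavior described above.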
Those weights are essential for cluster analysis to help with making neurobiologically relevant interpretations of brain development from the cluster analysis. Various other algorithms, including linear and non-linear dimension reduction [e.g., tSNE, multidimensional scaling (MDS), and PCA], can separate developmental samples. In this paper, we found that both tSNE and PCA show some age-related progress in the arrangement of the samples. Also, Kang et al. (2011) used MDS to separate the samples across MDS 1 and 2. Then the points were color-coded by pre-defined age bins to show a left to right flow from early prenatal to older adults. However, it was not apparent which genes mapped on to those dimensions. The selection of features in the form of the weights is a key difference between sparse k-means clustering and standard clustering approach that was critical for the current study. The current study is not exhaustive of clustering approaches, as the number of unsupervised clustering algorithms for analyzing high dimensional data is rapidly expanding. For example, new sparse clustering algorithms include innovation at the level of the lasso-type penalty used to adjust observation weights (Brodinová et al., 2019). Accordingly, the "best" algorithm for understanding molecular brain development will continue to change as new approaches are developed. Rather than acting as a prescriptive guide for which algorithm to use, the current study highlights the challenges raised when applying high dimensional clustering to studies using postmortem brain samples. In particular, developmental studies that use postmortem human brain tissue often have more measurements than samples (p > n) and require clustering algorithms optimized for high dimensional data structures. The examples showed that the RSKC algorithm worked well for a wide range of observations (p) from 7 to 17,237. However, the age-related progression of the 95 proteins and 88 gene datasets (Figures 7A,B) were more distinct than the clustering using 988 SynGO or the full 17,237 gene dataset (Figures 7C,D). The succession of age-related clusters found for the visual cortex aligns with some critical milestones in visual development. Using measurements of molecular mechanisms that regulate experience-dependent plasticity, the clusters illustrated in Figure 5 show that cluster A overlaps the start of the sensitive period for binocular vision at 4-6 months and cluster B the peak of that sensitive period at 1-3 years (Banks et al., 1975). Furthermore, cluster D aligns with the maturation of contrast sensitivity (Ellemberg et al., 1999), motion perception (Ellemberg et al., 2002), and the end of the period for the susceptibility of developing amblyopia (6-12 years) (Epelbaum et al., 1993;Keech and Kutschke, 1995;Lewis and Maurer, 2005). The oldest cluster, cluster F, highlights ages when cortical changes reduce performance on several visual tasks (Owsley, 2011). The alignment with visual milestones suggests that the clusters might provide insights into the molecular mechanisms that regulate various aspects of visual development and visual function dynamics across the lifespan. Notably, the molecular mechanisms are well studied in animal models. Thus, this information for the human cortex may be seen as a bridge linking results from animal studies with human neurobiology that can help interpret brain imaging and visual perception findings. 
By combining the RSKC clustering with PCA, we identified plasticity-related features and constructed a plasticity phenotype that was applied to each cluster (Figure 14). The term plasticity phenotype has been used before to describe the waxing and waning of gene expression in the developing brain (Smith et al., 2019). Here we used the term to describe an extended phenotype (Dawkins, 1982) because the proteins in the dataset have known functions in regulating experience-dependent plasticity in the visual cortex (e.g., Quinlan et al., 1999a,b; Fagiolini et al., 2003, 2004; Hensch, 2004, 2005; McGee et al., 2005; Philpot et al., 2007; Yashiro and Philpot, 2008; Cho et al., 2009; Gainey et al., 2009; Smith et al., 2009; Kubota and Kitajima, 2010; Larsen et al., 2010; Levelt and Hübener, 2012; Lambo and Turrigiano, 2013; Cooke and Bear, 2014; Guo et al., 2017; Turrigiano, 2017). Thus, the plasticity phenotype can be used to infer the potential for experience-dependent plasticity in the different clusters and provide a new perspective on the maturation of the human visual cortex. Each cluster had a unique set of features that were over- or under-represented in the plasticity phenotype, and those features were apparent in the phenotype visualization. Notably, the features were selected using a semi-supervised approach with a series of heuristics that included protein combinations and balances known to regulate experience-dependent plasticity. As a result, the unique sets of features can be compared with the literature to infer the likely state of experience-dependent plasticity for a cluster. For example, balances in the youngest cluster (A) were dominated by receptors that are known to facilitate experience-dependent plasticity in the visual cortex (Kleinschmidt et al., 1987; Quinlan et al., 1999a,b; Philpot et al., 2001; Fagiolini et al., 2003, 2004; Iwai et al., 2003; Hensch, 2004; Cho et al., 2009; Jiang et al., 2010). In contrast, cluster D overlaps the end of the period of susceptibility to develop amblyopia and has peaks in protein expression, especially PSD95, that are known to close the critical period in animal models (Huang et al., 2015). The features in cluster D also appeared to mark the transition from juvenile features found in clusters A, B, and C to the mature and aging patterns in clusters E and F. Moreover, the range of ages in a cluster appeared as a series of overlapping waves in the raincloud plot, thereby illustrating that chronological- and brain-age have a complex relationship. Clustering the data collected from human postmortem tissue samples to reveal the age-related progression in the brain's molecular complexity is just the start of using high dimensional analyses. The application of modern exploratory data-driven approaches reveals novel aspects of human brain development, such as the risk for mental illness or divergence from other primates (Zhu et al., 2018). Identifying an appropriate high dimensional clustering technique opens the door to many other downstream analyses to interrogate different clusters' molecular makeup. A critical benefit of clustering with RSKC is that it outputs the feature weights. Those weights reveal the impact of specific proteins or genes on differentiating the brain's molecular environment during the progression of lifespan stages. Those proteins and genes can be used as the input to Gene Ontology (GO) analysis to catalog the molecular processes, cellular components, and biological processes that dominate the stages.
Or the opposite can be done as shown in the paper where the 988 genes corresponding to the SynGO database were used to cluster the samples. The clusters can also be used for differential gene expression analysis to highlight which features are enriched during various lifespan stages. For example, the top-weighted molecular features from the RSKC analysis may be useful for creating a phenotype that provides a biologically meaningful characterization of the high dimensional changes that occur in different stages of the lifespan (Balsor et al., 2020). An interesting finding of the current study is the overlapping ages among the clusters. While this may be viewed as imperfect partitioning of samples by the clustering algorithms, it may also reflect the human brain development's true heterogeneity. In other words, developmental periods may not necessarily be described by a single omic phenotype. Instead, the classically defined developmental stages may be characterized by two or more distinct patterns of gene or protein expression in the brain. This molecular heterogeneity may shed light on findings such as the substantial inter-individual variation in cortical responses measured by fMRI studies in infants (Born et al., 2000). Also, the overlapping ages among clusters may reflect periods of stationary fluctuations in the brain's developmental trajectory, representing transitions from one molecular state to the next, similar to language development models (Sanchez-Alonso and Aslin, 2020). Addressing how human brain development proceeds is an important question that will require large amounts of new data and algorithms that capture the local and global structure of high dimensional trajectories, including ones with gradual noisy changes and non-linear transitions. One approach could include repeated MRI measurements during the ages that overlap among molecular clusters to assess if those ages have heightened intraor inter-individual variation in brain responses. Those studies will help identify ages during development with gradual but noisy change from ages with non-linear transitions in the gene and protein expression pattern in the developing human brain. Ultimately, the models will need to include multi-omics data and link with brain imaging to understand how the human brain develops fully. CONCLUSION The last decade has seen remarkable growth in the number of studies examining the human brain's molecular features. In parallel, high throughput tools have dramatically increased the amount of data collected for every sample. The complexity and high dimensional nature of those datasets have spurred the need for more guidance in selecting appropriate tools to analyze those big data. Some studies are now collecting data from 100 or 1000 s of human brain postmortem samples (e.g., PsychENCODE), but studies of development still have many fewer tissue samples, and the ages of the cases are spread across the lifespan. The small sample sizes of the developmental datasets make it difficult to apply many commonly used high dimensional clustering methods. Those methods lack the sensitivity needed to reveal robust clusters defined by the subtle differences in genes or proteins that occur across the postnatal lifespan. At the same time, sparsity-based clustering algorithms designed for small sample size have emerged. 
In this guide, we explored the application of sparsity-based clustering and showed that one algorithm, RSKC, is a good fit for revealing the subtle and gradual changes of human brain development that occur from birth to aging. In the next decade, the amount of data collected from each postmortem brain sample will only continue to grow as single-cell RNA sequencing methods are applied to studying human brain development. Furthermore, the push to integrate multimodal measurements, from molecules to imaging, of human brain development will heighten the demand for robust high dimensional analysis tools. Neuroscientists will continue to face many challenges identifying rigorous methods to analyze those sparse and very high dimensional datasets. Nevertheless, careful selection of high dimensional analytical techniques that are designed for small sample sizes can be expected to have an impact on the discovery of novel aspects of human brain development.

DATA AVAILABILITY STATEMENT

The data used to support the findings in this manuscript and code are available at the following link: https://osf.io/6vgrf/.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Hamilton Integrated Research Ethics Board.

AUTHOR CONTRIBUTIONS

JB designed the research, performed the research, analyzed the data, and wrote and revised the manuscript. KA and KM designed the research, analyzed the data, and wrote and revised the manuscript. DS performed the research, analyzed the data, and wrote and revised the manuscript. RK and JZ performed the research, analyzed the data, and revised the manuscript. EJ analyzed the data and revised the manuscript. All authors contributed to the article and approved the submitted version.

FUNDING

NSERC Grants RGPIN-2015-06215 and RGPIN-2020-06403 were awarded to KM, Woodburn Heron OGS awards were made to JB and KA, and an NSERC CGS-M was awarded to EJ. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
2021-01-07T09:00:57.472Z
2021-01-04T00:00:00.000
{ "year": 2021, "sha1": "b60eeb174a283e84014ca82df362ba9d74c9bc62", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2021.668293/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "677ddf75b3c7d1428a85f3f1c44edd0e4f5fea9e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
3243189
pes2o/s2orc
v3-fos-license
Kinetics of Langerhans cell chimerism in the skin of dogs following 2 Gy TBI allogeneic hematopoietic stem cell transplantation Background Langerhans cells (LC) are bone marrow-derived cells in the skin. The LC donor/recipient chimerism is assumed to influence the incidence and severity of graft-versus-host disease (GVHD) after hematopoietic stem cell transplantation (HSCT). In nonmyeloablative (NM) HSCT the appearance of acute GVHD is delayed when compared with myeloablative conditioning. Therefore, we examined the development of LC chimerism in a NM canine HSCT model. Methods 2 Gy conditioned dogs received bone marrow from dog leukocyte antigen identical littermates. Skin biopsies were obtained pre- and post-transplant. LC isolation was performed by immunomagnetic separation and chimerism analysis by PCR analyzing variable-number-of-tandem-repeat markers with subsequent capillary electrophoresis. Results All dogs engrafted. Compared to peripheral blood chimerism the development of LC chimerism was delayed (earliest at day +56). None of the dogs achieved complete donor LC chimerism, although two dogs manifested a 100 % donor chimerism in peripheral blood at days +91 and +77. Of interest, one dog remained LC chimeric despite loss of donor chimerism in the peripheral blood cells. Conclusion Our study indicates that LC donor chimerism correlates with chimerism development in the peripheral blood but occurs delayed following NM-HSCT. Background Haematopoietic stem cell transplantation (HSCT) is an essential option for therapeutic treatment of malignant haematopoietic diseases. Nonmyeloablative (NM) HSCT is characterized by reduced intensity and toxicity [1,2] and is therefore a treatment option for patients with contraindications (e.g. old age) who are not eligible candidates for conventional myeloablative (M)-HSCT [3]. The success of NM-HSCT in donor engraftment is (yet) associated with acute graft-versus-host disease (GVHD) rates affecting up to 50 % of the patients causing post therapeutic morbidity, mortality and decrease in quality of life [1,4]. Acute GVHD typically develops within the first 3 months after M-HSCT and mainly affects the skin, but also the liver and the gastrointestinal tract [5]. Following NM-HSCT the signs and symptoms of acute GVHD are usually delayed and arise beyond day +100 [6]. Langerhans cells (LC) are CD1a positive bone marrowderived dendritic cells located in the epidermis and mucous membrane [7,8]. They are characterised by the presence of cytoplasmatic Birbeck granules [9]. LC are able to deliver antigenic information of their environment to the draining lymph nodes for presentation to the T lymphocytes [10]. In addition, LC might play an important role in skin GVHD [11,12]. The origin of LC (donor or recipient) appears to be of importance in GVHD development [11,13]. The engraftment kinetic of donor LC is influenced by the conditioning. In conventional M-HSCT the majority of LC are of donor origin as soon as day +40. After reduced intensity conditioning the engraftment of donor LC is delayed and full donor LC chimerism is not detected before day +100 [12]. However, data regarding LC kinetics after NM-HSCT are rare and the correlation between LC chimerism and development of GVHD remains to be investigated. For preclinical studies, especially in the field of HSCT, the dog has proven as unique model organism for decades due to high transferability potential of the gained results to humans [2,14]. 
Canines and humans show common similarities in physiology, metabolism and lifespan of blood cells [15]. The clinical application of NM-HSCT in humans is based on a meanwhile well-established canine NM-HSCT model using 2 Gy total body irradiation for conditioning [14]. Lowering the intensity of the conditioning appears to increase the incidence of graft rejection [16]. Therefore, the development of new NM-HSCT regimens, e.g. the application of new immunosuppressive drugs, is required. Hence, our present study was initially designed to assess the impact of the new immunosuppressant everolimus in the canine NM-HSCT model. In general, the occurrence of GVHD in the canine matched-sibling NM-HSCT model is rare, and thus the experimental setting used here is not suitable for methodical GVHD studies. However, the development of donor LC chimerism following NM-HSCT is an observed phenomenon providing an important issue in transplantation LC biology that can be adequately investigated with this model. In this study we therefore described the kinetics of LC numbers and chimerism in a canine 2 Gy NM-HSCT model to give a first insight into the role of LC in NM-HSCT.

Laboratory animals

Experiments were approved by the regional review board of the state Mecklenburg-Vorpommern (State Institute for Agriculture, Food Safety and Fishery Mecklenburg-Vorpommern, Germany; AZ: 7221.3-1.2-039/06) under advice of the regional animal ethics committee (§15 committee). Litters of beagles were obtained from commercial kennels licensed by the German Department of Agriculture. All dogs were dewormed and immunized against rabies, parainfluenza, leptospirosis, distemper, hepatitis, and parvovirus. Dog leukocyte antigen (DLA)-identical donor/recipient sibling pairs were selected on the basis of matching for highly polymorphic DLA class I and class II microsatellite markers [17,18].

Preparation of Langerhans cells

Tissue samples of the skin were obtained from the neck of 9 dogs before and after HSCT on days +28, +56 and +105 under general anaesthesia (punch biopsies, 2 × 50.5 mm²). In long-term chimeras, specimens of dermal tissue were also taken after day +105. Tissue samples were disinfected in povidone-iodine (Mundipharma, Limburg/Lahn, Germany), bleached with sodium thiosulfate (0.05 %, Sigma Aldrich, Hamburg, Germany) and washed in phosphate buffered saline (PBS, Biochrom AG, Berlin, Germany). The epidermis was separated from the dermis by digestion with dispase (2.24 U/ml, Roche, Mannheim, Germany) at 4°C overnight and at 37°C (water bath) for one additional hour. Subsequently, the epidermis was incubated at 37°C for 30 min in trypsin (0.25 %, Biochrom AG) with DNase (10 μl/ml, Roche) to obtain a single cell suspension. Single cells were labelled with a monoclonal mouse anti-canine CD1a antibody (clone CA9.AG5; kindly provided by Dr. P.F. Moore, School of Veterinary Medicine, University of California). Afterwards the cell suspension was incubated with a goat-anti-mouse MicroBead (Miltenyi Biotec, Bergisch Gladbach, Germany). The labelled LC were enriched by a MiniMACS device using large cell columns (Miltenyi Biotec).

Blood preparation and chimerism analyses

Before and after HSCT, peripheral blood of the recipients was taken weekly up to day +77 and in larger intervals thereafter for analyses of the donor/recipient haematopoietic chimerism. Granulocyte and peripheral blood mononuclear cell (PBMC) fractions were separated by standard Ficoll-Hypaque density gradient centrifugation (density 1.074 g/ml).
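Chimerism in this model is quantified from variable-number-of-tandem-repeat (VNTR) markers that differ between donor and recipient, read out as capillary electrophoresis peak areas (the PCR and electrophoresis steps are detailed in the next paragraph); the final computation reduces to a simple ratio. A minimal sketch, with hypothetical peak-area values:

```python
def donor_chimerism_percent(donor_peak_area, recipient_peak_area):
    """Donor chimerism from an informative VNTR marker.

    After PCR of a marker that differs between donor and recipient,
    capillary electrophoresis yields one peak per allele; the donor
    fraction is estimated from the relative peak areas.
    """
    total = donor_peak_area + recipient_peak_area
    if total == 0:
        raise ValueError("no signal for this marker")
    return 100.0 * donor_peak_area / total

# e.g., a hypothetical sample with donor peak area 4200 and recipient
# peak area 1800 gives 70 % donor chimerism.
print(donor_chimerism_percent(4200, 1800))  # 70.0
```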
Genomic DNA of LC was isolated using the Genomic DNA from Tissue kit (Macherey-Nagel, Düren, Germany). Genomic DNA of granulocytes and PBMC was isolated using the Nucleobond CB 100 kit (Macherey-Nagel). Subsequently, polymorphic tetranucleotide repeats were amplified by PCR using commercially fluorescein-labelled primers (BioTez Berlin-Buch GmbH, Berlin, Germany) according to standard protocols. PCR products were analysed by capillary electrophoresis as described elsewhere [20].

Statistics

The Mann-Whitney U-test was performed to compare LC cell counts between dogs that rejected the graft and long-term chimeras. Data of LC chimerism versus chimerism in the peripheral blood were analysed by the Wilcoxon test. Correlations between LC chimerism and chimerism in the peripheral blood compartments were evaluated using Spearman's rank correlation coefficient. A probability of p < 0.05 was considered significant.

Cell purity and yield

Punch biopsies of the skin from 9 dogs were obtained before and on days +28, +56 and +105 after NM-HSCT. Flow cytometric analyses of isolated LC revealed a purity of CD1a-positive cells of median 91 % (range 28-97 %) (Fig. 1). Absolute LC counts with a median of 3.0 × 10⁴ (range 0.8-13.5 × 10⁴) per 100 mm² biopsy were obtained before HSCT. After transplantation, a decrease in LC to a median of 1.5 × 10⁴ (range 0.3-5.6 × 10⁴) was detected at day +28. Normal counts of 3.0 × 10⁴ were reached again at day +56 after HSCT (Table 1). Differences in LC counts between dogs that rejected the graft and long-term chimeras were not observed. To verify that the enriched CD1a-positive cells were true LC, electron microscopic identification of LC-characteristic Birbeck granules was performed (Fig. 2).

First LC donor chimerism was detected by day +56 in the dogs (No. 3, 4, 5, 7) that experienced a stable long-term chimerism in the granulocyte and PBMC compartments, as well as in the dog that died. The median LC donor percentage of these five animals amounted to 6 % (2-42 %) at that time. Subsequently, a gradual increase in donor LC chimerism over time was observed (exemplified by dog No. 3 in Fig. 3a). The two dogs (No. 3, 4) that developed a full donor chimerism in the peripheral blood by days +77 and +91 also achieved the highest level of donor LC chimerism. Dog No. 4 showed the most rapid increase in donor LC percentage and suffered as the only one from acute GVHD starting by day +70. The dog (No. 6) that experienced late rejection showed its first detectable LC chimerism not until day +112, although a donor chimerism in granulocytes and PBMC of 58 % and 40 % was already present at day +56. Interestingly, despite a subsequent decline in donor chimerism in the peripheral blood to 0 %, a constantly increasing LC donor chimerism up to 36 % (day +469) was observed (Fig. 3b). In contrast, in the dogs that rejected their grafts before day +100, LC of donor type could not be detected during the complete observation period. In summary, donor LC chimerism was significantly lower than donor chimerism in the PBMC or granulocyte compartments (day +56: p = 0.011 each). Furthermore, there was a strong correlation between the PBMC donor chimerism and the donor chimerism in LC (day +56: r = 0.7, p = 0.038). Dogs that showed a PBMC donor chimerism < 11 % at day +56 experienced early graft rejection and had no donor-derived LC at any time point. However, a PBMC donor chimerism of 20-40 % at day +56 subsequently resulted in increasing LC donor chimerism despite decreasing PBMC chimerism.
Only a PBMC donor chimerism ≥ 50 % at day +56 correlated with high-level long-term engraftment in the peripheral blood and in the LC.

Discussion

The aim of this study was to characterize the development of LC donor chimerism in the skin after NM-HSCT. For this purpose, skin biopsies were taken from 9 transplanted dogs before and at different times after NM-HSCT. Studies describing the kinetics of LC chimerism after myeloablative or reduced-intensity conditioning were conducted previously [12,13], but data analysing LC chimerism following NM-HSCT remain rare. The results gained here showed a moderate reduction of LC counts following NM conditioning. The cell number decreased from day 0 to +28 by half (1.5 × 10⁴/100 mm²) and recovered to the initial value at day +56. In myeloablative regimens, an LC nadir of 0.2 × 10⁴/100 mm² during the first month was observed, and the LC count increased to its normal level within 4-12 months after HSCT [21,22]. These results demonstrate a considerably smaller decrease and a faster recovery of LC numbers after NM-HSCT when compared to the kinetics seen in myeloablative regimens. The donor LC chimerism following NM-HSCT increased slowly. In none of the examined dogs were donor LC detectable before day +56. Even at day +105 the LC present were mainly of host origin, and the development of LC chimerism was not finished within 1 year after HSCT. In contrast, data from myeloablative studies had shown a rapid replacement of host LC by donor-derived LC as early as day +56 after HSCT [13]. The retardation in donor LC engraftment in our NM-HSCT study was even more pronounced than the delay previously reported after reduced-intensity conditioning, where by day +100 the majority of LC were donor in origin [12]. Previous studies demonstrated that the recruitment of circulating LC precursors does not only depend on proinflammatory chemokines such as CCL20, but also on available LC sites in the epidermis [11,23]. We assume that the availability of LC sites in the epidermis was reduced due to a less efficient depletion of host LC by NM conditioning. Therefore, the recruitment of donor LC precursors could be hampered, which may explain the delayed donor LC engraftment after NM-HSCT compared to myeloablative regimens. In addition, a small fraction of LC is able to perform in situ proliferation [23][24][25]. This self-reproducing capacity may explain why the reduction of the LC number by half was not followed by a 50 % LC donor chimerism after initial cell counts were reached again in our study. We also analysed the development of donor chimerism in LC in comparison to the ratios seen in granulocytes and PBMC. The significantly delayed donor LC engraftment in our dogs is in accordance with the reduced LC chimerism compared to DC chimerism in peripheral blood or the bone marrow as described in a recent NM-HSCT study [26]. Dog No. 6, which experienced late graft rejection, even displayed a continuous increase of LC donor chimerism, whereas chimerism in peripheral blood was no longer detectable. This observation is potentially caused by the ability of LC to proliferate in the epidermis [23][24][25]. Furthermore, dogs showing a 100 % donor chimerism in granulocytes and PBMC also reached the highest LC donor chimerism. Correlation analysis confirmed a strong relationship between LC and PBMC chimerism in our study. In contrast, in previous publications no correlation between dendritic cell chimerism in the blood and in the skin has been described [11,12].
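The relationship reported above (day +56: r = 0.7, p = 0.038) is a Spearman rank correlation between paired LC and PBMC chimerism values, which is straightforward to reproduce; the numbers below are hypothetical placeholders for the 9 dogs, not the study's data:

```python
from scipy.stats import spearmanr

# Hypothetical day +56 donor chimerism values (%) for illustration only.
lc_chimerism   = [0, 0, 42, 30, 6, 0, 2, 0, 5]
pbmc_chimerism = [5, 8, 95, 80, 40, 40, 50, 3, 45]

rho, p = spearmanr(lc_chimerism, pbmc_chimerism)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```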
One dog in this study suffered from acute GVHD after transplantation. The GVHD occurred at day +70, and the dog rapidly developed a high LC donor chimerism until day +105. Whether the earlier onset of donor LC chimerism triggered GVHD, or whether the development of acute GVHD may have facilitated the rapid replacement of host LC with donor-derived LC, cannot be concluded from this single case.

Conclusions

Our study indicates that LC chimerism kinetics are delayed following NM-HSCT compared to chimerism development in the peripheral blood. The highest donor LC engraftment rates were observed in dogs with full donor peripheral blood chimerism, and the LC chimerism correlates with the chimerism in PBMC. The kinetics of LC chimerism after NM-HSCT also seem to be delayed in comparison to published data on the development of LC chimerism after myeloablative and reduced-intensity conditioning. Recipient LC are present in the skin even 1 year after NM-HSCT. Whether this difference in the kinetics of LC chimerism might be responsible for the delayed onset of acute skin GVHD following NM-HSCT remains to be investigated in future studies.

Fig. 3 (caption): (a) Dog No. 3 with full donor chimerism in peripheral blood. Continuously increasing LC donor chimerism starting at day +56 after HSCT, at a time when the dog experienced strong engraftment in the peripheral blood. Donor chimerism of LC developed delayed compared to donor chimerism in the peripheral blood and did not achieve the peripheral blood levels during the observation period. (b) Dog No. 6 with initial engraftment and subsequent late graft rejection. Despite high initial donor chimerism levels in the peripheral blood of 82 % (granulocytes, day +28) and 62 % (PBMC, day +21), first LC donor chimerism was not detected before day +112, probably as a consequence of decreasing peripheral blood chimerism levels starting 4 weeks after HSCT. Interestingly, although the donor chimerism values of the peripheral blood continuously declined and the graft was eventually rejected at day +391, a continuously increasing LC donor chimerism was observed also beyond the date of graft rejection.
2016-05-18T01:25:03.430Z
2016-04-27T00:00:00.000
{ "year": 2016, "sha1": "a34a06f6b545a97fcd805edcb82814b79304818a", "oa_license": "CCBY", "oa_url": "https://bmchematol.biomedcentral.com/track/pdf/10.1186/s12878-016-0050-z", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a34a06f6b545a97fcd805edcb82814b79304818a", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256868420
pes2o/s2orc
v3-fos-license
ForceFormer: Exploring Social Force and Transformer for Pedestrian Trajectory Prediction

Predicting trajectories of pedestrians based on goal information in highly interactive scenes is a crucial step toward Intelligent Transportation Systems and Autonomous Driving. The challenges of this task come from two key sources: (1) complex social interactions in high pedestrian density scenarios and (2) limited utilization of goal information to effectively associate with past motion information. To address these difficulties, we integrate social forces into a Transformer-based stochastic generative model backbone and propose a new goal-based trajectory predictor called ForceFormer. Differentiating from most prior works that simply use the destination position as an input feature, we leverage the driving force from the destination to efficiently simulate the guidance of a target on a pedestrian. Additionally, repulsive forces are used as another input feature to describe the avoidance action among neighboring pedestrians. Extensive experiments show that our proposed method achieves on-par performance measured by distance errors with the state-of-the-art models but evidently decreases collisions, especially in dense pedestrian scenarios on widely used pedestrian datasets.

I. INTRODUCTION

Accurate and plausible trajectory prediction in crowd scenarios for pedestrians plays a fundamental role in different applications, such as mobile robot navigation [1], Intelligent Transportation Systems and Intelligent Vehicles [2], and shared space safety [3]. Unlike vehicle movement governed by traffic rules, such as lane geometry, traffic lights and the heading direction, pedestrians may stop or turn at any time and interact more with neighbors, making their behavior highly stochastic. In order to model pedestrian behavior, a variety of different methods have been applied. In rule-based models, the interactions among pedestrians, namely social interactions, are described as forces [4]. In data-driven models, attention mechanisms [5] and graph convolutional networks [6] are widely used to extract social interactions and obtain excellent results using supervised learning [7], [8], [9]. Goal information can also reduce the uncertainties of pedestrians' behavior. However, each of these methods has its own drawbacks. Although rule-based models can simulate a certain degree of pedestrian behavior, they are relatively less robust and resilient in the face of complex scenarios. Data-driven models often achieve better performance but are data-dependent and less interpretable [10]. In goal-based models, goal information is often directly applied as an input or as the offset of the current position to the goal [11], [12], [13]. These approaches are sub-optimal because the association between the goal and the current position is not established. To this end, we propose a novel goal-based pedestrian trajectory prediction framework called ForceFormer. It takes as input not only the sequential motion information but also social forces to train a Transformer-based backbone. Unlike previous models that directly use the last position as an input feature parallel to other features describing motion dynamics, we apply the goal information to derive social forces so that the changes in velocity, position, and direction are better linked to the goal information.
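To illustrate the idea of goal-linked inputs, here is a minimal sketch of how per-timestep features could be assembled for one pedestrian; the array layout and names are ours, not the paper's specification:

```python
import numpy as np

def build_input_features(positions, goal, forces):
    """Assemble per-timestep input features for one pedestrian.

    A sketch of the idea behind ForceFormer's input: besides position and
    velocity, each timestep carries goal-derived social-force terms so the
    network can associate past motion with the destination.

    positions: (T, 2) observed x/y positions
    goal:      (2,)   estimated goal position
    forces:    (T, 2) social-force vector per timestep (driving/repulsive)
    """
    # Finite-difference velocities; the first step is zero by construction.
    velocities = np.diff(positions, axis=0, prepend=positions[:1])
    goal_offset = goal[None, :] - positions          # offset to the goal
    return np.concatenate([positions, velocities, goal_offset, forces], axis=1)
```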
In addition, we use the generative model AgentFormer [9] as the trajectory prediction backbone, which utilizes the Transformer [5] network to learn social interactions along the temporal dimension. Simultaneously, to estimate the temporary goal position for computing forces at inference time, a goal-estimation module [14], [13] is applied. More specifically, history trajectories are concatenated with semantic scene information, and they are fed into a U-Net [15] structure to predict the potential goals. With this goal-estimation module, we can obtain reliable goal information and naturally take into account the constraints of environmental factors. In summary, our major contributions are as follows:

• We propose a goal-based trajectory prediction framework, ForceFormer. It imports more interpretable features, i. e., social forces, into a data-driven model to learn stochastic pedestrian behavior.

• Different variants of ForceFormer making use of goal information are studied, and we find two effective. Namely, a) ForceFormer-Re applies goal positions to derive repulsive forces, reinforcing the interactive information between the ego pedestrian and neighbors and decreasing the possibility of collisions; b) ForceFormer-Dr applies goals to derive the driving force, enhancing destination guidance to predict the ego pedestrian's future trajectory.

• Extensive empirical studies are carried out on the widely used ETH/UCY [16], [17] pedestrian datasets. The experimental results show that ForceFormer performs on par with the state-of-the-art models measured by standard distance errors, but it evidently decreases collisions, especially in dense pedestrian scenarios.

II. RELATED WORK

This section briefly reviews the works in sequence modeling, social interaction modeling, and goal-based models.

Sequence Modeling. Essentially, a motion trajectory is composed of positional information in a time series. Therefore, converting trajectory prediction to sequence-to-sequence modeling is one of the most common approaches. In previous works, thanks to their powerful gating functions, Long Short-Term Memories (LSTMs) [18] have been widely applied to many pedestrian trajectory prediction tasks and achieved excellent results, especially in the temporal dimension [7], [6], [19], [20], [8]. In recent years, with the great achievements of the Transformer [5] network in the domain of Natural Language Processing (NLP) [21], [22], Transformer-based models have also been applied to trajectory forecasting. In contrast to LSTMs, Transformer networks have a better capability of modeling temporal dependencies in long sequences based on the self-attention mechanism [23], [24], [9]. In addition to the previously adopted deterministic approaches like LSTMs, an increasing number of deep generative models, such as conditional variational autoencoders (CVAEs) [25], [26] and generative adversarial networks (GANs) [27], are applied to trajectory forecasting. Rather than producing one single prediction, generative models learn the potential future trajectories as a distribution and generate multiple possible predictions from latent space. For example, Social GAN and Sophie [28], [29] are proposed for pedestrian multipath trajectory prediction via jointly training a generator and a discriminator. Compared to GANs, CVAE models predict multiple plausible trajectories conditioned on the past trajectories and acquire better performance in recent works [30], [31], [32], [8], [33].

Social Interaction Modeling.
Besides modeling individual trajectory sequences, establishing the influence of pedestrians on each other or from the environment has been a critical issue in pedestrian trajectory prediction. As groundbreaking work, Helbing et al. [4] leverage dynamic social forces to imitate the influence of the surroundings on pedestrians, e. g., a repulsive force for collision avoidance and an attractive force for social connection. The social force model has been effectively applied in various fields like robotics [34] and crowd analysis [35], [36]. Another pioneering data-driven work is Social LSTM [7]. It proposes a new structure, called the social pooling layer, to aggregate the interaction information from neighbors. With the development of graph neural networks (GNNs) [37], more recent works of deterministic models like [24], [38] resort to modeling a crowd as a graph and combining GNNs with attention mechanisms to learn spatial interactions. Other approaches like [32], [6] first encode features over the social dimension at each independent time step. Then, these social features are fed into another temporal sequence model to summarize the social relations over time. Unlike the methods above, we import social forces at each time step into a Transformer-based backbone, facilitating the learning of social interactions among pedestrians.

Goal-based Model. Recently, goal-based models have become an effective way to improve prediction performance [11], [39]. Diverse goal information can provide more predictive possibilities to a deterministic model [40], [13]. Moreover, pedestrians are motivated by their destinations. Therefore, highly uncertain behavior can be constrained through the goal information [14]. In contrast to those models that directly use the goal information as an input feature, we use the goal information to calculate the social forces for each pedestrian [4], and the resulting forces are used as input to our prediction module. However, the goal information is not accessible at inference time. To circumvent this issue, we utilize a goal-estimation [14] module for estimating the goals at inference time.

III. METHODOLOGY

A. Problem formulation

In the context of pedestrian trajectory prediction problems, a complete trajectory of a pedestrian can be divided into two parts, the observed and the future trajectory. The observed trajectory at time steps $t \leq 0$ is denoted as $X = (X_{-H}, X_{-H+1}, \ldots, X_0)$, which in total includes $H + 1$ observed time steps, while the future trajectory at time steps $t > 0$ is denoted as $Y = (Y_1, Y_2, \ldots, Y_T)$ over $T$ future time steps. Similar to [9], we use the x- and y-coordinates and the velocity sequence in the 2D coordinate system to parameterize trajectories. In addition, the joint social sequences of all $N$ pedestrians in the same scene at the same time step $t$ are denoted as $X^t = (x^t_1, x^t_2, \ldots, x^t_N)$ for the observation and $Y^t = (y^t_1, y^t_2, \ldots, y^t_N)$ for the future trajectories. In our proposed generative model $p_\theta(Y \mid X, G, F)$, where $\theta$ are the model parameters, the task is to forecast future trajectories $Y$ depending on not only the observed trajectories $X$ but also the goal information $G$ and the social forces $F$. Following [13], we use both the position of the last time step and the differences between every single position and the goal position to parameterize the goal information. It should be noted that we use the ground truth of the last position $Y_T$ to derive the goal representation in the training phase, while we use the estimation from the goal-estimation module in Sec.
III-B to derive the goal representation in the test phase. Additionally, two kinds of forces, i. e., the driving force $F^{Dr}$ and the repulsive force $F^{Re}$, are calculated for each agent at every time step. They are also represented as sequences. In order to explore different ways of incorporating the goal information, on the basis of the baseline model AgentFormer [9], which takes velocity and position sequences as input, we propose three variants using the additional goal information. As denoted in Figure 1, ForceFormer-Goal directly adds the additional goal sequence to the input. Alternatively, ForceFormer-Dr uses $F^{Dr}$ as the additional conditional information. ForceFormer-Re uses both the goal sequence and the repulsive force $F^{Re}$ sequence as the additional input. In the training process, the goal-estimation module and AgentFormer are trained separately. The goal information is supplied from the ground truth $Y_T$, which is used for training the goal-estimation module and the calculation of social forces. The repulsive force and driving force are calculated from the position information, velocity information, and goal information. However, the ground truth $Y_T$ is unavailable in the test phase. Hence, during the inference process, we sample $K$ goal candidates for every trajectory from the goal-estimation module. Following the previous goal-conditioned trajectory prediction models [39], [11], [14], we evaluate all potential $K$ goals against the ground truth and choose the one with the smallest L2 error as the estimated goal position in the test phase.

a) AgentFormer: The backbone prediction model is a CVAE-based model and establishes spatial and temporal relations using attention mechanisms. Conditioned on the observed trajectory $X$, goal information $G$, and social forces $F$, the future trajectory distribution is modeled as $p_\theta(Y \mid X, G, F)$. The future trajectory distribution can be rewritten as

$$p_\theta(Y \mid X, G, F) = \int p_\theta(Y \mid Z, X, G, F)\, p_\theta(Z \mid X, G, F)\, \mathrm{d}Z, \qquad (2)$$

where $p_\theta(Z \mid X, G, F)$ is the conditional Gaussian prior, which is learned by the X-Encoder, and $p_\theta(Y \mid Z, X, G, F)$ is the conditional likelihood. Eq. (2) introduces a set of latent variables $Z = (z_1, \ldots, z_N)$, with $z_n$ reflecting the latent intent of pedestrian $n$, to account for stochasticity and multi-modality in the pedestrian's future behavior. The negative evidence lower bound $\mathcal{L}_{\mathrm{elbo}}$ is used to address the intractable posterior $p_\theta(Z \mid Y, X, G, F)$. Concretely, the CVAE-based model is optimized using the loss function

$$\mathcal{L}_{\mathrm{elbo}} = -\mathbb{E}_{q_\phi(Z \mid Y, X, G, F)}\big[\log p_\theta(Y \mid Z, X, G, F)\big] + \mathrm{KL}\big(q_\phi(Z \mid Y, X, G, F)\,\|\,p_\theta(Z \mid X, G, F)\big), \qquad (3)$$

where $q_\phi(Z \mid Y, X, G, F)$ is the approximate posterior distribution parameterized by $\phi$, which is learned by the Y-Encoder. The first term in the above equation can be considered as the expected predicted probability of the future trajectory $p_\theta(Y \mid Z, X, G, F)$. The second term, $\mathrm{KL}(q_\phi(Z \mid Y, X, G, F)\,\|\,p_\theta(Z \mid X, G, F))$, denotes the distribution difference between the prior and the approximate posterior, which both tend toward a standard normal distribution.

b) Social Forces: A modified social force model [4] that contains both driving and repulsive forces is applied in this work. The driving force, denoted by Eq. (4), describes the attractive effect related to the destination (goal position):

$$F^{Dr}_\alpha(t) = \frac{1}{\tau_\alpha}\left(v^0_\alpha(t) - v_\alpha(t)\right). \qquad (4)$$

The value of the driving force depends on the deviation of the current velocity $v_\alpha(t)$ from the desired velocity $v^0_\alpha(t) = v^0_\alpha e_\alpha$. The desired velocity $v^0_\alpha(t) = v^0_\alpha e_\alpha$ denotes that, if a pedestrian is not disturbed, she will walk at the desired speed $v^0_\alpha$ along the desired direction $e_\alpha$ pointing to the destination. The relaxation time $\tau_\alpha$ is a parameter that represents the expected time for removing this deviation.
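The driving force of Eq. (4) is only a few lines of code; a minimal sketch, using typical values from the social-force literature for the desired speed and relaxation time rather than the paper's tuned parameters (the zero case when position and goal overlap follows the implementation details given later):

```python
import numpy as np

def driving_force(position, velocity, goal, v_desired=1.34, tau=0.5):
    """Driving force of the social force model, Eq. (4).

    Pulls the pedestrian toward the desired velocity v0 * e, where e is
    the unit vector from the current position to the goal. v_desired
    (m/s) and tau (s) are typical literature values, assumed here.
    """
    to_goal = goal - position
    dist = np.linalg.norm(to_goal)
    if dist == 0:
        # Desired direction is undefined on the goal; force set to zero,
        # as stated in the implementation details.
        return np.zeros(2)
    e = to_goal / dist            # desired direction
    v0 = v_desired * e            # desired velocity
    return (v0 - velocity) / tau
```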
To avoid collisions, pedestrians maintain a proper distance from other strangers. The repulsive force, denoted by Eq. (5), describes the avoidance phenomenon between the ego pedestrian $\alpha$ and another pedestrian $\beta$:

$$f_{\alpha\beta}(r_{\alpha\beta}) = -\nabla_{r_{\alpha\beta}} V_{\alpha\beta}\big[b(r_{\alpha\beta})\big]. \qquad (5)$$

The repulsive potential $V_{\alpha\beta}(b)$ is a monotonically decreasing function of $b$, which represents the semi-minor axis of an ellipse. Through $b$, the equipotential lines keep the form of an ellipse that is pointed in the direction of motion. In addition to distance, the influence of the viewpoint also needs to be considered when calculating repulsive forces. Thus a parameter $w$ is introduced:

$$w = \begin{cases} 1, & \text{if } e_\alpha \cdot (-f_{\alpha\beta}) \geq \|f_{\alpha\beta}\| \cos\varepsilon, \\ c, & \text{otherwise}, \end{cases} \qquad (6)$$

where the effective angle of sight is $2\varepsilon$, and $c$ is a constant factor. Hence, the repulsive force, after taking the perspective factor into account, is constrained as $F^{Re}_{\alpha\beta} = w f_{\alpha\beta}$. The repulsive force is based on the premise that two pedestrians are strangers who want to keep their distance from each other and avoid collisions. However, in reality, many pedestrians travel in pairs, such as classmates, relatives, and friends, who share a common destination. Therefore, it is not logical to consider the repulsive force within a group, which could cause significant errors in the experimental results, especially in high pedestrian density scenes. So we adopt the DBSCAN method, a density-based spatial clustering of applications with noise [41], [42], at every time step. Through the clustering, candidate group members are detected [43]. Namely, if two pedestrians are in the same cluster for more than $\sigma$ time steps in the observed frames, they are judged to be in the same group. Then the intra-group repulsive forces are eliminated.

c) Goal-Estimation Module: The goal-estimation module is employed to provide the goal information for the AgentFormer prediction module and the calculation of social forces in the inference phase. We adopt the goal module proposed in [14], [13] for this purpose. First, the past trajectories in a heat map form are concatenated with the semantic map information. The semantic map is adopted from Chiara et al. [13]; it is extracted from a bird's-eye view RGB scene image of the ETH/UCY datasets using a pre-trained segmentation network [14]. More specifically, through the semantic map, the constraints of environments, for instance, pavement, terrain, and buildings, can be naturally considered. The segmentation results in a tensor $S \in \mathbb{R}^{W \times L \times C}$ containing $C$ classes, where $W$ and $L$ are the height and width of the input image. The past trajectory $x^{-H}_n, x^{-H+1}_n, \ldots, x^0_n$ of agent $n$ is mapped to the heat map $M \in \mathbb{R}^{W \times L \times (H+1)}$. Then, the heat map tensor of the past trajectory is concatenated with the semantic map $S$ along the channel dimension, generating a tensor $M_s \in \mathbb{R}^{W \times L \times (C+H+1)}$ as the input tensor for goal estimation. Finally, the concatenated information is fed to a U-Net [15] model to generate a probability map of future positions.

IV. EXPERIMENTS

A. Dataset

The proposed framework is evaluated on ETH [16] and UCY [17], which have been widely used as the benchmark for pedestrian trajectory prediction. The datasets contain five different subsets, as listed in Table I. A valid trajectory denotes a single pedestrian's track information in 20 consecutive frames captured at 2.5 Hz. These twenty time steps are divided into two parts: the first eight time steps (3.2 s) form the observed trajectory $X$, based on which the twelve future time steps (4.8 s) are predicted as the future trajectory $Y$. The position of the goal is located at the twentieth time step.
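The windowing convention just described translates directly into preprocessing code; a minimal sketch (the sliding stride of one frame is our assumption):

```python
import numpy as np

def make_windows(track, obs_len=8, pred_len=12):
    """Slice one pedestrian's track into valid 20-frame samples.

    track: (F, 2) positions sampled at 2.5 Hz. Each valid sample covers
    20 consecutive frames: the first 8 (3.2 s) form the observation X,
    the remaining 12 (4.8 s) form the future Y, and the goal is the
    position at the twentieth time step (the last frame of Y).
    """
    total = obs_len + pred_len
    samples = []
    for start in range(len(track) - total + 1):
        window = track[start:start + total]
        X, Y = window[:obs_len], window[obs_len:]
        goal = Y[-1]                  # twentieth time step
        samples.append((X, Y, goal))
    return samples
```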
The density of pedestrians varies across the subsets, which largely influences the prediction results, especially in this work, because the calculation of social forces is closely related to crowd density.

B. State-of-the-art models and baseline

We compare our proposed method, ForceFormer, with the following models. AgentFormer [9] is the baseline model without using any goal information. Sophie [29] proposes a GAN-based model that combines trajectory information with context information. Trajectron++ [32] is a CVAE-based model that maintains top performance on the ETH/UCY benchmark. STAR [24] proposes a Temporal Transformer and a Spatial Transformer to model spatial-temporal information for pedestrian trajectory prediction. Moreover, ForceFormer is compared with a number of goal-based models. Namely, PECNet [11] is a goal-conditioning model for short-term trajectory prediction. Goal-GAN [12] integrates goal information in a GAN-based model for trajectory prediction. Heading [39] proposes a goal retrieval module that provides goal information for trajectory prediction. Y-net [14] combines scene information with goals and waypoints for trajectory prediction. Goal-SAR [13] proposes an attention-based recurrent network combined with the same goal-estimation module as ForceFormer.

C. Evaluation Metrics and Protocol

Three metrics are used to evaluate the proposed model. First, two standard error metrics are applied to measure the trajectory prediction performance of ForceFormer and compare it fairly with the previous models. These two distance errors are the average displacement error $\mathrm{ADE}_K$ and the final displacement error $\mathrm{FDE}_K$ of $K$ trajectory samples of each agent compared to the corresponding ground truth. In addition, the number of collisions (NC) is leveraged as another metric to verify the social forces applied to ForceFormer. A collision is counted whenever

$$\left\| \hat{Y}^t_m - \hat{Y}^t_n \right\|_2 < \gamma,$$

where $m$ and $n$ are different pedestrians in the same scene at time step $t$. The threshold $\gamma$ is set for determining whether a collision occurs. In this paper, $\gamma = 0.1$ m. All the metrics are computed with $K = 20$ samples. The calculation of NC is based on the trajectories with the best ADE. Following prior works [44], [9], [32], [11], we adopt the leave-one-out strategy for the evaluation.

D. Implementation Details

For calculating the social forces, we adopt 200° as the effective angle $2\varepsilon$. The factor $c$ for the field of view is 0.5. The threshold $\sigma$ of minimum frames in the same cluster for grouping is four. Also, we consider that the desired direction cannot be calculated when the pedestrian position overlaps with the goal position, so we set the social forces at these positions to zero. For the AgentFormer backbone, we use all the same settings as in Yuan et al. [9], but we only train the CVAE model using the Adam optimizer [45] for 50 epochs, shorter than the original paper. For the goal-estimation module, in addition to the same settings as in Chiara et al. [13], we add a goal-specific MSE loss function and a hyper-parameter $\lambda = 1e6$ to balance the original BCE loss function. All our models are trained on Google Colab with a single Tesla P100 GPU.

E. Results

In Table II, we compare our approaches with current state-of-the-art methods. First, our proposed methods ForceFormer-Dr and ForceFormer-Re achieve better performance in all the subsets compared to the baseline model AgentFormer; e. g., on average, ForceFormer-Dr reduces FDE by 26% and ForceFormer-Re reduces ADE by 17%.
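The collision counts discussed here and in the ablation below follow the NC definition above and reduce to a per-timestep pairwise distance check on the best-ADE sample; a minimal sketch (whether a colliding pair is counted once or at every offending time step is our assumption):

```python
import numpy as np

def count_collisions(pred, gamma=0.1):
    """Count colliding pairs in predicted trajectories (NC metric).

    pred: (T, N, 2) predicted positions for N pedestrians over T steps.
    A collision is registered for a pair (m, n) at time t when their
    distance falls below gamma (0.1 m in the paper).
    """
    T, N, _ = pred.shape
    collisions = 0
    for t in range(T):
        # All pairwise distances at time t: shape (N, N).
        d = np.linalg.norm(pred[t, :, None, :] - pred[t, None, :, :], axis=-1)
        iu = np.triu_indices(N, k=1)     # each unordered pair once
        collisions += int((d[iu] < gamma).sum())
    return collisions
```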
In addition, when comparing to models that also use goal information, our methods perform on par with the previous best method, Y-net. In particular, when we compare the results on each subset, we can find that our models achieve better performance than Y-net on the other four subsets, except for ETH. Moreover, when comparing the results in the high-density scenes, i. e., on Univ and Zara2, ForceFormer-Dr decreases FDE by 22% and 23%, respectively, compared to Y-net. The improvements indicate better performance of our method on final position predictions in high-density scenes.

F. Ablation study

The variants of our proposed model making use of the goal information are compared in terms of FDE values in Table III and the number of collisions in Table IV. First, it can be seen clearly that, compared to the baseline model AgentFormer, all the variants making use of the additional goal information achieve a smaller average FDE. Among the three variants, ForceFormer-Dr and ForceFormer-Re have evidently smaller numbers of collisions, while ForceFormer-Goal does not.

G. Qualitative results

Figure 3 shows the qualitative results predicted by AgentFormer (left column), ForceFormer-Dr (middle column), and ForceFormer-Re (right column), respectively. From the upper row, we can see that, compared to AgentFormer, ForceFormer-Dr benefits from the goal information and driving force to predict trajectories around corners or turns. Although ForceFormer-Re predicts less accurate curving trajectories, its prediction for other trajectories is closer to the corresponding ground truth. In the middle row, for a scenario with two pedestrians walking in parallel, both ForceFormer-Dr and ForceFormer-Re predict more accurate final positions as the pedestrians make a left turn. In contrast, AgentFormer does not exploit the goal information from the goal-estimation module and predicts walking in the middle of the road. A more visible scenario of predicting the final position can be seen in the bottom row. The prediction from AgentFormer largely deviates from the ground truth trajectory, while the predictions from ForceFormer-Dr and ForceFormer-Re are well aligned with the ground truth trajectory.

Limitations. Despite the enhanced performance brought by the social forces and the goal-estimation module, several limitations of the proposed model need to be noted. The collisions have been reduced, but the predictions from ForceFormer are not totally collision-free. One possible reason might be that, in the social force module, we do not consider the interactions and forces within groups, which also may cause collisions. In future work, we will build a more comprehensive social force module and apply it to better simulate interactions among group members. Moreover, the overall performance of ForceFormer, especially the calculation of social forces, relies on the reliability of the goal-estimation module. Sub-optimal performance of this module can lead to compound errors in the final prediction. On the other hand, if we can access the ground truth goal information, we can quickly turn our model into a motion planning model.

V. CONCLUSION

This paper proposes a new goal-based trajectory predictor called ForceFormer that incorporates social forces into a Transformer-based generative model backbone. A U-Net-based goal-estimation module is adopted to predict the goals of pedestrians' trajectories.
In addition to the position and velocity information, we derive a driving force from the estimated goal to efficiently simulate the guidance of a target on a pedestrian. Also, repulsive forces are used to help the model learn collision avoidance among neighboring pedestrians. ForceFormer achieves performance on par with the state-of-the-art models, and better performance in high-density scenarios, on widely used pedestrian datasets.
Significantly improved dielectric properties of multiwall carbon nanotube-BaTiO3/PVDF polymer composites by tuning the particle size of the ceramic filler

The effects of different BaTiO3 sizes (≈100 nm (nBT) and 0.5–1.0 μm (μBT)) on the dielectric and electrical properties of multiwall carbon nanotube (CNT)-BT/poly(vinylidene fluoride) (PVDF) composites are investigated. Three-phase composites fabricated using 20 vol% BT with various CNT volume fractions (fCNT) are systematically characterized. The dielectric permittivity (ε′) of the CNT-nBT/PVDF and CNT-μBT/PVDF composites rapidly increases when fCNT > 0.015 and fCNT > 0.017, respectively. The former is accompanied by a dramatic increase in the loss tangent (tan δ) and conductivity (σ), but surprisingly, not the latter. At 10^3 Hz, the low tan δ and σ values of the CNT-μBT/PVDF composite are about 0.06 and 6.82 × 10^−9 S cm^−1, while its ε′ value is greatly enhanced (≈154.6). The variation of the dielectric permittivity with fCNT for both composite systems follows the percolation model, with percolation thresholds of fc = 0.018 and fc = 0.02, respectively. On further increasing fCNT to 0.02, ε′ is greatly increased to 253.8, while tan δ ≤ 0.1. Without μBT particles, the ε′ and tan δ values of the CNT/PVDF composite with fCNT = 0.02 are as high as ≈240 and >10^3, respectively. The greatly enhanced dielectric properties are described in detail.

Introduction

Polymer-based materials with high dielectric permittivity are considered among the most promising materials for modern technological applications such as piezoelectric generators, energy storage devices, embedded components, gate dielectrics, and electromechanical transducers.[1-4] This is because excellent mechanical properties, good processability, light weight and low density, and high permittivity, as well as high breakdown strength, can all be achieved in polymer-based materials; furthermore, the fabrication of polymer composites is straightforward.[1,2,5-7] Poly(vinylidene fluoride) (PVDF) is commonly used as the dielectric polymer for fabricating polymer composites owing to its excellent electrical and dielectric properties, with a relatively large dielectric permittivity (ε′ ≈ 10) and low loss tangent (tan δ < 0.02).[1,4,8-13] Furthermore, PVDF is highly reliable and low cost, and its high breakdown strength yields a large energy density. Nevertheless, its permittivity is low compared to that of dielectric oxides; as a result, a low dielectric permittivity is usually obtained in polymer composites even when filling with high-permittivity materials.[14-21]

An easy approach to enhancing the dielectric permittivity of polymer composites is to incorporate high-permittivity oxide particles into the polymer matrix. Many high-permittivity oxides (e.g., CaCu3Ti4O12 and related oxides,[16,18,20,22] BaTiO3,[14,17,19] and FeTiNbO6[23]) have been widely selected as fillers. Unfortunately, a significantly enhanced dielectric permittivity (ε′ > 50) can often be obtained only with very large loadings of these fillers (≥50 vol%). Consequently, poor flexibility and processability caused by such large filler loadings become serious problems in ceramic-polymer composite systems. In addition, large clusters of ceramic particles in the polymer matrix give rise to leakage-current pathways.[19]
The loss tangent usually increases as the ceramic loading is increased, owing to the poor distribution and aggregation of the filler particles. A newer route to enhancing the dielectric permittivity while suppressing a large increase in the dielectric loss is to fabricate conductor-ceramic oxide/PVDF three-phase composites, such as Ag-BaTiO3/PVDF[8] and carbon nanotube (CNT)-BaTiO3/PVDF nanocomposites.[15,24] In these three-phase systems, BaTiO3 nanoparticles (nBT) with sizes of ≈100 nm have been widely used, and a significant increase in the dielectric permittivity with low loss tangent was accomplished by increasing the filler fraction. Notably, the (8.2 vol%) Ag-(48.6 vol%) nBT/PVDF composite exhibited a greatly increased dielectric permittivity of ≈160 (at room temperature (RT) and 10^3 Hz) with a low loss tangent (tan δ ≈ 0.11).[8] It was also found that the dielectric permittivity increased to 613 when the Ag/nBT ratio was increased from 28.6/71.4 to 61.0/39.0 wt%;[25] unfortunately, the loss tangent also increased, to 0.29. A large dielectric permittivity and low dielectric loss can be achieved using a low volume fraction of Ag, but obtaining a large permittivity required a very large total loading of Ag-BaTiO3 fillers (≥50 vol%), and such a large loading degrades the flexibility of the PVDF matrix. On the other hand, a high dielectric permittivity (ε′ = 151 at 10^2 Hz) and a low loss tangent, with a minimum value of 0.08 (and less than 0.6 over 10^2 to 10^7 Hz), were obtained in a three-phase PVDF composite using 1.0 vol% CNT (fCNT = 0.01) and 15 vol% nBT (fnBT = 0.15) with a particle size of ≈100 nm as fillers.[15] On further increasing fCNT to 0.02 (at fixed fnBT = 0.15), both the dielectric permittivity and the dielectric loss increased greatly (ε′ = 507.9 at 10^4 Hz and tan δ < 3.0 over 10^2 to 10^7 Hz).[24] Although the loss tangent of this three-phase composite is much lower than that of the two-phase (fCNT = 0.02) CNT/PVDF composite, it is still too large for applications. Nevertheless, if the loss tangent could be reduced below 0.1, the CNT-BT/PVDF system would be one of the most interesting polymer composites, since the low total volume fraction of CNT and nBT fillers should preserve good flexibility.

Dang et al.[14] reported a strong increase in the low-frequency dielectric permittivity of BT/PVDF composites when the particle size of BT was decreased from 500 to 100 nm. Unfortunately, this was accompanied by a large increase in the low-frequency loss tangent due to the existence of a low-frequency dielectric relaxation peak. At 10^3 Hz, the loss tangent of the (100 nm) nBT/PVDF composite with fnBT = 0.6 was about 0.6, and it decreased continuously as the particle size of BT was increased from 100 nm to 500 nm (0.5 μm). Thus, nBT with a size of ≈100 nm may be unsuitable as a filler in a polymer composite, while very large BT particles (≫1 μm) are usually undesirable for applications, especially in thin-film form. It has been shown that the largest dielectric permittivity, about 5000, is obtained in BT ceramics with a grain size of ≈1 μm or slightly smaller (down to 0.5 μm); the permittivity then decreases strongly with decreasing grain size,[26,27] which results from the decreasing fraction of the tetragonal phase in BT.
When the grain size is reduced below ≈0.12 μm (120 nm), the tetragonal phase changes to the cubic phase, and hence the dielectric permittivity is low.[15] S. M. Aygün et al.[28] clearly demonstrated that the dielectric permittivity of Ba1−xSrxTiO3 thin films with a crystallite size of ≈600 nm (0.6 μm) is much higher than that of films with a crystallite size of ≈100 nm.

The objective of this research is to show the effect of the BT particle size on the dielectric properties of the CNT-BT/PVDF composite system. The dielectric and electrical properties of CNT-BT/PVDF composites with 20 vol% BT of two different sizes (≈100 nm and 0.5-1.0 μm) are studied, and the influence of the CNT volume fraction on the dielectric and electrical properties is also investigated. Interestingly, a significantly enhanced dielectric response with suppressed dielectric loss and conductivity can be accomplished.

Fabrication of polymer composites

Poly(vinylidene fluoride) (PVDF, Mw ≈ 534,000), multi-wall carbon nanotubes (CNTs, 6-9 nm OD, 5 μm long), BaTiO3 nanoparticles (nBT, <100 nm, ≥99% purity) and sub-micron/micron BaTiO3 particles (μBT, 0.5-1.0 μm, 99.5% purity) were purchased from Sigma Aldrich. First, the BT content was fixed at 20 vol% (nBT or μBT) for all CNT-BT/PVDF three-phase composites. Second, CNT-nBT/PVDF and CNT-μBT/PVDF composites with CNT volume fractions (fCNT) in the range 0-0.02 were designed. Third, the starting raw materials were mixed by wet ball-milling for 6 h in C2H5OH, after which the mixture was heated in an oven at 80 °C to evaporate the C2H5OH medium. Finally, the mixed CNT-nBT/PVDF and CNT-μBT/PVDF powders were hot-pressed into pellets 14 mm in diameter and ≈0.7-1.4 mm thick at 200 °C for 0.5 h, under a pressure of ≈255 MPa.

Characterization techniques and dielectric measurement

The microstructure and morphologies of the nBT and μBT particles, as well as the morphology of the CNTs, were revealed by transmission electron microscopy (TEM; FEI, TECNAI G2 20). Scanning electron microscopy (SEM; LEO, 1450VP) was used to image the microstructure of the composites. The phase compositions of the nBT and μBT powders, as well as of the CNT-BT/PVDF three-phase composites, were characterized by X-ray diffraction (XRD; PANalytical, EMPYREAN). Fourier transform infrared spectroscopy (FTIR; Bruker, TENSOR 27) was used to identify the PVDF phases. For dielectric measurements, electrodes were fabricated by painting silver paste on both sides of the composite samples; the samples were dried overnight to completely evaporate the solvent in the silver paint. The electrode area was ≈7.12 × 10^−5 m^2. The parallel capacitance (Cp) and dissipation factor (D, tan δ) were collected over the frequency range 10^2 to 10^6 Hz with an AC oscillation voltage of 500 mV using a KEYSIGHT E4990A impedance analyzer at RT.
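For reference, the measured parallel capacitance is converted to a relative permittivity through the standard parallel-plate relation ε′ = Cp·d/(ε0·A), using the electrode area quoted above. A minimal Python sketch follows; the sample capacitance and thickness values are invented purely for illustration.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def relative_permittivity(c_p, thickness, area=7.12e-5):
    """Parallel-plate estimate eps' = Cp * d / (eps0 * A).
    c_p in farads, thickness in metres; area defaults to the
    electrode area stated in the text (m^2)."""
    return c_p * thickness / (EPS0 * area)

# illustrative only: a 1 mm thick pellet reading 100 pF gives eps' ~ 159
print(relative_permittivity(100e-12, 1.0e-3))
```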
Results and discussion

Figures 1(a) and (b) show the morphologies of the nBT and μBT particles used as the ceramic filler. The particle sizes of the nBT and μBT particles are about 50-100 nm and 0.5-1.0 μm, respectively; the nBT particles are spherical, while the μBT particles are non-spherical. The morphology of the CNTs is shown in the inset of Fig. 1(b); their outer diameter is about 6-9 nm. The fractured cross-sections of the CNT-nBT/PVDF and CNT-μBT/PVDF composites with fCNT = 0.02 are illustrated in Fig. 1(c) and (d). CNTs can be seen in the SEM images, as indicated by the arrows. Small clusters of nBT particles are observed, as is commonly found in the literature;[16] the formation of these clusters is attributed to the high surface energy of the nBT particles. The μBT particles, in contrast, are randomly distributed and surrounded by the PVDF matrix. In both cases, a continuous PVDF phase forms around the fillers, indicating a 0-3 type composite structure for both the CNT-nBT/PVDF and CNT-μBT/PVDF composites.

The XRD patterns of the nBT and μBT powders are shown in Fig. 2(a) and confirm BT as the main phase. Usually, when the grain size of BT is reduced to 0.3 μm, the tetragonal fraction begins to decrease slightly, and the tetragonal phase eventually disappears as the grain size decreases to ≈0.1 μm (100 nm), leaving only the cubic phase. The two phases in the nBT and μBT powders can be distinguished by examining the region around 2θ ≈ 45°.[14] The XRD patterns of the nBT and μBT powders differ there, indicating different phases: the double peak of the tetragonal phase is detected in the μBT pattern, while only a single peak is observed for nBT. These results confirm the presence of the tetragonal and cubic phases in the μBT and nBT powders, respectively. Accordingly, the ε′ values of the nBT and μBT particles must differ; the ε′ value of the nBT particles is lower than that of the μBT particles because of the nonpolar cubic phase. Fig. 2(b) displays the XRD patterns of the CNT-nBT/PVDF and CNT-μBT/PVDF composites with fCNT = 0.02 and fBT = 0.2. Their patterns are similar to those of the respective nBT and μBT powders. The XRD reflections of the PVDF matrix cannot be observed because of the relatively high intensity of the polycrystalline BT, so the phase structure of the PVDF cannot be confirmed by XRD and must be identified by other means. As shown in Fig. 3, the FTIR spectra confirm the existence of the α, γ, and β PVDF phases: the α and β phases are easy to identify, while the γ and β phases are difficult to separate at a wavenumber of about 840 cm^−1.[19]

The influence of the CNT loading on the dielectric properties of the CNT-nBT/PVDF and CNT-μBT/PVDF composites over the frequency range 10^2 to 10^6 Hz is shown in Fig. 4. For both composite systems, the dielectric permittivity increases with increasing CNT loading. According to a previous report,[24] the formation of micro-capacitors, in which CNTs act as electrodes separated by a dielectric layer of PVDF and/or BT particles, is one of the reasons for the enhanced dielectric permittivity with suppressed dielectric loss. However, other important factors should be considered as well, such as the interparticle distances between adjacent BT particles and between CNTs. The overall results show that the dielectric permittivity of the CNT-μBT/PVDF composites is more stable with frequency than that of the CNT-nBT/PVDF composites for all fCNT. As clearly seen in the inset of Fig. 4(b), the low-frequency dielectric permittivity of the CNT-nBT/PVDF composite with fCNT = 0.015 is strongly frequency dependent, while it is nearly frequency independent for the CNT-μBT/PVDF composite. The large low-frequency dispersion of the permittivity of the CNT-nBT/PVDF composites is likely due to interfacial polarization.[14,19] The dielectric loss tangent as a function of frequency at room temperature for both composite systems is illustrated in Fig. 5.
The low-frequency loss tangent, which is usually caused by the DC conduction of free charges, increases sharply for the CNT-nBT/PVDF composites when fCNT > 0.015, while the loss tangent of the CNT-μBT/PVDF composites increases only slightly even at the low frequency of 10^2 Hz. The loss tangent of the PVDF-based composites in the high-frequency range, on the other hand, is caused by the dielectric relaxation of permanent dipoles in the composites.[21] The corresponding variations in the low-frequency AC conductivity are shown in Fig. 6. Thus, the increase in the loss tangent of the CNT-μBT/PVDF composites is inhibited by the suppressed formation of DC conduction paths in the PVDF matrix. It is worth noting that, at 10^3 Hz, a low loss tangent (0.06) and DC conductivity (6.82 × 10^−9 S cm^−1) are obtained in the CNT-μBT/PVDF composite with fCNT = 0.019, while its dielectric permittivity is significantly enhanced (≈155). On further increasing fCNT to 0.02, the dielectric permittivity is greatly increased to ≈254 while the loss tangent remains suppressed (≤0.1). The dielectric properties of the CNT-μBT/PVDF composites with fCNT = 0.019-0.02 are comparable to those reported for the (8.2 vol%) Ag-(48.6 vol%) nBT/PVDF composite (ε′ ≈ 160 and tan δ ≈ 0.11 at 1 kHz),[8] yet the CNT-μBT/PVDF composites use only a small amount of fillers (CNT and μBT), with a total volume fraction of ≈0.22. The dielectric properties of CNT-BT/PVDF composites with different BT sizes and volume fractions are summarized in Table 1. As reported in many previous works,[24,29-31] composites filled with BT nanoparticles have high loss tangents (≫0.1) or low dielectric permittivity due to the non-polar cubic phase. In this work, although the low-frequency dielectric permittivity of the CNT-nBT/PVDF composites can be increased to the level of 10^3, the loss tangent is also greatly enhanced. These results highlight the important effect of the BT particle size in optimizing the dielectric properties of polymer composites.

Figures 7(a) and (b) show the dielectric permittivity of the CNT-nBT/PVDF and CNT-μBT/PVDF composites at 1 kHz as a function of fCNT. The permittivity rises rapidly when fCNT > 0.015 and fCNT > 0.017, respectively. Such a rapid change in the dielectric properties of an insulating matrix filled with a conductive phase is usually associated with the percolation effect,[32] according to which the dramatic increase in the effective dielectric permittivity (ε′_eff) of the composites obeys the power law[3,32]

ε′_eff = ε′_matrix (fc − f)^(−q), f < fc, (1)

where ε′_matrix is the dielectric permittivity of the BT-PVDF matrix, fc is the percolation threshold, q is a critical exponent, and f = fCNT. The dependence of the dielectric permittivity on fCNT is well described by eqn (1), as shown in Fig. 7. From the fitted curves, the fc values of the CNT-nBT/PVDF and CNT-μBT/PVDF composites are 0.018 and 0.02, while the q values are about 0.8 and 0.53, respectively. Usually, the normal range of q is 0.8-1.[32] For the CNT-μBT/PVDF composite, q ≈ 0.53 lies well below this range, indicating a relatively slow increase in the dielectric permittivity with fCNT compared to the CNT-nBT/PVDF composites. Generally, for two-phase conductor/polymer composites, fc depends on the aspect ratio of the conductive phase as well as on the distribution of the fillers.[32]
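A least-squares fit of eqn (1) to permittivity-versus-loading data can be sketched in Python as follows. The data points here are invented purely for illustration (the actual fitted values in this work are fc = 0.018 and 0.02 with q ≈ 0.8 and 0.53), and the bounds simply keep fc above the sampled loadings so the power law stays real-valued.

```python
import numpy as np
from scipy.optimize import curve_fit

def percolation(f_cnt, eps_matrix, f_c, q):
    # eqn (1): eps'_eff = eps'_matrix * (f_c - f_CNT)**(-q), for f_CNT < f_c
    return eps_matrix * (f_c - f_cnt) ** (-q)

# hypothetical (f_CNT, eps') points below the threshold, illustration only
f = np.array([0.005, 0.010, 0.015, 0.017, 0.019])
eps = np.array([40.0, 55.0, 80.0, 110.0, 154.6])

popt, _ = curve_fit(percolation, f, eps, p0=(10.0, 0.021, 0.5),
                    bounds=([1.0, 0.0195, 0.1], [1e3, 0.05, 2.0]))
eps_m, f_c, q = popt
print(f"fitted f_c = {f_c:.3f}, q = {q:.2f}")
```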
The dielectric properties of the two-phase CNT/PVDF composite system, with a CNT aspect ratio of about 550-830, have also been studied. As demonstrated in Fig. 8(a) and (b), the dielectric permittivity and loss tangent of the CNT/PVDF composites rise rapidly in the range fCNT = 0.005-0.01, so fc can be roughly estimated as ≈0.005-0.01. It follows that the fc value of the CNT/PVDF composites is increased by incorporating nBT or μBT particles. As demonstrated in Fig. 8(b) and the inset of Fig. 7, for the composites with fCNT ≤ 0.015, the loss tangent and conductivity of the three-phase composites change only slightly with increasing fCNT compared to the CNT/PVDF composites. This indicates that the BT particles play an essential role in blocking the interconnection of CNTs, i.e., in inhibiting the formation of a conductive pathway in the PVDF matrix; it has been clearly shown that BT particles can disperse CNTs well, thereby increasing fc.[30] Furthermore, the dielectric permittivity, loss tangent, and conductivity of the CNT-nBT/PVDF composites increase suddenly when fCNT > 0.015: the loss tangent and conductivity rise by more than three orders of magnitude, indicating the formation of a conduction pathway, i.e., a percolation network. In contrast, the corresponding increases in the loss tangent and conductivity of the CNT-μBT/PVDF composites, which would usually accompany the increase in conductive phase, are suppressed even at fCNT = 0.02. This may be due to the ability of the μBT particles to inhibit the formation of a conductive pathway. As shown in Fig. 1, agglomeration of nBT particles is observed in the microstructure of the CNT-nBT/PVDF composites, owing to the high surface energy of the small nBT particles. Although the distribution of the CNTs is difficult to observe by SEM, it is likely that the CNTs are not well dispersed because of the nBT agglomeration; some CNTs can then easily connect and form a conductive network, giving rise to the increase in conductivity and the correlated loss tangent. In contrast, the suppressed conductivity and loss tangent of the CNT-μBT/PVDF composites, even at fCNT = 0.02, may result from a better distribution of the CNTs, which are well dispersed by the μBT particles. In this case, the increase in dielectric permittivity, which may originate from the formation of micro-capacitors, is not accompanied by a significant increase in loss tangent or conductivity. Another important signature of micro-capacitor formation is the weak frequency dependence of the dielectric permittivity, as shown in Fig. 4(b); a strong frequency dependence is usually associated with the long-range motion of free charges resulting from the formation of a percolation network. Therefore, the most suitable BT size for use in a polymer matrix composite is about 0.5 μm: it has a fully tetragonal phase, is not too large for applications, and is not so small as to cause agglomeration. According to the micro-capacitor model, the dielectric permittivity of the CNT-nBT/PVDF composite should be lower than that of the CNT-μBT/PVDF composite, because the nBT particles have a lower dielectric permittivity than the μBT particles.
In this work, this mechanism may contribute to the overall dielectric response, but it may not be the primary mechanism in the CNT-nBT/PVDF and CNT-μBT/PVDF composites, because the CNT-nBT/PVDF composite actually has the higher dielectric permittivity, as shown in the inset of Fig. 4. The large increase in the dielectric permittivity of the CNT-μBT/PVDF composites, together with their exceptionally low loss tangent, should instead be primarily associated with the characteristics of the interfacial polarization. As is well known, the interfacial polarization in polymer composites depends on several factors, such as the electrical conductivity and aspect ratio of the conductive filler as well as the interparticle distance between adjacent filler particles. The difference in BT size gives rise to different interparticle distances: the interparticle distance in the CNT-nBT/PVDF composite is shorter than in the CNT-μBT/PVDF composite, and hence the relative distance between CNTs in the CNT-nBT/PVDF composite is shorter, giving rise to a higher dielectric permittivity. On the other hand, the CNTs then contact each other easily, giving rise to high conductivity and the related low-frequency loss tangent due to tunneling. For the CNT-μBT/PVDF composites, the greatly increased dielectric permittivity is contributed primarily by both the high permittivity of the μBT particles and the strong interfacial polarization, while the low loss tangent (≈0.06) is due to the longer distances between adjacent μBT particles and the longer relative distance between CNTs. The CNTs therefore find it difficult to form a conduction network, and electron tunneling is inhibited by the long inter-CNT distance. It is worth noting that the significantly improved dielectric properties achieved in the CNT-μBT/PVDF composites make them candidates for flexible capacitor applications.

One of the most important characteristics for capacitor applications is the temperature dependence of the dielectric properties, so this was investigated for the CNT-μBT/PVDF composites. As illustrated in Fig. 9(a), the dielectric permittivity of the CNT-μBT/PVDF composites is, interestingly, only weakly temperature dependent over the range 20 to 150 °C. However, as shown in Fig. 9(b), the loss tangent increases continuously with increasing temperature, indicating an increase in the conductivity of the composites. As shown in the inset of Fig. 9(b), the results show no scattering, indicating well-made electrodes; the effect of the electrodes on the dielectric properties of the CNT-μBT/PVDF composites can therefore be ignored, and the measured dielectric properties mainly result from the interior of the composite samples.

Conclusions

The dielectric and electrical properties of CNT-nBT/PVDF and CNT-μBT/PVDF composites with 20 vol% BT have been investigated in order to optimize the BT size and CNT volume fraction for high dielectric performance in a polymer. The rapid changes in the dielectric behavior of the two three-phase composite systems are similar, but occur at fCNT > 0.015 and fCNT > 0.017, respectively. Percolation theory describes the dielectric permittivity as a function of fCNT well for both systems. The rapid increase in the dielectric permittivity of the CNT-nBT/PVDF composites is accompanied by significant enhancements of the loss tangent and conductivity.
Notably, the loss tangent and conductivity of the CNT-μBT/PVDF composites are retained at the low values of ≈0.06 and 6.82 × 10^−9 S cm^−1, while the dielectric permittivity is significantly increased (≈155 at 10^3 Hz and RT) and only slightly frequency dependent. Interestingly, a greatly enhanced dielectric permittivity of ≈253.8 with a suppressed tan δ ≤ 0.1 is achieved by further increasing fCNT to 0.02. The good dielectric performance of the CNT-μBT/PVDF composites is attributed to the interfacial polarization in the composites and to the tetragonal phase of the μBT.

Conflicts of interest

There are no conflicts to declare.
Research on the Design of Bright Clothing for the Elderly Based on Intelligent Detection of Lower Limb Posture Antifall Sensors

In this paper, we construct a model for intelligent detection of lower limb posture fall prevention sensors, conduct an in-depth study of bright clothing for the elderly with intelligent fall-prevention detection, and design an article of such clothing based on intelligent detection of lower limb posture fall prevention sensors. The overall system scheme is determined according to the application, including the characteristic positions at which the device is to be worn, the establishment of the reference coordinate system, and the overall system architecture. Data acquisition, transmission, and saving functions are realized, and a two-step extended Kalman filter, a complementary filter, and a DMP-Kalman filter are used to solve for the posture angle of the measured object. The feasibility of the prototype was verified by testing daily behavioral activity data and fall behavior data of the elderly, and the collected data were processed and analyzed with the three data fusion algorithms, demonstrating their effectiveness; among them, the DMP-Kalman filter algorithm performs best. The test results show that the design can detect the posture angle of the human body accurately, providing a practical reference for fall detection equipment. Using clothing as a carrier, the fall detection and warning module is integrated with clothing to develop an article of intelligent clothing for fall detection and warning for the elderly. A core issue in the integration process is the influence of the clothing on the detection module when the human body is moving; the effects of the material and loft of the clothing on the model are therefore studied through experiments. The sensitivity and specificity of the integrated model and of the model not integrated with the clothing are both around 98%, showing that, within a specific range, the material and loft of the clothing have little influence on the model. The final intelligent garment design scheme was determined on this basis.

Introduction

In the context of an increasingly aging population and a growing number of people with lower limb disabilities, the general nursing workforce is no longer sufficient to meet the population's needs. There is already a severe shortage of caregivers, and nearly half of the elderly and disabled are currently unable to receive medical assistance [1]. With the rise of artificial intelligence technology, new robots have emerged in many industries; they not only effectively alleviate the shortage of professional caregivers but also improve the ability of elderly and disabled people to live independently [2]. At present, research on the fall detection and prevention technology of walking-aid robots is relatively limited, and the technology is not yet mature: walking-aid robots cannot accurately and quickly identify the user's fall status and provide protective measures. It is therefore necessary to improve the ability of walking aids to detect and prevent falls, which not only brings great safety to the lives of the elderly and disabled patients but also reduces a considerable burden on families and society [3].
The human posture recognition subsystem is realized in five parts, according to its requirements: data reception, data processing, network training, action recognition, and database establishment, with the visual program interface built in Python [4]. Data are received through the wireless serial module of the host computer and saved as CSV data files; the data are denoised and their time-frequency domain information is extracted; the network is trained using the TensorFlow deep learning framework in Python, implementing the improved convolutional network algorithm proposed in this paper, and the trained network model is saved; actions are then recognized by parsing the saved network model and feeding it the target data.

Wearable sensor-based human posture recognition is an emerging field with an increasingly wide range of applications. In medical health, for example, our country's population is aging rapidly, and health care for the elderly is critical: if the person is not rescued in time, a typical fall may bring incalculable consequences [5]. Caring for the health of the elderly reflects the level of development and civilization of a country. Daily monitoring of the elderly, detection of abnormal behavior (such as falls and fainting), and alarming are current research hot spots, and all of these depend on human posture recognition. In the smart home area, smart homes offer safety, speed, humanization, and personalization, letting people feel the comfort and convenience that modern technologies bring to the home environment, which accelerates the intelligence of society and of cities [6]. In the coming decades, home intelligence is bound to grow by leaps and bounds, an inevitable choice for people seeking a higher standard of living; as one of its key technical points, wearable human behavior recognition technology is of great research value [7]. Clothing is the largest transaction object of e-commerce, so the demand for virtual fitting and clothing display is also growing, and there is an urgent demand for digital preservation, virtual collection, and try-on of gradually disappearing fine national costumes [8]. To achieve a better virtual fitting and display effect, human posture recognition is one of the critical technologies. In addition, people are no longer satisfied with clothes that are merely warm and beautiful, so the development of various functional clothes is a hot topic for clothing design and production enterprises, covering sports monitoring, anti-theft and anti-robbery, emotion monitoring, positioning, and so on. In the design and development of such clothing, adding relevant sensors for target data collection and monitoring is the critical link [9]. In new textile materials research and development, new textile materials have already been combined with wearable human posture recognition research; considering power consumption, node layout, and other factors, human posture recognition has in turn played a supporting role in the development of new functional textile materials.
Facing the social problem of an aging population, many countries and regions have taken elderly users as the target population, and related design theories have been proposed from the perspective of the innovative design of products and services, including design for accessibility, universal design, and inclusive design. With the development of different design concepts centered on elderly users, the idea of inclusive design is evolving with the world's aging process, and its theoretical content is widely spread as a social trend [10]. The theory of inclusive design builds on and continues barrier-free and universal design and was first proposed in European countries [11]. Its practical application allows products to consider the needs of various users as much as possible in the design process, expand the range of users, balance the needs and interests of users, designers, and suppliers, and treat each level of user group fairly [12]. Therefore, in an aging society, the concept of inclusive design is of great value in meeting the needs of groups such as the elderly and in promoting the development of social equity. With the rise of mobile medical care, the development of new technologies such as intelligent sensing, and the popularity of personalized health concepts, wearable products have rapidly occupied the market, among which wearable products related to health care have become one of the most promising fields. With their wearability and intelligence, medical wearable products can monitor human body data, record and process large amounts of information, and upload the data to the cloud to predict users' health and increase their awareness of self-health management, which has positive significance in reducing morbidity [13]. Medical wearable products can help the elderly monitor their health, predict diseases, and assist in treatment and rehabilitation; they can also help children understand the various functions of their parents' bodies, improve the quality of life of the elderly, and bring convenience to their lives.

Related Works

Nowadays, with the rapid development of technology, especially the mobile Internet, people's communication methods are changing rapidly, and various intelligent interaction methods are changing daily lifestyles all the time. Intelligent interaction involves vision, networking, sensing technology, and other information technologies; it can improve people's sense of immersion, provide novel interaction experiences, and make interaction smoother, more natural, and more humanized. Human posture recognition is the basis of intelligent interaction and a crucial technology for improving the interaction experience and intelligent fashion life [14]. Human pose recognition is a hot topic in pattern recognition and has many applications; research in this field follows the international frontier and has achieved specific results both in basic study and in industrialization [15]. Human behavior recognition technology based on visual images started earlier, its theory is relatively mature, and its accuracy is high; for example, a human behavior recognition system developed on the DHN database reached a recognition rate of 90-95%. The accuracy of human posture recognition based on Kinect video is less satisfactory, and Kinect-based recognition is limited to the field of view that the Kinect camera can capture [16].
In human posture recognition research, some of the better motion capture devices are based on optical sensors. One of the most significant limitations of such optical devices is the effect of ambient light on the collected data; their use is limited to indoor environments, and there are size restrictions as well.

Fall detection technologies have been widely studied for elderly fall protection. At present, they fall into three categories: video-based fall monitoring, audio-based fall monitoring, and fall detection based on wearable devices [17]. In video-based approaches, cameras are installed in the activity area of the elderly, and coarse elliptical models and particle filters are used to process the captured human images to determine the occurrence of abnormal behavior. Saco-Mera and Hernández-Patiño proposed an improved detection algorithm combining the human body aspect ratio, effective area, and center change rate to optimize video-based fall detection; a fall is declared when all three conditions are met within a specific time, effectively reducing the false judgment rate [18]. Wearable fall detection technology can be carried around, and the algorithm determines whether the human body has fallen by monitoring the movement status in real time. One device that can be placed on the chest combines acceleration sensors, tilt sensors, and gyroscopes, and uses the change in acceleration, the change in the coordinates of human activity, and the change in the tilt angle of the chest to detect a forward fall: if the chest tilt changes by 70°, a forward fall can be assumed. A combined fall detection system using an acceleration sensor, a pressure sensor, and a magnetic sensor judges a fall by whether the changes in acceleration, pressure, and tilt angle reach threshold values; if a fall is detected, GPS coordinates can be sent in time via Bluetooth, shortening the rescue time, and experiments show the system has a certain degree of effectiveness. In another system, the grip of a walking aid is fitted with a tactile force sensor while the user wears an accelerometer and gyroscope at the waist; the tactile force sensor captures the interaction force between the user's hand and the handle of the training device, and the accelerometer and gyroscope capture the acceleration and angle of the user's torso. A BP neural network is used to fuse the three types of information detected and to calculate the probability of a fall state occurring [19]; the user is considered to be falling when the probability exceeds a predetermined empirical value. This method is complicated and time-consuming in its data processing. Distinguishing normal daily activity from fall hazards has attracted much attention from experts and scholars: acceleration sensors can capture changes in acceleration, but it is not easy to identify in real time whether a movement is a normal squatting activity or indeed a fall hazard. Some developed countries are already in a relatively advanced position. Current research has focused on detecting falls that have already occurred and sending distress messages, while there are few studies on identifying and warning of falls in progress. He et al.
proposed a multi-agent system for home care services to enhance the independent and autonomous mobility of the elderly [20]. The method comprises seven reliable and flexible agent systems capable of detecting abnormal behavior and issuing emergency alerts by sensing the user's environment, location, and posture. In the fall detection experiments of that study, prediction accuracies of 72%, 88%, and 91.33% were achieved using a machine learning agent system, an expert knowledge agent system, and a meta-prediction agent system, respectively, covering postures and behaviors such as tripping, fainting, slipping from a chair, jumping onto a bed, and standing up quickly. Lu et al. measured a fall detection system using the posture angle, the three-axis acceleration of the posture, and attitude and heading reference system modules, and proposed a neural network-based fall detection method [21]; the technique can distinguish daily behavioral activities including walking, jumping, sitting, bending, squatting, and lying down. Liu et al. focused on fall detection and motion tracking for the elderly and integrated different types of motion classification into one system, thus meeting the needs of wearable devices [22]. A comparative study was conducted by collecting and analyzing fall data, including four different fall trajectories (front, back, left, and right) and three normal activities (standing, walking, and lying down), for recognition and detection with different machine learning algorithms. The results show that a simple Bayesian classification algorithm can accurately classify the various actions and postures and detect falls and near-falls.

Smart Clothing Model Design for the Elderly Based on Intelligent Detection of Lower Limb Posture Antifall Sensors

Intelligent Detection Model Construction for the Lower Limb Posture Fall Prevention Sensor. Human posture data acquisition is divided into two parts: the hardware part, which is responsible for collecting the human posture data and transmitting it to the PC via a Wi-Fi module, and the software part, which receives and stores the posture data collected by the hardware and performs the data calculations via a PC-based host program. The system framework of the human posture data acquisition device is shown in Figure 1. The MEMS sensor, made with microelectronic and micromechanical processing technology, is a miniature electromechanical system. In athletes' daily training, MEMS sensors can be used to perform 3D human motion measurements and record each movement; coaches analyze the results and compare them repeatedly to improve the athletes' performance. With the further development of MEMS technology, the price of MEMS sensors will fall, allowing wide use in public gyms. Compared with traditional sensors, they are small, lightweight, easy to carry, and low cost, making them suitable for mass production. Considering that real-time detection requires data transmission to a computer for calculation, wired communication would not only make the whole device redundant and cluttered but would also hinder human activities, significantly limiting the range and amplitude of movement; to avoid these drawbacks, we use a wireless module for data transmission [23]. The role of the wireless module is to replace the serial connection wire between two devices and achieve wireless data transmission.
For example, with two microcontrollers each connected to a module, serial transceiver operation can be performed without the microcontrollers having to control the modules, which is very convenient for practical wireless communication. The modules are generally used in pairs and transmit data in half-duplex mode; the baud rates and communication channels of the two modules must be identical.

The raw motion data collected by MEMS sensors inevitably contain noise due to environmental factors and manufacturing processes, which affects posture pattern recognition and usually requires noise suppression. A Butterworth filter's response is very smooth in the passband and decreases slowly in the stopband until it reaches 0. The frequency of human motion is relatively low, generally not more than 20 Hz and varying mostly between 0 and 5 Hz, while the sensor is easily affected by environmental noise and high-frequency interference during detection; to eliminate this noise, the low-frequency signal is passed while the high-frequency signal is attenuated or suppressed. The Butterworth filter has the maximally flat amplitude response in the passband, has been widely used in the field of communication, and also has wide use in electrical measurement, where it can be used to condition the detected signal. The design process of a Butterworth low-pass filter is as follows.

(1) The essence of converting digital filter specifications to analog filter specifications lies in the transformation of the technical indicators {w} of the digital filter into the indicators {a} of the analog filter; the two common transformation methods are the impulse response invariant method and the bilinear z-transformation method. In the impulse response invariant method, the digital system's transfer function is obtained by sampling the unit impulse response g(t) of the analog system,

g(n) = g(t)|_{t=nT}, (2)

where T is the sampling period. This method yields a stable transformation of the analog transfer function into the digital one, but with a low sampling rate, overlap may occur in the frequency domain and produce aliasing distortion. This does not happen with the bilinear z-transform, whose mapping relationship is

s = (2/T)(1 − z^{-1})/(1 + z^{-1}). (3)

The mapping between the frequency a of the analog filter and the frequency w of the digital filter in Equation (3) is nonlinear: as w goes from 0 to π, a changes from 0 to +∞, and as w goes from 0 to −π, a changes from 0 to −∞. Since a is proportional to the tangent function,

a = (2/T) tan(w/2), (4)

the whole analog frequency axis is mapped onto the unit circle. So, given the technical indicators w_p, w_s, a_p, and a_s of the digital filter, Equation (4) translates them into the technical indicators of the analog filter, and the Butterworth filter model is then designed following the specifications so obtained.
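A minimal Python sketch of the kind of Butterworth low-pass filtering described above is given below, using SciPy's bilinear-transform-based design. The 100 Hz sampling rate is an assumption, and the 5 Hz cutoff follows the observation above that human motion energy is concentrated below about 5 Hz.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0   # assumed sensor sampling rate, Hz
FC = 5.0     # cutoff: human motion energy lies mostly below ~5 Hz

# 4th-order Butterworth low-pass; SciPy designs it via the bilinear transform
b, a = butter(N=4, Wn=FC / (FS / 2), btype="low")

def denoise(signal):
    """Zero-phase filtering of one raw MEMS sensor axis."""
    return filtfilt(b, a, signal)

# example: clean a noisy synthetic accelerometer trace
t = np.arange(0, 2, 1 / FS)
raw = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(t.size)
smooth = denoise(raw)
```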
We have made an exoskeleton with a simple structure, adjustable in length and able to bend and twist at the joints of the lower extremity, equipped with sensors; it is a mechanical design worn on the lower human extremity. Research on lower limb exoskeletons began very early and has received much attention; owing to its evident and prominent role in rehabilitation, the field has developed rapidly as people grow more concerned about quality of life. The advantage of an external structure that provides confirmation, support, and protection to all parts of the human body is that it combines both support and protection functions [24]. In addition, the most popular and most researched approach worldwide is the combination of lower limb exoskeleton technology and rehabilitation training: appropriate control methods drive the exoskeleton so that the mechanical structure worn on the patient's body guides the patient through the corresponding rehabilitation movements, ultimately restoring the patient's ability to move. According to the needs of our research, the lower limb exoskeleton adopts a retractable slide structure made of stainless steel; it is retractable mainly to accommodate wearers of different body sizes, so that each wearer can adjust it to a suitable length and move freely. The whole structure is divided into two sections, carrying two composite sensors on the thigh section and two composite sensors on the calf section. To meet the needs of human activities, the degrees of freedom at the connected joints are relatively high, generally five degrees of freedom. The human lower limb has three movable joints: hip, knee, and ankle. The movements of the lower limb mainly include (1) a large forward swing of the thigh, a small backward swing, and a lateral swing; (2) bending and slight twisting of the knee joint; and (3) twisting of the foot around the ankle joint in the horizontal plane and in the vertical plane.

Model Design of Bright Clothing for the Elderly with Fall Prevention and Intelligent Detection. The working principle of the fall prevention device is to detect the motion signals of the human body through the acceleration sensor, use the posture recognition algorithm to quickly determine the current motion posture, and, if a risk of falling is detected, open the protective airbag in time, thereby reducing fall injuries in the elderly [25]. The fall injury prevention device therefore places high demands on the real-time accuracy of the posture recognition algorithm. At the same time, the antifall device should follow the design principles of being small, lightweight, and easy to wear, to minimize the impact on the user's life. Its hardware design mainly consists of the main control chip, power chip, motion sensor, airbag, and Bluetooth serial module. The power chip supplies power to the device; the motion sensor converts the user's motion parameters into electrical signals and transmits them to the main control chip; the main control chip collects and processes the motion sensor signals in real time and determines whether the user is in the process of falling, and if so, issues a warning signal and triggers the airbag; the airbag component inflates in time upon receiving the trigger signal to protect the user's safety; and the Bluetooth serial module is used in the experimental research stage to upload data to the computer.
In the standard configuration of the product, the function of the Bluetooth serial module is to realize data communication between the antifall device and a cell phone app or computer software, so that the cell phone can send a distress signal or perform an online firmware update. The overall structure of the fall injury prevention device is shown in Figure 2.

The development of bright clothing is based on functional clothing, integrating wearable technology into clothing to form its distinguishing characteristics, namely the ability to store, transmit, and transform signals. Realizing this ability requires a combination of innovative material research and development, intelligent design innovation, improved manufacturing, and other technologies to handle the complexity of the design process. Although the design of intelligent clothing can draw on the design methods of functional clothing, it must also consider the intelligent effect and the difficulty of keeping the clothing comfortable when electronic components and other devices are combined with the garment, so it is essential to propose and improve a design framework for bright clothing. By establishing an intelligent clothing design framework system and taking it as a guide, with the help of scientific research and development, investment and sponsorship by enterprises, the ingenuity of designers, national policy support, and industry norms, and by jointly building a standardized intelligent clothing industry chain, we can achieve breakthrough growth of bright clothing, create a better platform for innovative clothing, and promote personal health care, the diversification of intelligent clothing, and the harmonious development of the ecological and economic environment.

Research on the design framework of bright clothing starts from the intelligent needs of clothing and, on the basis of meeting user needs and with the help of high-tech means, incorporates clothing design science; information feedback is obtained through multidirectional verification, performance evaluation, and user experience of the product, and the product is improved accordingly, so as to design intelligent clothing that better meets user requirements. In this paper, the design framework of smart clothing is proposed with reference to the development and design model of protective clothing; the framework is shown in Figure 3. The design of intelligent clothing can be divided into the following steps. (1) Perceive the external environment of the human body and the needs of its internal state, develop new functions, and determine the intelligent needs of the clothing. After the emergence of functional clothing with protective effects in special environments, people also hoped that clothing could respond to changes in the external environment or internal state so as to maximize its value; clothing that incorporates flexible fabric sensors for life protection and monitoring can, through the collection and analysis of various parameters of the human body's state, discover potential damage and provide early warning, so that the user's health is protected.
Thus, by simulating the human life system, we can identify the perceived needs of the human body's external environment and internal state; analyze similar products on the market; research and survey users; identify and understand the target users; conduct a multifaceted assessment of user perception, product experience, and product acceptance; and find the users' physiological needs for clothing in terms of comfort, functionality, aesthetics, product value, and price, as well as their psychological needs, usage habits, and usage environment. From these user-driven needs, new functions can be developed, the intelligent needs of the clothing determined, and preparations made for realizing the clothing's intelligence. (2) Realize the clothing's perception, feedback, and response functions without affecting its aesthetics and comfort. The means of realization fall into two major types. The first type uses smart clothing materials: when the external environment changes or a stimulus is applied, the fiber length, shape, color, temperature, and other properties change correspondingly; such materials include shape memory materials, phase change materials, and so on. The second type embeds rigid or flexible modules into the clothing people wear daily, adapted to the characteristics of the clothing itself [26]. These modules are organically integrated into the clothing and use sensors to sense human physiological information; through a feedback mechanism, the information is transmitted to the intelligent module so as to track the various condition indicators of the human body, and the information can even be analyzed so that the wearer takes corresponding improvement measures, achieving the intelligent effect of the clothing. (3) Combine the intelligent module with the garment through clothing design to complete the intelligent clothing design. This step mainly considers how to combine the intelligent module with the garment. To realize the organic combination of intelligent textiles and clothing using innovative clothing materials, there are two methods: one is to weave intelligent fibers into ordinary fibers or interweave them with standard yarns, and the other is to composite or modify the material through dyeing and finishing processes. There are likewise two methods to organically combine modules with clothing: the first is modularization technology, which embeds electronic components into clothing textiles as accessories of the garment; the second, based on new flexible fiber technology, combines the required electronic components, equipment, and sensors with the textiles while maintaining the original softness and comfort of the clothing.

Analysis of the Intelligent Detection Model of the Lower Limb Posture Antifall Sensor. Falls are common in older adults, but they are challenging to define strictly; therefore, falls are usually compared with people's daily behavioral activities. That the hardware obtains accurate posture data and that the algorithm correctly solves for the posture angle are both crucial to the system design. This chapter presents the basis for determining falls by analyzing the difference between human fall behavior and other everyday behaviors.
The regular and orderly operation of the hardware provides essential services for the software and algorithms, so functional testing of the hardware is a crucial part of the process. Whether the hardware circuit has imperceptible problems or errors and whether it meets the expected standards also provide guidance for subsequent improvements of the circuit design. Hardware-related functional tests mainly cover the power supply, sensor performance, and Bluetooth. During hardware circuit testing, the several grounds in the circuit (such as digital ground, analog ground, and power ground) should first be tested for connectivity before power is applied, and the PCB should be checked for obvious soldering problems, confirming that there are no errors before the circuit is powered on. The power supply is a prerequisite for the hardware circuit to work; whether it is stable and whether its ripple is within an acceptable range are particularly critical to the stability of circuit operation and determine whether the circuit can work effectively over the long term. It is especially critical that the sensor reads the raw attitude data correctly and that the DMP function works properly. Reading a device ID of 0x71 for the MPU9250 sensor and 0x48 for the magnetometer indicates that the devices are correct. After initialization of the DMP, the FIFO sampling rate is set and the DMP firmware is loaded and enabled; if the DMP memory can be read and written correctly, the DMP has been initialized successfully. When the sensor is at rest, the measured acceleration on the z-axis should be g (about 9.8 m/s²); the actual measurement results comprise the raw data measured by the sensor and the results of converting the 16-bit data to decimal values. The angular velocity measured by the gyroscope may have a significant axial error due to installation errors and similar causes; the angular velocity measured at rest and a comparison with its corrected values are shown in Figure 4. The calibration process of the magnetometer is as follows: the sensor is rotated continuously through each orientation, enough data are collected, the deviation values on the three axes are derived from the measurements, and the calibrated data are plotted for comparison. The calibrated magnetometer results are shown in Figure 5.

The design uses a Bluetooth-enabled controller to control the circuit and the sensor's data transmission. A balun filter with a ceramic antenna is used to match the 50 Ω impedance of the CC2541 so that data can be sent and received in the 2.4 GHz band. A USB dongle and packet sniffer software were used to capture the board's broadcast data wirelessly and thereby test the Bluetooth function and performance of the board. In the capture tests, the numbers of error packets were 9, 15, and 14, with false alarm rates of 0.14%, 0.22%, and 0.23%, respectively, showing good wireless transmission. In addition, the effective transmission distance of the Bluetooth link was tested: without obstacles, connection and data transmission were possible within 5 m with a low false alarm rate.

The preprocessing method used in constructing the fall detection model is the same for the 90 normal and 1000 abnormal gait behavior recordings collected: the 3-axis acceleration and 3-axis angular velocity data are processed to obtain their corresponding vector magnitude sums.
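The preprocessing implied above, static gyroscope bias removal followed by collapsing the 3-axis signals into one-dimensional magnitude features, can be sketched in Python as follows; the rest-window length and array shapes are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def remove_gyro_bias(gyro, rest_samples=200):
    """Subtract the mean of an initial rest window (the static
    calibration described above); gyro: (T, 3) angular velocity."""
    bias = gyro[:rest_samples].mean(axis=0)
    return gyro - bias

def magnitude_features(acc, gyro):
    """Collapse the 3-axis acceleration and angular velocity into the
    one-dimensional vector magnitude signals used as model input;
    acc, gyro: (T, 3) arrays."""
    svm_acc = np.linalg.norm(acc, axis=1)
    svm_gyr = np.linalg.norm(gyro, axis=1)
    return svm_acc, svm_gyr
```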
The different gait characteristics cause this difference during walking, which differs from the trend of continuous movement in a single direction. Still, the alternating fluctuations are relatively fixed, with most regular cycle changes occurring within 3 seconds, and the cycle length of each gait varies [27]. The cycle is broadly fixed because people develop gait habits when walking, but the gait in each cycle shows slight variations. Normal gait and five different abnormal gaits show varying degrees of variation in the gait cycle, because disease reduces human locomotor ability and makes abnormal gait differ from normal gait when walking. The key issue in fall warning system research is to distinguish normal and abnormal gait effectively and, at a later stage if necessary, to classify the abnormal gaits, so it is feasible to analyze and discriminate human gait using intelligent sensor detection to establish a fall warning model.

Research on the Design and Implementation of Intelligent Clothing for the Elderly Based on Intelligent Detection of Lower Limb Posture Antifall Sensors. In this paper, the Mann-Whitney U test was used to test whether there was a significant difference in the maximum impact force when cushioning fabric was added at the hip and knee; the data were analyzed and processed with SPSS. The final result was P = 0.1 > 0.05, no significant difference, indicating that the force-attenuation ability of the cushioning fabric, for a specific range of forces on the knee and hip during a fall, is not affected by the protected body part [28]. With no protective material (ignoring acceleration direction), the maximum acceleration when the human body falls sideways is much larger than when it falls forward; by Newton's second law, the maximum impact force on the hip is then greater than that on the knee. Regression analysis was used to test the correlation between fabric size at the hip and knee and the maximum impact force on the critical parts of the human body: for the hip position, P = 0.046 < 0.05, which is significant; for the knee position, P = 0.097 > 0.05, which is not significant. The maximum impact forces under different conditions at the hip and knee are shown in Figure 6. For hip protection, the maximum impact force when padded with three-dimensional knitted fabric is smaller than when there is no cushioning material, meaning the three-dimensional knitted fabric has a definite cushioning effect (Figure 7). In summary, when the hip protection material measures 24 × 10 × 7 cm, the maximum impact force is smallest and the protective effect is better; when the knee protection material measures 17 × 8 × 12 cm, the maximum impact force is smallest. After integrating the fall detection and warning module with the clothing, the model's results can be obtained; the sensitivity and specificity of the integrated model are around 98% of those of the model before integration, varying slightly above and below. It can be concluded that the clothing material and looseness, within a specific range, have little effect on the model.
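The Mann-Whitney U test above was run in SPSS; as an illustration of the same test in code, here is a sketch using scipy.stats. The force values are hypothetical placeholders, not the paper's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical maximum-impact-force samples (N) with cushioning fabric
hip_forces = [812, 790, 845, 801, 828]
knee_forces = [798, 833, 779, 851, 806]

stat, p = mannwhitneyu(hip_forces, knee_forces, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")  # p > 0.05: no significant difference
```

The Mann-Whitney test is a reasonable choice here because it compares two independent samples without assuming normally distributed impact forces.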
On the one hand, the jitter generated by the garment during human movement lies within a particular range, and the impact of jitter on the model within that range is negligible; on the other hand, the sensor adopts digital filtering technology to reduce the effects of external noise on the data effectively, and the model synthesizes the 3D acceleration and 3D angular velocity into a one-dimensional metric during data processing, so that the jitter generated by the garment is filtered out during data acquisition and processing. The relative motion between the module and the human waist after good integration with the garment therefore has a negligible impact on the model, and it can be concluded that, within a specific range, different thicknesses of clothing fabric and different loose-fitting clothing styles have almost no impact on the model. The device and algorithm recognition model designed above can discriminate human fall posture in real time with a good detection effect, which confirms that the support vector machine-based human posture recognition model has practical value. The results of the fall-proof intelligent detection clothing recognition (test sets 2 through 7) are shown in Figure 8.

Conclusion. Older people are at high risk of falls, and some falls cause serious injury or even death, affecting the physical and mental health of older people while placing a heavy burden on families and society; falls among older people have become an issue of social importance. Human posture recognition technology has been applied in many fields, and the requirements for recognition accuracy in these fields keep rising. Therefore, to facilitate the study of human posture recognition, this topic designs and implements a human posture recognition system based on wearable sensors, proposes intelligent sensor detection for human posture recognition, and achieves an excellent accuracy rate. This paper uses the quaternion method to update the posture information of the human body. It uses three algorithms (two-step extended Kalman filtering, complementary filtering, and DMP combined with Kalman filtering) to compensate the angular-velocity posture information with the acceleration and magnetic field strength measured by the sensor, solves for the posture angle of the measured object, and then proposes the basis for fall determination; the designed fall detection system is tested. The feasibility of the prototype is verified by testing the daily behavioral activity data and fall behavior data of the elderly, and the algorithms' effectiveness is demonstrated by processing and analyzing the collected data with the three data-fusion algorithms. This paper studies an intelligent garment for fall detection and early warning of the elderly by integrating the fall detection and early warning module into the garment as a carrier. The experimental scheme investigates the influence of garment material and looseness on the model. The sensitivity and specificity of the integrated model are around 98% of those of the model before integration with the garment. The final design of the smart garment is based on the final model before integration.
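The conclusion above credits a support-vector-machine posture recognition model, but the paper does not specify its features or kernel. A minimal sketch of such a classifier with scikit-learn; the features, labels, kernel, and parameters here are all placeholder assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: one row per gait window, columns such as
# peak acceleration magnitude, peak angular-velocity magnitude, posture angle
rng = np.random.default_rng(0)
X = rng.random((200, 3))             # placeholder features
y = rng.integers(0, 2, 200)          # 0 = normal gait, 1 = fall / abnormal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Scale features, then fit an RBF-kernel SVM (an assumed configuration)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```

With real windowed sensor features in place of the random placeholders, the same pipeline yields the sensitivity/specificity style of evaluation reported for the integrated garment.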
In this paper, although the two models were established separately through many experiments, and the experiments verified the feasibility of the models, problems of individual variability remain. The number of samples is not abundant, and when conditions allow, as many subjects as possible should be recruited to repeat the experiments in order to improve the accuracy of the models. Meanwhile, fall detection methods with higher recognition accuracy should be explored in depth in subsequent research. In studying adaptive impedance control fall prevention systems, this paper only discusses forward falls, which have the highest incidence, and is therefore not comprehensive.

Data Availability. The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest. The author declares that there are no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2022-09-15T15:15:11.255Z
2022-09-12T00:00:00.000
{ "year": 2022, "sha1": "138012b7525fbe4ef88bf1e4af9354c347064a46", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/js/2022/8049766.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "770e44febd7cc2b46e6e12e617cb627f770c6113", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
233190128
pes2o/s2orc
v3-fos-license
Machine learning applied to X-ray tomography as a new tool to analyze the voids in RRP Nb3Sn wires The electro-mechanical and electro-thermal properties of high-performance Restacked-Rod-Process (RRP) Nb3Sn wires are key factors in the realization of compact magnets above 15 T for future particle physics experiments. Combining X-ray micro-tomography with an unsupervised machine learning algorithm, we provide a new tool capable of studying the internal features of RRP wires and unlocking different approaches to enhancing their performance. Such a tool is ideal for characterizing the distribution and morphology of the voids that are generated during the heat treatment necessary to form the Nb3Sn superconducting phase. Two different types of voids can be detected in this type of wire: one inside the copper matrix and the other inside the Nb3Sn sub-elements. The former type can be related to Sn leaking from the sub-elements into the copper matrix, which leads to poor electro-thermal stability of the whole wire. The second type is detrimental to the electro-mechanical performance of the wires, as superconducting wires experience large electromagnetic stresses under high field and high current conditions. We analyze these aspects thoroughly and discuss the potential of the X-ray tomography analysis tool to help model and predict the electro-mechanical and electro-thermal behavior of RRP wires and optimize their design. The intermetallic compound Nb3Sn is an A15 phase material with a critical temperature (Tc) of about 18 K 1, and its upper critical magnetic field (Bc2) can reach 30 T 2. These characteristics, combined with a high critical current density (Jc), make Nb3Sn the workhorse in superconducting magnets operating in the 10 T to 20 T range. Nb3Sn wire technology has been under development since the 1960s, but only in the last two decades has conductor development made great progress, thanks to the research program for the International Thermonuclear Experimental Reactor (ITER) 3, presently under construction. Since the 1990s, the High Energy Physics (HEP) community has taken an interest in the development of Nb3Sn wires for post Large Hadron Collider (LHC) accelerators. The effort was led by the European Organization for Nuclear Research (CERN) and the US LHC Accelerator Research Program (LARP) 4, and it culminated in the adoption of Nb3Sn wires as the baseline for the High Luminosity LHC upgrade (HL-LHC) 5, which will include two pairs of dipoles, operating at 11 T, and 24 quadrupoles based on Nb3Sn. Presently, CERN is investigating various designs for a new proton-proton collider with an energy of 100 TeV in the center of mass, the so-called Future Circular Collider (FCC) 6. The maximum energy of a circular collider is determined by the maximum magnetic field of the dipole magnets and by the radius of the accelerator. Assuming a circumference of 100 km, the dipole magnetic field should be about 16 T to achieve the target energy. Even though Nb3Sn wire technology has largely improved over the years, FCC requires new steps forward in Nb3Sn technology to achieve a 16 T magnetic field in its dipoles 7. Key factors are a high non-Cu critical current density, i.e. Jc > 1500 A/mm² at 16 T and 4.2 K, tolerance to mechanical loads, and good thermal and electrical stabilization in the form of high purity copper (Cu) surrounding the superconducting sub-elements.
Recently, four different dipole designs were developed for FCC in the framework of the Horizon 2020 European Circular Energy-Frontier Collider (EuroCirCol) study 8: Cosine-theta 9, Block-Coil 10, Common-Coil 11 and Canted Cosine-Theta 12. The analyses performed on these designs highlight the large mechanical loads generated by electromagnetic forces on the Nb3Sn wires, reaching peak stresses of 150-200 MPa at 16 T 13. This large stress may lead to the degradation of the electrical transport properties of the brittle Nb3Sn sub-elements and, for this reason, the electro-mechanical properties of Nb3Sn wires are a fundamental factor for the FCC dipoles' design. Samples description. In this study, the emphasis is on the voids present in Nb3Sn RRP wires. Two types of wire configurations were investigated: 108/127 and 132/169. These numbers define the RRP configuration as the number of superconducting sub-elements over the total number of restacked elements. Each design was used for a different study goal. 108/127 wires have been used for the 11 T dipoles 28 and for the Low-β quadrupole magnet (MQXF) of the HL-LHC upgrade 29, at diameters of 0.7 mm and 0.85 mm, respectively. Two different versions of this design were analyzed, both 0.85 mm in diameter: S20 is a Ta-doped wire with normal Sn content, corresponding to a Nb:Sn molar ratio of 3.4, and S21 is a Ti-doped wire with reduced Sn content, i.e. a Nb:Sn molar ratio of 3.6. The different Nb:Sn molar ratios allowed us to study the correlation between Nb content, Sn leakage and the presence of voids in the copper matrix. 132/169 wires were developed in the framework of the European Coordination for Accelerator Research and Development (EuCARD) 30 and LARP 31, and they have been used for studies related to FCC. The three samples are reduced-Sn, Ti-doped wires with different diameters. The different diameters, and consequently different sub-element sizes, allowed us to investigate how the distribution of voids in the sub-elements is affected by the dimension of the sub-element itself. Sample characteristics and heat treatments are listed in Table 1, and sample cross sections are shown in Fig. 1. X-ray tomography. X-ray micro-tomography has been performed at the beamline ID19 of the ESRF in Grenoble, France. Synchrotron micro-tomography is an exceptionally powerful tool to study the porosity of materials on the µm scale. X-ray synchrotron radiation has already proven well suited to studying the internal features of superconducting wires in several studies 25,32. Furthermore, the use of narrow bandwidth radiation helped us to improve the contrast between Nb3Sn and Cu voids 33 and prevented beam hardening effects, the phenomenon that occurs when a polychromatic X-ray beam passes through an object, resulting in selective attenuation of lower-energy photons 34. X-ray micro-tomography allows the internal structure of a composite to be detected and defined non-destructively, without influencing the sample. The sample is exposed transversally to the X-ray beam and an image of the transmitted beam is recorded with a detector placed behind the rotating sample holder. In our case the sample was rotated over 360° in 0.012° steps, i.e. 30,000 projection images were acquired per sample. These projections, once stacked into a three-dimensional (3D) X-ray absorption map, provide a 3D reconstruction of the sample volume with its internal characteristics.
The photon energy was 89 keV and the detector had a 2560 × 2160 pixel resolution with an equivalent spatial sampling of about 0.7 μm/pixel (empirically verified on the wire diameter). The final output of the measurements is a set of two-dimensional (2D) images of the measured Nb3Sn samples, where each image corresponds to a cross-section of the wire, see Fig. 1. The length of wire analyzed in this way was about 1.5 mm, equivalent to 2160 images per sample. Results. Void detection and analysis. Tomography post-processing began with a dedicated algorithm written in MATLAB and already tested in 21. The MATLAB tool analyses the 2D tomography cross-sections one by one. To detect the voids, each pixel is compared with a color threshold that is calibrated on the slice's average brightness. The voids are saved as binary maps where 1 is a void pixel and 0 is everything else. In Fig. 2, a section of an analyzed tomography is shown with the detected voids in red. Two different types of voids are visible: those located in the sub-elements and those in the Cu matrix. The MATLAB tool could not separate the voids by location and thus could not discriminate the voids in the sub-elements from those in the Cu matrix. In the case of RRPs, it is fundamental to separate Cu voids from sub-element voids because they are generated differently and can be responsible for different phenomena. Therefore, we programmed an entirely new, more flexible tool based on unsupervised machine learning, implemented in Python. The separation of the wire components in the tomography was done using the well-established k-means method 35, an unsupervised classification algorithm developed for clustering. The underlying idea of clustering is to divide a given set of data into a specific number of groups based on certain patterns or similarities present in the data; in our case the groups (sub-elements, voids and Cu matrix) were defined based on the pixel brightness of the tomography. The k-means algorithm divides the given set of data into k disjoint clusters equivalent to the groups. The analysis of an image, i.e. one slice of the tomography, can be summarized as follows: in the initial step, k centers of brightness, c_k, are generated based on the picture's brightness scale. The initial centers are arbitrarily generated by the algorithm; the operator can control the number of times the k-means algorithm is run with different initial c_k, the maximum number of iterations per run, and the relative tolerance used to declare convergence 36. For every pixel of the image, the Euclidean distance on the brightness scale, d, between a center c_k and the pixel brightness p(x,y) is calculated as d = |p(x,y) - c_k| (Eq. (1)). Then, the pixels are assigned to the nearest center based on the calculated brightness distance. When all pixels have been assigned, a new set of centers is generated as the mean brightness of each cluster, c_k = (1/n_k) Σ p(x,y), the sum running over the pixels assigned to the k-th cluster (Eq. (2)), in which n_k denotes the number of observations in the k-th cluster 37. The process is repeated until the tolerance is satisfied, i.e. there are no significant changes in the centers' positions. As a last step, the clusters of pixels are saved. More on the k-means algorithm can be found in 38. The k-means algorithm was implemented using the Python scikit-learn library 39. After applying the k-means algorithm, further processing was done in order to obtain, as a final result, three separated binary images: Nb3Sn sub-elements, Cu matrix and voids.
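The passage above states that k-means was implemented with scikit-learn on pixel brightness, with operator control over the number of runs, iterations per run, and tolerance. A minimal sketch of that segmentation step; the function name and the mapping of the darkest cluster to voids are assumptions, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_slice(img, k=3, n_init=10, max_iter=300, tol=1e-4):
    """Cluster a grayscale tomography slice by pixel brightness into k
    groups (e.g. sub-elements, Cu matrix, voids) and return one binary
    map per cluster, ordered from darkest to brightest center."""
    p = img.reshape(-1, 1).astype(float)           # one brightness value per pixel
    km = KMeans(n_clusters=k, n_init=n_init,
                max_iter=max_iter, tol=tol).fit(p)
    order = np.argsort(km.cluster_centers_.ravel())  # darkest first
    labels = km.labels_.reshape(img.shape)
    return [labels == c for c in order]              # voids assumed darkest

# Usage on one 2D slice of the absorption map:
# voids, subelements, cu_matrix = segment_slice(slice_2d)
```

Running this per slice and stacking the binary maps reproduces the three separated volumes described in the text, from which the Cu voids can then be obtained by subtraction as explained next.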
Finally, the copper voids were obtained by subtracting the sub-elements from the void maps. The 3D reconstruction of the voids for wire S20, differentiated by type, is shown in Fig. 3. The larger volumes are in the sub-elements, while the small voids are in the copper shell and core; some copper voids lie among the sub-elements and are an indication of Sn pollution. The void volumes were calculated from the 3D void reconstruction, knowing the pixel/volume ratio. Separating the voids by type, the distribution as a function of volume shows that most copper voids are smaller than the sub-element voids but can be far more numerous; Fig. 4 reports this for all the examined wires. In particular, the strong variability of the Cu void distributions among the samples, in comparison to the more homogeneous sub-element void distributions, highlights that the sub-element voids and the Cu voids have different origins. Copper voids, Nb:Sn ratio and correlation with the residual resistivity ratio. From the direct observation of S20 in Fig. 3, it is evident that the morphology of the copper and sub-element voids is different. The sub-element voids have larger dimensions than the Cu voids and are present in all sub-elements, whereas Cu voids are mostly concentrated in the outer copper shell and in the central core, with a few larger voids between the sub-elements. Unlike S20, the 3D reconstruction of S21 showed copper voids only in the outer copper and a few in the central core. Therefore, two types of Cu voids can be observed in S20: small voids in the Cu shell or core, generated by the coalescence of micro defects trapped during the wire assembly, and larger Kirkendall voids between sub-elements, due to Sn pollution generated by Nb barrier failures. The former type is present in all the wires and depends on the assembly procedure; the latter depends on factors such as Nb barrier thickness and heat treatment. Since they are independent, we discuss the two types of voids separately, starting from the Kirkendall voids generated by Sn diffusion. As said, the analyzed 108/127 wires have a similar design but different Nb:Sn molar ratios. It has been shown in 41 that normal Sn content RRP wires typically have a lower Residual Resistivity Ratio (RRR) than reduced Sn content wires. Our RRR measurements are in agreement with this result: S20 has RRR = 89 while S21 has RRR = 274, indicating a higher Cu purity for the latter. Since a low RRR can be an indication of Sn presence in the Cu matrix, a manual investigation of macroscopic Nb barrier disruptions was performed on the wires' tomography, looking for barrier failures and related Kirkendall voids. Studying the tomography of S20, several barrier ruptures were detected. In the presence of Nb barrier breakages, the voids extend from the sub-element to the Cu matrix, crossing the Nb3Sn ring and allowing visual detection of the phenomenon. In Fig. 5, a collection of barrier disruptions is shown. Thanks to the 3D reconstruction of the tomographies, it is possible to appreciate the exact spot where the barrier fails, generating the Sn leakage, in both the transversal and longitudinal directions. In general, it is expected that heavily deformed sub-elements are more prone to barrier failure.
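The void volumes above are computed from the 3D binary reconstruction via the pixel/volume ratio; the paper does not detail the step, so this sketch assumes a connected-component labeling pass (via scipy.ndimage, an assumption) and uses the ~0.7 µm/pixel sampling quoted earlier:

```python
import numpy as np
from scipy import ndimage

def void_volumes(void_mask, voxel_volume_um3):
    """Label connected void voxels in a 3D binary map and return the
    physical volume of each separate void."""
    labels, n_voids = ndimage.label(void_mask)   # 3D connected components
    counts = np.bincount(labels.ravel())[1:]     # voxels per void; drop background
    return counts * voxel_volume_um3

# With 0.7 um/pixel isotropic sampling, one voxel is 0.7**3 ~ 0.343 um^3
# volumes = void_volumes(voids_3d, 0.7 ** 3)
```

Applying the same labeling to the sub-element mask and to its complement is one way to split the volume histogram by void type, as done for Fig. 4.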
The 108/127 design has a hexagonal pattern of sub-elements and, during the drawing process, the sub-elements in the corners are more subject to deformation stress, as can be observed in the cross sections reported in Fig. 1. Despite the limited statistics, the results show that about 20% of the broken barriers are in the corners of the wire, indicating that the 108/127 design can be particularly susceptible to this phenomenon. This qualifies our tool to identify problematic sub-elements in the wire arrangement and can be of use to manufacturers to optimize or tailor the sub-element barrier thickness depending on the sub-element arrangement. As final evidence of the presence of Sn in the Cu matrix, a SEM-EDX analysis was performed on five S20 cross sections. In one of the cross sections it was possible to locate a Cu void in contact with one of the highly deformed sub-elements, see Fig. 6. The EDX analysis shows the presence of Sn in the Cu and, in addition, that the Sn concentration increases approaching the void. Neither tomography nor SEM-EDX analysis performed on S21 samples was able to detect Nb barrier ruptures. These results confirm that a Nb:Sn ratio of 3.4 is more prone to Sn diffusion into the Cu matrix under these heat treatment conditions and that a ratio of 3.6 is recommended, in agreement with the results of 42 and 41. As a further step in the improvement of our analysis tool, we plan to automate the identification of broken barriers by implementing advanced machine-learning and deep-learning technologies, such as object detection 43 and semantic segmentation 44. The analysis tool highlights that, even with a high purity Cu matrix, i.e. high RRR values (> 150), all the analyzed samples show the presence of Cu voids, see Table 1. Furthermore, looking at Fig. 4, it is clear that the Cu purity is not directly linked to the number of Cu voids, since S19, the wire with the most Cu voids, is the wire with the highest RRR. In this case the voids are generated during wire production. In the final steps of the wire assembly, sub-elements and copper hexagons are stacked inside the high purity Cu tube, which constitutes the outer layer of the wire. In this process, Cu rods can be added to fill the empty spaces between the hexagons and the Cu tube. Because of the unavoidable asperities at the surface of the components being assembled, defects can be incorporated between these components. Some of them will disappear during the subsequent deformation, but part of them will stay trapped between the copper components. X-ray tomography performed on the wires before the activation heat treatment was not able to detect such defects. Therefore, as we suspected, these defects are significantly finer at this stage; they migrate and coalesce during the reaction heat treatment, forming small voids in the copper matrix. Fig. 7 shows Cu voids and sub-elements in the 3D reconstruction of S19. In the highlighted area, Cu voids are homogeneously distributed around hexagonal volumes, proving that Cu filler rods are the source of this type of void. These Cu voids do not affect the purity of the Cu, but an excessive number could change the Cu matrix morphology, varying the conductive path inside the matrix and therefore influencing the wire's electro-thermal stability. During the SEM-EDX analysis, none of these Cu voids were visible in the Cu matrix. The most probable cause is the SEM sample preparation.
To obtain a flat and clean surface, the wire is polished with progressively finer polishing cloths; during this process the soft Cu can be smeared over the surface, covering the small voids. Sub-element voids characterization. The sub-element voids are generated during the wire heat treatment, which is necessary for the formation of the Nb3Sn phase. As listed in Table 1, the heat treatment has three plateaus at different temperatures: 210 °C, 400 °C and 640-650 °C. The sub-element voids begin to form after the first plateau, above 210 °C, coinciding with the appearance of a Cu-Sn phase 45. These Kirkendall voids, due to the diffusion of Sn in Cu and Nb, grow by coalescence during the heat treatment steps. They reach their final dimensions during the formation of the Nb3Sn phase, when the majority of the Sn bonds with the Nb, leaving large voids in a Cu-Sn solution in the sub-element cores. More on the formation of sub-element voids is described in 24. S19, S25 and S26 are three 132/169 RRP wires with different diameters. The heat treatments of the wires are tailored to the wire design as well as the sub-element dimensions. In particular, the final temperature plateau and its duration are optimized depending on sub-element size. The temperature and duration are empirically defined by the manufacturer to maximize the A15 reaction and the Jc without reducing the RRR. Previous studies showed that heat treatments longer than 50 h at temperatures higher than 670 °C stimulate Sn diffusion, increasing the probability of Sn leakage into the Cu stabilizer 46. On the other hand, at lower temperatures the Sn may not diffuse enough, causing worse performance at high fields 47. With these considerations, RRP can only exploit a limited temperature window between 640 and 665 °C to optimize the wire properties. The three samples have different diameters and, because they have the same configuration, this difference is reflected in the sub-element size. For this reason, the average diameter of the sub-elements, measured from the 2D tomography, shows a linear dependence on the wire size, see Table 2. Due to the deformation of the hexagonal sub-elements during the production process, the sub-element shape can vary within the same wire, impacting the dimensions of the voids as well. Furthermore, the sub-elements physically limit the maximum dimension of their voids, so the sub-element voids must scale as a function of the wire diameter. To verify this relation, we used our analysis tool, which provides the void volumes and their centers of mass, allowing their positions in the sample to be uniquely located. Sub-element voids are strongly irregular; therefore, to assign them a geometrical description, we decided to approximate them as simple ellipsoids. In this approximation, the major axis of the ellipsoid is equivalent to the void length, whereas the minor axis of the ellipsoid is the void diameter. These dimensions have been calculated using the projections of the voids onto the XZ, YZ and XY planes, as shown in Fig. 8. The void length has been defined as the average of the major axes of the XZ and YZ projections, while the diameter is calculated from the circle which has an area equivalent to the void's XY projection. The resulting frequency distributions of sub-element void lengths and diameters as a function of the wires' dimensions are shown in Fig. 9. It is important to underline that the voids at the top and bottom of the analyzed samples can be incomplete.
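As a rough illustration of the ellipsoid approximation described above (length from the XZ/YZ projections, diameter from the equivalent-area circle of the XY projection), the sketch below uses axis-aligned extents in place of fitted major axes, a simplification of the authors' method; the function and argument names are hypothetical:

```python
import numpy as np

def ellipsoid_descriptors(coords, voxel_um=0.7):
    """Approximate one void (N x 3 voxel coordinates, ordered z, y, x)
    by an ellipsoid: length as the mean major axis of the XZ and YZ
    projections, diameter from the equivalent-area circle of the XY
    projection. Voxel size is the ~0.7 um/pixel quoted in the text."""
    z, y, x = coords.T
    # Take the larger axis-aligned extent of each projection as a stand-in
    # for that projection's major axis (a simplification of a fitted axis)
    xz_major = max(np.ptp(x) + 1, np.ptp(z) + 1)
    yz_major = max(np.ptp(y) + 1, np.ptp(z) + 1)
    length = 0.5 * (xz_major + yz_major) * voxel_um
    # XY projection area = number of distinct (x, y) columns the void occupies
    xy_area = len(set(zip(x.tolist(), y.tolist())))
    diameter = 2.0 * np.sqrt(xy_area / np.pi) * voxel_um
    return length, diameter
```

Fed with the labeled void coordinates from the volume step, this yields the length/diameter pairs whose frequency distributions are plotted in Fig. 9.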
The tomography virtually slices the wire, cutting off the voids at the edges; consequently, these shortened voids have reduced volumes and dimensions. Therefore, in Fig. 9 the left tails of the distributions are overestimated by this intrinsic generation of incomplete voids. The distributions show the dependence of void diameter and length on wire size: the maximum void volume scales with the wire and sub-element size, and the average void length is approximately one order of magnitude less than the analyzed sample length. In addition, the fraction of the sample occupied by the sub-element voids should be constant and, in fact, the calculated void fractions are between 4.0% and 4.3%, with the highest percentage in sample S25, see Table 2. The difference in void distribution among the three 132/169 RRP wires shows the complexity of the problem. The three samples have the same design and are in scale for sub-element dimensions but, because of their differences in internal features and heat treatment, they have different void distributions; thus they cannot be considered mechanically equivalent, i.e. one wire cannot be used as a mechanical scale model for the others 48. In this context a simple mechanical model based only on the sub-element design cannot provide reliable results, because the sub-element voids cannot be neglected, as they can act as nucleation points for cracks in Nb3Sn 21. Therefore, to study and enhance the mechanical properties of RRP wires, a more complex mechanical model, capable of including the Kirkendall voids in the sub-elements, is necessary. Our geometrical characterization of the voids makes it possible to overcome this issue by providing the effective internal features of RRP wires: the position, orientation and dimensions of the voids and, if needed, of the sub-elements. This capability will unlock new strategies for pushing forward the development of RRP Nb3Sn wires, obtaining wires with enhanced tolerance to stress by adjusting, for example, the heat treatment in order to tailor the void distribution without neglecting the effects on electro-thermal properties and critical current. Discussion The information collected using the presented analysis tool raises new questions on void formation, sub-element deformation and barrier failures, which is a clear indication of the potential impact such an instrument can have on the comprehension of RRP wire properties. A possible application could be to use the tool to quantify the impact of voids on the electro-mechanical behavior of the wire and, as a consequence, to improve the mechanical limits of RRP Nb3Sn wires. As a matter of fact, a clear quantitative correlation between voids and mechanical limits was obtained for Bronze route wires 21, while for the other types of internal tin wires, such as RRPs, the correlation has yet to be demonstrated. It was shown that varying the dopants (Ti or Ta) and the temperature of the heat treatment final plateau (600-750 °C) changes the axial irreversible limit of RRP wires 49. Using the combination of X-ray tomography and an unsupervised machine learning algorithm, it would be possible to completely characterize Nb3Sn wires and quantify the impact of the void variation on the irreversible limit due to the different heat treatments.
The software detects and separates sub-elements, copper matrix and voids, and calculates their position, volume, orientation and dimensions. Hence, the tool quantifies the wire characteristics, paving the way for Finite Element Models (FEM) at the sub-element level and further optimization of the electro-mechanical properties of Nb3Sn wires. Furthermore, although the tool was developed to analyze Nb3Sn RRP wires, it can have a significant impact on different technologies. This algorithm is ideal for characterizing wires whose production process involves a synthesis reaction or sintering heat treatment between precursors, which can cause the generation of voids or deformation of the superconducting filaments. The strength of the unsupervised machine learning is its ability to separate the elements that constitute a wire regardless of the materials involved, while the reconstruction program can easily be adapted to the wire's components. In particular, the tool can provide support in the advancement of Powder in Tube (PIT) superconducting wires. In Bi-2212, voids have almost no effect on the mechanical behavior of the wire 50,51; on the other hand, bridging or bonding of the filaments was related to the heat treatment process and proven to be partially responsible for low Jc and n-value 52,53. The analysis tool could easily map the filament bridges, allowing the number of filament intergrowths and their impact on Jc to be quantified. Other superconducting wires based on powder metallurgy, such as iron-based 54 and MgB2 wires 55, could also benefit from tomography analysis, for example by analyzing the interaction between the metal sheath and the superconducting phase. The technique can also be applied to in-situ MgB2 wires for the investigation of void formation, due to volume contraction during the synthesis process, which could be detrimental to Jc and the mechanical limits [55][56][57]. Finally, the tool is not limited to the study of single wires; more complex systems can also benefit from its use. The analysis of void and crack distributions in Nb3Sn Rutherford cables after load tests 58 can provide valuable information on the most sensitive areas of the cable and how to improve the cable design, while the study of the deformation of the wires in cable-in-conduit conductors (CICCs) for fusion applications 59 can offer new solutions for the wire twisting pattern in order to optimize such conductors. Conclusion In this paper we propose a new tool for the study of the internal features of RRP Nb3Sn wires. The combination of X-ray micro-tomography and unsupervised machine learning algorithms is used to analyze two 108/127 wires with different Sn content and three 132/169 wires with different diameters. The unsupervised machine learning algorithm allows sub-element voids to be differentiated from Cu voids, providing their 3D reconstruction and geometrical characterization. The 108/127 samples have been used to study the variation of Cu voids depending on the Sn content. In the case of normal Sn content, Kirkendall voids were generated by Sn diffusion resulting from ruptures in the Nb barriers. Sn pollution in the Cu matrix results in poor electro-thermal stability of the wire. In addition, the analysis underlines that, unsurprisingly, barrier failures are more frequent in highly deformed sub-elements. In the case of the reduced Sn wire, i.e. a Nb:Sn molar ratio of 3.6, Sn contamination was not detected in the Cu matrix.
Nevertheless, a large number of Cu voids was observed in all the samples; these voids were generated by defects trapped between the different constituents of the Cu matrix during the production process, and they do not have a negative impact on the copper RRR. The three samples of different diameters but the same design (132/169) have been used to study the void distribution and dimensions. The voids were morphologically similar, and the maximum void volume scales with the diameter of the sub-elements. On the other hand, the distribution of the void dimensions presents differences which do not allow electro-mechanical properties to be directly extrapolated from similar wires. The analysis demonstrates the importance of an accurate description of a wire's internal characteristics and the effectiveness of our analysis technique for studying RRP Nb3Sn wires. RRP Nb3Sn wire technology. RRPs are a type of Internal-Tin Nb3Sn wire, whose repetitive unit, called a sub-element, is made of Nb filaments in a Cu matrix around a Sn core, the whole assembly being surrounded by a Nb barrier and then an outer layer of pure Cu. The RRP wire is then produced by stacking and cold drawing several sub-elements in a high purity Cu tube (sometimes with additional pure Cu hexagonal rods in the center) down to the wire's final size, see Fig. 10. The Nb barrier is intended to prevent Sn diffusion from the sub-element into the high purity Cu matrix, which is the reason why RRPs are also called distributed-barrier wires 60. The superconducting compound is formed during a tailored heat treatment which activates the Nb-Sn reaction. The Nb-Sn compound is superconducting between 18 at.% Sn and 25 at.% Sn, reaching the highest Tc and Bc2 when the compound is at about 24-24.4%, i.e. Nb3Sn. Nevertheless, the stoichiometric ratio is generally avoided in the wires, as it could result in a complete reaction of the barrier, with consequent Sn leakage. Typically, a Nb:Sn molar ratio from 3.1 to 3.6 is used to guarantee a supply of Sn that is sufficient to fully react all the Nb filaments, but without an excess that would take valuable "real estate" in the wire and would lead to too fast a growth of Nb3Sn from the Nb diffusion barrier, thus defeating the purpose of the barrier itself 61. It has been shown in 42 that a Nb:Sn molar ratio of 3.6 greatly enhances the Residual-Resistivity-Ratio (RRR) of the wire compared to lower Nb:Sn values. The RRR is used as an indication of the purity of the Cu matrix. It is defined as the ratio of the Cu matrix resistivity at room temperature to its resistivity above the superconducting transition. In the case of superconducting wires for high field magnets, Cu with high RRR is necessary for electro-thermal stability 62, and a low RRR can indicate damage to the Nb barrier and Sn leakage into the Cu matrix. In order to achieve high Jc, the RRP production process includes a very low Cu content inside the sub-elements, causing proximity between the Nb filaments 63. In the heat treatment, the volume expansion during Nb3Sn formation causes the coalescence of the filaments and the formation of a monolithic Nb3Sn ring, corresponding to an effective filament diameter (deff) of 30-70 µm. A large deff can generate issues in applications that require low hysteresis loss and high magnetic field quality, as in accelerator magnets 64. The design of RRP wires is defined by the number of restacked sub-elements.
Sub-elements are deformed to a hexagonal shape and stacked on a centered hexagonal grid. As said, high purity Cu is necessary for the electro-thermal stability of the wire. The Cu matrix has a double role: it acts as a low-resistance current shunt in case of a transition of the superconductor to the normal state, and it homogenizes the superconductor temperature because its thermal conductivity is orders of magnitude higher than that of Nb3Sn. The amount of Cu present in a wire is defined using the ratio between Cu and non-Cu materials, the so-called Cu:non-Cu ratio. This ratio is tailored depending on the wire application. In RRP wires, the Cu included in the sub-elements dissolves in the Sn core, so it cannot be considered as assisting the stability; therefore, to reach the desired Cu:non-Cu ratio, usually between 1 and 1.5 62, some of the wire sub-elements must be substituted by high purity Cu hexagons. In this way, an additional number is given in the wire definition, which is the total number of superconducting sub-elements. For example, a 132/169 wire consists of 169 restacked sub-elements, of which 132 are Nb3Sn. RRR measurements. The RRR sample holder is designed to test eight straight samples per measurement. The samples are mounted in series and soldered to the current leads. The voltage-tap pairs are soldered on each sample, separated by about 20 mm. The resistance (R) at room temperature is measured by injecting a current of 1 A into the samples and measuring the voltage drops through the taps. For the resistance at cryogenic temperature, the sample holder is mounted on a probe, which is inserted in a cold cryostat and gradually lowered to cool the samples down to liquid helium temperature. A thermometer is placed on the sample holder to monitor the temperature. A current of 10 A was injected through the samples and the voltage across each wire was measured to determine the resistance. The cryostat is then slowly warmed up, steadily increasing the temperature. The resistance is measured as a function of temperature from 4.2 K up to 25 K, i.e. past the Nb3Sn transition. RRR is defined as the ratio between R at room temperature and R at 18 K, which is just after the transition to the normal state (see the code sketch at the end of this section). Electron microscopy description. Electron microscopy is performed using a JEOL JSM-7600F field emission scanning electron microscope (FESEM). Energy-dispersive X-ray spectroscopy (EDS) is performed in the FESEM using an Oxford Instruments X-Max system, which utilizes a large-area analytical Silicon Drift EDS Detector (SDD) with PentaFET Precision, at an acceleration voltage of 16 kV. Received: 10 July 2020; Accepted: 30 March 2021 Figure 10. Cross section of the S21 Nb3Sn RRP wire before heat treatment. The picture is a combination of a Scanning Electron Microscope (SEM) image and an Energy-dispersive X-ray spectroscopy (EDX) analysis of the surface. The sub-elements are surrounded by the high purity Cu matrix. Each sub-element is an assembly of Nb and Nb-Ti filaments in a Cu matrix built around a Sn rod. The Nb barrier, around the sub-element, prevents Sn diffusion into the Cu matrix during the heat treatment.
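As a sketch of the RRR definition in the measurements section above (room-temperature resistance over the resistance just above the transition, read at 18 K from the warm-up sweep), with hypothetical resistance values:

```python
import numpy as np

def rrr(temps_K, resistances_ohm, r_room_ohm, t_ref=18.0):
    """RRR = R(room temperature) / R(just above the superconducting
    transition), here interpolated at t_ref from the warm-up sweep."""
    r_ref = np.interp(t_ref, temps_K, resistances_ohm)
    return r_room_ohm / r_ref

# Hypothetical sweep from the cryostat warm-up (R = V / I with I = 10 A)
T = np.array([17.0, 18.0, 19.0, 20.0, 25.0])
R = np.array([1.00e-6, 1.05e-6, 1.06e-6, 1.07e-6, 1.10e-6])
print(rrr(T, R, r_room_ohm=2.9e-4))  # ~276, i.e. a high-purity Cu matrix
```

The interpolation step simply formalizes reading the sweep at 18 K; any RRR above the ~150 threshold mentioned earlier would indicate an uncontaminated Cu matrix.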
2021-04-10T05:12:44.591Z
2021-04-08T00:00:00.000
{ "year": 2021, "sha1": "fcb8c2e2f9c78c622ba9ceaedbdd147e8af845fd", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-87475-6.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fcb8c2e2f9c78c622ba9ceaedbdd147e8af845fd", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
204382769
pes2o/s2orc
v3-fos-license
Professional learning through collaborative research in mathematics ABSTRACT In this study, the professional learning of two groups of secondary mathematics teachers is compared as they participate in an education research project to explore the uses of iPads within formative assessment processes. Data from lesson observations, meetings and teacher interviews show how collaborative participation in a design research cycle involving the development, implementation and analysis of lessons facilitated individual and collective professional learning. Specific elements of the design research process provided opportunities for knowledge sharing and reflection on practice, but individual learning gains were closely associated with the development of these teacher groups into professional learning communities. Two contrasting case studies show how various affordances and constraints of the research activity either encourage or restrain the development of characteristics associated with professional learning communities. The findings provide insight into the early developmental stages of professional learning communities, the conditions that affect their growth and the efficacy of collaborative design research to stimulate the development of such communities. Introduction Professional development has frequently been perceived to be an essential element of school improvement or national reform (Guskey 1994, Villegas-Reimers 2003) and it is a common assumption that changing teacher practice is crucial to raising and maintaining standards (Day and Sachs 2004). Whether professional development is an effective instrument in the change process is debatable, but on-going international interest suggests its increasing importance (Fraser et al. 2007). Consequently, attention has turned to the effectiveness of different models (Kennedy 2005, 2016). In countries such as England, high-stakes performance measures bring the quality of teaching in the classroom under close scrutiny and the search for effective professional development is a priority. From the perspective of an individual teacher, though, professional development forms part of an on-going process towards maturity as a skilled professional. Much is demanded, therefore, from professional development activity as a means of facilitating change in systems, schools and the classroom practice of individual teachers (Cochran-Smith and Zeichner 2009). One of the difficulties in this search for effectiveness is uncertainty about how teacher practice can be improved. A recent shift in research interest away from short instructional events focussed on the individual reflects trends in approaches to learning towards participation rather than acquisition (Matos et al. 2009) and evidence that traditional strategies, based on the assumption that theoretical learning about classroom teaching will result in changes to practice, are ineffective (Cochran-Smith and Zeichner 2009). There has been increasing interest in practice-based approaches involving collaborative activity within teacher groups rather than a 'top down' instructional approach (Stoll et al. 2006, Matos et al.
2009) and an emphasis on the professional learning gained from reflection on workplace activity in the light of relevant theory (Avalos 2011). This emphasis on collaborative teacher groups as a model for professional learning has led to much debate about how such teacher groups can form effective and sustainable professional learning communities with the ability to develop and change practice (Vescio et al. 2008). Despite variations in the meaning attributed to this term, which are discussed later, the characteristics of professional learning communities are well-documented (Cochran-Smith and Lytle 1999, Bolam et al. 2005, Stoll et al. 2006, Vescio et al. 2008, Dufour and Eaker 2009, O.E.C.D., 2013). Less attention seems to have been paid, however, to how groups of teachers actually develop and grow into communities with these characteristics. In this paper the changes in individual and socially shared professional knowledge within two teacher groups will be examined as they engage in a collaborative design research project with university researchers over a period of about nine months. These were school-based trios of mathematics teachers who volunteered to participate in the research. Each group of three worked together in the same school department. These trios were supported by university researchers who acted as facilitators of group activity, co-designers and observers, taking an insider role in the design of lessons while also acting as outsiders to observe and evaluate. Since these are localised, time-bound cases, we will be concerned with the professional learning that takes place, rather than the longer-term development of professionals at school or system level (Fraser et al. 2007). The overall aim of this paper is to understand how involvement in this type of collaborative research affects the professional learning of the teachers involved. By studying the development of these teacher groups, the first objective is to identify the group characteristics that emerge and, secondly, their similarity, or otherwise, to features of a professional learning community. Finally, the conditions for growth of these characteristics will be examined, from which conclusions will be drawn about the ways in which participation in collaborative research can provide favourable conditions for the development of these features.
The study focuses on addressing the following research questions:

• How does participation in the collaborative research project affect teachers' individual and collective professional learning?

By focussing on two small case studies of teacher groups in similar schools who are participating in the same design research project, the study allows the actions and interactions of the participants to be examined in depth during the development stage of these groups and detailed cross-case comparisons to be made. This study of professional learning takes place within a national context in England where the profile of teacher-led research has been raised over recent years. Whilst agreeing with the value of participation in research for teachers, the promotion of a wholly teacher-centric approach seems to overlook the synergy generated from collaborative partnerships of professionals with complementary roles, knowledge and expertise. As Slavit and Nelson's (2010) case study suggests, teachers benefit from working in collaborative inquiry to improve their practice. Collaboration that involves both teachers and researchers may, however, have additional benefits, since this involves a combination of different experiences and knowledge with the potential to stimulate deeper inquiry into teachers' on-going examination of their own classroom practice. Research context Although the primary concern in this paper is the professional learning facilitated through teacher participation in a collaborative design research project, it is also necessary to consider the specific context in which this takes place. Professional learning is viewed here as a socially situated experience and the nature of the design research project contributes to the construction of a social space in which this learning takes place. From this perspective, there are two particular aspects of this project that need to be considered: the design process and the specific aims of the design project. The design process The lesson design process involves a cycle of activity with the aim of producing an artefact. In this case, the product is a mathematics lesson in which iPads are used to inform or facilitate a formative assessment process. The cyclical development process involves several stages to design, test, obtain feedback, reflect and redesign the lesson (Gravemeijer and Cobb 2006, Swan 2014). Through various iterations, the designs are systematically reviewed and improved in a process of progressive refinement (Brown 1992). Typically, the intention is to produce a well-tested exemplar task or lesson through a rigorous reflexive process but, in this project, the emphasis is on exploring different ways of using iPads in the lesson through successive iterations, rather than producing an exemplar lesson. Involving teachers in collaborative discussions as co-designers during this design research process provides opportunities for professional learning in a social context where knowledge is shared between group members and also with researchers. Since such design experiments are a fusion of research and practice (Burkhardt and Schoenfeld 2003) there is a need to conduct trials and observe the 'learning phenomena' (Collins et al.
2004) in real situations. The cyclical design process therefore includes the use of designed tasks or lessons by teachers in classroom situations, so that observations, reflection and analysis can be carried out of the theoretically conceived tool in practical use. By working with teachers in both the design and implementation stages, a shared understanding of the aims is developed and the teachers' theoretical learning during the design work is directly linked to actual professional practice. These lessons are, however, dependent on the existing level of the teachers' knowledge and skills with iPad technology. In this project, the research team offer evidence-based understanding of formative assessment and its implementation in the classroom from prior studies, but there is a dependency on the technical knowledge of the teachers, so that the designed lessons can be implemented in their own classroom situations without extensive additional technical training. The teachers' knowledge-sharing regarding iPad technology is therefore essential to the success of the research project and their contributions to discussions about lesson designs are a valuable part of the process. This influences the nature of the collaborative partnership and provides opportunities for greater teacher participation in the design research process than might otherwise have taken place. There are three distinctive features of this design research approach that are important to consider with respect to the professional learning of these teachers:

• the emphasis on experimentation and inquiry;
• the reciprocal knowledge-sharing with researchers;
• the extension of collaboration across both design and implementation stages.

How teachers engage with these elements of the research process will influence the nature and extent of their individual professional learning, but their interactions as a group are particularly important in the development of shared learning experiences. The design project The second aspect to consider is how the exploration of specific aims for the design project might affect the professional learning of the participating teachers. The intention of the project was to gain a better understanding of how iPad technology could contribute to formative assessment processes by studying the interactions of teacher, technology and student (Dalby and Swan 2019). It could reasonably be expected, therefore, that the teachers would develop some theoretical and practical classroom-based knowledge in these areas from participating in the study. Their prior knowledge of iPad technology and formative assessment also becomes important, though, since it determines individual starting points and possible learning trajectories. Individual teachers with different levels of prior knowledge about iPad technology or formative assessment may have more, or less, to learn from the project. Their existing knowledge also affects their contributions to discussions, thereby affecting the nature of their involvement in this element of the collaborative activity. In these ways the two areas of knowledge, iPad technology and formative assessment, help define the focus for collaborative discussion during the project, but may also act as a constraint, determining boundaries for the knowledge exchanges that take place and thereby affecting the capacity for professional learning generated directly from engaging with the project aims.
Literature review Having briefly examined the context for this study, we now consider the professional learning that takes place. In the following discussion, different aspects are explored in more detail but, as a starting point for the purposes of this study, professional learning is any form of activity that allows teachers to think about and gain better understanding of their professional practice in a way that can facilitate a change in practice (Timperley et al. 2008). Fundamentally, any programme of professional development is concerned with facilitating change, namely a change in teacher knowledge, and therefore involves a process of learning (Avalos 2011, Kennedy 2016). Professional learning for teachers is not just about gaining theoretical knowledge but about developing practice (Timperley et al. 2008) and this may require a shift in thinking about what teacher learning actually involves, towards a view that is centred on effective enactment in the workplace setting (Fullan 2007). Such professional learning may be considered as a change in practice and thinking that results from meaningful interaction (Kelchtermans 2004). This fusion of theory and practice in professional learning is, however, problematic. Changing perspectives on what constitutes learning give rise to different conceptualisations of professional learning (Matos et al. 2009, Mockler 2012, Kennedy 2016) and a variety of possible models for developing professional practice. The distinction between 'learning as acquisition' and 'learning as participation' (Sfard 1998) emphasises the difference between a passive transfer of learning (acquisition) and active forms which take place in social situations (participation). Associated views of knowledge and 'knowing' suggest that knowledge can be conceptualised as either a commodity that is acquired, or as the result of active participation, communication and 'belonging' in a social situation. The latter view of knowledge, as socially constructed through participation, is often conceptualised as a process of identity-shaping and 'becoming' (Wenger 1999) rather than simply cognitive activity, and this concept of learning as a professional underpins the approach in this study. The nature of the active participation of the teachers is, therefore, a key consideration, but their prior professional knowledge and practice provide the contextual background for an on-going process of professional learning. For the development of technical and professional practice there is a need for 'knowing how' rather than simply 'knowing that' (Winch 2013). Professional or vocational competence may be considered as fundamentally the exercise of technique, or skill, in a social environment, but opportunities for knowledge creation and learning within the workplace vary (Fuller and Unwin 2007). Teachers need to develop a conceptual understanding of different pedagogies but, in conjunction with classroom enactment, they should acquire a form of 'know how' that fuses theory and practice in a social context (Winch 2013). Opportunities for the expansion of 'know how' would therefore appear to be essential for the effective professional development of teachers. The process of learning in and from practice is important in professional development (Matos et al.
2009) and reference has been made to three specific types of knowledge: knowledge for practice, knowledge in practice and knowledge of practice (Cochran-Smith and Lytle 1999, Dana and Yendol-Hoppey 2008). 'Knowledge for practice' involves knowing about how to teach and is mainly gained from instruction in various forms, whilst 'knowledge in practice' is constructed through the exploration of ideas in the classroom (Dana and Yendol-Hoppey 2008). Both of these have value but are considered less effective in the process of changing practice than 'knowledge of practice', which is gained through teachers engaging in deeper reflection, questioning and systematic study of their classroom practice. The nature of the participatory opportunities offered by the study is therefore important. By placing teachers in particular roles within the social situations facilitated by the study, knowledge of different types could be constructed in the development of professional learning.

In this study the main focus is on examining the process of professional learning rather than evaluating the long-term effects, which would be unrealistic considering the small groups and limited timescale. It is worth noting however that evaluating the effectiveness of professional learning is problematic, due to differences in the way 'effectiveness' is interpreted and how it can actually be measured. For example, Timperley et al. (2008) consider evidence of positive outcomes for students and the nature of the professional development as measures of effectiveness, whilst Guskey (2000) lists a series of possible measures, suggesting that the impact on student learning is often the most important in education. This forms a recurring theme as a measure of effective professional learning (Guskey 2000, Bolam et al. 2005, Timperley et al. 2008, Kennedy 2016) which is not surprising since this is, arguably, the primary purpose of the education system. It is also the primary concern of teachers, who are expected to learn and improve their teaching skills through participation in professional development. By focussing on how groups of teachers learn together the research contributes to an understanding of professional learning and highlights processes that may lead to better student outcomes but does not extend to a formal evaluation of the impact on student learning.

Professional learning communities

Research evidence suggests that active participation in professional learning communities is more effective than using the traditional model of individual theory-based instruction (Matos et al.
2009, Ermeling 2010, O.E.C.D. 2013) but also raises two important issues. Firstly, the nature of the participation of individual teachers is essential to the functioning and effectiveness of the professional learning community. Secondly, how collective teacher activity is bound together by a clear purpose, shared aims and vision is an important consideration (Dufour and Eaker 2009). In this study we therefore consider the nature of teacher participation and how this contributes to the development of professional learning communities in addition to the effects on the professional learning of individual teachers.

Fullan (2007) suggests that professional learning should be centred on teachers' practice in their workplace, involving the de-privatisation of classroom practice and a collaborative approach. Rather than classroom practice being an individual activity enclosed in a classroom, de-privatisation allows for greater transparency, awareness and discussion of colleagues' work practices. Similar themes appear in other literature (e.g. Vescio et al. 2008, Slavit and Nelson 2010) and highlight the effectiveness of professional development models that combine collaborative teacher activity with a strong focus on classroom practice. Recent research evidence supports the view that collaborative teacher learning in professional learning communities provides a 'successful' model for sustainable teacher development (Dana and Yendol-Hoppey 2008, Matos et al. 2009, Horn and Little 2010, O.E.C.D. 2013), particularly when focused on measures of effectiveness concerned with student achievement and professional learning (Bolam et al. 2005). Despite variations in views of how such communities are constituted (Stoll et al. 2006) this approach to professional development promises more than traditional methods (Ermeling 2010). The intention and purpose of the learning community needs, however, to be appropriately focussed on achieving improvement through changes in practice and should be based on a realistic model with clear aims (Dufour 2007).
The concept of a professional learning community has two distinct roots, with some commonality but a fundamental difference in focus. Based on Senge's (2006) concept of a 'learning organisation' from a business perspective, some would view a professional learning community as having school-wide membership and a characteristic collaborative culture (Fullan 1993). Alternatively, the starting point is the concept of a 'community of practice' (Lave and Wenger 1991, Wenger 1999) or a 'learning community' (Wenger and Snyder 2000) which is formed when a group of people are informally bound together by mutual engagement, shared experience and passion for a joint enterprise (Wenger 1999). Such learning communities have a social dimension so that teachers are expected to regularly communicate, collaborate, share knowledge and give social support to each other (Krainer 2003). Collaboration and reflection on practice are common themes in both these conceptual foundations but the first arises from considerations of organisational change and the second from a model of apprenticeship, in which a group of teachers with a shared aim develop professional knowledge. For this study we will only be concerned with small groups of mathematics teachers in schools and the fundamental concept of a community of practice becomes more relevant than the school-wide organisational view. The orientation of the institution towards learning and the coherence of group aims with school goals is however still influential. Opportunities for professional learning may well be dependent on whether the workplace constitutes what Fuller and Unwin (2007) refer to as an 'expansive' or 'restrictive' learning environment.

In a professional learning community we would expect the three main elements of a community of practice to be evidenced: a clear domain, a collaborative community and shared practice (Wenger 1999, 2011). Individual teachers may be positioned initially within their community of practice as experts relative to their colleagues, or as legitimate peripheral members who are moving towards full membership as their expertise develops (Lave and Wenger 1991). By basing the study on this fundamental concept, the positioning, relationships and interactions between members become central to the study. The roles of individual members and actions taken by more experienced teachers within the group are factors important to the success of professional learning communities (Lieberman and Pointer Mace 2009) and the approach taken will allow for a close examination of how these factors contribute to the early stages of development of teacher groups into similar learning communities.

The development of professional learning communities of this type may however be incomplete as an effective model for teacher development without further focussed activity. Dimmock (2016) proposes that the missing element is that such professional learning communities need to be research-engaged. The processes in this study of involving teachers in the research are a vital part of the collaborative activity and it seems appropriate to examine what part this played in the professional learning that took place. Five characteristics commonly identified as important in early literature will be useful for comparison with emerging characteristics of the teacher groups: shared values; collective responsibility; collaboration; reflection and inquiry; and group and individual learning (Stoll et al. 2006). Bolam et al.
(2005) however add three further common features, which are primarily concerned with relationships. The importance of teachers' positioning and relationships in collective participation has already been highlighted but these social relationships also connect characteristics of professional learning communities with cultural values in this situated learning situation. For example, if effective de-privatisation of practice takes place (Fullan 2007, Vescio et al. 2008) then this activity is more likely to be successful in a culture of mutual trust and respect (Bolam et al. 2005). The question to be explored in this paper is whether the opportunities provided through the distinctive collaborative design research approach can successfully facilitate the development of any of these characteristics and, if so, what elements of the process are most influential in providing favourable conditions for the growth of these features. In addition to interviewing the teachers regarding their professional learning, it is also important to observe the relationships and interactions between individuals in order to study how the collaborative process and group characteristics develop.

Methodology

The main research findings for this project are reported elsewhere (Dalby and Swan 2019) and therefore only the methods relevant to this study of the teachers' professional learning are described here. However, the iterative design cycle described earlier remains an essential part of the process: lesson design; classroom trial and observation; feedback; reflection; revisions to the design (Gravemeijer and Cobb 2006, Swan 2014). This cycle was repeated three times for each designed lesson, with a different teacher responsible for implementing the lesson in one of their own classrooms, within each cycle. The first cycle involved a substantial amount of planning, which took place over several weeks but subsequent cycles were usually completed within a week.

Members of the research team met with the teacher groups to facilitate the lesson planning and supported them through this process. The researchers then carried out observations, in pairs, of each lesson and one of the three versions of each lesson was video-recorded to facilitate more detailed analysis. Discussions took place with the teachers in between each lesson within a cycle to give feedback and consider revisions before the next iteration. Interviews were carried out with each of the participating teachers at the end of the design project and these were used, in conjunction with the lesson observations and other field notes, to explore the professional learning of these teachers during the design research process.

Three schools in the Midlands of England were involved in the project and within each school, a group of three teachers worked with the research team to develop three lessons over a period of around seven months. This was a project funded by the European Union (see Acknowledgements) and the teachers participated on a voluntary basis. Ethical approval was gained from the university and the relevant informed consents obtained from teachers and their students.
Each of these teacher groups and their professional learning journeys became a case study. For the purposes of this paper, we are only concerned with a comparison between two of these cases, which were both secondary comprehensive schools of similar size. Both were non-selective but with streamed classes for mathematics and had similar grading from their most recent external inspections. iPads were available in both schools for student use and the teachers had some technical expertise with these before commencing the research.

Focussing on just two case studies provides the opportunity for a detailed, in-depth examination of a singularity in a natural setting which has justifiable research value (Bogdan and Biklen 1992, Bassey 1999). By using qualitative data from different sources (paired lesson observations, observations of meetings, teacher interviews) and comparing the two cases, the credibility is strengthened (Yin 2009).

Since the data were entirely qualitative, the initial analysis was carried out using a process of open coding to identify key themes. Emerging themes from teacher interviews were compared to lesson observations and notes from meetings to ensure triangulation of data from different sources. These themes were then re-examined in relation to Wenger's (2011) features of communities of practice: domain, community and practice. Emerging characteristics of these teacher groups were identified and then compared to the five common characteristics of professional learning communities (Stoll et al. 2006). Case studies of teacher groups and their professional learning were developed and a comparative analysis of these cases was carried out.

Results and analysis

Although there is naturally some overlap, the results and analysis will be presented here in a similar order to the research questions. Results concerning individual professional learning will be followed by a consideration of collective professional learning. An analysis will then be presented of the characteristics of professional learning communities that developed within these teacher groups. Finally some evidence will be examined concerning the specific features of the research project that were instrumental in teacher development.

Clear evidence of individual professional learning as a result of participation in the research project was provided from teachers' interviews and observations of their meetings. Teachers identified two main areas of individual learning:

• technical understanding of specific uses of iPads and software;
• pedagogical adjustments that help facilitate formative assessment processes, with or without iPads.

These areas are not surprising, given the focus of the research project, but do highlight how professional learning was strongly connected to classroom practice and 'know how' (Winch 2013) rather than theoretical knowledge (Timperley et al. 2008). Individual knowledge gains showed some variation but were often linked by teachers to the opportunities for collaboration within the project. This included collaborative work with their colleagues to design or refine lessons, as well as the design activity and shared reflections on lessons that took place with researchers.
Teachers explained that the time spent working together on lessons had been particularly valuable, since this was an activity that rarely featured in their normal way of working, mainly due to time pressures. Working together in small collaborative groups with a shared aim and a focus, even over the short period of time for this research project, provided a stark contrast when compared to their normal day-to-day interaction. The research project provided a reason for collaborative activity even when the researchers were not present. Furthermore, a mutual commitment to the design research activity from these teacher groups led to the sharing of ideas and a de-privatisation of practice (Fullan 2007, Vescio et al. 2008) that was difficult to achieve within their normal working routines.

Alongside the importance of collaboration, two additional themes with respect to individual professional learning emerged strongly. Data from observations of the design process and the lessons showed how active participation in the research prompted teachers to adopt an inquiry approach to both lesson design and implementation. The intention for this project was to explore and innovate when using technology within mathematics lessons so developing an inquiry approach in the planning process was fundamental. Discussions with researchers encouraged teachers to reflect on the lessons and engage in questioning about lesson designs. Most individual teachers readily adopted this inquiry approach, becoming experimental with different uses of technology rather than electing to implement 'safe' options. The freedom to experiment, endorsed by researchers, within a mutually supportive community with a shared aim, provided an environment for inquiry approaches to flourish.

The knowledge-sharing aspect of the research design also emerged as a significant opportunity which facilitated individual professional learning. Differences in knowledge specialisms between teachers and researchers led to a pragmatic shared approach regarding individual contributions to lesson designs, rather than the design being researcher-led. In this way, two central areas of knowledge for the research project (using iPad technology and formative assessment) were integrated in the design process through a negotiation of how technology and formative assessment could be combined in effective classroom learning. This enabled teachers to explore and extend their use of technology in the classroom but also gain understanding of the associated pedagogical approaches that would enhance formative assessment and lead to more effective student learning. Involvement in the research, as suggested by Dimmock (2016), was a key aspect of teacher activity that facilitated professional learning for individuals within these groups.

These three themes are all linked to the design research approach and indicate opportunities for individual professional learning within the research activities. In contrast, observations and interviews suggested three characteristics connected to the aims of the research project that might act as constraints on the individual professional learning of some teachers.
The project aims usefully indicated the boundaries for the research activity and defined the research domain but these also resulted in unhelpful constraints on individual professional learning for some teachers. Although there were benefits in having a clear focus for the research activity, observations of lessons suggested that this emphasis sometimes caused other pedagogical issues to be neglected. Similarly, the knowledge priorities suggested by the research project aims constrained the progress of some teachers due to their different starting points. There was evidence that individuals with less prior knowledge of the areas prioritised, compared to others in the same teacher group, made less progress. Thirdly, the division of responsibility between teachers and researchers provided opportunities for some individual teachers to become deeply involved in the research project but also resulted in constraints on the involvement of others. For example, individual teachers with strong technical knowledge were particularly valuable to this research project and readily engaged in discussion about the integration of technology, whilst those with less secure technical understanding took a more peripheral position in these discussions. These affordances and constraints are summarized in Figure 1.

This representation of the affordances and constraints provides an analytic tool to view the potential for individual professional learning associated with participation in this research project. With the small number of teachers involved, this cannot be interpreted as a reliable or complete summary but offers a simple framework for consideration of the potential opportunities within a research project. In this case, the distinctive characteristics of the design research process provide opportunities for individual teachers to collaborate, share thinking and engage in inquiry, whilst the project aims sometimes constrain individual professional learning, due to the type of knowledge that is prioritised and the division of responsibility within the collaboration.

These themes are important for identifying the potential for individual professional learning but they are also significant in the development of collective professional learning. A comparison to Wenger's (2011) three broad features of a community of practice (see Table 1) suggests that aspects of both the lesson design process, alongside the research project aims, contribute to the development of group characteristics.

In this study, however, one case study group developed into a more functional and effective professional learning community than the other. In this group, one teacher reported that these lessons were the 'best lessons we have taught all year' (School A), implying both teacher satisfaction and an anticipated positive effect on student learning. Although the effectiveness of professional learning is only measurable qualitatively from teachers' responses in this study, the extent to which this group exhibited shared ownership and satisfaction with the lessons was evidenced strongly in their interviews.

A small set of characteristics also emerge from the analysis, for which clear differences between the two cases can be identified (see Table 2). These characteristics show some connection to the features of effective professional learning communities described earlier (Stoll et al. 2006, Dimmock 2016) but highlight several factors that contribute to these features.
Firstly, the teachers in these case studies approached the research project with their own personal interests as well as some shared group aims, but the connecting of these was important. Early negotiation of shared aims that had a focus on learning seemed to make it easier later to develop the group into an effective professional learning community. There were however some pre-conditions that may have been influential. Although both schools supported the use of digital technology and had iPads available, the school aims of the more effective professional learning community gave technology a high priority. Levels of prior technical knowledge and skills within the group became important for several reasons. Teachers who were confident with the use of technology in their lessons quickly became more actively engaged than those with less expertise, which affected their positioning within the small developing professional learning communities. In School A, teachers had similar levels of confidence initially, although different knowledge, but there was mutual respect and shared responsibility for the lesson design work. In School B, the teacher with least technical knowledge tended to take a more peripheral role and, although there was evidence they intended to learn and become more central in their community of practice, this was not achieved. Their initial position of legitimate peripheral participation (Lave and Wenger 1991) eventually became one of marginalisation over the course of the project. Their lack of confidence with technology was a constraint that limited their involvement in lesson design and resulted in minimal gains in professional knowledge. Colleagues were supportive in terms of assisting their colleague with the technical skills but did not allow sufficient agency in the design phase for this teacher to move into a greater participatory role.

Table 1. Contributions to a community of practice from components of the research activity.

Lesson design process: Teachers share prior knowledge to inform lesson designs and their reflections on classroom implementation.
Research project aims: Teachers adopt a shared focus on using digital technology in formative assessment. Teachers work collaboratively in order to achieve the project aims. Aims encourage sharing of alternative pedagogical approaches and alternative methods.

Table 2. Comparison of characteristics between cases.

Individual aims
  School A: Some similarity in individual aims regarding developing the effective use of iPads in mathematics teaching.
  School B: Varied interests of individuals in participating in the research.
Shared aims
  School A: The facilitator within the group negotiates well-defined, shared aims. The group focus is on improving student learning.
  School B: Shared aims are less clearly defined and individuals have different aims. The group focus is on the technology.
Technical knowledge
  School A: All members are confident with technology but specific technical knowledge varies.
  School B: Levels of confidence with technology vary widely between team members.
Leadership
  School A: The facilitator is the main contact with researchers but responsibilities and ideas for lesson design are shared.
  School B: The facilitator leads the group, liaises with the researchers and carries out most of the design activity on behalf of the group.
Professional relationships
  School A: Built on existing collaborative ways of working.
  School B: Previously worked together as individuals within part of a larger team.
Communication
  School A: Frequent communication between team members, although often email rather than face to face.
  School B: Infrequent communication between members.
In both schools, a group leader facilitated discussions between teachers but communication was noticeably more regular in School A. This enabled deeper discussions to take place and encouraged a higher level of involvement from the other two group members.

Together, these differences in the development of the two teacher groups indicate some key areas where effective leadership of a teacher group can encourage the growth of a professional learning community. Although the evidence from these two contrasting cases is limited, there are indications that leaders who encourage the group to negotiate shared aims, communicate regularly and divide responsibilities are more likely to see the group develop some of the key characteristics of a professional learning community.

Finally, it is important to consider the influences on individual and collective learning that arise from the situation of this teacher group activity within the broader context of collaborative work with researchers. In the first stage of the lesson design process, the way of working involved collaboration and knowledge sharing between teachers and researchers as well as within teacher groups. During the classroom trials, however, the teachers took an 'insider' role (Dana and Yendol-Hoppey 2008) and their teaching of the designed lessons was instrumental in developing 'knowledge in practice'. Feedback from the researchers following lessons involved further knowledge-sharing but this then led to the reflection stage where 'knowledge of practice' was further developed. Figure 2 shows how the design research cycle, which commences with an initial design and follows several iterations as the design is trialled and revised, is linked to a cycle of professional learning at four stages (design/re-design, trial, feedback, reflection). Specific opportunities for teachers to construct knowledge of different types are made available at each stage. In both our cases, teachers and researchers were involved in knowledge sharing through the collaborative design research activity and each party gained useful knowledge from these socially situated exchanges, with the interlinking of theory and practice being particularly important. Although there were exchanges in the design research cycle that only contributed to teachers' 'know-that', such knowledge was often linked to classroom implementation in the next iteration and trial of the lesson. In this way theoretical ideas were used and experienced in classroom situations, thereby opening up opportunities for increasing 'knowledge in practice' (Cochran-Smith and Lytle 1999). As the cycle progresses, teachers are involved in reflective discussions about their enactments of lesson designs and develop a more critical approach which contributes to a deeper 'knowledge of practice' (Cochran-Smith and Lytle 1999, Dana and Yendol-Hoppey 2008). In this way, interaction between teachers and researchers within the design research cycle provides specific opportunities for knowledge creation (Wiliam 2002) that may not be present in alternative research designs.
Conclusions

The individual professional learning journeys of the teachers in this study were interwoven with those of their colleagues but also influenced by their engagement in the design research (Dimmock 2016) and the nature of the activity in which they were involved, including their interaction with researchers. Individual expectations of developing their professional practice were fused together by participating in the study into a shared purpose, indicating useful benefits for collective professional learning beyond the immediate project aim.

The lesson design process and the aims of the design project provided both affordances and constraints for individual professional learning but also contributed to the development of key features of professional learning communities. Anticipating and balancing such affordances and constraints for a predetermined professional learning outcome is a challenge that needs careful consideration if collaborative research is to achieve more specific aims. In this study, teacher inquiry and collective reflection were promoted due to the experimental purpose of the design research approach. This added to 'knowledge of practice' and increased the capacity of these teachers to research their own practice. A focus for knowledge sharing was provided by the research project aims, thereby creating space for professional learning in the use of digital technology and formative assessment.

Involvement in activity framed by these two elements, the lesson design process and research aims, provided teachers with rich opportunities for knowledge creation through a process with similar features to the key characteristics of effective professional learning (Stoll et al. 2006, Dimmock 2016). This further supports the view that participation in collaborative research has potential for effective professional learning, although the processes and boundaries require more extensive exploration than this limited study can provide.

In this study, the interlinked components of the design project were fundamental to the way of working together that developed and to the professional learning of the teachers. Through working together collaboratively with a shared aim, teacher groups could develop into small professional learning communities where the teachers had the opportunity to develop 'knowledge in practice' of value for the research project, whilst also increasing their own 'knowledge of practice' in a specific area. The effectiveness of these groups as professional learning communities was, however, influenced by the leadership of the group, the frequency of communication between members and the level of ownership of shared aims. The prior technical knowledge of individuals also determined their positioning within the learning community and their resulting individual learning.
The findings provide evidence that the participation of teachers in collaborative research can provide valuable opportunities for professional learning but the professional knowledge gained depends on the research project aims, the methods and the nature of the collaborative activity between teachers. These features affect the way in which the teacher groups function and the characteristics they develop. The findings support the view that the professional learning journeys of teachers can benefit from involvement in practice-based research (Dimmock 2016) in collaborative groups within their own schools, but that this is not an automatic consequence. Professional learning through participation in collaborative research therefore needs to be carefully designed, bearing in mind the influences that will be instrumental, if specific knowledge gains or changes in practice are to be achieved.

The study involves a comparison of two cases in similar contexts and is therefore limited by its scale, the specific nature of the design research activity and the context in which collaborative activity took place. Further examination of the professional learning that develops from involvement within other research projects in other settings is needed to determine any wider principles. This study does, however, provide some clear indications of the conditions favourable for professional learning that may be developed during participation in research and how the early steps towards becoming a professional learning community might be established.

Research questions:
• What characteristics of a professional learning community emerge during participation in the research activity?
• What elements of the design research activity, or other contextual factors, are instrumental in the development of these characteristics?

Figure 1. The affordances and constraints of the research design on individual professional teacher learning.
Figure 2. The design research cycle and associated elements of professional learning.
Uncalibrated Visual Servo Control of Magnetically Actuated Microrobots in a Fluid Environment

Microrobots have a number of potential applications for micromanipulation and assembly, but also offer challenges in power and control. This paper describes an uncalibrated vision-based control system for magnetically actuated microrobots operating untethered at the interface between two immiscible fluids. The microrobots are 20 μm thick and approximately 100–200 μm in lateral dimension. Several different robot shapes are investigated. The robots and fluid are in a 20 × 20 × 15 mm vial placed at the center of four electromagnets. Pulse width modulation of the electromagnet currents is used to control robot speed and direction. Given a desired position, a controller based on recursive least square estimation drives the microrobot to the goal without a priori knowledge of system parameters such as drag coefficients or intrinsic and extrinsic camera parameters. Results are verified experimentally using a variety of microrobot shapes and system configurations.

Introduction

Tetherless microrobots have been proposed for a number of applications including minimally invasive surgery and micromanipulation of micro gels for in vitro tissue culture [1]. Such microrobots, which are at dimensions of tens to hundreds of micrometers, are of comparable size to many biological structures, such as cells, and therefore are an enabling technology. A challenge facing microrobots is providing power and control. In particular, adapting macroscale control strategies to the microscale environment is not always straightforward. The physics of microscale operation does not always scale intuitively. For example, surface forces such as friction or viscous drag play a much larger role than volumetric forces such as magnetic attraction or inertial forces. Adding to the difficulty of the problem, drag forces at this scale can be difficult to model for complex microrobot device geometries.

Tetherless microrobot actuation has been achieved using a number of different power delivery methods including optical [2], electrostatic [3], thermal [4], ultrasonic [5] and electromagnetic [6]. A review of propulsion methods specific to swimming microrobots for medical applications is given in [7]. Closed-loop control can be achieved with vision-based feedback of the microrobot position. Controller designs typically utilize a system model that characterizes the interaction between the applied forces (magnetic, drag, electrostatic, etc.) on the microrobot and the microrobot motion. A selected number of magnetically controlled systems are reviewed here with specific emphasis on the feedback control methods used and the tracking performance achieved.

Using a clinical magnetic resonance imaging (MRI) system, Tamaz et al. [8] develop a proportional-integral-derivative (PID) controller capable of navigating a 1500 μm ferromagnetic bead along a predefined path. They conclude that an adaptive controller would significantly decrease complications in the system and allow for more robust uses in the biomedical field.

Belharet et al. demonstrate a generalized predictive control (GPC) scheme to actuate a 500 μm neodymium sphere in an endovascular environment using Maxwell and Helmholtz coils in [9], achieving tracking errors on the order of 50–200 μm depending on the fluid composition and flow conditions.

In [10], Pawashe et al.
demonstrate model-based learning controllers for 210 μm microbead manipulation using side-pushing by a magnetic 480 μm microrobot operating in a fluid. Microbeads are pushed to within one pixel (7.5 μm). Modeling includes flow velocities induced by the microrobot and equations of motion for the microsphere being manipulated.

Diller [11,12] uses multiple magnetic robots of the same design as [13] with geometric dissimilarities that resulted in differing responses to various frequencies. Planar path errors in [11] are on the order of 1000–1500 μm and three-dimensional (3D) control is achieved in [12] with path errors of less than 310 μm for a 350 μm robot and a 1500 μm robot.

In [14] Marino et al. compare an H∞ controller with a PID controller for a linear uncertain dynamical model for electromagnetic steering control of a 1000 μm microrobot in low viscosity oil using the OctoMag system [15]. Tracking errors on the order of 270–490 μm are reported using the H∞ controller. In another work by the same research group [16], Bergeles discusses the difficulties in localizing the position of the microdevice in the ocular environment due to complex optics and distortions. They propose a new projection model that allows localization of the microrobot for control with a proportional-derivative (PD) controller.

Using the MiniMag (Aeon Scientific, Zürich, Switzerland) system, Ghanbari [17] proposes time-delay estimation (TDE) control as a superior approach to H∞ control for handling the many uncertainties present in modeling microrobot systems. Errors less than 200 μm are demonstrated for a microrobot consisting of a NdFeB cylinder permanent magnet of diameter 500 μm and length 1000 μm. The method does require the selection of controller gains for a particular configuration of the system.

Keuning [18] and Khalil [19] use a proportional-integral (PI) controller and waypoints generated by path planning algorithms to achieve planar navigation of paramagnetic 100 μm beads moving in water. Average errors ranged from 4.7 to 7.0 μm with standard deviations on the order of 2.0 μm.

Initial work [20] by the authors of this paper investigates a simple linear model and demonstrates a proportional control strategy for electromagnetic actuation of various microrobot devices operating between fluid layers. Each microrobot design responds differently to the actuating magnetic fields, requiring its own set of gains to compute the required duty cycle.

While varying in complexity, all of these works rely on system models of the various subsystem components including the microrobot device, the environment it is operating within, the actuation system, and the visual feedback system. The contribution of this work is the demonstration of an uncalibrated microrobot control scheme that uses uncalibrated visual feedback for an unmodeled system consisting of a microrobot device operating in a fluidic environment observed via a microscope and controlled by four electromagnets. In this paper, we characterize the performance of this strategy by varying a number of system parameters (microrobot device, magnification, target velocity, etc.) without changing any terms in the controller or utilizing configuration-specific controller gains. Assumptions made based on the system components are presented in Section 2 and the image-based estimation and control are presented in Section 3.
Experimental results are given in Section 4 for planar control of various 200 μm microdevices at various magnifications and system configurations. The experimental results include point-to-point motion and trajectory following with path errors ranging from 1.0 to 4.1 pixels in the image plane (4.1–40.5 μm in the workspace).

Electromagnetic Actuation for Microrobotic Control

For this work, we consider a ferromagnetic mass suspended in between two fluid layers and surrounded by two electromagnet pairs whose magnetic fields act primarily in the plane created by the fluid boundary as depicted in Figure 1a. This system was initially developed for participation in the Mobile Microrobot Challenge Competition [21]. While the method is extendable to higher degrees of freedom, for the theoretical and experimental results presented here it is assumed that there are two electromagnet pairs controlling a microrobot device that is acting in a planar, fluid environment.

As shown in Figure 1b, forces acting on this mass include electromagnetic forces $\vec{F}_M$, viscous drag $\vec{F}_d$, surface tension at the boundary $\vec{F}_t$, and apparent weight $\vec{F}_w$. Let $\vec{x}_w = [x\ y\ z]_w^T$ represent the position of the mass with respect to a fixed world coordinate frame. Then, the equation of motion for the mass is given by:

$$m\ddot{\vec{x}}_w = \vec{F}_M + \vec{F}_d + \vec{F}_t + \vec{F}_w \quad (1)$$

Further expanding the terms to include drag coefficients $(\alpha, \beta, \gamma)$ and the magnetic field strength $(B_x, B_y, B_z)$ results in:

$$m\ddot{\vec{x}}_w = m_x \begin{bmatrix} \partial B_x/\partial x \\ \partial B_x/\partial y \\ \partial B_x/\partial z \end{bmatrix} - \begin{bmatrix} \alpha\dot{x} \\ \beta\dot{y} \\ \gamma\dot{z} \end{bmatrix} + \vec{F}_t + \vec{F}_w \quad (2)$$

where $m_x$ represents the magnetic moment of the microrobot along the x-axis. It is important to consider the relative magnitudes of the inertial and viscous forces acting on the body. The Reynolds number is a dimensionless quantity that relates inertial forces to viscous forces in the Navier-Stokes equations for a body moving in an incompressible Newtonian fluid. If the microrobot system is operating in a low Reynolds number regime (Re << 1), then the inertial term is much smaller than the drag forces as described by Purcell [22].

Furthermore, if we assume that the buoyant and surface tension forces counteract the gravitational forces, we have a microrobot operating in a planar region controlled by orthogonally oriented electromagnets. Thus, Equation (2) can be simplified for planar motion in a low Reynolds fluid environment as:

$$\begin{bmatrix} \alpha\dot{x} \\ \beta\dot{y} \end{bmatrix} = m_x \begin{bmatrix} \partial B_x/\partial x \\ \partial B_x/\partial y \end{bmatrix} \quad (3)$$

As the microrobot motion is sensed by a computer vision system, the relationship between the pixel coordinate frame and the world coordinate frame can be modeled using projective geometry and homogeneous coordinates. To relate microrobot velocities to pixel velocities, the image Jacobian $J_i$, sometimes called the interaction matrix, must be computed using partial derivatives, where the elements of the matrix will be a function of the intrinsic and extrinsic camera parameters as well as the depth from the camera plane to the microrobot object. If planar robot motion is assumed, the following relationship is given, where $J_i \in \mathbb{R}^{2\times2}$:

$$\dot{\vec{x}}_p = J_i \dot{\vec{x}}_w \quad (4)$$

Finally, combining Equations (3) and (4), the observed velocities (in pixels/s) of the microrobot are given by:

$$\dot{\vec{x}}_p = J_i \begin{bmatrix} 1/\alpha & 0 \\ 0 & 1/\beta \end{bmatrix} m_x \begin{bmatrix} \partial B_x/\partial x \\ \partial B_x/\partial y \end{bmatrix} \quad (5)$$

Thus, the fully modeled system would include camera parameters, drag coefficients, and magnetic field properties.
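To make the linearity of Equations (3)-(5) concrete, the following minimal sketch computes the steady microrobot velocity, in the world frame and in pixels, from an applied field gradient. All numerical values (drag coefficients, magnetic moment, field gradient, and the diagonal image Jacobian) are illustrative assumptions; the paper does not report them.

```python
import numpy as np

# Sketch of the planar low-Reynolds force balance: viscous drag balances the
# magnetic pulling force, so the steady velocity is proportional to the applied
# field gradient. All numbers below are assumed for illustration only.

alpha, beta = 2.0e-7, 3.0e-7       # assumed drag coefficients along x and y (N*s/m)
m_x = 1.0e-11                      # assumed magnetic moment along x (A*m^2)

def world_velocity(grad_Bx):
    """Equation (3): [alpha*x_dot, beta*y_dot]^T = m_x * [dBx/dx, dBx/dy]^T."""
    return m_x * grad_Bx / np.array([alpha, beta])

def pixel_velocity(grad_Bx, J_i):
    """Equation (5): velocity observed in the image plane for image Jacobian J_i."""
    return J_i @ world_velocity(grad_Bx)

# Hypothetical image Jacobian: a fixed camera with square pixels and no rotation,
# at roughly the 20x resolution quoted later (6.5 um/pixel).
J_i = np.diag([1.0 / 6.5e-6, 1.0 / 6.5e-6])

grad_Bx = np.array([0.5, 0.1])     # assumed field gradient (T/m)
print("world velocity (m/s):  ", world_velocity(grad_Bx))
print("pixel velocity (px/s): ", pixel_velocity(grad_Bx, J_i))
```

With these assumed values the sketch yields speeds of a few pixels per second, the same order as the target speeds reported in the experiments below; the point is that the map from actuation to pixel velocity is linear, which is what the uncalibrated estimator exploits.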
Since the magnetic field strengths are proportional to the applied current (generated by a pulse-width modulated voltage square wave), the system displays a linear relationship between the actuation signal $\vec{u}(t)$ and the microrobot velocity $\dot{\vec{x}}_p(t)$ observed by the visual system, which can be expressed as:

$$\dot{\vec{x}}_p(t) = J(x)\,\vec{u}(t) \quad (6)$$

where $J(x)$ incorporates the image Jacobian together with the drag coefficients, magnetic moments, and magnetic field strengths. It is similar to the composite Jacobian matrix used in traditional image-based robotic control and will similarly vary as the microrobot moves throughout the workspace. For the 2D system presented in this work, it is a 2 × 2 matrix that relates the actuation signal to the velocity of the microrobot as seen in the image plane.

Computing a closed form solution for J(x) is possible, but doing so requires accurate and calibrated models of the induced electromagnetic field, drag coefficients, the vision system, etc. Any changes to the physical position of the system, components, device geometry, fluid properties, etc. require a system calibration step. An alternative is online estimation of the J matrix using iterative methods such as Broyden's method or recursive nonlinear least squares estimation. Such uncalibrated adaptive methods have been successfully implemented in macro-scale manipulators and mobile robots for a variety of applications with more complex nonlinear system models and higher degrees of freedom (DOF) [23][24][25]. Experimental results in [20] and [26] show that the J matrix (the relationship between actuation and device velocities) is relatively linear for the experimental system used in this paper. The 2-DOF system described here presents a mathematically tractable problem for online system estimation as presented in the following section.

Recursive Least Squares (RLS) Jacobian Estimation and Control

Consider a microrobot system such as the one described by Equation (6) with an observed state $\vec{x}_p$ that will vary when the control signal $\vec{u} \in \mathbb{R}^n$ is applied to the system. It is desired that the robot be controlled in such a manner that it is driven towards a goal position or trajectory $\vec{x}_p^*(t)$. The error $\vec{f} \in \mathbb{R}^m$ between the observed and desired or target position $\vec{x}_p^*(t)$ is given by:

$$\vec{f}(t) = \vec{x}_p(t) - \vec{x}_p^*(t) \quad (7)$$

For planar 2-DOF image-based position control, the image data ($\vec{x}_p$ from the previous section) implies that m = 2. Similarly, for an electromagnet array consisting of two opposing pairs of magnets, n = 2. More complex systems such as those controlling orientation would use higher degrees of freedom.

It is desired to compute a control signal that will minimize the squared image error and drive the microrobot to the target position $\vec{x}_p^*(t)$:

$$\min_{\vec{u}}\ \tfrac{1}{2}\,\vec{f}(t)^T\vec{f}(t) \quad (8)$$

This can be achieved via a quasi-Newton method utilizing an iteratively estimated Jacobian as developed for various macro scale robotic systems in [23][24][25]. Here, we use a dynamic recursive least squares (RLS) method presented in [25] for its improved performance in the presence of system noise and ability to follow moving targets.
For a discrete control algorithm updated at iteration k with digital sampling time $h_t$, let $\vec{x}_k$ and $\vec{x}_k^*$ represent the robot position and the target position at the kth iteration as measured in the image plane, respectively. Let $\Delta\vec{f}_k$ represent the change in image error $\vec{f}_k - \vec{f}_{k-1}$, and let $\vec{u}_k$ represent the actuation signal for the electromagnets. Then the RLS estimate for the Jacobian $\hat{J}_k$ is given by Equation (11) below and the entire iterative control algorithm is given in Algorithm 1. The actuation signal is computed in Equation (13), a quasi-Newton step:

$$\hat{J}_k = \hat{J}_{k-1} + \frac{\left(\Delta\vec{f}_k - \hat{J}_{k-1}\vec{u}_k\right)\vec{u}_k^T P_{k-1}}{\lambda + \vec{u}_k^T P_{k-1}\vec{u}_k} \quad (11)$$

$$P_k = \frac{1}{\lambda}\left(P_{k-1} - \frac{P_{k-1}\vec{u}_k\vec{u}_k^T P_{k-1}}{\lambda + \vec{u}_k^T P_{k-1}\vec{u}_k}\right) \quad (12)$$

$$\vec{u}_{k+1} = -\hat{J}_k^{-1}\left(\vec{f}_k - \Delta\vec{x}_k^*\right)/h_t \quad (13)$$

Algorithm 1. Recursive least squares control: at each iteration k, measure $\vec{x}_k$, compute $\vec{f}_k$ and $\Delta\vec{f}_k$, update $\hat{J}_k$ and $P_k$ via Equations (11) and (12), then apply the actuation $\vec{u}_{k+1}$ from Equation (13).

Equation (11) iteratively estimates the relationship between the actuation signal commanded in the previous iteration and the observed change in error. Equation (13) uses this updated estimation of the Jacobian to compute a new command that will drive the microrobot towards the target. The matrix $P_k$ is the estimate of the covariance matrix of the actuation signal, and λ is a weighting factor that controls the memory of the Jacobian estimation and prevents noise in the term $\Delta\vec{f}_k$ (due to system or measurement noise) from resulting in erratic estimation. Values of λ closer to 1 effect a longer memory; values >0.9 are typical [27]. The result is a control scheme that adaptively learns the relationship between the actuation signal and the robot velocities and drives the robot to the desired position even in the presence of noise. Including the target velocity term $\Delta\vec{x}_k^*/h_t$ in the development allows the controller to follow a moving trajectory. For point-to-point motion with stationary target positions, this term is simply zero. Figure 2 illustrates the microrobot and target positions used in computing Equations (9) and (10) as well as the control vector computed in Equation (13). No system calibration is required, and the same algorithm will control various microrobot device shapes at arbitrary optical zoom settings with no system modeling or calibration.

Figure 2. As the microrobot moves, the position error $\vec{f}_k$ between the target and the robot is monitored in the kth image. The actuation signal $\vec{u}_k$ is a duty cycle for a square wave sent to each electromagnet pair at the kth iteration of the control loop.

Practical Implementation

One final consideration regarding the 2D control signal computed in Equation (13) is necessary for implementation on a physical system. The magnitude of the computed control signal may be beyond the physical limitations. In this event, the signal may be scaled such that its magnitude is within the system's capabilities but the direction of the vector within the control space is preserved. For a maximum allowable scalar magnitude $u_{max}$, the scaled actuation signal $\tilde{\vec{u}}_{k+1}$ is used:

$$\tilde{\vec{u}}_{k+1} = \begin{cases} \vec{u}_{k+1}, & \|\vec{u}_{k+1}\| \leq u_{max} \\ u_{max}\,\vec{u}_{k+1}/\|\vec{u}_{k+1}\|, & \|\vec{u}_{k+1}\| > u_{max} \end{cases} \quad (14)$$

This is similar to the trust region method employed by Jagersand [24] to prevent large motions outside of the estimated model's current area of validity.

While no modeling is necessary for the algorithmic implementation, it is assumed that the system has been thoughtfully designed such that the magnetic field strength is sufficient to pull the microrobot and overcome viscous drag and other fluid interface reactions.
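A compact sketch of Algorithm 1 follows, written in Python for concreteness (the actual system runs in LabVIEW). It uses the standard dynamic RLS formulation of [25] together with the quasi-Newton step and the magnitude scaling of Equation (14); the exact expressions should be read as assumptions consistent with that formulation rather than a verbatim reproduction of the paper's implementation.

```python
import numpy as np

class RLSVisualServo:
    """Sketch of Algorithm 1: uncalibrated RLS Jacobian estimation with a
    quasi-Newton control step and trust-region-style magnitude scaling."""

    def __init__(self, n=2, lam=0.99, h_t=1.0 / 8.0, u_max=1.0):
        self.J = np.eye(n)       # J0 = identity, as in the experiments
        self.P = np.eye(n)       # P0 = identity
        self.lam = lam           # forgetting factor, lambda = 0.99
        self.h_t = h_t           # sampling time (8 Hz vision update)
        self.u_max = u_max       # actuation limit (1.0 = 50% duty cycle)
        self.f_prev = None       # previous image error
        self.u = np.zeros(n)     # last commanded actuation

    def step(self, x_p, x_star, x_star_prev=None):
        f = np.asarray(x_p, float) - np.asarray(x_star, float)  # Eq. (7)
        if self.f_prev is not None:
            df = f - self.f_prev                 # observed change in error
            Pu = self.P @ self.u
            denom = self.lam + self.u @ Pu
            # Equation (11): RLS Jacobian update from the last applied command
            self.J = self.J + np.outer(df - self.J @ self.u, Pu) / denom
            # Equation (12): covariance update with forgetting factor
            self.P = (self.P - np.outer(Pu, Pu) / denom) / self.lam
        self.f_prev = f

        # Target velocity feedforward; zero for a stationary target.
        dx_star = (np.zeros_like(f) if x_star_prev is None
                   else np.asarray(x_star, float) - np.asarray(x_star_prev, float))
        # Equation (13): quasi-Newton step toward the (possibly moving) target
        u = -np.linalg.pinv(self.J) @ (f - dx_star) / self.h_t
        # Equation (14): scale to the actuation limit, preserving direction
        norm = np.linalg.norm(u)
        if norm > self.u_max:
            u = u * (self.u_max / norm)
        self.u = u
        return u    # signed commands, one per electromagnet pair (E/W, N/S)
```

For a point-to-point move, `step` would be called with a fixed `x_star` until the error norm falls below a tolerance; for the circular experiments described later, `x_star` and `x_star_prev` advance along the target path at each iteration.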
The algorithm as presented and experimentally verified in this section and the next is for 2D planar control; however, it is plausible to extend the method to three dimensions with additional electromagnets and imaging capabilities to capture three-dimensional position information. Such a system would not utilize a two-fluid interface and gravity would apply a biasing force in the z dimension; however, the Jacobian estimation method would be able to adjust the magnetic field to either work with or against the force of gravity.

Experimental System

A microrobot system has been implemented comprised of an electroplated nickel slug suspended at the interface between two immiscible fluids, and an electromagnetic actuation system. The microrobot devices are 20 μm thick and fit within a 200 μm diameter circle and were fabricated through the MEMSCAP MetalMUMPS process (Crolles, France) [28]. A variety of device morphologies (developed for an earlier microrobot competition [29]) provide an opportunity to study the effects of different robot shapes (which possess different viscous drag characteristics), and are shown in Figure 3.

The microrobot operates at the interface between vegetable oil and a solution consisting of sodium chloride and sodium bicarbonate dissolved in water, as shown in Figure 4. The robots and fluid are in a 20 × 20 × 15 mm vial placed at the center of four cylindrical electromagnets arrayed along the four points of the compass. Each magnet is driven with a pulse of amplitude 11 V and frequency 100 Hz. The duty cycle of the control signal is varied from 0% to 50%. By varying the amplitude or duty cycle of square wave input voltages to each electromagnet, the varying magnetic field imposed on the microrobot imposes varying forces that propel the microrobot through its workspace.

The robots have no permanent magnetization. When placed in a magnetic field they develop an induced magnetization. If the field is non-uniform this leads to motion in the direction of increasing magnetic field strength. We utilize a simple actuation scheme with a magnet for each cardinal direction where only one magnet is actuated at a given time to pull the robot in the desired direction.

Visual feedback is used to measure the position of the microrobot device. The vision system consists of a microscope and a 740 × 480 USB camera. System integration is achieved in the LabVIEW environment with an 8 Hz vision update rate. Simple thresholding is used to distinguish the microrobot object from the background, and the centroid of the object is used as the robot position. The robustness of binarization is enhanced by backlighting the microrobot beneath the fluid.

Stationary Target: Point to Point Motion

To demonstrate the robustness of the control, a microrobot device is commanded a sequence of point-to-point motions throughout the field of view of the system with the following variations:

• Magnification
• Electromagnet position and orientation
• Microrobot device morphology

Figures 5-7 show the variation, trajectory, and error (distance from robot to goal position) for each of these variations, respectively. The figures demonstrate convergent control for a wide variety of system configurations with no a priori knowledge, calibration, tuning, or careful fixturing of the components. For each experiment, the initial Jacobian matrix J0 and the covariance matrix P0 were set to the identity matrix and the recursive least squares algorithm was used to control the robots from one point to the next using λ = 0.99.
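The position measurement itself is a simple threshold-and-centroid computation on the backlit image. A minimal NumPy sketch of that step is given below; the threshold value and the synthetic test frame are assumptions, and the real system performs this in LabVIEW rather than Python.

```python
import numpy as np

def measure_position(gray_frame, threshold=80):
    """Threshold a backlit grayscale frame and return the centroid (in pixels)
    of the dark microrobot silhouette. The threshold value is an assumption;
    the paper does not report one."""
    # With backlighting, the robot appears dark against a bright background.
    mask = gray_frame < threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                              # robot not found in this frame
    return np.array([xs.mean(), ys.mean()])      # (x, y) centroid in pixels

# Example on a synthetic 740 x 480 frame with a dark blob near (300, 200).
frame = np.full((480, 740), 220, dtype=np.uint8)
frame[195:205, 295:305] = 30
print(measure_position(frame))                   # approximately [299.5, 199.5]
```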
In Figure 5, results are shown demonstrating point-to-point motion throughout the field of view for three different magnifications {10×, 20×, and 30×}. The corresponding pixel resolutions are approximately 9.8, 6.5, and 3.25 μm/pixel, respectively. Starting in the center of the image, the microrobot devices are commanded to a sequence of points (denoted with A, B, C, and D). This self-intersecting quadrilateral path ensures that all four magnets are utilized for the robot motion and covers a large portion of the workspace. Note that the trajectories in the second column are plotted on axes equivalent to the image resolution shown in the first column.

The same experiment is repeated for three different electromagnet configurations at 20× magnification as shown in Figure 6, demonstrating the adaptive ability of the controller to handle vastly different system configurations. If the controller were based on a system model (e.g., H∞ or PID), these significant alterations to the system configuration would render the controller ineffectual. Rather, the only difference that is demonstrated is found in row (c) where a reduced speed in downward motion is observed. This is due to the weaker magnetic field affected by pulling one magnet away. The system is still convergent and achieves each goal position. Notice that rows (a) and (b) would require significantly different inputs from the electromagnet pairs for up and down motion as compared with row (b) in Figure 5 where up and down motion can be affected with inputs from the north or south magnets.

The experiments are repeated again at 20× for three more microrobot device morphologies (Figure 7). This is significant, because at this scale the different shapes will have different viscous drag coefficients. Indeed, this is indicated by the various speeds demonstrated in the positioning error plots; rows (b) and (c) demonstrate significantly faster systems than row (a) or Figure 5b. A model-based controller would need to be calibrated to each device shape; however, the recursive least squares control is able to learn the system and servo each device to the goal positions. While the motion appears more erratic, it should be observed that the control method is designed to simply converge towards the static target points. The trajectory path is demonstrated in the following section.

All three figures repeatedly demonstrate convergent motion towards the target points under disparate conditions with no initialization or calibration.

Moving Target: Circular Motion

The inclusion of the target velocity term $\Delta\vec{x}_k^*/h_t$ in the RLS algorithm given in Section 3 enables the controller to minimize the error even when the target is moving. To demonstrate this, a circular path is given as the desired motion. Again, results in this section demonstrate the ability to actuate a microrobot device with one algorithm (without calibration) while varying the magnification, the target speed, and the system configuration, as described in the following subsections. As with the stationary experiments, the initial Jacobian matrix J0 and the covariance matrix P0 were arbitrarily set to the identity matrix and the RLS algorithm was used to control the robots from one point to the next using λ = 0.99.
System Performance for Various Magnification and Target Speeds

Here, the steady state tracking error for a moving target prescribed by the circular path

$$\vec{x}^*(t) = \vec{x}_c + r\begin{bmatrix} \cos(\omega t) \\ \sin(\omega t) \end{bmatrix} \quad (15)$$

is studied for a range of angular velocities ω at three different magnifications {10×, 20×, 30×}. As appropriate for an image-based visual servoing method, the error is presented in pixels at each magnification level. First, Figure 8 demonstrates the path, tracking error, and control effort made during one such experiment with the magnification set at 20×. The target is moving as described in Equation (15) where ω = 0.035 rad/s, which results in an average speed (tangential velocity) of 3.5 pixels/s (i.e., a radius of r = 100 pixels) or approximately 22.8 μm/s. The inset in Figure 8 records the path as the microrobot starts at the denoted location, servos towards the desired trajectory and continues around in a counter clockwise manner until it reaches the point where it initially met the desired path. The error between the microrobot and the desired path is shown, demonstrating initial convergence and a steady state tracking error of 1.5 pixels.

The normalized control effort $\tilde{u}_k$ for the experiment depicted in Figure 8 is presented in Figure 9 with each subplot providing the scaled control signals sent to each electromagnet pair (where a scaled effort of 1 represents a 50% duty cycle signal sent to the electromagnet). This control effort varies a great deal as the controller seeks to make small moves keeping the microrobot on the desired path. The target velocity is such that the change in the goal position is on the order of the resolution of the position measurement and the signal is noise dominated. However, the underlying sinusoidal effort is observed with an expected 90° phase shift between the N/S and E/W electromagnet pairs.

To more thoroughly investigate performance, the steady state tracking error is presented in Figure 10 for a series of experiments conducted at varying target speeds and system magnifications with error bars representing one standard deviation. The results convey both the limitations and strengths of the approach. Clearly the tracking error is greater for the 10× scenario. This makes sense in terms of system signal to noise ratios. At 10× magnification, the device is a smaller blob in the image which can result in more noise in the centroid calculation that is used to determine the position at each step. Furthermore, a given control signal results in a smaller motion (in pixels) than it would at a higher magnification. With greater system noise relative to observed motion, there is increased error in the tracking of the path.

Additionally, it is observed that there is increased tracking error as the target path velocity is increased. To a certain extent, this is to be expected and is similar to the results seen in [25]; however, the large errors seen at the highest speed for the 10× and 20× magnification are not a failure of the control system but rather due to the limitations of the electromagnets, which are not able to produce the required speeds for the microrobots at these settings. Inspection of the control effort for those two experiments shows saturation at the maximum allowed values.

Overall, Figure 10 demonstrates that when the target is moving at system achievable speeds, we demonstrate stable control for a 200 μm device following a moving target profile with average steady state errors ranging from 1.0 to 4.1 pixels in the image plane which translates to 4.1-40.5 μm in the workspace.
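For reference, the sketch below generates the circular target of Equation (15) at the 8 Hz update rate and scores a logged robot path by its mean and standard deviation of steady-state tracking error, the metric plotted in Figure 10. The circle center, the length of the convergence transient, and the synthetic stand-in for the logged path are assumptions for illustration.

```python
import numpy as np

# Sketch: generate the circular reference of Equation (15) and score a logged
# robot path against it. Center, transient length, noise level, and the
# synthetic path are illustrative assumptions.

h_t = 1.0 / 8.0                                # 8 Hz vision update rate
omega = 0.035                                  # rad/s, as in the Figure 8 experiment
r = 3.5 / omega                                # radius implied by 3.5 px/s tangential speed
center = np.array([370.0, 240.0])              # assumed center in a 740 x 480 frame

t = np.arange(0.0, 3 * 2 * np.pi / omega, h_t)                      # three revolutions
target = center + r * np.c_[np.cos(omega * t), np.sin(omega * t)]   # Equation (15)

# Stand-in for the measured robot path: the target plus ~1.5 px tracking noise.
rng = np.random.default_rng(0)
robot = target + rng.normal(scale=1.5, size=target.shape)

# Steady-state tracking error: discard an assumed 30 s convergence transient.
settle = int(30.0 / h_t)
err = np.linalg.norm(robot[settle:] - target[settle:], axis=1)
print(f"steady-state error: {err.mean():.2f} +/- {err.std():.2f} pixels")
```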
Tracking Performance for Various System Modifications

To demonstrate the versatility of the controller, the same moving target experiment is repeated for two significant system modifications using the same target path described in Equation (15).

Conclusions

This work has experimentally demonstrated an uncalibrated vision-based control method using recursive least squares for a magnetically actuated microrobot system working in a planar fluid environment. The uncalibrated nature of the controller was demonstrated by altering the magnification of the microscope, significantly rotating the electromagnet array, and by testing 200 μm robots with different morphologies. Despite all these changes, the uncalibrated image-based control method converges to stationary target points and demonstrates stable and consistent tracking results for moving target trajectories, with average steady state path errors ranging from 1.0 to 4.1 pixels in the image plane (4.1-40.5 μm in the workspace). At a scale where accurate modeling of all system parameters can be difficult, the recursive least squares estimation and control method offers a great deal of flexibility for microrobot control, with one algorithm capable of controlling a variety of system configurations.

Figure 1. (a) Two electromagnet pairs arrayed about a ferromagnetic mass (the microrobot) operating in a fluid environment; and (b) the free body diagram of the microrobot in the fluid.

Figure 3. Models of the different microrobot designs derived from the mask layout, rendered in MEMSPro. The designs are referred to as: (a) "S", (b) "bar", (c) "wedge", and (d) "star". Each device is approximately 200 μm in diameter.

Figure 5. Point-to-point control for three different magnifications {(a) 10×, (b) 20×, and (c) 30×} showing the image plane trajectory and the error between the goal position and the microrobot over time. The same algorithm and initial parameters were used in each case.

Figure 6. Trajectory and error for point-to-point motion. In row (a), the electromagnets were rotated approximately 45° counter-clockwise from the nominal compass-based orientation; row (b) demonstrates results for a rotation in the opposite direction; and row (c) gives results when the south magnet was pulled away from the microrobot device several centimeters.

Figure 7. Point-to-point control of three different microrobot devices using the same algorithm and initial conditions: (a) "S", (b) "bar" and (c) "wedge" shaped devices. These were performed at a 20× magnification and may be compared to the "star" device in Figure 5b. Noticeably, the wedge device moves the most quickly, but each microrobot device converges on the target points.

Figure 8. Tracking error in pixels between actual robot position and desired path. The steady state error is 1.5 pixels, and the inset demonstrates the captured path of the robot.

Figure 9. Normalized control effort for the experiment given in Figure 8. The effort is normalized such that a 1 represents the maximum 50% duty cycle applied to a magnet. Positive values are applied to one magnet (E or N) of each magnet pair (N/S, E/W).

Figure 10. Average steady state tracking error (in pixels), with error bars indicating one standard deviation, for a star microrobot following a circular path at three different magnifications (10×, 20×, and 30×). The pixel resolutions are approximately 9.8, 6.5, and 3.25 μm/pixel, respectively.
2016-01-29T17:58:53.149Z
2014-09-26T00:00:00.000
{ "year": 2014, "sha1": "60638e1e7bd161a4b4f7182b9bce31ff52138529", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/5/4/797/pdf?version=1412003955", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "60638e1e7bd161a4b4f7182b9bce31ff52138529", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
16100215
pes2o/s2orc
v3-fos-license
Analysis of methanol and ethanol in virgin olive oil

Method details

The presence of short chain alcohols in virgin olive oil could be closely related with oil quality. Actually, low amounts of methanol (MeOH) and ethanol (EtOH) are accepted, since small quantities of these alcohols may be formed during the maturation of olives. On the other hand, high volumes of EtOH appear during the fermentation processes occurring mainly throughout olive fruit storage. The role of these short-chain alcohols regarding olive oil quality is still unclear, although their influence on the presence of fatty acid alkyl esters (FAAE), a quality parameter, is well known [1,2]. Due to the high volatility of short-chain alcohols, their determination is normally accomplished by static headspace extraction followed by gas chromatography (GC) analysis [3,4].

Reagents and samples

EtOH, MeOH, and 1-propanol (PrOH) used as reference materials were supplied by Romil Ltd. (Waterbeach, Cambridge, GB) and were of analytical quality. Varietal virgin olive oils of Adramitini, Blanqueta, Bouteillan, Chemdal Kabilye, Cipresino, Coratina, Frantoio, Koroneiki, Leccino, Lechín de Granada, Manzanilla, Negral, Pendolino, Picual, Rapasayo, and Sigoise were directly prepared in the laboratory using the Abencor system described elsewhere [5] to assure maximum oil quality. Olive fruits were obtained from an irrigated orchard (drip irrigation) in the southern part of Spain, under optimal cultivation parameters. They were handpicked and, according to their maturity index, belonged to the categories 0-4 (deep green to black skin olives with white flesh) [6]. Chemically refined olive-pomace oil was obtained directly from the producer. This oil, together with all reagents and samples, was kept in the dark at 4 °C until use.

Concentrated solutions of PrOH (internal standard, IS) were prepared by dissolving PrOH (cooled down to 4 °C, density = 0.810 g mL⁻¹) in refined olive-pomace oil at proportions of 12.5 mL PrOH per kilo oil. From these concentrated solutions, diluted IS solutions were prepared by mixing 1 g concentrated solution with 24 g refined olive-pomace oil. All critical volumes were measured with calibrated precision pipettes (0.6 μL systematic error). Both concentrated and diluted IS solutions were kept in the dark at −20 °C before use.

Samples were prepared just before the analysis in the following way: 3.00 g oil (room temperature) and 300 mg diluted IS solution (room temperature) were introduced into a 9 mL vial (20 mm × 46 mm), which was immediately sealed with an aluminium crimp cap with silicone septa and with a PTFE face to eliminate bleed from the rubber portion. They were heated in a dry heat bath at 110 °C during 60 min. The vial headspace was then sampled via a thermostated stainless steel syringe (110 °C; sampling time = 30 s) and analysed by injecting the sample into the gas chromatograph. After each injection the syringe was cleaned by blowing out air and then dry nitrogen. Blank injections were carried out after each analysis to check the absence of carry-over effects.
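The internal-standard arithmetic implied by this preparation can be checked with a few lines; the sketch below only restates the quantities given in the text (12.5 mL PrOH per kg oil, density 0.810 g mL⁻¹, a 1 g + 24 g dilution, and 300 mg of diluted solution per 3.00 g sample). Variable names are ours.

```python
rho = 0.810                            # g/mL, PrOH density at 4 C
stock = 12.5 * rho / 1000.0            # g PrOH per g oil (12.5 mL per kg oil)
diluted = stock * 1.0 / (1.0 + 24.0)   # 1 g stock mixed into 24 g oil

m_added = 0.300                        # g of diluted IS solution per sample
m_sample = 3.00                        # g of oil sample

m_is_ug = diluted * m_added * 1e6      # micrograms of PrOH in the vial
print(round(m_is_ug, 1))               # ~121.5 ug of internal standard
print(round(m_is_ug / m_sample, 1))    # ~40.5 mg/kg effective IS level
```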
Instrumentation

Heating of the samples was carried out in a Tembloc thermostat dry-block (JP Selecta S.A., Barcelona, Spain). GC analyses of the volatiles were done with an Agilent 7890B Gas Chromatograph (Agilent Technologies, Santa Clara, California) equipped with a Tracer MHS123 2t1 Head Space Sampler and a flame ionization detector (FID). Acquisition of data was done with the Agilent ChemStation for GC System program.

The conditions for the GC assays were: SP2380 column (poly 90% biscyanopropyl-10% cyanopropylphenyl siloxane), 60 m length × 0.25 mm internal diameter × 0.20 μm film (Sigma-Aldrich Co. LLC, St. Louis, MO, USA), 500 μL injection volume, hydrogen carrier gas at 1.5 mL min⁻¹ and split injection (50:1 split ratio). The oven temperature programme was: 50 °C (7 min initial time), then rise at 10 °C min⁻¹ to 150 °C and hold 3 min. The injector and detector temperatures were 150 °C and 170 °C, respectively.

Development of the method

Tests to develop the method were performed using an in-house blank oil labelled as refined olive-pomace oil (oil comprising exclusively olive-pomace oils that have undergone classical refining), which had shown no significant chromatographic peaks within the retention time (Rt) windows of any of the volatiles under study. This oil was spiked with MeOH, EtOH, and PrOH (IS) at concentrations between 4 and 12 mg kg⁻¹. In each case we observed three distinctive sharp, symmetrical peaks with a signal-to-noise ratio of at least 3 and with no tailing or shoulders, corresponding to those three volatiles. The peaks were identified by their absolute and relative Rt, which were the result of 34 injections. The absolute Rt was always measured to three decimal places. These results also allowed the establishment of the Rt window for each target analyte, to compensate for the shifts in absolute Rt as a result of chromatographic variability. The relative Rt values remained constant throughout the study. Those values and the Rt windows for both MeOH and EtOH, together with that for the IS, are shown in Table 1.

Trials to establish the limit of detection (LOD), the limit of quantitation (LOQ), and differences in the response of the three volatiles were carried out by spiking eleven samples of refined olive-pomace oil with MeOH, EtOH and PrOH standard solutions at increasingly lower concentrations (from 992.1 mg kg⁻¹ to 0.02 mg kg⁻¹). The accepted concentration values were those that produced sharp, symmetrical analyte peaks with a signal-to-noise ratio of at least 2 and with no tailing or shoulders. Measurements were always made in duplicate. Since the lowest sensitivity (the minimum concentration of analyte that could be measured and reported with an acceptable confidence that it was higher than zero) was that of PrOH, we decided to be conservative and accept the same limits for both MeOH and EtOH. In this way, one hundred per cent of the spiked samples gave signals within the acceptance criteria and clearly distinguishable from the background. We set the LOD at 0.55 mg kg⁻¹ for the species tested.

The empirical LOQ is defined as the lowest concentration at which the acceptance criteria are met and the quantitative value is within ±20% of the target concentration [7]. According to our results on virgin olive oil, the lowest EtOH concentration to be expected is around 0.64 mg kg⁻¹ (±0.12); however, the fact of being unique and having such a 'high' RSDr (19%) made us take our second lowest value (0.74 mg kg⁻¹) as reference for the calculation of the LOQ. Applying the aforementioned reasoning, we set the LOQ for any of the volatiles under study at 0.59 mg kg⁻¹.

Analysis of samples

The volatile composition of 16 virgin olive oil varieties was determined according to the described method. The GC-FID analysis showed two dominant analyte peaks, which we identified as MeOH and EtOH (Fig. 1) based on the results obtained with the standard solutions. The quantitative evaluation was carried out using PrOH as IS.
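A minimal sketch of the Rt-window peak assignment described above follows. The window bounds here are hypothetical placeholders, since the actual values are given in Table 1 of the paper.

```python
# Peak assignment by retention-time (Rt) window; the bounds below are
# illustrative placeholders, not the values of Table 1.
RT_WINDOWS = {               # minutes
    "MeOH": (4.20, 4.40),
    "EtOH": (5.05, 5.25),
    "PrOH": (6.60, 6.80),    # internal standard
}

def assign_peaks(peaks):
    """Map detected peaks [(rt_min, area), ...] to analytes by Rt window."""
    out = {}
    for rt, area in peaks:
        for name, (lo, hi) in RT_WINDOWS.items():
            if lo <= rt <= hi:
                out[name] = out.get(name, 0.0) + area
    return out

print(assign_peaks([(4.31, 1520.0), (5.12, 980.0), (6.71, 1210.0)]))
```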
The FID sensitivity towards PrOH was proven to be 1.32 and 1.43 times lower than towards MeOH and EtOH, respectively. Therefore the respective areas must be corrected after they are obtained from the data integration software. The calculation of the concentration of each individual compound, in mg kg⁻¹, was performed as follows:

C_x = (A_x / A_IS) × (m_IS / m_S)

where A_x is the peak area for the volatile x divided by its correction factor, A_IS is the area of the PrOH peak, m_IS is the mass of PrOH added (in μg), and m_S is the mass of the sample used for the determination (in g). Table 2 shows the results of the analysis. Around 78% of the results show values for the relative standard deviation (referred to three times the standard deviation of repeatability) not higher than 13%, which can be considered quite good.

Additional information

This procedure is a modification of the Spanish specification UNE-EN 14110 [3], used to determine the MeOH content in fatty acid methyl esters (FAME) utilized as biodiesel. According to this method, the samples must be heated in a sealed vial at 80 °C until equilibrium is reached. The vial headspace is then sampled and analysed by capillary GC-FID. Quantitation is carried out with a three-point calibration curve using standard FAME solutions. The procedure only quantitates the MeOH content, and the use of external standardization may represent an error source.

A method for determining MeOH and EtOH in olive oil samples by GC-FID using packed columns has been developed by Mariani and co-workers [4]. They use a modified liner in such a way that they can inject the oil samples directly into the gas chromatograph. The oil is then heated, and the adapted liner permits the concentration of the headspace, from which the volatile fraction goes directly to the column, whereas the triglycerides remain in the liner's reservoir. Thanks to the column characteristics, also in this case just the MeOH and EtOH peaks are present in the chromatograms, but again the use of external standards for the quantitative determinations introduces an error source. Additional disadvantages of this method are the need for an altered injector and the utilization of packed columns, which may not be so common nowadays. In any case, Mariani's results on the content of MeOH (3-10 mg kg⁻¹) and EtOH (1-28 mg kg⁻¹) in virgin olive oils are comparable, within our error limits, to those obtained when applying the present method (Table 2), which supports the possibility of using both procedures according to the laboratory equipment.
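Putting the correction factors and the quantitation formula above together, a minimal worked sketch (the peak areas are arbitrary; the IS mass follows the preparation arithmetic earlier, and the function name is ours):

```python
CORRECTION = {"MeOH": 1.32, "EtOH": 1.43}    # FID response relative to PrOH

def concentration_mg_per_kg(area, analyte, area_is, m_is_ug, m_sample_g):
    """C_x = (A_x / A_IS) * (m_IS / m_S); A_x is first divided by its
    correction factor, and ug of IS over g of sample yields mg/kg."""
    return (area / CORRECTION[analyte] / area_is) * (m_is_ug / m_sample_g)

# Arbitrary example areas, with the ~121.5 ug IS / 3.00 g sample from above:
print(concentration_mg_per_kg(800.0, "EtOH", 1210.0, 121.5, 3.00))  # ~18.7
```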
2017-06-17T18:15:21.572Z
2014-09-23T00:00:00.000
{ "year": 2014, "sha1": "1dc6e9ca2397d2ff48d036e3f926bf3bb1f46b23", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.mex.2014.09.002", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1dc6e9ca2397d2ff48d036e3f926bf3bb1f46b23", "s2fieldsofstudy": [ "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
156718919
pes2o/s2orc
v3-fos-license
ANALYSIS OF RESOURCE USE EFFICIENCY AMONG SOYBEAN (GLYCINE MAX) FARMERS IN GBOKO LOCAL GOVERNMENT AREA OF BENUE STATE, NIGERIA

The study examined the efficiency of resource use in soybean production in Gboko Local Government Area of Benue State, Nigeria. The objectives of the study were to identify and describe the socio-economic characteristics of soybean farmers and to determine resource allocation among soybean farmers. A multi-stage random sampling technique was used to select a sample of 120 respondents. Data collected were subjected to descriptive statistics and production function analysis. The results revealed that 93.3% of the farmers had one form of formal education or the other, with over 65% cultivating between 1-4 hectares. Also, 87.5% of the farmers were in their active age, and 81.7% utilized their personal savings as a major source of finance for production. The result of the production function analysis indicated that 87.21% of the variation in the output of soybean is explained by the independent variables. Resource-use efficiency analysis revealed that quantity of seed, farm size, herbicide and inorganic fertilizer were underutilized, while labour was over utilized. Provision of adequate and timely farming inputs, making loans accessible to farmers, and reasonable market prices for soybean are essential to boost production.

INTRODUCTION

Soybean (Glycine max) is an important crop in the world. The crop can be successfully grown in many states in Nigeria using low agricultural input. Soybean cultivation in Nigeria has expanded as a result of its nutritive and economic importance and diverse domestic usage. It is also a prime source of vegetable oil in the international market. Soybean has an average protein content of 40% and is more protein-rich than any of the common vegetable or animal food sources found in Nigeria. Soybean seeds also contain about 20% oil on a dry matter basis, and this is 85% unsaturated and cholesterol-free (Dugje et al., 2009). Of the oil fraction, 95% is consumed as edible oil, with the rest used for industrial products from cosmetics and hygiene products to paint removers and plastics (Liu, 2008). Recently, soybean has been found to be an industrially important crop used as an anti-corrosion agent, core oil, and bio-fuel, due to less or no nitrogen element in the oil, and as a disinfectant and in pesticides, printing inks, paints, adhesives, antibiotics and cosmetics (Ngalamu et al., 2012).

The animal protein intake in Nigeria is below the United Nations and Food and Agriculture Organization recommended optimal daily requirement of 20 grams for a developing country, as against the 75 grams for normal growth and development (FAO, 1992). Although protein in the human diet is derived from both plant and animal sources, the declining consumption of animal protein due to its high prices requires alternative sources. Soybean provides a cheaper, high-protein alternative substitute for animal protein. The inclusion of soybean in the carbohydrate-rich staple foods in Nigeria will increase their protein content (Ashaye, Adegbulugbe and Sanni, 2005; Ajobo and Akinyemi, 2007). Estimates show that about 925 million individuals are undernourished worldwide (FAO, 2010b). Soybean has the potential to address the needs of these individuals through increased local production and consumption of the crop. Development of locally adapted soybean varieties, consumed either as cooked mature seeds or immature green seeds, would offer vital nutrients and bring balance to the undernourished diet.
Other than its high protein content, soybean also has a good amount of calories and fat. It contains the eight essential amino acids and is a rich source of polyunsaturated fatty acids (including the good fat, omega-3) and is free of cholesterol (Food and Agriculture Organization, 1999). The main goal of agricultural research centres like the International Institute of Tropical Agriculture (IITA) is to generate technologies that will improve productivity, the welfare of farmers, and household nutritional status.

Benue State is acclaimed the nation's "food basket" because of its rich and diverse agricultural produce, which includes yams, rice, beans, cassava, potatoes, maize, soybeans, sorghum, millet and cocoyam. The state also accounts for over 70 percent of Nigeria's soybean production (retrieved from http://www.greaterbenue.com on 25th February, 2015). The Benue State Agricultural and Rural Development Authority (BNARDA, 1995) also reported that Benue State accounts for over 70% of soybean production in Nigeria. Similarly, a survey conducted and reported by IITA in 1989 revealed that Benue State remained the major producer of soybean in Nigeria. The citizens of the state, especially in the rural areas, are predominantly engaged in farming activities.

Production efficiency means the attainment of production goals without waste. Efficiency is often used synonymously with productivity, which relates output to input. In agriculture, the analysis of efficiency is generally associated with the possibility of farm production attaining an optimal level of output from a given bundle of inputs at least cost. Resource use efficiency means how efficiently the farmer can use his resources in the production process. Analysis of resource use is very important because our resources are limited. In order to achieve the optimum production level, resources must be available, and whatever quantities of available resources there are must be used efficiently. Successful, result-oriented farm planning and policies require knowledge of the productivities of farm resources, in order to know the resources whose quantity or rate of use should be increased or decreased (Alimi, 2000, cited in Sani et al., 2010).

Mugabo et al. (2014), in their study of resource use efficiency in soybean production in Rwanda, reported that with an elasticity of 0.46, plot size was the most important factor of soybean production. It was closely followed by intermediate inputs (fertilizers, pesticides and seeds), with a coefficient of 0.44. When intermediate inputs were decomposed, fertilizers, with an elasticity of 0.062, appeared to contribute more to soybean production than pesticides (0.057) and seeds (0.034). Technical inefficiency was responsible for at least 93% of total variation in soybean output among the survey farmers. The relative efficiency (allocative efficiency) of resource use, expressed as the ratio of marginal value product (MVP) to marginal factor cost (MFC), was 1.73 for soybean plot size, 1.36 for fertilizers, and 1.92 for pesticides. These ratios indicate that too little of these inputs is being used in relation to the prevailing market conditions. Also, Olorunsanya et al. (2009), in their marginal analysis of resource utilization for soybean in Kwara State, North Central Nigeria, showed that there was inefficiency in the utilization of resources in the area, with land being underutilized and other resources (labour, seed and herbicide) being over utilized.
Findings from most of the existing studies revealed that farmers are inefficient in their resource allocation. The variables that lead to inefficiency, as reported by most studies, are mainly socio-economic variables such as farmers' age, level of education, farm size, number of hired workers, years of farming experience, access to extension contact, land ownership, cooperative membership, etc. All these negatively affect the efficiency of resource use by the peasant farmers. In order to ensure efficiency of resource use by farmers in Benue State, the Benue State Agricultural and Rural Development Authority (BNARDA), which also serves as a research centre, was established by the Benue State government as a parastatal under Edict No. 7 of 1985, targeting essentially the small-scale farmers. BNARDA's overall objectives are to promote increased agricultural production in the state and raise the income and standard of living of the farmers. In order to make the impact of BNARDA felt in the whole state, it is operated on the basis of three agro-development zones, namely, the Central Zone with headquarters at Otukpo, the Eastern Zone with headquarters at Adikpo, and the Northern Zone with its headquarters located at Gboko. Despite the efforts of BNARDA and other agricultural research institutes in the state, soybean farmers are still not efficient in the use of available resources. In an attempt to address this, the study was carried out with the broad objective of analysing resource use efficiency among soybean farmers in Gboko Local Government Area of Benue State, Nigeria. The specific objectives were to describe the socio-economic characteristics of soybean farmers in the study area and to determine the efficiency of resource use among the soybean farmers.

METHODOLOGY

The study was conducted in Gboko Local Government Area of Benue State. Gboko Local Government is located between latitudes 6°3′ and 8°1′ North of the Equator and longitudes 8° and 10° East of the Greenwich Meridian (Benue State Government Diary, 2009). The Local Government is bounded by Tarka and Guma Local Government Areas to the north, Ushongo Local Government to the south, Buruku Local Government to the east, and Konshisha Local Government to the south-west, while Gwer Local Government lies to the west. The local government derived its name from the common trees known as Gboko, which grow especially on the hills at the north-western part of the area (Abaya, 2013). The local government covers a land mass of 2,264 km² with a population of 361,325 people (National Population Commission, NPC, 2006), making Gboko one of the most populous Local Government Areas in Benue State. The Local Government Area has a tropical climate marked by two distinct seasons (the wet or rainy season and the dry season). The rainy season lasts from April to October, with an August break. The annual rainfall is in the range of 1,500 mm to 1,800 mm. The dry season begins in November and ends in March with a dust-laden spell, the Harmattan wind, that blows from across the Sahara. The temperature fluctuates between 23 °C and 35 °C. Because the soil is rich, sandy-loamy and very fertile for most savannah food crops, Gboko farmers produce root crops such as yams, cassava, and sweet potatoes in large quantities beyond subsistence level. The rich agricultural soil of the local government ranks Gboko as the highest producer of soybeans as well as other grains/seeds like maize, guinea corn, groundnut, etc. (Abaya, 2013).
A multi-stage random sampling technique was used to select respondents for the study. In the first stage, six out of the seventeen wards in Gboko Local Government Area were randomly selected, that is, one ward each from the six districts. In the second stage, two villages were randomly selected from each of the six wards, giving a total of twelve villages. A total of 350 farmers were found to be involved in soybean production in the selected villages (BNARDA, 2012). A total of 120 questionnaires were administered to randomly selected farmers in the villages. The distribution was proportionately done based on the number of farmers in the selected villages, and all the questionnaires were retrieved and used for the analysis. The distribution of the sample in the twelve (12) selected villages is presented in Table 1.

Descriptive statistics made use of percentages and means to describe the socio-economic characteristics of soybean farmers, while production function analysis was used to examine the resource allocation pattern among respondents. Four functional forms of the production function, namely linear, semi-log, double log and exponential, were tried. The Cobb-Douglas or double log gave the best fit and is expressed explicitly as

log Y = B0 + B1 log X1 + B2 log X2 + B3 log X3 + B4 log X4 + B5 log X5 + Ui (3.1)

where Y = output of soybean (kg), X1 = quantity of soybean seed used (kg), X2 = farm size (ha), X3 = quantity of herbicides used (litres), X4 = labour in man-days, X5 = quantity of inorganic fertilizer used (kg), B0 = constant, B1-B5 = coefficients of the independent variables to be estimated, and Ui = error term.

Regression Decision Rule
i. If the efficiency ratio (r) = 1: there is efficiency of resource use.
ii. If the efficiency ratio (r) > 1: the resource is under-utilized.
iii. If the efficiency ratio (r) < 1: the resource is over-utilized (Moses and Adebayo, 2007; Goni and Baba, 2007).

The MVP of each input (MVPx) was computed using the regression coefficient of that input. The MFC is the prevailing market price of each input, or the geometric mean value of the input (x).

Socio-economic Characteristics of Respondents

The study revealed that 82.5% of the respondents were male and 82.5% were equally married, indicating that soybean production in the study area is dominated by men, although women play complementary roles. Ndaghu et al. (2009) reported that most household heads are male and are responsible for major production decisions. The majority (68.3%) of the respondents were aged 15-44 years, while the average age of the respondents was 39 years, indicating that the majority of the soybean farmers in the study area were within the most active age bracket of the population. It also indicates that their productivity is expected to increase, because younger farmers adopt new agricultural innovations more easily than older farmers. The majority (93.3%) had one form of formal education or the other. Ajao et al. (2012) stated that the more educated farmers are, the better their utilization of the soybean production process. The predominant land tenure system was inheritance, representing 65%. Ojo et al. (2008) and Omonona et al. (2010) both reported that the majority of their respondents acquired their land through inheritance and family land. The mean farm size was 3.4 ha, with 71.6% of the respondents having a farm size of 1-4 ha. The major source of finance was personal savings (81.7%).
This could negatively affect farmers, especially when there is a need to buy new farm inputs. Also, 84.2% of the respondents used improved seed in cultivating soybean, and the majority (72.5%) had no personal access to extension personnel. Similarly, 60% had farming as their main occupation, while 49.2% made use of family labour in the cultivation of soybean. The mean farming experience was 18 years, and 35% of the respondents had 21 years and above. Tashikalma (2007) also reported that farmers with more years of farming experience are better in terms of handling farm operations compared to farmers with fewer years of farming experience. Also, only 39.8% were members of a cooperative association.

Production function analysis

Production function analysis was used in examining the influence of the various inputs (X1-X5) on the output of soybean. The data obtained were subjected to four functional forms, namely linear, semi-log, double log and exponential; the double log gave the best fit, based on the a priori expectation of fulfilling economic, statistical and econometric criteria with respect to the signs, magnitude and significance of the regression coefficients, and the result is presented in Table 3. The coefficient of multiple determination (adjusted R²) in the model was 0.872058, meaning that 87.21% of the variation in the output Y is explained by the independent variables. All the coefficients in the chosen model carry the expected positive sign.

The production function estimates indicate the relative importance of factor inputs in soybean production. From the result in Table 3, the farm size (X2) factor input appears to be the most important factor of production, with an elasticity of 0.470. The positive coefficient of farm size conforms to a priori expectations and is significant at the 1% probability level. The significance of this variable is a result of its importance in crop production, since its shortage would not only pose a direct negative effect on production but also an indirect negative effect on output through reducing the marginal productivity of non-land inputs (Shehu and Mshelia, 2007). The coefficient of seed (X1) was positive (0.388761) and is statistically significant at 1%, implying that a 1% increase in the quantity of soybean seed would cause a corresponding increase of about 0.39% in output, ceteris paribus. The coefficient of labour input (X4) is also positive (0.033) and statistically significant at the 1% level, suggesting its importance in agricultural production. The coefficient of inorganic fertilizer (X5) is also positive (0.192) and statistically significant at 1%, meaning that fertilizer increases yield (output) when applied appropriately: a 1% increase in fertilizer application by soybean farmers would increase output by 0.192%. Similarly, the coefficient of herbicide used (X3) is positive (0.154) and is statistically significant at the 5% level, meaning that a 1% increase in the quantity of herbicide would bring about a 0.154% increase in the output of soybean, if applied appropriately by the farmers. Ani et al. (2012) also reported similar findings on leguminous crops in Benue State.

Marginal productivity of resource use in soybean production

The marginal physical product (MPP) for input utilization was derived from the estimated regression coefficients and the arithmetic mean values of output and inputs, as shown in Table 5. The marginal physical product for each of the resources was obtained based on the double log production function.
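Before turning to the individual values reported next, the sketch below illustrates on synthetic data how the double-log fit of Eq. (3.1), the Cobb-Douglas marginal physical products MPPi = Bi × (mean Y / mean Xi), and the r = MVP/MFC decision rule fit together. All data and prices in it are invented placeholders, not the survey data. The last line is a consistency check on the reported figures: inverting the MPP relation for farm size with the elasticity of 0.470, the MPP of 233.49 kg/ha reported below, and the mean farm size of 3.4 ha implies a mean output of about 1,689 kg.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120                                    # sample size, as in the survey

# Synthetic inputs: seed (kg), farm size (ha), herbicide (L),
# labour (man-days), fertilizer (kg) -- placeholders, not survey data.
X = np.column_stack([rng.uniform(lo, hi, n) for lo, hi in
                     [(10, 40), (1, 5), (1, 6), (20, 80), (50, 250)]])
b_true = np.array([0.39, 0.47, 0.15, 0.03, 0.19])
logY = 1.0 + np.log(X) @ b_true + rng.normal(0, 0.1, n)

# OLS on the double-log (Cobb-Douglas) form of Eq. (3.1)
A = np.column_stack([np.ones(n), np.log(X)])
coef, *_ = np.linalg.lstsq(A, logY, rcond=None)
b = coef[1:]                               # estimated elasticities B1..B5

Y = np.exp(logY)
mpp = b * Y.mean() / X.mean(axis=0)        # MPP_i = B_i * (mean Y / mean X_i)
p_y = 150.0                                # assumed soybean price per kg
mfc = np.array([180.0, 25000.0, 1200.0, 700.0, 160.0])  # assumed input prices
for name, r in zip(["seed", "land", "herbicide", "labour", "fertilizer"],
                   mpp * p_y / mfc):       # r = MVP / MFC decision rule
    print(name, "under-utilized" if r > 1 else "over-utilized", round(r, 2))

# Consistency check on the reported values: MPP_land = B_land * Ybar / Xbar
print(233.49 * 3.4 / 0.470)                # implied mean output, ~1689 kg
```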
Farm size (hectares) gave the highest value of marginal physical product (233.49). The implication is that an increase in farm size by one hectare would result in an extra 233.49 kg of soybean. Comparison of the marginal value products (MVP) of seed (X1), farm size (X2), herbicide (X3), and inorganic fertilizer (X5) to their corresponding marginal factor costs (MFC) revealed that the ratio was greater than unity for these inputs, indicating that they were underutilized. On the other hand, the ratio of the marginal value product (MVP) of labour (X4) to its corresponding marginal factor cost (MFC) was less than unity, implying that it was over utilized. Optimal resource allocation requires that the marginal value product (MVP) be equal to the marginal factor cost (MFC).

CONCLUSION

The study examined resource use efficiency among soybean farmers in Gboko Local Government Area of Benue State, Nigeria. The socio-economic analysis of the respondents revealed that the majority (68.3%) were young, with a mean age of 39 years; 82.5% were married and 82.5% of the farmers were male. The majority of the farmers were small-scale farmers, and the majority (93.3%) had one form of education or the other. The major source of financing production was personal savings, and the predominant land tenure system was inheritance. The majority of the farmers (72.5%) had no personal access to extension personnel, and only 39.8% belonged to a cooperative society. The production function analysis also revealed that quantity of seed used, farm size, herbicide and inorganic fertilizer were underutilized, whereas labour was over utilized.

The following recommendations are suggested: appropriate seed rates per hectare should be used; available farm land should be used efficiently for planting soybean, as some farmers planted sparsely; and inadequate application of fertilizer and herbicide should be adjusted to bring output to the optimal level. Also, the standard man-days of labour should be utilized in soybean production to avoid its overutilization, with the surplus hours channelled into other farming activities; alternatively, the necessary adjustments of the production inputs should be made to bring production to the optimal level. Farmers are also encouraged to form cooperative groups to help them buy farm inputs (fertilizer, herbicide, etc.) at reasonable prices and to jointly market their produce at favourable prices, eliminating the role of middlemen. Cooperative societies will also help the farmers access agricultural loans at reasonable interest rates. Additionally, favourable prices for soybean could attract youths into its production and would correct the existing perception that farming is left for the weak and old in our rural communities. Lastly, the mode of land ownership, inheritance, should be addressed by the government at all levels to facilitate additional production of soybean in the study area.
2019-05-18T13:07:40.316Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "2d9f0837a61fb51f5fefd35c38d75d88e5aad700", "oa_license": "CCBY", "oa_url": "https://www.ajol.info/index.php/gjass/article/download/138882/128554", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "36a2e4d2876b3b624db5348297fcbd71faa7f2ad", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Economics" ], "extfieldsofstudy": [ "Economics" ] }
118642893
pes2o/s2orc
v3-fos-license
Counting of discrete Rossby/drift wave resonant triads (again)

The purpose of our earlier note (arXiv:1309.0405 [physics.flu-dyn]) was to remove the confusion over counting of resonant wave triads for Rossby and drift waves in the context of the Charney-Hasegawa-Mima equation. A comment by Kartashov and Kartashova (arXiv:1309.0992v1 [physics.flu-dyn]) on that note has further confused the situation. The present note aims to remove this obfuscation.

Counting of Triads

In our earlier note [BHLQ13], we pointed out a significant error of over-counting of triads in [KK13]. For a real field ψ(x, y, t), the modes ψ_k and ψ_{-k} must occur with amplitudes which are complex conjugates. They are not independent. Thus, the claim of [KK13] that they are listing six separate triads in their equation (10) is wrong. In fact, all six triads are equivalent.

A comment [KK13a] on our note has once more confused the situation.

Real Fields, not Real Coefficients

The comment of [KK13a] claims that "[BHLQ13] states that [KK13] counted triads with real amplitudes." This is wrong. In fact, [BHLQ13] states that the underlying field ψ is real and that, consequently, the amplitudes occur in complex conjugate pairs. Again, in their Conclusions, [KK13a] write that "[BHLQ13] did not notice that the dynamical system (4) in [KK13] is written in complex variables." This is nonsense. It is abundantly clear that the variables in (4) are complex: there is no misunderstanding on this point. It is the physical field that is real.

Deduction by [KK13] from a Theorem in [YY13]

[KK13] use a result of [YY13], which is valid in the asymptotic limit of infinitely large β, to deduce results for finite β. An asymptotic result like this should be valid as β → ∞, but is invalid for small or moderate β, as in [BH13]. The deductions of [KK13] are unsustainable.

Conclusions

In [KK13a] there is confusion between what is real and what is complex. This has once again confused the picture that we were aiming to clarify in [BHLQ13]. The present note should remove this obfuscation. The Appendix contains important facts that provide evidence of the completeness of the classification of discrete resonant triads for Rossby/drift waves in periodic domains presented by the published paper [BH13], regardless of the unfounded comments by Kartashov and Kartashova in the preprints [KK13, KK13a].

Acknowledgments

We thank Sergey V. Nazarenko for useful discussions.

Appendix: Three Important Facts

Fact 1. The classification of discrete resonant triads for Rossby/drift waves in periodic domains presented by the published paper [BH13] is in fact complete. The reason for the completeness is that the classification is based on an explicit bijective mapping from the set of non-zonal irreducible triads to the set of representations of integers as sums of the form r² + s² and 3m² + n². The explicit representations are due to well-known results by Pierre de Fermat, applied in [BH13] for the first time to the problem of finding exact resonances for the Charney-Hasegawa-Mima equation in periodic domains.

Fact 2. There is a repeated critique in [KK13, KK13a] about the minimum level of detuning δ_min of the set of quasi-resonant triads found by [BH13] in the box of size L = 100 (δ_min ≈ 10⁻⁵ as compared to δ_min ≈ 10⁻¹² of the brute-force search). This is not an issue, since [BH13] explicitly state (p. 2414)
that they sample only a fraction of the total available triads (40 434 out of ≈ L⁴ triads) in order to study the clusters' connectivity properties and percolation transition, all as functions of the allowed detuning. The success of [BH13] is that the observed features (connectivity and percolation) for such a small sample have the same properties as the corresponding features for the whole set of triads, computed directly by brute force in a thorough study at higher resolution (L ≥ 256) published by Bustamante and coworkers in [HCB13].

Fact 3. The quasi-resonances found by [BH13] are close to resonant manifolds, not to exact discrete resonances. Preprint [KK13a] introduced confusion again into the subject by citing extracts from [BH13] and interpreting them literally. Let us clarify the matter: an arbitrary point (k_1, k_2) on a resonant manifold is generically an "exact resonance" in the sense that it still satisfies k_1 + k_2 = k_3 and ω(k_1) + ω(k_2) = ω(k_3), but with non-integer wavevectors. This is not to be confused with an exact discrete resonance (the matter of our research, corresponding to a set of integer wavevectors). In order to make the wavevectors physically sensible we move the point to a nearby integer point. In so doing, the equations k′_1 + k′_2 = k′_3 are maintained for the new integer wavevectors, but now ω(k′_1) + ω(k′_2) ≠ ω(k′_3), providing thus a quasi-resonant triad lying close to the resonant manifold, not close to an exact discrete resonance.

For example, one of the exact discrete resonant triads we use for building a set of quasi-resonances in the box of size L = 100 is the triad k_1 = {11 171 680, 463 515 988}, k_2 = {990 044 945, −305 135 237}, k_3 = k_1 + k_2 = {1 001 216 625, 158 380 751}. This triad is irreducible (the set of six components is relatively prime), satisfies ω(k_1) + ω(k_2) = ω(k_3) and is 10⁷ times greater than the box. Our construction of quasi-resonant triads out of this "generating triad" is as follows:

(i) Re-scale this triad by a common real factor α so that the re-scaled non-integer triad αk_1, αk_2, αk_3 fits into the L = 100 box.

(ii) Replace this non-integer triad with a nearby integer triad k′_1, k′_2, k′_3 that satisfies k′_1 + k′_2 = k′_3. This will automatically satisfy ω(k′_1) + ω(k′_2) ≈ ω(k′_3), so the new triad is quasi-resonant.

(iii) Repeat part (ii) with several nearby integer triads.

(iv) Repeat part (i) using all allowed values of α.

This produces about 1000 quasi-resonant triads in the box L = 100 out of the generating triad k_1, k_2, k_3. Finally, start again with another big irreducible generating triad. The whole process's computational time scales like L² and is thus very efficient as compared to a brute-force search of all quasi-resonant triads, where the computational time scales like L⁴ (and even slower due to storage and sampling issues) in the box of size L.
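The generating-triad construction above is easy to reproduce. The sketch below is ours, not code from [BH13]: it assumes the standard Charney-Hasegawa-Mima/Rossby dispersion ω(k) = −β kx/(kx² + ky²) with β set to 1 (the relative detuning does not depend on β), and the number of α values sampled is arbitrary.

```python
import numpy as np

def omega(k, beta=1.0):
    """Assumed Rossby/drift dispersion: omega(k) = -beta*kx / (kx^2 + ky^2)."""
    kx, ky = k
    return -beta * kx / (kx**2 + ky**2)

# The generating exact discrete resonance quoted in Fact 3:
k1 = np.array([11_171_680, 463_515_988], dtype=np.int64)
k2 = np.array([990_044_945, -305_135_237], dtype=np.int64)
k3 = k1 + k2
print(omega(k1) + omega(k2) - omega(k3))   # ~0 for an exact resonance

def quasi_triads(k1, k2, L=100, n_alpha=50):
    """Steps (i)-(ii): shrink the generating triad into the L-box and round
    to integer wavevectors; k1' + k2' = k3' then holds by construction."""
    k3 = k1 + k2
    alpha_max = L / max(abs(int(c)) for c in np.concatenate([k1, k2, k3]))
    triads = []
    for alpha in np.linspace(0.2, 1.0, n_alpha) * alpha_max:
        q1 = np.rint(alpha * k1).astype(int)
        q2 = np.rint(alpha * k2).astype(int)
        q3 = q1 + q2
        if q1.any() and q2.any() and q3.any():
            delta = abs(omega(q1) + omega(q2) - omega(q3))  # detuning
            triads.append((tuple(q1), tuple(q2), delta))
    return triads

for t in quasi_triads(k1, k2)[:3]:
    print(t)   # small detuning delta marks a quasi-resonant triad
```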
2013-09-02T13:49:32.000Z
2013-09-02T00:00:00.000
{ "year": 2013, "sha1": "cad912eedef8c0fcb579deab832c4b4dac2c7dfd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "cad912eedef8c0fcb579deab832c4b4dac2c7dfd", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
14873591
pes2o/s2orc
v3-fos-license
Effective connectivity associated with auditory error detection in musicians with absolute pitch

It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left to right STG connections is important in the identification of self-voice error and sensory motor integration in AP musicians. We also identify reduced connectivity of left hemisphere PM to STG connections in the AP and RP groups during the error detection and correction process, relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings here also suggest that individuals with AP are more adept at using feedback related to pitch from the right hemisphere.

INTRODUCTION

Understanding the neural mechanisms underlying human vocalization provides insight into sensory motor control that can inform voice production in health and disease. A critical need is the development of neurobiologically plausible models of vocalization that apply to a wide range of perceptual abilities (disordered, average, professional) in order to fully understand how vocal control varies across the full spectrum of abilities. From a system level perspective, understanding the regions involved in vocalization cannot provide information about the neural networks that govern the wide range of sensory motor interactions that lead to vocal output. Rather, it is necessary to study how regions of the brain are functionally connected within the voice production system.

Prior studies (Bengtsson et al., 2005; Han et al., 2009; Loui and Schlaug, 2009; Kleber et al., 2010; Halwani et al., 2011) have shown differences between musicians and non-musicians while performing motor, auditory or somatosensory tasks. It is also known that voluntary responses to shifts in vocal pitch are more accurate and stable in experienced singers compared to non-musicians (Zarate and Zatorre, 2008), suggesting enhanced sensory control over the voice. A rare but interesting ability is perfect or absolute pitch (AP), which is the ability to perceive and identify exact musical notes. Individuals with AP have an enhanced ability to accurately relate a note to a musical scale without an acoustical reference pitch (Takeuchi and Hulse, 1993). The behavioral characteristics of AP have been examined extensively, although the etiology is still unknown.
Importantly, differences in both structural and functional characteristics of the brains of individuals with AP, when compared to controls who do not possess AP, have also been identified (Schlaug et al., 1995; Schlaug, 2001; Loui et al., 2011; Dohn et al., 2013). Specifically, structural imaging studies have identified a stronger leftward asymmetry of the planum temporale when comparing AP musicians with non-AP musicians (Schlaug et al., 1995). Functional imaging studies have also identified differences in AP, specifically inferior frontal (Zatorre et al., 1998) and superior temporal regions being increasingly activated in AP musicians during tone perception and pitch memory tasks (Schulze et al., 2009, 2012). Given the unique ability of AP, we expect that such enhancement of the functional mechanisms underlying sensory control of the voice in trained singers may lead to adaptations in the functional connectivity of brain regions and networks specifically related to audio-vocal integration and voice control.

To date, there is one study that has examined functional connectivity networks in people with AP (Loui et al., 2012). Loui et al. (2012) used graph theory analysis of fMRI data to examine networks of functional activation during music listening. Results identified increased clustering in the left superior temporal regions in AP subjects compared to controls. However, to provide data on sensory control mechanisms of vocalization across the spectrum of abilities, we have used a pitch perturbation approach in which a pitch-shift is introduced during vocalization (Larson, 1998). This approach has provided exceptionally robust data, allowing for detailed insight into human vocalization.

In the present experiment we studied effective connectivity in musicians with AP compared to musicians with relative pitch (RP) and subjects with no musical ability (NM), using dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch-shifted auditory feedback. We have previously identified bilateral STG regions as playing a key role in sensory control of the voice. Using both fMRI and DCM methods, we have identified the importance of STG in sensory motor control during pitch-shifted stimuli in healthy young subjects (Parkinson et al., 2012, 2013). Here we compared families of models examining STG involvement in the error detection/correction process. Based on the work discussed above, our a priori hypothesis was that DCM modeling would identify different connectivity patterns in individuals with AP when compared to RP and NM individuals. Based on previous literature on structural and functional differences in the AP brain (Schlaug et al., 1995; Bengtsson et al., 2005; Han et al., 2009; Loui and Schlaug, 2009; Kleber et al., 2010; Halwani et al., 2011; Loui et al., 2012), we expected to see differences specifically in the left hemisphere connections between STG and IFG/PM during vocalization with pitch-shifted feedback. We also expected to see a difference in the modulation of connections between the left and right hemisphere STG regions, based on our previous work (Zarate and Zatorre, 2008; Parkinson et al., 2012, 2013; Behroozmand et al., 2014). This work will provide additional understanding of the brain networks related to a range of sensory motor perceptual skills and an insight into the neural mechanisms driving differences in pitch perception across a spectrum of ability.
PARTICIPANTS

Thirty-three speakers of American English (18 females and 15 males, ages 18-25 years) with no history of neurological disorder participated in the study. Absolute and relative pitch subjects were recruited from the Bienen School of Music, and the untrained non-musicians were recruited from the general Northwestern University student population. There were 11 subjects recruited to each of the AP, RP, and untrained non-musician (NM) groups. All musicians had a minimum of 4 years of musical training [AP = 12.23 years (mean), range 7-16 years; RP = 11.64 years (mean), range 4-17 years]. Within the musician groups the instruments played included guitar, piano, violin, cello, clarinet, saxophone, trombone, trumpet, tuba, bassoon, French horn, oboe, and flute. A bilateral pure-tone hearing-screening test at 20 dB SPL (octave frequencies between 250 and 8000 Hz) was conducted to screen for normal hearing. A test of musical proficiency was conducted on the participants to evaluate their degree of pitch perception, identification, discrimination, and production abilities. The test included evaluation of chromatic pitch identification, chromatic sight singing, atonal sight singing, and microtonal pitch identification (for a detailed description of the tests please see Behroozmand et al., 2014). Each subject's performance across all tests was evaluated, and each subject was given an objective rating score between 0 and 100%. Classification into groups was based on score, with individuals classified as NM scoring below 50%, individuals classified as RP musicians scoring between 50 and 90%, and individuals classified as AP scoring over 90%. The Northwestern University institutional review board approved all study procedures including recruitment, data acquisition and informed consent, and subjects were monetarily compensated for their participation. Written informed consent was received from all participants.

EXPERIMENTAL DESIGN

During the experimental session, subjects were seated in a sound-treated room and were instructed to sustain the vowel sound /a/ for approximately 2 s. Subjects were asked to vocalize at their conversational pitch and loudness levels whenever they felt comfortable, i.e., without a cue. Subjects were informed that their voice would be played back to them through headphones during their vocalizations. Subjects were instructed to ignore any pitch-shifts they heard in the feedback of their voice. There was a pause of around 1-2 s between vocalizations, which allowed subjects to take a breath. During each vocalization a pitch-shift stimulus (±100 cents, 200 ms duration) was presented (Figure 1) in the auditory feedback, occurring between 500 and 1000 ms after voice onset. All pitch-shift stimuli were randomly varied in type and pitch-shift onset from trial to trial. The unit "cents" is a logarithmic value related to the 12-tone musical scale, where 100 cents equals one semitone. The rise time of the pitch shift was 10-15 ms. In each block of trials there were 120 vocalizations, taking approximately 15-20 min. The experiment consisted of two blocks of trials for a total experiment duration of approximately 30-40 min. Subjects were asked to keep their eyes open throughout the recording session. Subjects' voices were picked up with an AKG boomset microphone (model C420) and amplified with a Mackie mixer (model 1202-VLZ3). Pitch shifting of the voice was performed using an Eventide Eclipse Harmonizer.
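Since the stimuli are specified in cents, the corresponding feedback frequency follows from the equal-tempered relation f = f0 × 2^(cents/1200); the F0 value below is purely illustrative.

```python
def shift_hz(f0, cents):
    """Frequency after a pitch shift of the given size in cents
    (100 cents = one semitone on the 12-tone equal-tempered scale)."""
    return f0 * 2.0 ** (cents / 1200.0)

f0 = 220.0                   # example conversational F0 in Hz (illustrative)
print(shift_hz(f0, +100))    # ~233.08 Hz, one semitone up
print(shift_hz(f0, -100))    # ~207.65 Hz, one semitone down
```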
MIDI software (Max/MSP v.5.0, Cycling '74) was used to control the time delay of the shift from vocal onset and the duration, direction, and magnitude of pitch shifts. Voice and auditory feedback were sampled at 10 kHz and recorded onto a laboratory computer utilizing Chart software (AD Instruments) and a PowerLab A/D converter (Model ML880, AD Instruments). Subjects maintained their conversational F0 levels and voice loudness (about 70-75 dB) throughout the experiment, and the feedback signal (i.e., the subject's pitch-shifted voice) was delivered back to the subjects through Etymotic earphones (model ER1-14A) at a loudness of about 80-85 dB. The 10 dB increase in loudness between voice and feedback channels (controlled by a Crown D75 amplifier) was used to partially mask air-borne and bone-conducted voice feedback.

EEG ACQUISITION

The electroencephalogram (EEG) signals were recorded from 32 sites on the subject's scalp using an Ag-AgCl electrode cap (EasyCap GmbH, Germany) in accordance with the extended international 10-20 system (Oostenveld and Praamstra, 2001), including left and right mastoids. Electrode impedances were kept below 5 kΩ for all channels. EEG recordings were made using the average reference montage, in which the outputs of all of the amplifier channels were averaged. This averaged signal was used as the common reference for each channel. Signals were low-pass filtered with a 400-Hz cut-off frequency (anti-aliasing filter), digitized at 2 kHz, and recorded using a BrainVision QuickAmp amplifier (Brain Products GmbH, Germany). Electro-oculogram (EOG) signals were recorded using two pairs of bipolar electrodes placed above and below the right eye to monitor vertical eye movements and at the canthus of each eye to monitor horizontal eye movements.

ERP analysis

SPM8 [http://www.fil.ion.ucl.ac.uk/spm; update number 4667 (Litvak et al., 2011)] was used for all ERP pre-processing and data analysis. The data were first epoched into single trials, with a peri-stimulus window of −100 to 500 ms. The data were then down-sampled to 128 Hz and band-pass filtered (Butterworth) between 0.5 and 30 Hz. Artifact removal was implemented with robust averaging. A minimum of 100 epochs was averaged for each condition. The data were finally grand averaged over 11 AP musicians, 11 RP musicians, and 11 non-musician subjects.
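A rough numpy/scipy sketch of the stated pre-processing chain (epochs of −100 to 500 ms, down-sampling from 2 kHz to 128 Hz, 0.5-30 Hz Butterworth filtering, robust averaging) follows. This is not SPM8: the function and variable names are ours, filtering is applied before down-sampling for anti-aliasing safety, and SPM's robust averaging is approximated by a trimmed mean.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly
from scipy.stats import trim_mean

FS_RAW, FS_OUT = 2000, 128          # acquisition and analysis rates (Hz)

def preprocess(eeg, events):
    """eeg: (n_channels, n_samples) at FS_RAW; events: stimulus onsets
    in raw samples. Returns the per-channel average over -100..500 ms epochs."""
    sos = butter(4, [0.5, 30.0], btype="bandpass", fs=FS_RAW, output="sos")
    filt = sosfiltfilt(sos, eeg, axis=1)                 # zero-phase band-pass
    ds = resample_poly(filt, up=FS_OUT, down=FS_RAW, axis=1)  # 2 kHz -> 128 Hz
    ev = (np.asarray(events) * FS_OUT) // FS_RAW         # onsets at 128 Hz
    pre, post = int(0.1 * FS_OUT), int(0.5 * FS_OUT)     # ~ -100..500 ms
    epochs = np.stack([ds[:, e - pre:e + post] for e in ev
                       if e - pre >= 0 and e + post <= ds.shape[1]])
    # Stand-in for SPM's robust averaging: a 10% trimmed mean across epochs
    return trim_mean(epochs, 0.1, axis=0)
```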
DCM analysis

SPM8 was also used to perform DCM (David et al., 2006) on the data. DCM was used to examine the connections between neural regions involved in the proposed model of processing auditory feedback during vocalization. DCM in SPM was originally created to analyze effective connectivity of fMRI data, and this was subsequently extended to model ERPs (David et al., 2006; Kiebel et al., 2006). The DCM method uses neural mass models to describe neural activity and estimate effective connectivity within a specified network model. Source time courses are first generated by a neurobiologically realistic model of the network of interest. These are then projected onto the scalp using a spatial forward model (a Boundary Element Model in our case). The parameters of both the source model and the neural model are optimized using a variational Bayesian approach to match the observed EEG data as closely as possible. Data were modeled within a time-window of 1-200 ms following the pitch-shift stimulus, with an onset of 60 ms.

A Hanning window was applied to the data, and a detrend parameter of 1 was used with 8 modes. The evoked responses were modeled using the IMG (imaging) option, which models each source as a patch on the cortical surface. The data for each of the three subject groups were modeled separately. For each pitch-shift direction (up and down), conditions were modeled together, allowing particular connections in the model to vary to explain the difference between the two.

Model identification and selection

In order to test our hypotheses with DCM, we constructed models with six regions and 18 connections. While there are many different models that could have been examined, we chose our model structure based on the literature and our initial work to address questions regarding the role of the STG in the identification of self-voice error. Our model regions and network architecture for this experiment were motivated by results from previous fMRI and ERP-DCM studies of pitch-shifted vocalization (Loui et al., 2012; Parkinson et al., 2012, 2013). The peak MNI coordinates reported in the literature for vocalization and those modeled in our previous ERP-DCM study (Larson, 1998; Parkinson et al., 2012, 2013) were used as coordinates for source regions for the models examined here. Three regions were selected in both the left and right hemispheres. The regions were superior temporal gyrus (STG), inferior frontal gyrus (IFG), and premotor (PM) cortex. MNI coordinates of the regions are displayed in Table 1. The basic model selected for analysis included modulated connections from STG to PM, PM to STG, and STG to IFG in both hemispheres. Variations in modulations across hemispheres from STG to STG and from STG to other cortical regions (PM and IFG) were examined. We specified a bilateral driving input to STG as the starting point of the model, and nine different variations of the model (Figure 2) were examined.

In the present study we proposed to examine differences in connectivity across hemispheres between the left and right STG regions. The reasoning behind examining lateral STG connectivity was based on our previous study using DCM to model vocal responses to pitch-shifted voice feedback (Parkinson et al., 2012, 2013) and evidence of both structural and functional differences in STG regions in AP musicians (Schulze et al., 2009, 2012) (factor 1, Figure 2). The second characteristic we chose to model (factor 2) was the effect of connectivity between regions within a hemisphere. The reasoning behind this was again based on previous literature identifying differences in the left hemisphere superior temporal regions, both functionally and structurally, in individuals with AP (Schulze et al., 2009; Loui et al., 2011, 2012). The right hemisphere is also known to be involved in pitch processing (Divenyi and Robinson, 1989; Binder et al., 1997; Johnsrude et al., 2000; Zatorre and Belin, 2001). Based on this literature, we specified connections between STG to PM, PM to STG, and STG to IFG as being modulated by the experimental effect. We examined three families of models, in which bilateral, left-only, or right-only connections were modulated (Figure 2). Model comparison was performed for each of the three groups separately with a Bayesian model selection (BMS) family level inference procedure (Penny et al., 2010). Family level inference identifies the "best family of models", which is the one with the highest log-evidence over the other families across subjects.
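The family-level comparison can be illustrated with a toy computation. The sketch below uses the model-to-family assignment given in the Results (e.g., the LtoR family comprises models 1, 4, and 7) but, for brevity, scores families by summed log-evidence, which is a fixed-effects approximation rather than the random-effects scheme of Stephan et al. (2009) actually used; the log-evidences are synthetic.

```python
import numpy as np

# Synthetic log evidences per subject (rows) for the nine models (columns).
rng = np.random.default_rng(1)
logev = rng.normal(0, 1, size=(11, 9))

# Factor-1 partition from the Results: model indices are 0-based.
FAMILIES = {"LtoR": [0, 3, 6], "RtoL": [1, 4, 7], "both": [2, 5, 8]}

group = logev.sum(axis=0)                 # fixed-effects group log evidence
fam_log = {f: np.logaddexp.reduce(group[idx]) for f, idx in FAMILIES.items()}
z = np.array(list(fam_log.values()))
post = np.exp(z - z.max())
post /= post.sum()                        # normalized family probabilities
for f, p in zip(fam_log, post):
    print(f"{f}: posterior probability {p:.2f}")
```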
BEHAVIORAL AND ERP RESULTS
ERP and vocal responses to pitch-shifted stimuli are well established in the literature (Liu et al., 2011; Korzyukov et al., 2012). Further detailed analysis of vocal and ERP responses from this data set has already been published (Behroozmand et al., 2014). Figure 3 shows the scalp potential distributions of responses for all three groups to the up and down stimuli in the 1-200 ms post-stimulus-onset time frame. The spatial variation seen in this figure provides justification for including the separate nodes in the DCM analysis. Grand average responses for all three groups for the 100 cent shift-down condition are shown in Figure 4, where differences in both N1 and P2 responses can be seen across the groups. Responses from the left hemisphere C3 (Figure 4A) and right hemisphere C4 (Figure 4B) channels are displayed. The variation in responses between groups and across hemispheres again provides justification for examining the left and right hemispheres separately across the three groups.
Factor 1-effect of STG modulation across hemispheres
Although no significantly winning families were identified for any group, BMS of the three families examining factor 1 indicated that the AP group favored models with left to right STG connections (LtoR; models 1, 4, and 7, as displayed in Figure 2) being modulated (random-effects model exceedance probabilities: 0.71 LtoR, 0.12 RtoL, and 0.17 both). The RP and NM groups both favored the family with right to left STG connections (RtoL; models 2, 5, and 8, as displayed in Figure 2) modulated (Figure 5A) (RP group: 0.12 LtoR, 0.55 RtoL, and 0.32 both; NM group: 0.08 LtoR, 0.74 RtoL, and 0.18 both).
Factor 2-effect of bilateral, left, or right connections
The AP group clearly favored models with bilateral connections to other cortical regions being modulated (bilateral; models 1, 2, and 3, as displayed in Figure 2) (0.93 bilateral, 0.06 left, and 0.01 right). In comparison, the RP and NM groups did not significantly favor one family of models over another (Figure 5B) (RP group: 0.09 bilateral, 0.56 left, and 0.35 right; NM group: 0.42 bilateral, 0.21 left, and 0.37 right).
FIGURE 5 | BMS results for families of models examining (A) the effect of STG connections across hemispheres and (B) the effects of laterality of STG connections to other cortical regions for the three subject groups.
Influence on coupling
Coupling parameters were directly compared across all groups for all modulated connections of the bilateral family of models. Bayesian model averaging (BMA) was performed to identify coupling parameters for all connections within this family of models for every subject. Analysis of the coupling parameters derived from BMA showed a group-specific modulation of the connection between the left PM and left STG nodes (Figure 6), with a negative coupling between these nodes seen in the AP and RP groups and a positive coupling in the NM group (p < 0.05, two-sample t-test). No difference was seen in the coupling strength of the right hemisphere PM to STG connection.
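BMA yields one coupling estimate per subject and connection, so the group contrast reduces to a standard two-sample test. The sketch below uses made-up values standing in for the real per-subject BMA estimates for the left PM to STG connection; only the form of the test matches the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical per-subject BMA coupling estimates (11 subjects per group);
# negative musician values and positive control values mimic Figure 6.
ap = rng.normal(-0.4, 0.2, size=11)   # AP musicians
nm = rng.normal(0.3, 0.2, size=11)    # non-musician controls

t_stat, p_val = stats.ttest_ind(ap, nm)   # two-sample t-test
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")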
DISCUSSION
The present study examined the effective connectivity of the neural networks associated with processing voice auditory feedback in individuals with varying pitch processing and identification abilities. Musicians have enhanced pitch identification mechanisms, used for evaluating both vocal and instrument output, that result from continued practice. This enhanced pitch processing ability could be the result of stronger coupling between auditory-vocal motor networks for enhanced integration of feedback to update the predictive or feedforward internal model. The development of an internal representation of pitch in AP musicians may also be associated with their improved feedback-based monitoring and control of voice through more precise predictions of self-produced pitch provided by the efference copies of the motor commands. Online integration of auditory feedback to update the forward model must be essential for any musician and has likely been further enhanced and improved over years of practice and evaluation of performance. We have previously identified the STG as playing a key role in voice error detection and correction (Parkinson et al., 2012, 2013), and the STG has also been identified as a critical region in AP (Loui et al., 2012; Schulze et al., 2012; Dohn et al., 2013). Here we asked questions relating to lateral STG connectivity and to the connectivity of STG to PM and IFG in each hemisphere during pitch-shifted auditory feedback. Our findings indicated that modulation of STG connections to PM and IFG in both hemispheres is critical in the identification of self-voice pitch error in musicians with AP but not in the RP and NM groups. We also identified reduced connectivity of the left hemisphere PM to STG connection in the AP and RP groups, compared with a positive coupling in the NM group, during the error detection and correction process. When examining lateral STG connectivity we showed that individuals with AP favored models with modulations in left to right connectivity, whereas both the RP and NM groups favored models with modulation in right to left STG connectivity. Finally, we note that the cohort of musicians in this study included both instrumentalists and expert voice users, suggesting that the pitch-shift vocalization paradigm has applications to the study of auditory feedback across a variety of voice and non-voice domains. The main finding of the current study identified the importance of left hemisphere connections from PM to STG in musicians during auditory error detection and correction. A considerable amount of evidence identifies the role of the left hemisphere and superior temporal regions in AP. Specifically, hemispheric differences in both brain structure and function have been identified in individuals with AP compared to controls (Loui et al., 2011; Schulze et al., 2012).
Diffusion tensor imaging (DTI) studies have shown increases in white matter connectivity between the STG and middle temporal gyrus, especially in the left hemisphere, in AP individuals. The planum temporale has also been identified as a region showing increased volume in individuals with AP (Zatorre et al., 1998) and altered left-right asymmetry when comparing AP musicians with non-AP controls (Schlaug et al., 1995; Zatorre et al., 1998; Keenan et al., 2001; Dohn et al., 2013). Studies examining functional brain activations in AP musicians have also identified left hemisphere differences in AP, specifically inferior frontal (Zatorre et al., 1998) and superior temporal regions being increasingly activated during tone perception and pitch memory tasks (Schulze et al., 2009, 2012). It is likely that a predictive model of vocal output is created in the left hemisphere. Auditory feedback related to spectral (pitch) and temporal components of the voice is then compared with the predicted model. The motor output and forward model are then corrected and updated should any error signals arise between predicted and actual feedback. Musicians with enhanced abilities to accurately relate a note to a musical scale likely have an improved error detection and correction system. This would result in more precise internal models through years of practice and "fine-tuning" of the system, and therefore these individuals rely less upon integration of feedback from premotor regions in the left hemisphere to update and maintain a current representation in this model. One key observation relates to the nature of the difference in modulation of the left hemisphere PM to STG connection between the groups. Both musician groups (AP and RP) showed a negative coupling between these regions, compared to non-musician controls who showed a positive coupling, suggesting that this connection is inhibitory in both musician groups (Figure 6). Thus, the role of the left hemisphere in error detection/correction mechanisms may be functionally different in musicians than in non-musicians. The inhibitory connection seen here between left PM and STG regions in musicians suggests that STG activity is regulated by a frontal control system that assists in fine-tuning sensorimotor integration. We have previously shown that left to right STG connections are key in pitch error detection and correction (Parkinson et al., 2013), and here that this connection is carefully tuned by inhibition from PM. Furthermore, we also found evidence of bilateral connectivity of STG to both PM and IFG in AP only, suggesting a need for greater interhemispheric interplay in this subject group. The right hemisphere auditory areas have long been shown to be responsible for the processing of pitch. Examination of the specialization of the auditory cortex and STG for both spectral and temporal information has shown that damage to the right hemisphere STG affects a variety of pitch-related processing tasks (Zatorre, 1985; Divenyi and Robinson, 1989; Robin et al., 1990). Specifically, lesions to the right, but not the left, primary auditory cortical areas impaired processing of pitch change (Johnsrude et al., 2000). The role of the right hemisphere in voice control in individuals with enhanced pitch processing abilities is unclear, but it is likely linked to exquisite pitch discrimination and to providing feedback to update and correct predictive models.
Improved pitch error detection in the AP brain could reflect the development of stronger neural representations of pitch, facilitated by efference copies of the vocal motor system. Our findings here may suggest that individuals with AP are more adept at integrating feedback related to pitch from the right hemisphere. While it is clear that both the left and right hemispheres are involved in the vocal pitch error detection and correction processes identified here, different processing demands between individuals with varying pitch matching ability result in causal network coupling differences across groups. Another observation from our results relates to differences in lateral STG connectivity between groups. While not significant, it is clear that the groups favored different models. The AP group favored models with left to right lateral STG coupling, whereas the RP and NM groups favored models with right to left STG coupling. This provides further evidence that individuals with AP have enhanced pitch memory and representation of the fundamental features of the pitch, leading to a more accurate prediction, which facilitates their greater use of the left hemisphere in the corrective process. Because the AP brain is so highly analytic, there is less need for integration of information from the right hemisphere to update predictive models in the left hemisphere. Thus, the integration of feedback into the forward model might occur through lateral STG connectivity, updating information based on pitch feedback (from the right hemisphere) and temporal components (from the left hemisphere), with fine-tuning from an inhibitory left PM to STG connection. Existing studies of network connectivity in AP have used graph theory analysis to examine functional and structural network properties (Jäncke et al., 2012; Loui et al., 2012). Loui et al. (2012) identified increased functional activation, network clustering, and efficiency of connections in the left STG region in AP. The present study is the first to use DCM of event-related potentials (ERPs) in musicians to take advantage of the exceptional temporal resolution of electrophysiological signals for more precise modeling of temporal dynamics within a network of specified brain regions. Our findings support the notion of experienced musicians being highly skilled at monitoring auditory feedback in order to regulate vocal or instrument output during performance. Individuals with AP have an enhanced pitch mismatch detection system, which is sensitive to the very smallest changes in pitch. It could also be the case that individuals with AP are able to retain information relating to the pitch of a note in their long-term memory and therefore have a more accurate internal representation of the pitch used in the comparison of actual and predicted auditory feedback when identifying an unknown pitch. At the opposite end of the spectrum from individuals with enhanced pitch processing skills is the disorder of congenital amusia, in which affected individuals are unable to detect out-of-key tones and are unaware when others (or they themselves) sing out of tune. Behavioral investigation of the disorder has linked the impairment to a deficit in pitch processing (Foxton et al., 2004; Hyde and Peretz, 2004). DCM of IFG and auditory cortex during melody encoding revealed increased lateral auditory cortex connectivity and a reduction in coupling of the right hemisphere IFG to auditory cortex in amusics relative to control subjects (Albouy et al., 2013).
This finding of reduced right hemisphere coupling in individuals at the opposite end of the pitch perception skill spectrum provides further support for our hypothesis of increased involvement of the right hemisphere in pitch detection in AP, yet a reduced need for lateral connectivity to integrate information owing to a more precise initial model. Finally, we note limitations of the current study. We recognize that more optimal network models involving additional brain regions (e.g., supplementary and primary motor regions) may exist in regard to vocal error detection and correction mechanisms in musicians. We based the current models on a priori hypotheses and only tested connections specific to these. It may be the case that experienced musicians recruit additional or alternative brain regions that we did not test. Due to the limit on the number of regions that can be included in a DCM, it is not possible to perform a direct comparison of many regions across both hemispheres. Also, the analysis we performed examined a time-window of 1-200 ms post-onset of the pitch shift. This time window may not be the optimal time frame to reflect pitch processing. An additional analysis with an extended time window (1-400 ms) could be performed to examine the effect of later components. In using DCM of ERP data, we have shown reduced connectivity between left PM and STG regions in individuals with enhanced pitch processing abilities compared to non-musician controls. We also identified differing lateral STG connectivity and hemispheric involvement related to pitch matching ability. These results provide further support for the involvement of the STG in vocal pitch error detection and correction and also provide insight into the network and hemisphere differences in individuals with highly enhanced error discrimination abilities. That our subjects were not necessarily singers strongly suggests that the pitch-shift vocalization paradigm can be used to understand auditory-motor integration in general, rather than for vocalization or speech only.
Disruption of the c-JUN-JNK Complex by a Cell-permeable Peptide Containing the c-JUN δ Domain Induces Apoptosis and Affects a Distinct Set of Interleukin-1-induced Inflammatory Genes*
The transcription factor activator protein (AP)-1 plays crucial roles in proliferation, cell death, and the immune response. c-JUN is an important component of AP-1, but only very few c-JUN response genes have been identified to date. Activity of c-JUN is controlled by NH2-terminal phosphorylation (JNP) of its transactivation domain by a family of JUN NH2-terminal protein kinases (JNK). JNK form a stable complex with c-JUN in vitro and in vivo. We have targeted this interaction by means of a cell-permeable peptide containing the JNK-binding (δ) domain of human c-JUN. This peptide strongly and specifically induced apoptosis in HeLa tumor cells, which was paralleled by inhibition of serum-induced c-JUN phosphorylation and up-regulation of the cell cycle inhibitor p21cip/waf. Application of the c-JUN peptide to interleukin (IL)-1-stimulated human primary fibroblasts resulted in up-regulation of four genes, namely COX-2, MnSOD, IκBα, and MAIL, and down-regulation of 10 genes, namely CCL8, mPGES, SAA1, hIAP-1, hIAP-2, pent(r)axin-3, CXCL10, IL-1β, ICAM-1, and CCL2. Only a small group of genes, namely pent(r)axin-3, CXCL10, ICAM-1, and IL-1β, was inhibited by both the c-JUN peptide and the JNK inhibitor SP600125. Thereby, and by additional experiments using small interfering RNA to suppress endogenous c-JUN, we identify for the first time three distinct groups of inflammatory genes whose IL-1-induced expression depends on c-JUN, on JNK, or on both. These results shed further light on the complexity of c-JUN-JNK-mediated gene regulation and also highlight the potential use of dissecting signaling downstream from JNK to specifically target proliferative diseases or the inflammatory response.
The transcription factor activator protein 1 (AP-1) was one of the first mammalian transcription factors to be identified, but its physiological functions are still being unraveled. AP-1 is involved in cellular proliferation, transformation, survival, cell death, and the immune response (1, 2). AP-1 converts extracellular signals into changes in the expression of target genes, and AP-1-binding sites are found in a large number of genes. AP-1 is not a single protein, but a homo- or heterodimer composed of members of the JUN, FOS, ATF, and other protein families (1, 2). Because of the structural and regulatory complexity of AP-1, the knowledge of AP-1 target genes mediating AP-1 functions is, with the exception of cell cycle regulatory genes, far from complete (1, 2). c-JUN is one of the best characterized components of AP-1 (3). Genetic evidence suggests that c-JUN is essential for development and proliferation (1-3). Activity of c-JUN is controlled at multiple levels: first, by changes in gene transcription, mRNA turnover, and protein stability; second, by interaction with other transcription factors; and third, by phosphorylation of its NH2-terminal transactivation domain (1-6). The phosphorylation sites required for inducible c-JUN activation have been mapped to serines 63 and 73. A family of 10 highly homologous serine/threonine protein kinases, derived from three genes by alternative splicing, has been identified that specifically phosphorylates these residues in c-JUN and has therefore been named c-JUN NH2-terminal protein kinases (JNK) 1-3 (7, 8).
So far, no other protein kinases have been identified that phosphorylate the NH2 terminus of c-JUN (6, 7). Importantly, early work demonstrated that JNK not only phosphorylate c-JUN, but also bind to a region called the δ domain, which is located immediately NH2-terminal of the c-JUN transactivation domain (6-10). Binding of JNK to c-JUN is a remarkable example of a high affinity signaling complex (6-10). Further studies of the complex in intact cells showed that it does not require the JNK catalytic activity or the presence of the phospho-acceptor sites in c-JUN (11). Presumably, the c-JUN-JNK interaction serves two purposes: first, it provides the specificity of JNK for c-JUN; second, it helps to increase the local concentration of JNK at gene promoters that bind c-JUN, thereby enhancing c-JUN-mediated transcription (4). Like the c-JUN protein, JNK have been implicated in numerous biological roles in response to growth factors, stress, and inflammatory cytokines, implying that JNK may mediate their gene regulatory effects mainly through c-JUN (12-14). However, for several reasons this is unlikely. First, JNK phosphorylate other transcription factor substrates, such as ATF-2 (15) and ELK-1 (16); second, individual JNK isoforms display different affinities for c-JUN in vitro (8, 9); and third, the phenotypes of c-JUN and JNK knockout mice show no obvious overlap (13, 17, 18). Therefore, one of the most intriguing questions regarding the c-JUN-JNK interaction is which of the many biological processes ascribed to both proteins critically requires a c-JUN-JNK complex. We have addressed this question by designing a cell-permeable peptide containing the JNK-binding site of human c-JUN. We report here that this peptide specifically disrupts the c-JUN-JNK complex in vitro and in vivo. Thereby we identify a number of hitherto unrecognized genes whose expression depends on the c-JUN-JNK complex.
EXPERIMENTAL PROCEDURES
Cells and Materials-HeLa cells stably expressing the tet transactivator protein, kindly provided by H. Bujard, and primary human fibroblasts derived from gingiva were cultured in Dulbecco's modified Eagle's medium complemented with 10% fetal calf serum. [γ-32P]ATP was purchased from Hartmann Analytics. Rabbit antibodies against phospho(Ser63) c-JUN, c-JUN, phospho(Thr183/Tyr185) JNK, and JNK were from Cell Signaling Technology (kits 9250 and 9260). Other rabbit antibodies against c-JUN (H-79) or ERK (C-14) were from Santa Cruz. Horseradish peroxidase-coupled secondary antibodies against rabbit IgG were from Sigma. Glutathione-Sepharose was from Amersham Biosciences. Human recombinant IL-1α and GST-JUN were produced as described (19). SP600125 was from Tocris. SMARTpool small interfering (si)RNA oligonucleotides against c-JUN were from Dharmacon.
Plasmids and Transfections-Plasmids pFR-Luc encoding five GAL4-binding sites upstream of a luciferase gene, pFC-MEKK1 encoding the catalytic domain of MEKK1, pFC2-dbd encoding the DNA-binding domain of GAL4 (amino acids 1-147), and pFA2 encoding the transactivation domain of c-JUN (amino acids 1-223) fused to the DNA-binding domain of GAL4 were obtained from Stratagene. Transient transfections by the calcium phosphate method were performed as described (20). For determination of promoter activity, cells (seeded at 1 × 10^5 per well of 24-well plates) were transfected with 1 μg of pFR-Luc, 100 ng of pFA2-cJUN, 100 ng of pFC2-dbd, or 50 ng of pFC-MEKK1.
Equal amounts of plasmid DNA within each experiment were obtained by adding empty pCS3MT vector to a total amount of 2.2 μg of DNA per well. Where indicated, 100 μM c-JUN peptide was added directly after transfection. 16-20 h later, cells from two wells transfected independently were pooled and lysed, and luciferase reporter gene activity was determined as described (20). For transfection of siRNA oligonucleotides, OligofectAMINE™ (Invitrogen) or TransIT-TKO™ (Mirus) was used, according to the manufacturer's instructions.
Peptide Synthesis-Peptides (0.1 mmol) were synthesized as COOH-terminal amides on TentaGel R RAM resin in an Applied Biosystems Pioneer instrument, using standard Fmoc protocols (21). Briefly, Fmoc derivatives (0.4 mmol) were activated with N-[(dimethylamino)-1H-1,2,3-triazolo[4,5-b]pyridin-1-yl-methylene]-N-methylmethanaminium hexafluorophosphate N-oxide (0.4 mmol) in the presence of diisopropylethylamine (0.8 mmol). Fmoc group removal was with a mixture of 2% (v/v) piperidine and 2% (v/v) 1,8-diazabicyclo(5,4,0)undec-7-ene in dimethylformamide. To minimize aspartimide formation, Asp in the scrambled sequence was coupled as the Asp-Ser pseudo-proline (22). When the sequence was complete, the resin was washed with methanol and peroxide-free ether, and dried under nitrogen before the addition of dichloromethane (4 ml) and dimethylpropylene urea (0.5 ml). 5(6)-Carboxyfluorescein (0.5 mmol), 1-hydroxy-7-azabenzotriazole (0.5 mmol), and diisopropylcarbodiimide (0.6 mmol) were added, and the mixture was shaken gently for 16 h. A ninhydrin test (21) showed that the reaction was complete. Piperidine (0.1 ml) was added, and the resin was washed with dimethylformamide, methanol, and diethyl ether before drying for 16 h in vacuo over P2O5. Peptides were released from the resin by treatment for 3-4 h at room temperature with a mixture of trifluoroacetic acid (9 ml), thioanisole (0.5 ml), dithiothreitol (0.25 g), and triisopropylsilane (0.25 ml) containing 0.15 g of ammonium iodide to prevent oxidation of methionine (23). The spent resin was removed by filtration and washed with a little trifluoroacetic acid. The pooled filtrate was evaporated in vacuo to an oil, which was added dropwise to 45 ml of peroxide-free ether at 0°C. The precipitated peptide was recovered by centrifugation at 720 × g for 3 min and washed three times with 45 ml of ether by resuspension and centrifugation, before drying under a gentle stream of nitrogen gas. Crude peptides were purified by reverse-phase high performance liquid chromatography on a column (22 × 250 mm) of Vydac octadecyl-silica (15-20 μm particle size) using a linear gradient of acetonitrile in water containing 0.1% trifluoroacetic acid. Fractions containing homogeneous product were identified by analytical high performance liquid chromatography on a column (4.6 × 250 mm) of Vydac octadecyl-silica (5 μm diameter). These fractions were pooled, and the acetonitrile was removed by rotary evaporation in vacuo. The residue was diluted with 10% (v/v) acetic acid and freeze-dried. The identity of the purified peptides was confirmed by mass spectrometry: TAT-c-JUN peptide, expected mass 4729.6 Da, found 4730.4 ± 0.3 Da (S.D., 6); TAT-scrambled (scr.) peptide, expected mass 4729.6 Da, found 4729.8 ± 0.2 Da (S.D., 6). The sequence of the TAT peptide control is FlCO-YGRKKRRQRRR-4Abu-NH2 (Mr 2002.3), where FlCO is a fluoresceinyl group made with 5(6)-carboxyfluorescein, 4Abu is a residue of 4-aminobutyric acid, and NH2 is the COOH-terminal amide.
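As a worked check of the reported control-peptide mass, the average mass of the TAT transduction sequence can be summed from standard residue masses and corrected for the stated modifications. The residue and modification masses in the sketch below are textbook average values rather than numbers taken from the paper.

# Average residue masses in Da (only residues occurring in the TAT sequence).
RESIDUE = {"Y": 163.18, "G": 57.05, "R": 156.19, "K": 128.17, "Q": 128.13}
WATER = 18.02

def peptide_mass(seq):
    # Average mass of an unmodified peptide with free N and C termini.
    return sum(RESIDUE[aa] for aa in seq) + WATER

backbone = peptide_mass("YGRKKRRQRRR")   # ~1559.9 Da
abu = 85.11           # 4-aminobutyric acid as a residue (103.12 - H2O)
amide = -0.98         # COOH-terminal amidation (-OH, +NH2)
fluorescein = 358.30  # 5(6)-carboxyfluorescein coupled as an amide (376.32 - H2O)
print(round(backbone + abu + amide + fluorescein, 1))  # ~2002.3, matching Mr 2002.3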
Protein Kinase Assays-Cells were harvested, washed in phosphate-buffered saline, and incubated for 15 min in ice-cold swelling buffer (5 mM Tris, pH 7.4). Then cells were lysed in 20 mM HEPES, pH 7.4, 2.5 mM MgCl2, 0.1 mM EDTA, 0.05% Triton X-100, 20 mM β-glycerophosphate, 0.1 mM Na3VO4, 1 mM dithiothreitol, and 1 mM fresh phenylmethanesulfonyl fluoride (Sigma). Lysates were cleared by centrifugation at 10,000 × g for 15 min at 4°C. Cell extract proteins (0.5 mg) were incubated with 2.5 μg of GST-JUN previously immobilized on glutathione-Sepharose beads to adsorb JNK protein kinases. Where indicated, 100 μM c-JUN peptide was added to the binding reaction. After incubation for 30 min at 30°C, beads were pelleted, extensively washed in cell lysis buffer, and resuspended in 10 μl of the same buffer. Then 10 μl of H2O and 10 μl of kinase buffer (150 mM Tris, pH 7.4, 30 mM MgCl2, 60 μM ATP, 4 μCi of [γ-32P]ATP) were added. After 30 min at room temperature, SDS-PAGE sample buffer was added, and proteins were eluted from the beads by boiling for 5 min. After centrifugation at 10,000 × g for 5 min, supernatants were separated on 10% SDS-PAGE. Phosphorylated proteins were visualized by autoradiography.
Western Blotting-Cell extract proteins were separated on 10% SDS-PAGE, and Western blotting was performed as described (19, 20). Blots were stripped prior to reprobing with c-JUN, JNK, or ERK antibodies. Proteins were detected using the Amersham enhanced chemiluminescence system.
Enzyme-linked Immunosorbent Assay-IL-8 protein concentrations in the cell culture medium were determined using the human IL-8 DuoSet kit (R&D Systems), exactly following the manufacturer's instructions.
Fluorescence Microscopy-The subcellular distribution of fluorescein-labeled c-JUN and scrambled peptides was examined by phase-contrast and fluorescence microscopy at ×40 magnification using a Zeiss Axiovert 200M microscope. Cells were fixed for 15 min in 4% paraformaldehyde in phosphate-buffered saline. Nuclei of cells were stained with Hoechst 33342. Phase-contrast and fluorescence images of the same cells were collected in separate channels, and images were saved as TIF files and processed electronically using Micrografix Picture Publisher Software 8.0.
Determination of Cell Number and DNA Synthesis-Cells were seeded in six-well plates and counted after the indicated treatments in a Neubauer chamber. For determination of DNA synthesis rates, 10^4 cells were seeded in 96-well plates and incubated with 0.5 μCi/well [3H]thymidine (Hartmann Analytics) for the final 4 h of treatment. Radioactivity incorporated into cellular DNA was determined by liquid scintillation counting.
DNA Microarray Experiments-The microarray used in this study contains amino-modified oligonucleotides of 50 bp in length immobilized on panepoxy-coated glass slides (MWG Biotech). The oligonucleotide probes are complementary to several housekeeping genes and to 110 human genes that are strongly regulated during inflammation, which were selected by an extensive literature search using published resources. With a few exceptions, three specific oligonucleotide probes per gene were designed by identifying unique sequences in these genes with a computer-based algorithm developed at MWG Biotech. The specificity of the probes for their respective target genes was then verified in a large number of biological experiments using different cell lines and inflammatory stimuli. Fluorescent cRNA copies of the mRNAs of cells treated as indicated in the legend of Table I were prepared by reverse transcription of 5 μg of total RNA purified with a Qiagen RNeasy kit. RNA was treated with DNase and used for double-stranded cDNA synthesis followed by fluorophore-labeled cRNA synthesis. Specifically, the cDNA synthesis system from Roche and the MEGAscript T7 kit from Ambion were used as directed by the manufacturers, with 100 ng of double-stranded cDNA and 1.25 mM Cy3-UTP or 1.25 mM Cy5-UTP in each cRNA labeling reaction. Labeled cRNAs were hybridized individually to microarrays in pre-prepared hybridization solution (MWG Biotech) at 42°C overnight and then washed sequentially in 2× SSC, 0.1% SDS, 1× SSC, and 0.5× SSC. Hybridized arrays were scanned at maximal resolution on an Affymetrix 428 scanner at variable PMT voltage settings to obtain maximal signal intensities without probe saturation. Fluorescence intensity values from TIFF images of the Cy3 or Cy5 channels were integrated into one value per probe, normalized by the MAVI software (MWG Biotech), and further analyzed using Imagene 4.2 software (Biodiscovery). Additionally, ratio data from probes with signal intensities of less than 10% of the average signal intensity in one or both channels were excluded from the analyzed data sets.
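The filtering and ratio steps lend themselves to a compact tabular computation. The sketch below uses pandas with invented probe intensities; the real per-probe integration and normalization were performed by the MAVI software, for which simple mean-scaling stands in here as a placeholder.

import pandas as pd

# Toy two-channel probe intensities (arbitrary units, invented values).
df = pd.DataFrame({
    "probe": ["CCL2_1", "CCL2_2", "IL8_1", "GAPDH_1"],
    "cy3":   [120.0, 95.0, 400.0, 2200.0],     # e.g., untreated sample
    "cy5":   [1350.0, 1100.0, 820.0, 2300.0],  # e.g., IL-1-treated sample
})

# Placeholder normalization: scale each channel to a common mean intensity.
target = df[["cy3", "cy5"]].to_numpy().mean()
for ch in ("cy3", "cy5"):
    df[ch] *= target / df[ch].mean()

# Exclude probes below 10% of the average signal in one or both channels.
keep = (df["cy3"] >= 0.1 * df["cy3"].mean()) & (df["cy5"] >= 0.1 * df["cy5"].mean())
df.loc[keep, "ratio"] = df.loc[keep, "cy5"] / df.loc[keep, "cy3"]
print(df)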
RESULTS
A Cell-permeable c-JUN Peptide Inhibits the c-JUN-JNK Interaction in Vitro and in Vivo-The c-JUN transcription factor belongs to the basic region leucine zipper proteins (3, 4, 10). It contains COOH-terminal DNA-binding and dimerization domains. The first half of the protein harbors the transactivation domain. This region contains serines 63 and 73, which are inducibly phosphorylated by JNK, as well as the JNK-binding (δ) domain (Fig. 1A). We designed a synthetic peptide comprising the δ domain (amino acids 33-57) of human c-JUN fused to the protein transduction domain (amino acids 47-57) of the HIV-1 transactivator protein TAT (Fig. 1A) (24). Protein transduction is a process of unknown mechanism that allows TAT and other proteins to traverse biological membranes in a receptor- and transporter-independent fashion. Thus, fusion to the protein transduction domain can be used to deliver proteins and peptides to intact cells (24). Initial experiments were performed to reveal whether the c-JUN peptide had the potential to disrupt a c-JUN-JNK complex, using a two-stage assay. First, endogenous JNK are isolated from cells by virtue of their binding to an immobilized recombinant GST-JUN-(1-135) fusion protein (19). Second, after removal of unbound proteins, the presence of active JNK in the complex is detected by phosphorylation of GST-JUN-(1-135) in vitro (Fig. 1B). Addition of the c-JUN peptide to this assay efficiently disrupted the binding of activated, affinity-purified endogenous JNK in vitro, as detected by the disappearance of in vitro phosphorylation of GST-JUN-(1-135) (Fig. 1B). To evaluate whether the peptide also inhibits c-JUN-specific gene expression, we activated a fusion protein of the DNA-binding domain of the yeast transcription factor GAL4 and the c-JUN transactivation domain by co-expression of the protein kinase MEKK1, a strong upstream activator of endogenous JNK (12). This resulted in activation of a luciferase reporter gene driven by a promoter containing five GAL4-binding sites (Fig. 1C). The c-JUN peptide completely inhibited activation of the GAL4-c-JUN protein by MEKK1 (Fig. 1C).
To further verify these results, we analyzed the effect of the c-JUN peptide on basal and inducible phosphorylation of endogenous c-JUN. For this purpose we also designed a peptide with a scrambled sequence, as shown in Fig. 1A, as a control. As shown in Fig. 1D, treatment of synchronized HeLa cells with serum resulted in increased expression of c-JUN and the appearance of two phosphoforms of c-JUN, indicating the presence of at least two phosphorylation states. This is in agreement with the known multisite phosphorylation of c-JUN by JNK that, as outlined above, occurs primarily at serines 63 and 73, but also to a lesser extent at threonines 91, 93, and 95 (4, 6). The c-JUN peptide did not interfere with c-JUN expression. However, it almost completely suppressed the most slowly migrating, hyperphosphorylated form of c-JUN (Fig. 1D, upper panel). In contrast, the serum-induced phosphorylation of three different JNK isoforms was not inhibited by the c-JUN peptide, indicating that the peptide acts immediately downstream from JNK (Fig. 1D, lower panel). The scrambled peptide had no effect on serum-induced c-JUN phosphorylation, indicating the specificity of the effects observed for the c-JUN peptide. Taken together, the experiments shown in Fig. 1, B-D, indicate that the c-JUN peptide specifically prevented the interaction of JNK with c-JUN and the subsequent phosphorylation and activation of c-JUN, in vitro as well as in vivo.
The c-JUN Peptide Causes Apoptosis-The results presented in Fig. 1, B-D, suggested that the c-JUN peptide could be used to identify c-JUN-JNK target genes in cells. Additional experiments with fluorophore-labeled peptides indicated a very efficient transfer into HeLa cells (Fig. 2). More importantly, both peptides distributed evenly within the cells, suggesting that the peptides, like endogenous c-JUN and JNK, localized to the cytoplasm as well as to the nucleus (Fig. 2). The transduction of the c-JUN or the scrambled peptide was indistinguishable from that of a peptide containing the TAT sequence alone (Fig. 2). During these experiments, cells treated with the c-JUN peptide, but not those exposed to the scrambled peptide, showed increased cell death, which occurred at around 200 μM and increased at higher doses (Fig. 3A). HeLa cells exposed to the c-JUN peptide started to round up and detach after a few hours. After 24 h of treatment, at least 50% of the cells had died (Fig. 3B). This effect was specific for the c-JUN peptide, as it did not occur with the scrambled peptide, the TAT peptide, or in untreated cells. Staining of cells with Hoechst dye indicated fragmentation of nuclei resembling apoptosis (Fig. 3B). The pivotal role of c-JUN and AP-1 in apoptosis is well known, with evidence for both pro- and anti-apoptotic functions of c-JUN (1). Here, we observed increased cell death by targeting the activation of c-JUN by JNK, suggesting that in proliferating HeLa cells the c-JUN-JNK complex plays an anti-apoptotic role. Furthermore, we found that the number of living cells decreased on treatment with the c-JUN peptide (data not shown), suggesting that the c-JUN peptide negatively affected proliferation. Accordingly, cells treated with the c-JUN peptide, but not those treated with the scrambled peptide, showed a significant reduction in DNA synthesis (data not shown) and up-regulation of the cell cycle inhibitor p21cip/waf (Fig. 3C), indicating that the c-JUN peptide indeed affected proliferation and that apoptosis was very likely the indirect consequence of cell cycle arrest.
Identification of Interleukin-8 as an AP-1 Target Gene That Is Inhibited by the JNK Inhibitor SP600125, but Not by the c-JUN Peptide-Activation of JNK and AP-1 is not restricted to growth factors, but is also of central importance for many genes involved in the immune response, inflammation, or tissue remodeling (25). We therefore asked whether the c-JUN peptide also interfered with gene expression induced by IL-1, a major pro-inflammatory cytokine. To test this hypothesis, we investigated the IL-1-induced expression of IL-8, a major human chemoattractant protein that is activated by a plethora of external stimuli and whose promoter contains a consensus AP-1-binding site (26). We have previously studied the signal-dependent regulation of IL-8 in great detail and have shown, by expression of JNK antisense RNA or of JNK dominant-negative mutants, that IL-8 expression requires JNK activation (19, 20). In line with these results, treatment of HeLa cells with the novel JNK inhibitor SP600125 resulted in a 50% reduction of IL-1-inducible IL-8 secretion (Fig. 4A), reinforcing our earlier conclusion that the JNK pathway provides an important signal for maximal IL-8 secretion. In sharp contrast, by RT-PCR and enzyme-linked immunosorbent assay, we found that the c-JUN peptide did not inhibit IL-1-inducible IL-8 mRNA expression or protein secretion (Fig. 4, B and C).
Identification of a Novel Set of Distinct Inflammatory Genes That Are Up- or Down-regulated by the c-JUN Peptide-The experiments shown in Fig. 4 were surprising and suggested that signaling from JNK to AP-1 target genes of the inflammatory response diverges at the level of the c-JUN-JNK complex. To further strengthen this conclusion, we screened the expression of 110 genes with known relevance to inflammation, such as cytokines, chemokines, and matrix metalloproteinases (MMP) (25). This was achieved by a customized DNA microarray developed by our laboratory. On this microarray each oligonucleotide probe has been optimized thoroughly and evaluated for its ability to specifically detect its target gene. In more than 250 experiments, we found that primary cells show a broader spectrum and much stronger expression of genes in response to IL-1 compared with tumor cell lines such as HeLa. We therefore transduced the peptides into human primary fibroblasts derived from gingiva (HuGi) that, like HeLa cells, showed uptake of the c-JUN and scrambled peptides into cytosolic and nuclear compartments with a 100% transduction efficiency (Fig. 5). As judged by microarray analysis, 61 inflammatory genes were expressed in HuGi cells, 31 of which were induced by IL-1 by at least 2-fold and up to 100-fold, as shown in Table I, column 3. To identify the genes that were affected by the c-JUN peptide, cells were pretreated with c-JUN or scrambled peptides and then stimulated with IL-1. We used this comparison for further analysis to specifically identify the genes that were affected by the c-JUN peptide and to exclude potentially unspecific effects on inducible gene expression caused by treatment of fibroblasts with cell-permeable peptides. Ratios of gene expression obtained from cells stimulated with IL-1 + c-JUN peptide were divided by ratios of gene expression of cells treated with IL-1 + scrambled peptide.
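Because the read-out is a ratio of induction ratios, the classification used for Table I and Fig. 6 is plain arithmetic. The sketch below works through one illustrative gene; all numbers are invented, and the 125%/75% cut-offs are those stated in the legend of Fig. 6.

# Induction ratios relative to untreated cells (invented values).
il1_cjun = 1.9    # (IL-1 + c-JUN peptide) / untreated
il1_scr = 17.5    # (IL-1 + scrambled peptide) / untreated

effect = il1_cjun / il1_scr   # c-JUN-peptide-specific effect
if effect < 0.75:
    call = "down-regulated by the c-JUN peptide (>25% suppression)"
elif effect > 1.25:
    call = "up-regulated by the c-JUN peptide (>25% induction)"
else:
    call = "unchanged"
print(f"effect = {effect:.2f} -> {call}")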
This analysis revealed a number of genes that were either down- or up-regulated by the c-JUN peptide, but not by the scrambled peptide, and whose expression is, therefore, specifically affected by disrupting the c-JUN-JNK interaction (Table I, columns 4 and 5). As summarized in Fig. 6, of the IL-1-inducible genes, four were up-regulated by more than 25% by the c-JUN peptide, namely COX-2, MnSOD, IκBα, and MAIL. Ten genes were down-regulated by the c-JUN peptide by more than 25%, including CCL8, mPGES, SAA1, hIAP-2, pent(r)axin-3, hIAP-1, CXCL10, ICAM-1, IL-1β, and CCL2. CCL2, which is also called MCP-1, was the most strongly affected gene (inhibition of about 90%). This result, as well as the lack of inhibition of IL-8, was confirmed by RT-PCR (Fig. 7). To our knowledge, none of these genes has been shown previously to depend on a c-JUN-JNK complex. An immediate question that arose from these results was how far this set of genes would overlap with JNK-dependent genes. For this purpose, we treated HuGi cells with 20 μM SP600125, a concentration that, in agreement with other studies (27, 28), was effective at blocking IL-1-induced phosphorylation of endogenous c-JUN (data not shown). As shown in Table I, column 6, a significant number of genes was affected by SP600125. Of the 31 IL-1-induced genes, we observed down-regulation of 14 and up-regulation of 5 by more than 25% (Fig. 6). Our results confirm the reported suppression of MMP-1, MMP-3, and COX-2 by SP600125 (27, 28) and also identify novel genes inhibited by SP600125, such as MnSOD, PAI-2, and others (Fig. 6). SP600125 also caused inhibition of IL-8 (CXCL8) expression in HuGi cells, whereas the c-JUN peptide had no effect (Fig. 6), confirming the observations made in HeLa cells (Fig. 4). In addition, we also identified genes that are up-regulated, such as c-JUN, indicating that in HuGi cells JNK negatively affects c-JUN expression (Fig. 6). A very interesting result emerging from these experiments is that only a small group of genes is inhibited by both the c-JUN peptide and SP600125 (Fig. 6). These genes are pent(r)axin-3, CXCL10, ICAM-1, and IL-1β. Taken together, we have identified several novel target genes of c-JUN or JNK by means of the JNK inhibitor SP600125 or the c-JUN peptide containing the JNK-binding δ domain. The biological role of c-JUN suggested by the results of these experiments was further confirmed by suppression of endogenous c-JUN protein with double-stranded siRNA molecules (Fig. 8). Significant reduction of c-JUN protein in HeLa cells (Fig. 8B) caused apoptosis (Fig. 8A). These effects were specific, as they did not occur with transfection reagent alone (Fig. 8, A and B) or with an irrelevant siRNA directed against luciferase (data not shown). Furthermore, application of c-JUN siRNA to human gingival fibroblasts inhibited IL-1-induced CCL2 expression by 50% but did not affect IL-8 expression (Fig. 8C). Compared with the c-JUN peptide (Table I and Fig. 6), siRNA against c-JUN was somewhat less effective in CCL2 suppression (Fig. 8C). This most likely results from the difference between the transduction efficiency of the cell-permeable peptide, which is 100% (Figs. 2 and 5), and the transfection efficiency of siRNA, which in HuGi cells was about 50-80% (data not shown).
TABLE I. Comparison of the effects of the c-JUN peptide and the JNK inhibitor SP600125 on IL-1-inducible inflammatory gene expression. Human primary fibroblasts were treated with IL-1 (10 ng/ml) for 4 h, or left untreated (0). In parallel, cells were treated with 400 μM c-JUN peptide or 400 μM scrambled peptide for 1 h, or with 20 μM SP600125 for 30 min, prior to stimulation with IL-1 for 4 h. Thereafter, total RNA was isolated from all samples and used to prepare double-stranded cDNA followed by cRNA synthesis. cRNA was labeled with Cy3 and hybridized independently to DNA microarrays containing amino-modified oligonucleotide probes representing 110 genes relevant to inflammation as well as a number of housekeeping genes. Each gene was detected by three independent oligonucleotide probes. Fluorescence intensities of bound cRNAs were recorded, normalized, and used to identify 61 inflammatory genes that were significantly expressed. Alterations imposed by the different treatments on the IL-1-inducible or basal expression of these genes were determined as ratios of relative gene expression compared with unstimulated cells. For each comparison shown in columns 3-6, the mean ratio ± S.E. of normalized dye fluorescence intensities from at least two independent experiments was calculated. Genes are ordered according to their relative induction by IL-1, as shown in column 3. In columns 1 and 2, GenBank™ accession numbers and gene names, respectively, are provided for identification.
Accordingly, there was still some c-JUN protein detectable (Fig. 8D) that is likely to account for the residual IL-1-induced CCL2 expression. In conclusion, based on the compelling evidence provided here, we identify for the first time three distinct groups of inflammatory genes whose IL-1-induced expression depends on c-JUN, on JNK, or on both proteins (Fig. 6).
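The three groups follow from simple set operations on the two hit lists. In the sketch below, gene spellings are simplified and the SP600125 list is partial (only genes named in the text), so this is illustrative rather than a complete reanalysis; the intersection reproduces the four genes named above.

cjun_down = {"CCL8", "mPGES", "SAA1", "hIAP-1", "hIAP-2",
             "pentraxin-3", "CXCL10", "IL-1beta", "ICAM-1", "CCL2"}
sp600125_down = {"MMP-1", "MMP-3", "COX-2", "IL-8",
                 "pentraxin-3", "CXCL10", "ICAM-1", "IL-1beta"}  # partial list

both = cjun_down & sp600125_down        # classical JNK -> c-JUN targets
cjun_only = cjun_down - sp600125_down   # c-JUN-dependent, SP600125-insensitive
jnk_only = sp600125_down - cjun_down    # JNK-dependent, c-JUN-independent
print(sorted(both))   # pentraxin-3, CXCL10, ICAM-1, IL-1beta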
DISCUSSION
The transcription factor AP-1 and its component c-JUN are of central importance for enabling cells to respond to environmental changes. Recently, the use of genetically altered mice and cells derived from them has unraveled crucial functions of AP-1 as a regulator of cell life and death (1, 2). However, very little information is available from these model systems on the role of AP-1 in inflammation and infection, despite the fact that AP-1-binding sites are found in many genes activated during innate and adaptive immune reactions (1, 2). Furthermore, because of its complex regulation, the importance of AP-1 for human disease is still unclear (29). It is, therefore, desirable to develop pharmacological inhibitors to dissect the signaling pathways leading to activation of AP-1 and to conclusively identify AP-1 target genes (1, 29). In addressing this question, we have focused on the long-known specific interaction of c-JUN with its activating kinase JNK (6). Here, we report that a cell-permeable peptide containing the minimal JNK-binding domain of human c-JUN efficiently and rapidly enters cells and affects c-JUN-specific gene expression. When applied to spontaneously growing HeLa cells, the peptide, like suppression of endogenous c-JUN protein by siRNA, caused apoptosis (Figs. 3, A and B, and 8). The pivotal role of c-JUN and AP-1 in apoptosis is well known, but the underlying mechanism is controversial, with evidence for both pro- and anti-apoptotic functions of c-JUN, depending on the cellular context (1, 2). Here, we observed increased cell death by targeting the activation of c-JUN by JNK, suggesting that in proliferating HeLa cells the c-JUN-JNK complex plays an anti-apoptotic role. Two models are currently used to explain the role of c-JUN and the AP-1 complex in apoptosis. Either AP-1 is required for expression of pro- or anti-apoptotic regulators of apoptosis, or AP-1 functions as a homeostatic regulator that keeps cells in a certain proliferative state in response to growth factors. Inhibition of AP-1 then results in cell cycle arrest and subsequent removal by apoptosis of cells unable to re-enter the cell cycle (1, 2). The data presented in Fig. 3 strongly suggest that the latter scenario is evoked by the c-JUN peptide, which by disrupting the c-JUN-JNK interaction might prevent serum-induced c-JUN phosphorylation during the cell cycle. Interestingly, our data also suggest that c-JUN NH2-terminal phosphorylation (JNP), which is inhibited by the peptide, rather than the c-JUN expression level, which is not affected by the peptide, is more important for this effect to occur (Fig. 1D). This assumption is strongly supported by observations showing that JNP increases during the G2-M transition in HeLa cells, whereas c-JUN levels remain stable (30). Also, fibroblasts from mice expressing a non-phosphorylatable c-JUN S63A/S73A mutant have a defect in proliferation (31). Unlike c-JUN-/- animals, c-JUN S63A/S73A transgenic mice are viable, further indicating that JNP mediates only a part of the spectrum of c-JUN-dependent functions (31). Additional evidence for a selective role of JNP in cell proliferation and apoptosis is provided by experiments using antisense oligonucleotides to inhibit JNK, which cause the same phenotype as observed with the c-JUN peptide, namely inhibition of tumor cell growth and up-regulation of the cell cycle inhibitor p21, followed by apoptosis (32-34). Thus, the cell-permeable c-JUN peptide described here is a novel molecular tool that can be used to acutely and specifically inhibit JNP. It should provide an important means to address the role of c-JUN in cell cycle control in those situations where cells cannot be genetically manipulated, or may even be applicable as an efficient means of treating human diseases, such as tumors, that require c-JUN-JNK for proliferation and survival. Based on the compelling evidence for a positive regulatory role of c-JUN in cell proliferation, the strong effects of the c-JUN peptide described in the first part of our study might be expected. We therefore did not attempt to identify other c-JUN target genes, in addition to p21, involved in cell cycle control or apoptosis. Rather, we extended our investigations of the biological effects of this peptide to other potential AP-1 target genes, namely those of the inflammatory response. We have previously demonstrated in a number of studies that JNK are crucial for the expression of IL-8 and IL-6 in epithelial cells, such as HeLa, KB, or HEK293 (19, 20). In addition, studies using JNK1- or JNK2-deficient animals, or a novel JNK inhibitor, SP600125 (27, 28), have clearly established an important role for the JNK pathway in regulating a wide spectrum of other inflammatory genes, such as MMP-1 (collagenase-1), MMP-3, MMP-13, COX-2, IL-2, interferon-γ, or tumor necrosis factor-α (27, 28).
FIG. 6. Divergent effects of the c-JUN peptide or the JNK inhibitor SP600125 on IL-1-inducible gene expression of human primary fibroblasts. A, genes that were up-regulated at least 2-fold by IL-1, as shown in Table I, were selected. Specific effects of the c-JUN peptide on the IL-1-inducible expression of these genes were identified by comparing the ratios of IL-1 + c-JUN peptide (Table I, column 4) with those of IL-1 + scrambled peptide (Table I, column 5). For SP600125, ratios of IL-1 (Table I, column 3) were compared with IL-1 + SP600125 (Table I, column 6). Values above 125% (black bars) or below 75% (hatched bars) indicate relative induction or suppression by more than 25% caused by the c-JUN peptide or SP600125, respectively. Genes whose IL-1-inducible expression is not changed are shown in gray. Genes are ordered according to the effects of the c-JUN peptide. B, summary of the three groups of genes that are either up- (+) or down-regulated (-) by the c-JUN peptide, by SP600125, or by both.
FIG. 7. The c-JUN peptide inhibits IL-1-induced mRNA expression of CCL2 (MCP-1), but not of IL-8, in human primary fibroblasts. Human gingival fibroblasts were treated as described in the legend of Table I, and CCL2, IL-8, and tubulin mRNA expression was analyzed by RT-PCR.
Like collagenase, which was one of the first AP-1 response genes identified (35), these genes contain AP-1-binding sites. From this it may be deduced that JNK might generally regulate transcription of inflammatory genes by phosphorylating c-JUN, in keeping with the paradigm of the JNK-c-JUN-AP-1 signaling pathway (1-4). However, the results of our study suggest that this is clearly an oversimplification. Like collagenase, the IL-8 promoter contains a typical AP-1-binding site that is required for JNK-mediated IL-8 transcription (20, 26). However, in contrast to the JNK inhibitor SP600125, or to expression of antisense RNA against JNK (19), the c-JUN peptide inhibited neither IL-8 mRNA expression nor protein secretion (Fig. 4), at concentrations at which it inhibited serum-induced c-JUN phosphorylation (Fig. 1D) and caused apoptosis (Fig. 3). IL-8 expression was also not affected by suppression of endogenous c-JUN protein by siRNA (Fig. 8, C and D). These results prompted us to investigate many more genes with relevance to inflammation in terms of their sensitivity to SP600125 or the c-JUN peptide. Thereby we identify for the first time a distinct group of genes whose IL-1-induced expression is either specifically enhanced or inhibited by the c-JUN peptide (Table I, Fig. 6). Because we demonstrated in initial experiments that the c-JUN peptide inhibits c-JUN phosphorylation in vitro as well as in vivo (Fig. 1), we conclude that this group of genes requires JNP for activation and is sensitive to disruption of the c-JUN-JNK complex by the c-JUN peptide. Very interestingly, the JNK inhibitor SP600125 affects a significantly larger set of genes, suggesting that JNK may regulate these genes through substrates other than c-JUN (Fig. 6). Only four genes are suppressed by both the c-JUN peptide and the JNK inhibitor, suggesting that they are activated by the "classical" JNK-c-JUN signaling cascade (Fig. 6). Interestingly, we also identified genes whose expression was suppressed by the c-JUN peptide, but not by the JNK inhibitor (Table I, Fig. 6). The most drastic example of this group was CCL2, also called MCP-1, a chemokine whose IL-1-induced expression was impaired by 90% by the c-JUN peptide (Table I, Figs. 6 and 7), or by siRNA directed against endogenous c-JUN protein (Fig. 8, C and D), but was unaffected by SP600125.
One explanation for this discrepancy might be that c-JUN bound to the CCL2 promoter is phosphorylated by a particular JNK isoform that is not efficiently inhibited by SP600125. It is not known whether SP600125 inhibits all 10 JNK isoforms in intact cells, but it is a less potent inhibitor of JNK3 in vitro (27, 28). The selectivity of IL-1 response genes for the c-JUN peptide might also be explained by a number of other considerations. To induce apoptosis and to inhibit phosphorylation of c-JUN in intact cells, we had to use concentrations of the cell-permeable peptides of around 400 μM. These doses are similar to the concentrations that were required in another study to inhibit the NEMO-IKK interaction (36). All JNK isoforms phosphorylate c-JUN, but they vary in their affinity for c-JUN (8, 9), and it is possible that we affected only the less stable c-JUN-JNK complexes at the concentrations of c-JUN peptide used in this study. It is also possible that c-JUN-JNK complexes with different sensitivities to the c-JUN peptide might result from interactions of the c-JUN-JNK complex with proteins that weaken or stabilize the c-JUN-JNK interaction, such as the recently discovered novel interaction partners of c-JUN, namely the RNA helicase RHII/GU (37), or of JNK, namely Sab (SH3BP5) (38). The JNK signaling cascade is organized into modules formed by a complex of JNK, MKK4 or MKK7, and one of the many mitogen-activated protein kinase kinase kinases that activate JNK, such as MEKK1. Accessibility of the c-JUN peptide to JNK within these multiprotein complexes may vary according to the nature of the docking domains and scaffold proteins that tether the JNK signaling module (39, 40). Finally, hitherto undetected c-JUN protein kinases that do not bind to the δ domain may contribute to c-JUN activation. In this case the c-JUN peptide would be unable to interfere with c-JUN phosphorylation. The existence of additional JUN kinases has been suggested by several groups (30, 31, 41). Alone or in combination, these mechanisms may render genes more or less susceptible to the c-JUN peptide. Many of the genes affected or not affected by the c-JUN peptide contain known AP-1 sites. An important conclusion from our results, therefore, is that the sole presence of AP-1 sites is insufficient to predict whether a gene is a target of the c-JUN-JNK signaling complex. Further experiments are required to identify the underlying mechanisms that result in the sensitivity of the different genes identified here to either the c-JUN peptide or the JNK inhibitor. These results may then provide a basis for understanding the complexity of transcriptional regulation of inflammatory genes via AP-1. In summary, we have identified several novel target genes of c-JUN or JNK by means of the c-JUN peptide containing the JNK-binding domain, the JNK inhibitor SP600125, and siRNA directed against c-JUN. With regard to inflammation, these genes fall into three distinct groups whose IL-1-induced expression depends on c-JUN, on JNK, or on both proteins (Fig. 6B). Collectively, our results show that, at the level of the c-JUN-JNK complex, signals from growth factors or inflammatory cytokines can be specifically disrupted by a cell-permeable peptide to block proliferation and survival, or inflammatory gene expression, respectively (Fig. 9).
The results shed further light on the complexity of c-JUN-JNK-mediated gene regulation and also highlight the potential use of dissecting signaling downstream from JNK to specifically target proliferative diseases or the inflammatory response.
2018-04-03T03:29:18.380Z
2003-10-10T00:00:00.000
{ "year": 2003, "sha1": "e49c2fcb67a72bfe2d70f0f61dd6a46abfd8d693", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/278/41/40213.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "abc80c5b08672b1785030bfbf85c60c28b138213", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
253295773
pes2o/s2orc
v3-fos-license
Detailed analysis of mortality rates in the female progeny of 1,001 Holstein bulls allows the discovery of new dominant genetic defects F. Besnard,1,2* H. Leclerc,2,3 M. Boussaha,2 C. Grohs,2 N. Jewell,4 A. Pinton,5 H. Barasc,5 J. Jourdain,2,3 M. Femenia,2 L. Dorso,6 B. Strugnell,7 T. Floyd,8 C. Danchin,1 R. Guatteo,6 D. Cassart,9 X. Hubin,10 S. Mattalia,1,2 D. Boichard,2 and A. Capitan2,3* Reducing juvenile mortality in cattle is important for both economic and animal welfare reasons. Previous studies have revealed a large variability in mortality rates between breeds and sire progeny groups, with some extreme cases due to dominant mutations causing various syndromes among the descendants of mosaic bulls. The purpose of this study was to monitor sire-family calf mortality within the French and Walloon Holstein populations, and to use this information to detect genetic defects that might have been overlooked for lack of specific symptoms. In a population of heifers born from 1,001 bulls between 2017 and 2020, the average sire-family mortality rates were 11.8% from birth to 1 year of age and 4.2, 2.9, 3.1, and 3.2% for the perinatal, postnatal, preweaning, and postweaning subperiods, respectively. After outlining the 5 worst bulls per category, we paid particular attention to the bulls Mo and Pa, because they were half-brothers. Using a battery of approaches, including necropsies, karyotyping, genetic mapping, and whole-genome sequencing, we described 2 new independent genetic defects in their progeny and their molecular etiology. Mo was found to carry a de novo reciprocal translocation between chromosomes BTA26 and BTA29, leading to increased embryonic and juvenile mortality because of aneuploidy. Clinical examination of 2 calves that were monosomic for a large proportion of BTA29, including an orthologous segment deleted in human Jacobsen syndrome, revealed symptoms shared between species. In contrast, Pa was found to be mosaic for a dominant de novo nonsense mutation of GATA binding protein 6 (GATA6), causing severe cardiac malformations. In conclusion, our results highlight the power of monitoring juvenile mortality to identify dominant genetic defects due to de novo mutation events. INTRODUCTION Juvenile mortality has a severe impact on the cattle industry, because calves are the major output in beef production and are necessary for replacement in dairy production. Beyond economics, juvenile mortality affects the environmental impact of cattle breeding and raises serious animal welfare concerns (Østerås et al., 2007; Uetake, 2013; Knapp et al., 2014). For all these reasons, juvenile mortality is an increasingly important field of research. Several cross-sectional studies have been carried out worldwide, generally focusing on distinct periods to accurately monitor juvenile mortality (Reiten et al., 2018; Hyde et al., 2020; Dachrodt et al., 2021). Causes of death change during the course of the first year of life, the main ones being calving conditions or problems of fetal maturity, insufficient colostrum intake, digestive troubles, and respiratory diseases for the perinatal, postnatal, preweaning, and postweaning periods, respectively. Studies conducted in Holstein cattle have found mortality rates of 6.8 to 7.3% in France and 8.8% in the 
US (Johanson et al., 2011; Raboisson et al., 2013) from birth to the second day of life, and of 12.9% from d 3 to 365 in the Chinese population (Zhang et al., 2019). In Norway, aggregating data from various dairy breeds, the mortality rate was only 7.8% over the whole first year of life in 2005 (Gulliksen et al., 2009), suggesting an influence of genetic components and farming systems on calves' survival. This assumption has been further supported by a comprehensive analysis of juvenile mortality in 19 French cattle breeds, which highlighted the breed purpose (beef or dairy), breed, sex, and sire progeny group as the main factors influencing juvenile mortality across periods (e.g., Leclerc et al., 2016). Heritability estimates for juvenile mortality are lower than 10% (Fuerst-Waltl and Sørensen, 2010; van Pelt et al., 2012), reflecting the multiplicity of factors involved and the predominant effect of the environment. However, as frequently observed for low-heritability traits, the genetic variability is large, with a genetic standard deviation of around 5%, corresponding to a very large genetic coefficient of variation of 50%. The corresponding genetic variability is difficult to characterize biologically because the largest part of the mortality results from common infections with undetermined, and probably underestimated, genetic components. Nevertheless, some genetic factors have been identified in the past, such as bovine leukocyte adhesion deficiency (Shuster et al., 1992), but most of them are recessive and explain a very small proportion of the mortality. In some rare situations, their inheritance is dominant and can explain large mortality rates in the progeny of carrier sires. These dominant conditions can be transmitted by carrier animals either because of incomplete penetrance or because of mosaicism for de novo mutations (Bourneuf et al., 2017). One possible way to search for dominant deleterious mutations is to analyze mortality rates in the progeny of artificial insemination sires used in multiple farms, assuming that extreme values hide congenital anomalies. In this context, the purpose of this study was twofold: to finely monitor calf mortality at the level of sire families within the French and Walloon Holstein populations, and to use this information to detect genetic defects that might have been overlooked for lack of specific externally visible symptoms. Mortality Rates in Sire Families at Different Ages Data on the pedigree, sex, date of birth, date of death, and cause of death (natural death or slaughter) of Holstein animals were recovered from the French and Walloon bovine databases. The data set included calves born from 2017 to 2020. To focus on the most reliable data, only female calves that remained on their farm of birth until death or throughout their whole first year of life were selected. Sire families with fewer than 100 female progeny were disregarded. Accordingly, the final data set comprised 2.25 million daughters from 1,001 sires (with a mean of 2,246 and a maximum of 35,375 females per sire family). Natural mortality rates were computed during the first year of life and for 4 subperiods known to correspond to distinct predominant causes of death: perinatal (d 0−2), postnatal (d 3−14), preweaning (d 15−55), and postweaning (d 56−365) mortality (Santman-Berends et al., 2019; Dachrodt et al., 2021). Mortality rates were calculated as the number of calves that died of natural causes during a window of time divided by the number of calves alive at the start date. 
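To make the rate definition concrete, here is a minimal R sketch of the per-family computation; the data frame and its column names (sire_id, age_at_death, natural_death) are hypothetical stand-ins for the database extract, not the authors' actual pipeline.

```r
library(dplyr)

# Hypothetical input: one row per heifer, with her sire's ID, age at death in
# days (NA if still alive at 365 d), and a flag for natural death vs. slaughter.
period_mortality <- function(calves, start_d, end_d) {
  calves %>%
    # denominator: heifers still alive at the start of the window
    filter(is.na(age_at_death) | age_at_death >= start_d) %>%
    group_by(sire_id) %>%
    summarise(
      n_at_risk = n(),
      n_dead    = sum(!is.na(age_at_death) & age_at_death <= end_d &
                        natural_death),
      rate_pct  = 100 * n_dead / n_at_risk,
      .groups   = "drop"
    ) %>%
    filter(n_at_risk >= 100)  # families with < 100 female progeny disregarded
}

# Example calls: perinatal (d 0-2) and whole first-year (d 0-365) rates
# period_mortality(calves, 0, 2); period_mortality(calves, 0, 365)
```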
Then, we paid particular attention to the 5 bulls showing the highest mortality rates for each period, to identify sires potentially transmitting unreported dominant genetic defects to their progeny. Among them, 2 sires (Mo and Pa) were selected for subsequent analyses because they were half-brothers and potentially transmitted a common genetic defect. Clinical Examination Two affected calves of Mo (both females) and 8 of Pa (4 females, 4 males) were necropsied by trained veterinarians in France, Belgium, and the UK. By "affected calves" we mean animals that were reported by breeders as suffering from unexplained weakness, diminished growth rates, and often spontaneous death despite intensive care. A gross phenotypic description was also available for 5 additional clinically affected calves of Mo (see Supplemental Notes S1 and S2 for information on the age and symptoms of all calves examined; https://figshare.com/projects/Besnard_JDS_Supplementary_material/140747; Besnard, 2022). At the time of the study, biological material was still available for 2 affected calves of Mo and 8 of Pa. Karyotyping Giemsa-stained karyotypes of sire Mo and of 2 affected daughters of Pa were obtained from blood lymphocytes as described in Ducos et al. (1998). Of note, Pa was dead at the time of the study and thus not available for sampling. Analysis of Semen Quality and Fertility Because chromosomal rearrangements can negatively affect spermatogenesis, 5 different traits were analyzed for a cohort of 50 bulls, including Mo, that had their semen collected on a routine basis in the same artificial insemination center: the mean volume of the ejaculate (in mL, measured by weighing), its concentration (in millions of spermatozoa per milliliter, measured by spectrophotometry), fresh mass motility and individual motility (as a score and a percentage, respectively, based on microscope observation), as well as post-freezing mean motility and progressive motility, measured with computer-assisted sperm analysis and IVOS II (O'Meara et al., 2022). The number of records per bull and trait ranged from 2 to 39. In addition, 2 fertility traits were calculated for the initial cohort of 1,001 sires mentioned previously. The nonreturn rate at 56 d corresponds to the percentage of cows inseminated with the semen of a given sire that were not reinseminated within the following 56 d, and the conception rate corresponds to the percentage of inseminations that led to the birth of a calf. Analysis of Illumina SNP Array Genotypes The bull Mo, 15 of his progeny and 1 of their dams, as well as Pa, 203 of his progeny and 89 of their dams, and finally Mogul (the sire of both bulls) were genotyped with various Illumina arrays over time (Bovine SNP50, EuroG10K, and EuroGMD). Genotypes were phased and imputed to the Bovine SNP50 using FImpute3 (Sargolzaei et al., 2014) in the framework of the French genomic evaluation, as described in Mesbah-Uddin et al. (2019). Following the detection by karyotyping of a chromosomal rearrangement in Mo, we analyzed, along chromosomes BTA26 and BTA29, which of the paternal or maternal phases of this bull were transmitted to offspring, to detect recombination events. In parallel, we also mined the raw genotypes of affected daughters for increased rates of Mendelian transmission errors (a sign of monosomy) or increased rates of markers with null genotypes ("−/−"; a sign of trisomy). 
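The aneuploidy screen just described can be illustrated with a short R sketch; the 0/1/2 allele-count encoding and the object names are assumptions made for illustration, not the authors' code.

```r
# Genotypes coded as B-allele counts (0, 1, 2); NA stands for a null ("-/-")
# call. Opposite-homozygote conflicts between sire and daughter serve as a
# simple proxy for Mendelian transmission errors.
mendel_error <- function(sire, calf) {
  (sire == 0 & calf == 2) | (sire == 2 & calf == 0)
}

screen_chromosome <- function(sire_geno, calf_geno) {
  ok <- !is.na(sire_geno) & !is.na(calf_geno)
  c(
    mendel_error_rate = mean(mendel_error(sire_geno[ok], calf_geno[ok])),
    null_rate         = mean(is.na(calf_geno))
  )
}

# An excess of Mendelian errors along a chromosome suggests monosomy (the
# paternal segment is missing), while an excess of null calls suggests
# trisomy (three alleles confound the two-allele cluster calling).
```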
For Pa, no chromosomal rearrangement was identified, and other investigations were carried out. Assuming a dominant inheritance with somatic mosaicism, we performed transmission disequilibrium tests for 16,487 informative markers, for 14 progeny that died during the preweaning period (including 5 already necropsied at that time) and 189 half-sib controls still alive at 2 years of age. The proportions of each of the paternal alleles transmitted to the case and control groups were compared using a Fisher exact test with Bonferroni correction. Finally, ggplot2 and RColorBrewer were used for data visualization with the R software (R version 4.1.2). After the discovery of the causative mutation in the GATA6 gene (see the Results section), we used allele transmission proportions for 2 flanking informative markers within the control population to estimate the proportion of mosaicism in Pa's germ cells. Given the deleterious consequences of the GATA6 mutation on heart development, we expect that control calves carrying the at-risk haplotype inherited the ancestral version of this haplotype (i.e., predating the mutation event). The proportion of affected gametes was calculated as (nHb − nHa)/(2 × nHb), with nHa the number of carriers of the at-risk haplotype among half-sib controls and nHb the number of carriers of the alternative paternal haplotype within the same population. Finally, we used a chi-squared goodness-of-fit test to compare the observed proportion of affected gametes with those expected assuming mosaicism rates of 1/2, 1/4, 1/8, and 1/16 in Pa's germ cells. Gene Content and Comparative Genomics The gene content of specific regions was extracted from the bovine ARS-UCD1.2 and human GRCh38.p13 genome assemblies using the BioMart tool (Ensembl release 106; https://www.ensembl.org/biomart/martview/). In parallel, we used the synteny tool from Ensembl to identify conserved blocks between bovine and human chromosomes (https://www.ensembl.org/Bos_taurus/Location/Synteny/). Then we compiled the list of genes in common between the BTA29 segment deleted in Mo's affected calf and the core HSA11 deletion responsible for Jacobsen syndrome in humans. Analysis of Whole-Genome Sequences The genome of 1 affected calf of Pa was sequenced at a coverage of 19.4× on an Illumina HiSeq3000 HWI-J00173 platform with 150-bp paired-end reads, after library preparation with an average insert size of 440 bp using the NEXTflex PCR-Free DNA Sequencing Kit (Bioo Scientific). The whole-genome sequence data are available under study accession no. ERR9669242 at the European Nucleotide Archive (www.ebi.ac.uk/ena). Reads were aligned to the ARS-UCD1.2 bovine genome assembly and processed in accordance with the guidelines of the 1000 Bull Genomes Project (Hayes and Daetwyler, 2019) for the detection of SNPs and small InDels. Assuming that the causative mutation is dominant and occurred de novo, we retained only heterozygous variants that were (1) absent from 5,116 control genomes from run 9 of the 1000 Bull Genomes Project and (2) located within the mapping interval (positions 19,505,558 to 37,877,867 bp on BTA24). Genotyping of the GATA6 Candidate Variant DNA samples from Pa (extracted from semen), 3 affected calves, and 3 controls carrying the same paternal haplotype but in the nonmutated version, as well as their 6 dams, were genotyped for variant g.34,187,181T>A on BTA24 using PCR and Sanger sequencing. 
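As an illustration of both the marker-by-marker transmission test and the mosaicism calculation, here is a hedged R sketch; `marker_counts` is a hypothetical input, the 57:132 haplotype counts are those reported later in the Results, and the simple lethality model used for the expected proportions is our assumption, so the P-values need not exactly match the published ones.

```r
# Transmission disequilibrium: each row of marker_counts holds the paternal
# allele transmission counts c(cases_A, cases_B, controls_A, controls_B).
tdt_p <- apply(marker_counts, 1, function(m) {
  fisher.test(matrix(m, nrow = 2, byrow = TRUE))$p.value
})
hits <- which(p.adjust(tdt_p, method = "bonferroni") < 0.05)

# Mosaicism estimate from control-haplotype counts (57:132 in the Results):
nHa <- 57; nHb <- 132
(nHb - nHa) / (2 * nHb)      # 0.284: proportion of mutant spermatozoa
2 * (nHb - nHa) / (2 * nHb)  # 0.568: proportion of mutant germ cells

# Goodness-of-fit: under germ-cell mosaicism m, sperm are Hb : Ha-ancestral :
# Ha-mutant in proportions 1/2 : (1 - m)/2 : m/2, so surviving controls are
# expected to split Ha : Hb = (1 - m) : 1 (mutant carriers having died).
for (m in c(1/2, 1/4, 1/8, 1/16)) {
  p_exp <- c(1 - m, 1) / (2 - m)
  cat("m =", m, "P =", chisq.test(c(nHa, nHb), p = p_exp)$p.value, "\n")
}
```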
A segment of 321 bp was PCR amplified in a Mastercycler Pro thermocycler (Eppendorf) using primers CAGTGGGCGCTAAAACTACC and AGACCTGCTGGAGGACCTG and the Go-Taq Flexi DNA Polymerase (Promega), according to the manufacturer's instructions. Amplicons were purified and bidirectionally sequenced by Eurofins MWG (Hilden, Germany) using conventional Sanger sequencing, before analysis with the NovoSNP software for variant detection (Weckx et al., 2005). Analysis of Mortality Rates at Different Stages in the Progeny of Individual Sires The natural mortality rate of heifers during their first year of life was 11.8% on average in the population of 1,001 bulls analyzed, with 4.2% for perinatal, 2.9% for postnatal, 3.1% for preweaning, and 3.2% for postweaning mortality. These rates were lower than most of those reported in the literature (e.g., Johanson et al., 2011; Raboisson et al., 2013; Leclerc et al., 2016; Zhang et al., 2019), probably because we considered only females. Sex is known to have a significant effect on juvenile mortality (Raboisson et al., 2013; Hyde et al., 2020), notably because females receive more care than males due to their higher financial value. The addition of vitality at birth to the French Holstein total merit index in 2009 may also have contributed to a reduction of perinatal mortality through selection. Interestingly, natural mortality rates per period and per half-sib family showed an approximately normal distribution, suggesting quantitative inheritance (Figure 1). Yet we observed outlier families with possible mono- or oligogenic inheritance of excess mortality and focused on the 5 worst sires per category (Table 1). Among them, the 2 bulls Mo and Pa, ranked number 1 and number 5 for mortality rate over the first year of life, were half-brothers sired by the popular bull Mogul (HOLUSAM003006972816). Although they displayed distinct profiles (with, for example, 16.7 and 5.3% perinatal mortality versus 2.4 and 6.3% preweaning mortality, respectively), their close relationship raised the question of a common underlying pathophysiology, and they were therefore selected for further analysis. Genetic Analyses of Mo and His Progeny. To gain insights into the causes of increased mortality within the Mo and Pa sire families, we carried out a series of investigations, starting with cytogenetic analyses. Although the karyotypes of 2 affected daughters of Pa were apparently normal (not shown), we observed a reciprocal translocation between chromosomes BTA26 and BTA29 in Mo [t(26;29)(q11;q19); Figure 2A]. Subsequent analysis of Illumina SNP array genotypes from Mo, his own sire Mogul, and 15 of Mo's progeny enabled us to define the approximate borders of the chromosomal break and fusion points, and to determine that the affected chromosomes originated from Mogul (Figure 2B, C; Supplemental Figure S1, https://figshare.com/projects/Besnard_JDS_Supplementary_material/140747; Besnard, 2022). Considering that Mogul was extensively used as a bull sire and did not display abnormal juvenile mortality rates, these results suggest that the rearrangement occurred in the germ cells of Mogul during the meiosis that gave rise to the spermatozoon at Mo's conception. In addition, we demonstrated that 2 affected daughters of Mo with DNA samples available were monosomic for approximately the first 70% of BTA29 (767 markers, 36.7 Mb; Supplemental Figure S1). 
Interestingly, comparative genomics revealed synteny between part of the hemizygous region and the telomeric region of human chromosome 11q whose monosomy is responsible for Jacobsen syndrome (Figure 2C; Mattina et al., 2009). Both segments share a common set of 69 orthologous protein-coding genes out of the 318 affected by monosomy in Mo's progeny and the ~100 of the core Jacobsen deletion (Supplemental Table S1, https://figshare.com/projects/Besnard_JDS_Supplementary_material/140747, Besnard, 2022; Rodríguez-López et al., 2021). Phenotypic Characterization of Mo's Calves and Mo's Semen Characteristics. In humans, Jacobsen syndrome has been extensively studied, with 200 cases compiled in the Human Phenotype Ontology database; its features include Paris-Trousseau thrombocytopenia, growth rate reduction, and psychomotor impairment, as well as cardiac, craniofacial, gastrointestinal, renal, genitourinary, ophthalmic, and orthopedic anomalies (https://www.omim.org/entry/147791). In agreement with the observations made in humans, clinical examination of the 2 calves partially monosomic for BTA29 and of 5 additional cases for which no DNA was available revealed very similar symptoms (Figure 3; Supplemental Note S1; see also Supplemental Figure S1, https://figshare.com/projects/Besnard_JDS_Supplementary_material/140747; Besnard, 2022, for the synteny between the BTA29 and human HSA11 chromosomes). Instances of reciprocal translocations are rare in cattle, with only 20 reports counted in a recent review of the literature by Iannuzzi and coauthors, none of which affected chromosome 29 (Iannuzzi et al., 2021). Regarding aneuploidies affecting BTA29, only 1 complete trisomy had been reported before this study, in a stillborn Braunvieh calf showing preterm delivery, dwarfism, and severe craniofacial malformations (Häfliger et al., 2020). The absence of other reports of trisomy for BTA29 despite the segregation of a BTA1-29 Robertsonian fusion in various cattle breeds (Gustavsson, 1979), as well as the lack of human patients trisomic for the Jacobsen segment on HSA11 orthologous to part of BTA29 (e.g., Pylyp et al., 2018), suggests that this condition leads to embryonic death in both species. Because chromosomal abnormalities affect not only the viability of conceptuses but also meiosis and gametogenesis (Raudsepp and Chowdhary, 2016), we investigated several traits related to semen volume, quality, and fertility in Mo and 2 groups of Holstein bulls (Figure 4). Among 50 bulls reared and sampled in the same artificial insemination center, Mo showed a normal average ejaculate volume but low semen quality, with average semen concentration and fresh and post-freezing motility records in the lowest quartile. The influence of the chromosomal rearrangement was even more severe with regard to fertility, Mo ranking as the worst sire for nonreturn rate at 56 d and the fifth worst for conception rate in our cohort of 1,001 Holstein sires. This major degradation of fertility, observed for both early and late indicators of insemination success, is most probably the result of the premature death of a substantial proportion of aneuploid conceptuses throughout gestation. Thus, we report the first large animal model of human Jacobsen syndrome and the first instance of partial monosomy for BTA29 in cattle, to our knowledge. 
Identification of a Mosaic GATA6 Nonsense Mutation in Pa Despite their close relationship, a different etiology was suspected for the excess mortality observed among the daughters of Pa, because the peak of mortality occurred later in life than for Mo's offspring. This assumption was rapidly confirmed by clinical examination and karyotyping of Pa's descendants. Clinical Examination of Pa's Progeny. A survey of French and British veterinarians allowed us to collect phenotypic information on Pa's descendants, among which 8 showed symptoms compatible with severe heart defects, either leading to premature death or justifying euthanasia on humane grounds (Supplemental Note S2). Autopsies gave strikingly similar results, with the systematic observation of a persistent truncus arteriosus (TA; i.e., a malformation of the large vessels at the base of the heart, characterized by the development of a single arterial trunk straddling the 2 ventricles, above a large interventricular communication, which gives rise to the aorta and the 2 branches of the pulmonary artery), sometimes associated with additional heart septation defects (Figure 5). Mapping and Identification of the Causative Mutation. Given that Pa was apparently unaffected and that TA has never been reported outside of his progeny among the thousands of genetic defects reported to the French National Observatory for Bovine Abnormalities (Grohs et al., 2016), we assumed a dominant inheritance associated with germline or somatic mosaicism, or both, in the sire. Therefore, we analyzed the SNP array genotypes of 14 progeny that died during the preweaning period (including 5 necropsied) and of 189 half-sib controls still alive at 2 years of age, via a transmission disequilibrium test. We mapped the TA locus on BTA24 between positions 19,505,558 (rs453420861) and 37,877,878 (rs723126921) bp on the ARS-UCD1.2 assembly. Then we sequenced the genome of one TA-affected animal with Illumina technology and used up to 5,116 genomes from run 9 of the 1000 Bull Genomes Project (Hayes and Daetwyler, 2019) as controls. Filtering for heterozygous SNPs, InDels, and structural variants that were absent from controls yielded only 29 positional candidates within the interval (Supplemental Table S2, https://figshare.com/projects/Besnard_JDS_Supplementary_material/140747; Besnard, 2022). Only one of them appeared to be a bona fide functional candidate variant: a thymine-to-adenine substitution in exon 2 of GATA6 predicted to introduce a premature stop codon (chr24:g.34,187,181T>A; GATA6 p.K417X). If translated, the mutant protein would be shortened by approximately 30% and would lack 3 domains essential for the proper function of this transcription factor, which controls heart development in vertebrates (Brewer and Pizzey, 2006; Lentjes et al., 2016; Figure 6D). Experiments in mice have demonstrated that the conditional inactivation of GATA6 in heart progenitor cells causes embryonic lethality due to an interrupted aortic arch and persistent TA (Lentjes et al., 2016). In humans, about 80 dominant mutations of GATA6 have been described to date, which cause various heart or pancreatic development anomalies depending on their nature and location (for a review, see Škorić-Milosavljević et al., 2019). 
Remarkably, the 2 orthologous human truncating mutations located closest to the present bovine nonsense variant (p.S418fs and p.G441X) have been reported to cause exactly the same phenotype, that is, persistent TA, supporting the causality of the latter mutation (Figure 6D). Validation of the Causality of the GATA6 Mutation. For verification, we genotyped this GATA6 nonsense variant by PCR and Sanger sequencing in Pa, 3 affected calves, and 3 controls carrying the same paternal haplotype but supposedly in the nonmutated version, as well as their 6 dams. As expected, the mutant allele was found in the heterozygous state only in the 3 cases and in Pa's semen, thus confirming the de novo nature, and therefore the causality, of the mutation (Figure 6C). Then we analyzed the segregation distortion for 2 markers adjacent to the mutation among the 189 control calves of Pa that were still alive at 2 years of age. We found 57 controls carrying the same paternal haplotype as the affected animals but presumably in its ancestral version (i.e., without the de novo mutation) and 132 with the second paternal haplotype. From this 57:132 ratio, we estimated a proportion of 28.4% of mutant spermatozoa [(132 − 57)/(2 × 132)] and thus 56.8% of mutant germ cells. Comparing the proportion observed in controls with those expected for various degrees of mosaicism using a chi-squared goodness-of-fit test, we demonstrated that this distortion was compatible with a degree of mosaicism of 1/2 (P = 0.28) and rejected lower levels of mosaicism (proportions of 1/4, 1/8, and 1/16; P = 0.00012 and lower). These results suggest that the mutation occurred either early in the germline progenitor cells of Pa or possibly at the first division of the egg cell. Unfortunately, Pa was dead at the time of the study, and we did not have access to tissues other than semen to answer this question. CONCLUSIONS With a few exceptions, we observed a nearly normal distribution of juvenile mortality rates among the daughters of 1,001 Holstein sires. By focusing on the progeny of 2 outlier bulls, we identified 2 de novo mutations consisting of a balanced translocation between chromosomes 26 and 29 and a mosaic nonsense mutation of GATA6 (see Online Mendelian Inheritance in Animals entries OMIA 002558-9913, https://www.omia.org/OMIA002558/9913/, and OMIA 002559-9913, https://www.omia.org/OMIA002559/9913/). Furthermore, we described the first large animal models for human Jacobsen syndrome and for persistent truncus arteriosus due to GATA6 haploinsufficiency, to our knowledge. These results demonstrate the suitability of our approach for revealing genetic defects that are hardly detectable with traditional heredo-surveillance in the absence of specific externally visible symptoms. Beyond this proof of concept, the calculation of mortality rates at different ages for the whole population of bulls paves the way for the future detection of QTL influencing juvenile mortality. ACKNOWLEDGMENTS We thank those who contributed to the analysis of the chromosomal rearrangement, with the support of Anne Calgaro and Nathalie Mouney (Cytogene team, UMR GenPhySE). Finally, we express our special thanks to Nora Cesbron [Laboratoire de l'Environnement et de l'Alimentation de la Vendée (LEAV), La Roche-sur-Yon, France] for her efforts in clinical examination and for her expertise in cardiology. F. Besnard is a recipient of a CIFRE PhD grant from IDELE, with the financial support of the Association Nationale de la Recherche et de la Technologie and APIS-GENE (Paris, France). 
Surveillance for livestock disease conducted by the APHA is funded by the UK Department for the Environment, Food, and Rural Affairs (London, England) and the devolved governments of Scotland and Wales. The authors have not stated any conflicts of interest.
2022-11-04T18:17:44.992Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "2f884f99d358d1f4741d66b8cf2f6e455e6ac635", "oa_license": "CCBY", "oa_url": "http://www.journalofdairyscience.org/article/S0022030222006373/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4dc463c7f34d21a02d3ed1da483b46245c515705", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
259188186
pes2o/s2orc
v3-fos-license
C. elegans ATG-5 mutants associated with ataxia Intracellular cleaning via autophagy is crucial for maintaining cellular homeostasis, and impaired autophagy has been associated with the accumulation of protein aggregates that can contribute to neurological diseases. Specifically, a loss-of-function mutation (E122D) in the human autophagy-related gene 5 (ATG5) has been linked to the pathogenesis of spinocerebellar ataxia in humans. In this study, we generated two homozygous C. elegans strains with mutations (E121D and E121A) at the position corresponding to the human ATG5 ataxia mutation to investigate the effects of ATG5 mutations on autophagy and motility. Our results showed that both mutants exhibited a reduction in autophagy activity and impaired motility, suggesting that the conserved mechanism of autophagy-mediated regulation of motility extends from C. elegans to humans. [Fig. 1f legend] The frequency of body bends on adult days 3 and 7 in WT, atg-5(E121D), and atg-5(E121A) animals (N=30). Plots show the frequencies of body bends. Horizontal lines and error bars indicate means ± s.d. Description Autophagy is an intracellular degradation system that operates constitutively. This system selectively degrades abnormal intracellular molecules, such as aggregated proteins and impaired organelles, in response to their production (Yamamoto et al., 2023). Animals deficient in autophagy-related genes (ATGs), which are required to drive autophagy, are affected by neurodegenerative diseases with accumulation of abnormal proteins (Hara et al., 2006). These findings indicate that intracellular cleaning via autophagy contributes to the suppression of disease development through the maintenance of homeostasis. The autophagy pathway is based on large-scale membrane trafficking, particularly the generation and elongation of isolation membranes that sequester substrates. ATG5 is one of the best-known ATGs involved in isolation membrane elongation. Specifically, ATG5 catalyzes the ATG8 lipidation reaction through covalent binding with ATG12 for the recruitment of ATG8 to the isolation membrane (Mizushima et al., 2011). Note that ATG5 is functionally and structurally conserved from humans to C. elegans (Fig. 1a). The pathogenic ATG5 E122D mutation leading to human spinocerebellar ataxia reduces both the interaction with ATG12 and autophagy induction (Kim et al., 2016). However, studies on the pathogenesis of spinocerebellar ataxia due to defects in the autophagy machinery remain insufficient. To address this issue, we generated a homozygous nematode strain with an atg-5 mutation at the position corresponding to the human ataxia mutation (atg-5(tj130[E121D])). We also generated an atg-5 mutant in which Glu121 was converted to alanine (atg-5(tj122[E121A])) to investigate the importance of the carboxylate side chain of the glutamate residue. The side chain of alanine, a methyl group, is sterically small and does not engage in hydrogen bonding or ionic interactions. In both atg-5 mutants, no marked differences from wild-type worms were observed in body length, brood size, or growth rate (Fig. 1b−d). To observe autophagy levels in these mutants, we utilized GFP-tagged LGG-1 (GFP::LGG-1), a C. elegans ATG8 ortholog that is widely accepted as an autophagosome marker (Mizushima, 2004). In worms in which autophagy is induced, GFP::LGG-1 is detected as dot-like structures in cells (Kang et al., 2007). 
In both atg-5 mutants, the number of GFP::LGG-1 dots in the pharynx was lower than in wild-type worms, suggesting a permanent decrease in autophagy (Fig. 1e). A marked decrease in locomotion ability has been observed in patients with spinocerebellar ataxia (Kim et al., 2016). We tested whether a similar phenotype was observed in atg-5(tj130[E121D]) or atg-5(tj122[E121A]), using body bend frequency to evaluate the locomotion ability of the worms (Onken & Driscoll, 2010). At day 7 of adulthood, both atg-5(E121D) and atg-5(E121A) worms showed a statistically significant reduction in body bend frequency (Fig. 1f). As early as day 3 of adulthood, a tendency toward a reduced frequency of body bends was observed in the atg-5(E121A) mutants (Fig. 1f). This is interesting because, to our knowledge, such an alanine mutant has not been reported in human ataxia. Collectively, these results indicate that the E121 mutations in ATG-5 reduced the locomotion ability of the worms, which might be related to human spinocerebellar ataxia. In summary, we introduced an ATG5 mutation associated with human spinocerebellar ataxia into C. elegans for the first time and demonstrated that this mutation causes locomotor defects in the nematode. Our analysis of GFP::LGG-1 dots indicated that the two mutant strains created in this study had defective autophagy activity, but their impacts on locomotion appeared to be somewhat different. This nematode model could be used to investigate the etiology and pathogenesis of spinocerebellar degeneration in humans. Body length L4 larvae were collected, and 16−18 hours later, the animals were observed under a microscope and measured as 1-day adults. At least 28 worms were analyzed. Brood size Individual L4-stage worms (N=9) were transferred every 12 hours. The numbers of fertilized eggs and hatched larvae were counted repeatedly until only unfertilized eggs were laid. Growth rate After performing a timed egg-lay with 10 adult worms at 20°C and collecting approximately 100 eggs, the number of worms at each stage (egg, L1, L2−L3, L4, adult) was counted every 12 hours. Measurement of frequency of body bends The assay was performed as previously described (Onken & Driscoll, 2010). Thirty worms in 20 µL of M9 buffer were transferred to a glass slide. After 2 minutes, the behavior of the worms in the M9 buffer was recorded for 30 seconds. Body bends were counted by reviewing the 30-second movies (N=30). Statistical analysis Statistical analyses were performed using Dunnett's test. In all tests, a P-value of < 0.05 was considered statistically significant.
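For the statistics, a minimal R sketch of a Dunnett comparison of each mutant against the wild type is shown below; the multcomp package is one standard implementation (the paper does not name its software), and the simulated counts are placeholders for the actual body-bend data.

```r
library(multcomp)

set.seed(1)
# Placeholder data: 30 worms per genotype, body-bend counts per 30 s
bends <- data.frame(
  genotype = factor(rep(c("WT", "E121D", "E121A"), each = 30),
                    levels = c("WT", "E121D", "E121A")),  # WT as reference
  count = c(rnorm(30, 20, 3), rnorm(30, 17, 3), rnorm(30, 15, 3))
)

fit <- aov(count ~ genotype, data = bends)
# Dunnett's test: each mutant compared with the WT reference level
summary(glht(fit, linfct = mcp(genotype = "Dunnett")))
```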
2023-06-19T05:05:02.848Z
2023-06-02T00:00:00.000
{ "year": 2023, "sha1": "116ea25b59c2e704966c6790e4000619e1842194", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "116ea25b59c2e704966c6790e4000619e1842194", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
52010040
pes2o/s2orc
v3-fos-license
Ruminal methane emissions, metabolic, and microbial profile of Holstein steers fed forage and concentrate, separately or as a total mixed ration Few studies have examined the effects of feeding a total mixed ration (TMR) versus roughage and concentrate separately (SF) on ruminant methane production. Therefore, this study compared differences in methane production, ruminal characteristics, total tract digestibility of nutrients, and the rumen microbiome between the two feeding methods in Holstein steers. A total of six Holstein steers with initial body weights of 540 ± 34 kg were divided into two groups and assigned to the same experimental diet under two different feeding systems (TMR or SF) in a crossover design with 21-d periods. The experimental diet contained 73% concentrate and 27% forage and was fed twice a day. The total tract digestibility of crude protein, neutral detergent fibre, and organic matter was not affected by the two different feeding systems. Steers fed TMR emitted more methane (138.5 vs. 118.2 L/d; P < 0.05) and lost more gross energy as methane energy (4.0 vs. 3.5% of gross energy intake; P = 0.005) compared to those fed SF. Steers fed TMR had greater (P < 0.05) total volatile fatty acid (VFA) and ammonia-N concentrations and a greater propionate proportion of total VFA at 1.5 h, whereas these were lower thereafter compared to steers fed SF. The greater (P < 0.05) acetate:propionate ratio at 4.5 h for steers fed TMR reflected a shift of the H2 sink from propionate towards acetate synthesis. The lower (P < 0.05) isobutyrate and isovalerate proportions of total VFA observed in steers fed TMR imply a decrease in the net consumption of H2 for microbial protein synthesis compared to SF. There were no differences in major bacterial or archaeal diversity between TMR and SF, unlike several minor bacterial abundances. Minor groups such as Coprococcus, Succiniclasticum, Butyrivibrio, and Succinivibrio were indirectly associated with the changes in ruminal VFA profiles or methanogenesis. Overall, these results indicate that SF reduces methane emissions from ruminants and increases the propionate proportion of total VFA without affecting total tract digestion compared to TMR. There was no evidence that the response differed because of differences in the major underlying microbial populations. Introduction Greenhouse gas emissions from livestock production are expected to increase over the coming decades due to the projected increase in demand for livestock products [1]. Besides its negative impact on the environment, methanogenesis represents a loss of 3−10% of the gross energy intake of the animal and leads to the unproductive use of dietary energy [2]. Advances in the understanding of ruminant nutrition and rumen processes relevant to methane formation have led to various strategies to reduce CH4 emissions from the rumen: the use of chemical inhibitors such as bromochloromethane; electron acceptors such as fumarate, nitrates, sulphates, and nitroethane; and bioactive plant compounds such as tannins and saponins [3][4]. However, the use of these compounds as feed additives has not been promising because of several adverse effects, such as reductions in fibre digestibility and feed intake, toxicity to the rumen microbiome, and questions regarding the persistence of the effect. Increasing the productivity of cattle to reduce CH4 emissions is a key area of interest because reducing the ruminant population being farmed is not an option. 
Hence, alternative feeding strategies such as the total mixed ration (TMR) are of interest [3], because TMR has been reported to be of significant benefit in terms of increasing feed intake and digestibility, minimising choice feeding among individual feeds, and maintaining sufficient fibre intake to support rumen health (such as a stable ruminal pH and a lower A/P ratio) [5][6] compared to feeding roughage and concentrates separately (SF). However, there are also contradictory reports that feeding a TMR had no effect on the animal performance or carcass traits of steers [7] or on milk production and milk composition [8][9]. Despite the importance of the feeding system for livestock production and environmental impact, very few studies have compared the effects of SF and TMR feeding on CH4 production in ruminants, and those that have show inconsistent results. Holter et al. [10] reported that the yield of milk per unit of total diet DM was about 4% greater for TMR than for the same feeds offered separately, but the CH4 yield (% GEI) was not affected significantly by treatment. However, Lee et al. [11] showed a significantly higher CH4 yield (% GEI) for TMR than for the same feeds offered separately. Hence, the effect of these feeding systems on enteric CH4 production should be validated, since the TMR feeding practice is on the rise in both developed and developing countries. So, the objective of this study was to confirm the advantages of TMR in terms of a stable ruminal pH and nutrient digestibility in Holstein steers fed the same amount of the same ingredients, and to understand how its ruminal fermentation characteristics affected ruminal methane production and ruminal microbial communities, using next-generation sequencing. The experimental feeds were obtained from a commercial feed mill company, and all experimental chemicals were obtained from Sigma-Aldrich (St. Louis, MO, USA). Animals and experimental design Six Holstein steers with initial body weights of 540 ± 34 kg were divided into two groups of three steers and assigned to the two experimental diets (TMR or SF) in a crossover design with 21-d periods. Each period comprised 12 d for diet adaptation in the pen, 4 d for metabolic adaptation in the indirect respiratory chamber, 2 d for CH4 measurement, 2 d for faecal sampling, and 1 d for rumen fluid sampling. Experimental diet and feeding The total mixed ration was prepared using 73% concentrates (including water, yeast culture, limestone, salt, and molasses) and 27% roughage on a DM basis (Table 1). The moisture contents of the TMR and of the concentrates used for SF were 10% and 17%, respectively. The amount fed to the animals was restricted and adjusted to achieve average daily gains of 0.65 kg [12], and the animals were fed equal amounts twice a day at 0900 and 1800 h (Table A in S1 Table). In the SF system, the roughage was given first, and the concentrate was given 40 min later to prevent an unnecessary drop in pH during the initial ruminal fermentation. The animals had free access to water and a mineral block in both the pen and the respiratory chamber. Samples of the feed offered were collected and stored to measure the dry matter content and perform other chemical analyses. Measurement of methane emission Methane production was measured using three indirect open-circuit whole-body respiratory chambers with steel frames and polycarbonate-sheet walls [12]. 
Each chamber (137 × 256 × 200 cm, wide × deep × tall) was equipped with a feeder, a water bowl, an air conditioner (model ALFFIZ-WBCAI-015H; Busung, India), a dehumidifier (model DK-C-150E; Dryer, Korea), and circulatory fans to maintain the temperature and humidity. The gas analysis system consisted of a gas sampling pump (B.S. Technolab, Korea), a flow meter (model LS-3D; Teledyne Technologies, USA) consisting of a sealed rotary pump that provided a constant wet ventilation rate (wet VR) of 600 L/min, and a data acquisition and analysis unit equipped with a CH4 gas analyser (Airwell+7; KINSCO Technology, Korea) containing a tunable diode laser CH4 sensor with a range of 0 to 1,000 ppm. The respiration chamber was maintained at a controlled temperature of 25°C and a humidity of 50%. The gas analyser was calibrated, and the recovery rate of each chamber was tested at the beginning of the experiment using a standard CH4 gas mixture (25% mol/mol, balance N2; Air Korea). Briefly, a fixed volume of CH4 (50 mL/min) was injected into each chamber from outside, near the circulatory fan, through a gas tube. The air was allowed to mix for 10 min to achieve an equilibrium state with the inlet and outlet air passages closed. After that time, the inlet and outlet air passages were opened, and the gas at both the inlet and outlet was sampled every 10.5 min, with flush and measuring times of 90 and 120 s, respectively. The difference in CH4 concentration between the inlet and outlet gas (ΔCH4, in ppm), the known ventilation rate at dry standard temperature and pressure (dry STP VR), and the known injected CH4 concentration were used to calculate the gas recovery rate of each chamber using the formula: CH4 emission (L/min) = (dry STP VR × (ΔCH4 ppm / 1,000,000)) / gas recovery rate. During the experiment, the same routine operation was carried out when the animals were placed in the chamber, and the CH4 emission was calculated using the same formula with known values. To avoid uncertainty in the data, we placed the same animals in the same chambers for both periods while measuring CH4 throughout the experiment. However, chamber availability delayed the measurement of CH4 emission from the other group by 8 days. Digestion trial and rumen sampling The effects of the feeding systems on the total tract digestibility of nutrients were studied using chromic oxide (Cr2O3) as an external marker. The marker was top-dressed on the feed twice daily at 0.2% of the daily feed amount for TMR, whereas it was mixed with the concentrate in the SF system before feeding. On day 19, a faecal grab sample (100 g fresh weight) was collected from the rectum of each animal 30 min before feeding and 1, 3, 5, and 7 h post feeding. On day 20, faeces were collected 30 min before and 2, 4, 6, and 8 h post feeding. The faecal samples were frozen at −20°C until being analysed by steer within period. Samples of ruminal fluid were collected 1.5, 3, and 4.5 h post feeding on day 21 of each period using a stomach tube (Oriental Dream Corporation, Korea). The ruminal fluid was squeezed through four layers of muslin, and the pH was measured immediately using a pH meter (model AG 8603; Seven Easy pH, Mettler-Toledo, Schwerzenbach, Switzerland). For microbial analysis, rumen fluid was stored at −80°C until DNA extraction. 
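The chamber arithmetic translates directly into code; the following R sketch implements the emission formula and the recovery-rate check with illustrative numbers (the 75-ppm difference is made up for the example).

```r
# All concentrations in ppm; ventilation rate at dry standard temperature and
# pressure (dry STP VR) in L/min.
ch4_emission <- function(dry_stp_vr, delta_ch4_ppm, recovery_rate = 1) {
  (dry_stp_vr * (delta_ch4_ppm / 1e6)) / recovery_rate
}

# Recovery-rate test: a known 50 mL/min (0.05 L/min) CH4 injection; the
# recovery rate is the apparent emission relative to the injected flow.
recovery_rate <- function(dry_stp_vr, delta_ch4_ppm, injected = 0.05) {
  ch4_emission(dry_stp_vr, delta_ch4_ppm) / injected
}

ch4_emission(600, 75, recovery_rate = 0.98)  # ~0.046 L/min
```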
The ruminal fluid was centrifuged at 11,200 × g for 10 min (Centrifuge Smart 15, Hanil Science Industrial, South Korea), and the supernatant was transferred to a 50-mL centrifuge tube and stored at −20°C for determination of the ammonia-nitrogen (NH3-N) and volatile fatty acid (VFA) concentrations. Chemical analyses The samples of feed and faeces were dried in a forced-air oven at 65°C for 72 h to estimate the dry matter (DM) content, ground to pass through a 1-mm screen (Thomas Scientific Model 4, New Jersey, USA), and then assayed for crude protein (CP) by combustion (Method 990.03; [13]) using an Elementar rapid N-cube protein/nitrogen apparatus (Elementar Americas, Mt. Laurel, NJ, USA), for ash (Method 942.05; [13]), for ether extract (EE; Method 960.39; [13]), and for chromium (Method 990.09; [13]). The neutral detergent fibre content was assayed with a heat-stable amylase and expressed exclusive of residual ash (aNDFom) using the method of Van Soest [14]. The content of acid detergent fibre excluding residual ash (ADFom) was determined according to Van Soest [15]. The gross energy (GE) of both feed and faecal samples was estimated using a bomb calorimeter (Shimadzu CA-3, Shimadzu, Japan). A 5.0-mL aliquot of rumen fluid was mixed with 1.0 mL of 25% HPO3 and 0.2 mL of 2% pivalic acid to measure the VFAs [16], and the mixture was analysed using an Agilent 7890B GC system (Agilent Technologies, Santa Clara, CA, USA) with an FID detector. The inlet and detector temperatures were maintained at 220°C. Aliquots (1 μL) were injected with a split ratio of 10:1 onto a 30 m × 0.25 mm × 0.25 μm Nukol fused-silica capillary column (Cat. No. 24107, Supelco, Sigma-Aldrich, St. Louis, MO, USA) with helium carrier gas at a flow rate of 1 mL/min and an initial oven temperature of 80°C. The oven temperature was held at the initial temperature for 1 min, thereafter increased at 20°C/min to 180°C and held for 1 min, and then increased at 10°C/min to a final temperature of 200°C, giving a final run time of 14 min. The NH3-N concentration was determined using a modified colorimetric method [17]. The particle size of the feed in both the TMR and SF was determined using a Penn State particle separator, following the technique of Kononoff et al. [18]. DNA extraction, PCR and 16S rRNA gene sequencing The 24 rumen samples, collected at 3 time points from 4 Holstein steers fed under the two different feeding systems, were thawed at room temperature, and genomic DNA was extracted using the NucleoSpin Soil kit (Macherey-Nagel, Düren, Germany) with minor modifications. Briefly, 5 mL of thawed rumen fluid was centrifuged at 11,200 × g using a Centrifuge Smart 15 (Hanil Science Industrial, Seoul, South Korea), and the supernatant was discarded. Three hundred and fifty μL of lysis buffer and 75 μL of enhancer were added to the pellet and vortexed for 2 min. The liquid was transferred to a NucleoSpin Bead Tube Type A containing the ceramic beads and vortexed using a Taco Prep bead beater (GeneReach Biotechnology Corp., Taiwan). The rest of the procedure followed the manufacturer's instructions. Extracted DNA samples were stored at −20°C prior to PCR amplification. 
In the present study, the forward primer F515 (5′-CACGGTCGKCGGCGCCATT-3′) and reverse primer R806 (5′-GGACTACHVGGGTWTCTAAT-3′), targeting the V4 domain of the bacterial/archaeal 16S rRNA gene, were selected for interrogating the bacterial and archaeal communities, since the genus-level coverage of this region has been found to be high [19]. This primer set targets ~312 bp of the V4 hypervariable region, which can be fully covered by the Illumina MiSeq. The primers were modified to contain an Illumina adapter and linker region for sequencing on the Illumina MiSeq platform and, on the reverse primer, a 12-base barcode to enable sample multiplexing. Briefly, the PCR reaction was prepared using genomic DNA (5 ng), reaction buffer with 25 mM Mg2+, dNTPs (200 μM each), Ex Taq polymerase (0.75 units; Takara Bio, Shiga, Japan), and 5 pmol of each barcoded primer. The PCR was carried out at 94°C for 3 min for initial denaturation; 30 cycles of 45 s at 94°C, 1 min at 55°C, and 90 s at 72°C for amplification; and 72°C for 10 min for final extension. Then, the PCR products were quantified using the Quant-iT dsDNA High-Sensitivity Assay Kit (Invitrogen, CA, USA), and all amplicons from the 24 DNA samples were loaded onto a 1.5% agarose gel. Bands were visualized, and the target band was excised and extracted using the QIAquick Gel Extraction Kit (Qiagen, Hilden, Germany). The extracted DNA was used to construct the V4 sequencing library with the NEBNext Ultra DNA Library Prep Kit (Cat. No. E7370S; New England Biolabs, Ipswich, MA, USA), according to the manufacturer's instructions, and the library was sequenced with paired-end 250-bp reads on the Illumina MiSeq. Bioinformatics and statistical analysis The raw Illumina MiSeq reads were demultiplexed according to the barcodes, and the sequences were quality-filtered (≥ Q20). The processed paired reads were merged into a single read, and each merged read was screened for operational taxonomic unit (OTU) picking using UCLUST embedded within QIIME 1.9.0 with the Greengenes database (gg_otus-13_8 release, 97% nucleotide identity). Alpha diversity indices, including the Chao1, Shannon, and Simpson indices, were estimated using the PAST software [20], and the reads were rarefied based on the mean values of 10 iterations with 10,000 reads per sample. To identify the bacterial lineages that drive the clustering of microbial communities under both feeding systems, we performed principal component analysis (PCA) using the fviz_pca_biplot function of the factoextra package (which visualizes FactoMineR results) in R [21]. A non-parametric Kendall rank correlation was used to test the correlation between the mean production variables and the bacterial communities in rumen fluid, using the corr.test function in the psych package of R [22]. The resulting correlation matrix was visualized as a heatmap using the plot_ly function in the plotly package of R [23]. Data on daily methane emissions and total tract digestibility were analysed using the MIXED procedure of SAS (SAS Institute, Cary, NC, USA). The model included a fixed effect of dietary treatment and random effects of period and animal nested within treatment. Ruminal fermentation characteristics and microbial diversity were analysed as repeated measures using SAS PROC MIXED [24]. The fixed effects in the model were dietary treatment and fermentation time, as well as their interaction. Animal within treatment was considered a random effect. 
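A condensed R sketch of the ordination and correlation steps is given below; `genus_abund` (samples × genera), `production` (samples × production variables), and `feeding_system` are hypothetical stand-ins for the paper's data objects.

```r
library(psych)       # corr.test
library(factoextra)  # fviz_pca_biplot

# Kendall rank correlations between production variables and genus abundances
ct <- corr.test(production, genus_abund, method = "kendall", adjust = "none")
ct$r  # Kendall's tau matrix
ct$p  # P-values

# PCA biplot of genus abundances, samples coloured by feeding system (TMR/SF)
pca <- prcomp(genus_abund, scale. = TRUE)
fviz_pca_biplot(pca, habillage = feeding_system)
```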
Appropriate covariance structures were chosen based on the Akaike information criterion. Means were calculated using the LSMEANS statement, and the animal was considered the experimental unit. Treatment differences were considered significant at P < 0.05. Feed intake, nutrient digestibility, ruminal methane production and fermentation The steers fed TMR and SF had similar DM intakes (Table A in S1 Table) and nutrient digestibility (Table B in S1 Table). The steers fed TMR produced more (P < 0.05) CH4 (g/d, g/kg DMI, g/kg OMI, and % GEI) than those fed SF (Table 2). As expected, the TMR mixing process increased the percentage of particles less than 1.18 mm and decreased the percentage of particles > 19 mm (P < 0.05 and P = 0.01, respectively) (Table 3). The two feeding systems produced distinctly different ruminal fermentation characteristics. Ruminal pH, acetate, and the acetate:propionate ratio were lower for TMR feeding than for the SF system at 1.5 h but were greater (P < 0.05) for TMR at 4.5 h (Table 4). In contrast, for the other variables, total VFA and propionate were greater (P < 0.05) in steers fed TMR than in those fed SF at 1.5 h, whereas the inverse was observed 4.5 h post feeding. No difference (P > 0.05) in the proportion of butyrate in total VFA was noted between the feeding systems. The concentration of NH3-N (P = 0.005) and the proportions of isobutyrate and isovalerate in total VFA were also greater (P < 0.01) in SF than in TMR after feeding (Table 4). Richness, diversity estimates, and rumen bacterial and archaeal composition Illumina sequencing produced a total of 1,231,081 good-quality bacterial and 323,775 archaeal sequences from the 24 samples from 4 Holstein steers. These sequences comprised an average of 51,295 bacterial reads (range 28,357 to 176,175) and 15,418 archaeal reads (range 6,910 to 27,395) per rumen sample. The feeding system was found to have no effect (P > 0.05) on the total reads generated for bacteria and archaea. The mean numbers of observed OTUs were 1,911 and 1,937 for SF and TMR, respectively, at a depth of 10,000 reads per sample. Among the alpha diversity metrics, Chao1 exhibited a difference (P < 0.05) denoting greater bacterial richness in the SF system. However, the Shannon and Simpson indices did not exhibit any significant differences in either bacterial or archaeal diversity between the feeding systems at any time interval (S2 Table). Differences in bacterial community composition between the feeding systems The bacterial community composition did not differ (P > 0.05) between the feeding systems at the phylum level. However, the abundance of the phylum Actinobacteria tended to be greater (P = 0.061) in the SF system (S3 Table). Likewise, at the genus level, the mean abundances of Parabacteroides (P = 0.081), YRC22 (P = 0.082), Succiniclasticum (P = 0.063), Anaerovibrio (P = 0.071), and Succinivibrio (P = 0.074) tended to be greater in the SF system. Similarly, the abundances of the genera CF231 and Coprococcus were greater (P < 0.05) in SF, whereas the abundances of SHD-231 (P = 0.072, a tendency), Butyrivibrio, and RFN20 were greater (P < 0.05) in the TMR feeding system (Table 5; S3 Table). In the PCA plot, samples clustered into two groups by feeding system and were correlated with the mean taxonomic annotation (Fig 2). 
Correlations between ruminal methane production, metabolites, and bacterial abundance Prevotella, the most abundant genus under both feeding systems (up to 26%), showed a negative (Kendall's τ = −1, P < 0.001) and a positive (Kendall's τ = 1, P < 0.001) correlation with iso-fatty acids in TMR and SF, respectively (S4−S7 Tables; Fig 3). It also exerted a positive (Kendall's τ = 1, P < 0.001) correlation with ruminal pH and acetate in the TMR feeding system. [Table 5 caption: Relative abundance of taxa in the steers under the two feeding systems, representing > 0.1% of total sequences, that tended to differ (P < 0.1) or differed significantly (P < 0.05).] The next most abundant genera, Ruminococcus and Butyrivibrio, showed no correlation with any of the production variables under either feeding system. However, Ruminococcus tended to show a negative (Kendall's τ = −0.91, P = 0.087) and a positive (Kendall's τ = 0.91, P = 0.087) correlation with CH4 production and the propionate proportion, respectively, in the SF system. The differentially abundant genera RFN20 and Succiniclasticum had a strong negative (Kendall's τ = −1, P < 0.001) and positive (Kendall's τ = 1, P < 0.001) correlation with CH4 production in SF and TMR, respectively. Considering the rumen metabolites, the propionate and NH3-N concentrations in the SF system correlated negatively (Kendall's τ = −1, P < 0.001) with methane production. Effects of feeding system on CH4 emissions In our experiment, feeding TMR resulted in increases in CH4 production (absolute) and CH4 yield, although the intake amount was the same as for SF. The observed DMI required to meet the nutrient requirements for an average daily gain of 0.65 kg was slightly higher than that predicted by the Korean feeding standard for beef cattle [25]. However, the CH4 yield (g/kg DMI) in the current experiment was much lower than those reported earlier [26][27]. This can be explained by the high feed quality and the high proportion of concentrate in the feed in Korea. A similar CH4 yield (g/kg DMI) was noted in an earlier report from Korea [28]. A strong relationship between DMI and ruminal CH4 production has been reported [29][30], indicating that increasing the DMI increases the amount of fermentable substrate, including both structural and nonstructural carbohydrates [31]. However, there is evidence that increasing the feed intake decreases the CH4 yield [32][33][34], which has been explained by the decrease in mean rumen retention time, which consequently decreases the extent of rumen fermentation compared with low intake levels [35][36][37]. This is why our experiment was performed at a restricted feed intake level, rather than ad libitum, for both TMR and SF. According to a survey by Heinrichs et al. [38], only 7.1% of the particles in TMR were greater than 19 mm versus 16−18% for various forages, which is consistent with our results of 54 and 181 g/kg DM (i.e., 5.4% and 18.1%) for TMR and SF, respectively. Liu et al. [39] reported that TMR feeding produced a greater proportion of ruminal contents with a particle size < 1.18 mm, which is the critical size for particles to pass out of the rumen [40]. Although reducing the forage particle size during the TMR mixing process usually results in a reduced rumen solid retention time, the effects on DMI and digestibility remain less clear: Kononoff and Heinrichs [41] found that DM digestibility decreased with increasing ration particle size, whereas Kononoff and Heinrichs [42] reported the opposite. 
There were no differences (P > 0.05) in the total digestibility of DM, OM, CP, and aNDFom or in intake energy between the feeding systems in our experiment, and numerous previous studies have also found no significant differences in DMI and nutrient digestibility between the two feeding methods [8] [10] [41]. These different results are likely the result of interactions between forage particle size, forage type and the forage-to-concentrate ratio [40]. The limited effects of passage rate on total tract digestibility could also have been due to postruminal compensatory digestion [43]. Reducing the particle size distribution with feed processing might be another strategy for decreasing CH4 emissions because it probably alters the rate of fermentation and the passage rate of the particles. Recently, Huhtanen et al. [37] observed an inverse relationship between the rate of feed passage and CH4 production. This relationship was also seen by Okine et al. [44], who found that ruminal passage rate constants and ruminal fluid dilution rates explained 28% and 25% of the variation in methane, respectively. Although we did not measure the passage rate of the feed, it is possible that the increased methane production was not associated with the decreased particle size of TMR. However, the diurnal variation in methane production between the feeding systems shows a higher production of CH4 in the TMR system after feeding (S1 Fig).

Effects of feeding systems on rumen fermentation characteristics and bacterial abundance

In this study, our results suggested that the TMR and SF systems did not influence the abundance of the major microbiome in the rumen of Holstein steers, which may be a result of feeding the same diet ingredients. However, a change in the rumen fermentation pattern between the feeding systems was observed, leading to an increase in ruminal pH after feeding TMR. In general, a reduced forage particle size decreases the time spent chewing and creates a trend toward decreased ruminal pH due to the increased availability of substrate for fermentation [45]. However, this was not the case with TMR feeding in our experiment. It is unlikely that the reduced particle size due to the TMR mixing process was the factor determining the reduction in ruminal pH and CH4 production. Feeding TMR eliminates the need to feed large meals of concentrate, which may be beneficial in terms of maintaining a high ruminal pH [5], consistent with our findings. In our study, steers fed SF showed a consistent decrease in pH from 6.7 to 6.3 until 4.5 h post feeding, which might be attributed to the rapid consumption of concentrate that was fed 40 min after the roughage. It is generally accepted that ruminal CH4 production is lower when ruminal pH is low, as in high-grain diets that are rich in soluble carbohydrate or starch rather than in diets that include a high amount of forage [46]. The same phenomenon was observed in recent reports in which the animals were fed high-grain diets [47] [48]. Nevertheless, Hünerberg et al. [49] reported a discrepant result, in that reductions in diurnal ruminal pH did not correlate with a reduction in CH4 production when ruminal pH decreased to the threshold levels for subacute (5.2 ≤ pH < 5.5) or acute (pH < 5.2) ruminal acidosis. This is also in agreement with the absence of a correlation between ruminal pH and CH4 production in the current experiment.
The overall ruminal fermentation characteristics in this experiment indicated that the fermentation pattern of steers fed TMR shifted away from propionate towards acetate. Moss et al. [50] reported that the production of acetate and butyrate from pyruvate is accompanied by the production of H2, whereas propionate production utilises H2, which is the major substrate for methanogenesis. The increase in CH4 with the TMR system in our experiment might be due to an increase in acetate from cellulose digestion, although no difference in total NDF digestion was observed in this experiment, and to a shift in the metabolic H2 sink toward the production of acetate after feeding, which consequently increased the acetate:propionate ratio (A/P). This is also supported by Li et al. [51], who observed an increase in the activity of xylanase, the most active fibrolytic enzyme, in the TMR feeding system compared with SF. This could be the major reason for the observed increase in CH4 in the TMR system in the current experiment. However, the butyrate proportion in the rumen was similar between the feeding systems 1.5 h after feeding, even though the abundance of Butyrivibrio was higher in TMR than in the SF feeding system. Butyrivibrio has been reported to be involved in the decomposition of hemicellulose and cellulose, thereby producing large amounts of butyrate [52] and contributing substantially to ruminal CH4 production [50]. The pattern of ruminal fermentation for TMR feeding in this experiment contrasts with other reports: a decrease in A/P [51] and no difference in VFA and A/P [8] [39] compared with the SF system. Bacteroidetes and Firmicutes were the most abundant phyla in the present study irrespective of the feeding system, similar to several other studies of animals fed high-concentrate diets [53] [54] [55]. Prevotella, the most abundant genus in the phylum Bacteroidetes, did not vary between the feeding systems in the current experiment. However, the strong positive and negative correlations of Prevotella with acetate and propionate, respectively, in TMR might give another explanation for the increased production of acetate. Prevotella has also been widely noted in animals fed high-concentrate diets [49] and comprises a well-known xylan-degrading group [56]. Furthermore, some species of Prevotella are also efficient hemicellulose, cellulose, pectin, long-chain carbohydrate, and protein digesters [57] [58] [59], which implies an important role in digestion. It has been suggested that this bacterial family contributes to fumarate reductase activity, which could produce propionate via the succinate or acrylate pathway [60], but the effect of Prevotella on rumen fermentation and its relationship with methane production have not yet been clarified, because uncultured Prevotella represent a large portion of the bacterial population. It is worth noting that the animals fed by the SF system, which emitted less CH4, also showed a strong negative correlation between Prevotella and the proportion of propionate in this experiment. RFN20 (Erysipelotrichaceae), which was more abundant in the TMR system and positively correlated with CH4 production in earlier studies [61], exhibited no such correlation in the current experiment. On the other hand, RFN20 had a significantly strong negative and positive correlation with CH4 and the propionate proportion, respectively, in the SF system. This coincided with the significantly higher abundance of Coprococcus, of the phylum Firmicutes, in the SF system.
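The hydrogen bookkeeping invoked above via Moss et al. [50] follows from the classical rumen fermentation balances (often attributed to Wolin): acetate and butyrate formation release H2, propionate formation consumes it, and methanogens dispose of the surplus. These are textbook stoichiometries for orientation, not calculations from this study:

```latex
\begin{align*}
\mathrm{C_6H_{12}O_6} + 2\,\mathrm{H_2O} &\rightarrow 2\,\mathrm{CH_3COOH} + 2\,\mathrm{CO_2} + 4\,\mathrm{H_2} && \text{(acetate)}\\
\mathrm{C_6H_{12}O_6} + 2\,\mathrm{H_2} &\rightarrow 2\,\mathrm{CH_3CH_2COOH} + 2\,\mathrm{H_2O} && \text{(propionate)}\\
\mathrm{C_6H_{12}O_6} &\rightarrow \mathrm{CH_3(CH_2)_2COOH} + 2\,\mathrm{CO_2} + 2\,\mathrm{H_2} && \text{(butyrate)}\\
\mathrm{CO_2} + 4\,\mathrm{H_2} &\rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} && \text{(methanogenesis)}
\end{align*}
```

A shift toward acetate therefore increases the H2 available for methanogenesis, consistent with the higher CH4 and A/P observed for TMR.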
Coprococcus was also independently found to be enriched in the microbiomes of efficient animals [62]; it uses H2 for the production of propionate through the acrylate pathway, which utilizes lactate [63]. However, no strong correlation of Coprococcus with propionate or CH4 production was observed in the SF system. In addition, the low CH4 production in the SF system was also supported by the strong negative correlation with the abundance of Succiniclasticum, which is specialized in fermenting succinate and converting it to propionate as a major fermentation product [64] [65]. Ammonia-N is produced by the deamination and fermentation of the peptides released during protein digestion [66], which leads to a higher ratio of iso-fatty acids [67]. There is much interest in the importance of iso-fatty acids in the rumen, because isobutyric, isovaleric, and 2-methylbutyric acids are required for the resynthesis of branched-chain amino acids by carboxylation and amination [67] [68] [69]. The maximum ruminal NH3-N concentration in our experiment occurred at 3 h after feeding in steers fed SF. This implies that the supply of available nitrogen in the rumen was relatively well synchronised with the slow release of energy for microbial protein synthesis for SF compared with TMR. There is little experimental evidence to support the synchrony of energy and nitrogen release in the rumen, although Kim et al. [70] demonstrated that altering the degree of synchrony in the rates of ruminal release of energy and nitrogen had a marked effect on microbial protein synthesis when the diet contained about 30% DM as fermentable carbohydrate. Although microbial protein synthesis in the rumen was not determined in our experiment, the 3 and 1.3 times higher concentrations of isobutyrate and isovalerate, respectively, in SF might have led to a greater efficiency of microbial protein synthesis in the later phases of feeding. This is further supported by Kim et al. [71], who demonstrated that iso-fatty acids had a positive correlation with the efficiency of microbial growth. Hungate [72] also found that the incorporation of peptides synthesised from iso-fatty acids into microbial cells resulted in a net consumption of H2. Beever [73] suggested that the dry matter partitioning between microbial protein synthesis and fermentation influences hydrogen production and hence methanogenesis. In accordance with this, a strong negative relationship was observed between NH3-N and CH4 in the SF system. The relationship between the numbers of methanogens and the amount of CH4 produced has been a topic of debate. However, in our study, the population structure of the methanogens could not explain the difference in CH4 production between the feeding systems. These results agree with previous studies showing that the populations of methanogens were not significantly different between two groups of feedlot bulls [74] and two groups of lambs [75] that produced significantly different amounts of CH4. Furthermore, the abundance of the major genera belonging to Bacteroidetes, Firmicutes, and Euryarchaeota that are mainly involved in methane production did not differ between the feeding systems. The shifts noted in other minor groups, such as Coprococcus, Succiniclasticum, Butyrivibrio, and Succinivibrio, provide novel insights, since their abundances were associated with the changes in ruminal VFA synthesis or methane production.
However, the variation in abundance of these minor genera cannot be explained beyond the change in the ruminal fermentation pattern observed over time after feeding. Despite no direct effect on the methanogens and the other major members of the microbiome, the minor microbial groups whose abundance varied between the two feeding systems probably differed in their metabolic potential, resulting in different proportions of metabolites becoming available for downstream methanogenic activity and thereby altering CH4 production. The unclassified Clostridiales, Bacteroidales and Ruminococcaceae alone corresponded to almost 30% of the total population; these groups have been observed to form part of the core rumen microbiome across the world [76], and these unclassified orders have been reported to play an important role in biohydrogenation [77]. This implies that many rumen microbes remain to be characterized, which could open novel insights to further understand methanogenesis.

Conclusions

This study demonstrated that the conventional method of feeding roughage and concentrates separately reduces CH4 production without altering the efficiency of nutrient utilisation or the major rumen microbiome. There was no evidence to support concerns that the difference in methane production between TMR and SF was due to differences in the underlying major rumen microbial populations, but a cardinal point that emerges from our findings is that the functional characteristics of the minor microbiota can have a large impact on ecosystem functioning and the fermentation pattern in the rumen.

S3 Table. Relative abundance of taxa in the two groups representing > 0.1% of total sequences. Data are shown as LS means with standard errors, with n = 4 per group. Bold P-values indicate groups that tend to differ (P < 0.1) or significantly differ (P < 0.05). (DOCX)

S4 Table. Correlation coefficients between efficiency parameters and genus abundance in steers fed SF. Kendall's non-parametric correlation matrix of the dominant bacterial genera across the rumen samples. Genera were included in the matrix if they were present in at least 50% of the steers and represented at least 0.1% of the bacterial community in at least one of the steers. (XLS)

S5 Table. P values of correlation coefficients between efficiency parameters and genus abundance in steers fed SF. Bold P-values indicate groups that tend to differ (P < 0.1) or significantly differ (P < 0.05). (XLSX)

S6 Table. Correlation coefficients between efficiency parameters and genus abundance in steers fed TMR. Kendall's non-parametric correlation matrix of the dominant bacterial genera across the rumen samples. Genera were included in the matrix if they were present in at least 50% of the steers and represented at least 0.1% of the bacterial community in at least one of the steers. (XLSX)

S7 Table. P values of correlation coefficients between efficiency parameters and genus abundance in steers fed TMR. Bold P-values indicate groups that tend to differ (P < 0.1) or significantly differ (P < 0.05). (XLSX)
Digital mental health literacy program for the first-year medical students' wellbeing: a one-group quasi-experimental study

Background: Medical students are prone to mental disorders, such as depression and anxiety, and their psychological burden is mainly related to their highly demanding studies. Interventions are needed to improve medical students' mental health literacy (MHL) and wellbeing. This study assessed the digital Transitions, an MHL program for medical students that covered blended life skills and mindfulness activities.

Methodology: This was a one-group, quasi-experimental pretest-posttest study. The study population was 374 first-year students who started attending the medical faculty at the University of Turku, Finland, in 2018-2019. Transitions was provided as an elective course; 220 students chose to attend and 182 agreed to participate in our research. Transitions included two 60-minute lectures, four weeks apart, with online self-learning material in between. The content focused on life and academic skills, stress management, positive mental health, and mental health problems and disorders. It included mindfulness audiotapes. Mental health knowledge, stigma and help-seeking questionnaires were used to measure MHL. The Perceived Stress Scale and General Health Questionnaire measured the students' stress and health, respectively. A single-group design, with repeated-measures analysis of variance, was used to analyze the differences in the mean outcome scores for the 158 students who completed all three stages: the pre-test (before the first lecture), the post-test (after the second lecture) and the two-month follow-up evaluation.

Results: The students' mean scores for mental health knowledge improved (-1.6, 95% CI -1.9 to -1.3, P<.001) and their emotional symptoms were alleviated immediately after the program (0.5, 95% CI 0.0 to 1.1, P=.040). The changes were maintained at the two-month follow-up (-1.7, 95% CI -2.0 to -1.4, P<.001 and 1.0, 95% CI 0.2 to 1.8, P=.019, respectively). The students' stress levels reduced (P=.022) and their attitudes towards help-seeking improved after the program (P<.001), but these changes were not maintained at the two-month follow-up. The stigma of mental illness did not change during the study (P=.13).

Conclusions: The digital Transitions program was easily integrated into the university curriculum and it improved the students' mental health literacy and wellbeing. The program may respond to the increasing global need for universal digital services, especially during the lockdowns due to the COVID-19 pandemic.

Trial registration: The trial was registered at the ISRCTN registry on 26 May 2021 (registration number ISRCTN10565335; doi: 10.1186/ISRCTN10565335).

Supplementary Information: The online version contains supplementary material available at 10.1186/s12909-021-02990-4.

Introduction

Medical schools worldwide have raised concerns about the mental health of their students, as they face burdens and duties related not only to their demanding curricula and the competitive climate in medical schools but also to their future profession [1,2]. Research has shown that medical students have a higher incidence of stress and stress-related mental health problems than the general student population [3][4][5][6]. These include self-reported depression and anxiety, reduced sleep quality and burnout, and even suicidal ideation. Medical students also face an elevated risk of non-medical use of prescription medication and illegal drugs, due to the considerable stress [7,8].
At the same time, medical students encounter notable barriers to seeking help, including a lack of knowledge, negative attitudes towards mental disorders and treatment, the fear of being stigmatized and poor access to appropriate care [9,10]. They may fear that disclosing their mental health problems might jeopardize their professional advancement or cost them their professional rights, and consequently postpone help-seeking [11]. These barriers make it difficult for clinicians to identify problems early enough and provide appropriate treatment. Therefore, strategies to promote mental health and wellbeing need to be incorporated into the medical students' curricula [12,13]. Mental health literacy (MHL) is an integral element of health literacy and it contributes to the public health strategies to prevent mental illness and promote mental health [14]. MHL is based on four components: understanding how to obtain and maintain good mental health, understanding mental disorders and their treatment, decreasing stigma and enhancing help-seeking behavior [15,16]. MHL increases mental health knowledge and positive attitudes towards the services available for mental health issues and lowers the threshold for seeking help and using effective treatment [17]. In contrast, poor MHL may lead to increased stigma, a lack of awareness of how to identify mental disorders, and barriers to seeking help, such as concerns about confidentiality and trust in the potential source of help. These situations have been associated with compromised wellbeing, quality of life and performance [18]. Previous studies have shown that MHL programs for adolescents in high-school settings can increase mental health knowledge and skills [19,20]. But a systematic review reported that university-based mental health educational programs did not improve attitudes towards seeking help or stigma among students studying to be health professionals [21]. From a developmental perspective, moving from the family environment to independent living is a critical transition period in a young adult's life, especially when this is coupled with the academic pressures of university studies. Strategies that are integrated into university settings, such as MHL interventions, can increase mental health awareness and reduce the stigma associated with mental health problems. This promotes wellbeing [13,22]. In the UK, an evidence-based Mental Health First Aid e-learning course, which focused on general awareness of mental health and recognizing mental health problems and mental disorders, was provided for medical students. This demonstrated promising results in terms of improved MHL [23]. The course also improved the participants' attitudes to providing help to those with mental health conditions [21]. The Canadian Transitions program, which provided the basis for the Finnish model, combines MHL with a comprehensive life-skills resource for young people when they are making the transition to university studies [24]. Wei et al. (2021) compared the findings of first-year postsecondary students who participated in the Canadian Transitions intervention and a control group who did not, two months after the program ended [25]. The students in the intervention group showed improved mental health knowledge, reduced stigma, improved positive attitudes towards help-seeking, increased help-seeking behavior and reduced stress. Moreover, the participants felt more prepared for their academic studies after the program [26].
Other approaches have been used for medical students in addition to MHL programs. For example, mindfulness exercises effectively reduced stress, anxiety, depression and mental suffering among medical students, by increasing awareness, skills, efficiency and well-being [27]. The program in this study was adapted from the Canadian Transitions initiative and included extra mindfulness exercises for stress management [24,25]. Our participants were first-year medical students who had enrolled at the University of Turku, Finland, at the start of the 2018 and 2019 academic years, and the aim was to promote their well-being. Our hypothesis was that the digital Transitions program would improve their knowledge about mental health, decrease the stigma associated with mental health problems, improve help-seeking attitudes and reduce their perceived stress and emotional symptoms.

Keywords: Digital intervention, Mental health, Wellbeing, Mental health literacy, Mindfulness, Preventive intervention, Medical student

Study design and participants

The study was conducted using a one-group, quasi-experimental pretest-posttest design. A universal digitalized Transitions program was integrated into the first-year studies of the medical faculty at the University of Turku, South-Western Finland, as an elective course. The study population comprised 374 general medicine and dentistry students who started their studies in the 2018-2019 academic years. The 220 students who selected the course registered on the Transitions Internet-based platform. The study sample consisted of the 182 students who agreed to participate in our study, as shown in the flow chart (Fig. 1). All the participants received the intervention and there was no control group, because only students from one study site were available. Participation in the study was voluntary and the students could also complete the course without participating in the research. The inclusion criteria for the participants were that they were first-year medical or dentistry students at the University of Turku in the 2018 and 2019 academic years, that they selected Transitions as an optional course and self-registered on the program website and that they provided informed consent to participate in the research. The study was approved by the Ethics Board for human sciences research at the University of Turku, Finland.

Program content

The Canadian Transitions program was originally a booklet, and it was published online on 21 May 2019 at https://mentalhealthliteracy.org/product/transitions/. The program material was translated into Finnish by a professional translator and culturally adapted and digitalized by staff at the Research Center for Child Psychiatry at the University of Turku, Finland. Multi-professional experts reviewed the adapted material. These included adolescent psychiatrists, specialists in sexual diseases, substance abuse and communication difficulties, a teacher who specialized in learning strategies and 10 university students. Their feedback was carefully considered. Cultural adaptation and digitalization shortened the original material. Some of it was provided as videos, in which students and professionals provided tips and advice for studying. The program also provided links to further information on specific topics, such as webpages run by Finnish mental health organizations. The contents of the digital Transitions material focused on three themes that addressed life skill resources and mental health topics (Table 1).
Theme one focused on important skills for independent living, academic life strategies and relationships. Theme two provided strategies for how to obtain and maintain sound mental health and stress management skills. Theme three concentrated on mental disorders, related treatment and help-seeking. The material was presented as educational text and tips, followed by questions on each theme. Students needed to answer these questions to move to the next theme and complete the post-intervention questionnaires. The mindfulness component was an additional stress management resource. It included a series of audio tapes: 10 sessions that lasted from 4-10 minutes covered the theory of mindfulness and instructions for how to apply it, and 10 sessions that lasted from 5-30 minutes focused on exercises. The exercises were based on the mindfulness-based stress reduction (MBSR) and mindfulness-based cognitive therapy (MBCT) programs, but were specifically modified for this age group by a mindfulness instructor at the University of Turku.

Program procedure

The courses started approximately one month after the beginning of the first semesters in 2018 and 2019. The students who selected the course registered on the Transitions program website using the link that was emailed to them. If they wanted to participate in the study, they provided their informed consent and completed the electronic baseline questionnaires. The participants completed electronic questionnaires at all stages of the research. Two 60-minute face-to-face lectures were delivered by a mental health professional. The first lecture marked the beginning of the program, and it focused on strategies for independent living and studying (Table 1). The students then had approximately four weeks to independently learn the material on the digital platform. The course corresponded to one European Credit Transfer System credit, which is the equivalent of a student working 27 hours. The students were required to allocate their time for the independent learning as part of the program. The students were also encouraged to practice stress management skills, including the mindfulness exercises on the platform, at any time during the course. The second lecture was held at the end of the program and focused on mental health and stress management, as well as mental disorders, help-seeking and treatment. The post-intervention evaluation was conducted immediately after the second lecture (Table 1). The students received an automatic e-mail from the platform two months after the program started to notify them that the follow-up questionnaires were open. They were contacted by e-mail and/or telephone to remind them about the questionnaires. The second 60-minute lecture introduced Theme three. It aimed to enhance the participants' self-learning and promote help-seeking behavior.

Background variables

The students provided their name, e-mail address and phone number during the registration process. The baseline evaluation included questions about the following demographic characteristics: discipline, birth year, gender, whether and when they had moved from elsewhere to Turku to pursue their studies, their current type of accommodation and whether they had sought help for mental health problems in the last three months. They were also asked about which of the program topics they needed to know about, such as study skills, finances, life management, accommodation, mental health, relationships, mental health problems and substance abuse.
Outcomes

The outcomes came from six questionnaires, which were identical in the baseline, post-intervention and follow-up evaluations. Mental health literacy was measured using three primary outcomes, which addressed three separate dimensions: knowledge about mental health, stigma related to mental health problems and help-seeking attitudes [28]. These questionnaires were modified from the mental health literacy questionnaires developed by Kutcher et al. [29][30][31]. Cultural adaptation and digitalization of the original Transitions material meant that the content of the Finnish program was shorter than the original Canadian program, and the questionnaires were modified to correspond to the digital contents. The Mental Health Knowledge questionnaire consisted of 13 statements that addressed the students' understanding of life skills, mental health, mental health problems and mental disorders. For example, one statement said that a small amount of anxiety could help how well a student performed at a sporting event or on a test [29]. The scores were based on a multiple-choice response scale: true, false, don't know. Each correct response scored one point and each incorrect or 'don't know' response scored zero. The total knowledge score ranged from 0-13. The knowledge questionnaire yielded a Cronbach alpha of .60 for the pooled student data in 2018 and 2019. The Stigma questionnaire comprised 12 statements, which measured the students' attitudes towards mental health and mental illness. For example, one statement said that a person who received mental health treatment was just as intelligent as an average person [30]. Each statement was scored from 1-5 on a Likert scale. The total score for each participant ranged from 12-60, with higher scores indicating more positive attitudes and lower stigma. The Stigma questionnaire yielded a Cronbach alpha of .66 for the pooled data. The Help-seeking questionnaire comprised five statements that covered attitudes towards help-seeking for mental health problems. For example, one statement said that asking for help with a mental health problem or disorder was generally helpful [31]. Each statement was assigned a value from 1-5 on a Likert scale [24,26] and the participants received a total score of 5-25, with higher scores representing more positive attitudes towards seeking help. The Help-seeking questionnaire yielded a Cronbach alpha of .67 for the pooled data. The General Health Questionnaire was used to measure health and emotional symptoms, mainly anxiety and depression [32]. It contained 12 statements and each response was assigned a value between 0-3 on a Likert scale. The total score ranged from 0-36, with higher scores indicating more severe health concerns. The questionnaire has previously been reported to have strong reliability and validity, with a Cronbach alpha of .88 [25]. The Perceived Stress Scale (PSS) was used to measure the students' self-reported stress [33]. The instrument consists of 10 statements and each response scored a value between 0-4 on a Likert scale. Total scores ranged between 0 and 40, with higher scores indicating higher levels of perceived stress. Previous studies have reported strong reliability and validity for the questionnaire [34], and a Cronbach alpha of .79 was reported in a recent study [25]. The Client Satisfaction Questionnaire (CSQ-I), an instrument designed for digital health interventions [35], was modified and applied in the post-intervention evaluation. This measured the students' satisfaction with the Transitions program.
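The internal-consistency coefficients quoted above can be reproduced from raw item responses with the standard Cronbach's alpha formula. A minimal sketch follows; the Likert responses are hypothetical, illustrating the five help-seeking items.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses of 6 students to the 5 help-seeking items (1-5 Likert)
scores = np.array([[4, 5, 4, 4, 5],
                   [3, 3, 4, 3, 3],
                   [5, 5, 5, 4, 5],
                   [2, 3, 2, 3, 2],
                   [4, 4, 3, 4, 4],
                   [3, 2, 3, 2, 3]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```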
Five questions were adapted from the CSQ-I, and these included whether the students found the program useful and whether they would recommend it to a friend. Each question scored 1-5 on a Likert scale, generating a total score of 5-25. A Cronbach alpha of .88 was calculated for the present data. All instruments were translated and back-translated, according to good scientific practice [36].

Statistics

All 158 students who filled in the pre-test, post-test, and follow-up questionnaires were included in the analysis. The mean scores of the primary and secondary variables were analyzed according to a single-group design with repeated-measures analysis of variance (RM ANOVA). Interaction effects with the background and outcome variables were tested within the repeated-measures ANOVA models. The only significant interaction effects were gender, whether students had moved from outside the area to attend the course, and the year of the course (2018 or 2019). This meant that the linear mixed model that analyzed the primary and secondary variables included time, namely baseline, post-test and follow-up, as the within-factor, while gender, year of the course and whether they had moved from elsewhere were used as between-factors. We used an unstructured covariance structure that allowed us to include estimates of covariances within subjects as well as between subjects. The normality assumption of the linear mixed modelling approach was checked with residual plots. As the restricted maximum likelihood estimation method was applied, there was no need for imputation. A two-sided significance level of 0.05 was used during the statistical testing and 95% confidence intervals (95% CI) were calculated for the point estimates. Where appropriate, the Bonferroni correction was applied to counteract the problem of multiple comparisons. The statistical analyses were carried out with SAS statistical software (SAS 9.4, SAS Institute, Cary, NC, USA).

Results

More than half of the first-year medical students, 58.8% (220/374), chose the optional course and registered with the program, and 82.7% (182/220) of those participated in the study. The drop-out rate was 13.2% (24/182). The follow-up questionnaire was filled in by 86.8% (158/182) of the participants (Fig. 1). Of the 158 participants (74.0% female), 42.4% (67/158) participated in 2018 and 57.6% (91/158) in 2019. The majority (82.3%) studied general medicine, while 17.7% studied dentistry. Most participants (71.5%) had moved from elsewhere less than a year previously and currently lived alone (65.8%) (Table 1). About a quarter (24.7%) of the participants had already contacted, or planned to contact, a healthcare professional about mental health problems (Table 2). There were no differences in the background characteristics of the students who completed all evaluations and those who dropped out before the post-intervention or follow-up evaluations (Supporting material, Table 1). At baseline, most students felt that they needed greater knowledge to be able to handle their studies and life skills, their finances, their accommodation and relationships, as well as mental health and mental health problems. Only about a third of the students needed information about substance abuse (Fig. 2). Table 3 shows the mean values at baseline, after the intervention, and at follow-up.
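As a brief aside on the Statistics section above: the SAS mixed-model setup can be approximated with open-source tooling. A minimal sketch using statsmodels follows; the file name and column names are hypothetical, and the random-intercept structure only approximates the unstructured within-subject covariance used in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student per time point,
# with columns id, time (baseline/post/followup), gender, moved, year, knowledge
df = pd.read_csv("transitions_long.csv")

# Time as the within-factor; gender, moved and course year as between-factors;
# a random intercept per student stands in for the repeated-measures structure.
model = smf.mixedlm("knowledge ~ C(time) * gender + moved + C(year)",
                    data=df, groups=df["id"])
result = model.fit(reml=True)  # restricted maximum likelihood, as in the paper
print(result.summary())
```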
Table 4 shows the changes between these time points with regard to the knowledge, stigma, attitudes towards help-seeking, perceived stress, and emotional symptom scores. The students' knowledge about mental health and their emotional wellbeing improved significantly immediately after the program (P<.001 and P=.04, respectively), and these positive changes were maintained at the follow-up stage. Furthermore, the students' attitudes towards help-seeking improved, and they reported reduced stress levels immediately after the program (P<.001 and P=.022, respectively). However, these changes were not maintained at follow-up. There were no changes in stigma. Of the 158 participants, 91.8% were satisfied with the digital Transitions program, and the vast majority found the program useful and helpful. They were also willing to attend it again and would recommend it to a friend (Fig. 3).

Discussion

To our knowledge, this was the first study to combine a digitally delivered MHL program with a mindfulness component for medical students. We found that after they had participated in the digital Transitions program, the students' knowledge about mental health increased, their emotional symptoms were alleviated, and these improvements were maintained for two months. Furthermore, their help-seeking attitudes improved, but only in the short term, as the increase from baseline to the follow-up stage was only close to statistically significant. The students also reported less stress immediately after the program. No changes in stigma were found. Students were very satisfied with the Transitions program. Our finding that the MHL of the medical students improved after the Transitions program agreed with previous studies, which suggested that mental health knowledge can be improved by delivering MHL or mental health first aid programs in school and university settings [19-21, 37, 38]. Evidence from controlled trials also shows that MHL courses decreased stigma and improved attitudes towards help-seeking in various settings, including medical schools [19][20][21]. The effect could be sustained for up to six months, according to a systematic review [38]. However, the results concerning decreased stigma and improved attitudes to help-seeking are controversial. For instance, Wei et al. showed that, in addition to improved mental health knowledge, stigma was reduced and the help-seeking attitudes and behavior of post-graduate students improved [25]. Small reductions in stigma were also observed in a systematic review and meta-analysis of MHFA studies. In addition, participants who attended MHFA courses reported that these increased their confidence in helping people with mental health problems and providing mental health first aid. However, a systematic review of studies that focused on mental health educational programs found no significant improvements in help-seeking attitudes or stigma among healthcare students [21]. The original Canadian Transitions program, which was provided as a printed and online booklet, yielded similar positive outcomes to our current study [24,25]. However, it found that stigma was reduced by the program, whereas we did not observe any change in this factor. This could be due to a ceiling effect, which refers to a situation where the study subjects score close to the maximum at baseline, leaving little room for improvement. That was the case in the present study, which focused on an optional course.
We believe that our Transitions program was probably chosen by students with high mental health awareness to begin with. In general, the stigma surrounding mental health problems and attitudes towards help-seeking vary a lot among countries. Some studies have indicated that, although the stigma related to mental disorders still remains in Finland, its citizens may hold more positive attitudes than those living in other European countries [39]. The students reported significantly lower stress levels and decreased emotional symptoms immediately after the program than before it. Stress is known to adversely affect medical students worldwide throughout their studies. Levels have been reported to remain moderately high throughout the first three years, with simultaneous worsening of physical, emotional and overall health during the first year [40,41]. Studies have reported that the prevalence of stress among medical students ranges between 21 and 90% and that this is often associated with the competitive atmosphere in medical schools [42,43]. It is important to note that our study found that the students' stress levels and emotional symptoms improved after the digital Transitions program. Stress management skills are of major importance for medical professionals, not only during their studies but also later in their working life, as they face daily situations that provoke stressful and emotional responses. Improved stress management skills can lead to better working performance and satisfaction as a medical doctor after medical school [2,42]. Our findings are an effective response to the international call for evidence-based interventions to improve the mental health of vulnerable groups, such as children and young people, including students [44]. One of the encouraging findings of this study was that the students reported improved emotional wellbeing after the program, and this change was also seen two months after the program ended. It is notable that the Canadian study included university students from various faculties and observed no changes in emotional symptoms after the program [25]. In the present study, the lectures emphasized the importance of empowering the students. They were strongly encouraged, and motivated, to exercise stress management skills. We also encouraged them to identify and reflect on the factors that affected their personal wellbeing and what they could do to improve it. The resources that were added to the digital platform may have provided extra help on how to cultivate personal wellbeing. The original Canadian Transitions program, and our digitalized version of the Finnish Transitions program, were aimed at first-year university students who were making the transition to independence. This target audience differed from those of other MHL programs. Both the original Canadian program and the digitalized Finnish Transitions program covered a wider range of more general topics than other studies. These included positive and harmful relationships, loneliness, financial concerns, academic workloads, time management and pressure to perform. All of these have been identified as primary stressors among medical students in qualitative and quantitative assessments [40].
As shown in Fig. 2, the medical students in our study reported that they primarily needed knowledge on study skills, finances, life skills and accommodation, but they also needed information on mental health topics. Our findings suggest that the digital Transitions program was a feasible method of providing medical students with blended life skills and MHL, and the participants were very satisfied with the program. Digital delivery enabled us to provide embedded features, such as links to appropriate further reading, mindfulness audio tapes and videos. The program was easily adopted by the students, due to its clear structure, recurring topics and pragmatic tips, and this advice was easily applied in their daily lives. The key contents and skills, such as the nature of stress and how to manage it, were emphasized during the two face-to-face lectures that accompanied the digital program. The relatively short visits to the website did not reflect the entire time invested by the students in learning about mental health. It is likely that the mindfulness exercises, and the other exercises provided by the program, helped them to manage their stress and contributed to the other positive impacts of the program.

Strengths and limitations

The main strength of the study was the digital delivery, which enabled us to provide videos and links to enhance learning. Moreover, the Transitions program contained learning material that ranged from general life skills to specific information about mental health. The survey carried out at the start of the program indicated that the students needed more information about mental health and the problems it could cause, but they also wanted to know about more general topics, including academic and life skills. The holistic design of the program may partly be reflected in the improvements observed in the main outcomes. The main limitation of the study was that it did not include a control group, which means that we cannot draw solid conclusions about the effectiveness of the intervention. One-group pretest-posttest designs have been criticized because they are not suitable for determining causality, and problems have also been reported with their internal validity. However, the design continues to be applied in various contexts that study the implementation of behavioral interventions, for example in the social sciences [45]. This approach was applied because only one group was available. Clustered randomization would have been required to reliably divide the students into treatment and control groups, which was not feasible. We observed a significant change in four outcomes after the program was implemented, and the students' feedback on the program was very good. The students spent a relatively short time on the Transitions program website. This raises questions about whether the significant positive changes in the main outcomes were related to self-learning of the program contents or whether the improvements could reflect the students' acclimatization to their new life situation. Although the main educational method used by the program was self-learning, the key contents of the Transitions program were delivered to the students in two lectures, which were compulsory for those who opted to take part in the program. This provided the essential knowledge they needed on the topics covered by the program.
It was not possible to carry out separate analyses of the impacts of the two compulsory lectures and the self-learning on the main outcomes, especially with regard to knowledge about mental health. However, it is likely that some of the knowledge was gained during the lectures, and this validates the findings to some degree. Integrating MHL courses into curricula may be one approach to promoting the wellbeing and mental health of medical students. In this study, the medical students who selected the digital Transitions as an optional course were very satisfied with it. Their MHL and emotional wellbeing improved and their stress reduced after the program. We recommend that the digital Transitions program should be provided as a mandatory part of curricula when Finnish medical students start their studies. Transitions is a universal MHL program and it could easily be adapted for students in other contexts. Future studies should focus on implementing the program outside medical schools, in other faculties and internationally. Such studies are particularly topical at the moment, as the need for preventive interventions has increased globally due to the COVID-19 pandemic.

Conclusion

Digital Transitions was a feasible program for increasing the MHL of first-year medical students in Finland, and satisfaction with the intervention was high. We suggest that the program could form a mandatory part of the curricula for medical students and could be expanded to students in other contexts and countries.
CROP AND WEED SEGMENTATION ON GROUND-BASED IMAGES USING DEEP CONVOLUTIONAL NEURAL NETWORK

ABSTRACT: Weed management is of crucial importance in precision agriculture to improve productivity and reduce herbicide pollution. In this regard, showing promising results, deep learning algorithms have increasingly gained attention for crop and weed segmentation in agricultural fields. In this paper, the U-Net++ network, a state-of-the-art convolutional neural network (CNN) algorithm that has rarely been used in precision agriculture, was implemented for the semantic segmentation of weed images. Then, we compared the model's performance to that of the U-Net algorithm based on various criteria. The results show that U-Net++ outperforms the traditional U-Net in terms of overall accuracy, intersection over union (IoU), recall, and F1-score metrics. Furthermore, the U-Net++ model provided a weed IoU of 65%, whereas U-Net gave a weed IoU of 56%. In addition, the results indicate that U-Net++ is quite capable of detecting small weeds, suggesting that this architecture is more desirable for identifying weeds in the early growing season.

INTRODUCTION

A monitoring system in precision agriculture is of fundamental significance to increase crop productivity (Fathipoor et al., 2019), and weed management is one of the critical elements of this system. Competing with plants for water and nutrients, weeds adversely affect crop yield quality and take a deleterious toll on crop production (Wang et al., 2019). Therefore, identifying and eliminating weeds is an essential step in precision agriculture. In this regard, many efforts have been made by farmers to counter the threat posed by weeds. However, conventional agricultural practices are laborious and inefficient, spraying herbicides uniformly over the whole field (Wang et al., 2019). Moreover, the remaining herbicides result in environmental pollution, which poses a serious threat to human health (Khan et al., 2020). To deal with this problem, site-specific weed management (SSWM) was proposed, which involves spraying the correct dose of herbicide (depending on the density of the weed patches or the species composition) in the right locations (Jensen et al., 2012). But SSWM requires advanced autonomous weed detection systems. Accordingly, automated weed segmentation, a labor-saving process, is crucial to reducing the detrimental effects of herbicides or pesticides by localizing weeds precisely in agricultural fields (Pretto et al., 2021). In this context, several weed detection methods based on traditional image processing, such as decision trees (Deng et al., 2014), support vector machines (SVM) (Ishak et al., 2008), and random forests (Fletcher et al., 2016), have been introduced to differentiate weeds from crops. In these techniques, pixels are classified into crop and weed classes based on extracted features, such as color and texture (Rico-Fernández et al., 2019). However, feature extraction in these methods greatly depends on numerous parameters, such as weed density and lighting conditions, which hinders the performance of these algorithms in complex situations (Abdalla et al., 2019). Hence, there is a need to build efficient and robust modules to identify and recognize weeds. In recent years, thanks to the advancement in computing power coupled with a rise in the amount of data, deep learning based methods such as convolutional neural networks (CNNs) offer a promising step toward managing weeds and pests more efficiently (Wu et al., 2021).
Having been widely used recently, semantic segmentation algorithms based on CNNs, such as the fully convolutional network (FCN), SegNet, U-Net, and DeepLabV3, make it possible to segment weeds from crops with high accuracy. These algorithms generally are fully convolutional networks that often involve an encoder-decoder scheme, extracting features from input images and then up-sampling to the size of the original image. For instance, a study on crop/weed segmentation used an encoder-decoder deep learning architecture, which utilized different vegetation indices as inputs to improve performance, and the best mean segmentation accuracy of 96.12% was obtained (Wang et al., 2020). In another similar study using an RGB image dataset of carrot-weed, the SegNet architecture was employed for semantic segmentation of images (Lameski et al., 2017). In another study of weed segmentation, the performance of SegNet was compared with that of U-Net on a dataset of canola fields, and the authors showed SegNet had higher accuracy on their dataset (Asad et al., 2020). DeepLabV3 is another complex and powerful network with satisfactory performance in semantic segmentation studies. In a study of weed segmentation using aerial images, DeepLabV3 outperformed SegNet and U-Net, with the highest accuracy of 0.89 and 0.81 in terms of area under the curve (AUC) and F1-score, respectively (Ramirez et al., 2020). Khan et al. (2020) proposed a cascaded encoder-decoder network to segment crops and weeds precisely with fewer training parameters. In their architecture, four small networks were used to predict crop and weed independently in two stages, and the network performed more accurately than U-Net, FCN-8s, SegNet, and DeepLabV3. However, SegNet and DeepLabV3 require more training data in comparison with U-Net (Zou et al., 2021). Given the limited accessibility of large datasets for training models, the U-Net architecture, which can be trained on small datasets, is highly advantageous (Bousias Alexakis et al., 2020). For instance, in (Hashemi-Beni et al., 2020), the authors used 60 images of a carrot field and reached an accuracy of 60.48% with the U-Net network for weed semantic segmentation. Lately, researchers have tried to enhance the performance of the U-Net model by modifying this architecture. In an effort to discriminate weeds from other classes, including soil and crops, a modified VGG-UNet was implemented, which gave a desirable intersection over union (IoU) accuracy of 92.91% (Zou et al., 2021). Although modified U-Nets are among the state-of-the-art models for image segmentation, they have two main limitations. Firstly, it is hard to reach the best accuracy achievable with the model because the optimal network depth is uncertain and varies across tasks. Secondly, the skip connection scheme is inefficient due to the gap between the pathways of the corresponding convolutional encoder-decoder blocks (Bousias Alexakis et al., 2020; Zhou et al., 2019). The U-Net++ is a new architecture designed to overcome these drawbacks and has more robust performance in semantic segmentation. For example, in a study of change detection in an urban environment, the performance of the U-Net++ and U-Net architectures was evaluated with different loss functions and metrics (Bousias Alexakis et al., 2020). The authors showed that the U-Net++ architecture with a BCE-Dice loss function provides better results than U-Net. The U-Net++ network is based on nested and dense skip connections and has rarely been used in agricultural tasks.
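The nested, dense skip connections can be pictured as re-designed decoder nodes X^{i,j} that fuse all earlier nodes at the same depth with an upsampled node from one level deeper. The paper does not publish its implementation, so the following Keras-style sketch is illustrative only; the filter counts are assumptions, and the 128×128 input size matches the resized images used later for training.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, the basic unit H(.) of each node."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def nested_node(same_depth_feats, deeper_feat, filters):
    """Node X^{i,j}: concatenate all earlier same-depth nodes
    X^{i,0},...,X^{i,j-1} with the upsampled deeper node X^{i+1,j-1}."""
    up = layers.UpSampling2D(size=2, interpolation="bilinear")(deeper_feat)
    merged = layers.Concatenate()(same_depth_feats + [up])
    return conv_block(merged, filters)

# First two encoder levels and the first nested node for a 128x128 RGB input
inputs = tf.keras.Input(shape=(128, 128, 3))
x00 = conv_block(inputs, 32)                        # X^{0,0}
x10 = conv_block(layers.MaxPooling2D(2)(x00), 64)   # X^{1,0}
x01 = nested_node([x00], x10, 32)                   # X^{0,1}
```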
Hence, this paper mainly aims to address the weed management task by automatic crop/weed segmentation from high-resolution images using the U-Net++ model. The methodology is described in section 2, followed by the results presented in section 3, and finally, section 4 includes the conclusion.

Dataset

In this study, a public carrot-weed dataset was utilized in order to train and test our models (Lameski et al., 2017). The dataset contains 39 RGB images with dimensions of 3264×2448 pixels, acquired by a 10-megapixel phone camera. This is a complex dataset in which weeds highly overlap with plants, making segmentation a challenging task. In addition, it is an imbalanced dataset, meaning that the number of weed pixels is much smaller than those of the other classes, which hinders the performance of classification algorithms. Pixel-level annotations with three classes (soil, carrots, and unspecified weeds) are also provided for this dataset. Some images of this dataset are shown in Figure 7 (a).

U-Net Architecture: U-Net is a modified fully convolutional network with a 'U'-shaped architecture in which the output image has the same size as the input image (Figure 1). The main difference between traditional fully convolutional networks (FCNs) and U-Net is that U-Net recovers information lost at the edges and localizes features more accurately by constantly extracting and combining the high-resolution features of the downsampling path with the corresponding up-sampling blocks (Bousias Alexakis et al., 2020; Hashemi-Beni et al., 2020).

U-Net++ architecture: U-Net++, also called Nested U-Net, is based on the U-Net network and was introduced to enhance the performance of U-Net. The motivation behind U-Net++ is to make the optimization problem of the model easier and achieve more accurate results by densifying the connectivity and aggregating U-Nets of various depths (Figure 2). To bridge the gap between the encoder and decoder sub-networks, the skip pathways have been re-designed. Besides, deep supervision has been added, making the model more flexible by balancing performance and speed.

Training Networks

The Google Colab framework was employed to implement the models in this research. For the sake of computational efficiency, all images were resized to 128×128 pixels. In addition, of the 39 images, 27, 3, and 8 were used for training, validation, and testing, respectively. The networks were tuned using the Adam optimizer with a learning rate of 1×10^-4. Furthermore, the loss function for the U-Net++ model was a weighted combination of categorical cross-entropy and dice coefficient loss. The reason for using the hybrid loss function is to fully exploit what both functions provide: on the one hand, cross-entropy has smoother gradients; on the other, the dice coefficient properly handles imbalanced datasets. The hybrid loss function can be calculated with the following equations:

\mathcal{L} = \lambda \, \mathcal{L}_{CE} + (1 - \lambda) \, \mathcal{L}_{Dice} \quad (1)

where \mathcal{L}_{CE} = categorical cross-entropy loss, \mathcal{L}_{Dice} = dice coefficient loss, and \lambda = weight that balances the two losses.

Categorical cross-entropy loss can be computed based on equation 2:

\mathcal{L}_{CE} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \log(p_{i,c}) \quad (2)

where C = number of classes, N = number of samples within one batch, y_{i,c} = class label (equal to 1 if sample i belongs to class c, and 0 otherwise), c ∈ [1, ..., C], and p_{i,c} = probability of sample i being correctly classified as class c.

Moreover, the formula for calculating the dice coefficient loss is given in the following:

\mathcal{L}_{Dice} = 1 - \frac{2 \sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \, p_{i,c}}{\sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} + \sum_{i=1}^{N} \sum_{c=1}^{C} p_{i,c}} \quad (3)
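A minimal Keras-style sketch of the hybrid loss in Equations (1)-(3) is shown below. The weight lam and the smoothing constant are illustrative values, since the paper does not report the value of λ used.

```python
import tensorflow as tf

def hybrid_loss(lam=0.5, smooth=1e-6):
    """Weighted sum of categorical cross-entropy and soft dice loss (Eqs. 1-3).
    lam and smooth are illustrative, not values reported by the paper."""
    def loss(y_true, y_pred):
        # categorical cross-entropy averaged over all pixels in the batch
        ce = tf.reduce_mean(
            tf.keras.losses.categorical_crossentropy(y_true, y_pred))
        # soft dice over all classes and pixels
        intersection = tf.reduce_sum(y_true * y_pred)
        dice = 1.0 - (2.0 * intersection + smooth) / (
            tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
        return lam * ce + (1.0 - lam) * dice
    return loss

# Usage with the optimizer settings reported in the paper:
# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
#               loss=hybrid_loss(lam=0.5), metrics=["accuracy"])
```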
Quantitative Assessment
To evaluate the segmentation results and compare the performance of the models, five popular criteria, including IoU, accuracy (Acc), precision (Pre), recall (Re), and F1-score, were calculated for each class and then averaged. These metrics were computed from four variables, namely true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), which were derived from the confusion matrices between the predictions and the ground-truth maps via the following equations:

IoU = TP / (TP + FP + FN)
Acc = (TP + TN) / (TP + TN + FP + FN)
Pre = TP / (TP + FP)
Re = TP / (TP + FN)
F1-score = 2·Pre·Re / (Pre + Re)

To describe these variables, take the weed class as an instance: TP represents the number of pixels correctly classified as weed, TN denotes the number of pixels correctly classified as non-weed, FP stands for the number of pixels classified as weed that were not actually weed, and FN represents the number of pixels incorrectly classified as non-weed classes.
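A minimal sketch of how these per-class metrics follow from the TP/TN/FP/FN counts, written with NumPy; the integer class encoding in the comment is an assumption.

```python
import numpy as np

def per_class_metrics(y_true, y_pred, cls):
    """Compute IoU, Acc, Pre, Re, and F1 for one class label `cls`.

    y_true, y_pred: integer label maps of the same shape
    (e.g., 0 = soil, 1 = carrot, 2 = weed, an assumed encoding).
    """
    t = (y_true == cls)
    p = (y_pred == cls)
    tp = np.sum(t & p)
    tn = np.sum(~t & ~p)
    fp = np.sum(~t & p)
    fn = np.sum(t & ~p)
    iou = tp / (tp + fp + fn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * pre * rec / (pre + rec)
    return iou, acc, pre, rec, f1
```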
RESULTS AND DISCUSSION
This section provides the results of the semantic segmentation with the U-Net and the U-Net++ networks. Figure 3 and Figure 4 show the loss and the average class accuracy over 200 iterations during the training of the U-Net and the U-Net++ networks. According to these figures, there is a negligible performance improvement beyond 100 iterations, suggesting that the networks were trained sufficiently for this segmentation task. It should also be noted that the convergence of the U-Net++ model took much longer than that of the U-Net because of the more complex loss function used in the U-Net++ model. Figure 5 and Figure 6 provide the confusion matrices between the segmentation results and the ground truth for both models. Accordingly, in the U-Net++, the TP values (the proportion of pixels correctly predicted) for the three classes of weed, plant, and soil are 83.45%, 86.28%, and 98.96%, respectively. By comparing these values with the corresponding values achieved by the U-Net, we can infer that the U-Net++ had a much better performance in classifying weed, even though it performed quite similarly to the U-Net in classifying plants and soil. In fact, in the U-Net model, many weed pixels were wrongly predicted as plants (15%), but the U-Net++ network overcame this problem to a great extent. On the other hand, the percentage of plants mistakenly classified as weeds was almost twice as high in the U-Net++ model as in the U-Net. Furthermore, in the U-Net model, more plant and weed pixels were wrongly classified as the soil class. In Figure 7, parts (c) and (d), some of the qualitative results of both models are shown. As just mentioned, it is evident in this figure that some weeds were wrongly recognized as plants by the U-Net, while they were identified correctly by the U-Net++. This poor performance of the U-Net occurred particularly in complex parts of the images where weeds and plants are mixed (e.g., the first and third images in Figure 7). In addition, unlike the U-Net network, the U-Net++ demonstrated a high ability to identify tiny weeds. This is because the aggregating multi-depth structure of the U-Net++ makes it more powerful at segmenting weeds of various sizes. This advantage becomes greatly important, especially at the beginning of the growing season, when young plants and weeds start to germinate; detecting and removing weeds at this stage will be highly beneficial for young plants to flourish.

Figure 6. Normalized confusion matrix of the U-Net++ model.

The detailed results of the evaluation of both network architectures based on the five well-known metrics are given in Table 1. Accordingly, the U-Net++ had higher mean IoU, Acc, Re, and F1-score values, outperforming the traditional U-Net in our semantic segmentation task. Although Pre decreased somewhat compared with that of the U-Net, Re was better in the U-Net++; that is, the U-Net++ model segmented weeds more aggressively and correctly classified more weeds at the expense of misclassifying some plants as weeds. Since segmenting weeds is more important than segmenting the other classes, this study also assesses the accuracy of each class individually. For this purpose, the IoU values for the three classes of weed, carrot, and soil are given in Table 2. The U-Net++ model provided an IoU of 97.97% for soil, 80.80% for crops, and 65.13% for weeds, which are higher than the accuracies obtained with the U-Net. In general, the IoU value for each class represents the ability of a model to correctly classify the corresponding class; the higher the value for a given class, the better the model separates that class. Given the considerable improvement in weed IoU, the U-Net++ proved much better at weed segmentation than the U-Net.

CONCLUSION
The advent of deep learning algorithms has provided an unprecedented opportunity to pinpoint, and thus eliminate, weeds more efficiently. With the aim of pixel-wise semantic segmentation of a weed-carrot dataset of 39 images, this study employed U-Net and U-Net++, two advanced deep convolutional networks. The results show that the U-Net++ provides better performance than the U-Net in terms of overall accuracy, mean IoU, recall, and F1-score. Most importantly, the U-Net++ model performed better in complex parts of images where weeds were mixed with plants. In addition, the U-Net++ was notably more effective than the U-Net in weed segmentation based on the weed IoU. Overall, this paper demonstrated that the U-Net++ network architecture has high potential for crop/weed segmentation, especially at the beginning of the growing season, leading to high profitability and cost reduction in agricultural management. In future research, we aim to focus on enhancing the proposed algorithm through data augmentation using generative adversarial networks (GANs).
Clinical Impact of an Analytic Tool for Predicting the Fall Risk in Inpatients: Controlled Interrupted Time Series

Background: Patient falls are a common cause of harm in acute-care hospitals worldwide. They are a difficult, complex, and common problem requiring a great deal of nurses' time, attention, and effort in practice. The recent rapid expansion of health care predictive analytic applications and the growing availability of electronic health record (EHR) data have resulted in the development of machine learning models that predict adverse events. However, the clinical impact of these models in terms of patient outcomes and clinicians' responses is undetermined. Objective: The purpose of this study was to determine the impact of an electronic analytic tool for predicting fall risk on patient outcomes and nurses' responses. Methods: A controlled interrupted time series (ITS) experiment was conducted in 12 medical-surgical nursing units at a public hospital between May 2017 and April 2019. In six of the units, the patients' fall risk was assessed using the St. Thomas' Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) system (control units), while in the other six, a predictive model for inpatient fall risk was implemented using routinely obtained data from the hospital's EHR system (intervention units). The primary outcome was the rate of patient falls; secondary outcomes included the rate of falls with injury and analysis of process metrics (nursing interventions designed to mitigate the risk of falls). Results

Introduction
Background
Inpatient falls are preventable adverse events that are among the top 10 sentinel events in hospitals. Up to 1 million fall events occur annually in the United States, and the average cost of each event has been estimated at $7900-$17,099 (2019 USD) [1,2]. On average, ~400-700 falls occur annually in Korean tertiary academic hospitals [3][4][5]. Despite the availability of a considerable body of literature on fall prevention and reduction, falls remain a difficult, complex, and common problem that consumes a great deal of time, attention, and mitigation effort among nurses in practice [6,7]. According to studies on inpatient falls, most falls are preventable through tailored interventions and universal fall precautions [8]. However, fall prevention efforts are hindered by the inability to accurately estimate the risk of falling [9,10]. Several risk assessment tools developed using heuristic approaches have been widely used to estimate fall risk in practice. However, evidence regarding the efficacy of those tools is lacking [11,12], potentially resulting in a high false-positive rate and consequently an increased burden on nurses. In addition, rating fall risk without identifying the underlying source uses nursing time but does not inform preventative interventions [13]. Our clinical observations reveal that nurses frequently tend to rely on only a few universal precautions, without considering individual risk factors [14]. Implementation of cognitive, toileting-related, or sensory- and sleep-related assessments and interventions was rare. The increased adoption of electronic health record (EHR) systems over the past decade has stimulated the development of predictive fall risk models using machine learning techniques, which are reported to exhibit better predictive performance than the existing fall risk assessment tools alone [15][16][17][18].
However, most of these models have not been validated in multiple settings, and their implementation is restricted by their use of aggregated data by hospital admission rather than by patient-days. None of these models have been evaluated prospectively to assess their performance or their impact on nursing practice. Nursing predictive analytics can include information regarding the likelihood of a future patient event through risk prediction models, which incorporate multiple predictor variables obtained automatically from the EHR. If such models are integrated into EHR systems, nurses can prospectively obtain information to inform their decision making on fall prevention intervention planning. In this study, we used the prediction model that was developed in our previous study [18]. This model was designed to use nursing process data from EHRs and to consider nurses' fall prevention workflow. Automatic and manual chart reviews were performed to identify all positive events in the retrospective data. The aim of this prospective study was to determine the effect of a predictive fall risk analytic tool on fall outcomes in patients admitted to 12 medical surgical units in South Korea, as well as their impact on nurses' responses. This study hypothesized that providing nurses with information about patients' likelihood of falling within 24 hours of admission, based on data routinely captured in EHRs, would enable nurses to provide risk-targeted interventions and contribute to a reduction in patient fall rates. Development of an Inpatient Fall Risk Prediction Model This research team previously reported on the development of a fall risk prediction model [18]. Briefly, concepts of fall risk factors and preventive care were identified using two international practice guidelines [10,19] and two implementation guidelines [20,21] on preventing inpatient falls. Two standard vocabularies, the Logical Observation Identifiers Names and Codes [22] and the International Classification for Nursing Practice [22,23], were used to represent the concepts in the prediction model, which was then itself represented using a probabilistic Bayesian network. The model was tested in two study cohorts obtained from two hospitals with different EHR systems and nursing vocabularies. The model concepts were mapped to local data elements of each EHR system, and two implementation models were developed for a proof-of-concept approach, followed by cross-site validation. The EHR data included in the model were demographics, administrative information, medications, Korean patient classification based on nursing needs, the fall risk assessment tool, and nursing fall risk prevention processes, including assessments and interventions. The two implementation models exhibited error rates of 11.7% and 4.87%, with c statistics of 0.96 and 0.99, respectively. The model performed 27% and 34% better than the existing Hendrich II tool [24] and the St. Thomas' Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) system [25], respectively. Clinical Implementation of the Intelligent Nursing @ Safety Improvement Guide of Health information Technology System The validation site model was implemented at a 900-bed public hospital in the metropolitan area of Seoul (Republic of Korea) that used STRATIFY to assess fall risks for all inpatients. 
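The model just described is a probabilistic Bayesian network whose predictions are chained from joint probabilities. As a toy illustration only, the sketch below builds a three-node network in Python with pgmpy and dichotomizes the posterior at the 15% cutoff that the deployed system used (described in the next section). The node names and every probability value here are invented for demonstration; the actual model contained 40 nodes and 68 links.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy network: two hypothetical risk factors feeding a Fall node.
model = BayesianNetwork([("FallHistory", "Fall"), ("ImpairedMobility", "Fall")])

cpd_hist = TabularCPD("FallHistory", 2, [[0.9], [0.1]])        # P(no)=0.9
cpd_mob = TabularCPD("ImpairedMobility", 2, [[0.7], [0.3]])
# P(Fall | FallHistory, ImpairedMobility): all numbers invented.
cpd_fall = TabularCPD(
    "Fall", 2,
    [[0.99, 0.90, 0.92, 0.70],   # Fall = 0
     [0.01, 0.10, 0.08, 0.30]],  # Fall = 1
    evidence=["FallHistory", "ImpairedMobility"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_hist, cpd_mob, cpd_fall)

# Posterior fall probability given today's EHR-derived evidence,
# dichotomized at the 15% cutoff adopted by the hospital.
posterior = VariableElimination(model).query(
    ["Fall"], evidence={"FallHistory": 1, "ImpairedMobility": 1}
)
p_fall = posterior.values[1]
print("at-risk" if p_fall >= 0.15 else "no-risk", round(p_fall, 3))
```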
The project, named Intelligent Nursing @ Safety Improvement Guide of Health information Technology (IN@SIGHT), was designed as a platform to support analytic tools as part of the infrastructure of a hospital EHR system, starting with a fall prediction analytic tool. The fall prediction analytic tool was integrated into the locally developed EHR system, which had been in use for more than 10 years. The tool was deployed in 6 targeted nursing units (the intervention group) on April 5, 2017, and all 204 nurses at those units automatically received the prediction results on a daily basis. The implementation process involved the chief of the Nursing Department, unit managers, unit champions, personnel of the Department of Medical Informatics, and the Patient Safety Committee. For 3 months before system deployment, three sessions of education on the IN@SIGHT system were provided to the intervention group, followed by peer-to-peer education provided by unit champions. The Nursing Department decided to replace the existing STRATIFY with the analytic tool for the duration of this quasi-experimental study. The original model was customized by replacing the six data elements of STRATIFY with proxy data elements in the EHRs. The adjusted model, consisting of 40 nodes and 68 links, had an error rate of 9.3%, a spherical payoff of 0.92, and a c statistic of 0.87. The related work processes were redefined, and the existing fall prevention documentation screen of the EHRs was modified. The hospital decided to deliver the risk information in a dichotomized format, with at-risk and no-risk categories at a cutoff point of 15%, which provided a high specificity of 89.4%. The analytic tool triggered an "at-risk" alert on the EHR system when the user selected an at-risk patient.

Study Framework and Objectives
A study framework was developed based on a nursing role effectiveness model (Figure 1) [26]. The original model was based on the structure-process-outcome design of the Donabedian quality care model but was reformulated for this empirical test, focusing on nurses' independent roles in the process component. We assumed that the characteristics of the patients, nurses, and hospital were fixed because the study involved a single institution and the same medical-surgical units. The hypothesis tested was that the fall risk prediction intervention would affect the appropriateness of multifactorial interventions and would be followed by changes in outcome. In accordance with the aim of this study, the impact of an electronic analytic tool for fall risk prediction on patient outcomes and nurses' responses was explored by addressing the following specific research questions: 1. Did the predictive analytic tool influence the quality of nursing care as assessed using outcome indicators? 2. Did the predictive analytic tool affect the nursing fall prevention activities provided to patients? 3. How did the effects change over time?

Study Design and Setting
This nonrandomized controlled trial used an interrupted time series (ITS) design. To control for bias due to time-varying confounders, such as other quality improvement (QI) initiatives occurring in parallel with the intervention and other events, the 12 medical-surgical units were selected and allocated to 1 of 2 groups using pairs of units matched according to the known fall rates and characteristics of the individual units (Figure 2). All of the nurses and eligible patients participated in this study between May 1, 2017 and April 30, 2019.
The patients met the following criteria: age ≥ 18 years and admitted to the hospital for >1 day in departments other than pediatrics, psychiatry, obstetrics, and emergency care. The preintervention period was set at 16 months, which was the maximum retrospective time window; the nurse staffing ratios of the 12 nursing units had been changed at that time due to a policy for comprehensive nursing service under the Korean government's national health insurance. The postintervention period was 24 months. Process metrics, which measure the delivery of fall risk mitigation interventions by nurses to patients, were analyzed every 6 months. This study was approved by the hospital's ethical review board (IRB no. NHIMC 2016-08-005). A waiver of informed consent was granted by the IRB due to the QI nature of the intervention, thus enabling the inclusion of all patients and nurses in the participating units. This study followed the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) reporting guidelines [27].

Intervention
Nurses in the intervention units received 24-hour fall risk prediction results for each patient every morning. These results could be overridden based on the nurses' clinical judgment, such as when patients were receiving treatments, procedures, operations, or fall-related high-risk drugs, or when they had suffered a fall, seizure, or syncope. The fall risk predictions were created by the analytic tool using the data collected within the past 24 hours. For missing data, a priori values from the day before were assigned first; otherwise a replacement was used: the mean value for continuous variables and the modal value for categorical variables. Nurses in the intervention units used the STRATIFY risk assessment tool only on the day of admission. When an at-risk patient was selected by nurses in the EHR system, they received an alert once each shift informing them that the patient was at risk and were guided to a care plan screen that listed pertinent interventions ordered by priority according to the patient's risk factors. Nurses in the control units used only STRATIFY to assess fall risk according to their individual clinical judgment. They were able to manually open the same care plan window through menu navigation but received no alerts for at-risk patients.

Outcome Measures
The primary outcome was the overall rate of patient falls per 1000 patient-days during the study period, as defined by the National Database of Nursing Quality Indicators (NDNQI) outcome metrics of the American Nurses Association [28,29]: A patient fall is a sudden, unintentional descent, with or without injury to the patient, that results in the patient coming to rest on the floor, on or against some other surface (e.g., a counter), on another person, or on an object (e.g., a trash can). NDNQI counts only falls that occur on an eligible inpatient unit that reports falls. When a patient rolls off a low bed onto a mat or is found on a surface where you would not expect to find a patient, this is considered a fall. If a patient who is attempting to stand or sit falls back onto a bed, chair, or commode, this is only counted as a fall if the patient is injured. All unassisted and assisted falls... are to be reported, including falls attributable to physiological factors such as fainting (known as physiological falls). The secondary outcomes were the overall rate of falls with injury and the process metrics. The rate of falls with injury was also measured using the aforementioned NDNQI definition.
Process metrics were defined according to the Institute for Healthcare Improvement definition as "process indicators that measure compliance with key components of evidence-based prevention" [30]. Methods for identifying and defining the key components of fall prevention are described elsewhere [31]. In brief, the nursing activities identified by international guidelines on preventing falls are categorized into 17 components; of these, 7 nursing intervention components were used in this study. Process metrics were used to determine whether nursing behaviors independently affected patient outcomes. Each process metric measured the proportion of at-risk patients who were provided with targeted interventions. For example, all hospitalized patients are expected to be assessed for fall risk factors within 24 hours of admission, and at-risk patients are expected to receive risk-targeted interventions within 24 hours of their risk designation.

Data Collection
Monthly rates of patient falls for the 16 months before the experiment started (the preintervention period) were collected from the hospital's quality assurance department to provide a baseline reference for comparisons. However, the monthly rates of falls with injury before the experiment were not comparable due to differences in the criteria used to calculate them; only severe injuries were recorded as sentinel events at the hospital. For the process metrics, 1 month of data from before the experiment was collected as a baseline. During the study, data on patient demographics and medications, nursing activities, STRATIFY data, and administrative information were collected from the EHR system, and fall data were collected from the hospital's quality assurance department. To monitor and minimize the underreporting noted previously [31,32], the Nursing Department provided education to all units on the principles of reporting and documentation and performed monthly chart reviews with feedback.

Sample Size and Statistical Analysis
The study hypothesis was that the fall rate would be reduced by 15% during the 24-month implementation of the prediction program. We conservatively estimated the required sample size based on previous research [18] by assuming a fall rate in the control group of 2.0 per 1000 patient-days, an average of 15,000 patient-days per unit over 12 months, and an average of 1700 admissions. The required number of falls in the control group was calculated using a Poisson distribution: D₀ = z²(θ + 1) / [θ(ln θ)²] [7]. We applied z = 2.0; detecting a rate ratio (θ) of 0.85 between groups at the 5% significance level with a statistical power of 80% required 610 falls, which corresponded to a 24-month period for the 12 units.
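A minimal Python sketch of the Poisson sample-size formula quoted above. How z is composed from the significance level and power is not spelled out in the text, so the number this sketch prints is illustrative and need not reproduce the 610 figure exactly.

```python
import math

def required_falls(z, theta):
    """D0 = z^2 * (theta + 1) / (theta * (ln theta)^2): required number
    of falls in the control group to detect a rate ratio `theta`."""
    return z**2 * (theta + 1) / (theta * math.log(theta) ** 2)

# z = 2.0 and theta = 0.85 are the values quoted in the text.
print(round(required_falls(2.0, 0.85)))
```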
The participant characteristics were compared using chi-square tests for categorical variables and t tests for continuous variables. The primary outcome, the rate of patient falls, was compared by the controlled ITS, incorporating the control series analysis and the uncontrolled ITS [33]. We fit negative binomial models including a lagged dependent variable to control for serial autocorrelation, and monthly dummy variables to generate seasonal fixed effects in each model. Each model included three variables to measure the relationship between time and patient fall rates: (1) a continuous variable representing the underlying temporal trend, (2) a dummy variable for dates after May 1, 2017, to determine the change in fall rate related to the intervention, and (3) a continuous time variable beginning on that date to represent the change in slope. The coefficients of the second and third variables indicated whether the intervention had immediate and ongoing effects on the fall rate, respectively. The Student t test and a comparative time series analysis were conducted to analyze the rate of falls with injury, and chi-square analysis was used for the comparison of process metrics between groups.

Patient Characteristics
This study involved 42,476 admissions of 40,345 unique patients in 12 units, corresponding to 362,805 patient-days in nursing units across both the control and intervention groups. In total, 2131 patients (5.02% of all admissions) were admitted to both an intervention and a control unit at different times. The patient characteristics differed significantly between the two groups (Table 1). Compared with the intervention units, the control units were characterized by older patients, longer stays, fewer female patients, and more patients with a fall history at admission; rates of secondary diagnoses and surgical procedures were also higher. Approximately half of the patients in the intervention group had a respiratory or digestive disease or some form of cancer, while control patients had a greater diversity of primary diagnoses.

Primary Outcome: Rate of Patient Falls
There were 325 fall events in the intervention group and 382 in the control group. The mean monthly rate of falls decreased from 1.92 to 1.79 in the intervention group and increased from 1.95 to 2.11 in the control group. Controlled ITS analysis revealed that the postintervention versus preintervention change in the incidence rate ratio of the fall rate was −0.10 (SE 0.04, P=.014). There was no seasonal effect. Due to the significant differences in patient characteristics between the control and intervention groups, we conducted separate before-versus-after comparisons between a period of time postintervention and the same period of time preintervention. In the intervention group, there was a significant reduction in the rate of falls of 29.73% (0.57 falls per 1000 patient-days) immediately postintervention (SE 0.14, P=.039). During the preintervention period, the slope exhibited a slightly decreasing trend (SE 0.08, P=.344), and after the intervention, the slope increased slightly but not significantly (slope=0.01, SE 0.01, P=.059; Table 2). In the control group, there was a nonsignificant reduction in the rate of falls of 16.58% (0.16 falls per 1000 patient-days; SE 0.13, P=.20). The slope before the intervention increased (change in slope=0.08, SE 0.72, P=.292), while after the intervention, the slope increased slightly (change in slope=0.01, SE 0.01, P=.057). Data in Table 2 are rate ratio (95% CI) values.

Secondary Outcomes: Fall With Injury Rates and Process Metrics
During the intervention period, the mean monthly injury rate per 1000 patient-days was 0.42 in the intervention group and 0.31 in the control group. The comparative time series analysis revealed a nonsignificant increase in the rate ratio of 0.18 (z=1.50, P=.134).
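To illustrate the segmented-regression setup described in the statistical analysis section (underlying trend, level change, slope change, and a lagged outcome), here is a minimal sketch in Python with statsmodels. The data frame is synthetic and the column names are assumptions; the monthly seasonal dummies used in the study are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical monthly series: fall counts, exposure, and the three
# ITS variables described in the text (6 pre, 6 post months).
df = pd.DataFrame({
    "falls": [28, 31, 25, 27, 22, 24, 20, 23, 19, 21, 18, 20],
    "patient_days": [15000] * 12,
    "time": range(1, 13),                        # underlying trend
    "post": [0] * 6 + [1] * 6,                   # 1 after intervention
    "time_since": [0] * 6 + list(range(1, 7)),   # slope-change term
})
df["lag_falls"] = df["falls"].shift(1).fillna(df["falls"].mean())

# Negative binomial model with a patient-days exposure offset and a
# lagged dependent variable, mirroring the stated specification.
model = smf.glm(
    "falls ~ time + post + time_since + lag_falls",
    data=df,
    family=sm.families.NegativeBinomial(),
    offset=np.log(df["patient_days"]),
).fit()
print(model.summary())
```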
Regarding the process metrics, fall risk assessment was not conducted on almost three-quarters of patient-days in the control group, whereas in the intervention group, fall risk assessment was conducted on 100% of patient-days (Table 3). During the intervention period, the frequency of at-risk days was almost 40% in the control group but ranged from 24.5% to 34.6% in the intervention group. There was a high rate of implementation of a fall risk tool within 24 hours of hospital admission in both groups, although the rates fluctuated over time in the control group. Injury risk factors were assessed in all patients in the intervention group; these data were not available for the control group. Universal fall precautions and fall prevention education were provided to most patients in the control group consistently throughout the study period. Rates of implementation of communication and environmental interventions were initially significantly better in the control group than in the intervention group; however, those of the intervention group increased over time and had caught up with the control group by the third observation point. Although the rate of risk-targeted interventions increased incrementally in both groups, the intervention group showed better adherence than the control group at the fourth observation point (29.5% vs 18.1%, P<.001). Table 3 gives the temporal changes in process metrics in the control and intervention groups. For the care components of nursing assessments, nurses in the intervention group performed various observation types, such as mental status, cognitive function, communication ability, incontinence, and mobility, at each observation point (Figure 3A), while those in the control group appeared to focus largely on mobility assessments, the frequency of which suddenly increased at the last observation point. Universal precautions, education, and medication reviews were the most common interventions in both groups (Figure 3B). Although the frequency of interventions was lower in the intervention group than in the control group, there was a steady increase over time.

Principal Findings
Implementation of an electronic analytic tool designed to predict fall risk was associated with reduced fall rates among inpatients at a public hospital in South Korea. However, the comparison with the control group should be interpreted with caution due to notable differences in patient characteristics between the two groups. There was no significant difference in the rate of falls with injury between the control and intervention groups. Use of the electronic analytic tool was feasible; it was accepted by nurses and improved the completion of risk assessments. Moreover, the process metrics for multifactorial and risk-targeted interventions on at-risk days were initially lower in the intervention group but increased over time. These findings suggest that although the effectiveness of an electronic analytic tool may be limited, it has potential as an aid to help nurses make informed clinical decisions. The main challenges in this study were threefold: (1) random assignment of patients to the study groups was not possible; (2) it was not possible to control for co-interventions or external events at the hospital that may have affected the outcome, including QI activities; and (3) nurses' understanding of the analytic tool, which was developed using a machine learning approach, was not assessed.
These issues were managed by selecting only medical-surgical units and assigning patients according to the particular characteristics of each unit. A controlled ITS design was adopted to control for time-varying confounders. Finally, the development and validation process of the predictive model, and the mechanism of chaining joint probabilities in a Bayesian network, were introduced through user education sessions. However, during the study, the research team confronted additional issues that made interpretation of the results challenging. Discussion of these issues is valuable for future research into risk prediction and alerting in real-world settings. The fall rates of 1.79 and 2.11 in the intervention and control groups, respectively, in this study were lower than previously reported rates of 2.08-4.18 for an intervention study involving a cluster randomized controlled trial (RCT) in four urban US hospitals [34], 3.05 for a cluster RCT in Australia [7], and 2.80 for a US intervention study [35]. However, differences in the patient populations and in the structural elements of the facilities preclude direct comparison [36]. The low fall incidence rate in this study allowed us to observe changes in nursing behaviors over a 24-month follow-up period. A fall prevention intervention will not be effective if it does not influence nurse behaviors. We focused on how the analytic tool can influence nursing behaviors in order to ensure that interventions that are beneficial to patients are routinely provided. Our findings revealed that the intervention group performed more multifactorial patient assessments than the control group; however, the interventions in both groups were limited. Most of the preventive components involved education and medication review, which is perhaps unsurprising since these precautions are routinely applied to all inpatients regardless of their fall risk. Interventions associated with toileting, impaired mental and cognitive function, impaired sensory function, and sleep disturbance were rarely observed in either group. According to international guidelines for preventing falls [10,[19][20][21], multifactorial assessment of risks and multifactorial, risk-targeted interventions are basic components of fall prevention strategies. Application of the analytic tool in this study ensured that risk factors were monitored daily for each patient in the intervention group and that alerts were delivered to their nurses via the hospital EHR system. A large increase in data-seeking and data-gathering activities was observed during the first 6 months of observation, whereas notable increases in overall interventions and risk-targeted interventions appeared 12 and 18 months later, respectively. This suggests that nurses' adoption of this new approach and its processes was time-dependent and stepwise, in line with the findings of surveys conducted repeatedly during the study period [37]. Those surveys revealed that some nurses reported neutral or even slightly negative attitudes and experiences at the beginning of the study. However, the proportion of negative responses gradually decreased over time. These findings can be understood in terms of the non-adoption, abandonment, scale-up, spread, and sustainability (NASSS) framework [38], which explains the success of technology-supported health or social care programs.
Staff members are often initially more concerned about threats to their scope of practice or to the safety and welfare of patients, leading them at first to gather more information about risks. A previous qualitative exploration [39] that used one-on-one and focus group interviews to investigate nurses' perception of predictive information, and how they act upon it, found that nurses attempt to gather more information from other sources and review more detailed predictions during periods of uncertainty. Time delays in the adoption and changing of behaviors are expected, given that predictive information is relatively new to nurses. The other relevant domain of the NASSS framework is the readiness of a hospital for a predictive analytic tool. The understanding and support, antecedent conditions, and level of readiness for a novel tool at the board level might influence the uptake time by nurses and the internal drivers for scaling up the tool.

Study Limitations
This study had limitations. The control group patients had more comorbidities, rendering them more vulnerable to falls than the intervention group. They were on average 4 years older, had hospital stays that were 1.3 days longer, and had a greater history of falls. These variables are known important covariates [19], and we did not balance them in the ITS experiment. The differences in these covariates between the two study groups may be attributable to ascertainment bias; it is possible that, rather than there being a true reduction in fall rates in the intervention group, more patients at a lower risk of falls were included in that group. Evaluation of the baseline data suggests that the nurses in the control group delivered significantly more fall-preventive interventions to their patients than did those in the intervention group, including more additional risk assessments, universal precautions, educational interventions, and communication and environmental interventions. Thus, control group patients were both more likely to fall and more likely to receive fall-preventive interventions from nurses. It is unclear how these counterbalancing factors interact and how they may have impacted the outcomes of this study; however, it can be assumed that the greater provision of interventions contributed to the reduced fall risk in the control group. The temporal changes in process metrics and nursing activities can provide important clues to the overall impact of this trial. In a previous study [18], we found that the analytic tool flagged about 20% of patient-days as at-risk days, which was about half the rate classified using STRATIFY (~40%-50% of patient-days classified as at-risk days). The actual rate of falls in the hospital was much lower, at around 0.2% of patient-days. We assumed that more precise, up-to-date predictions of fall events would decrease the nurses' burden of redundant interventions induced by false-positive warnings from STRATIFY. The analytic tool approach did not affect the universal fall precautions, but risk-targeted interventions, education, communication, and environmental interventions significantly increased compared with the control group, which remained at a steady state. These findings are meaningful, given that multifactorial interventions, including risk-targeted interventions, prevent anticipated physiologic falls, which are responsible for more than 70% of inpatient falls [34,40].
These process metrics revealed slow but explicit changes in nursing interventions, indicating that the processes underlying the care elements had changed and that subsequent improvement in patient outcomes could be expected [41]. Continuous measurement and analysis of process metrics informed our understanding of the effects of the intervention on patient outcomes and our interpretation of the effects of confounding, which has rarely been accounted for in previous studies [7,34,42].

Study Design Limitations
The design of this study had several limitations that affected the interpretation of its findings. First, due to the unexpected differences in baseline characteristics between the intervention and control groups, robust conclusions could not be drawn regarding the comparison of the primary outcome between them. Future studies should implement matching techniques, such as propensity score matching [43] or synthetic control approaches [44], to ensure balance between known covariates. Second, implementation of the intervention at a single site over a long study period introduced several challenges that could have reduced the effects observed in this trial. One challenge was an unexpected event at the hospital whereby one nursing unit in each group moved to a new location 1 month after study initiation, and nurse staffing was reorganized due to the physical reconstruction of the hospital buildings. The fall rate markedly increased for several months in the relocated intervention unit compared with the other five units in its group, whereas the relocated control group unit showed only a slight increase compared with the other units in its group. The relocations were accompanied by changes in staff nurses and in the medical diagnoses of patients, both of which may have increased the burden on nurses and induced the sudden increase in the fall rate at the unit. Another unexpected event was the routinization of hourly nursing rounds for all inpatients mandated by the hospital's safety committee during the final intervention period. This may account for the sudden increase in nursing assessments observed in the control group. In addition, conducting this study at a single hospital may have had an indirect effect on the control units. The unit managers of the control group were involved in the QI initiatives of this study, along with those of the intervention units. This could have caused a contamination effect, whereby the managers of the control group learned about the study intervention and decided to adopt it for their own units. Third, we were unable to compare the injury fall rates between the pre- and postintervention periods; therefore, the impact of the analytic tool on the rate of falls with injury remains unknown. Inpatient fall prevention is a difficult and complex issue for which there is little high-quality evidence [7,45]. Even taking the study limitations into account, the findings of this early-stage evaluation of an analytic tool demonstrated that the interaction between the tool and nurses was adequate and that the tool may have influenced nurses' decisions on preventive interventions. The analytic tool developed herein represents a potential new approach for patient-level risk surveillance and for improving the efficacy of interventions at the system level. The findings and challenges discussed herein will contribute to improving further research on risk prediction and alerting in real-world settings.
Conclusions This was an early-stage clinical evaluation of a nursing predictive analytic application designed to forecast patient fall events in real time and at the point of care to improve outcomes and reduce costs. The effectiveness of the electronic analytic tool was supported only by the before-after comparison, not by the intervention-control comparison. Nurses were amenable to using the tool in practice, and over the course of the study, there were meaningful changes in process metrics, leading to more multifactorial and risk-targeted interventions to prevent patient falls.
Vertical drag force acting on intruders of different shapes in granular media

The penetration of large objects into granular media is encountered commonly both in nature (e.g., impacts of meteors and projectiles) and in engineering applications (e.g., insertion of tractor blades into sand). The motion of the impacting intruder in granular media is resisted by a granular drag force. In this work, we assess the effect of intruder shape on the granular drag force using discrete element modelling (DEM). The following intruder shapes were modelled: spherical, conical, cylindrical and cubical. We observed that the drag force can be described well by a power-law relationship with intrusion depth, independent of the intruder shape. However, the exponent of the power-law expression increases with increasing "flatness" of the intruder's impacting surface, due to an increasing fraction of the granular medium being affected by the impact of the intruder.

Introduction
The insertion and impact of a solid object, or intruder, into a granular material is of both academic and industrial relevance. Typical examples include the formation of craters by the impact of meteors or projectiles, the motion of wheels on gravel or sand, the insertion of tractor blades or the penetration of a bullet into a bag of sand [1][2][3][4][5]. Previous work in this area has indicated that the granular drag force that acts on the intruder can be expressed as:

F_drag = F(z) + F(v) + F_f   (1)

where z is the intrusion depth and v is the velocity of the intruder [6][7][8][9][10]. In Eq. (1), F(z) is the depth-dependent hydrostatic drag force, F(v) is the velocity-dependent viscous drag force and F_f represents a friction-like force acting on the intruder [11]. In some work the frictional component of the drag force is included in the hydrostatic term [9]. Using experiments, Peng et al. [12] and Hill et al. [13] proposed a power-law expression for F(z), viz. F(z) = k·z^c. In this work, we focus on low-speed intruder dynamics; hence F(v) is neglected (generally, F(v) can be neglected for intruder velocities of less than about 0.8 m/s). So far, most reports concerning the drag force acting on an intruder in a granular medium have studied spherical intruders. There is very little information available on how intruder shape affects the drag force. Hence, this work assesses the effect of intruder shape on the hydrostatic component of the granular drag force, F(z), using discrete element model (DEM) simulations.
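As a small numerical companion to Eq. (1), the sketch below evaluates the total drag in the low-speed regime studied here, where F(v) is neglected. The values of k and c are the sphere's fitted values reported later in the Results, and the friction term is left at zero since its magnitude is not reported; units are implicit in k.

```python
def drag_force(z, k=1.3e-4, c=1.3, F_f=0.0):
    """Low-speed granular drag per Eq. (1) with F(v) neglected:
    F = F(z) + F_f, where F(z) = k * z**c is the hydrostatic term.
    k and c are the sphere's fitted values from the Results section;
    F_f is set to zero here because its magnitude is not reported."""
    return k * z**c + F_f

print(drag_force(0.1))  # drag at an arbitrary example depth
```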
Formulations
The DEM implemented in this work uses a soft-sphere model [14]. The inter-particle collisions are modelled by a combination of a linear spring, a dashpot and a friction slider. The normal component, F_n, and the tangential component, F_t, of the contact force are calculated according to:

F_n = (K δ_n − η (G · n)) n   (2)
F_t = K δ_t t − η G_t, with |F_t| ≤ f |F_n|   (3)

where K is the spring constant, η is the damping coefficient, G is the vector describing the relative velocity between the interacting particles, G_t is the slip velocity of the contact point, f is the coefficient of friction, and δ_n and δ_t are the normal and tangential overlaps. The subscripts n and t refer to the normal and tangential direction of the contact, respectively, and n and t are the unit vectors in the normal and tangential direction of the particle-particle contact, respectively. Contact detection for non-spherical intruders is performed by discretizing the surface of the intruder into small triangular elements. The normal and tangential forces are calculated by determining the overlap between the particle and the triangular element, as shown in the corresponding figure. The total force acting on an intruder is determined by summing all forces acting on the elements of the intruder surface. Our numerical model was validated by MRI experiments and DEM simulations of spherical intruders. In the latter validation case, a spherical intruder was pulled through a granular bed, and the granular drag force calculated using the discretization scheme described above was compared with that from a conventional spherical DEM approach. Good agreement between the different DEM schemes was observed.

Simulation Setup
To determine the drag force acting on non-spherical intruders in granular media, the following simulation setup was used. The granular bed was contained in a rectangular container. The width, depth and height of the granular bed were 30×30×180 times the particle diameter (d_p). Particles were allowed to settle under gravity. The properties of the particles used in the simulations are given in Table 1.

Table 1. Properties of the particles used (elastic modulus: 80,000 N/m).

The non-spherical intruders were pulled through the bed with a constant velocity of 0.2 m/s (hence, to a very good approximation, F(z) is the only component of the granular drag force that has to be considered). The shapes and sizes of the intruders simulated are summarized in Table 2. The small, medium and large intruders are referred to as Case 1, Case 2 and Case 3, respectively, in the manuscript.

Table 2. Shapes and sizes of the intruders studied (columns: intruder shape and direction of intrusion; intruder dimensions). The arrow in the table indicates the direction of intrusion; for the cylindrical intruder, two intrusion directions were studied.

Results & Discussions
Fig. 2 plots the granular drag force (divided by the intruder volume) as a function of the dimensionless intrusion depth (normalized by the height of the intruder) for different intruder shapes. The height of the granular bed was kept constant in all of the cases studied. We observe that, independent of the intruder shape, the drag force acting on the intruder follows a power law with intrusion depth z.
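As an illustration of how the power-law parameters k and c can be extracted from such simulation output, the following sketch fits F(z) = k·z^c with SciPy; the depth/force arrays here are synthetic stand-ins for the DEM data, generated around the sphere's reported values.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(z, k, c):
    # Hydrostatic drag-force model F(z) = k * z**c.
    return k * z**c

# Synthetic (made-up) depth/force data standing in for DEM output.
z = np.linspace(0.01, 0.2, 20)
F = 1.3e-4 * z**1.3 * (1 + 0.02 * np.random.randn(z.size))

(k_fit, c_fit), _ = curve_fit(power_law, z, F, p0=[1e-4, 1.0])
print(f"k = {k_fit:.3e}, c = {c_fit:.2f}")
```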
As with spherical intruders, dividing the drag force by the intruder volume leads to a size-independent power-law expression for the drag force for all of the particle shapes studied. We also observe that k ≈ 1.3 × 10⁻⁴, independent of particle shape. However, the power-law exponent c is a function of intruder shape and equals 1.19, 1.3, 1.35, 1.38 and 1.41 for the conical, spherical, cylindrical H, cylindrical V and cubic shapes, respectively (Table 3). It is worth noting that the value of c obtained for spheres is in agreement with previously reported experimental work [13]. Since, for a given intrusion depth, cylinders and cubes experience a larger drag force than spheres or cones, it appears that the granular drag force increases with increasing "flatness" or "bluntness" of the impacting intruder surface. To understand this behaviour better, we analysed in more detail the motion of particles around impacting intruders (Figs. 3 and 4). During the motion of the intruder, we determined the fraction of particles that had changed their original position (i.e., of the particles originally located in the purple box highlighted in Fig. 3). From Fig. 4 we observe that the motion of a cubic intruder affects more particles than the motion of a conical intruder. The increase of the granular drag force (and of the power-law exponent) with increasing intruder flatness may hence be due to a higher energy loss caused by a larger number of particle collisions.

Conclusions
The granular drag force acting on differently shaped intruders pulled with a constant velocity through a granular bed was studied using DEM. The following conclusions are drawn from this work: (1) the hydrostatic component of the granular drag force obeys a power-law relationship with intrusion depth, independent of intruder shape; (2) the granular drag force at a given intrusion depth increases with increasing "flatness" of the impacting intruder surface; (3) the increasing exponent of the power-law relationship with increasing flatness of the intruder may be explained by an increasing number of particle-particle and particle-intruder collisions for "flatter" geometries.

Fig. 2. Granular drag force acting on an intruder as a function of intrusion depth: (a) cone, (b) sphere, (c) cylinder configuration H, (d) cylinder configuration V and (e) cube.
Fig. 3. Schematic diagram of a box that moves with the intruder and is used for data analysis. The dashed green lines highlight the path of the box during intruder motion.
Fig. 4. Percentage of particles originally located in a box around the intruder that are affected by the motion of the intruder.
Cell subtypes and immune dysfunction in peritoneal fluid of endometriosis revealed by single-cell RNA-sequencing

Background: Endometriosis is a refractory and recurrent disease, and it affects nearly 10% of reproductive-aged women and 40% of infertile patients. The commonly accepted theory for endometriosis is retrograde menstruation, whereby endometrial tissues invade the peritoneal cavity and fail to be cleared due to immune dysfunction. Therefore, a comprehensive understanding of the immunologic microenvironment of the peritoneal cavity deserves further investigation, as previous studies have mainly focused on one or several immune cell types.

Results: High-quality transcriptomes were obtained from peritoneal fluid samples of a patient with endometriosis and a control, and were subjected for the first time to 10× Genomics single-cell RNA-sequencing. We acquired the single-cell transcriptomes of 10,280 cells from the endometriosis sample and 7250 cells from the control sample, with an average of approximately 63,000 reads per cell. A comprehensive map of the overall cells in peritoneal fluid is first exhibited. We unveiled the heterogeneity of immune cells and discovered new cell subtypes, including T cell receptor positive (TCR+) macrophages, proliferating macrophages and natural killer dendritic cells, in peritoneal fluid, which was further verified by double immunofluorescence staining and flow cytometry. Pseudo-time analysis showed that the response of macrophages to menstrual debris may follow a certain differentiation trajectory after endometrial tissues invade the peritoneal cavity: from antigen presentation to pro-inflammation, and then to chemotaxis and phagocytosis. Our analyses also mirrored the dysfunctions of immune cells in endometriosis, including decreased phagocytosis and cytotoxic activity and elevated pro-inflammatory and chemotactic effects.

Conclusion: TCR+ macrophages, proliferating macrophages and natural killer dendritic cells are reported for the first time in human peritoneal fluid. Our results also revealed that immune dysfunction occurs in the peritoneal fluid of endometriosis, which may be responsible for the residues of invaded menstrual debris. This work provides a large-scale and high-dimensional characterization of the peritoneal microenvironment and offers a useful resource for the future development of immunotherapy.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13578-021-00613-5.

Endometriosis impairs the life quality of patients and causes a heavy burden on healthcare [2,3]. The etiology is still incompletely understood; "retrograde menstruation" is one widely accepted theory, in which reflux accounts for the accumulation of menstrual debris in the peritoneal cavity [4,5]. However, retrograde menstruation occurs in almost all cycling women, while only a minority of them develop endometriosis, implying that additional factors contribute to the development of endometriosis [6]. Immune cells contribute to scavenging menstrual debris in cycling women [7]. One of the possible causes of endometriosis is a defective immune response to the refluxed menstrual debris in the peritoneal cavity, which determines the survival and implantation of ectopic endometrial cells and lesion formation [8]. Accumulated evidence over the past decade has suggested that the development of endometriosis is accompanied by sustained peritoneal inflammation, including altered immune cell contents in peritoneal fluid and ectopic lesions, as well as changed immune cell cytotoxicity and activation [9].
Alterations in both innate and adaptive immunity contribute to the pathogenesis of endometriosis [9][10][11]. As the front line of innate immunity, macrophages comprise the largest immune cell population in the peritoneal fluid of both healthy women and endometriosis patients [12]. They are complex cells at the center of this elusive condition and are critical for the growth, vascularization and innervation of endometriosis lesions. Previous studies have demonstrated functional disorders of macrophages in endometriosis. However, it is still controversial whether macrophages in peritoneal fluid exhibit a pro-inflammatory or pro-repair phenotype [13]. Recent studies have revealed that they are complex and heterogeneous in the pathology of endometriosis [14]. In addition to macrophages, other immune cells have also been proposed to play important roles in the pathogenesis of endometriosis. Decreased cytotoxicity of natural killer (NK) cells has been reported in the peritoneal fluid of endometriosis [15]. An increased proportion of regulatory T (Treg) cells in the peritoneal fluid of women with endometriosis has also been reported [16]. Dendritic cells (DCs), mast cells and B cells have been observed to be changed as well [17][18][19]. Previous studies have mainly focused on one or several immune cell types and lack comprehensive investigation, leading to a failure to discover heterogeneous cell contents at an unbiased scale. Recent advances in single-cell RNA-sequencing (scRNA-seq) have the potential to resolve heterogeneous cell populations at an unprecedented scale [20]. scRNA-seq has been used to discriminate cell types in healthy tissues or tumors, to explore immune cell heterogeneity and to reveal new types of immune cells [21,22]. Since peritoneal fluid plays an important role in the pathogenesis of endometriosis, it warrants an unbiased characterization of its cell contents by scRNA-seq of all the cells in peritoneal fluid. Here, we found that the peritoneal microenvironment is mainly composed of different immune cells. We further found that the immune cells in endometriosis were dysfunctional, with decreased phagocytosis and cytotoxic activity and elevated pro-inflammatory and chemotactic effects. Importantly, our findings offer a useful resource for understanding the pathology of endometriosis and for its potential immunotherapy.

Ethics and sample collection
This project was approved by the Ethics Committee of Women's Hospital, School of Medicine, Zhejiang University (IRB-20200003-R). Included patients supplied written informed consent for the collection of specimens and analyses of the derived genetic materials prior to their participation. Peritoneal fluid samples were collected from 39 endometriosis patients and 27 non-endometriosis controls undergoing surgery (details in Additional file 2: Table S1). None of the included patients had received hormone treatment in the previous 6 months, and all samples were collected in the proliferative phase. Among them, cell suspensions from one endometriosis patient and one control patient with a septate uterus were subjected to scRNA-seq, and the other 64 samples were used for validation by double immunofluorescence or flow cytometry. Samples were collected during laparoscopic surgery before any surgical procedure to avoid contamination from blood. Samples were transported to the laboratory in a cold chain within 30 min and used for subsequent experiments.

Cell preparation
Cells were pelleted from peritoneal fluid and washed three times.
Red blood cells were lysed using Ammonium-Chloride-Potassium Lysing Buffer (Gibco, USA) according to the manufacturer's instructions. Samples were next diluted with PBS containing 0.04% Bovine Serum Albumin (Sigma, USA) to a density of about 1 × 10^6 cells/mL. 10 µL of this cell suspension was mixed with 10 µL of 0.4% trypan blue solution (Sigma, USA) and counted using an automated cell counter (Bio-Rad, USA) to determine the density of live cells. The cell viability of samples used for single-cell sequencing was 94% for the endometriosis sample and 86% for the control sample. Cells were maintained on ice whenever possible throughout the dissociation procedure, and the entire procedure was completed in less than one hour. scRNA-seq using 10× Genomics The density of the single cell suspension was counted and adjusted to 1000 cells/µL. The cell suspension was loaded into Chromium microfluidic chips with 3ʹ (v3) chemistry and barcoded with a 10× Chromium Controller (10× Genomics) in order to capture approximately 10,000 cells per chip position. The remaining procedures, including reverse transcription and library construction, were performed according to the standard manufacturer's instructions. Single-cell libraries were sequenced on NovaSeq with approximately 50,000 to 100,000 reads per cell. Single-cell analyses were performed using Cell Ranger 3.0 and Seurat unless mentioned specifically. For quality control, low-quality cells (genes detected in < 3 cells, < 200 genes/cell, > 6,500 genes/cell, > 5% hemoglobin gene counts and > 30% mitochondrial gene counts) were removed. The average gene detection, number of UMIs and level of mitochondrial reads were similar between the two samples (Additional file 3: Table S2). Biological process enrichment analysis, pathway analysis and single cell trajectories We used the web-based DAVID and KOBAS tools to perform biological process enrichment analysis with the differentially expressed genes in each cluster. Gene Set Enrichment Analysis (GSEA) was applied to identify a priori defined sets of genes that show differences in each cell type between the endometriosis and control samples. We used the mean expression of genes in the endometriosis and control samples as the input (Additional file 4: Table S3), and applied gene sets of KEGG pathways (http://software.broadinstitute.org/gsea/downloads.jsp#msigdb), which are curated in the Molecular Signatures Database (MSigDB). The Monocle package of R software was used to analyze the single cell trajectory of the macrophage subtypes in order to discover the developmental transition of macrophages. Antibodies Pre-conjugated antibodies were purchased from eBioscience, Abcam, Santa Cruz and BD. The detailed information, including working volumes, is listed in Additional file 11: Table S10. Cell surface and intracellular staining were performed according to the manufacturer's recommendations. Double immunofluorescence staining For the staining of membrane proteins, including CD14 (eBioscience, USA), TCR Cβ1 (Santa Cruz, USA), CD1C (eBioscience, USA) and KLRB1 (eBioscience, USA), ascites cells were incubated with the indicated fluorochrome- or biotin-conjugated antibodies. For the staining of KI67 (BD Biosciences, USA), 1 × 10^6 cells were first resuspended in 250 µL BD Cytofix/Cytoperm solution and then incubated with 20 µL antibody. Nuclei were stained with DAPI, and the samples were analyzed with an FV1000 confocal microscope.
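The quality-control step above was run with Cell Ranger 3.0 and Seurat; purely as an illustration of the same thresholds (genes detected in < 3 cells, 200-6,500 genes per cell, < 5% hemoglobin and < 30% mitochondrial counts), a minimal Python/scanpy sketch might look as follows. The matrix path and the gene-symbol prefixes used to flag mitochondrial and hemoglobin genes are assumptions, not taken from the paper.

```python
import scanpy as sc

# Load a 10x Genomics count matrix (path is hypothetical).
adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")

# Flag mitochondrial and hemoglobin genes by symbol prefix (assumed naming).
adata.var["mt"] = adata.var_names.str.startswith("MT-")
adata.var["hb"] = adata.var_names.str.contains(r"^HB[AB]")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt", "hb"],
                           percent_top=None, log1p=False, inplace=True)

# Drop genes seen in < 3 cells, then cells outside the QC thresholds.
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.filter_cells(adata, min_genes=200)
keep = (
    (adata.obs["n_genes_by_counts"] <= 6500)
    & (adata.obs["pct_counts_mt"] <= 30)
    & (adata.obs["pct_counts_hb"] <= 5)
)
adata = adata[keep].copy()
```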
Flow cytometry Flow cytometry was conducted to measure the percentages of CD14+TCR Cβ1+ and CD14+KI67+ cells and to sort CD14+ and KLRB1+KLRD1+ cells in peritoneal fluid. For cell surface staining, the cell suspension was incubated with CD14 (eBioscience, USA) and TCR Cβ1 antibodies. For CD14 and KI67 staining, the cell suspension was incubated with CD14 antibody at 4 °C for 30 min, and then incubated with the BD Cytofix/Cytoperm solution. After washing, the cells were incubated with 20 µL KI67 antibody for 30 min at 4 °C. The suspension was centrifuged, washed and re-suspended in 500 µL PBS to detect the positive cells with a CytoFLEX S Flow Cytometer. The results were analyzed with CytExpert in percentages. For cell sorting, the cells were incubated with CD14 (Abcam, USA), KLRB1 (eBioscience, USA) and KLRD1 (eBioscience, USA) antibodies, and the cells were sorted by a Beckman MoFlo Astrios EQ. Phagocytosis tests pHrodo™ Red E. coli BioParticles™ Conjugate (Invitrogen, USA) was used to perform phagocytosis tests for macrophages. After initial cell preparation, each sample was mixed sufficiently and divided into two tubes with approximately 1 × 10^6 cells per tube. One sample was incubated with 20 µL CD14 (Abcam, USA) for 45 min at 4 °C as a background control. The other sample was first incubated with 20 µL CD14 for 45 min at 4 °C, and then 100 µL pHrodo™ Red E. coli BioParticles™ (1 mg/mL) was added. The suspension was resuspended and incubated for 1.5 h at 37 °C. A CytoFLEX S Flow Cytometer was used to detect the mean fluorescence intensity of CD14+ macrophages with ingested bioparticles. Quantitative real-time PCR Total RNA was extracted from cells with TRIzol reagent (Invitrogen, USA) and reverse-transcribed using a PrimeScript Reverse Transcription (RT) reagent kit (Takara, Japan) according to the manufacturer's recommendations. The specific primers used for amplification are listed in Additional file 12: Table S11 (Sangon Biotech, China). Real-time PCR was performed with an Applied Biosystems 7900HT system (ABI, USA) using a SYBR Premix Ex Taq™ kit (Takara, Japan). An average cycle threshold (Ct) value was calculated from triplicate wells for each sample, and the fold change was determined by the 2^−ΔΔCt method. Statistical analysis Data are presented as mean ± standard error of the mean (SEM). The independent-sample t-test or Mann-Whitney U-test was applied when comparing two samples, and one-way ANOVA or Kruskal-Wallis was employed when comparing 3 or more samples. A statistical difference was considered significant at a value of P < 0.05 (*), highly significant at a value of P < 0.01 (**) and extremely significant when P < 0.001 (***) or P < 0.0001 (****). Differential gene expression testing was performed in Seurat as described in the scRNA-seq section. Single-cell expression atlas and cell types in peritoneal fluid To explore the cell profiling in peritoneal fluid, scRNA-seq was performed (Fig. 1a). After initial quality control (Additional file 1: Figure S1a), we acquired single-cell transcriptomes of a total of 10,280 cells from the endometriosis sample and 7250 cells from the control sample, with an average of approximately 63,000 reads per cell (Additional file 3: Table S2). Cell transcriptomes from the two samples were merged and analyzed together to gain power to detect rare cell types. To explore the intrinsic structure and potential functional subtypes of the overall cells in peritoneal fluid, we applied principal component analysis (PCA) with variable genes across all cells and identified 19 clusters (Fig.
1b, Additional file 1: Figure S1b and Additional file 5: Table S4). No populations unique to either dataset (the endometriosis sample or the control sample) were identified. We then used well-known marker genes to define the identity of each cell cluster (see "Methods"), such as the co-expression of PTPRC/CD45, CD68, CD14 and FCGR3A/CD16 for macrophages (Fig. 1c) [23,24]. Cluster 15 expressed marker genes for DCs (CD1C, ITGAX/CD11C) and NK cells (KLRB1) (Fig. 1d), and it mostly matched natural killer dendritic cells (NKDCs), a rare intermediate cell type reported by Pillarisetty et al. [25]. Eventually, we identified 9 main cell types (Fig. 1e), including macrophages (clusters 0, 1, 3, 6, 7, 9 and 11), T cells (clusters 4 and 5), DCs (clusters 2 and 14), NK cells (cluster 8), epithelial cells (cluster 12), mast cells (clusters 13 and 16), NKDCs (cluster 15), plasma cells (cluster 17) and multilymphoid progenitor cells (cluster 18). In addition, there was one cluster (cluster 10) that we failed to match with any cell type because it lacked recognizable marker genes. Violin plots and UMAP plots for each cell type further supported these cell types (Fig. 1f and Additional file 1: Figure S1c). Eventually, we captured a comprehensive map of the overall cells in peritoneal fluid (Fig. 1e, g and Additional file 6: Table S5). Immune cells predominated in the peritoneal fluid of both the endometriosis (96.5%) and control (95.5%) groups, and macrophages were the main immune cells, followed by T cells, DCs, NK cells, mast cells, epithelial cells, NKDCs, plasma cells and multilymphoid progenitor cells, indicating that the peritoneal cavity is an immune microenvironment. Distinct subtypes of macrophages revealed the heterogeneity of macrophages in peritoneal fluid We identified 7 clusters representing different subtypes of macrophages (Fig. 2a). The proportions of each subtype in the endometriosis and control groups are exhibited in Fig. 2b. Firstly, we investigated the pro-inflammatory/pro-repair polarization paradigm and found that one cell could express both pro-inflammatory and pro-repair marker genes, such as the high expression of macrophage receptor with collagenous structure (MARCO) and S100 calcium binding protein A8 (S100A8) in cells of cluster 0 (Fig. 2c). A scatter plot further revealed that pro-inflammatory gene signatures were correlated with pro-repair gene signatures and that there was no significant shifting from pro-repair to pro-inflammatory or from pro-inflammatory to pro-repair (Fig. 2d). These findings supported the idea that macrophages in peritoneal fluid do not comport with the pro-inflammatory/pro-repair polarization model and that the simplified pro-inflammatory/pro-repair view cannot represent the cell heterogeneity in vivo [26]. Therefore, we tried to explain the heterogeneity by functional enrichment of marker genes. Comparing the highly differentially expressed genes and functional enrichments of each subtype (Fig. 2e and Additional file 7: Table S6), we found that scavenger receptors, such as MARCO and CD163 molecule (CD163), and complement receptors, such as complement C2 (C2), complement component 1, q subcomponent, alpha polypeptide (C1QA) and complement C1q B chain (C1QB), were highly expressed in cluster 0, indicating its phagocytic ability [27][28][29][30][31]. Versican (VCAN) was selectively expressed in cluster 1, which could promote the synthesis and secretion of inflammatory cytokines [32].
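The polarization check in Fig. 2d amounts to scoring every macrophage against a pro-inflammatory and a pro-repair gene set and correlating the two scores. Continuing from the QC sketch above, a hedged illustration is given below; the two gene lists are placeholders assembled from markers named in this section, not the authors' exact signatures, and the cluster annotation column is assumed.

```python
import numpy as np
import scanpy as sc

# Illustrative signatures built from markers mentioned in the text.
pro_inflammatory = ["S100A8", "S100A9", "LYZ", "VCAN"]
pro_repair = ["MARCO", "CD163", "C1QA", "C1QB"]

# Per-cell signature scores (mean expression minus a random control set).
sc.tl.score_genes(adata, pro_inflammatory, score_name="inflam_score")
sc.tl.score_genes(adata, pro_repair, score_name="repair_score")

# Correlate the two scores within the macrophage clusters (column assumed).
mac_clusters = ["0", "1", "3", "6", "7", "9", "11"]
mac = adata[adata.obs["cluster"].isin(mac_clusters)]
r = np.corrcoef(mac.obs["inflam_score"], mac.obs["repair_score"])[0, 1]
print(f"signature-score correlation across macrophages: {r:.2f}")
```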
Meanwhile, genes related to pro-inflammatory cytokines, including lysozyme (LYZ), S100A8 and S100 calcium binding protein A9 (S100A9), were also highly expressed in cluster 1 [33,34]. In contrast, genes related to adhesion and fibrosis, such as secreted phosphoprotein 1 (SPP1) and CD9 molecule (CD9), were found at high levels in cluster 7 [35]. C-C motif chemokine ligand 2 (CCL2), C-C motif chemokine ligand 13 (CCL13), C-C motif chemokine ligand 18 (CCL18) and C-X-C motif chemokine ligand 12 (CXCL12) were highly expressed in cluster 3. Since CCL2 is the dominant chemokine gene for the migration of the mononuclear phagocyte system and CCL13, CCL18 and CXCL12 are critical chemokines, cluster 3 might play an important role in chemotactic function [36]. Meanwhile, the top differential genes of this cluster also included apolipoprotein E (APOE), apolipoprotein C1 (APOC1) and legumain (LGMN), showing its ability for plasma lipoprotein regulation. Genes involved in class II antigen presentation were present at the highest level in cluster 6, showing its functions in antigen processing and presentation. Importantly, we found two new subtypes of macrophages which were not reported previously in peritoneal fluid. One was cluster 11, which expressed high levels of TCRs (TRBC1 and TRBC2). The critical components of the TCR signal transduction machinery were also expressed in cluster 11, such as CD3D, CD3E, LCK proto-oncogene, Src family tyrosine kinase (LCK), zeta chain of T cell receptor associated protein kinase 70 (ZAP70), linker for activation of T cells (LAT) and FYN proto-oncogene, Src family tyrosine kinase (FYN), which mostly matched TCR+ macrophages [37,38]. The other was cluster 9, where genes associated with cell proliferation, including marker of proliferation Ki-67 (MKI67), cyclin dependent kinase 1 (CDK1), ubiquitin conjugating enzyme E2 C (UBE2C), baculoviral IAP repeat containing 5 (BIRC5) and KIAA0101, were highly and selectively expressed, indicating that these macrophages might be under a proliferating condition. Enriched GO analysis of each cluster supported these functions (Fig. 2e). Pseudo-time analysis exhibited the differentiation trajectory of macrophages in peritoneal fluid To further investigate the differentiation trajectory of six clusters of macrophages (cluster 9 was excluded because the proliferating macrophages might not be derived from monocytes), the Monocle package of R software was applied for the analysis; a stand-in sketch of this step is given below. The results showed that most cells from each cluster gathered based on their gene signatures and that the six clusters formed a relative progression in pseudo-time. Specifically, it began with cluster 6 (antigen presentation), as these cells expressed the highest levels of CCR2 and CD33, which are surface markers for monocytes (Additional file 1: Figure S2), followed by cluster 1 (pro-inflammatory), and ended with cluster 3 (chemotaxis) and cluster 0 (phagocytosis) (Fig. 3a, b). Furthermore, cluster 7 (adhesion and fibrosis) was present throughout the whole period of the pseudo-time but was highly enriched at the late period, which indicated that macrophages also have an ability for tissue repair over the whole development process and that this ability is enhanced at the final stage of differentiation. As for the newly discovered subtypes of macrophages, we discuss the functions of TCR+ macrophages below.
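The trajectory itself was computed with the Monocle package in R; as a stand-in only, the same ordering idea can be sketched with scanpy's diffusion pseudotime, rooting the trajectory in cluster 6 as the text suggests. The cluster labels and column name are assumptions carried over from the sketches above.

```python
import numpy as np
import scanpy as sc

# Six macrophage clusters; cluster 9 (proliferating) is excluded.
keep = ["0", "1", "3", "6", "7", "11"]
mac = adata[adata.obs["cluster"].isin(keep)].copy()

# Neighborhood graph and diffusion map underpin the pseudotime ordering.
sc.pp.neighbors(mac, n_neighbors=15, n_pcs=30)
sc.tl.diffmap(mac)

# Root the trajectory in cluster 6 (antigen presentation, monocyte-like).
mac.uns["iroot"] = int(np.flatnonzero(mac.obs["cluster"] == "6")[0])
sc.tl.dpt(mac)  # writes mac.obs["dpt_pseudotime"]
```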
Since retrograde menstruation is common in most women and macrophages are one of the main immune cells, the response of macrophages to the menstrual debris may follow the above differentiation trajectory after endometrial tissues invade the peritoneal cavity. Two newly discovered subtypes of macrophages in peritoneal fluid As mentioned above, we discovered two new subtypes of macrophages in peritoneal fluid: TCR+ macrophages and proliferating macrophages. To confirm the existence of TCR+ macrophages in peritoneal fluid, double immunofluorescence staining of ascites cells was performed. We confirmed the marked existence of TCR Cβ1 (green) in CD14 (red) positive macrophages; the morphology of the double positive cells looked the same as that of common macrophages, and their size was bigger than that of T cells (Fig. 4a). Flow cytometry further confirmed that the percentage of TCR+ macrophages was elevated in endometriosis when compared to control (1.548 ± 0.271 vs. 0.747 ± 0.168, P = 0.0287) (Fig. 4b), which was consistent with the results obtained by scRNA-seq. Granzymes (GZMK, GZMM, GZMH and GZMA) and immune mediators including C-C motif chemokine ligand 5 (CCL5) and C-C motif chemokine ligand 4 (CCL4) were highly expressed in this cluster (Additional file 7: Table S6), indicating its cytotoxic and chemotactic effects. GO terms further revealed that these cells were enriched in granulocyte activation, cytokine production and response to bacterium (Fig. 2e). The majority of macrophages in peritoneal fluid are derived from monocytes, which are terminally differentiated and do not have the ability to proliferate. However, proliferating macrophages (cluster 9) were discovered, and they might be identified as tissue-resident macrophages [39]. We also verified the existence of this subtype of macrophages in peritoneal fluid using double immunofluorescence staining (Fig. 4c). Both scRNA-seq and flow cytometry revealed that the percentage of proliferating macrophages was decreased in endometriosis (0.5991 ± 0.142 vs. 1.388 ± 0.276, P = 0.0151) (Fig. 4d, Additional file 1: Figure S3). The dysfunction of macrophages in peritoneal fluid of endometriosis In order to investigate whether there was functional deficiency of macrophages in the peritoneal fluid of endometriosis, we applied GSEA to compare the differences between the endometriosis and control samples. Cluster 0, which was the main subtype for phagocytosis, had lower phagocytic ability in endometriosis. This dysfunction of phagocytosis could also be found in the other macrophage subtypes (Fig. 5a). We then performed phagocytosis tests through flow cytometry to investigate the phagocytic ability of macrophages in peritoneal fluid. We found that the mean fluorescence intensity of CD14+ macrophages was significantly lower in endometriosis samples than in control samples (139.6 ± 17.1 vs. 410.6 ± 92.0, P = 0.02) (Fig. 5b), indicating that the phagocytic ability of macrophages was decreased in the peritoneal fluid of endometriosis. For the subtypes associated with pro-inflammation (cluster 1), antigen presentation (cluster 6) and adhesion and fibrosis (cluster 7), GSEA showed that the corresponding functions were elevated in endometriosis (Fig. 5c). Then, we sorted macrophages by flow cytometry, and the mRNA levels of functional genes were detected using quantitative PCR analysis. We found that the levels of the corresponding functional genes were elevated in the endometriosis groups compared to the control groups (Fig.
5d). The cytotoxic activity of NK cells was decreased while the chemotactic effect was elevated in peritoneal fluid of endometriosis Of all the cell types detected, the number of NK cells increased most significantly in the endometriosis sample (5.26%) when compared to the control sample (1.99%) (Additional file 6: Table S5). Comparing the differential genes, we found that this cluster was enriched in the KEGG pathway of natural killer cell mediated cytotoxicity (Fig. 6a), which further confirmed the identification of cluster 8. We then focused on the function of NK cells in the peritoneal fluid of endometriosis. GO enrichment analysis and GSEA revealed that the cytotoxic activity of NK cells was decreased in endometriosis while the pro-inflammatory and chemotactic effects were elevated (Fig. 6b, c). The highly differential genes between the two groups showed that C-C motif chemokine ligand 3 (CCL3) and X-C motif chemokine ligand 1 (XCL1) were significantly elevated in endometriosis while cytotoxic molecules including GNLY, GZMB and GZMH were significantly down-regulated (Fig. 6d). Then, we sorted NK cells by flow cytometry, and the mRNA levels of functional genes were measured by quantitative PCR analysis. We found that the gene expression of GZMB was significantly down-regulated (0.546 ± 0.079 vs. 1.300 ± 0.278, P = 0.0149) while XCL1 was up-regulated (3.889 ± 0.466 vs. 1.672 ± 0.238, P = 0.0006) in endometriosis compared to the control groups (Fig. 6e). These findings indicated that the cytotoxic activity of NK cells was decreased while the chemotactic effect was elevated in the peritoneal fluid of endometriosis, which might play an important role in the pathology of endometriosis. Two subtypes of DCs in peritoneal fluid We identified two subtypes of DCs, cluster 2 and cluster 14 (Fig. 7a). Cluster 2 mapped closely to the well-established DC subtype of CD1C+ cDCs (Fig. 7b) [40]. Fc fragment of IgE receptor 1a (FCER1A), C-type lectin domain containing 10A (CLEC10A), mannose receptor C-type 1 (MRC1) and CD1E molecule (CD1E) were also highly expressed in this cluster (Fig. 7c and Additional file 8: Table S7). As for cluster 14, it mapped most closely to thrombomodulin+ (THBD+/CD141+) cDCs. But this commonly used marker (THBD) was a poor discriminator for this cluster, being also expressed by cells captured in the macrophage clusters (Fig. 7b). As C-type lectin domain containing 9A (CLEC9A) appeared to be a perfect discriminative marker gene for this cluster, we refer to this subtype as CLEC9A+ cDCs, as previously reported [22]. In addition, X-C motif chemokine receptor 1 (XCR1) and deoxyribonuclease 1 like 3 (DNASE1L3) were also highly and selectively expressed in this cluster. We also examined transcription factors that play important roles in the development of DCs. We found that cluster 2 expressed high levels of IRF4, while cluster 14 expressed high levels of IRF8, which is consistent with previous studies. Both of the DC subtypes had the abilities of antigen uptake, presentation and leukocyte activation (Fig. 7c). CD1C+ cDCs were the main subtype of DCs in peritoneal fluid, accounting for 93.3% of the total DCs. On the other hand, CLEC9A+ cDCs showed a special capacity to induce CD8+ CTL responses, with high expression of CLEC9A and XCR1, the well-known receptors to cross-present antigens to CD8+ T cells [41,42].
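The qPCR comparisons reported in this part (e.g., GZMB down- and XCL1 up-regulated in sorted NK cells) rest on the 2^−ΔΔCt fold-change rule and the Mann-Whitney test described in Methods. A minimal sketch follows; the Ct arrays and the choice of GAPDH as reference gene are made-up placeholders, not measured values.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt: target Ct normalized to a reference gene and a control group."""
    d_ct = np.mean(ct_target) - np.mean(ct_ref)                  # dCt, sample
    d_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)   # dCt, control
    return 2.0 ** -(d_ct - d_ct_ctrl)

# Hypothetical triplicate Ct values for one patient pair.
gzmb_endo, gapdh_endo = [26.1, 26.3, 26.0], [18.2, 18.1, 18.3]
gzmb_ctrl, gapdh_ctrl = [25.0, 25.2, 25.1], [18.2, 18.0, 18.1]
print(fold_change(gzmb_endo, gapdh_endo, gzmb_ctrl, gapdh_ctrl))

# Two-group comparison of per-patient fold changes (illustrative numbers).
print(mannwhitneyu([0.46, 0.61, 0.52, 0.58],
                   [1.15, 1.42, 1.28, 1.35], alternative="two-sided"))
```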
NKDCs were first discovered in peritoneal fluid As a rare cell type, NKDCs were not previously reported in peritoneal fluid, so we investigated their differential marker genes and enriched functions. We found that NKDCs expressed high levels of DC markers (CD1C, ITGAX and CD1E) and MHC class II receptors (Additional file 5: Table S4), and also expressed NK cell markers (KLRB1, KLRD1 and NKG7) and T cell markers (CD3D, CD3E and CD3G) (Additional file 1: Figure S4). Enriched GO analysis revealed that NKDCs had the abilities of antigen processing and presentation, T cell activation and response to IFN-γ, indicating that NKDCs had abilities of both DCs and NK cells (Fig. 7d). These findings were consistent with previous reports [25]. Furthermore, we confirmed the existence of NKDCs in peritoneal fluid by double immunofluorescence staining (Fig. 7e). The dysfunction of T cells in peritoneal fluid of endometriosis For T cells, we identified two subtypes (Fig. 7f). Cluster 4 mostly mapped to CD8+ T cells while cluster 5 mapped most closely to CD4+ T cells (Additional file 1: Figure S5a). KLRD1, CCL4, CCL5, GZMH, GZMA and granulysin (GNLY) were highly expressed in CD8+ T cells, indicating their cytotoxic and effector functions. The naïve markers, including interleukin 7 receptor (IL7R) and lymphotoxin beta (LTB), were expressed at high levels in CD4+ T cells (Fig. 7g and Additional file 9: Table S8). We further investigated the markers for regulatory T cells, which were reported in previous literature to play an important role in endometriosis [16,43]. Unfortunately, these markers, including forkhead box P3 (FOXP3), interleukin 2 receptor subunit alpha (IL2RA) and IKAROS family zinc finger 2 (IKZF2), were expressed at low levels and did not form a cluster (Additional file 1: Figure S5b). We compared the functions of T cells between the endometriosis and control samples as well. The number of T cells was elevated in the endometriosis sample (Additional file 6: Table S5). However, the cytotoxic effect and chemotactic activity of T cells were found to be dysfunctional in endometriosis by bioinformatic analysis (Additional file 1: Figure S5c). Mast cells and other cell types in peritoneal fluid There were also two subtypes of mast cells (Fig. 7h). Cluster 13 seemed to be the activated mast cells, as tryptase beta 2 (TPSB2) and tryptase alpha/beta 1 (TPSAB1), which encode tryptases, were highly and selectively expressed in this cluster. KIT proto-oncogene, receptor tyrosine kinase (KIT), another activating marker for mast cells, was also highly expressed in cluster 13 (Fig. 7i and Additional file 10: Table S9) [44]. Cluster 16 might be a transition state from basophils, as these cells expressed high levels of Charcot-Leyden crystal galectin (CLC), membrane spanning 4-domains A3 (MS4A3), interleukin 3 receptor subunit alpha (IL3RA) and GATA2 [45]. For plasma cells and the other cell types with small numbers, we failed to conclude a significant functional difference due to the small number of cells. Discussion Previous studies on the investigation of peritoneal fluid cell contents mainly relied on flow cytometry or histological morphology [14,46], while these techniques need prior knowledge and are limited to a small number of parameters. Here, we used scRNA-seq for the first time to investigate the cell contents and draw a comprehensive map of cell types in peritoneal fluid. We found that the cells in peritoneal fluid were almost all immune cells, responsible for the clearance of refluxed menstrual debris and tissue defense.
Macrophages are the largest immune population in peritoneal fluid, followed by T cells and DCs, which is similar to previous studies [14,46]. We also identified other groups of cells, including NK cells, mast cells, plasma cells and epithelial cells. Interestingly, we found an intermediate cell type which was named NKDCs by Pillarisetty et al. [25]. We also found new cell subtypes of macrophages and revealed their functions by bioinformatic analysis as well. However, complexity in the classification of cell groups was found in the 10× experiment, so the results should be treated with caution. Nevertheless, our study provides a fresh insight into peritoneal fluid cell contents and offers a useful resource for understanding menstruation and the pathology of endometriosis. Our study consolidates and reinforces previous findings that immune dysfunction does exist in endometriosis [10]. As the first line of innate immunity, macrophages occupy the largest immune population in peritoneal fluid and show functional changes in endometriosis compared to control patients. The ability of phagocytosis is defective in all seven subtypes, which might lead to incomplete clearance of refluxed menstrual debris and survival of endometrial cells. However, the pro-inflammatory, angiogenesis, adhesion and fibrosis effects are all elevated in endometriosis, which has been shown by previous studies [47]. Such functional changes also exist in other immune cells. NK cells are another main immune cell type that eliminates refluxed menstrual debris. Although the numbers of NK cells are elevated, their cytotoxic activity is found to be down-regulated in endometriosis. This decreased cytotoxic activity also exists in T cells. These findings support the view that immune dysfunction plays a central role in the development of endometriosis. Therefore, we provide new insights into the peritoneal microenvironment in patients with advanced endometriosis and highlight several points of importance. Firstly, several novel cell subtypes, including TCR+ macrophages, proliferating macrophages and NKDCs, are present in peritoneal fluid. Secondly, macrophages in peritoneal fluid do not comport with the simplified pro-inflammatory/pro-repair polarization model. However, they might follow a certain differentiation trajectory after endometrial tissues invade the peritoneal cavity. Thirdly, the functions of immune cells in patients with endometriosis are defective. Generally speaking, the phagocytic and toxic effects of the immune cells are reduced while the pro-inflammatory and chemotactic effects are elevated. This immune dysfunction might play a central role in the pathology of endometriosis. The present study has several limitations. Firstly, selection bias was inevitable because of the small number of cases, although we selected a patient with advanced endometriosis and severe dysmenorrhea and a control patient without pelvic abnormalities verified by laparoscopic surgery. Secondly, our results might only reveal the peritoneal microenvironment of the early proliferative phase, as the cell contents and functions might change in different stages of the menstrual cycle. Therefore, well-designed larger-scale studies are required. Conclusion Here, a comprehensive map of the overall cells in peritoneal fluid was exhibited for the first time by scRNA-seq. We provided a large-scale and high-dimensional characterization of the peritoneal microenvironment and reported for the first time several novel cell subtypes, including TCR+ macrophages, proliferating macrophages and natural killer dendritic cells, in peritoneal fluid. The results also consolidate that immune dysfunction does exist in endometriosis and offer a useful resource for immunotherapy of endometriosis.
2021-05-27T13:29:35.412Z
2021-05-26T00:00:00.000
{ "year": 2021, "sha1": "6da4c28a884224deef14f48eb96b7f3c76a4f0bc", "oa_license": "CCBY", "oa_url": "https://cellandbioscience.biomedcentral.com/track/pdf/10.1186/s13578-021-00613-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2013d805042e0b7c4b3f1aa4e7a740a3d6c2ba0e", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54710221
pes2o/s2orc
v3-fos-license
Characterization of Rain Specific Attenuation and Frequency Scaling Method for Satellite Communication in South Korea The attenuation induced by rain is prominent in satellite communication at Ku and Ka bands. This paper studies the empirical determination of the power-law coefficients which support the calculation of specific attenuation from knowledge of the rain rate at Ku and Ka bands for Koreasat 6 and COMS1 in South Korea, based on three years of measurements. Rain rate data were measured through an OTT Parsivel, which shows a rain rate of about 50 mm/hr and attenuations of 10.7, 11.6, and 11.3 dB for 12.25, 19.8, and 20.73 GHz, respectively, for 0.01% of the time for the combined values of the rain rate and rain attenuation statistics. Comparison with the measured data illustrates the suitability for estimation of signal attenuation in Ku and Ka bands, whose validation is done through comparison with prominent rain attenuation models, namely, ITU-R P.618-12 and ITU-R P.838-3, with the use of empirically determined coefficient sets. The result indicates the significance of the ITU-R recommended regression coefficients of rain specific attenuation. Furthermore, an overview of the predicted year-wise rain attenuation estimation for Ka band in the same link as well as a different link is studied, which is obtained from the ITU-R P.618-12 frequency scaling method. Introduction Rainfall has been recognized as one of the atmospheric effects that has serious impacts on radio wave propagation [1]. The path attenuation caused by heavy rainfall can cause signals to become indistinguishable from the noise signal of the receiver [2]. The higher frequency bands such as Ku (12/14 GHz) and Ka (20/30 GHz) are most effective in satellite communication and promise to meet future demands for higher data rate services. In this concern, satellite communication plays a crucial role, but atmospheric propagation effects impair the availability and quality of satellite links during the service period [3]. The higher frequency bands have been preferable for providing direct to home (DTH) multimedia services [4]. Rain attenuation in satellite communication systems operating at Ka band frequencies is more severe than usually experienced at lower frequency bands [5]. A number of mitigation techniques have been envisioned and experimented with over the years, in an attempt to overcome the problem and to make Ka band satellite applications as commercially viable as those at Ku band [6]. Direct measurement of rain attenuation for all of the ground terminal locations in an operational network is not practical, so modeling and prediction methods must be used for better estimation of the expected attenuation at each location [7]. The methods for the prediction of rain attenuation on a given path have been grouped into two categories, namely, physical and semiempirical approaches. The physical approach considers the path attenuation as an integral of all individual increments of rain attenuation caused by the drops encountered along the path. Unfortunately, rain cannot be described accurately along the path without an extensive meteorological database, which does not exist in most regions of the world [8]. In addition, when the physical approach is used, all the input parameters needed for the analysis are not readily available [9]. Most prediction models therefore resort to semiempirical approaches which depend on two factors, namely, the rain rate at a point on the surface of the earth and the effective path length over which the rain can be
considered to be homogeneous [10]. The attenuation on any given path depends on the value of specific attenuation, frequency, polarization, temperature, path length, and latitude [11]. When comparative analyses of the various rain attenuation prediction models for earth-space communication have been carried out against measured results, ITU-R P.618-12 is preferred for both its inherent simplicity and reasonable accuracy, at least for frequencies up to approximately 55 GHz [12]. The short integration time rain rate is an essential input parameter required in prediction models for rain attenuation. In this regard, a local prediction model for the 1-minute rain rate has been analyzed in South Korea, where a modified polynomial shows predictive accuracy for the estimation of the 1-minute rain rate distribution [13][14][15]. Similarly, rain attenuation has been studied for the Koreasat-3 satellite from the database provided by the Yong-in Satellite Control Office, where the ITU-R prediction model for earth-space communication was analyzed [16,17]. The preliminary bases for the estimation of rain attenuation on slant paths applicable to Ka band have been studied with the combined values of rain attenuation for three years in South Korea [18]. In this paper, a technique for predicting the rain attenuation of Ka band satellite signals during rain events at Mokdong-13 na-gil, Yangcheon-gu, Seoul, Republic of Korea, is presented, which is analyzed from the year-to-year variation of the rain attenuation database provided by the National Radio Research Agency (RRA) and studied for earth-space communication. Several prominent rain attenuation prediction methods have been studied in [21][22][23][24][25][26][27][28][29], where the performances of the ITU-R method required for the design of earth-space telecommunication systems have been compared. The problem of predicting attenuation by rain is quite difficult because of the nonuniform distribution of the rain rate along the entire path length. Most of the models show poor results in severe climates [30]. Given the sparsity of measured data, prediction models provide the best possible estimates from the available information. These measured distributions are compared with those predicted by the method currently recommended by the International Telecommunication Union Radiocommunication Sector, ITU-R P.618-12 [12], with the rain attenuation obtained by integrating the specific attenuation along the propagation path as per the ITU-R P.838-3 [31] approach. Although some research activities have been performed for Ku band satellite links in the South Korea region, fewer studies have been done for Ka band links. A theoretical study of the rain attenuation factor has been performed in South Korea, as mentioned in [32], which emphasizes the need for more experimental data for better comparison with the existing rain attenuation models. The techniques in [18] have been further studied, which utilize the power-law relationship between the effective path length and the rain rate at 0.01% of the time and predict the attenuation values for other time percentages as per the ITU-R P.
618-12 extrapolation approach. The rainfall rate at 0.01% of the time has been a useful parameter for the estimation of rain induced attenuation on slant paths, which can be seen in [33]. The prediction approach requires the statistical features of the signal variation at the location obtained over a long term period, three years in the present case, on an earth-to-space propagation database at Ku and Ka bands. This paper studies the measured rain attenuation as compared with the cumulative probability distributions of the ITU-R P.618-12 and ITU-R P.838-3 methods and studies suitable means to characterize the rain attenuation behavior for Ku and Ka band satellite communication links. The rest of this paper is organized as follows. Section 2 gives a brief overview of selected rain attenuation models. The experimental system along with the proposed approach is described in Section 3. Based on the pertinent models and experimental setup, Section 4 presents the statistical analysis with particular emphasis on predicted and measured rain attenuation along with the frequency scaling technique adopted to predict attenuation values for the 20.73 and 19.8 GHz links from 12.25 GHz for the same and different paths, respectively. Finally, conclusions are drawn in Section 5. Literature Review of Rain Attenuation Models The attenuation prediction model consists of three methodologies: firstly, the calculation of specific attenuation [31]; secondly, the calculation of rain height [34]; and, thirdly, the attenuation calculation methodology. The power law form of rain specific attenuation is widely used in calculating rain attenuation statistics. Path attenuation is essentially an integral of individual increments of rain attenuation caused by drops encountered along the path, which requires the physical approach. Unfortunately, rain cannot be described accurately along the path without an extensive meteorological database, which does not exist in most regions of the world. Hence, the total attenuation is determined as A = γ_R L_eff (1), where γ_R (dB/km) is the specific attenuation and L_eff (km) is the effective path length. L_eff is the length of a hypothetical path, obtained from radio data by dividing the total attenuation by the specific attenuation exceeded for the same percentage of time. The recommendation ITU-R P.838-3 [31] establishes the procedure for obtaining the specific attenuation from the rain intensity. The specific attenuation γ_R (dB/km) is obtained from the rain rate R_p (mm/h) exceeded at p percent of the time using the power-law relationship γ_R = k R_p^α (2), where k and α depend on the frequency and polarization of the electromagnetic wave. The constants appear in the recommendation tables of ITU-R P.838-3 [31] and can also be obtained by interpolation, considering a logarithmic scale for k and a linear scale for α. Most of the existing rain attenuation prediction models use the regression coefficients k and α to estimate the rain attenuation. The calculated regression coefficients are listed in Table 1. Secondly, the mean annual rain height is determined through the recommendation ITU-R P.839-4 [34], where the 0 °C isotherm height above mean sea level is obtained through the provided digital map. Thirdly, the attenuation calculation procedures differ as per the applicable methods. As an initial step, ITU-R P.
618-12 [12] has been tested against available field results of the experimental links for earth-space communication at 19.8 GHz for COMS1 and at 12.25 and 20.73 GHz for the Koreasat 6 satellite. This requires the rain rate at 0.01% of the time with 1-minute integration, the height above sea level of the earth station (km), the elevation angle θ, the latitude of the earth station, and the frequency (GHz). Similarly, the calculation of the horizontal reduction and vertical adjustment factors is based on the 0.01% time exceedance, whose detailed approach can be found in [12]. The effective path length can be obtained using (3a), whereas the total rain attenuation at 0.01% of the time (A_0.01) can be calculated using equation (3b): L_E = L_R ν_0.01 (3a), A_0.01 = γ_R L_E (3b), where L_R is the slant-path length through rain and ν_0.01 is the vertical adjustment factor. The predicted attenuation exceedances for other time percentages p of an average year can be acquired from the value of A_0.01 using the extrapolation approach as presented in (3c) [12]: A_p = A_0.01 (p/0.01)^−(0.655 + 0.033 ln p − 0.045 ln A_0.01 − β(1−p) sin θ) (3c). In addition, to calculate the effective path length, the Simple Attenuation Model (SAM) [35] has been adopted. This model studies the relationship between specific attenuation and rain rate, the statistics of the point rainfall intensity, and the spatial distribution of rainfall on earth-space communication links operating in the range of 10 to 35 GHz. It considers an exponential shape of the rain spatial distribution, which includes the distinction between stratiform and convective rain. The effective path length is calculated from an effective rain height, as expressed by (4a) and (4b). In stratiform rain, with point rain rate R_p% ≤ 10 mm/hr, the rain height is constant and equal to the 0 °C isotherm height above mean sea level, whose value is given by ITU-R P.839-4 [34]. Similarly, in convective rainstorms, when R_p% > 10 mm/hr, the effective rain height depends on the rain rate, because strong storms push rain higher into the atmosphere, lengthening the slant path. The attenuation is depicted as [35] A_p% = γ L_s; R_p% ≤ 10 mm/hr (4a), where A_p% and R_p% are the attenuation and rain rate exceeded for p% of the time, γ is the specific attenuation due to rainfall, L_s is the slant-path length up to the rain height, H_e is the rain height above mean sea level, H_s is the station height, and θ is the elevation angle to the top of the rain height. In convective rainstorms, when R_p% > 10 mm/hr, a modified value of the effective path length is used for the determination of slant-path attenuation as A_p% = γ L_s [1 − exp(−γ_0 α ln(R_p%/10) L_s cos θ)] / (γ_0 α ln(R_p%/10) L_s cos θ); R_p% > 10 mm/hr (4b), where γ_0 = 1/22. Furthermore, the empirical expression for the effective rain height is given as H_e = H_0 for R_p% ≤ 10 mm/hr and H_e = H_0 + log(R_p%/10) for R_p% > 10 mm/hr, where H_0 is the 0 °C isotherm height. The detailed description of the applicability of this model is given in [35]. Experimental Methods and Measurements The experimental setup is installed at the Korea Radio Promotion Association building, Mokdong-13 na-gil, Yangcheon-gu, Seoul, Republic of Korea (37°32′45.25″ N, 126°52′58.8″ E); the parameters of the receiving systems are listed in Table 3. Both receivers sample the data at an interval of 10 seconds, which are averaged over a 1-minute distribution for further statistical analyses. The satellite links have an availability of 99.95%, and the schematic for the setup is shown in Figure 1. In addition, an optical disdrometer, OTT Parsivel, is used to measure the rain rates; it operates simultaneously with the monitoring system of the satellite beacon signals, and its specification is also given in Table 2. These antennas were covered with radomes to prevent wet antenna conditions. The received signal levels were sampled every 10 seconds and finally averaged over 1 minute. The three years' rainfall intensities, with 99.95% validity of all time, were collected every 10 seconds by the OTT Parsivel, a laser-based optical disdrometer for simultaneous measurement of the particle size and velocity of all liquid and solid precipitation, whose detailed operation is mentioned in [18]. The schematic diagram for the system setup is shown in Figure 1.
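Before turning to the measured statistics, the prediction chain of Section 2 — the power law (2), the effective-path product (3b), and the extrapolation (3c) — can be condensed into a few lines of Python. In the sketch below, only R_0.01 = 50.35 mm/hr comes from this paper; the coefficients k and α, the effective path length, and the elevation angle are placeholder values, and β = 0 is used (as P.618 prescribes for station latitudes of 36° or more).

```python
import math

def rain_attenuation(p, r001, k, alpha, l_eff, theta_deg, beta=0.0):
    """A_p (dB) exceeded for p% of an average year, per ITU-R P.838-3/P.618-12."""
    gamma = k * r001 ** alpha               # specific attenuation (dB/km), eq. (2)
    a001 = gamma * l_eff                    # A_0.01 (dB), eq. (3b)
    exponent = -(0.655 + 0.033 * math.log(p) - 0.045 * math.log(a001)
                 - beta * (1.0 - p) * math.sin(math.radians(theta_deg)))
    return a001 * (p / 0.01) ** exponent    # extrapolation, eq. (3c)

# Placeholder geometry and coefficients; only R_0.01 is taken from this paper.
for p in (0.1, 0.01, 0.001):
    a_p = rain_attenuation(p, r001=50.35, k=0.024, alpha=1.15,
                           l_eff=4.0, theta_deg=45.0)
    print(f"p = {p:6.3f}%  A_p = {a_p:.2f} dB")
```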
As shown in Figure 1, the offset parabolic antennas face towards the Koreasat 6 and COMS 1 satellites. The circularly polarized beacon signals at 12.25 and 20.73 GHz from Koreasat 6 and the vertically polarized signal at 19.8 GHz were downconverted using separate Low Noise Block Converters (LNBC), as further described in [18]. The experimental data show that the received signal level is relatively higher in Ku band as compared to Ka band. The procedure used to obtain slant-path attenuation exceedances for other time instances is further detailed in [18], along with the cumulative distribution of the 1-minute rain rate for each year and when combined together. The values for rain attenuation and rain rate in different time series are extracted from the simultaneous measurement of rain attenuation and rain rate at 10-second intervals. The OTT Parsivel starts recording the rain rate whenever raindrops pass through the laser beam. These rain rates are converted to 1-minute rain rate instances through the procedure mentioned in [17]. Similarly, the beacon receivers record the beacon signals at a fixed signal level under no-rain conditions. Whenever there is rainfall, the corresponding beacon signals change from the reference level. Thus, the difference in the signal level determines the required attenuation values. These values are arranged into 1-minute instances by following the procedure mentioned in [17]. Finally, these data are combined in descending order, and the required 1-minute rain rate and rain attenuation values are determined for 1% to 0.001% of the time. For instance, 7.9, 10.7, and 19 dB; 6.2, 11.6, and 25.1 dB; and 5.7, 11.3, and 18.9 dB are observed at 12.25, 19.8, and 20.73 GHz for 0.1%, 0.01%, and 0.001% of the time, respectively. In addition, the year-wise variability of rain rate and rain attenuation is studied further to generalize the use of the regression coefficients k and α for better estimation against the measured rain attenuation statistics. Furthermore, the relation between rain attenuation and rain rate is shown in Figure 4 for the three years of measurement along with the combined values. This figure indicates that there is a positive correlation between the rain rate and the rain attenuation. Additionally, the experimental procedure carried out by the Korea Meteorological Administration (KMA) is studied for better analyses of the 1-minute rain rate statistics, which has been detailed in [18] along with the proposed approach. This emphasizes the need for longer-duration rainfall rate measurements provided by RRA. Numerical Results and Discussion The analysis presented above is applied here to numerically illustrate the relation between estimated and measured rain attenuation. To this end, the Complementary Cumulative Distribution Function (CCDF) of the combined values of rain attenuation for three years at 12.25, 19.8, and 20.73 GHz, along with the ITU-R P.618-12 predicted values, is shown in Figure 5 for several time percentages p at equiprobable exceedance probability (0.001% ≤ p ≤ 1%). As shown in the figure, for the 12.25 GHz link, the ITU-R P.618-12 model predictions show a close value at 0.01% of the time, but they differ significantly from the rain attenuation CCDF at lower and higher time percentages. At lower time percentages, the difference in prediction is relatively lower as compared to higher time percentages. For instance, the calculated 1-minute rain attenuation values are 7.9, 10.7, and 19 dB while the corresponding ITU-R P.
618-12 model estimates 3.73, 11.14, and 23.49 dB for 0.1%, 0.01%, and 0.001% of the time, respectively, under the combined values of the rain attenuation distribution. Under this aspect, the paper presents the discussion of ITU-R P.618-12 applied to the year-wise rain attenuation database and provides an overview of the applicable regression coefficients for the 12.25, 19.8, and 20.73 GHz frequency ranges. Better performance analyses of the regression coefficients are done via the error calculations in a later part. The measurement is performed for the 12.25, 19.8, and 20.73 GHz links in the years 2013, 2014, and 2015. The rain rate is plotted against the rain attenuation values arranged for time percentages of 1% to 0.001% after applying the power law so as to generate the regression coefficients k and α. The curve fitting tool of the MATLAB program was used to determine the empirical expression for the effective length L_eff, as mentioned in Table 4. L_eff as obtained from the SAM approach is plotted against the respective year-wise rain attenuation. In addition, the measured rain attenuation is divided by the estimated L_eff, which is again plotted against the rain rate measurement so as to obtain the required regression coefficients, k and α, for the specific attenuation γ_R at different rain rates. The rain attenuation is thus calculated as the product of the empirically generated k and α values with the estimated L_eff. Similarly, the k and α derived from the procedure explained in ITU-R P.838-3, whose values are listed in Table 1, are used to obtain the attenuation series for the ITU-R P.618-12 extrapolation approach. These values are listed in Table 4. The correlation coefficient R² is greater in 2013 for the three mentioned links, which indicates a better estimation of rain attenuation from the rain rate statistics. This might be due to the use of higher rain rate values at 0.01% of the time. Hence, comparisons of the attenuations obtained from the empirically generated k and α along with the ITU-R P.618-12 prediction method are graphically shown in Figures 6, 7, and 8. As noticed from Figures 6, 7, and 8, it is observed that the measured cumulative statistics of rain attenuation are overestimated by ITU-R P.618-12 for Ka band operation as compared to Ku band. The overestimation becomes highly pronounced at 20.73 GHz. The attenuation obtained after using the empirically derived k and α for the specific attenuation, when multiplied with the predicted L_eff, generates a better estimation against the measured attenuation values for the Ku and Ka band frequencies. For instance, the calculated rain attenuation values at 12.25, 19.8, and 20.73 GHz are 8.10, 10.80, and 14.70 dB; 8.40, 12.30, and 26.80 dB; and 3.10, 6.00, and 12.60 dB and 7.00, 12.30, and 16.10 dB; 5.10, 11.00, and 34.70 dB; and 6.20, 11.50, and 25.10 dB and 6.70, 12.10, and 17.50 dB; 5.30, 10.20, and 21.20 dB; and 4.30, 9.70, and 17.70 dB for 2013, 2014, and 2015, respectively, at 0.1%, 0.01%, and 0.001% of the time. These values have been overestimated particularly at 19.8 and 20.73 GHz, as depicted in Figures 7 and 8. The further error analyses justify the suitability of the mentioned approach. The rain attenuation prediction model for the Earth-satellite link is determined for exceeding time percentages in the range of 0.001% to 1%. Hence, the percentage errors ε(p) between the measured Earth-satellite attenuation data (A_p%,measured) in dB and the model's predictions (A_p%,predicted) in dB are obtained, at the same probability level p for the exceeding time percentage of interest on the link, in the percentage interval 10⁻³% < p < 1%, as follows: ε(p) = [(A_p%,predicted − A_p%,measured)/A_p%,measured] × 100 (%). In addition, the chi-square statistic is used to assess the methods' performance, which is given by [36] χ² = Σ (A_p%,predicted − A_p%,measured)² / A_p%,measured. The chi-square statistic is compared against a threshold value which depends on the degrees of freedom, whose calculated value is 12 for the given observed data. Similarly, for the standard deviation (STD) and root mean square (RMS) calculations, the approaches followed in [13] have been adopted. As per the recommendation of ITU-R P.
311-15 [37], the ratio of predicted to measured attenuation is calculated, and the natural logarithm of these error ratios is used as a test variable. The mean (μ_V), standard deviation (σ_V), and root mean square (ρ_V) of the test variable are then calculated to provide the statistics for the comparison of prediction methods, which are listed in Tables 5(a), 5(b), and 5(c), along with the chi-square values, for the 12.25, 19.8, and 20.73 GHz links, respectively; this also shows the evaluation procedure adopted for the comparison of prediction methods by the recommendation ITU-R P.311-15 [37]. As shown in Tables 5(a), 5(b), and 5(c), for the 12.25, 19.8, and 20.73 GHz links, the proposed empirically derived k and α values, when used in the specific attenuation calculation so as to obtain the desired attenuation values, result in lower chances of error as compared to the ITU-R P.618-12 approach when 0.001% ≤ p ≤ 1%, which is justified by the lower STD, RMS, and χ² values. In addition, ITU-R P.618-12 shows underestimation against the measured values in 2014 for the 12.25 GHz link. Thus, for all time percentages when 0.001% ≤ p ≤ 1%, for 12.25, 19.8, and 20.73 GHz, the empirically derived k and α values can be used. The numerical values presented in Tables 5(a), 5(b), and 5(c) justify the suitability of the rain attenuation statistics obtained from the proposed empirically derived k and α values. Furthermore, the attenuation statistics obtained with the use of the proposed empirical coefficients k and α result in lower values of ρ_V for the 12.25, 19.8, and 20.73 GHz links, as per the recommendation of ITU-R P.311-15 [37], which is justified by the lower values of μ_V and σ_V. Thus, this emphasizes the suitability of the proposed empirical coefficients for the estimation of rain attenuation on slant paths for earth-space communication in the 12.25, 19.8, and 20.73 GHz links. In order to better visualize the trend of the error metrics for the values presented in Tables 5(a)-5(c), we have maintained the plots for the relative error, standard deviation, root mean square, and chi-square values at the different time percentages, as depicted by Figures 9(a)-9(d), respectively. These figures show that the error metrics decrease and tend to be smaller for the rain attenuation derived from the empirically generated coefficient sets as compared to the rain attenuation statistics obtained from the ITU-R P.618-12 approach. Furthermore, the frequency scaling approach is tested for Ka band along the same and different communication paths. The frequency scaling method provides an alternative to rain attenuation models, which are considered to be excellent predictors, and provides a means for determining what to expect at a frequency for which there is no data. The analyses performed for the year-wise estimation of rain attenuation for 19.8 and 20.73 GHz are depicted in Figures 10 and 11. These show that the estimation is relatively higher than the measured values. Similarly, analyses are performed for the combined values of the rain attenuation statistics from 2013 till 2015, in which the attenuation values at 12.25 GHz are used for frequency scaling purposes and the attenuation series are predicted at 19.8 and 20.73 GHz, as depicted in Figure 13.
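The error figures of merit used in Tables 5(a)-5(c) and Figure 9 — the percentage error, the chi-square statistic, and the ITU-R P.311-15 log-ratio test variable with its mean, standard deviation, and RMS — can be sketched as below. The measured/predicted arrays reuse the 12.25 GHz combined-statistics numbers quoted earlier; everything else is generic.

```python
import numpy as np

def error_metrics(measured, predicted):
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    rel_err = 100.0 * (predicted - measured) / measured    # percentage error
    chi2 = np.sum((predicted - measured) ** 2 / measured)  # chi-square statistic
    v = np.log(predicted / measured)                       # P.311-15 test variable
    return {"relative_error_%": rel_err.round(1),
            "chi_square": round(float(chi2), 2),
            "mean_V": round(float(v.mean()), 3),
            "std_V": round(float(v.std(ddof=1)), 3),
            "rms_V": round(float(np.sqrt(np.mean(v ** 2))), 3)}

# 12.25 GHz combined statistics at 0.1%, 0.01%, 0.001% of the time (from text).
print(error_metrics(measured=[7.9, 10.7, 19.0], predicted=[3.73, 11.14, 23.49]))
```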
In order to minimize the error probabilities observed after applying frequency scaling, as noticed from Figure 12, a regression expression for the prediction error is introduced in (8), where Δ_p% is the prediction error and a_1, a_2, a_3, a_4, and a_5 are regression parameters whose values depend on the frequency and the radio path length of the link under consideration. Table 6 shows the values of the regression coefficients used for the estimation of the prediction errors against the full 1-minute rain rate distribution over 0.001% ≤ p% ≤ 1%. Hence, in order to improve the rain attenuation prediction approach based on the frequency scaling method, it is proposed that the prediction error obtained from (8) be subtracted from the estimates derived from the frequency scaling approach mentioned in ITU-R P.618-12. Figure 13 shows that there is an overestimation for the predicted 19.8 and 20.73 GHz links obtained from the application of the frequency scaling method. For instance, the calculated rain attenuations at the 19.8 and 20.73 GHz links are 6.2, 11.6, and 25.1 dB and 5.7, 11.3, and 18.9 dB, respectively, at 0.1%, 0.01%, and 0.001% of the time, while the estimated values are 18.58, 24.80, and 42.48 dB and 20.06, 26.71, and 45.53 dB. Interestingly, this overestimation is decreased and a better estimation is obtained with the subtraction of the prediction error as calculated from (8). The performance is judged through the error metrics presented in Table 7. As noticed from Table 7, the attenuation statistics obtained at 19.8 and 20.73 GHz after applying the frequency scaling technique show higher relative error chances for all time percentages when 0.001% ≤ p ≤ 1% of the time. On the contrary, the predicted attenuation statistics for both frequencies, calculated after subtracting the prediction error from the attenuation series obtained with the application of the frequency scaling technique mentioned in ITU-R P.618-12, generate lower relative error probabilities. For instance, the obtained relative error percentages are 200%, 114%, and 69% and 252%, 136%, and 141% for 19.8 and 20.73 GHz at 0.1%, 0.01%, and 0.001% of the time, respectively, from the application of the frequency scaling technique, whereas the relative error percentages obtained after subtracting the prediction error are 1%, 1%, and 0% and 0%, 2%, and 0%.
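The frequency-scaling step referred to throughout this part follows the ITU-R P.618 long-term scaling rule, A₂ = A₁(φ₂/φ₁)^(1−H) with φ(f) = f²/(1 + 10⁻⁴f²) and H = 1.12 × 10⁻³ (φ₂/φ₁)^0.5 (φ₁A₁)^0.55. A sketch applying it to the measured 12.25 GHz attenuations is given below; it reproduces the estimated values quoted above (e.g., 24.80 dB at 19.8 GHz for 0.01% of the time).

```python
def phi(f_ghz):
    return f_ghz ** 2 / (1.0 + 1e-4 * f_ghz ** 2)

def scale_attenuation(a1_db, f1_ghz, f2_ghz):
    """ITU-R P.618 long-term frequency scaling of rain attenuation (dB)."""
    p1, p2 = phi(f1_ghz), phi(f2_ghz)
    h = 1.12e-3 * (p2 / p1) ** 0.5 * (p1 * a1_db) ** 0.55
    return a1_db * (p2 / p1) ** (1.0 - h)

# Measured 12.25 GHz attenuations at 0.1%, 0.01%, 0.001% of the time.
for a1 in (7.9, 10.7, 19.0):
    print(a1, "dB ->",
          round(scale_attenuation(a1, 12.25, 19.8), 2), "dB @ 19.8 GHz,",
          round(scale_attenuation(a1, 12.25, 20.73), 2), "dB @ 20.73 GHz")
```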
Conclusions The rain attenuation and rain rate, collected over three years during 2013-2015 at 12.25 and 20.73 GHz from Koreasat 6 and at 19.8 GHz from COMS1 for the Mokdong Station, were analyzed to observe their statistical characteristics. In this paper, the local environmental propagation effects on slant-path attenuation for Ku and Ka bands have been investigated. As a first approach to this open research problem, a statistical analysis has been proposed to predict the time series of rain attenuation, effective path length, and specific attenuation at Ku and Ka bands over an earth-space path in South Korea. The measured rain attenuation distribution at 0.01% of the time in Ka band is higher than in Ku band, and the rain rate is found to be 50.35 mm/hr. It has been found that the empirically derived k and α show suitability in the calculation of the attenuation series for all time percentages when 0.001% ≤ p ≤ 1% of the time against the measured values. The predictive capabilities of the models are judged through the relative error analyses, standard deviation, root mean square, and chi-square values, as well as the recommendation of the ITU-R P.311-15 method. Thus, the paper presents the comparison of the measured data with the existing ITU-R rain attenuation prediction model for slant-path communication and shows a suitable method for the categorization of the best fitting approach. Rain attenuation predictions are made for a number of transmission paths at a fixed set of probability levels. However, it should be noted that the results are valid for these particular climates, and their feasibility for other regions requires more testing and analyses. On the whole, we can adopt the ITU-R P.618-12 rain attenuation model in South Korea for better prediction of rain attenuation until a sufficient database of rain attenuation and rain rate from other locations becomes available. In addition, frequency scaling schemes are analyzed as per the recommendation of ITU-R P.618-12, where the validation of this approach is performed by comparison with experimental data through error metrics, and suitable parameters are derived for better estimation of the prediction error. Overall, based on such results, the contribution describes some preliminary steps aiming at devising an appropriate methodology for the prediction of rain attenuation affecting earth-space communication links. However, more observations are needed from different locations to provide a statistically reliable estimation. Hence, the analyses of prominent rain attenuation methods support Ku and Ka band spectrum studies for broadband satellite applications and network-centric systems.
[Displaced captions and notes, condensed: Figure 2(a)-(c): variation of the 12.25 GHz, 20.73 GHz, and 19.8 GHz signal levels during a rain event [19]. Figures 6-8, 10, and 11 (legends): measured rain attenuation for 2013, 2014, and 2015 compared with the rain attenuation obtained from SAM and from ITU-R P.618-12 for each year. Figure 9: (a) plot of relative error, (b) standard deviation, (c) root mean square, and (d) chi-square for different time percentages. Figure 12 (legend): calculated and predicted errors for the 19.8 GHz and 20.73 GHz links. Table 4: regression coefficients for the three satellite links [20]. Note on sample counts: 1-minute rain rate and attenuation values are taken for about 16 (((3 × 365 × 24 × 60 × 0.001)/100) = 15.768 ≈ 16) instances for the 3 years of measurement; similarly, for each year of measurement, about 5 (((1 × 365 × 24 × 60 × 0.001)/100) = 5.256 ≈ 5) instances were considered at the same percentage of time. These instances are calculated by multiplying the number of years, the number of days in a year, and the hours and minutes in a day by the required time percentage.]
2018-12-13T20:57:52.928Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "6ed4fb0d6ba957e124514d104bd40172a9a5a4fb", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/ijap/2017/8694748.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6ed4fb0d6ba957e124514d104bd40172a9a5a4fb", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Engineering" ] }
15865419
pes2o/s2orc
v3-fos-license
Deuteron distribution in nuclear matter

We analyze the properties of deuteron-like structures in infinite, correlated nuclear matter, described by a realistic hamiltonian containing the Urbana $v_{14}$ two-nucleon and the Urbana TNI many-body potentials. The distribution of neutron-proton pairs, carrying the deuteron quantum numbers, is obtained as a function of the total momentum by computing the overlap between the nuclear matter in its ground state and the deuteron wave functions in correlated basis functions theory. We study the differences between the S- and D-wave components of the deuteron and those of the deuteron-like pair in the nuclear medium. The total number of deuteron-type pairs is computed and compared with the predictions of Levinger's quasideuteron model. The resulting Levinger's factor in nuclear matter at equilibrium density is 11.63. We use the local density approximation to estimate the Levinger's factor for heavy nuclei, obtaining results which are consistent with the available experimental data from photoreactions.

I. INTRODUCTION The suggestion that the nuclear response may be interpreted as the response of a collection of neutron-proton (np) pairs carrying the quantum numbers of the deuteron was first put forward in the fifties by Levinger [1] and Gottfried [2], to explain nuclear photoabsorption data. The basic idea underlying Levinger's quasideuteron (QD) model is that the nuclear photoabsorption cross section σ_A(E_γ), above the giant dipole resonance and below the pion threshold, is proportional to that corresponding to the break-up of a deuteron embedded in hadronic matter, denoted hereafter as σ_QD(E_γ):

σ_A(E_γ) = P_D σ_QD(E_γ) . (1)

The proportionality constant P_D has to be interpreted as the fraction of the A(A − 1)/2 nucleon-nucleon pairs which are of QD type, and it is given by

P_D = L Z(A − Z)/A , (2)

where A and Z denote the nuclear mass and charge and L is the so-called Levinger's factor. P_D can be directly calculated from the ground state wave function of the nucleus with mass A. Since the deuteron is a bound state, P_D scales with the number of particles A. From P_D, the probability of finding a deuteron-like nucleon pair in a complex nucleus can be easily extracted. Such probability can be obtained by normalizing the number of QD pairs P_D to the total number of pairs, and, therefore, it is inversely proportional to the number of particles. The probability is zero in infinite nuclear matter, unless the nuclear matter wave function contains a long range order, providing a condensation of QD pairs. According to Levinger's model [3], σ_QD(E_γ) is taken as the deuteron cross section times a damping function of exponential form, accounting for the Pauli blocking of the final states available to the nucleon ejected from the QD:

σ_QD(E_γ) = σ_d(E_γ) exp(−D/E_γ) . (3)

Subsequently, Laget [4] proposed to associate σ_QD(E_γ) with the transition amplitudes of virtual (π + ρ)-meson exchanges between the two nucleons of the QD pair, leading to a cross section denoted σ_d^exch(E_γ). Both models fit reasonably well the existing photoreaction data in heavy nuclei, but the resulting factors, L_Lev(A) and L_Laget(A), have different phenomenological values, with L_Laget(A) being about 20% larger than L_Lev(A). A generalization of the QD model was proposed by Frankfurt and Strikman [5], to explain the production of fast backward protons in semi-inclusive processes off nuclear targets. According to the model of Refs.
[5], generally referred to as the few nucleon correlation model, the structure of the nuclear wave function at short internucleon distances is dominated by strongly correlated multinucleon clusters. A quantitative understanding of the above reaction processes requires a microscopic calculation of the quasideuteron distribution P_D(k_D) in the nucleus, as a function of its momentum k_D. Moreover, the integral of P_D(k_D) over k_D, being proportional to P_D, provides an unbiased calculation of the Levinger's factor L. More recently, the occurrence and spatial structure of deuteron-like configurations in light nuclei has been studied using the Green's Function Monte Carlo (GFMC) method [6]. It is interesting to extend such analysis to heavier nuclei and to nuclear matter. Systematic quantitative investigations of nucleon-nucleon (NN) correlations in nuclear matter have been carried out within microscopic many-body theories (for a recent review see Ref. [7]). In particular, Correlated Basis Function (CBF) theory has been applied to obtain the nuclear matter momentum distribution [8] and spectral functions [9][10][11][12] from realistic hamiltonians. In this paper we use the same many-body framework to carry out an ab initio calculation of the momentum distribution P_D(k_D) of QD pairs in infinite nuclear matter, as well as of the associated total number of QD pairs per particle, P_D/A. The definition of the QD total momentum distribution in terms of the overlap between the nuclear matter and the deuteron ground state wave functions is given in Sec. II, where the many-body formalism employed in the calculations is also briefly outlined. In Sec. III the results of numerical calculations, including both the QD momentum distribution and P_D in nuclear matter at the empirical saturation density, ρ = 0.16 fm⁻³, are discussed and compared to the empirical estimates of the Levinger's factor. Finally, the summary and conclusions are given in Sec. IV.

II. FORMALISM The distribution of QD pairs with total momentum k_D in nuclear matter is defined by Eqs. (4) and (5) (sum over repeated greek indices is implicit hereafter), where J_D = 1 is the spin of the deuteron and R ≡ (r_3, . . . , r_A). In these equations, Ψ_NM and Φ_n denote the normalized nuclear matter ground state wave function and the wave function of the (A − 2)-nucleon system in the state n, respectively. The configuration space deuteron wave function (DWF) can be cast in the form of Eq. (6), where Ω is the normalization volume, R_ij = (r_i + r_j)/2, r_ij = r_i − r_j, |00⟩ is the spin-isospin singlet two-nucleon state and the relative motion of the pair is described by Eq. (7). In Eq. (7), u_D(r) and w_D(r) are the ℓ = 0 and ℓ = 2 components of the deuteron wave function, normalized according to ∫₀^∞ r² dr [u_D²(r) + w_D²(r)] = 1, σ_i^α (α = 1, 2, 3) denote the Pauli matrices, and the tensor operator S_12 is defined in the usual way (see the sketch below). In CBF theory |Ψ_NM⟩ is usually written, in coordinate space, in the form of Eq. (10), where R ≡ (r_1, . . . , r_A) specifies the nucleon positions, S is the symmetrization operator and Φ_0 is the Slater determinant describing a noninteracting Fermi gas of nucleons carrying momenta k with |k| ≤ k_F = (6π²ρ/ν)^(1/3), ν being the degeneracy of the momentum states (in symmetric nuclear matter ν = 4).
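For reference, a plausible LaTeX reconstruction of the relative-motion wave function, tensor operator and normalization that Eqs. (7)-(9) presumably contained, assuming the standard S- plus D-wave deuteron parameterization (the paper's exact conventions may differ):

```latex
\chi_M(\mathbf{r}) = \Big[ u_D(r) + \frac{S_{12}(\hat{\mathbf{r}})}{\sqrt{8}}\, w_D(r) \Big]\, \chi_{1M} ,
\qquad
S_{12}(\hat{\mathbf{r}}) = 3\,(\boldsymbol{\sigma}_1\cdot\hat{\mathbf{r}})(\boldsymbol{\sigma}_2\cdot\hat{\mathbf{r}}) - \boldsymbol{\sigma}_1\cdot\boldsymbol{\sigma}_2 ,
\qquad
\int_0^\infty r^2 dr\, \big[ u_D^2(r) + w_D^2(r) \big] = 1 .
```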
The operator F(ij), accounting for the correlation structure induced by the nucleon-nucleon (NN) interaction, has been chosen of the form proposed in Ref. [13], built from the correlation functions f_c(r), f_σ(r), f_τ(r), f_στ(r), f_t(r) and f_tτ(r), whose radial shapes are determined by minimizing the expectation value of the hamiltonian in the ground state described by Eq. (10) [13]. As r → ∞, f_c(r) → 1, while all other correlation functions go to zero. Summation over k_D of P_D(k_D) yields P_D/(2J_D + 1) in nuclear matter. This number, which corresponds to an extensive quantity and therefore is proportional to A, leads to a direct evaluation of the Levinger's factor L, to be compared with the value resulting from the phenomenological analyses [14,15] of the available experimental data on photoreactions [16,17]. The quantity defined by Eqs. (4) and (5) is related to the fully linked part of the two-nucleon density matrix, ρ^(2)(r_1, r_2, r_1′, r_2′). This part is the only one providing extra information on the N-N correlations with respect to that carried by the one-body density matrix, or, equivalently, by the nucleon momentum distribution [12]. Using standard cluster expansion techniques [18], P_D(k_D) can be written as a series of terms involving an increasing number of particles. We have calculated the cluster contributions associated with the diagrammatic structure shown in fig. 1, and its exchange counterpart, where the deuteron wave function Ψ_D(1, 2) is multiplied by the correlation operator F(1, 2). This corresponds to a dressed leading order approximation, whose validity has been checked in previous CBF calculations of the response function and of the spectral function of nuclear matter, and whose expression is given by Eqs. (12) and (13). In Eq. (13), Π_00 is the operator projecting onto the S = 0, T = 0 two-nucleon state, while P_σ and P_τ denote the spin- and isospin-exchange operators. The function n(r) is the correlated one-body density matrix [8], normalized as n(r = 0) = 1, and trivially related to the nucleon momentum distribution, n(k), through a Fourier transform. Evaluation of the trace appearing in Eq. (13) leads to a simple result expressed in terms of the functions U(r) and W(r), defined in Eqs. (19) and (20). The explicit expressions of the functions ∆u(r) and ∆w(r), given in Eqs. (21) and (22), yield the deviation of U(r) and W(r) from the bare components of the DWF, with h_c(r) = 1 − f_c(r). Note that in the absence of correlations, i.e. setting f_c(r) ≡ 1 and all other correlation functions identically equal to zero, U(r) and W(r) reduce to u_D(r) and w_D(r), respectively. Using the functions defined in Eqs. (19) and (20), the wave function describing the motion of the QD pair in nuclear matter can be written in the same form as the DWF (see Eqs. (4) and (7)). Using the Fourier transforms of U(r) and W(r), with j_0(kr) and j_2(kr) spherical Bessel functions as integration kernels, and the nucleon momentum distribution in nuclear matter, n(k), Eq. (12) can be rewritten in a form suitable for numerical evaluation. The above equations have been used to carry out the numerical calculations. It has to be noticed that the contributions arising from the non-commuting structure of the correlations reaching the four external vertices, 1, 2, 1′ and 2′, of the diagrammatical structure of fig. 1, are not exactly accounted for, but only according to the dressed leading order approximation.

III. RESULTS Fig. 2 shows the behavior of U(r) and W(r) evaluated using a many-body hamiltonian including the Urbana v_14 NN potential and supplemented by the TNI model of many-body forces [19].
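As a concrete illustration of the spherical Bessel transforms just mentioned, here is a minimal numerical sketch; the radial function and grids are toy placeholders, not the actual U(r) of the calculation.

```python
# Sketch: spherical Bessel transform U~(k) ~ int_0^inf dr r^2 j_0(kr) U(r),
# evaluated on a finite radial grid with a toy exponential in place of U(r).
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import simpson

r = np.linspace(1e-4, 20.0, 2000)        # relative distance grid, fm
U = np.exp(-0.23 * r)                    # placeholder S-wave function
k = np.linspace(0.0, 5.0, 100)           # momentum grid, fm^-1

U_k = np.array([simpson(r**2 * spherical_jn(0, kk * r) * U, x=r) for kk in k])
# The D-wave transform is analogous, with j_2(kr) in place of j_0(kr).
```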
For comparison, we also show the components of the Urbana v_14 DWF and the functions ∆u and ∆w defined in Eqs. (21) and (22), respectively. It appears that the main differences between deuteron and QD occur at r < 2 fm. At small relative distance (r < 1 fm), the effect of the nuclear medium leads to an appreciable suppression of U(r) with respect to u_D(r), whereas W(r) turns out to be substantially enhanced, compared to w_D(r). The momentum space behavior of |U(k)|, |W(k)|, |u_D(k)|, |w_D(k)|, |∆u(k)| and |∆w(k)| is displayed in fig. 3. The main effect of the nuclear medium appears to be a shift of the second minimum of both |U(k)| and |W(k)| towards lower values of k. Eqs. (21) and (22) show that the nuclear medium modifications to the DWF are driven by the functions H_t(r) = f_t(r) − 3f_tτ(r) and ∆H_c(r) = −f_σ(r) + 3f_τ(r) + 3f_στ(r), resulting from the combination of different components of the NN correlation operator. The radial dependence of H_t(r) and ∆H_c(r), illustrated in fig. 4, shows that the effect of scalar and spin-isospin correlations, described by ∆H_c(r), dominates at very short relative distance, whereas H_t(r), accounting for tensor correlations, has a significantly longer range. The distribution of deuteron pairs with total momentum k_D, P_D(k_D), resulting from our approach is displayed in fig. 5. Similarly, one can define the relative momentum distribution of the nucleons belonging to a QD pair in nuclear matter, P_D^rel(k). For example, in the Fermi gas model n(k) = θ(k_F − k), and the function φ(k) entering its definition takes a simple analytic form. The total number of pairs of the QD type in nuclear matter, P_D, can be obtained by momentum integration of either P_D^rel(k) or P_D(k_D) times the spin multiplicity, 2J_D + 1 = 3, of the deuteron. The calculation carried out using the correlated model of nuclear matter and Eq. (26) yields P_D/A = 2.895, to be compared with the Fermi gas model result of 3.406. In order to compare the calculated P_D to the number of QD pairs extracted from the analysis of photonuclear data we have to make a connection with the Levinger's formula given in Eqs. (1) and (2). The relation is given by Eq. (2); for symmetrical matter (Z = A/2), one has L = 4 P_D/A. The nuclear matter value resulting from our calculation gives L(∞) = 11.63. This value should be compared with that given by the phenomenological formula reported in Ref. [14], providing L_Lev(∞) = 9.26. Notice that, for a deuteron in a Fermi gas, L_FG(∞) = 13.6. Surface contributions to L(A) can be estimated by exploiting the calculation of the enhancement factor K in the electric dipole sum rule for finite nuclei of Ref. [20], performed within the CBF theory and Local Density Approximation (LDA). The enhancement factor is related to experimental data on photoreactions through Eq. (36), where σ_0 = 60 [Z(A − Z)/A] MeV mb and m_π c² is the π-meson production threshold. Therefore, the Levinger's factor can be related to K in the mass number range where the coefficient D in Eq. (3) is fairly A-independent, namely for sufficiently large values of A. By adding the surface contributions, as extracted from Ref. [20], to the nuclear matter bulk result, we get the finite-nucleus estimates of L(A). Essentially the same theory as the one used in this paper leads to a value which is ∼ 60% larger than the experimental one; this disagreement between theory and experiment has to be mainly traced back to the sizeable tail contributions to the electric dipole sum rule, absent in the definition of Eq. (36). IV.
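The bulk numbers above can be cross-checked against the symmetric-matter relation L = 4 P_D/A quoted earlier; a minimal sketch (rounding in the quoted P_D/A accounts for the small difference from 11.63):

```python
# Levinger's factor from the number of QD pairs per particle in
# symmetric nuclear matter (Z = A/2), using L = 4 * P_D / A.
def levinger_factor(pd_per_a):
    return 4.0 * pd_per_a

print(levinger_factor(2.895))  # ~11.58, close to the quoted L(inf) = 11.63
print(levinger_factor(3.406))  # ~13.62, matching the Fermi gas L_FG = 13.6
```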
CONCLUSIONS Correlated Basis Function theory of the two-body density matrix has been applied to compute the distribution P_D(k_D) of neutron-proton pairs characterized by the deuteron wave function and having total momentum k_D. It has been found that this distribution in nuclear matter is mostly concentrated at 0 ≤ |k_D| ≤ 2k_F, short range NN correlations being responsible for the appearance of the tail of P_D(k_D) at larger momenta. In addition, the analysis described in this paper shows that when a deuteron is embedded in nuclear matter at equilibrium density, its wave function gets appreciably modified by the surrounding medium. While in the case of the S-wave component the difference is mostly visible at small relative distance (r < 1 fm), the D-wave component of the QD appears to be significantly quenched, with respect to the deuteron w_D(r), over the range 0 < r < 2 fm. It has to be pointed out, however, that the radius of the QD configuration is very close to the deuteron radius, the difference being ∼ 2%. This result is in agreement with the conclusions of a recent study of deuteron-like configurations in light nuclei [6]. The authors of Ref. [6] find that the density distributions of np pairs carrying the deuteron quantum numbers in ³He, ⁴He, ⁶Li, ⁷Li and ¹⁶O exhibit size and structure similar to those observed in the deuteron. The relative momentum distribution of a QD pair, P_D^rel(k), extends into the region |k| > k_F, where it appears to be strongly suppressed with respect to the corresponding deuteron momentum distribution |Ψ_D(k)|², although |Ψ_QD(k)|² is larger than |Ψ_D(k)|² at high k. It has to be pointed out that the behavior of P_D^rel(k) at k > k_F is entirely dictated by the high momentum tail of the nuclear matter momentum distribution, produced by strong short range NN correlations. Within the Fermi gas model n(k > k_F) ≡ 0, and P_D^rel(k > k_F) vanishes identically. Higher order cluster terms, neglected in this paper and arising from the inclusion of additional bonds in the diagrammatical structure of fig. 1, are not expected to change the main conclusions of the present paper, neither regarding the behavior of the deuteron distribution in nuclear matter, nor as far as the discussion on the Levinger's factor is concerned. In view of the relevance that P_D(k_D) and |Ψ_QD(k)|² may assume in the study of those lepton-nucleus reactions where the ejected hadron is in kinematical regions forbidden to lepton-nucleon processes, the calculations presented in this paper need to be extended i) by introducing higher order cluster terms in the expansion of the two-body density matrix, and ii) by explicitly considering finite nuclei wave functions. Work in these directions is in progress.

[Figure caption, partially recovered: data points (squares) and Ahrens et al. [17] (crosses and diamonds) are taken from ref. [14]; the empirical values of L_Lev(A) represented by circles are from ref. [21].]
2014-10-01T00:00:00.000Z
2001-06-19T00:00:00.000
{ "year": 2001, "sha1": "72c819431b707461d86e8141bb5b991745946101", "oa_license": null, "oa_url": "http://arxiv.org/pdf/nucl-th/0106042v1.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7560e877675bc4541bc52fd6e8c4b37b3b0bd019", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
163939353
pes2o/s2orc
v3-fos-license
The Chemist as Anti-Hero: Walter White and Sherlock Holmes as Case Studies

Compared to chemists in film, chemists in modern television drama are underexamined by scholars, even though the genre is a powerful processor of images and ideas about culture and society. This critical essay draws on ideas from science communication, media studies and literary studies to examine the representation of chemists and chemistry in the acclaimed television dramas "Breaking Bad" and "Sherlock." A textual analysis of these shows, chosen as critical case studies, demonstrates that they both portray their chemist protagonists as anti-heroes, who are morally ambivalent characters. The essay argues that both shows portray chemistry as uncommon knowledge, which is conducted largely in isolation or in secret. Although the shows represent chemistry as an empirical and experimental science, they demonstrate that the craft of chemistry is not ethically neutral. In "Breaking Bad," Walter White chooses to stop using his chemistry skills to teach, and subsequently slides into an immoral world of death, destruction and destabilization. In "Sherlock," Sherlock Holmes is an amoral, but benign, figure who uses his forensic knowledge to save lives and confront crime. These representations demonstrate that ethical choices are entwined with the practice of chemistry and that these choices, in turn, have social consequences.

This critical essay aims to partially redress this overlooked portrayal of chemists by investigating the patterns of representation of chemists and chemistry in contemporary television drama. Representation is used here as a concept from media studies to examine how the world is represented in television drama, a method of analysis that involves the close interrogation of media texts and their social contexts. (5) This approach informs the central questions of this essay: What images of chemists and chemistry are presented in television drama? What does television drama reveal about the position of chemistry in society? What contribution does television drama make to the public understanding of chemistry? The essay offers answers to these questions by analyzing two specific television dramas, "Breaking Bad" and "Sherlock," which have been purposefully chosen as critical case studies. (6) Both shows have protagonists --Walter White and Sherlock Holmes --who are chemists or have expertise in chemistry. Both shows have received popular and critical acclaim, highlighting their value as influential cultural products that warrant critical analysis. The selection of "Breaking Bad," which was produced in the U.S. by cable network AMC, and "Sherlock," which was made by the U.K. public service broadcaster, the BBC, allows for the analysis of cross-cultural portrayals of chemists. Additionally, the shows present complex characters and stories that run across multiple episodes, providing a rich body of material to analyze and so allowing multiple patterns of representation to be examined. These complicated protagonists, appearing in several series of their shows, are difficult to categorize into the distinct character types that other studies of scientists in fiction and film have tended to use. Chemists have been classed into categories such as the evil alchemist, the noble scientist, the foolish scientist, the inhuman researcher, the scientist as adventurer, the mad, bad, dangerous scientist and the helpless scientist, (7) or as eccentrics or anti-social geeks.
(8) But such categories, even in compound form, provide only a simplified shorthand for scientist types. Placing White and Holmes within these broad categories risks draining them of their psychological complexity. Instead, this essay explores these characters using the idea from literary studies of the anti-hero, an approach that allows for the examination of their complex representation, with their contradictions, tensions and individual quirks. The anti-hero is a central character in a drama "who lacks the qualities of nobility and magnanimity expected of traditional heroes and heroines in romances and epics." (9) The anti-hero exhibits amoral and selfish tendencies, in contrast to the hero who emerges victorious after a significant struggle with the ability to bestow benefits on humankind. (10) The anti-hero is essentially ambiguous and ambivalent in that they are neither heroic nor villainous. (11) Critics have labeled White (12) and Holmes (13) as anti-heroic, but have not developed this idea to explore what it means for the wider representation of chemistry. Yet this idea of the anti-hero is useful for analyzing White and Holmes, because it resonates with chemistry's broad social and cultural position. Examining the field's status in society, the editors of The Public Image of Chemistry note that the popular associations of the field range from "poisons, hazards, chemical warfare and environmental pollution to alchemical pseudo-science, sorcery and mad scientists." (14) The chemist Luciano Caglioti writes that chemical products, like penicillin, dynamite, insecticides and petrochemicals, are characterized by ambiguity in that they can, at once, improve life and make living more hazardous. (15) For chemist and popular science writer Pierre Laszlo, these associations contribute to the social impact of the field, as the public suffer from "chemophobia." (16) The portrayals of chemists and chemistry in "Breaking Bad" and "Sherlock" are produced, and circulate, in this social and cultural environment.

"Breaking Bad" and Chemistry as Uncommon Knowledge

At the beginning of "Breaking Bad," Walter "Walt" White, played by Bryan Cranston, is a self-described overqualified high school chemistry teacher. After contributing to the work of a Nobel Prize-winning research team early in his career, he has failed to live up to his academic promise. He earns $43,700 in his job in Albuquerque, New Mexico, a salary he supplements by working in a local car wash. Married to Skyler, with a son, Walter Junior --joined in season three by a daughter, Holly --White has watched his former best friend at Cal Tech create a fortune as an industrial chemist and marry White's ex-girlfriend. Diagnosed with inoperable lung cancer, he decides to provide for his family after his death by turning his prodigious talent as a chemist to something darker and more dangerous: the illicit production of crystal methamphetamine. He teams up with a former student and small-time drug dealer, Jesse Pinkman, to manufacture a potent brand of meth identified by its distinctive blue color and its extraordinary purity. White progresses from "cooking" meth in the back of a dilapidated Winnebago using equipment stolen from his school, to industrial drug production in a secret laboratory with weekly quotas, run by meth kingpin Gustavo "Gus" Fring. Walt's immersion into the gruesome and dehumanizing drug trade provides him with what one critic called "a sort of existential rejuvenation."
(17) The show dramatizes how Walt's initial motivation --to provide for his family --is gradually surpassed by his desire to make his mark on the world through his chemistry. The series features several recurring patterns about the nature of chemistry as a science. Chemistry is portrayed as a form of "uncommon knowledge." This knowledge must be earned. For example, sitting on a desk in front of his class, with a poster of the periodic table hanging in front of the chalkboard behind him, Walt discusses the arcane wonders of chemistry that only extended study reveals: Mono-alkenes. Di-olefins. Tri-enes. Poly-enes. I mean the nomenclature alone is enough to make your head spin. But when you start to feel overwhelmed --and you will --just keep in mind that one element: carbon. Carbon is at the center of it all. There is no life without carbon. Nowhere that we know of in the universe. Everything that lives . . . lived . . . will live . . . carbon. (18) Walt uses his "uncommon knowledge" to produce meth. His brand of the drug is so pure because he synthesized it himself using his advanced knowledge. Walt trains Jesse --whose first forays into the chemistry of drug production were characterized by his addition of chili powder as a special ingredient --in the advanced chemical skills needed to cook high-grade meth. But that knowledge is not easily acquired. For example, when Jesse first manufactures a batch of blue meth on his own and shows the results to Walt, their conversation reveals how technically accomplished Jesse has become, but also how much more he needs to learn. The show dramatizes Jesse's progressive maturation as a scientist, as he learns hands-on in the lab, under Walter's tutelage. (20) His training is complete in season four, when he travels to Mexico to show a drug cartel's collection of chemists how to make blue meth. The head cartel chemist's initial dismissal of Jesse changes after the young man demonstrates his newfound "uncommon knowledge" by making meth with a 96 percent purity. (21) Yet Walt's talent surpasses that of every other chemist on the show. The chemist Gale Boetticher, originally picked to run the industrial meth production lab for Fring, admits that Walt's meth is the product of unique talent. Gale tells Gus: I can guarantee you a purity of 96 percent. I'm proud of that figure. It's a hard-earned figure, 96. However, this other product is 99. Maybe even a touch beyond that . . . But that last three percent, it may not sound like a lot, but it is. It's tremendous. It's a tremendous gulf. (22) Threatened with death by Gus, Walt argues that his specialist knowledge means that his brand of meth cannot be cooked by anyone who just follows a formula. The thug Victor claims to know Walt's meth cooking process after observing him over several weeks. Victor says: "It's called a cook because everything comes down to following a recipe." Walt responds: You're not flipping hamburgers here, pal. What happens when you get a bad barrel of precursor, huh? How would you even know it? And what happens in summer, when . . . when . . . when the humidity rises and your product goes cloudy? (23) When Gus hints that Walt is proprietorial about his meth formula, Walt emphasizes that his skills are based on his deference to the intellectual integrity of his specialist field. Walt says: "I simply respect the chemistry. The chemistry must be respected."
(24)

The Craft of Methamphetamine Production

Yet while chemistry is portrayed as "uncommon knowledge," Walt embodies the idea of the chemist as craftsman. The show represents the particular scientific work undertaken by the chemist. Walt is talented, but he is also industrious and careful. He knows the importance of having the correct equipment. He is excited and astounded by the quality of the lab equipment that Gus procures for him: "My God . . . thorium oxide for a catalyst bed. Look at the size of this reaction vessel. There's gotta be . . . There's gotta be 1,200 liters." (25) The show features several sequences that show quality meth production as the result of hard work. Walt and Jesse turn up for work each morning and use glassware and specialized machinery to create compounds. They regularly take apart and laboriously clean their equipment. The show does not mystify chemistry. It depicts the work of the field as experimental and empirical. It shows chemistry as a science of synthesis. It shows chemists making new materials. Jesse says that their work is art, but Walt corrects him by saying it is just basic chemistry. The craft of chemistry is, at times, presented as a pure process. Gale tells Walt that he holds a master's in organic chemistry, with a specialty in X-ray crystallography. Gale tells how he had been studying for a doctorate at the University of Colorado, following an established career path, but did not enjoy the politics of academic chemistry. The purity of the lab drives him. He tells Walt: -I love the lab because it's all still magic, you know, chemistry. -Walt: It is. It is magic. It still is. (26) In one sequence, the two expert craftsmen make meth together as a whimsical song plays as background music. The chemists cook. They play chess in their breaks. (27) They find beauty, peace and joy in the craft. But neither chemist acknowledges, in the scene, the effect that their illicit work will have on society. Both essentially turn their backs on establishment science and science education. Through his actions, Walt has abandoned his belief in science as a social good. The portrayal of chemistry as socially or ethically problematic is signaled by the fact that the chemistry takes place in secret. Walt first produces his meth in a van in the middle of the New Mexico desert. Then he makes it in an underground lab, hidden in an industrial plant tied to Los Pollos Hermanos, the fried chicken business run by Gus as a front for his drug network. The secret locations are another manifestation of the ambiguity of chemistry. Walt is engaged in what science communication scholar Peter Weingart and colleagues call a "private science where the scientist has chosen to leave the community or was excommunicated by it because he or she transgressed the boundaries into forbidden research territory." (28)

The Ambiguity of Walter White

The dramatic way Walt succumbs to circumstantial economic pressures means the show resonates with wider contemporary social uncertainties. For the New York Times, the show taps into the "sense of economic and social backsliding," as the middle-class White family engages in an "undignified struggle for dignity." (29) The same newspaper in another review --with the headline "Better Living Through Chemistry" --says that the dark mood of the series is so in tune with the post-bust economic times that its "extremist misery . . . feels virtually like reportage." (30) Walt is presented as a deeply ambiguous figure.
He faces a complex moral choice when he opts to use his chemistry talents for ill. The same passion he once used to teach his students is used to manufacture meth. He remains loyal to Jesse, who was a sort of surrogate son to him. He breaks bad --the expression from the American southwest that describes a good person doing bad things --for his family, yet allows Skyler to become ever more involved in his illicit activities. A key moment in White's moral slide is the death of Jane, Jesse's girlfriend, who is aware of Walt's secret life in the drug trade. She dies after choking on her own vomit while passed out in a heroin haze alongside Jesse. Walt does not intervene to save her. He watches her die. His motivation is selfish, as he knows this means Jesse will not break up his partnership with Walt and also that Jane will never reveal his unlawful work. But Walt, while unknowingly drugged, alludes to his suppressed guilt. He tells Jesse: "If I had just lived right up until that moment and not one second more. That would have been perfect." In the same scene, Walt reveals that he is aware of the immorality of his work and that his continuation beyond the point when he had earned enough money meant he crossed an ethical boundary. He says: I missed it. It was some perfect moment and it passed me right by. I had to have enough to leave them. That was the whole point. None of this . . . None of this makes any sense if I didn't have enough. But it had to be before she found out, Skyler. It had to be before that. (31) Walt's ambiguity is linked implicitly to an ambivalent view of science through his choice of pseudonym in the drug trade: Heisenberg. The German Nobel-winning physicist is best known for his Uncertainty Principle, which states that the position and the momentum of a particle cannot both be known precisely at the same time. Heisenberg has remained controversial also for his atomic research in World War II. There are subtle references, too, to the development of atomic weapons, an historical association with Los Alamos in New Mexico that is made explicit when Walt meets Jesse's street dealer friends in the atomic museum in Albuquerque. The ambivalence surrounding the Manhattan Project and its scientific work is bound up subtly with the figure of White. White's work dramatizes a fundamental philosophical tension about the social role of chemistry. Is the craft of the chemist morally neutral? Can purity of craft be separated from the social consequences that flow from laboratory work? At a time when they are crucial cogs in Gus's meth empire, Walt tells Jesse: "You are not a murderer. I am not and you are not. It's as simple as that." (32) Yet by this time, several victims had died as Walt and Jesse became more embedded in their dark trade. Walt killed two rival dealers, Emilio and Krazy-8, choking one of them to death with a bicycle lock. Jesse's friend Combo was killed while selling blue meth on a street corner in another gang's territory. By the end of season four, Walt is responsible for at least nine deaths (33) --fatalities that resulted from Walt's complex moral choices. The social consequences of illicit chemistry work are made clear. After Jane dies, her father is so distracted by grief in his work as an air-traffic controller that he fails to prevent a mid-air plane collision over Albuquerque. Debris and body parts rain down on the city and on Walt's home, a metaphor for the social carnage wrought by his work. The cold-blooded murder of Gale is similarly symbolic.
Walt believes that Gus is planning to kill him and Jesse, now that Gale has learned from Walt how to manufacture blue meth, so to survive --and to continue to be the one holding "uncommon knowledge" --Walt tells Jesse to kill Gale. Jesse reluctantly does so. Gale --the herbal tea-drinking, karaoke-performing, vegan libertarian --loves the pristine isolation of laboratory life, but his execution shows that illicit chemistry has social costs. Walt comes to realize the progressive corruption that has resulted from his ambition. As his life is threatened by his associations with the drug trade, he is asked by his wife if he wants to go to the police to confess and get protection. But Walt tells her: "I am not in danger, Skyler. I am the danger." (34)

Holmes's Hidden Chemistry

Moving from Albuquerque to London, the portrayal of the eponymous detective in "Sherlock" is an illuminating point of comparison with White. Arthur Conan Doyle in his original novels and stories documents Holmes's chemistry credentials. In A Study in Scarlet, Holmes first meets Dr. Watson in a "chemical laboratory." Watson as narrator describes the scene: This was a lofty chamber, lined and littered with countless bottles. Broad, low tables were scattered about, which bristled with retorts, test-tubes, and little Bunsen lamps, with their blue flickering flames. (35) Holmes is presented in this initial encounter holding a test tube as he explains excitedly that he has developed a novel chemical test to identify human blood. Holmes says: "I've found it! I've found it . . . I have found a re-agent which is precipitated by haemoglobin, and by nothing else." But there is a telling omission in this representation of Holmes and his abilities as a forensic chemist. Nowhere is Holmes identified as a chemistry expert. Nor is his advanced knowledge of the science made explicit. His knowledge of and love for chemistry are instead portrayed as implicit and allusive. Throughout the series, he is featured in repeated shots using a microscope in the laboratory. He is known for his ability to distinguish between more than 100 varieties of cigarette ash. His kitchen table is usually covered with laboratory glassware. (His science takes place in secret.) His love of chemistry is symbolized most clearly by the only decoration that appears on his bedroom wall: a colorful poster of the periodic table.

Holmes as Sociopath

"Sherlock" presents Holmes as an ambiguous figure: impatient, anti-social, friendless, arrogant and cruel, with a pronounced lack of empathy. Cumberbatch calls him "this character of the night, this sociopathic, slightly autistic, slightly anarchic, maverick, odd antihero." (40) Holmes is portrayed as amoral. For example, when asked by the police to help investigate a bizarre set of apparent suicides that have been linked by a mysterious note left by one of the victims, Holmes reacts with delight at the intellectual puzzle rather than with concern for the victims.

Holmes's Uncommon Knowledge

Holmes's method of reasoning is obscured in Conan Doyle's stories. As the literary critic Steven Knight notes: The contexts of medical science, the chemistry and the exhaustive knowledge of crime are only gestured at, and we are actually shown no more than a special rational process. (46) This methodology continues to be largely hidden in "Sherlock." Examining physical evidence, Holmes rapidly forms conclusions that dazzle Watson (himself a trained physician) with their insight.
Elaborating on this point, Cumberbatch tells The Times: "You can have scientists on the ground who analyze forensics, but they won't be able to take that leap which takes them to a conclusion . . . It takes [Holmes'] leap of imagination as well as his knowledge to connect the dots." (47) For Holmes, the physical evidence is just the starting point. His uncommon knowledge lies in his powers of interpretation. With this portrayal, "Sherlock" presents scientific insight as the result of a process that is closer to artistic creation than experimental science. This is a recurring means of representing the work of the scientist, especially to audiences who may be unfamiliar with the process of scientific creativity. (48) But advances in production technology have allowed the creators of "Sherlock" to bring to the surface more elements of Holmes's uncommon reasoning process. In "A Study in Pink," as Holmes examines physical evidence, words flash up on screen that correspond to pieces of data that Holmes identifies as significant. For example, when Holmes examines the corpse of a woman who apparently took her own life, labels are projected onto her clothing to reveal her history and the circumstances of her death. Holmes identifies from her body, clothing and jewelry that she was left-handed and married unhappily for more than 10 years. When Holmes meets a man in Buckingham Palace, he reads the man's life history in an instant from his clothing and appearance. The viewer sees the evidence, sees what Holmes sees --but the conclusion can only be provided by Holmes. The conclusions remain the result of flashes of individual genius. Holmes's "uncommon knowledge" is symbolized also by what reviewers interpret as the character's uncommon appearance. The Times says this Holmes looks "as odd as you'd expect The Cleverest man in the World to look. Eyes white, skin like china clay and a voice like someone smoking a cigar inside a grand piano." (49) For The Daily Telegraph, Cumberbatch, with his "shock of blackened hair, his parchment-pale skin and liquid eyes," takes on "a translucent quality that made him appear both sickly and mesmerizingly other-worldly." (50) The philosopher John Gray says this version of Holmes represents a conflicted modern attitude to science. Gray argues that the detective embodies reason in an age when systems of rationality --from security software to the mathematical formulae used by hedge funds --"have proved to be dangerously unreliable." For Gray, Holmes symbolizes the power of the mind at a time when "the idea that intellect alone can be a guide in life is weaker than it has been for many years." (51)

Conclusion: The Social Consequences of a Chemist's Ethical Choices

Although the patterns of representation in both shows are complex and sometimes contradictory, common themes about the public image of chemists and chemistry can be discerned. Chemistry is represented as an empirical and experimental field. Chemistry is portrayed as a type of "uncommon knowledge" held by particular experts. But the precise nature of this knowledge is depicted differently. The knowledge in "Breaking Bad" is gained through a process of experimentation and instruction, based on knowledge of the fundamentals of the field, although each chemist has particular talents and skills that distinguish their work. The special understanding in "Sherlock," by contrast, is gained through a process of imaginative interpretation of physical evidence, a largely hidden process that is portrayed as unique to Holmes.
Chemists are represented in both shows as ambiguous figures. Neither White nor Holmes possesses the traditional heroic virtues. Instead, they are depicted as anti-heroes who each exhibit varying degrees of amorality, immorality and selfishness. White and Holmes see the practice of their work as ethically neutral, a value-free demonstration of their intellectual prowess. Their chemistry is largely conducted in secret, in private and in isolation. These common patterns mean that the television characters of White and Holmes conform to, and reinforce, the wider cultural portrayal of chemists as socially and ethically problematic figures.
2017-10-23T18:05:39.921Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "111ba2453d8d6c8c178d1cf4e7357f48d3c9088d", "oa_license": "CCBYNCSA", "oa_url": "https://doras.dcu.ie/26523/1/Fahy%20Chemist%20as%20Anti%20Hero%202013.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "17ecb7befca2e05b235056823ad31cbb9063a43d", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "History" ] }
4608968
pes2o/s2orc
v3-fos-license
Association between the retinal vascular network with Singapore "I" Vessel Assessment (SIVA) software, cardiovascular history and risk factors in the elderly: The Montrachet study, population-based study Purpose To identify patterns summarizing the retinal vascular network in the elderly and to investigate the relationship of these vascular patterns with cardiovascular history. Methods We conducted a population-based study, the Montrachet study (Maculopathy Optic Nerve nuTRition neurovAsCular and HEarT diseases), in participants older than 75 years. The history of cardiovascular disease and a score-based estimation of their 10-year risk of cardiovascular mortality (Heart SCORE) were collected. Retinal vascular network analysis was performed by means of Singapore “I” Vessel Assessment (SIVA) software. Principal component analysis was used to condense the information contained in the high number of variables provided and to identify independent retinal vascular patterns. Results Overall, 1069 photographs (1069 participants) were reviewed with SIVA software. The mean age was 80.0 ± 3.8 years. We extracted three vascular patterns summarizing 41.3% of the vascular information. The most clinically relevant pattern, Sparse vascular network, accounted for 17.4% of the total variance. It corresponded to a lower density in the vascular network and higher variability in vessel width. Diabetic participants with hypoglycemic treatment had a sparser vascular network pattern than subjects without such treatment (odds ratio, [OR], 1.68; 95% CI, 1.04–2.72; P = 0.04). Participants with no history of cardiovascular disease who had a sparser vascular network were associated with a higher Heart SCORE (OR, 1.76; 95% CI, 1.08–2.25; P = 0.02). Conclusions Three vascular patterns were identified. The Sparse vascular network pattern was associated with having a higher risk profile for cardiovascular mortality risk at 10 years. Introduction Demographic indicators confirm that life expectancy in industrialized countries has increased over the last few decades. However, an estimated 17.5 million people worldwide died from cardiovascular diseases in 2012, accounting for 31% of all deaths worldwide [1]. Although cardiovascular case fatality decreased in high-income countries, recurrent cardiovascular disease (CVD) events are common [2]. Moreover, a study in the US showed that CVDs were one of the major medical reasons for rehospitalization, with a rate ranging from 14.5% to 26.9% [3]. CVDs have also increased in developing countries [4]. As a consequence, the burden of CVD remains a key priority globally. Since in most patients the underlying pathophysiological process of CVD develops long before their diagnosis, simple examinations to detect early vascular remodeling would be useful for early identification of high-risk subjects. Previous studies have suggested that microcirculatory changes are closely related to cardiovascular modifications in humans [5,6]. Fundus photography provides an in vivo noninvasive view of the human microcirculation network [7]. Indeed, specific software programs have been developed to analyze the retinal vascular network automatically and provide a description of the geometric characteristics of its arterial and venous components. Particularly, the Singapore "I" Vessel Assessment (SIVA) software was proved to be highly accurate and reproducible in assessing multiple in vivo architectural changes in the retinal vascular network [8,9]. 
These improvements suggest that fundus photography may be a suitable examination for assessing the vascular aging process in the general population. Several recent studies focused on a single geometric feature of the retinal vascular network. Significant associations were found, notably between retinal vasculature alteration and the risk of hypertension or coronary heart disease mortality, but only a few microvascular characteristics were investigated [10,11]. Nevertheless, automated image analysis allows dozens of geometric features to be interpreted. A thorough description, without an a priori selection of retinal vascular features, is needed to establish a better understanding of which of these features, or combinations of these features, are the most promising candidates to become a biomarker of history of CVD. The purpose of this study was first to identify retinal vascular patterns in the elderly within a population-based study. Then we sought to determine the associations between these vascular patterns, cardiovascular history and the 10-year risk of fatal CVD in this population.

Methods The Montrachet (Maculopathy Optic Nerve nuTRition neurovAsCular and HEarT diseases) Study focused on participants older than 75 years. This population-based study was designed to assess the associations between age-related eye diseases and neurologic and heart diseases in the elderly [12]. Participants in the Montrachet study were recruited from an ongoing population-based study, the Three-City (3C) study. The 3C study was designed to examine the relationship between vascular diseases and dementia in 9294 persons aged 65 years and over [13]. The participants were selected from the electoral rolls in an urban setting, all living in three French cities: Bordeaux, Dijon and Montpellier. At baseline and every 2 years, the participants filled in a complete questionnaire on their cardiovascular (myocardial infarction, angina, coronary artery dilatation, coronary bypass and cardiovascular morbidity) and neurological history (ischemic and hemorrhagic stroke) as well as their treatments. Blood pressure, weight and height were measured, and a blood sample (lipid, blood glucose and creatinine values) was collected after fasting. In Dijon, 4931 individuals participated in the first run of the 3C Study in 1999. At the fifth run (10 years later), a subgroup of participants was invited to participate in the Montrachet Study. The study was approved by the Dijon University Hospital ethics committee and was registered as 2009-A00448-49. All participants gave their informed consent, and the study followed the tenets of the Declaration of Helsinki. Each participant had a complete eye examination including fundus photography in the Department of Ophthalmology, Dijon University Hospital, Dijon, France [12]. Forty-five-degree color retinal photographs, centered on the optic disc, were taken of both eyes with a fundus camera (TRC NW6S, Topcon, Tokyo, Japan) after pupil dilation with tropicamide 0.5% (Thea, Clermont-Ferrand, France). Fundus photographs acquired during the study were anonymously sent to the reading center at Yamagata University, Japan (RK and YK), and a single trained grader extracted retinal vessel characteristics with the Singapore "I" Vessel Assessment (SIVA) software. Computerized analysis of the retinal vascular network was based on the vessels running from the center of the optic disc through three successive zones extending 0.5 (zone A), 1 (zone B) and 2 (zone C) disc diameters.
The six largest arterioles and veins were analyzed. Only one eye was retained for analysis, and the selection of fundus photographs followed the criteria described below: 1) the fundus photograph of the right eye for participants born in even-numbered years and of the left eye for those born in odd-numbered years; 2) in single-eye patients, the functional eye was selected; 3) when a picture was uninterpretable for one eye, the other one was retained for analysis. A fundus photograph was considered uninterpretable if blurred, if zone C could not be analyzed or if the fundus photograph contained fewer than six arterioles and veins. Geometric retinal vascular features including fractal dimension, vascular caliber, tortuosity and branching angle were collected. They were measured by means of the semi-automated SIVA software. The grading system was based on algorithms for automated optic disc detection with automatic vascular structure extraction and tracing. Then retinal arterioles and venules were identified automatically. Finally, formulas and algorithms were used to compute quantifiable measurements of the retinal vasculature.

Statistical analysis Pattern identification. The computerized analysis identified 54 geometric features [14][15][16][17][18]. To summarize these numerous data and to identify retinal vascular patterns, we performed a principal component analysis (PCA) based on these geometric characteristics; a sketch of this step is given below. PCA extracts the most important information from a quantitative data set and reduces the number of variables by keeping only worthwhile information [19]. It builds up uncorrelated variables (components), which are linear combinations of a set of correlated variables (here the geometric characteristics of the retinal vascular network). The coefficients defining these linear combinations, called factor loadings, may be interpreted as correlation coefficients. A positive (negative) factor loading means that the vascular feature is positively (negatively) associated with the component. An absolute value of the loading close to 1 indicates a strong influence of the vascular feature on the value of the component. We used the SAS "Varimax" orthogonal transformation to maximize the independence (orthogonality) of the factors retained and to obtain a simpler structure for easier interpretation. To determine the number of patterns to retain, we used Kaiser's criterion (eigenvalues > 1), graphical analysis of the scree plot and the clinical interpretability of these factors [20][21][22]. The labeling of each pattern was primarily descriptive and based on our interpretation of the vascular features strongly associated with the component. We then calculated each patient's score in relation to the component. The score indicates how well the patient fits the component; the fit corresponds to the distance between the patient and the component. We discretized these scores into tertiles (good, intermediate, poor fit).

Associations between cardiovascular background and each pattern. We assessed the associations between the patients' characteristics (age, sex and education level), cardiovascular risk factors (smoking habits, body mass index [BMI], hypoglycemic treatment for diabetes mellitus, hypotensive treatment for hypertension and cholesterol-lowering treatment for dyslipidemia), cardiovascular and ischemic stroke history (summarized in a single variable: major adverse cardiovascular and cerebrovascular events [MACCE]), and how well the patient fit each specific pattern identified in the preceding step.
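To make the pattern-extraction step concrete, here is a minimal Python sketch of PCA with Kaiser's criterion and tertile scoring; the data matrix is a synthetic placeholder, and the Varimax rotation used in the study (via SAS) is omitted.

```python
# Sketch: PCA on a (n_subjects x 54) matrix of SIVA geometric features,
# retaining components with eigenvalue > 1 (Kaiser's criterion) and
# discretizing the first pattern's scores into tertiles.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1069, 54))                 # placeholder for SIVA features

Xs = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize features
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)

eigenvalues = s**2 / (Xs.shape[0] - 1)
keep = eigenvalues > 1                          # Kaiser's criterion
loadings = Vt[keep].T * np.sqrt(eigenvalues[keep])   # factor loadings
scores = Xs @ Vt[keep].T                        # each subject's pattern scores

# Share of total variance carried by the retained components
# (for comparison, the paper reports 41.3% for its three retained patterns).
print(eigenvalues[keep].sum() / eigenvalues.sum())

# Tertiles of the first pattern's scores: 0 = poor, 1 = intermediate, 2 = good fit.
tertiles = np.digitize(scores[:, 0], np.quantile(scores[:, 0], [1/3, 2/3]))
```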
We first carried out a bivariate analysis using the chi-square test or the Fisher exact test, and the ANOVA test or the Kruskal-Wallis test when appropriate. Correlations between covariates were systematically checked to detect collinearity. Multivariate analysis was then performed using polytomous logistic regressions in which the dependent variable was how well the patient fit each retinal vascular pattern, using tertiles. Age, gender and education level were forced into the model. Interactions between treated hypertension and diabetes were systematically tested because these two risk factors may interact with each other. The results were expressed as odds ratios and their 95% confidence intervals (95% CI). In addition, the 10-year risk of fatal CVD was estimated from the Montrachet baseline information using the method proposed in the Systematic COronary Risk Evaluation (Heart SCORE) project. The Heart SCORE is an age- and sex-specific risk chart that was developed based on cholesterol and systolic blood pressure levels and smoking habit, separately for high- and low-risk European populations. We used the corresponding low-risk charts given the low risk observed in the French population [23]. Since the Heart SCORE risk charts are intended for risk stratification in primary prevention of cardiovascular disease, participants with previous MACCE were excluded. The association between each pattern fit and the 10-year risk of a fatal cardiovascular event was estimated using a polytomous logistic regression adjusted for age, education level, BMI and treated diabetes mellitus. A P-value < 0.05 was considered significant. SAS software (version 9.4, SAS Institute, Inc., Cary, NC, USA) was used for all analyses.

Results Among the 1153 subjects included in the Montrachet study, 1094 participants had interpretable fundus photography. After exclusion of subjects with epiretinal membrane (n = 25), 1069 retinal vascular network computerized analyses were performed. The characteristics of the study population and the comparison between participants and nonparticipants are summarized in Table 1. No significant difference was found between subjects with fundus photography SIVA analysis and participants without SIVA analysis except for education level (P = 0.04). In the overall population, three vascular patterns were identified, which accounted for 41.3% of the total variance. The first pattern corresponded to lower density in the vascular network as well as greater variability in vascular width, because it was mainly correlated with a decreased fractal dimension and an increased vessel width standard deviation (Table 2). It accounted for 17.4% of the total variance. We named this first factor the Sparse vascular network pattern. The second pattern, the Increased vessel caliber pattern, was characterized by large vessel diameter and width; it accounted for 14.6% of the total variance. The third pattern showed increased tortuosity for both arterioles and venules. It accounted for 9.3% of the total variance. We named it the Increased tortuosity pattern. Fractal dimension, vessel caliber and tortuosity for each tertile and the related pattern are displayed in Table 3. Poor and good fit for the three patterns are presented in Fig. 1. Table 4 and Table 5 show the relationships between the subjects' characteristics and these vascular patterns.
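Returning to the modeling step described above, here is a minimal sketch of a polytomous (multinomial) logistic regression over tertiles of pattern fit; the dataset and variable names are synthetic placeholders, not the study's actual data.

```python
# Sketch: multinomial logistic regression of pattern-fit tertiles on
# covariates, with odds ratios relative to the poor-fit reference level.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1069
df = pd.DataFrame({
    "fit_tertile": rng.integers(0, 3, n),   # 0 = poor, 1 = intermediate, 2 = good
    "age": rng.normal(80, 3.8, n),
    "male": rng.integers(0, 2, n),
    "treated_diabetes": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["age", "male", "treated_diabetes"]])
model = sm.MNLogit(df["fit_tertile"], X).fit(disp=False)
print(np.exp(model.params))                  # odds ratios vs the poor-fit reference
```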
Results

Among the 1153 subjects included in the Montrachet study, 1094 participants had interpretable fundus photography. After exclusion of subjects with epiretinal membrane (n = 25), 1069 retinal vascular network computerized analyses were performed. The characteristics of the study population and the comparison between participants and nonparticipants are summarized in Table 1. No significant difference was found between subjects with fundus photography SIVA analysis and participants without SIVA analysis except for education level (P = 0.04). In the overall population, three vascular patterns were identified, which accounted for 41.3% of the total variance. The first pattern corresponded to lower density in the vascular network as well as greater variability in vascular width because it was mainly correlated with decreased fractal dimension and an increased vessel width standard deviation (Table 2). It accounted for 17.4% of the total variance. We named the first factor the Sparse vascular network pattern. The second pattern, the Increased vessel caliber pattern, was characterized by large vessel diameter and width; it accounted for 14.6% of the total variance. The third pattern showed increased tortuosity for both arterioles and venules. It accounted for 9.3% of the total variance. We named it the Increased tortuosity pattern. Fractal dimension, vessel caliber and tortuosity for each tertile and the related pattern are displayed in Table 3. Poor and good fit for the three patterns are presented in Fig 1. Table 4 and Table 5 show relationships between the subjects' characteristics and these vascular patterns. In bivariate analyses, subjects with a good fit with the Sparse vascular network pattern were more likely to be male, smokers, with a higher mean systolic blood pressure and a BMI > 25 kg/m², and they were more prone to be treated for diabetes mellitus. A good fit with the Increased vessel caliber pattern was associated with a slightly younger age (P = 0.002). Subjects with an Increased tortuosity pattern were also slightly, but significantly, younger (P = 0.008) (Table 3). In the multivariate polytomous regression analysis, we systematically adjusted for age, sex and education level (Table 6). Given the low number of subjects with missing data concerning their education level (n = 5) and their BMI (n = 3), we excluded those individuals from further analyses. No interaction between treated systemic hypertension and diabetes was found. Participants with hypoglycemic treatment were more likely to display a Sparse vascular network pattern than subjects without such treatment (odds ratio [OR] good vs poor fit, 1.68; 95% CI, 1.04-2.72; P = 0.04). Individuals with MACCE before their inclusion in the Montrachet study had increased retinal vessel caliber compared to patients with no vascular event (OR good vs poor fit, 1.67; 95% CI, 1.02-2.70; P = 0.04). In addition, individuals with MACCE tended to have increased vessel tortuosity (OR good vs poor fit, 1.62; 95% CI, 0.98-2.70; P = 0.06).

Discussion

Although there is a large body of literature on the association between single retinal vascular network features and systemic vascular history, to the best of our knowledge this is the first study to integrate all geometric retinal vascular characteristics in one analysis. The PCA allowed us to account for and summarize as well as possible all the quantitative information given by the SIVA program on retinal vascularization. Three independent vascular patterns were identified. The most relevant factor was the Sparse vascular network pattern, due to its ability to account for most of the explained variance and its significant association with past CVD history and future 10-year risk of cardiovascular death. The Sparse vascular network pattern was mainly characterized by a lower fractal dimension. This factor was able to describe the structural branching and density of the retinal vascular network. It showed subtle changes in the retinal microcirculation. The measurement of density and evaluation of vascular branching as a fractal dimension was a major step forward in retinal microvasculature description. Numerous studies have demonstrated the association between a suboptimal retinal vascular network and cardiovascular history, which supports our findings [24,25]. Although debated, other patterns such as vessel caliber could be influenced by the heart cycle and the axial length of the eyeball; this emphasizes the increasing interest in the fractal dimension [26,27]. Moreover, with greater variability in the vascular width, the Sparse vascular network pattern seems to be the best model for summarizing suboptimal microvascular architecture and the lack of optimal vascular design according to the Murray principle of minimum work [28]. In addition, the strong association between the Sparse vascular network pattern and treated diabetes mellitus makes sense because diabetes is known to affect the retinal microvasculature. Pericyte apoptosis and activation of the renin-angiotensin system are among the leading early mechanisms of the impact of diabetes on the retinal microvasculature, which probably explains decreased vascular density [29][30][31].
Although the association between the fractal dimension and cardiovascular risk factors is well documented, few studies have focused on a suboptimal retinal vascular network and cardiovascular mortality risk [32]. The Heart SCORE was validated in the European population and it is the current European CVD risk assessment model. It is still unknown why the Sparse vascular network pattern is related to an increased cardiovascular mortality risk. We assumed that suboptimal retinal microvasculature reflects systemic vessel remodeling, which leads to hypertension, nephropathy and arteriosclerosis, thereby increasing cardiovascular mortality risk [33].

Table 4. Baseline characteristics of the Montrachet study participants by tertiles of retinal vascular patterns.

In the future, one could imagine that retinal microvasculature analysis with SIVA could be combined with other features in the Heart SCORE assessment model and account for an incremental prognostic value of this score. If these findings are confirmed by future studies, it could be assumed that patients with a Sparse retinal vascular network, measured with retinal analysis software, could benefit from earlier and more rigorous cardiovascular risk factor monitoring. We did not find any relationship between vessel caliber or vessel tortuosity and high blood pressure. One of the reasons postulated may be that the profiles of the second and third patterns (Increased vessel caliber and Increased vessel tortuosity) include both venules and arterioles in the same direction. In fact, history and risk of developing systemic hypertension are associated with narrower retinal arteriolar diameter and wider venular diameter [34]. Moreover, Cheung et al. showed that less tortuous arterioles (lower retinal arteriolar tortuosity value) and more tortuous venules (higher retinal venular tortuosity value) were independently associated with higher blood pressure [18]. Since we did not succeed in discriminating venules and arterioles with the PCA analysis, the second and the third patterns remain difficult to interpret, as does their relationship with a previous history of MACCE. The strengths of this study include a large population-based study, a wide range of systemic medical features and a 10-year data collection through the 3C study's medical information. We acknowledge several limitations to this study. First, Montrachet participants are urban volunteers. These subjects follow a healthy lifestyle and they have benefited from steady access to medical care. Therefore, the prevalence of cardiovascular events was very low in the population and we suspect that cardiovascular risk factors were well monitored. Second, these findings, based on a Caucasian European population, cannot be extrapolated to other parts of the world and other ethnicities. Third, we did not perform carotid Doppler ultrasonography at the time that fundus photographs were taken. Liao et al. already showed that ipsilateral carotid artery stiffness was associated with generalized narrowing of the retinal arterioles [35]. Fourth, cardiovascular history was collected from self-declarations by participants, which may have introduced a reporting bias.
Fifth, this exploratory cross-sectional study only points to a potential association between fatal CVD and the retinal vascular pattern; a longitudinal study is needed to validate our findings. Finally, we are planning to conduct future studies with these vascular patterns adding potential confounders such as axial length, dietary factors and exercise levels. In summary, this study strengthens the idea that retinal vascular network analysis is a promising tool to assess systemic vascular status and possible cardiovascular mortality. This study tends to confirm the appeal of the fractal dimension and highlights the Sparse vascular network pattern as the most appropriate feature to assess the vascular aging process. Moreover, this Sparse vascular network pattern is associated with a higher fatal CVD risk within the next 10 years. Further prospective studies should be implemented to validate retinal vessel analysis as a marker for cardiovascular morbidity and mortality.
2018-04-26T19:15:14.642Z
2018-04-03T00:00:00.000
{ "year": 2018, "sha1": "22529137627a3f1c95af8fc424e344ebaad52ee4", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0194694&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "22529137627a3f1c95af8fc424e344ebaad52ee4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
32607898
pes2o/s2orc
v3-fos-license
Serovars of Salmonella spp Isolated from Broiler Chickens and Commercial Breeders in Diverse Regions in Brazil from July 1997 to December 2004 Avian salmonellosis is a worldwide problem to the poultry industry, from the point of view of animal health and public health as well. The aim of the present study was to survey the most common Salmonella serovars in commercial breeder or broiler flocks from several regions in Brazil. The results of the present study indicated a high incidence of S. enterica subspecies enterica serovar Enteritidis in breeder (57.5%) and broiler flocks (84.0%). The importance of these findings lies in the fact that S. Enteritidis has become the most frequent serovar responsible for foodborne outbreaks and sporadic cases of salmonellosis in humans.

INTRODUCTION
Avian salmonella infections are distinguished between pullorum disease (Salmonella enterica subspecies enterica serovar Pullorum, S. Pullorum), fowl typhoid (S. enterica subspecies enterica serovar Gallinarum) and paratyphoid (other salmonellas) (Berchieri Jr, 2000). The intensive large-scale production adopted by the poultry industry favors the introduction, establishment, maintenance and dissemination of paratyphoid salmonellas. Therefore, paratyphoid infections currently present a problem to poultry farmers (Berchieri Jr, 2000) and constitute a hindrance to the poultry industry worldwide from the point of view of animal health and also as a consequence of the involvement of poultry products in considerable problems to public health (Barrow, 1999). In Brazil, outbreaks caused by S. Enteritidis (SE) occurred after cases in Europe, the United States of America and Japan (Tavechio et al., 1996; Fernandes et al., 2003), probably due to the importation of grandparent stock from these regions (Zancan et al., 2000). Nevertheless, information on the actual condition of avian paratyphoid in Brazil is scarce. Many control measures have been established, from the elimination of infected flocks and vaccination to the extensive use of drugs in breeders and one-day-old progenies (Nepomuceno, 1997). This study presents a profile of the serovars of Salmonella spp most commonly isolated from flocks of commercial breeders and broilers in different regions in Brazil, from July 1997 to December 2004.

Samples
Salmonella spp was investigated in samples received from diverse poultry industries located in different regions in Brazil, between July 1997 and December 2004. The samples were collected in farms of broilers and commercial breeders in the following Brazilian States: Bahia, Ceará, Goiás, Paraná, Mato Grosso, Mato Grosso do Sul, Santa Catarina and São Paulo. Various sample types were submitted to analysis: cloacal swabs, dragging swabs, swabs from chick boxes, pipped eggs, live or dead birds, feces and meconium.

Bacteriology
Bacteriological procedures to isolate Salmonella spp were performed according to Brasil (1995). Samples of swabs, feces and meconium were incubated in buffered peptone water at 37ºC for 18 to 24 hours and aliquots were transferred to tetrathionate and selenite enrichment broths. The samples from pipped eggs and birds (live or dead) were directly incubated in selective enrichment broths and also in Brain Heart Infusion broth. After incubation at 41ºC for 24 hours in a shaking water bath, a loopful of each sample was streaked onto XLT-4 and MacConkey agar plates and incubated at 37ºC for 24 hours. Suspected colonies were transferred to triple sugar iron (TSI), lysine iron agar (LIA), urea and SIM medium (sulphide, indole, motility). Suspected Salmonella spp colonies were serotyped at a reference centre (Fundação Instituto Oswaldo Cruz). All media used in the study were purchased from Difco Laboratories.

RESULTS
Three hundred and ninety-one samples from breeders and 94 samples from broilers were positive for Salmonella spp. The eight most frequently isolated serovars and their respective sources are shown in Table 1. The results evidenced that SE is the most frequent serovar in breeders (57.5%) and broiler chickens (84%). The second most predominant serovar was S. Heidelberg (SH) in breeders (22.8%) and S. I 9,12:-:- in broiler chickens (9.6%).

Table 1 - Total values and percentages of Salmonella serovars isolated from breeders and broilers, from July 1997 to December 2004.

DISCUSSION
The present study showed that SE was the serovar most frequently isolated from breeders and broiler chickens. Tavechio et al. (1996) reported an increase in SE isolation from non-human sources in the State of São Paulo, from 1.2% in 1991 to 64.9% in 1995. Such an increase has been noticed in eggshell, bird and environment samples. Tavechio et al. (2002) reported that SE comprised 32.7% of 4581 samples of Salmonella isolated from non-human sources between 1996 and 2000. Of all samples, 21.7% had been isolated from commercial birds. SE has also been identified as the most frequently isolated serovar from birds by Van Duijkeren et al. (2002) and Ferris et al. (2004). On the other hand, conflicting results have been reported. FSIS (2004) showed that among the serovars isolated from broiler carcasses between 1998 and 1999, SE was the ninth most common serovar. Previous studies reported SE isolation from 5.15% of bird samples, derived products and poultry environment (Roy et al., 2002), from 2.7% of commercial laying hen flocks (Poppe et al., 1991a) and from 3.1% of commercial broiler flocks (Poppe et al., 1991b). In Brazil, SE outbreaks were reported after cases in Europe, the United States and Japan (Tavechio et al., 1996; Fernandes et al., 2003), probably because grandparent birds have been imported from these countries (Zancan et al., 2000). The importance of the high frequency of SE lies in the fact that salmonellosis is one of the most problematic zoonoses to public health worldwide (Hofer et al., 1997). Since 1994, this serovar has been implicated most frequently in outbreaks and sporadic cases of foodborne diseases in humans (Berchieri Júnior, 2000; Fernandes et al., 2003). It is also one of the serovars included in the control regulations established by the Brazilian National Program for Poultry Health (Plano Nacional de Sanidade Avícola, Brasil, 1995). SH was the second most frequent serovar in the breeder flocks examined in the present study. According to CCDR (1998), SH was the serovar most frequently isolated from non-human sources in 1995, which were almost exclusively bird samples. Studies carried out in the USA during 1999 and 2000 by Ferris et al. (2004) showed that 22% of salmonellas isolated from animal sources originated from birds. These authors reported an increase of 58% in SH isolation, so that 67% of these samples were of poultry origin. The identification of this serovar in the present study (22.8%) was similar to the occurrence of 25.8% reported by Roy et al. (2002). SH was one of the most frequent serovars in broiler carcasses (FSIS, 2004). Nevertheless, no SH was recovered from broiler chicken samples in the present study. Hofer et al. (1997) ranked the serovars of Salmonella isolated from birds between 1962 and 1991 according to their occurrence. The most frequent category included SE, SH, S. Typhimurium (STM) and S. Infantis. The common but not frequent serovars were S. Mbandaka and S. Kentucky, whereas S. I 9,12:-:- was considered accidental or rare. It is worth noting that the latter was the second most frequent serovar isolated from broilers (10.2%) and the fourth in breeders in the present study. STM was a predominant serovar in bird flocks from diverse regions in Brazil from 1962 to 1991 (Hofer et al., 1997). High STM frequency has also been reported by CCDR (1998), Ferris et al. (2000), Van Duijkeren et al. (2002) and FSIS (2004). Nevertheless, the findings in the present study showed STM positivity in only 1.3% and 2.1% of breeders and broilers, respectively. The findings presented herein must be carefully interpreted. Although flocks from different regions in Brazil have been sampled, the present data do not represent nationwide data on poultry salmonellosis, but could be seen as an estimate of the Brazilian scenario. Therefore, information about different Salmonella serovars on an incidence/prevalence basis has been provided, which makes it possible to compare their frequencies among regions and track the predominant isolates in order to implement preventive or control measures (Hofer, 1985).

CONCLUSION
In conclusion, considering the sampling performed in the present study, S. Enteritidis was the serovar with the greatest occurrence among breeders and broiler chickens, predominating in percentages as high as 57.5% and 84.0%, respectively.
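As a worked example of how the percentages reported above follow from raw isolate counts, the short calculation below back-computes plausible counts for S. Enteritidis; the absolute counts are assumptions inferred from the reported totals (391 and 94 positive samples) and proportions, not data taken from Table 1.

positives = {"breeders": 391, "broilers": 94}   # Salmonella-positive samples
se_counts = {"breeders": 225, "broilers": 79}   # assumed SE isolate counts

for flock, total in positives.items():
    pct = 100.0 * se_counts[flock] / total
    print(f"S. Enteritidis in {flock}: {se_counts[flock]}/{total} = {pct:.1f}%")
# -> 57.5% in breeders and 84.0% in broilers, matching the reported values.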
2017-10-18T17:16:23.309Z
2005-09-01T00:00:00.000
{ "year": 2005, "sha1": "7f242257d0961cc1a6aa19de7e63ca475cafcc4c", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/rbca/a/bGRZ4pkKJbv6DPLHKsWTLqK/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7f242257d0961cc1a6aa19de7e63ca475cafcc4c", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
3689603
pes2o/s2orc
v3-fos-license
GLP2 Promotes Directed Differentiation from Osteosarcoma Cells to Osteoblasts and Inhibits Growth of Osteosarcoma Cells Glucagon-like peptide 2 (GLP2) is a proglucagon-derived peptide that is involved in the regulation of energy absorption and exerts beneficial effects on glucose metabolism. However, the exact mechanisms underlying GLP2 action during osteogenic differentiation have not been elucidated. Herein, we demonstrate that GLP2 exerts a positive action during the osteogenic differentiation of human osteosarcoma cells. Our findings demonstrate that GLP2 inhibits the growth of osteosarcoma cells in vivo and in vitro. Mechanistic investigations reveal that GLP2 inhibits the expression and activity of nuclear factor κB (NF-κB), triggering a decrease of c-Myc, PKM2, and CyclinD1 in osteosarcoma cells. In particular, rescued NF-κB abrogates the functions of GLP2 in osteosarcoma cells. Strikingly, GLP2 overexpression significantly increased the expression of osteogenesis-associated genes (e.g., Ocn and PICP) dependent on c-Fos-BMP signaling, which promotes directed differentiation from osteosarcoma cells to osteoblasts with higher alkaline phosphatase activity. Taken together, our results suggest that GLP2 could be a valuable drug to promote directed differentiation from osteosarcoma cells to osteoblasts, which may provide potential therapeutic targets for the treatment of osteosarcoma.

INTRODUCTION
Osteoblasts (OBs) are specialized, terminally differentiated products of mesenchymal stem cells. 1 Bone is an important store of minerals for physiological homeostasis, including both acid-base balance and calcium or phosphate maintenance. 2,3 A study showed that osteochondroprogenitor cells could differentiate under the influence of factors such as Cbfa1/Runx2. Moreover, key growth factors, e.g., bone morphogenetic proteins (BMPs) and sclerostin, are involved in skeletal differentiation. 4 Furthermore, studies indicate that osteosarcomas tend to occur at the sites of bone growth, presumably because of the rapid proliferation of osteoblastic cells in this region. 5 Strikingly, multipotent adult progenitor cells on an allograft scaffold may facilitate the bone repair process. 6 In addition, osteogenic differentiation of human mesenchymal stem cells promotes mineralization within a biodegradable peptide hydrogel. 7 Strikingly, miR-34a promotes osteogenic differentiation of human adipose-derived stem cells via the RBP2/NOTCH1/CyclinD1 coregulatory network, 8 and pioglitazone (PIO) (a thiazolidinedione) may also promote osteoclastogenesis by affecting the osteoprotegerin (OPG)/receptor activator of nuclear factor κB ligand (RANKL)/RANK (OPG-RANKL-RANK) system. 9 GLP2, a glucagon (GCG) peptide family member, is related to the regulation of energy absorption and the maintenance of mucosal morphology and functions. Indeed, GLP2 acts as a beneficial factor for glucose metabolism in mice with high-fat diet-induced obesity. 10 Moreover, GLP2 exhibits a protective effect on hepatic ischemia-reperfusion injury in rats, 11 and GLP2 agonists decrease the need for parenteral nutrition (PN) in short bowel syndrome (SBS). 12,13 Furthermore, the absence of a motif in GLP2 could be the reason for a significantly lower strength of interaction between GLP2 and heparin in inducing protein aggregation. 14 In particular, alteration of the intestinal barrier and GLP2 secretion was found in Berberine-treated type 2 diabetic rats.
15 Notably, GLP2-2G-XTEN is efficacious in a rat Crohn's disease model, requiring a lower molar dose and less frequent dosing relative to the GLP2-2G peptide. 16 Interestingly, endogenous GLP2 is a key mediator of refeeding-induced and resection-induced intestinal adaptive growth. 17 Intriguingly, GLP2 elicits neuroprotective effects on rat myenteric neurons cultured with or without mast cells by activating separate VIP receptors. 18 On the other hand, evidence that suppressor of cytokine-signaling protein is induced by GLP2 in normal or inflamed intestine suggests that GLP2 may limit IGF1-induced growth but protect against the risk of dysplasia or fibrosis. 19 Significantly, studies also showed that GLP2 could reduce food intake in mice in the short term, likely acting at a peripheral level. 20 At present, the exact mechanisms underlying GLP2 action during osteogenic differentiation are not fully understood. The aim of this study was to explore the role of GLP2 in the control of OB differentiation and to validate how this molecule could exert protective effects against the onset of osteosarcoma.

GLP2 Inhibits Osteosarcoma Cell Growth In Vivo and In Vitro
To address whether GLP2 impacts the malignant proliferation of osteosarcoma cells, we prepared the rLV-GLP2 lentivirus (Figure 1A), and we established the stable osteosarcoma cell line (MG63) infected with rLV or rLV-GLP2, respectively (Figure 1B). GLP2 was significantly overexpressed in MG63 infected with rLV-GLP2 compared to the control (Figure 1C). Using ELISA, we measured GLP2 released by the MG63 cells before and after transduction of GLP2. The results showed that the released level of GLP2 in the rLV-GLP2-infected group was significantly higher than in the rLV control group (0 versus 696.78 ± 152.77 pg/mL; p = 0.0078244 < 0.01) (Figure S1A). Furthermore, GLP2R was expressed in MG63 cells, and there was no significant difference between the rLV group and the rLV-GLP2 group (Figure S1B). Moreover, the interaction between GLP2 and GLP2R was significantly detected in MG63 cells infected with rLV-GLP2, but not in MG63 cells infected with rLV (Figure 1D). First, we assessed the proliferation ability of these cells in vitro. As shown in Figure 1E, overexpression of GLP2 significantly decreased the proliferation ability of MG63 (p < 0.01). The BrdU staining findings showed that the BrdU-positive rate was 18.87% ± 5.75% in the rLV-GLP2 group and 49.4% ± 9.73% in the control rLV group (p < 0.01) (Figure 1F). Then we conducted a soft agar colony formation efficiency assay in these cells. The soft agar colony formation rate was 20.63% ± 3.06% in GLP2-overexpressing MG63 cells; in contrast, it was 60.77% ± 6.07% in the control rLV group (p = 0.0069 < 0.01) (Figure 1G). Furthermore, the cell transwell rate was 7.2% ± 1.55% in the rLV-GLP2 group and 23.08% ± 5.43% in the control rLV group (p < 0.01) (Figure 1H). To further validate the effect of GLP2 on osteosarcoma carcinogenesis in vivo, the stable MG63 cell lines with altered expression of GLP2 were injected subcutaneously into athymic BALB/c nude mice.

(Figure 1 legend, panels C-H: (C) western blotting with anti-GLP2 in MG63 infected with rLV-GLP2 or rLV, with β-actin as internal control; (D) co-immunoprecipitation (coIP) with anti-GLP2R followed by western blotting with anti-GLP2 in MG63 cells infected with rLV or rLV-GLP2, with IgG IP as negative control and INPUT referring to western blotting with anti-GLP2 and anti-GLP2R; (E) CCK8 cell proliferation assay in 96-well format, assayed in triplicate for 3 consecutive days, growth curves based on relative OD450 values; (F) BrdU assay; (G) photography of colonies and plate colony formation assay; (H) transwell assay. Data are means of three independent experiments; error bars ± SEM; *p < 0.05, **p < 0.01.)
As shown in Figure 2A, when GLP2 was overexpressed, the xenograft tumor weight decreased approximately 5-fold compared to the corresponding control group (0.15 ± 0.04 g versus 0.73 ± 0.14 g; p < 0.01). Compared to the control group, excessive GLP2 increased the time to xenograft tumor formation (14.5 ± 3.05 days versus 9.22 ± 1.59 days; p < 0.01) (Figure 2B). Moreover, xenograft tumors had more well-differentiated cells in the GLP2-overexpressing group than in the control group. The proliferation index (calculated as a percentage of PCNA-positive cells) was significantly lower in GLP2-overexpressing xenograft tumors compared to the control rLV group (20% ± 3.92% versus 53.59% ± 10.28%; p < 0.01) (Figure 2C). In particular, our results also showed that excessive GLP2 inhibited the growth of orthotopic osteosarcoma. When GLP2 was overexpressed, the weight of orthotopic osteosarcoma decreased approximately 3-fold compared to the corresponding control group (0.586 ± 0.137 g versus 0.192 ± 0.06 g; p = 0.000002 < 0.01) (Figure S2A), and the tumor formation time of the rLV group (7.2 ± 1.39 days) was shorter than that of the rLV-GLP2 group (13 ± 2.67 days; p = 0.00008 < 0.01) (Figure S2B). Together, these findings suggest that GLP2 inhibits growth of osteosarcoma cells in vivo and in vitro.

(Figure 2 legend: xenograft tumors from BALB/c mice injected subcutaneously at the armpit with MG63 cells infected with rLV-GLP2 or rLV; (A) xenograft tumor weight (g) in the two groups; (B) xenograft tumor appearance time in the two groups; data are means from ten BALB/c mice (mean ± SEM; n = 10; *p < 0.05, **p < 0.01); (C) a portion of each xenograft tumor was fixed in 4% formaldehyde, embedded in paraffin, and 4-µm sections were made for PCNA staining (original magnification ×100); **p < 0.01.)

GLP2 Inhibits the Expression of NF-κB in Osteosarcoma Cells
Given that GLP2 inhibited the malignant growth of osteosarcoma cells, we considered whether this function is associated with inflammation-related genes, e.g., nuclear factor κB (NF-κB). In the cell lines, GLP2 was significantly overexpressed in MG63 infected with rLV-GLP2 compared to the control MG63 infected with rLV (Figure S3). As shown in Figure 3A, excessive GLP2 significantly inhibited the loading of RNA polymerase II (Pol II) onto the NF-κB promoter region compared to the control group. Moreover, the excessive expression of GLP2 significantly decreased the luciferase activity of the NF-κB promoter compared to control (Figure 3B). Furthermore, we performed western blotting and RT-PCR assays; the excessive expression of GLP2 significantly reduced the level of transcription and translation of NF-κB compared to control (Figure 3C). Significantly, through co-immunoprecipitation (coIP) with anti-NF-κB, the excessive expression of GLP2 significantly decreased the binding of NF-κB to β-catenin or AP1 compared to control (Figure 3D).
Furthermore, through a chromatin immunoprecipitation (ChIP) assay with anti-NF-κB, the excessive expression of GLP2 significantly decreased the loading of NF-κB onto the promoter regions of c-Myc, PKM2, and CyclinD1 compared to control (Figure 3E). Ultimately, excessive GLP2 significantly decreased the level of transcription and translation of c-Myc, PKM2, and CyclinD1 compared to control (Figures 3F and 3G). Taken together, these observations suggest that GLP2 inhibits the expression and activity of NF-κB in osteosarcoma cells.

Excessive NF-κB Abrogates the Function of GLP2 in Osteosarcoma Cells
To validate whether NF-κB blocked the function of GLP2 in osteosarcoma cells, we established stable MG63 (osteosarcoma) cells infected with rLV, rLV-GLP2, or rLV-GLP2 plus pcDNA3-NF-κB. Using ELISA, we measured GLP2 released by the MG63 cells after co-NF-κB transduction. The results showed that the released level of GLP2 in the rLV-GLP2-infected group was significantly higher than in the control rLV-infected group (0 versus 950.04 ± 62.67 pg/mL; p = 0.0007 < 0.01); however, the released level of GLP2 in the rLV-GLP2-infected group was not significantly altered compared to the rLV-GLP2 plus pcDNA3-NF-κB group (877.13 ± 106.36 versus 950.04 ± 62.67 pg/mL; p = 0.145 > 0.05) (Figure S4A). Furthermore, GLP2R was expressed in MG63 cells, and there was no significant difference among the rLV, rLV-GLP2, and rLV-GLP2 plus pcDNA3-NF-κB groups (Figure S4B). As shown in Figure 4A, compared to the control group, GLP2 was significantly overexpressed in MG63 transfected with rLV-GLP2 and rLV-GLP2 plus pcDNA3-NF-κB, respectively, and NF-κB was significantly decreased in MG63 transfected with rLV-GLP2 and increased in MG63 infected with rLV-GLP2 plus pcDNA3-NF-κB. First, we assessed the proliferation ability of these cells in vitro. As shown in Figure 4B, excessive GLP2 significantly decreased the proliferation ability of MG63 (p < 0.01). However, the proliferation ability of MG63 was not significantly different between the rLV group and the rLV-GLP2 plus pcDNA3-NF-κB group (p > 0.05). Furthermore, the BrdU staining findings showed that the BrdU-positive rate was 16.06% ± 2.06% in the GLP2-overexpressing group and 57.13% ± 12.73% in the rLV control group (p < 0.01). However, the BrdU-positive rate was 50% ± 5.96% in the rLV-GLP2 plus pcDNA3-NF-κB group (57.13% ± 12.73% versus 50% ± 5.96%; p > 0.05) (Figure 4C). Then we conducted a soft agar colony formation efficiency assay in these cells. The soft agar colony formation rate was 21.29% ± 4.07% in the rLV-GLP2 group and 55.83% ± 12.9% in the rLV group (p < 0.01). However, the soft agar colony formation rate was 56.19% ± 7.77% in the rLV-GLP2 plus pcDNA3-NF-κB group (55.83% ± 12.9% versus 56.19% ± 7.77%; p > 0.05) (Figure 4D). Next, BALB/c nude mice were injected subcutaneously at the armpit with MG63 cells infected with rLV-GLP2, rLV-GLP2 plus pcDNA3-NF-κB, or rLV.
As shown in Figure 4E, when GLP2 was overexpressed, the xenograft tumor weight decreased approximately 7-fold compared to the control group (0.125 ± 0.033 g versus 0.833 ± 0.100 g; p < 0.01). However, compared to the control group, xenograft tumor weight was not significantly altered in the rLV-GLP2 plus pcDNA3-NF-κB group (0.79 ± 0.165 g versus 0.833 ± 0.100 g; p > 0.05). Moreover, the PCNA-positive rate of cells was significantly lower in GLP2-overexpressing tumors compared to the control rLV group (25.53% ± 4.38% versus 58.3% ± 10.58%; p < 0.01). However, compared to the control group, the PCNA-positive rate was not significantly altered in the rLV-GLP2 plus pcDNA3-NF-κB group (54.63% ± 7.62% versus 58.3% ± 10.58%; p > 0.05) (Figure 4F). Together, these observations demonstrate that excessive NF-κB abrogates the function of GLP2, and the cancerous suppression by GLP2 is regulated and controlled by NF-κB in osteosarcoma cells.

(Figure 4 legend, panels B-F: (B) CCK8 cell proliferation assay in 96-well format, assayed in triplicate for 3 consecutive days, growth curves based on relative OD450 values, error bars ± SEM, *p < 0.05, **p < 0.01; (C) BrdU assay; (D) plate colony formation assay; data are means of three independent experiments, error bars ± SEM, **p < 0.01; (E) xenograft tumor weight (g) from BALB/c nude mice injected subcutaneously at the armpit with MG63 cells infected with rLV, rLV-GLP2, or rLV-GLP2 plus pcDNA3-NF-κB; data are means from six mice (mean ± SEM; n = 6; **p < 0.01); (F) PCNA staining of formaldehyde-fixed, paraffin-embedded 4-µm tumor sections (original magnification ×100); **p < 0.01.)

GLP2 Promotes Directed Differentiation from Osteosarcoma Cells to OBs
To explore whether GLP2 promotes the differentiation of MG63 to OBs, we constructed a model of OBs induced from MG63 infected with rLV or rLV-GLP2 using dexamethasone, β-glycerophosphate, and ascorbic acid (Asc) for 21 days, according to the schematic diagram (Figure 5A). Our results showed that GLP2 was significantly overexpressed in OBs induced from MG63 infected with rLV-GLP2 compared to the control OBs induced from MG63 infected with rLV (Figure 5B). Using ELISA, we measured GLP2 released by the OB cells before and after transduction of GLP2. The results showed that the released level of GLP2 in the rLV-GLP2-infected group was significantly higher than in the rLV control group (0 versus 480.53 ± 107.59 pg/mL; p = 0.0082 < 0.01) (Figure S5A). Furthermore, GLP2R was expressed in OB cells, and there was no significant difference between the rLV group and the rLV-GLP2 group (Figure S5B). Moreover, compared to control, the induced OB cells (bone alkaline phosphatase [BALP]-positive cells) were increased in the rLV-GLP2 group (11.14% ± 2.55% versus 78.23% ± 8.26%; p < 0.01) (Figure 5C). Furthermore, compared to control, the calcific nodules were significantly increased in the rLV-GLP2 group (4.72% ± 1.204% versus 18.19% ± 2.636%; p < 0.01) (Figure 5D).
Moreover, compared to control, the transcriptional level of BALP, Ocn, and PICP was significantly increased in the rLV-GLP2 group (Figure 5E), and the expression of BALP, Ocn, and PICP was significantly increased in the rLV-GLP2 group (Figure 5F). Furthermore, the BALP activity was significantly increased in the rLV-GLP2 group (Figure 5G), and the PICP activity was significantly increased in the rLV-GLP2 group (Figure 5H). Taken together, these observations suggest that GLP2 could promote the directed differentiation from osteosarcoma cells to osteoblasts.

GLP2 Enhances the Expression of BMP through c-Fos in OBs
Given that GLP2 could enhance the directed differentiation from osteosarcoma cells to OBs, we tried to validate the involvement of c-Fos and BMP during this differentiation. In the cell lines, GLP2 was significantly overexpressed in OBs induced from MG63 infected with rLV-GLP2 compared to the control OBs induced from MG63 infected with rLV (Figure S6A). First, we performed RNA immunoprecipitation (RIP) with anti-METTL3 followed by RT-PCR with c-Fos primers in OB cells. The results showed that the excessive expression of GLP2 enhanced the binding of METTL3 (an mRNA methyltransferase) to c-Fos mRNA compared to control (Figure 6A). Moreover, we performed RIP with anti-m6A followed by RT-PCR with c-Fos primers in OB cells. The results showed that the excessive expression of GLP2 promoted methylation of the c-Fos mRNA compared to the rLV control (Figure 6B). Furthermore, we performed RT-PCR analysis of c-Fos in OB cells, and the results showed that overexpression of GLP2 enhanced the transcription of c-Fos in OB cells compared to control (Figure 6C). Next, we performed western blotting with anti-c-Fos in OB cells, and the results showed that overexpression of GLP2 enhanced the expression of c-Fos in OB cells compared to the control group (Figure 6D). Furthermore, compared to control, excessive GLP2 increased the loading of c-Fos on the promoter region of BMP (Figure 6E). Moreover, compared to control, excessive GLP2 increased the luciferase activity of the promoter of BMP (Figure 6F). Ultimately, compared to control, excessive GLP2 increased the transcription and translation of BMP (Figure 6G). Taken together, GLP2 enhanced the expression of BMP dependent on c-Fos in OB cells.

GLP2 Activates BALP and PICP through c-Fos and BMP in OBs
To validate whether GLP2 could alter the transcriptional activity of BALP and PICP, we performed the related detection in OBs. In the cell lines, GLP2 was significantly overexpressed in OBs induced from MG63 infected with rLV-GLP2 compared to the control OBs induced from MG63 infected with rLV (Figure S6B). First, we performed a ChIP assay with anti-c-Fos and anti-BMP followed by PCR with BALP and PICP promoter primers in OB cells induced from MG63 infected with rLV-GLP2 or in control OBs induced from MG63 infected with rLV, respectively. The results showed that excessive GLP2 increased the loading of c-Fos or BMP on the promoter regions of BALP and PICP compared to the control group (Figure 7A). Furthermore, compared to control, excessive GLP2 increased the luciferase activity of the promoter of BALP (Figure 7B), and excessive GLP2 increased the luciferase activity of the promoter of PICP (Figure 7C). Taken together, these observations suggest that GLP2 enhanced the transcriptional activity of BALP and PICP in OB cells.
c-Fos Knockdown Abrogates GLP2 Action in OBs
To validate whether the function of GLP2 in OBs is elicited by c-Fos, we obtained three stable OB cell lines induced from MG63 infected with rLV, rLV-GLP2, and rLV-GLP2 plus pGFP-V-RS-c-Fos, respectively. Using ELISA, we measured GLP2 released by the OB cells after c-Fos knockdown. The results showed that the released level of GLP2 in the rLV-GLP2-infected group was significantly higher than in the rLV control group (0 versus 541.01 ± 84.28 pg/mL; p = 0.0039 < 0.01); however, the released level of GLP2 in the rLV-GLP2-infected group was not significantly altered compared to the rLV-GLP2 plus pGFP-V-RS-c-Fos group (541.01 ± 84.28 versus 460.04 ± 124.85 pg/mL; p = 0.271 > 0.05) (Figure S7A). Furthermore, GLP2R was expressed in OB cells, and there was no significant difference among the rLV, rLV-GLP2, and rLV-GLP2 plus pGFP-V-RS-c-Fos groups (Figure S7B). As shown in Figure 8A, overexpression of GLP2 enhanced the expression of c-Fos, BMP, BALP, and PICP in OB cells compared to the rLV group. However, when c-Fos was knocked down, excessive GLP2 could not significantly alter the expression of BMP, BALP, and PICP in OB cells. Moreover, excessive GLP2 increased the luciferase activity of the promoter of BALP (Figure 8B), and it increased the activity of BALP in OB cells compared to control (Figure 8C). However, when c-Fos was knocked down, excessive GLP2 did not significantly alter the luciferase activity of the promoter of BALP or the activity of BALP in OBs (Figures 8B and 8C). Furthermore, excessive GLP2 increased the positive rate of BALP staining in OBs (BALP-positive cells) compared to control (9.01% ± 1.55% versus 69.05% ± 11.6%; p = 0.0059 < 0.01). However, c-Fos knockdown fully abrogated the function of GLP2 (9.01% ± 1.55% versus 10.89% ± 2.91%; p = 0.266 > 0.05) (Figures 8D and 8E). On the other hand, excessive GLP2 increased the luciferase activity of the promoter of PICP (Figure 8F) and increased the activity of PICP in OBs compared to control (Figure 8G). However, when c-Fos was knocked down, excessive GLP2 did not significantly alter the luciferase activity of the promoter of PICP or the activity of PICP in OBs (Figures 8F and 8G). Taken together, these observations suggest that the action of GLP2 is dependent on c-Fos in OBs.

DISCUSSION
Recently, the role of GLP2 has been studied extensively, and a large number of studies have shown potential applications for GLP2 in disease therapy. In this report, we focused mainly on the view that GLP2 inhibits the growth of osteosarcoma cells by inhibiting NF-κB and promotes the direct differentiation of osteosarcoma cells to OBs dependent on c-Fos (Figure S8). To our knowledge, this is the first report demonstrating that GLP2 is associated with osteosarcoma and OBs. It is worth mentioning that our observations clearly demonstrated that GLP2 is crucial for the inhibition of osteosarcoma. Our results showed that GLP2 inhibits osteosarcoma cell growth elicited by NF-κB. This assertion is based on several observations: (1) GLP2 inhibits the expression and activity of the inflammation-related gene NF-κB in osteosarcoma cells, and (2) excessive NF-κB fully abrogates the cancer-suppressing function of GLP2 in osteosarcoma cells. Our results imply that GLP2 is involved in the inhibition of the development of osteosarcoma, which strongly suggests that GLP2 has suppressor properties.
This is consistent with previous reports; for example, recent data have suggested the notion that GLP2 plays a key role in colon carcinogenesis. 21 Furthermore, it is apparent that GLP2 promoted directed differentiation from osteosarcoma cells to OBs dependent on c-Fos. Herein, the involvement of GLP2 is supported by results from two parallel sets of experiments: (1) GLP2 increased the activity of BALP and PICP in OBs dependent on c-Fos, and (2) the depletion of c-Fos abrogated the GLP2 action in OBs. Our results imply that GLP2 is involved in the differentiation of osteosarcoma cells to OBs. This suggests that GLP2 may inhibit osteosarcoma progression by triggering differentiation of osteosarcoma. Strikingly, our results showed that GLP2 inhibited the expression of c-Myc, CyclinD1, and pyruvate kinase M2 (PKM2) dependent on NF-κB. It is well known that PKM2 is a rate-limiting glycolytic enzyme that catalyzes the final step in glycolysis, which is key in tumor metabolism and growth. 22,23 Moreover, PKM2 dephosphorylation by Cdc25A promotes the Warburg effect and tumorigenesis. 24 In particular, PKM2 promotes tumor angiogenesis through NF-κB activation. 25 Furthermore, cytosolic PKM2 stabilizes mutant EGFR protein expression through regulating the HSP90-EGFR association. 26 In particular, mutant EGFR is associated with tyrosine kinase activity, which plays a role in cancer. 27 Moreover, it has been proven that c-Myc and CyclinD1 play important roles during tumorigenesis. 28,29 Notably, our present observations also demonstrate that GLP2 decreased the interaction between β-catenin and NF-κB in osteosarcoma. β-catenin, as an intracellular signal transducer, plays an important role in the WNT-signaling pathway. A report indicates that microRNA-153 promotes Wnt/β-catenin activation in hepatocellular carcinoma through the suppression of WWOX. 30 Moreover, focal adhesion kinase (FAK) promotes OB progenitor cell proliferation and differentiation by enhancing Wnt signaling. 31 A report also showed that the long noncoding RNA BC032913 plays an inhibitory role in cancer aggression by inactivation of the Wnt/β-catenin pathway. 32 In this report, our results suggest that GLP2 regulates and controls BMP. It is well known that BMPs are critical for skeletal and cartilage development, homeostasis, and repair, and they stimulate chondrogenesis of equine synovial membrane-derived progenitor cells. 33 A study showed that BMP-2 played a role in the differentiation of Runx2-deficient cells into OBs. 34 In this report, we first proved that GLP2 exerts its effect in part through the upregulation of c-Fos and the downregulation of NF-κB expression. Our present approaches provided unequivocal evidence for critical suppressor roles of GLP2 in the tumorigenesis of osteosarcoma and in osteogenesis, and they support the notion that GLP2 may be an alternative bona fide inhibiting factor of osteosarcoma and promoting factor of osteogenesis. However, the exact mechanism underlying the role of GLP2 in the tumorigenesis of osteosarcoma and in osteogenesis remains to be elucidated. Further exploration of the different pathways of GLP2 signaling is needed to suggest suitable GLP2-based therapeutic strategies in cancer. In conclusion, our results suggest that GLP2 promotes osteogenic differentiation from osteosarcoma cells and inhibits the growth of osteosarcoma, indicating that GLP2 therapy could be a valuable approach to promote bone regeneration and to inhibit osteosarcomagenesis. This may provide potential therapeutic targets for the treatment of osteosarcoma.
Cell Lines, Plasmids, and Lentiviruses
The human osteosarcoma cell line MG63 was obtained from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). These cell lines were maintained in DMEM supplemented with 10% fetal bovine serum (Gibco BRL Life Technologies) in a humidified atmosphere in a 5% CO2 incubator at 37 °C. pGFP-V-RS and pGFP-V-RS-c-Fos were purchased from Origene (Rockville, MD, USA). pcDNA3 and pcDNA3-NF-κB were purchased from Addgene (Cambridge, MA, USA). rLV and rLV-GLP2 were purchased from Wu Han viraltherapy Technologies.

Cell Infection, Transfection, and Stable Cell Lines
MG63 cells were infected with rLV and rLV-GLP2, respectively. MG63 cells were transfected with pGFP-V-RS-c-Fos and pcDNA3-NF-κB using Lipofectamine 2000 transfection reagent (Invitrogen), according to the manufacturer's instructions. We selected single-cell clones overexpressing GLP2 to establish the stable cell lines. Transfection efficiency was observed by green fluorescence imaging and measured by western blotting.

Directed Differentiation of Osteosarcoma Cells to OBs
Directed differentiation of osteosarcoma cells was performed with 0.1 mM dexamethasone (Dex), 0.5 mM β-glycerophosphate (β-GP) and 50 mg/L ascorbic acid (Asc) according to the manufacturer's instructions. Alkaline phosphatase (ALP), osteocalcin (Ocn), bone morphogenetic protein (BMP) and procollagen I carboxyterminal propeptide (PICP) were detected in these cells according to the manufacturer's instructions (TaKaRa).

BALP Activity Assay
The BALP activity assay was performed according to the manufacturer's instructions (Beyotime). The sample, the detection buffer, and para-nitrophenyl phosphate (p-NPP) were mixed and then incubated at 37 °C for 5-10 min. Finally, 100 µL of reaction stop solution was added to each well to stop the reaction, and the absorbance at 405 nm was measured.

BALP Staining
The BALP staining was performed according to the manufacturer's instructions (COSMO BIO). The culture medium was removed and each well was washed three times with 1 mL PBS. 500 µL of 10% neutral buffered formalin was added to each well, and the cells were fixed for 20 min at room temperature. Then, the 10% neutral buffered formalin was removed, and each well was washed with 2 mL deionized water three times. Then, 400 µL of chromogenic substrate was added to each well, and cells were incubated at 37 °C for 20 min. Finally, cells were washed with deionized water to stop the reaction. ImageJ software was used for the quantification of BALP.

Alizarin Red S Staining
Cells were rinsed with 1× PBS three times and then fixed with formalin for 15 min. Following that, the cells were stained with alizarin red S for 5 min, then washed with PBS, dried and mounted.

ELISA of GLP2
Diluted washing solution was added to each well of the microplate and the plate was shaken for 30 s. Then, the washing liquid was discarded and the plate was dried with absorbent paper (repeated 5 times). To each well was added first chromogenic agent A (50 µL) and then chromogenic agent B (50 µL). The plate was gently shaken to mix and kept away from light for 15 min at 37 °C. Then, the microplate was taken out and termination liquid (50 µL) was added to each well to stop the reaction. The absorbance value (OD) of each well was measured at a 450-nm wavelength.

RT-PCR
Total RNA was purified using Trizol (Invitrogen) according to the manufacturer's instructions. cDNA was prepared by using oligo(dT)18 and a SuperScript First-Strand Synthesis System (Invitrogen). The PCR amplification kit (TaKaRa) was used according to the manufacturer's instructions. PCR products were analyzed by 1.0% agarose gel electrophoresis and visualized by ethidium bromide staining using an Image imaging system (Baygene).
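As an illustration of how the p-NPP readout described above can be reduced to a relative activity, the snippet below blank-corrects triplicate OD405 readings and expresses the treated group as fold change over control; all numbers are synthetic placeholders, not measurements from this study.

import numpy as np

blank = 0.05                                         # assumed no-substrate blank
od_rlv      = np.array([0.42, 0.45, 0.40]) - blank   # control wells (OD405)
od_rlv_glp2 = np.array([0.95, 1.02, 0.89]) - blank   # GLP2-overexpressing wells

fold = od_rlv_glp2.mean() / od_rlv.mean()
print(f"Relative BALP activity, rLV-GLP2 vs rLV: {fold:.2f}-fold")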
Western Blotting
The cells were lysed in RIPA lysis buffer and centrifuged at 12,000 × g for 20 min at 4 °C after sonication on ice, and the supernatants were collected. After being boiled for 10 min in the presence of 2-mercaptoethanol, samples were separated on a 10% SDS-PAGE gel and transferred onto nitrocellulose membranes, stained, and then blocked in 10% dry milk in TBST (Tris-buffered saline with Tween-20) for 1 hr at 37 °C. Following three washes in TBST, the blots were incubated with primary antibody (appropriate dilution) overnight at 4 °C. Following three washes, membranes were then incubated with secondary antibody for 60 min at 37 °C. Signals were visualized by enhanced chemiluminescence (ECL).

ChIP
Cells were cross-linked with 1% (v/v) formaldehyde for 10 min at room temperature and the reaction was stopped with 125 mM glycine for 10 min. Cross-linked cells were washed with PBS, resuspended in lysis buffer, and sonicated for 10 min with a SONICS sonicator to generate DNA fragments. Chromatin extracts were pre-cleared with Protein-A/G-Sepharose beads and immunoprecipitated with specific antibody on Protein-A/G-Sepharose beads. After washing, elution, and de-cross-linking, the ChIP DNA was detected by PCR.

RIP
Cells were lysed and the ribonucleoprotein particle-enriched lysates were incubated with Protein A/G-plus agarose beads (Santa Cruz Biotechnology, CA) together with antibody or IgG for 4 hr at 4 °C. Beads were subsequently washed and RNAs were then isolated for RT-PCR.

Cell Proliferation Assay
Cells at a concentration of 5 × 10^4 were seeded into 96-well culture plates in 100 µL culture medium containing 10% fetal calf serum (FCS). Before detection, 10 µL/well of the cell proliferation reagent CCK8 (Yeasen) was added and incubated for 4 hr at 37 °C and 5% CO2. Absorbance at OD450 was measured using a SpectraMax M5 (Molecular Devices, MD, USA).

Soft Agar Colony Formation Assay
5 × 10^2 cells were plated on a 10-cm dish containing 0.5% (lower) and 0.35% (upper) double-layer soft agar. The dish was then incubated at 37 °C in a humidified incubator for 14 days. Soft agar colonies on the dish were stained with 5 mL of 0.05% crystal violet for more than 1 hr and the colonies were counted.

Transwell Assay
Transwell assays were performed in 24-well polyester (PET) inserts (Falcon, 8.0-µm pore size) for migration assays according to the manufacturer's instructions (BD Falcon). We observed and counted the migrated cells on triplicate membranes to determine the average number of migrated cells.

Xenograft Transplantation In Vivo
Four-week-old male athymic BALB/c mice were injected subcutaneously at the armpit area with a suspension of 1 × 10^7 MG63 cells in 100 µL PBS. The mice were observed for 4 weeks and then sacrificed to recover the tumors. The wet weight of each tumor was determined for each mouse. The use of mice for this work was reviewed and approved by the institutional animal care and use committee in accordance with China NIH guidelines.

Orthotopic Osteosarcoma Mouse Model
The model was established according to the methodology previously described. 35,36 Briefly, MG63 cells were cultured in DMEM and collected before transplantation, and they were resuspended in serum-free media to a final concentration of 10^7 cells/mL.
About 100 µL of cell suspension was injected into the right proximal tibia of 4-week-old female athymic BALB/c mice (severe combined immunodeficient mice). The mice were observed for 4 weeks and then sacrificed to recover the tumors. The wet weight of each tumor was determined for each mouse. The use of mice for this work was reviewed and approved by the institutional animal care and use committee in accordance with China NIH guidelines.

Statistical Analysis
Each value is presented as mean ± SEM, with a minimum of three replicates. The results were evaluated with statistical software (SPSS), and Student's t test was used for comparisons, with p < 0.05 considered significant.

SUPPLEMENTAL INFORMATION
Supplemental Information includes eight figures and can be found with this article online at https://doi.org/10.1016/j.omtn.2017.12.009.
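The statistical treatment described above (mean ± SEM over a minimum of three replicates, compared by Student's t test at p < 0.05) can be reproduced with a few lines of Python; the triplicate values below are synthetic stand-ins chosen only to mimic the order of magnitude of the reported colony formation rates.

import numpy as np
from scipy import stats

rlv      = np.array([60.0, 55.4, 66.9])   # control group (%, triplicate)
rlv_glp2 = np.array([20.0, 18.2, 23.7])   # GLP2-overexpressing group

def mean_sem(x):
    # SEM = sample standard deviation / sqrt(n)
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

for name, grp in (("rLV", rlv), ("rLV-GLP2", rlv_glp2)):
    m, sem = mean_sem(grp)
    print(f"{name}: {m:.2f} ± {sem:.2f} (mean ± SEM)")

t, p = stats.ttest_ind(rlv, rlv_glp2)      # two-sample Student's t test
print(f"t = {t:.2f}, p = {p:.4f}, significant at p < 0.05: {p < 0.05}")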
2018-04-03T05:07:35.349Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "e1fdf1ef47c84735538ce737c35ae6dde187d0ee", "oa_license": "CCBY", "oa_url": "http://www.cell.com/article/S2162253117303086/pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "e1fdf1ef47c84735538ce737c35ae6dde187d0ee", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
5629311
pes2o/s2orc
v3-fos-license
A next generation sequencing based approach to identify extracellular vesicle mediated mRNA transfers between cells

Background
Exosomes and other extracellular vesicles (EVs) have emerged as an important mechanism of cell-to-cell communication. However, previous studies either did not fully resolve what genetic materials were shuttled by exosomes or only focused on a specific set of miRNAs and mRNAs. A more systematic method is required to identify the genetic materials that are potentially transferred during cell-to-cell communication through EVs in an unbiased manner.

Results
In this work, we present a novel next-generation sequencing (NGS) based approach to identify EV mediated mRNA exchanges between co-cultured adipocyte and macrophage cells. We performed molecular and genomic profiling and jointly considered data from RNA sequencing (RNA-seq) and genotyping to track the "sequence varying mRNAs" transferred between cells. We identified 8 mRNAs being transferred from macrophages to adipocytes and 21 mRNAs being transferred in the opposite direction. These mRNAs represented biological functions including extracellular matrix, cell adhesion, glycoprotein, and signal peptides.

Conclusions
Our study sheds new light on EV mediated RNA communications between adipocyte and macrophage cells, which may play a significant role in developing insulin resistance in diabetic patients. This work establishes a new method that is applicable to examining genetic material exchanges in many cellular systems and has the potential to be extended to in vivo studies as well.

Electronic supplementary material: The online version of this article (10.1186/s12864-017-4359-1) contains supplementary material, which is available to authorized users.

Background
Cell-to-cell communication plays a key role in maintaining the integrity of multicellular systems. The most studied cell-to-cell communication mechanisms include chemical or hormone-mediated signaling and direct cell-to-cell contacts. In the late 1990s, exchange of cellular information was also demonstrated to occur via the release of intracellular contents packaged in lipid bilayer vesicles called extracellular vesicles (EVs) or exosomes, as reviewed by Colombo et al. [1]. Various types of EVs exist and are generally classified according to their sub-cellular origins. Microvesicles (MVs) are EVs formed and released by budding from the cell's plasma membrane and display a diverse range of sizes (100-1000 nm in diameter). Exosomes, in contrast, are of endosomal origin and are released as a consequence of multivesicular endosomes fusing with the plasma membrane, and are generally small in size (30-150 nm). Importantly, EV secretion appears to be conserved throughout evolution and is a characteristic of most cell types including adipocytes, macrophages, hematopoietic, neuronal, fibroblastic, and various tumour cells [2,3]. Furthermore, these EVs have been found to contain various types of cargo, such as mRNA, microRNA, proteins, lipids, and DNA [4][5][6], representing diverse ways of mediating cell-to-cell communications. As EVs contain surface molecules that can be recognized by recipient cells, these molecules can be readily shuttled from one cell to another and thereby influence the biological state of the recipient cell in multiple ways [1]. With the potential of EVs as critical mediators of cell-to-cell communication, they have become a key focus of research for numerous pathological settings, as biomarkers as well as mediators of disease.
A role for EVs has subsequently been identified in various disease settings, including diabetes, cardiovascular disease, inflammation and pain, degenerative brain disorders and cancer, to name a few. For example, Deng et al. demonstrated that exosome-like vesicles obtained from obese mice were sufficient to induce insulin resistance when injected into wild-type C57BL/6 mice [2]. Ibrahim et al. pinpointed exosomes secreted by human cardiosphere-derived cells (CDCs) as critical agents of regeneration of injured heart muscle and cardioprotection. They found that the injection of exosomes into injured mouse hearts recapitulated the regenerative and functional effects produced by CDC transplantation, whereas inhibition of exosome production by CDCs blocked these benefits [7]. With respect to cancers, exosomal microRNAs and RNA are in fact being used as diagnostic biomarkers for several cancers such as ovarian cancer [8]. Moreover, exosomes have been adopted as carriers to load MHC class I and class II peptides for vaccinating metastatic melanoma patients [9] and to deliver the antitumor microRNA let-7a to treat breast cancer in mice [10]. The readers are referred to recent review articles for detailed roles of exosomes in disease processes [11,12].

Despite the seemingly extensive characterization of EVs to date, however, limitations exist. First, although several studies have identified many genetic materials within EVs, these studies did not establish that the materials are actually transported into and released within recipient cells [13]. Second, there is a lack of estimates of the transfer rate, e.g., the proportion of total genetic materials (in the form of mRNA, miRNA, protein, etc.) being transferred from one cell to another. Mittelbrunn et al. demonstrated the existence of antigen-driven unidirectional miRNA transfer from the T cell to the antigen-presenting cell [14]. However, it is not clear whether such unidirectional transfer holds between other cell types. For these reasons, a more systematic method is needed to identify the genetic materials that are potentially transferred during cell-to-cell communication through EVs in an unbiased manner.

In this study, we jointly consider data from RNA sequencing and a genotype array to systematically discover mRNA exchanges between two co-cultured cell lines of different genetic backgrounds. We relate those exchanges to potential EV mediated transfer by verifying them with mRNA sequencing of purified EVs. Given the precedent for potential crosstalk between macrophages and adipocytes in adipose tissue under obese conditions leading to insulin resistance [15], we chose to apply our novel methodology to co-cultures of human differentiated adipocytes and macrophages.

Experimental design and data generation

Experimental design
Our main experiment was performed on an in vitro co-culture cellular system, in which two types of human cells, namely adipocytes and macrophages, were cultured in transwell plates with porous membrane inserts (pore size of 0.4 μm) to prevent them from being mixed together (Corning, Inc. Costar, NY, USA; see Additional file 1). The porous membrane allows small particles (size less than 0.4 μm) to pass through, making EV mediated mRNA exchanges between the two cell lines possible, as has been demonstrated by Garcia et al. [16] and Zheng et al. [17] (Fig. 1a and Additional file 1: Figure S1). The two cell lines were derived from donors with no known familial relationships.
Therefore, a large number of single nucleotide polymorphism (SNP) markers differentiating the two cell lines were expected, which would allow us to easily distinguish cell origins. With this experimental setting, we planned to address: whether there are detectable mRNA exchanges between the two cell lines; what the transferred genes and their functions are; and finally whether any transferred mRNAs are likely mediated by EVs in the media.

Data generation
We performed genotype profiling using the Illumina HumanOmni2.5Exome-8 BeadChip for the two cell lines with two technical replicates for each cell line (Adipocyte (AD) B1/B2 and Macrophage (MO) B1/B2). We also performed RNA-seq profiling for both single-cultured and co-cultured cells. For single-cultured cells, we generated RNA-seq data from two replicate cell pellet samples (ADaloneN1/N2 and MOaloneN1/N2). For the co-cultured cells, we obtained RNA-seq data from triplicate cell pellet samples in the adipocyte layer (ADcoN1-N3) and triplicate cell pellet samples in the macrophage layer (MOcoN1-N3). EVs were isolated using sequential ultracentrifugation, and western blotting demonstrated enrichment for CD9 expression, a marker of endosomal origin. Considering the precedent set in the field, where a mixture of exosomes and MVs is likely, we are being conservative and referring to what we isolated from the media broadly as EVs, also referred to as 'exosome-like vesicles'. We performed RNA sequencing on isolated EVs from the media of single-cultured and co-cultured cells and profiled the data of ADexosome, MOexosome, and ADMOexosome (Fig. 1a). The readers are referred to Additional file 1 for details regarding genotyping and RNA sequencing on both cells and EVs.

Data processing and quality control

NGS data processing
We first conducted a multiple-step quality control (QC). Briefly, we filtered out ribosomal RNAs (rRNAs) using SortMeRNA [18] and trimmed Illumina adaptors using Trimmomatic [19] on the raw paired-end RNA-seq reads (of length 100 bp) for each sample. Between 11 and 90 million read pairs were obtained after these two QC steps for all samples except ADexosome, which generated far fewer reads (528 K read pairs). These reads were then mapped to the reference human genome (hg38) using an annotation file (GENCODE V21) by STAR (2.4.0.1) [20]. The mapping rates were over 90% for all cell samples and over 80% for exosomes (Additional file 1: Table S1).

[Figure 1 caption] The experimental workflow used to identify mRNA transfers between adipocytes and macrophages in a co-culture system. (a) A schema illustrating the experimental design for cell culture and sample collection, including (a) adipocyte cells cultured alone, from which two cell pellet samples ADaloneN1 and ADaloneN2 were retrieved for RNA-seq; adipocyte B1 and B2 were technical duplicates of genotyping using the Illumina Omni2.5 SNP array, and an exosome extraction ADexosome was prepared for RNA-seq; (b) same as (a) but for macrophage cells; (c) co-culture of the two cell lines, from which three adipocyte cell pellet samples ADcoN1-N3, three macrophage cell pellet samples MOcoN1-N3, and an exosome sample ADMOexosome were profiled by RNA-seq. (b) The analytical pipeline, including the pre-processing of (a) genotype data and (b) RNA-seq data, (c) the Bayesian model to call mRNA transfers between two cell lines, and (d) the final output of this pipeline.
The uniquely mapped reads were further processed by several steps, such as marking duplicates, Split'N'Trim, reassigning mapping quality, and base call recalibration, similar to the protocol used in [21] (Fig. 1b). SAMtools mpileup [22] was used to retrieve the base profile at each locus (see Additional file 1 for more details). We disregarded mapped indel, placeholder, and reference-skip bases and considered only the remaining bases for our analysis. We adopted the TopHat, HTSeq, and edgeR/DESeq protocols [23] to identify differentially expressed genes between single-cultured and co-cultured samples. The Cufflinks protocol [24] was used to quantify gene expression, and GATK [21] to call variants in each sample. For cross-sample quality control, we applied Principal Component Analysis (PCA); the results based on the first two PCs for gene expression are plotted in Additional file 1: Figure S2. We also show the consistency of genotypes at common SNPs across the 10 cell samples and 2 exosome samples in Additional file 1: Figure S3. Both gene expression levels and variants were consistent with sample annotations, suggesting that no sample mislabelling occurred. To maximize the power of detecting mRNA transfer, we also merged RNA-seq reads from the same cell lines and denoted them as ADalone, ADco, MOalone, and MOco, respectively.

Analytical procedures for identifying transferred mRNAs
At a SNP locus, five typical scenarios of nucleotide base composition exist, as illustrated in Fig. 1b (c). We define a SNP as a "tag" SNP if it satisfies the following two criteria: (1) it shows a polymorphism between the two cell lines; and (2) it has a homozygous genotype in the recipient cell. Tag SNPs allow us to infer whether donor cells have provided some copies of their mRNAs to the recipient cells using RNA-seq data. Two candidate tag SNPs are highlighted by grey background shading. Using the first highlighted case as an example, the genotype at this locus is "CT" for macrophages and "CC" for adipocytes. For adipocytes under the co-culture condition, the mapped read data contain both "C" and "T" bases, in which the "T" bases may come from macrophages during in vitro co-culture. Alternatively, the "T"s in the adipocytes may originate from other mechanisms such as sequencing error, mapping error, and RNA editing. We developed a Bayesian model to distinguish mRNA transfers from these alternative mechanisms.
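As a concrete illustration of the tag-SNP criterion just described, the following minimal Python sketch (our illustration, not the authors' code) expresses the two filters as predicates over genotypes represented as unordered allele pairs:

```python
# Hypothetical sketch of the tag-SNP filter: a locus qualifies if the two
# cell lines are polymorphic at it (criterion 1) and the recipient line is
# homozygous there (criterion 2).

def is_tag_snp(donor_gt, recipient_gt):
    """donor_gt, recipient_gt: 2-tuples of alleles, e.g. ('C', 'T')."""
    polymorphic = sorted(donor_gt) != sorted(recipient_gt)      # criterion (1)
    recipient_homozygous = recipient_gt[0] == recipient_gt[1]   # criterion (2)
    return polymorphic and recipient_homozygous

# The worked example from the text: macrophage (donor) "CT", adipocyte "CC".
assert is_tag_snp(('C', 'T'), ('C', 'C'))       # informative tag SNP
assert not is_tag_snp(('C', 'C'), ('C', 'C'))   # no polymorphism
assert not is_tag_snp(('C', 'T'), ('C', 'T'))   # recipient heterozygous
```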
Considering adipocytes co-cultured with macrophages, for a given locus, the Bayesian model takes the following as inputs: (1) the genotype information of the donor (macrophage) and recipient (adipocyte) cell lines; (2) the counts of the nucleotides at the locus; and (3) the base qualities of the reads mapped to the locus. Specifically, as shown in Additional file 1: Figure S4, we denote the mapped RNA-seq read data at a particular genome position from the four profiles by A_I, A_C, M_I, and M_C, corresponding to adipocyte alone, adipocyte co-cultured, macrophage alone, and macrophage co-cultured cells, respectively. Let G_d and G_r be the genotypes of the donor and recipient cells, respectively. When examining the sequence data of adipocytes co-cultured with macrophages, G_d is the genotype of the macrophage (donor cell) and G_r the genotype of the adipocyte (recipient cell). We assume that the read depth of A_C is N and denote by t_r (0 ≤ t_r ≤ N) the number of reads in A_C that were transferred from donor cells.

Obviously, a genetic material transfer has happened at the position under consideration if t_r ≥ 1. We frame the question as a Bayesian inference. We calculate the Bayes factor r to measure the relative support of the data under two hypotheses, i.e., that there exists at least one transfer vs. that there is no transfer (the null hypothesis). We reject the null hypothesis, and claim a positive genetic material transfer, if r is greater than a predefined threshold. We assume that the true genotypes of the donor and recipient cells are known, i.e., G_r and G_d are predefined constants (based on the genotype array data). We next calculate P[t_r = i | A_C] for 0 ≤ i ≤ N, that is, the posterior probability of exactly i nucleotides having been transferred given A_C. By Bayes' rule,

P[t_r = i | A_C, G_r, G_d] = P[A_C | t_r = i, G_r, G_d] P[t_r = i, G_r, G_d] / P[A_C, G_r, G_d],

where the denominator P[A_C, G_r, G_d] is a constant and is often omitted in Bayesian calculations. The factorization P[t_r = i, G_r, G_d] = P[t_r = i] P[G_r, G_d] holds because we assume that the number of reads being transferred is independent of the genotypes of the donor and recipient cells. Note that we derive G_r and G_d from genotype array data; but given that we also have the RNA-seq data, an alternative approach is to estimate P[G_r] and P[G_d] by P[G_r | A_I] and P[G_d | M_I], which can be calculated in the same way as in McKenna et al. [25]. This alternative approach can be useful when genotype information is not available.

P[t_r = i] is the prior "belief" that i reads at the position under consideration came from transfer. Without much prior knowledge of transfer, we assume a uniform prior: P[t_r = 0] = 1/2 and P[t_r = i] = 1/(2N) for 1 ≤ i ≤ N. That is, we assume equal probabilities of having a genetic material transfer and not having one, and further assume that the probability of transferring i nucleotides is equal for any i ≥ 1. For the likelihood, we assume that reads are independent of each other, thus

P[A_C | t_r = i, G_r, G_d] = ∏_j P[b_j | t_r = i, G_r, G_d],

where b_j is the nucleotide observed for read j. For the N reads in A_C, if i of them came from transfer, each read has a probability of i/N of coming from a transfer and a probability of (N - i)/N of being native, so that

P[b_j | t_r = i, G_r, G_d] = (i/N) P[b_j | G_d] + ((N - i)/N) P[b_j | G_r].

In addition, each transferred read comes from one of the two parental chromosomes in the donor cell. Assuming that transfer from the maternal chromosome is as probable as transfer from the paternal chromosome (i.e., ignoring allele-specific expression), we get

P[b_j | G_d] = P[b_j | {A_1, A_2}] = (1/2) P[b_j | A_1] + (1/2) P[b_j | A_2],

where the genotype G_d = {A_1, A_2} is decomposed into its two alleles. The probability of observing a base given an allele A is

P[b_j | A] = 1 - 10^(-Q/10) if b_j = A, and 10^(-Q/10)/3 otherwise,

where Q is the phred-scaled recalibrated quality score at the base (the mismatch case distributes the error probability evenly over the three alternative nucleotides). Similarly, we can calculate P[b_j | G_r].

Based on the method described above, we calculate the Bayes factor for each locus. We call a genetic material transfer at a locus if the Bayes factor is greater than a predefined threshold β (β = 20). We performed the analysis on all ~605,000 tag SNPs (Fig. 2) with read depth equal to or greater than 10 in the co-cultured samples. Finally, we evaluated the possibility of mRNA transfers being mediated through exosomes by cross-referencing the identified mRNAs against those present in the EVs (Fig. 1b).
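The calculation above can be condensed into a short, self-contained sketch. This is our illustration rather than the authors' implementation: the Bayes factor is computed as the ratio of the marginal likelihood under t_r ≥ 1 (averaging over i = 1..N, consistent with the uniform prior) to the likelihood under t_r = 0, and realistic read depths would call for log-space arithmetic.

```python
# Minimal sketch of the per-locus Bayes factor, assuming the phred-based base
# model and uniform prior described in the text.

def p_base_given_allele(base, allele, q):
    """Error probability e = 10**(-q/10); a mismatch is assumed equally
    likely to be any of the three other nucleotides."""
    e = 10 ** (-q / 10)
    return 1 - e if base == allele else e / 3

def p_base_given_genotype(base, genotype, q):
    a1, a2 = genotype  # equal weight on the two parental alleles
    return 0.5 * p_base_given_allele(base, a1, q) + 0.5 * p_base_given_allele(base, a2, q)

def bayes_factor(reads, g_donor, g_recipient):
    """reads: list of (base, phred_quality) pairs mapped to the locus in A_C.
    Returns P[A_C | t_r >= 1] / P[A_C | t_r = 0], averaging the likelihood
    uniformly over the number of transferred reads i = 1..N."""
    n = len(reads)
    def likelihood(i):  # P[A_C | t_r = i, G_r, G_d], reads independent
        p = 1.0
        for base, q in reads:
            p *= (i / n) * p_base_given_genotype(base, g_donor, q) \
                 + ((n - i) / n) * p_base_given_genotype(base, g_recipient, q)
        return p
    return (sum(likelihood(i) for i in range(1, n + 1)) / n) / likelihood(0)

# Example: adipocyte (recipient) "CC", macrophage (donor) "CT"; two "T" reads
# among ten at Q30 give a Bayes factor far above the threshold of 20.
reads = [('C', 30)] * 8 + [('T', 30)] * 2
print(bayes_factor(reads, g_donor=('C', 'T'), g_recipient=('C', 'C')))
```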
It is of note that a very small fraction of loci showed inconsistency between the genotyping data and the RNA-seq read profiles. For example, at chr1:145,746,933, the genotype of the adipocyte cell line is "GG"; however, all 56 reads and all 38 reads mapped to this locus are "C"s in ADaloneN1 and ADaloneN2, respectively (Additional file 2: Dataset S1). Such inconsistent loci were likely caused by genotyping errors, alignment errors, errors in the reference genome, etc., and could be problematic for our downstream analysis. Therefore, we filtered out genotypes if, in the single-cultured recipient cells, the proportion of reads differing from the genotype was larger than a predefined threshold γ. With γ = 0.005, 9861 (2.26%) and 14,028 (3.22%) loci were filtered out for adipocytes and macrophages, respectively (Additional file 2: Dataset S1).

[Figure 2 caption] Genotyping quality control and comparison between the adipocyte and macrophage cell lines. There are two cell lines, each with two technical replicates for genotype profiling (Adipocyte B1/B2 and Macrophage B1/B2). The number in each ellipse denotes the number of SNPs kept at that step for the corresponding sample. There are 604,540 "tag" SNPs showing polymorphisms between the two cell lines.

Estimating FDR
We calculated the Bayes factors for the triplicate co-cultured samples (i.e., ADcoN1-N3: adipocytes co-cultured with macrophages, and MOcoN1-N3: macrophages co-cultured with adipocytes) and ranked the target loci accordingly (loci with larger Bayes factors rank higher) in each sample. We then adopted a robust rank aggregation method [26] to identify loci that were ranked consistently better than expected under the null hypothesis of uncorrelated inputs and assigned a significance score (p-value) to each locus. Finally, we adjusted the p-values for multiple comparisons to estimate the false discovery rate (FDR).
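The paper does not name the adjustment procedure beyond "adjusted the p-values for multiple comparisons"; a standard choice consistent with reporting an FDR is the Benjamini-Hochberg procedure, sketched below on hypothetical p-values such as those produced by the rank-aggregation step.

```python
# Sketch of a Benjamini-Hochberg adjustment (assumed here; the paper does not
# specify which correction it applied). Input: raw per-locus p-values.

def benjamini_hochberg(pvalues):
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):          # walk from largest p to smallest
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.7]
print(benjamini_hochberg(pvals))  # loci with adjusted p < 0.1 pass FDR < 0.1
```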
Identifying mRNA transfers potentially mediated by EVs in co-cultured adipocytes and macrophages
We explored mRNA transfers in two directions, i.e., from macrophage to adipocyte and from adipocyte to macrophage.

From macrophage to adipocyte
Ten thousand ninety-five loci survived the filtering steps in the analytical procedures. We further required the potential transfer loci to (1) have a Bayes factor larger than or equal to 20 in at least one sample (i.e., ADcoN1-N3), and (2) contain at least 2 reads potentially transferred from donor cells (e.g., the "T" allele in the example given in the analytical procedures) in ADco (the merged data from ADcoN1-N3). Three hundred twenty-one loci satisfied these requirements (Additional file 3: Dataset S2). Among these loci, we identified 8 (corresponding to 8 unique genes) that are putative mRNA transfers from macrophages to adipocytes with high confidence (FDR < 0.1, Table 1). All 8 genes were likely mediated by EVs, as they were all expressed in both MOexosome and ADMOexosome (Table 1). It is of note that the total number of transcripts in GENCODE V21 is 60,566, among which 13,697 have FPKM larger than 0 and only 1966 larger than 1 in the ADMOexosome RNA expression data. Fisher's exact test indicates that the overlap between the inferred transferred genes and those expressed in ADMOexosome is highly significant (p-value < 2.2E-16) (Table 2). We also used the Integrative Genomics Viewer (IGV) [27] to visually verify the 8 genes and show the alignment (of MOaloneN1-N2, ADcoN1-N3, and ADaloneN1-N2) at chr19:10,286,547 (ICAM1) in Additional file 1: Figure S5. As shown there, the read counts summarized by IGV were consistent with the read counts we calculated and listed in Table 1. A close inspection of the individual genes showed that several of them belonged to the extracellular space (GO:0005615; PAPPA, ICAM1, CTSZ and COL6A3) and lysosome (GO:0005764; NPC1 and CTSZ) GO categories.

One unifying theme among the top three transfers was their impact on pathways associated with insulin resistance. For example, our top hit transferred from macrophages to adipocytes, ICAM1 (Intercellular Adhesion Molecule 1), is an endothelial- and leukocyte-associated transmembrane protein long known for its importance in stabilizing cell-cell interactions and facilitating leukocyte endothelial transmigration [28]. It has also been demonstrated to be associated with insulin resistance and diabetic retinopathy in type 2 diabetes (T2D) mellitus [29,30]. Interestingly, although ICAM-1 is often annotated as a transmembrane protein, two types of extracellular ICAM-1 have also been detected outside of cells or in serum, including a soluble form and a membranous form associated with exosomes.

In addition to inflammatory mediators like ICAM-1, factors related to extracellular matrix (ECM) components of the adipose tissue have recently emerged as important mediators in obesity-related pathogenesis. In particular, one of the most abundantly expressed collagens in the adipose tissue, forming part of the ECM structure, is COL6, and its alpha 3 chain, COL6A3, has been associated with adipose tissue inflammation and fibrosis. In the collagen VI knockout (KO) mouse on an ob/ob background, for example, adipocytes were larger than in wild-type animals and blood glucose was normalized, suggesting that elements of the ECM restrict adipocyte expansion during obese insults. Relevance has also been observed in obese humans, in whom elevated levels of collagen VI have been detected along with significant correlations with macrophage infiltration. Observing mRNA for COL6A3 in exosomes from macrophages suggests another level by which the ECM may be influenced by the cells of the adipose tissue depots [31,32].

Finally, rounding out the top three hits is pregnancy-associated plasma protein-A (PAPP-A), a secreted metalloproteinase. PAPP-A cleaves insulin-like growth factor binding proteins (IGFBPs), thereby functioning as a growth-promoting enzyme by releasing bioactive IGF in close proximity to the IGF receptor. As PAPP-A has been demonstrated to have fat depot-specific expression in humans and mice, and as IGF signalling is known to regulate various adipose tissue processes, in part through influencing the insulin/insulin receptor signalling axis, exosome-derived PAPP-A mRNA may serve as another mechanism whereby PAPP-A elicits its autocrine/paracrine actions [33].

From adipocyte to macrophage
Thirty-seven thousand eight hundred fifty-three loci survived the filtering steps in the analytical procedures (Additional file 1: Figure S6); applying the same criteria as above, we identified 21 unique genes as putative mRNA transfers from adipocytes to macrophages. A search in PubMed for roles of our top 3 mRNA transfers, namely PIEZO1, RMRP and LAMC1, indicates their relevance to cellular communication processes. PIEZO1 is an ion channel mediating mechanosensory transduction in mammalian cells [34]; RMRP is a lncRNA with multiple RNA targets [35]; and LAMC1 is a member of the laminins (subunit gamma 1), a family of extracellular matrix glycoproteins [36]. Moreover, LAMC1 is also a candidate gene for diabetic nephropathy [37]. Several other significant mRNA transfers include HSPG2 and SPTLC2. SPTLC2 encodes an enzyme involved in sphingolipid synthesis, whose heterozygous deficiency has been shown to protect mice from insulin resistance [38]. Finally, the neuroblast differentiation-associated protein AHNAK is highlighted for its important role in the regulation of thermogenesis and lipolysis in WAT via β-adrenergic signalling.
AHNAK−/− mice under a high-fat diet had enhanced insulin sensitivity and browning of the WAT depot [39]. Interestingly, AHNAK is a large plasma membrane protein with various functions, including plasma membrane support, calcium signalling, and regulated exocytosis via its key membership of a specialized vesicle called the enlargeosome. Enlargeosomes are non-secretory, cytoplasmic vesicles capable of regulated exocytosis upon increases in intracellular calcium, and they contribute to plasma membrane repair and vesicle shedding [40,41]. Finding that AHNAK mRNA is transferred from adipocytes to macrophages in exosomes suggests an additional level of complexity to the role of AHNAK in exocytosis. Although it remains to be ascertained whether these mRNAs become functional proteins, evidence from others suggests that mRNAs transferred via exosomes do become functional proteins [4]. Thus these 25 genes (from both directions) are viable candidates for further studies of communication between adipocytes and macrophages.

Estimate the mRNA transfer rate between the two cell lines
To estimate the transfer rate and test the performance of our model at different transfer rates, we simulated mRNA transfers at various rates and performed analyses based on the simulated data.

mRNA transfer simulation
The general workflow of the simulation process is shown in Fig. 3. Using simulated mRNA transfer from macrophages to adipocytes as an example, we randomly selected a predefined fraction of reads from MOalone and merged them with the ADalone data. We then ran our previously described pipeline on the merged data to see how many loci with "transferred" reads could be correctly identified. For each predefined transfer rate, we performed 10 simulation runs (denoted AD_co^Si for 1 ≤ i ≤ 10, where S indicates simulated data). Similarly, we constructed MO_co^Si for 1 ≤ i ≤ 10 to study genetic transfer from adipocytes to macrophages. The simulation study has a unique advantage: because we know exactly whether a base mapped to a locus came from the donor cells, we can assess the performance of our pipeline at detecting mRNA transfer at different transfer rates.

Calling genetic exchange on simulated data and calculating accuracy
We calculated the Bayes factor at each "target" locus for the simulated samples. By merging the results from these samples and using the Bayes factor as a binary classifier, we created receiver operating characteristic (ROC) curves and calculated the areas under the curves (AUCs) (Fig. 4).

[Table note] The p-value is obtained by testing the significance of the overlap between "All" and "Sig Gene" using Fisher's exact test, with all 60,566 measured transcripts as background.

As shown in Fig. 4, our method performs very well at identifying mRNA transfers at high transfer rates. At transfer rate 0.1, we achieved AUCs of 0.95 for transfer from macrophages to adipocytes and 0.90 for transfer in the opposite direction. The performance of our method decreases as the transfer rate decreases. This is not surprising: as the transfer rate decreases, the number of "alien" reads at each locus decreases, causing the signal-to-noise ratio to drop.
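As a rough illustration of the spike-in logic (our sketch, with hypothetical data structures, not the authors' pipeline), the simulation and its evaluation can be written as:

```python
# Schematic sketch: move a known fraction of donor reads into the
# recipient-alone profile, score loci (e.g., with the Bayes factor above),
# then compute a rank-based AUC against the known truth labels.
import random

def simulate_transfer(recipient_reads, donor_reads, rate, seed=0):
    """recipient_reads/donor_reads: dict locus -> list of (base, qual) pairs.
    Each donor read is spiked into the recipient profile with probability
    `rate`; returns the merged profile and per-locus truth labels."""
    rng = random.Random(seed)
    merged = {locus: list(reads) for locus, reads in recipient_reads.items()}
    truth = {locus: False for locus in recipient_reads}
    for locus, reads in donor_reads.items():
        for read in reads:
            if locus in merged and rng.random() < rate:
                merged[locus].append(read)
                truth[locus] = True
    return merged, truth

def auc(scores, labels):
    """Rank-based AUC: the probability that a randomly chosen true-transfer
    locus outscores a randomly chosen null locus (ties count 1/2).
    Requires at least one positive and one negative label."""
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```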
Estimation of mRNA transfer rate in the co-cultured samples
We provided a rough estimation of the transfer rate in our co-culture system by examining the difference between the overall distribution of Bayes factor statistics obtained from the real co-culture data and that from simulated samples. The rationale underlying this analysis is that when the simulated transfer rate is similar to the real transfer rate, the difference between the two distributions should be minimal. It is of note that the numbers of reads differ between simulated samples and real co-cultured samples; this may cause the distribution of Bayes factor statistics to change even when the underlying transfer rates are the same. To adjust for variation in the sample total read numbers and achieve a fair comparison, we down-sampled the read data in ADco (MOco) so that it had roughly the same number of reads as the simulated co-cultured samples. We performed 10 runs of down-sampling, generating 10 read profiles denoted AD_co^Ri (MO_co^Ri) (1 ≤ i ≤ 10, where R denotes real data) for each transfer rate, i.e., 0.0001, 0.001, 0.01, and 0.1. We then calculated the Bayes factor at each "target" locus for the down-sampled data. At a specific transfer rate, the overall distribution of Bayes factor statistics in the real co-culture was estimated by merging all the Bayes factors from AD_co^Ri (MO_co^Ri) (1 ≤ i ≤ 10), and similar procedures were performed for the simulated data. The Kolmogorov-Smirnov (KS) test was applied to compare the two distributions, and we estimated the transfer rate as the one with the lowest KS statistic. For both transfer directions, the minimal KS statistics were achieved at a transfer rate close to 0.001 (Table 4), indicating that the transfer rate in our co-culture system under quiescent conditions was around 0.001.

[Figure 3 caption] A pipeline to generate simulated data for adipocytes co-cultured with macrophages. (a) Steps used to generate the simulated data. (b) A schematic illustrating how we sampled reads from one type of cell and merged them with reads from the other type to simulate a sample with known mRNA transfers.

[Figure 4 caption] Performance of the Bayesian framework on simulated data with different transfer rates. (a) ROC curves and AUCs for our method on data simulated from macrophages to adipocytes at 4 different transfer rates: 0.1, 0.01, 0.001, and 0.0001. (b) ROC curves and AUCs for our method on data simulated from adipocytes to macrophages.
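The rate-matching step described above can be sketched directly with SciPy's two-sample KS test; `real_bf` and the per-rate simulated Bayes factors are assumed to have been pooled as described in the text.

```python
# Sketch of the rate-matching step: for each candidate transfer rate, compare
# the Bayes-factor distribution from the (down-sampled) real co-culture data
# with the distribution simulated at that rate, and pick the rate with the
# smallest Kolmogorov-Smirnov statistic.
from scipy.stats import ks_2samp

def estimate_transfer_rate(real_bf, simulated_bf_by_rate):
    """real_bf: pooled Bayes factors from the down-sampled real co-cultures.
    simulated_bf_by_rate: dict rate -> pooled Bayes factors at that rate."""
    ks_by_rate = {rate: ks_2samp(real_bf, sim_bf).statistic
                  for rate, sim_bf in simulated_bf_by_rate.items()}
    best_rate = min(ks_by_rate, key=ks_by_rate.get)
    return best_rate, ks_by_rate
```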
Differentially expressed genes between cell lines cultured alone and co-cultured
To evaluate the impact of co-culturing on the transcriptome, we used the cell lines cultured alone as controls and identified differentially expressed genes (DEGs) in the co-cultured system. We adopted two commonly used methods, DESeq and edgeR [23,42], and summarized the results in Table 5. As can be seen there, the DEGs identified by the two methods were consistent, even though DESeq inferred more differentially expressed genes than edgeR. There were 575 and 142 DEGs (FDR ≤ 0.05) identified by both methods for adipocytes and macrophages, respectively (Additional file 5: Dataset S4). We performed functional enrichment analysis of the DEGs using the DAVID tools (Version 6.7) [43]; the full results are provided in the supplementary materials (Additional file 1: Table S2 for adipocytes and Table S3 for macrophages). The DEGs for adipocytes were mostly enriched for type 1 repeats of the thrombospondin family, multimeric multidomain glycoproteins that function at cell surfaces and in the extracellular matrix milieu. THBS1, which encodes thrombospondin 1, was one of the genes within this pathway significantly down-regulated in the co-cultured cells relative to the adipocytes cultured alone. THBS1 is interesting with respect to type 2 diabetes, as it has been shown to be elevated in the circulation of obese and insulin-resistant individuals [44], and loss of THBS1 in mice protects them from diet-induced weight gain and adipocyte hypertrophy [45]. Interestingly, the top 1, 3 and 6 genes identified in our transfer study, i.e., ICAM1, PAPPA, and NPC1, are also significantly differentially expressed between ADalone and ADco, with FDRs of 1.02E-7, 1.36E-7, and 1.68E-5, respectively. ICAM1 expression has been shown to relate to obesity and insulin resistance [46]. NPC1 haploinsufficiency also promotes weight gain and metabolic features associated with insulin resistance [47].

In contrast to the adipose tissue, the DEGs identified by comparing MOco vs. MOalone are mostly enriched for transmembrane proteins, immune response pathways such as leukocyte activation, and EGF-like domains. Most occurrences of the EGF-like domain are found in the extracellular domain of membrane-bound proteins or in proteins known to be secreted. The EGF receptor family is in part involved in Notch signaling, which controls cell-cell communication [48]. It is of note that many studies have shown that the ECM modulates epidermal growth factor receptor activation and leukocyte function [49,50]. Thus, the differentially expressed genes could reflect downstream effects of the transcripts transferred from adipocytes to macrophages.

Discussion
There is accumulating evidence that exosomes, via horizontal transfer of genetic information (i.e., the movement of genetic material between cells), can play a key role in cell-to-cell communication [4-6, 51, 52]. Numerous studies have thus focused on providing a comprehensive characterization of the content of EVs, and these efforts have led to the creation of databases such as EVpedia and Vesiclepedia [53,54], which record the molecules (proteins, mRNAs, microRNAs or lipids) observed within these vesicles. However, identification within exosomes does not necessarily indicate that the RNAs or proteins will be transferred into other cells. Thus, in the present study, we complement these efforts by providing a computational approach to identify genetic material that has been transferred between two in vitro co-cultured cell types, mediated by EVs. Next generation sequencing of cellular genetic material that differed between the co-cultured cell types was used as the fingerprint to place donor-derived mRNAs at the scene, within the recipient's cellular RNA pool. In comparison to other labelling technologies, using DNA sequence polymorphisms as markers has advantages, since they are naturally occurring and introduce no artificial modifications into the biological system. On the other hand, mRNA transfer detection is limited to loci with polymorphisms, and therefore this approach, while genome-scale, provides only semi-whole-genome coverage.

Comparing our 25 identified genetic transfers (in both directions) to those catalogued in the exosome database ExoCarta (~1700 distinct human mRNAs across various tissue sources) [55], we identified 7 mRNAs in common (ITGB1, NFATC3, SOD2, FOS, AHNAK, LAMC1, and SPTLC2), all of which are in the direction from adipocytes to macrophages (p-value 2.45E-3). The overlap is significant despite the diversity seen in exosome cargo depending on the cell type under study as well as the cellular state. Nonetheless, this serves to further underline the importance of characterizing EV contents.
While our method has general applicability, as described below, we perceptively chose to apply it to co-cultures of macrophages and adipocytes given the precedent in the literature for crosstalk between adipocytes and immune cells in the adipose tissue that contributes to metabolic dysregulation and obesity [56]. The functions of the transferred mRNAs range from protein coding to transcriptional regulation, and thus the impact on the recipient cell could vary greatly if they are translated into functional units. Although we did not assess this in our study, evidence from other studies suggests that transferred mRNA can be functional in the recipient cell. From this perspective, it is of great interest that one of the mRNAs we identify as transferred from adipocytes to macrophages, and which is also confirmed in ExoCarta, is AHNAK. The recent identification of a role for AHNAK in the regulation of thermogenesis and lipolysis in WAT via β-adrenergic signalling makes this observation of AHNAK mRNA transfer in adipocyte exosomes to macrophages very relevant to the new field of extracellular vesicle biology and to possible new therapeutics for T2D. It also highlights the value of this computational approach in generating novel hypotheses.

Currently our method is optimized for detecting nucleic acid exchanges between two cell lines at loci with known polymorphisms. It is known that exosome RNA contains not only mRNA but also non-coding RNA species such as small microRNAs. For example, by analysing miRNA expression levels in a variety of cell lines and their derived exosomes, Guduric-Fuchs et al. found that miRNAs such as miR-150, miR-142-3p, and miR-451 preferentially enter exosomes [57]. Huang et al. characterized human plasma-derived exosomal RNAs by deep sequencing [58]. Although we did sequence miRNAs from the cells alongside total mRNA, we could not find sufficient genetic diversity amongst the miRNAs between the adipocytes and macrophages to allow the identification of transfers via genetic differences, mainly because of the short length and the relatively small number of miRNA species that could be detected.

Although accumulating evidence supports the important role of exosomes/EVs in mediating cell-to-cell communication and the exchange of genetic information, exosomes, or lipid vesicles in general, may not be the only mechanism for RNAs to transfer between cells. For example, it has been demonstrated that miRNA can be protected in the extracellular environment by forming complexes with high-density lipoproteins (HDL) and RNA-binding proteins [59,60]. Since our experimental design did not exclude mechanisms of genetic material transfer other than EVs, we consider that the identified transfers are likely mediated by exosomes but could also be contributed by other mechanisms.

There are a few possibilities that can lead to false discoveries. First, reads from RNA-seq contain errors due to the technical limitations of next generation sequencing [61]. For example, the raw quality scores of Illumina sequencing are calculated from the signal intensity, which does not always accurately represent the true error rate. Our study is particularly sensitive to this error rate because the transcript transfer rate is also quite low, as indicated by our simulation study. When the transfer rate is low (as seen in our experiment), the low signal-to-noise ratio could be the major factor behind the high FDRs. Second, all aligners make some mapping errors.
A comparison study of various sequence aligners shows that STAR-2pass with annotation has the best alignment performance but is still not completely error free [62]. Third, there are also genotyping errors from SNP arrays [63]. Finally, there are a few biological mechanisms, such as RNA editing, that will cause false discoveries in our method.

Our simulation study has shown that our method performs best when the transfer rates between the two in vitro co-cultured cell lines are high. Inducing exosome secretion could be done via chemical means, such as altering cellular ceramide levels [64]. Our method also requires a good level of genetic diversity between the donor and recipient systems. Genetic diversity can be optimized by using cultures of cells that are known to come from genetically different individuals, as in our experimental design. Other experimentally relevant systems for optimizing genetic diversity include investigations of human-derived exosomes (e.g., from plasma) and their function after injection into the mouse [65]. In this case, one could survey the mouse tissues to identify those that have been impacted by the nucleic acid cargo of the donor exosomes. Another natural source of genetic diversity is seen in cancers. It has been known that cancer cells secrete exosomes, including during cell migration [66] and invasion [67]. Importantly, tumour-derived microvesicles have been found to contain mutated and amplified oncogenic DNA sequences, and they potentially play a role in genetic communication between cells as well as providing a potential source of tumour biomarkers. Thus our approach, we predict, would be highly amenable to identifying the genetic materials transferred between cancer cells and surrounding or distant normal cells [52], further highlighting the overall potential impact of this methodology.

Conclusions
In this study we present a novel systematic framework to call genes involved in the process of mRNA exchange between two co-cultured cell lines of different genetic backgrounds and to investigate the role of exosomes as a vehicle mediating the exchange. The framework includes a protocol to perform quality control, alignment, mapping, and base call recalibration on the raw SNP array and RNA sequencing read data; a Bayesian model to evaluate the significance of genotypic variation of a cell line under in vitro co-culture; and a method to estimate the false discovery rate among loci involved in the transfer process. By applying the framework to a co-culture of adipocyte and macrophage cell lines, we identified, with high confidence, 8 mRNAs being transferred from macrophages to adipocytes and 21 mRNAs transferred in the opposite direction. These mRNAs represent biological functions including extracellular matrix, cell adhesion, glycoprotein, and signal peptides. We also estimated the transfer rate to be around 0.001 in both directions. Our work provides a novel solution for studying EV mediated mRNA transfers between cells, and this work can be extended to in vivo studies as well.
The social inefficiency of regulating indirect land use change due to biofuels

Efforts to reduce the indirect land use change (ILUC)-related carbon emissions caused by biofuels have led to the inclusion of an ILUC factor as part of the carbon intensity of biofuels in a Low Carbon Fuel Standard. While previous research has provided varying estimates of this ILUC factor, there has been no research examining the economic effects and additional carbon savings from including this factor when implementing a Low Carbon Fuel Standard. Here we show that the inclusion of an ILUC factor in a national Low Carbon Fuel Standard led to additional abatement of cumulative emissions over 2007-2027 of 1.3 to 2.6% (0.6-1.1 billion megagrams of carbon-dioxide equivalent, Mg CO2e) compared to the standard without an ILUC factor, depending on the ILUC factors utilized. The welfare cost to the US of this additional abatement ranged from $61 to $187 per Mg CO2e and was substantially greater than the social cost of carbon of $50 per Mg CO2e.

Low carbon fuel policies at the federal and state level in the US, such as the Renewable Fuel Standard (RFS) and the Low Carbon Fuel Standard (LCFS) in California, seek to reduce dependence on fossil fuels and carbon emissions by inducing a switch towards biofuels. The RFS sets a quantity mandate for different types of biofuels that differ in their carbon intensity relative to gasoline; since 2007, it has been implemented as a mandate to blend a certain share of biofuels with gasoline annually. A LCFS, on the other hand, sets a target for the percentage reduction in the average carbon intensity of transportation fuel below a baseline level and gives blenders the flexibility to select the mix and quantities of different biofuels to meet the average fuel carbon intensity standard.

The production of biofuels has raised concerns about their competition for land with food crops, resulting in higher global crop prices [1,2] that lead to indirect land use change (ILUC) globally by creating incentives for the conversion of non-cropland to crop production, releasing carbon stored in soils and vegetation [3]. Studies differ in their estimates of the extent to which biofuels have affected food crop prices, with many estimating these to be 14-35% higher than in the baseline, depending on the specifics of the biofuel policies, the definition of the baseline, the time frame for the comparison, the types of biofuels included, and the models and methods utilized [1,4,5]. To reduce the potential for ILUC to offset at least part of the carbon savings generated by displacing fossil fuels with biofuels, the legislation establishing the RFS and the California LCFS requires the inclusion of both the direct and the ILUC-related carbon intensity of a biofuel in determining its total carbon intensity for compliance with these regulations [6,7]. The ILUC-related carbon intensity is biofuel-specific and is referred to as the 'ILUC factor' of that biofuel. The ILUC factor is a measure of the carbon emissions released per unit of biofuel due to land use change, domestically and internationally, caused by biofuel-induced changes in food/feed crop prices and land rents in the US. It is feedstock-specific and higher for feedstocks that require greater diversion of productive cropland from food crop production to biofuel production than for energy crops that can be grown productively on low-quality soils [8].
The inclusion of the ILUC-related carbon intensity of a biofuel in its carbon intensity for compliance with the LCFS is intended to internalize these indirect effects and create incentives to shift the mix of biofuels towards those with low ILUC effects, thereby increasing the abatement of global carbon emissions. However, this approach, and the ILUC factors used for the California LCFS, have been controversial and the subject of lawsuits by biofuel producers [9]. There is a large literature assessing the magnitude of the ILUC effect of corn ethanol using global equilibrium models [8]. A few studies have also estimated the ILUC effect of cellulosic biofuels from various feedstocks [10,11]. Several studies have examined the effect on carbon emissions of various biofuel policies, including the RFS [4,12-16], volumetric tax credits [13,14] and a national LCFS [12,17]. Others have examined the land use effect of the RFS [4,12,13] and of a national LCFS in the US [17] and internationally [18]. None of these studies examined the economic effects and emissions implications of including an ILUC factor when implementing a LCFS [7,8,19-21].

For this study, we used an integrated modelling approach [14] to analyse the economic and carbon emission effects of supplementing the RFS with a national LCFS, and the implications of implementing the LCFS with and without an ILUC factor, over the 2007-2027 period. We combined the Biofuel and Environmental Policy Analysis Model (BEPAM-F) [14,22] with DayCent [23-26] to estimate soil carbon sequestration, and with the Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) model to estimate above-ground life cycle emissions. BEPAM-F is a dynamic, open economy, integrated model of the agricultural, forestry and transportation sectors in the US. DayCent is a globally validated ecosystem model that simulates the direct effects of land use change on soil carbon sequestration and nitrogen cycling. We used it, together with parameters from GREET [27], to estimate the spatially heterogeneous, feedstock-specific direct life cycle carbon emission intensities of biofuels. We included feedstock-specific ILUC factor estimates from the California Air Resources Board (CARB) [28], the Environmental Protection Agency (EPA) [29] and Searchinger et al. [10].

Our results show that the inclusion of an ILUC factor in a national LCFS leads to additional abatement of cumulative emissions over 2007-2027 of 1.3 to 2.6% (0.6-1.1 billion Mg CO2e) compared to those without an ILUC factor, depending on the ILUC factors utilized. However, this abatement is achieved at a welfare cost to the US ranging from $61 to $187 per Mg CO2e, which is substantially greater than the social cost of carbon of $50 per Mg CO2e.

Results

Simulated scenarios
The baseline scenario (No_LCFS) was defined as one in which only the RFS is implemented over the 2007-2027 period [12] (Supplementary Fig. 1). We then supplemented the baseline with two alternative LCFS scenarios, defined as 'with' and 'without' the inclusion of the ILUC factor in the carbon intensity of biofuels. Both LCFS scenarios set the same targets for reducing the average carbon intensity of fuel over the 2017-2027 period. In the 'without' scenario (LCFS_No_ILUC factor), we considered only the direct life cycle carbon intensity of a biofuel (including the carbon intensity due to direct land use change) to determine compliance with the LCFS.
In the 'with' scenario (LCFS_With_ILUC factor), the sum of the ILUC factor and the direct life cycle GHG intensity of a biofuel was considered. Studies differ widely in their estimates of the ILUC factor of a feedstock, with estimates ranging from 13 to 104 g CO2e per megajoule (g CO2e MJ−1) for corn ethanol and from 5.8 to 111 g CO2e MJ−1 for cellulosic biofuels [11] (Supplementary Table 1), depending on the choice of model [8,19,20] and underlying assumptions [8,11,12,19]. The first study to quantify the ILUC effect, by Searchinger et al. [10], obtained the largest values in this range for both corn ethanol and cellulosic ethanol. These large estimates have been shown to result from a number of restrictive assumptions in the modelling analysis, including those about the rate of growth of crop productivity, the availability of idle land for conversion to crop production, and the ease of conversion of land from one use to another, as discussed in Khanna and Crago [8] and Dumortier et al. [30]. Subsequent estimates obtained using alternative modelling approaches by the EPA [29] for implementing the RFS and by CARB [31] for implementing the LCFS were substantially lower, due to differences in model structure and parametric assumptions [8,32]. We considered three cases of the LCFS_With_ILUC factor scenario, using feedstock-specific ILUC factors from CARB [28], the EPA [29] and Searchinger et al. [10]; the Searchinger et al. [10] estimates were included in the spectrum of ILUC factors considered here, despite their limitations, to analyse and illustrate the economic and carbon emission consequences of these extremely large ILUC factors in implementing a LCFS.

The RFS and LCFS policy targets varied over time; thus, the mix and quantities of biofuels and fuels, and their economic and carbon emission effects, differed over time during the 2007-2027 period. To account for the complete effect of these policies over time, we compared the cumulative 'global' emissions between the different scenarios. Cumulative emissions were defined as the sum of the direct emissions from the agricultural, forestry and transportation sectors in the US and the ILUC-related emissions from biofuels over the 2007-2027 period. We used the feedstock-specific ILUC factors to estimate the cumulative ILUC-related emissions in each of the three cases of the LCFS_With_ILUC factor scenario. We estimated the change in cumulative emissions in each of the three cases of the LCFS_With_ILUC factor scenario relative to the LCFS_No_ILUC factor scenario and relative to the No_LCFS scenario. A comparison of the cumulative emissions in each of the three cases of the LCFS_With_ILUC factor scenario with those under the LCFS_No_ILUC factor scenario provides an assessment of the additional abatement achieved globally due to the inclusion of the ILUC factor in each case.

To estimate the economic effects of this abatement, we measured the change in the present value of social welfare, defined as the discounted sum of the changes in consumer, producer and government surplus across the agricultural, forestry and transportation sectors over the 2007-2027 period, in each of the three cases of the LCFS_With_ILUC scenario relative to the LCFS_No_ILUC factor scenario. We divided this estimate of the difference in economic surplus between the 'with' and 'without' scenarios over the 2007-2027 period by the additional cumulative emissions abated in each of the three cases to obtain a case-specific estimate of the cost of this additional abatement.
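The arithmetic of this cost estimate, and the comparison with the social cost of carbon made below, can be sketched in a few lines. Because the published inputs are rounded, the quotients only approximate the reported $61 and $187 per Mg CO2e.

```python
# Headline arithmetic, using the paper's rounded figures: per-ton welfare
# cost = change in discounted surplus / additional cumulative abatement.

def cost_per_ton(delta_welfare_busd, delta_abatement_bmg):
    """Both inputs in billions ($B and B Mg CO2e); result in $ per Mg CO2e."""
    return delta_welfare_busd / delta_abatement_bmg

SCC = 50.0  # average social cost of carbon, $ per Mg CO2e (3% discount rate)
cases = {"CARB": (35.0, 0.6), "Searchinger": (211.0, 1.1)}  # ($B, B Mg CO2e)
for name, (dw, da) in cases.items():
    c = cost_per_ton(dw, da)
    verdict = "above" if c > SCC else "below"
    print(f"{name}: ~${c:.0f} per Mg CO2e ({verdict} the SCC)")
```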
We compared this cost of abatement to the average social cost of carbon [33], which is a measure of the monetary value of the damages due to carbon emissions, to determine whether the ILUC factor approach resulted in a positive or negative net societal benefit.

Implicit taxes and subsidies under the LCFS
The RFS and LCFS policies implicitly tax gasoline and diesel and subsidize biofuels [12,34]. Unlike under the RFS, the implicit tax on fossil fuels and the implicit subsidy on low carbon biofuels under the LCFS are based on their carbon intensities. These implicit taxes and subsidies are determined by an implicit price per unit of carbon, which is the same for all fuels and depends on the stringency of the LCFS target relative to the baseline, and by a fuel-specific difference between the fuel's carbon intensity and the target for average fuel carbon intensity set by the LCFS. Fuels with carbon intensity higher than the standard are implicitly taxed (such as fossil fuels), while those with carbon intensity lower than the standard (such as biofuels) are implicitly subsidized.

The inclusion of the ILUC factor in the carbon intensity of biofuels increases the difficulty, and thus the implicit price of carbon, of achieving a given LCFS target by making biofuels more carbon intensive (Fig. 1). This increases the implicit tax on fossil fuels and creates greater incentives to reduce their consumption. The inclusion of the ILUC factor also reduces the difference between the carbon intensity of a biofuel and the LCFS target. The impact of this on the implicit subsidy for a biofuel is ambiguous and will differ across biofuels: it will increase the implicit subsidy for biofuels with low ILUC factors and reduce the implicit subsidy for (or even implicitly tax) biofuels with high ILUC factors. This induces a shift from biofuels with relatively high ILUC factors towards biofuels with low ILUC factors.

We found the implicit carbon price under the LCFS_No_ILUC scenario to be $81 per Mg CO2e. The extent to which the inclusion of the ILUC factor increased this implicit carbon price varied across the three cases considered. The CARB, EPA and Searchinger ILUC factors raised the implicit price of carbon by 25, 30 and 192%, respectively, relative to the LCFS_No_ILUC factor scenario (Fig. 2). This carbon price represents the marginal cost of carbon abatement needed to meet the LCFS. It is distinct from the welfare cost of carbon abatement discussed below, which is based on the change in economic surplus for consumers, producers and government due to the LCFS, with or without the ILUC factor, relative to the No_LCFS scenario. The higher implicit carbon price increased the implicit tax per litre on fossil fuels (gasoline and diesel) and lowered the implicit subsidy on corn ethanol and on energy crops for cellulosic biofuels (Fig. 1); the high Searchinger ILUC factor for corn ethanol converted the implicit subsidy on corn ethanol under the LCFS_No_ILUC scenario into a tax. All three sets of ILUC factors (particularly the Searchinger factors) increased the implicit subsidy for crop residues owing to their negligible ILUC factor (Supplementary Table 1). The Searchinger factors also increased the implicit subsidy for certain energy crops (such as willow, poplar and energy cane, which were assumed to have a zero ILUC factor because of the regions where they are grown), while reducing to zero the implicit subsidy for other energy crops (miscanthus and switchgrass) with very high ILUC factors (see Supplementary Table 2).
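The implicit tax/subsidy logic above can be sketched as follows. Only the $81 per Mg CO2e implicit carbon price comes from the paper; the target intensity and fuel carbon intensities below are illustrative placeholders, chosen so that adding a high ILUC factor flips a biofuel from subsidy to tax.

```python
# Sketch of the implicit charge under an LCFS: each fuel faces a per-MJ charge
# of carbon_price * (CI_fuel - sigma), where sigma is the target intensity.
# Positive values are implicit taxes, negative values implicit subsidies.

def implicit_charge(ci_g_per_mj, sigma_g_per_mj, carbon_price_per_mg):
    """Charge in $ per MJ; the 1e-6 factor converts g CO2e to Mg CO2e."""
    return carbon_price_per_mg * (ci_g_per_mj - sigma_g_per_mj) * 1e-6

sigma = 80.0          # LCFS target intensity, g CO2e/MJ (illustrative)
carbon_price = 81.0   # $/Mg CO2e, the paper's LCFS_No_ILUC implicit price
fuels = {                                   # illustrative carbon intensities
    "gasoline": 95.0,
    "corn ethanol (direct CI only)": 60.0,
    "corn ethanol (direct + high ILUC factor)": 95.0,
}
for fuel, ci in fuels.items():
    print(f"{fuel}: {implicit_charge(ci, sigma, carbon_price):+.5f} $/MJ")
```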
Effects on consumption of alternative fuels and land use
Under the No_LCFS scenario, there are 57 billion litres of corn ethanol and 70 billion litres of cellulosic ethanol (of which 47 billion litres are from crop residues, mainly corn stover, and the rest from perennial energy crops) in 2027 (Supplementary Table 3). The implementation of the LCFS_No_ILUC factor increased the implicit subsidies for cellulosic biofuels and increased their volume to 110 billion litres, with most of it produced from cellulosic feedstocks, while reducing the amount of corn ethanol to 19 billion litres. The addition of an ILUC factor in all three cases (CARB, EPA and Searchinger) reduced the demand for fossil fuels and corn ethanol and increased reliance on cellulosic ethanol; however, the composition of feedstocks for the cellulosic biofuels differed across the three cases. Production of corn ethanol decreased by 18-19 billion litres, to levels close to zero, in all three cases relative to the LCFS_No_ILUC factor scenario (Fig. 3). The CARB factors led to 8- and 11-billion-litre increases in cellulosic ethanol from energy crops and in crop residue ethanol consumption, respectively. The inclusion of the Searchinger ILUC factors reduced perennial grass ethanol from all sources by 27 billion litres (Supplementary Table 3). It also affected the mix of energy crops used to produce ethanol by switching away from those with high ILUC factors, such as miscanthus and switchgrass, to other perennials, such as energy cane, willow and poplar (Supplementary Table 3). Production of crop residue ethanol increased by 47 billion litres relative to the LCFS_No_ILUC scenario. Although the assumed ILUC factor for corn stover and wheat straw was the same in all three cases, the larger consumption of corn stover in the Searchinger case was due to the limited cost-effective feedstock alternatives with a low ILUC factor. Consequently, this case resulted in a high carbon price and a larger implicit subsidy for crop residues (Figs 2 and 3). The additional demand for cellulosic biofuels in all three cases of the LCFS_With_ILUC factor scenario resulted in a higher price of biomass compared to the $79 per Mg level in the No_LCFS scenario (Fig. 2). The biomass price increased by 9, 13 and 167% under the CARB, EPA and Searchinger cases, respectively.

Under the No_LCFS scenario, 13.7 million hectares of land were used in 2027 to produce the corn needed to meet the corn ethanol mandate of 56 billion litres (Supplementary Table 3). This estimate is significantly smaller than the 60 million hectares estimated in Chakravorty et al. [4], because they assumed that the lowest quality cropland (with a yield of 1.7 Mg per hectare) would be used for producing corn for ethanol, whereas we assumed that average quality land with a yield of 10.3 Mg per hectare would be used for corn for ethanol production in 2027. EIA estimates for the land used to produce 14.2 billion gallons in 2014 indicate a yield of 9.8 Mg per hectare [35], while USDA estimates of corn yields in 2015 are 10.6 Mg per hectare [36]. Our findings are similar to those in Hertel et al. [37], who found that 15 million hectares of land would be used for corn for ethanol assuming a 2001 corn yield of 8.5 Mg per hectare, and to Chen et al. [12], who found that 11.6 million hectares of land would be used to produce 15 billion gallons of corn ethanol in 2030.
We also found that 4.2 million hectares of land would be needed to produce 18.8 billion gallons (70 billion litres) of cellulosic ethanol (from all feedstocks, including crop residues) in the No_LCFS scenario. This is close to the 4.2 million hectares of land needed to produce 16 billion gallons of cellulosic biofuel in 2030 in Hudiburg et al. [14], but much smaller than the 11 million hectares required to produce 21 billion gallons in Chakravorty et al. [4], largely because Chakravorty et al. [4] did not consider the potential to produce biofuels from crop residues, which requires no diversion of land.

The implementation of the LCFS 'with' and 'without' the ILUC factors resulted in a change in land use relative to the No_LCFS scenario (Supplementary Table 1). The LCFS_No_ILUC factor resulted in a reduction in demand for corn ethanol and a shift towards energy crops: land under corn for ethanol declined to 4.6 million hectares, while that under energy crops for cellulosic biofuels increased to 23 million hectares (Supplementary Table 4). The LCFS_With_ILUC factor further exacerbated this shift away from corn ethanol to cellulosic biofuels. Land under energy crops increased to 24-30 million hectares across the three cases of the LCFS_With_ILUC factor scenario. As a result, land under crop production for food, feed and fibre was marginally higher in the LCFS_With_ILUC factor scenario in the CARB and EPA cases.

Carbon emissions and welfare effects
The estimated US carbon emissions (including those due to ILUC) ranged between 44 and 46.2 B Mg CO2e in the No_LCFS scenario across the three sets of ILUC factors (Table 1); the upper end of this range reflects the high baseline emissions in the Searchinger case, owing to its high ILUC factor for corn ethanol. The implementation of the LCFS_With_ILUC factor led to additional abatement of 1.3-2.6% relative to the LCFS_No_ILUC factor scenario, amounting to 0.6-1.1 B Mg CO2e across the three cases.

The LCFS_No_ILUC policy increased the economic surplus of food and fuel consumers while adversely affecting fossil fuel producers. There was a small net increase in the discounted value of cumulative economic surplus (2007-2027) of $35 billion relative to the No_LCFS baseline (0.13%), assuming a 3% discount rate (Table 1). This differs from the result obtained in Huang et al. [17], which showed a slight decline (0.17%) in social welfare in the LCFS_No_ILUC scenario relative to the No_LCFS scenario, owing to the higher values for the direct carbon intensities of energy crop feedstocks assumed in that study, which were based on previous literature. The carbon intensities of energy crops assumed here were based on a calibrated and validated DayCent model and were significantly lower, resulting in lower costs of implementing the LCFS. Chen et al. [12] found that a national LCFS implemented by itself would lead to an increase in US social welfare of $33.4 B over the 2007-2030 period relative to a no-policy scenario.

The additional cost of implementing the LCFS was estimated as the difference in discounted social welfare between the LCFS_No_ILUC and LCFS_With_ILUC scenarios. Compared to the LCFS_No_ILUC scenario, the higher implicit tax on fossil fuels and the lower implicit subsidy on biofuels increased the price of fuel for consumers and lowered the prices received by agricultural and fuel producers; the net loss in economic surplus for producers ranged from $20 to $80 billion (Fig. 4 and Supplementary Table 4).
The net reduction in total consumer surplus ranged from $15 to $131 billion. These losses were largest with the Searchinger factors. The overall reduction in social welfare for consumers, producers and government across the sectors considered here ranged between $35 and $211 B. It was highest with the Searchinger factors and lowest with the CARB factors. Over half of this loss in economic surplus was borne by fuel consumers (Fig. 4 and Supplementary Table 4). We divided the additional cost by the additional abatement achieved with the inclusion of the ILUC factor (0.6-1.1 B Mg CO2e) to obtain the per metric ton welfare cost of abatement. We found this ranged from $61 per Mg CO2e (= $35B/0.6 B Mg CO2e) to $187 per Mg CO2e (= $211B/1.1 B Mg CO2e). This welfare cost of abatement per metric ton is lower than the marginal cost of abatement implied by the price of carbon discussed earlier, which was $81 per Mg CO2e in the LCFS_No_ILUC scenario and ranged from $101 to $235 per Mg CO2e in the LCFS_With_ILUC scenarios; the higher end of the range was estimated with the Searchinger factors. Even this welfare cost of abatement per metric ton was substantially higher (20-270%) than the average social cost of carbon of $50 per Mg CO2e with the same 3% discount rate assumed here 33 (Table 1). There is wide disparity in the range of estimates of the social cost of carbon 38, but considerable consensus that $50 per Mg CO2e is a reasonable estimate. Following an extensive review of the estimates of the social cost of carbon in the literature, Tol (2005) 39 concluded that the social cost of carbon in 2030 was unlikely to exceed $50 per Mg CO2e under standard assumptions about discounting and aggregation. Based on a similar review, Watkiss and Downing (2008) 40 found that $50 per Mg CO2e provided a reasonable benchmark for global decision making seeking to reduce the threat of dangerous climate change, including a modest level of aversion to extreme risks, relatively low discount rates and equity weighting. Most recently, Havranek et al. (2015) 41 conducted a meta-analysis of estimates of the social cost of carbon in the literature and found that the upper boundary for the mean estimates reported by studies, after controlling for various factors (including publication bias), was $39 per Mg CO2e. Estimates of the social cost of carbon have a skewed, right-tailed distribution 33. This implies a relatively smaller likelihood of their exceeding the cost of abatement estimated here than of falling below it. The cost of abatement with the Searchinger factors ($187 per Mg CO2e) was higher than even the 95th percentile of the social cost of carbon of $152 per Mg CO2e with the same 3% discount rate assumed here. We examined the sensitivity of our findings to several key parameters by considering alternative values for the elasticity of supply of gasoline from the rest of the world, feedstock yields, the cost of conversion to ethanol and carbon emissions due to the conversion of marginal land to cropland (Supplementary Fig. 2). We found that these costs of abatement could increase significantly under more conservative assumptions about the yields and availability of marginal land for perennial grasses and the costs of producing cellulosic biofuels. The cost of abatement ranged between $54 and $94 per Mg CO2e with the CARB factors; corresponding ranges were $63-$107 with the EPA factors and $162-$199 with the Searchinger factors.
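The per-tonne welfare cost of abatement reported above is simply the ratio of the additional discounted welfare loss to the additional abatement, and the discounting itself is a standard present-value sum. A minimal sketch follows; the inputs are the rounded figures quoted in the text (hence the small discrepancies with the published $61 and $187 values, which come from unrounded model output), and the annual welfare changes in the discounting example are hypothetical placeholders.

# Welfare cost of abatement per Mg CO2e, plus the present-value discounting
# behind the "discounted social welfare" figures. Inputs are the rounded
# numbers quoted in the text; the yearly changes below are hypothetical.

def abatement_cost(extra_welfare_loss_usd, extra_abatement_mg):
    """Average welfare cost per Mg CO2e of the additional abatement."""
    return extra_welfare_loss_usd / extra_abatement_mg

def present_value(changes_by_year, rate=0.03, base_year=2007):
    """Discount a {year: dollar change} dict back to the base year."""
    return sum(v / (1.0 + rate) ** (y - base_year)
               for y, v in changes_by_year.items())

if __name__ == "__main__":
    print(abatement_cost(35e9, 0.6e9))    # ~58 $/Mg CO2e (reported: $61, CARB)
    print(abatement_cost(211e9, 1.1e9))   # ~192 $/Mg CO2e (reported: $187, Searchinger)
    hypothetical = {2012: 2e9, 2017: 6e9, 2022: 9e9, 2027: 12e9}
    print(present_value(hypothetical))             # 3% discount rate
    print(present_value(hypothetical, rate=0.05))  # a higher rate shrinks the PV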
Lastly, we investigated the sensitivity to the discount rate by increasing it from 3 to 5%. The cost of abatement with a 5% discount rate was $45-$122 per Mg CO2e. These costs were 181 to 662% higher than the correspondingly lower average social cost of carbon of $16 per Mg CO2e in 2030 (ref. 33).

Discussion
Our analysis examined the effectiveness of an ILUC factor approach, applied while implementing a national LCFS, in achieving additional reductions in carbon emissions, and the welfare costs at which these reductions were achieved. Estimates of the ILUC factor of a biofuel differ considerably across studies. We selected estimates from three different sources to analyse the range in outcomes in response to variability in ILUC estimates. In all cases, we found that the inclusion of the ILUC factor in implementing the LCFS imposed additional costs on fuel consumers and fuel producers because it lowered the implicit subsidy on several types of biofuels while raising the implicit tax on fossil fuels. It led to additional abatement of cumulative emissions over 2007-2027 of 0.6-1.1 B Mg CO2e compared to those without an ILUC factor. The relatively higher ILUC factors for both corn ethanol and cellulosic ethanol in the Searchinger case led to a higher implicit carbon price and a greater reduction in carbon emissions relative to the other two cases. However, this also imposed very high costs on fuel consumers and producers. The overall discounted welfare cost of abatement over the 2007-2027 period on the agricultural, forestry and transportation sectors considered here ranged from $35 B to $211 B; the largest cost was obtained with the Searchinger factors. These values implied that the per unit cost of additional abatement to the US ranged from $61 to $187 per Mg CO2e. We found that across all three cases of ILUC factors, this cost of abatement was substantially greater than the social cost of carbon of $50 per Mg CO2e in 2030, with the same 3% discount rate used in both cases. A higher discount rate of 5% lowered the cost of abatement to between $45 and $122 per Mg CO2e across the three cases of ILUC factors. These costs were 181 to 662% higher than the correspondingly lower average social cost of carbon of $16 per Mg CO2e in 2030. Our analysis therefore showed that the ILUC factor approach to reducing ILUC-related carbon emissions with a LCFS did not result in positive net social benefits; the monetary value of the benefits from the additional abatement achieved was lower than the cost of achieving that abatement for the US. Leakage of carbon emissions due to ILUC is an issue of concern, since it offsets the direct savings in emissions from the displacement of fossil fuels by biofuels. Alternatives to the ILUC factor approach include those that directly address the source of the problem, namely the choice of feedstock and the land on which it is grown, and thereby reduce the potential for indirect market effects 1,4,7. Food-crop-based biofuels, like corn ethanol, have a high ILUC effect and also a high direct carbon intensity (Supplementary Table 1). In contrast, cellulosic feedstocks have low direct carbon intensity. Accompanying biofuel policies like the RFS and/or LCFS with an explicit carbon price policy that penalizes fuels based on their direct carbon intensity can provide incentives to switch away from corn ethanol to low carbon cellulosic feedstocks.
Alternatively, certification of low indirect-impact biofuels and policies that restrict the blending of non-certified biofuels can create incentives to produce more low-ILUC feedstocks. Direct regulations that restrict the conversion of grasslands and forestland to cropland can also prevent indirect losses of carbon. We leave it to future research to examine the cost-effectiveness of such approaches compared to that of an ILUC factor approach.

Methods
Economic modelling. BEPAM-F (Biofuel and Environmental Policy Analysis Model with Forestry) is a spatially explicit, multi-market, dynamic, open-economy model that determines the market equilibrium by maximizing the sum of consumers' and producers' surpluses in the agricultural, forestry and transportation fuel sectors in the US, subject to various material balance, technological, land availability and policy constraints over the 2007-2027 period. The model includes crop, forest and pasture land in the US, with the potential for conversion of land from one use to another based on the net returns to land under various uses, subject to some constraints. BEPAM-F integrates the transportation, agriculture and forest sectors to endogenously determine the effects of alternative policy scenarios on land allocation among food and biofuel crops, the fuel mix, prices in the markets for fossil fuel, biofuel, food/feed crops and livestock, and carbon emissions in the US at 5-year intervals over the period 2007-2027. Model structure, parameterization and validation were explained in previous studies 12,14,17. The transportation sector incorporates downward sloping demand curves for vehicle kilometres travelled (VKT) with four types of vehicles (conventional gasoline, flex fuel, gasoline-hybrid and diesel vehicles) that generate a derived demand for liquid fossil fuels and biofuels, including first- and second-generation biofuels. The VKT production function considers the energy content of alternative fuels, the fuel economy of each type of vehicle, the forthcoming Corporate Average Fuel Economy standards, and technological limits on blending gasoline and ethanol for each of these four types of vehicles 42. Gasoline is produced domestically and imported. Supply curves for domestic gasoline and diesel, as well as for gasoline supply and demand in the rest of the world, are included to determine the amount of gasoline imports and the price of gasoline and diesel. Several first- and second-generation biofuels that can be blended with gasoline and diesel were considered (Supplementary Table 2). First-generation biofuels include domestically produced corn ethanol and imported sugarcane ethanol, soybean biodiesel, DDGS-derived corn oil and waste grease. Second-generation biofuels include cellulosic ethanol and biomass-to-liquid diesel, which can be blended with gasoline and diesel, respectively. The domestic and international price of gasoline is determined endogenously by the downward sloping domestic demand for gasoline (derived from the demand for VKT), the demand for gasoline in the rest of the world, and the upward sloping supply curves of gasoline in the US and the rest of the world. Policy-induced increases in the production of biofuels reduce the domestic demand for gasoline and the US demand for imports from the rest of the world. We incorporated the feedback effect of this biofuel-driven reduction in the world and domestic price of gasoline on fuel consumption in the US, and its implications for the carbon savings with biofuels 12,43.
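The endogenous determination of the gasoline price described above can be illustrated with a highly stylized single-market sketch: a downward sloping demand curve, shifted by the energy supplied by biofuels, cleared against an upward sloping supply curve. The constant-elasticity functional forms and every numerical parameter below are illustrative assumptions, not BEPAM-F's actual specification.

# A stylized sketch of endogenous gasoline-price determination: find the
# price where demand equals supply, and show how biofuel production that
# displaces gasoline demand lowers the equilibrium price (the feedback
# effect discussed above). All curves and numbers here are hypothetical.

def equilibrium_price(demand, supply, lo=0.1, hi=10.0, tol=1e-9):
    """Bisection on excess demand: find p with demand(p) == supply(p)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if demand(mid) - supply(mid) > 0:
            lo = mid          # excess demand -> price must rise
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    dem = lambda p, b=0.0: 100.0 * p ** -0.3 - b   # demand, net of biofuels b
    sup = lambda p: 60.0 * p ** 0.2                # inelastic supply
    p0 = equilibrium_price(lambda p: dem(p), sup)
    p1 = equilibrium_price(lambda p: dem(p, b=5.0), sup)
    print(f"price falls from {p0:.2f} to {p1:.2f} when biofuels displace demand")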
The agricultural and forestry sectors produce a broad range of crop, livestock, bioenergy and forest products that compete for land. A prior application of BEPAM-F 14 focused on analysing the feedstock mix, land use and GHG implications of a cellulosic biofuel mandate over the 2007-2027 period. The agricultural sector in BEPAM-F includes all major conventional crops and livestock, several energy crops (miscanthus, switchgrass, energy cane, hybrid poplar and willow) and two crop residues (corn stover and wheat straw), as well as the choice of tillage practice and crop rotations for conventional crops. It incorporates spatial heterogeneity in the yields and costs of production of various crops and livestock products, the availability of different types of land, and the costs of conversion of cropland pasture to cropland across the 295 crop-reporting districts (CRDs) in the US. The availability of five types of agricultural land (irrigated and non-irrigated cropland, idle cropland, cropland pasture and pasture land) was specified for each CRD. Changes in the mix of crops grown were determined using the methods described in Chen and Önal (2012) 44. Assumptions about the productivity of cropland pasture, the costs of converting it to conventional crops or energy crops, and restrictions on land conversion for energy crop production in a CRD were similar to those in Hudiburg et al. 14. The structure of the forestry sector in BEPAM-F was similar to that in the Forest and Agricultural Sector Optimization Model 16 and included 11 forest marketing regions. Forestland was characterized by two types of trees (softwoods and hardwoods) and distinguished by various site productivity classes that determined yield per unit of land. Land conversion from one use to another within the sector and across sectors was constrained by pre-defined suitability classes that determined which acres could be converted to forest, crop or pasture. A detailed description of the forestry sector module in BEPAM-F is provided in Wang et al. 22. Model validation is provided in Wang et al. 22 and in Hudiburg et al. 14. A key extension of BEPAM-F here is the imposition of a LCFS constraint that restricts the energy-weighted average GHG intensity of fuel consumption, defined as the ratio of the total GHG emissions from the fossil fuels and biofuels consumed (the sum over fuels of the product of each fuel's GHG intensity and the quantity consumed) to the total energy obtained from these fuels, to be less than the targeted standard. The GHG intensity of each type of biofuel feedstock included below-ground changes in soil carbon and above-ground emissions. We quantified the major factors influencing the direct life-cycle carbon emissions above ground and sequestration below ground due to bioenergy crop production, as well as the carbon emissions due to gasoline and diesel consumption, in each policy scenario. We simulated the soil organic carbon changes and the associated direct N2O and CH4 fluxes and NO3 leaching for each modelled crop using DayCent 14,24,26. DayCent calculates plant growth as a function of water, light and soil temperature, and limits actual growth based on soil nutrient availability. In addition to soil carbon uptake and loss, the DayCent model was also used to simulate harvested yields, direct N2O emissions (indirect emissions were calculated using IPCC Tier 1 factors), nitrate leaching and methane flux. Model parameterization, calibration and validation were completed in prior studies 14,24.
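The LCFS constraint described above can be written as sum_i(CI_i * E_i) / sum_i(E_i) <= sigma, where CI_i is the GHG intensity of fuel i (for example in g CO2e per MJ), E_i the energy consumed from it, and sigma the targeted standard. A minimal sketch of the constraint check follows; the intensities and energies used are hypothetical.

# Energy-weighted average GHG intensity of a fuel mix and the LCFS check
# described above. The carbon intensities and energy quantities below are
# hypothetical placeholders, not values taken from the study.

def average_intensity(fuels):
    """fuels: list of (ci_gCO2e_per_MJ, energy_MJ) pairs."""
    total_emissions = sum(ci * e for ci, e in fuels)
    total_energy = sum(e for _, e in fuels)
    return total_emissions / total_energy

def satisfies_lcfs(fuels, target_ci):
    return average_intensity(fuels) <= target_ci

if __name__ == "__main__":
    fuels = [(95.0, 8.0e12),   # gasoline (hypothetical CI and energy)
             (70.0, 1.0e12),   # corn ethanol
             (25.0, 1.5e12)]   # cellulosic ethanol
    target = 0.85 * 95.0       # e.g. 15% below a gasoline-like baseline
    print(average_intensity(fuels), satisfies_lcfs(fuels, target))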
The direct above-ground life-cycle GHG intensity of each of the biofuel pathways was estimated by adapting the Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) model, as in Dwivedi et al. 27. Both the below-ground and above-ground emissions vary spatially. As a result, the GHG intensity of the overall transportation fuel depended on the mix of feedstocks and the spatial locations where they were produced. Emissions due to ILUC, both domestic and global, were included through the ILUC factors. The ILUC factors assumed in this paper were estimated using global economic models that provide estimates of the carbon emissions due to land use change in the US and the rest of the world caused by biofuel production. Specifically, the ILUC factors estimated by CARB 32 were obtained from the Global Trade Analysis Project (GTAP) model, a global general equilibrium model. The ILUC factors estimated by Searchinger et al. 10 and by EPA 29 were estimated using the global partial equilibrium Food and Agricultural Policy Research Institute (FAPRI) model. These models are described in greater detail in Khanna et al. 32. The estimation of ILUC factors is sensitive to a number of modelling and parametric assumptions, as discussed in Khanna and Crago 8. By adding the ILUC factors to the direct carbon intensity of feedstocks in the US and comparing emissions across scenarios, we obtain the change in global emissions in the various policy scenarios. With the LCFS constraint, the model endogenously determined the mix of feedstocks and the locations where they were grown, taking into account the spatially varying direct GHG intensity of biofuels, the implicit price of carbon, and the fuel-specific implicit taxes/subsidies. In each of the scenarios, we examined the cumulative change (summed over 2007-2027) in global GHG emissions, which was the sum of the emissions from the US transportation, agricultural and forestry sectors (including the direct emissions from biofuel production and soil carbon sequestration) and the emissions due to the scenario-specific ILUC effect in the rest of the world due to biofuels. The three policy scenarios simulated here are one with no LCFS baseline (No_LCFS), a second with a LCFS and no ILUC factor (LCFS_No_ILUC) and a third with a LCFS and an ILUC factor (LCFS_With_ILUC). In the No_LCFS baseline, a mandated level of biofuel production based on the RFS established by EISA, 2007 was imposed as a blend mandate, as in ref. 12 (Supplementary Fig. 1). Unlike the RFS, which mandated the blending of 36 billion gallons (136.3 billion litres) of ethanol with gasoline by 2022 (considered earlier in ref. 24) with an implicit upper limit of 15 billion gallons on corn ethanol, we imposed a lower mandate of 35 billion gallons (131.5 billion litres) by 2027 with a maximum of 15 billion gallons of corn ethanol, assuming the remaining volumes could be met by sources not included in the model, such as municipal solid waste, animal fats and waste oil. Sugarcane ethanol imports from Brazil were allowed, with the level determined endogenously based on competitiveness with corn ethanol and cellulosic ethanol, up to a maximum of 4 billion gallons. In the LCFS with no ILUC factor scenario, the RFS in Scenario 1 was supplemented by a LCFS imposed in 2017 to achieve a targeted reduction in average fuel carbon intensity of 15% by 2027 relative to the level in 2007. The GHG intensity of biofuels here included only the direct life-cycle emissions.
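Operationally, including an ILUC factor just adds a fixed term to each biofuel's direct carbon intensity before the LCFS constraint is applied. A minimal sketch follows; the direct intensities and ILUC add-ons below are hypothetical placeholders rather than the actual CARB, EPA or Searchinger values.

# Effective carbon intensity = direct CI + source-specific ILUC factor
# (zero for fossil fuels). All numbers here are hypothetical placeholders.

DIRECT_CI = {"gasoline": 95.0, "corn_ethanol": 60.0, "miscanthus_ethanol": 15.0}

ILUC = {   # hypothetical g CO2e/MJ add-ons per estimation source
    "CARB":        {"corn_ethanol": 20.0,  "miscanthus_ethanol": 5.0},
    "Searchinger": {"corn_ethanol": 100.0, "miscanthus_ethanol": 12.0},
}

def effective_ci(fuel, source=None):
    """Direct CI plus the source-specific ILUC factor, if any."""
    add_on = ILUC.get(source, {}).get(fuel, 0.0)
    return DIRECT_CI[fuel] + add_on

if __name__ == "__main__":
    for src in (None, "CARB", "Searchinger"):
        print(src, {f: effective_ci(f, src) for f in DIRECT_CI})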
The LCFS with ILUC factor scenario (LCFS_With_ILUC) is the same as the LCFS_No_ILUC scenario, except that the GHG intensity of biofuels included both the direct life-cycle emissions and the ILUC-related emissions intensity obtained from three existing studies: CARB 28, EPA 29 and Searchinger 10.

Ecosystem modelling. Required inputs for the model include vegetation cover, daily precipitation and temperature, soil texture, and current and historical land use practices. Soil organic carbon is estimated from the turnover of soil organic matter pools, which change with the decomposition rate of dead plant material. For the perennial grasses, crop-specific physiological parameterizations were performed using values from a synthesis of studies. We simulated county-level yields for corn grain and stover removals, soy, miscanthus and switchgrass in the US on cropland and marginal land. We define marginal land as land that has historically been less productive cropland and has been idle or set aside as pasture for grazing. Daily climate data were downloaded from the Daymet database (http://daymet.ornl.gov/). Historical simulations on cropland began with native vegetation (for example, grasslands) and its disturbance history (for example, fire and harvest), followed by ~110 years of agricultural history. Agricultural history included corn-soy rotations, alfalfa and wheat. Soil carbon stocks were simulated to represent pre-agricultural native vegetation levels, with a subsequent decline as the land was cultivated each year for the annual crops. Model output of yield and soil carbon was evaluated against data at a variety of scales, and further evaluation of direct N2O was compared with observations in Hudiburg et al. 24. Indirect N2O emissions were calculated using the IPCC indirect emission factor for leaching/runoff (0.75%) and the IPCC indirect emission factor for volatilized N (1%). DayCent-modelled CH4 emissions (consumption through oxidation in non-flooded soils) have been evaluated in US cropping systems 45. Moreover, DayCent output of crop yields and carbon emissions has been evaluated in numerous studies and at sites all around the world 25,26,46-50.

Sensitivity analysis. There is uncertainty about several key parameters assumed in our modelling framework. We examined the sensitivity of our results to alternative values of these parameters. Specifically, our benchmark analysis assumed a fairly inelastic supply of gasoline in the rest of the world (elasticity 0.2) 12. With an inelastic supply of gasoline, the displacement of demand for gasoline by the production of biofuels in the US results in a large reduction in the world price of oil, which lessens the effect on fuel consumers of the implicit tax on fossil fuels imposed by the LCFS. Thus the effect of the ILUC factor on fuel consumers is mitigated and the cost of abatement is lower. We examined the sensitivity of our results to two extreme assumptions about the elasticity of supply of gasoline: a very elastic supply (elasticity of 20) and a very inelastic supply (elasticity of 0.1). The economic cost of the LCFS rises with the cost of cellulosic biofuels. In particular, the industrial cost of conversion of feedstock to cellulosic biofuel is uncertain in the absence of significant commercial-scale production. We considered the effects of this cost being 50% higher than in the benchmark.
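Returning briefly to the ecosystem modelling above, the indirect N2O bookkeeping follows directly from the two IPCC factors quoted there (0.75% of leached N and 1% of volatilized N emitted as N2O-N). A minimal sketch follows; the per-hectare nitrogen amounts in the example are hypothetical, and 44/28 is the molecular-weight conversion from N2O-N to N2O.

# Indirect N2O from leached and volatilized nitrogen, using the IPCC
# factors quoted in the ecosystem modelling paragraph above. The nitrogen
# amounts in the example are hypothetical per-hectare values.

EF_LEACH = 0.0075       # kg N2O-N per kg N lost to leaching/runoff
EF_VOLAT = 0.01         # kg N2O-N per kg N volatilized
N2O_PER_N = 44.0 / 28.0 # molecular-weight conversion, N2O-N -> N2O

def indirect_n2o(n_leached_kg, n_volatilized_kg):
    """Indirect N2O emissions (kg N2O) from leaching and volatilization."""
    n2o_n = EF_LEACH * n_leached_kg + EF_VOLAT * n_volatilized_kg
    return n2o_n * N2O_PER_N

if __name__ == "__main__":
    print(f"{indirect_n2o(30.0, 15.0):.3f} kg N2O")  # ~0.589 for this example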
The per-acre yields of energy crops assumed in the model are based on simulated results from DayCent; these yields could be lower in practice. We examined the effects of these yields being 25% lower than the benchmark level. We also considered the effects of including emissions from the conversion of cropland pasture to cropland (1.85 Mg CO2 per hectare per year), as suggested by some studies 11.

Code availability. The size and complexity of the code preclude its online availability. However, it is available on request from the authors.

Data availability. The data needed for replication of the results are available on request from the authors.
2018-04-03T04:19:45.093Z
2017-06-26T00:00:00.000
{ "year": 2017, "sha1": "7806f6094e3ce7b0bfbf01520d14f3944d8b7cc3", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/ncomms15513.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0c46b1efa4a800aaad9ca8dfb15e5ec101ff9f9b", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Economics" ] }
15992631
pes2o/s2orc
v3-fos-license
Low Q^2 Jet Production at HERA and Virtual Photon Structure

The transition between photoproduction and deep-inelastic scattering is investigated in jet production at the HERA ep collider, using data collected by the H1 experiment. Measurements of the differential inclusive jet cross-sections dσ_ep/dEt* and dσ_ep/dη*, where Et* and η* are the transverse energy and the pseudorapidity of the jets in the virtual photon-proton centre of mass frame, are presented for 0 < Q^2 < 49 GeV^2 and 0.3 < y < 0.6. The interpretation of the results in terms of the structure of the virtual photon is discussed. The data are best described by QCD calculations which include a partonic structure of the virtual photon that evolves with Q^2.

Introduction and Motivation
In this paper jet production in electron-proton scattering in the transition region between photoproduction and deep inelastic scattering (DIS) is investigated. The results are interpreted in terms of parton densities of the virtual photon, which are probed at a scale determined by the transverse momentum (p_t) of the jets and which evolve with the virtuality of the photon (Q^2). In the photoproduction of jets [1], the photon can couple directly to a parton from the proton ("direct" interactions). However, the cross-section is dominated by interactions, so-called "resolved" processes, in which the photon fluctuates into a system of partons and one of these interacts with a parton out of the proton to produce the jets. This separation into direct and resolved processes can only be made unambiguously in leading order [2]. At its simplest, the hadronic fluctuation of the photon may take the form of a quark-antiquark pair. More complicated structure is built up through QCD interactions. In addition to this point-like "anomalous" component [3], the photon can also acquire a more conventional hadronic structure, often modelled as a fluctuation into a vector meson (vector dominance model, VDM). The cross-section for jet production can be expressed as a convolution of universal parton densities of the proton and the photon together with hard parton-parton scattering cross-sections. The evolution of the photon parton densities with the scale at which they are probed can be calculated in perturbative QCD and has been extensively measured in two-photon interactions [4] and recently at HERA [5]. In contrast, it is usual to consider that the only contribution to jet production in DIS is from direct interactions with the partons in the proton, probed by a structureless photon at the scale Q^2. However, in the small region of phase space where high p_t jets are produced with p_t^2 much larger than Q^2, it is possible that the jet production may be most easily understood in terms of the partonic structure of the virtual photon together with that of the proton [6,7,8,9]. Parton densities within the virtual photon are expected to be suppressed [7,10,11,12,13] with increasing Q^2, until direct processes dominate at Q^2 ~ p_t^2. Measurements of the virtual photon structure in two-photon interactions require detection of both scattered leptons at non-zero scattering angles. Only one such measurement has previously been made [14]. The extensive Q^2 range together with the large centre-of-mass energy available at HERA enables a detailed study of the Q^2 evolution of photon structure. After a description of the data used in this analysis, different models are introduced which are intended to describe the photoproduction and deep inelastic scattering regimes.
The inclusive differential jet cross-sections are then presented as a function of jet transverse energy and rapidity for photon virtualities in the range 0 < Q^2 < 49 GeV^2. Finally, a measurement of the energy flow in the direction of the photon is shown as a function of Q^2 to verify the existence of a photon remnant.

The H1 Detector
A full description of the H1 detector can be found elsewhere [15] and only those components relevant for this analysis are described here. The coordinate system used has the nominal interaction point as the origin and the incident proton beam defining the +z direction. The polar angle θ is defined with respect to the proton direction, and the pseudorapidity is given by η = −ln tan(θ/2). A finely grained Liquid Argon calorimeter [16] covers the range in polar angle 4° < θ < 154°, with full azimuthal acceptance. It consists of an electromagnetic section with lead absorbers, 20-30 radiation lengths in depth, and a hadronic section with steel absorbers. The total depth of the calorimeter ranges from 4.5 to 8 hadronic interaction lengths. The energy resolution is σ(E)/E ≈ 0.11/√E for electrons and σ(E)/E ≈ 0.5/√E for pions (E in GeV), as measured in test beams [17]. The absolute energy scale is known to a precision of 3% for electrons and 4% for hadrons. A series of interleaved drift and multiwire proportional chambers surround the interaction point, enabling the reconstruction of charged particles in the range 7° < θ < 165° and the determination of the event vertex. A uniform axial magnetic field of 1.15 T is provided by a superconducting coil which surrounds the calorimeter. For 1994 data taking, the polar region 151° < θ < 176° was covered by the BEMC [18], a lead/scintillator electromagnetic calorimeter with a depth of 21.7 radiation lengths. The resolution was given by 0.10/√E (E in GeV) and the absolute electromagnetic energy scale was known to a precision of about 1%. In 1995, the BEMC was replaced by the SPACAL [19], a lead/scintillating-fibre calorimeter with both an electromagnetic and a hadronic section covering the range 153° < θ < 177.8°. The energy resolution of the electromagnetic section has been determined as 7.5%/√E ⊕ 2.5% (E in GeV) and the absolute energy scale uncertainty is 1% at 27.5 GeV and 3% at 7 GeV [20]. The hadronic energy scale uncertainty of the measurement in the SPACAL is presently about 10%. Both calorimeter sections have a time resolution better than 1 ns, enabling the reduction of proton-beam-induced background events. The BEMC and the SPACAL were used both to trigger on and to measure the scattered lepton in DIS processes for 0.65 < Q^2 < 49 GeV^2. In 1994, the backward proportional chamber (BPC) was located in front of the BEMC. In 1995, this was replaced by a four-module drift chamber, the BDC [21], in front of the SPACAL. The polar angle of the scattered lepton was determined using the event vertex and information from these backward tracking chambers. The luminosity system consists of two crystal calorimeters with resolution σ(E)/E = 0.1/√E (E in GeV). The electron tagger is located at z = −33 m and the photon detector at z = −103 m. The electron tagger accepts electrons with an energy of between 0.2 and 0.8 of the incident beam energy, and with scattering angles θ′ < 5 mrad (θ′ = π − θ).

Data Samples and Event Selection
The analysis is based on data taken by the H1 experiment in 1994 and 1995.
Three data samples were used, each restricted to a region of Q^2 with good acceptance:

• Q^2 < 10^-2 GeV^2: the photoproduction sample, in which the scattered electron is detected in the electron tagger of the luminosity system. The events were triggered by demanding an energy deposit in the electron tagger with E > 4 GeV in coincidence with at least one track pointing to the vertex region (p_t ≳ 500 MeV). More details of the trigger conditions can be found in [22]. The data sample used was a sub-sample of that collected in 1994 and corresponds to an integrated luminosity of 210 nb^-1.

• 0.65 < Q^2 < 20 GeV^2: the 1995 shifted-vertex data sample, corresponding to 120 nb^-1 collected in a special run in which the mean position of the interaction point was shifted by 70 cm in the +z direction, enabling positron detection in the SPACAL down to angles of 178.5°. The events were triggered by requiring a cluster of more than 5 GeV in the SPACAL and timing consistent with an ep bunch crossing. The most energetic cluster in the electromagnetic section of the SPACAL was taken as the electron candidate [20].

• 9 < Q^2 < 49 GeV^2: the standard 1994 data sample, corresponding to an integrated luminosity of 2 pb^-1. The events were triggered by requiring a cluster of more than 4 GeV in the BEMC, and the scattered positron was taken as the most energetic BEMC cluster [23].

The error on the luminosity determination was 1.5(3)% for data taken in 1994 (1995). For all three samples, the z position of the interaction vertex was required to be within 30 cm of the nominal position. In addition, to enable comparisons between the DIS and photoproduction data, for all data samples the inelasticity variable y was restricted to 0.3 < y < 0.6, the range where the acceptance of the electron tagger is well understood. For events with 0.65 < Q^2 < 49 GeV^2 it was also required that the Σ_i(E_i − P_z,i) of all the reconstructed calorimeter clusters, which should be equal to twice the electron beam energy for a DIS event, was greater than 45 GeV. Monte Carlo studies showed that the background from real photoproduction events, where hadronic activity in the backward region fakes a scattered lepton, was reduced to less than 3% in the selected kinematic region. The event kinematics were reconstructed from the scattered electron 4-vector. Jets were reconstructed in the γ*p centre of mass frame using a kT clustering algorithm [24]. The merging procedure is based on the quantity y_ki, evaluated for each pair of clusters as

y_ki = 2 min(E_k^2, E_i^2)(1 − cos θ_ki)/E_cut^2,

where E_k and E_i are the energies of the clusters and θ_ki is the angle between them. In addition, to enable the association of particles to either a photon or proton remnant, two infinite-momentum pseudo-particles along ±z are included in the clustering procedure, but excluded from the final jets. Particles are combined by the addition of their 4-vectors when y_ki < 1. Thus E_cut sets the scale for the jet resolution and separates the hard jets from the beam remnants. In this analysis, E_cut was chosen to be 3 GeV. Jets were accepted with a transverse energy Et* > 4 GeV and a pseudorapidity in the range −2.5 < η* < −0.5, where Et* and η* are calculated in the γ*p frame, with positive η* corresponding to the incident proton direction. For the photoproduction sample, the Et* threshold was raised to 5 GeV to reduce the influence of multiple parton interactions.
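A minimal sketch of the kT-type clustering just described: repeatedly merge the pair of objects with the smallest y_ki until every remaining pair has y_ki ≥ 1, with E_cut = 3 GeV. The toy four-vectors are arbitrary, and the beam pseudo-particles and remnant bookkeeping of the full analysis are omitted.

# Pairwise kT-type clustering with the distance measure quoted above,
#   y_ki = 2 min(Ek^2, Ei^2) (1 - cos theta_ki) / Ecut^2,
# merging 4-vectors while the smallest y_ki is below 1. Toy input only.

import math

ECUT = 3.0  # GeV

def y_ki(a, b):
    pa = math.sqrt(a[1]**2 + a[2]**2 + a[3]**2)
    pb = math.sqrt(b[1]**2 + b[2]**2 + b[3]**2)
    cos = (a[1]*b[1] + a[2]*b[2] + a[3]*b[3]) / (pa * pb)
    return 2.0 * min(a[0], b[0])**2 * (1.0 - cos) / ECUT**2

def cluster(objs):
    objs = [tuple(o) for o in objs]
    while len(objs) > 1:
        y, i, j = min((y_ki(objs[i], objs[j]), i, j)
                      for i in range(len(objs)) for j in range(i + 1, len(objs)))
        if y >= 1.0:
            break  # all remaining pairs are resolved jets
        merged = tuple(objs[i][k] + objs[j][k] for k in range(4))  # add 4-vectors
        objs = [o for k, o in enumerate(objs) if k not in (i, j)] + [merged]
    return objs

if __name__ == "__main__":
    toy = [(5.0, 3.0, 0.1, 4.0), (4.0, 2.5, -0.1, 3.1), (6.0, -3.5, 0.3, -4.8)]
    print(cluster(toy))  # first two objects merge; the third stays separate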
This jet selection ensures that the energy flow around the jet axis is well described by the Monte Carlo models used for the acceptance corrections.

QCD Motivated Calculations
For acceptance corrections and comparisons with the measured jet cross-sections, several event generators were used. The PHOJET 1.03 [25] generator simulates all relevant components of the total photoproduction cross-section. It is based on the two-component Dual Parton Model [26] and incorporates both hard and multiple soft interactions. The photon flux is calculated using the Weizsäcker-Williams approximation [27,28] and the hard processes are calculated using leading order QCD matrix elements. Final state QCD radiation and fragmentation effects are implemented using the string fragmentation model JETSET 7.4 [29]. In this analysis, PHOJET was used to simulate quasi-real photon-proton interactions. DIS events were modelled using the LEPTO 6.5 [30] and ARIADNE 4.08 [31] programs. LEPTO includes the first order QCD matrix elements and uses leading-log parton showers to model higher order radiation. The ARIADNE generator uses the Colour Dipole Model [32] to simulate QCD radiation to all orders. A feature of this model is that the hard subprocess need not be generated at the photon vertex, and this can be regarded as generating "resolved-like" events. For both models, hadronisation is performed using JETSET. HERWIG 5.9 [33] was used to model direct and resolved photon processes. The emission of the photon from the incident electron is generated according to the equivalent photon approximation [28]. The parton-parton interactions are simulated according to leading order QCD calculations, and a parton shower model which effectively includes interference effects between the initial and final state showers (colour coherence) is implemented [34]. The factorisation scales were set according to the transverse momentum of the scattered partons, with a cut-off at p_t^min = 1.5 GeV. A cluster model is used for hadronisation. HERWIG includes the option of additional interactions of the beam remnant in the phenomenological soft underlying event model. A reasonable description of the jet profiles and the jet rates observed in the data was obtained with no soft underlying event for Q^2 > 0.65 GeV^2 and with a soft underlying event in 15% of the resolved interactions at Q^2 = 0 GeV^2. These values were used throughout the subsequent analysis. The RAPGAP Monte Carlo [35], originally developed to simulate diffractive processes, also includes modelling of deep-inelastic and all relevant resolved photon processes. DIS processes are simulated using leading order QCD matrix elements with a p_t^2 cut-off scheme for the light quarks and the full matrix element for heavy quarks. For resolved photon processes, the equivalent photon approximation is used to model the flux of virtual photons. Parton-parton interactions are calculated from on-shell matrix elements supplemented by initial and final state parton showers. Both HERWIG and RAPGAP include models for the evolution of the photon parton densities with Q^2. Three approaches are considered. The first assumes no Q^2 dependence of the parton densities.
The second uses the Drees-Godbole parameterization [12] of virtual photon structure, following an analysis of Borzumati and Schuler [13], in which the quark densities f_q|γ* in a photon of virtuality Q^2, probed at a scale p_t^2, are related to those of a real photon f_q|γ by

f_q|γ*(x, p_t^2, Q^2) = L · f_q|γ(x, p_t^2), with L = ln[(p_t^2 + ω^2)/(Q^2 + ω^2)] / ln[(p_t^2 + ω^2)/ω^2].

An analogous relation exists for the gluon density, with L replaced by L^2. The parameter ω sets the scale of Q^2 above which the suppression becomes significant. We use a value of ω^2 = 1 GeV^2 and the GRV-G HO (DIS) [36] parameterizations for the unsuppressed photon parton densities. Throughout this paper we refer to this as the DG model. The third approach is to use the photon parton densities of Schuler and Sjöstrand [37] (SaS), which are valid for Q^2 ≥ 0 GeV^2. In this scheme, the photon parton densities are decomposed into a direct, a VDM and a perturbative anomalous component. We will show comparisons with the SaS-2D parameterisation in the DIS scheme, using the form for the Q^2 suppression recommended by the authors. All models were used with the GRV94 HO (DIS) [38] parton densities for the proton, which give a good description of the measured F2 for Q^2 > 1 GeV^2 [20].

Determination of the Jet Cross-Sections
The distributions of the jet transverse energy (Et*) and pseudorapidity (η*) in the γ*p centre of mass frame were corrected bin-by-bin for detector effects using generated events passed through a simulation of the H1 detector based on the GEANT program [39]. The bin sizes were chosen to keep the effects of finite resolution and bin-to-bin migration small. For the photoproduction data, correction factors were determined from the HERWIG DG model, which gives a good description of the data. The model dependence was estimated by comparison with the values obtained from PHOJET, which also gives a good description of the jets observed in the data. This model dependence is one of the largest contributions to the systematic error. For the DIS data, correction factors in the range 0.65 < Q^2 < 20 GeV^2 were determined from the HERWIG DG model, and for Q^2 > 20 GeV^2 from the HERWIG direct model, both of which give a satisfactory description of the observed jets in these kinematic regimes. LEPTO was used to estimate the model dependence of the correction factors, which is again one of the dominant sources of systematic error. The other large source of systematic error arises from the uncertainty in the knowledge of the hadronic energy scale of the Liquid Argon calorimeter. This has two contributions: a possible 3% variation in the energy scale between different calorimeter modules, which is included in the point-to-point error, and a 4% uncertainty in the overall energy scale, which affects the normalisation of the jet cross-sections. Further sources of systematic error include a 1(2)% uncertainty in the electromagnetic energy scale of the BEMC (SPACAL) and a 1 mrad uncertainty in the polar angle of the scattered electron. For the photoproduction data, the uncertainty in the acceptance and energy calibration of the electron tagger was included. A 20(10)% uncertainty in the knowledge of the hadronic energy scale of the BEMC (SPACAL) is also considered. The 1.5(3)% uncertainty in the luminosity determination in 1994 (1995) affects the overall normalisation of the jet cross-sections. The effect of radiative corrections in DIS events has been studied using the HERACLES program [40], which includes complete first order radiative corrections and the emission of real bremsstrahlung photons for the electroweak interaction.
The effect is 20-30% for jets with Et* of 4 GeV, decreases with increasing Et* and is negligible for Et* > 7 GeV. It does not significantly influence the conclusions and the data are not corrected for this effect. The corrected cross-sections obtained from the 1995 shifted-vertex data and from the 1994 data are in good agreement in the region 9 < Q^2 < 20 GeV^2, where the data samples overlap.

Results
Figure 1 shows the inclusive ep jet cross-section dσ_ep/dEt* for 0.3 < y < 0.6 and jets with −2.5 < η* < −0.5, and the values are listed in table 2. The data are compared with the prediction from the HERWIG DG model, which includes a resolved component of the virtual photon. This is able to give a good description of the data except for jets in the lowest Et* range when 9 < Q^2 < 49 GeV^2. Also shown is the direct contribution to this model, which accounts for an increasing fraction of the total prediction as Q^2 increases, but which alone cannot describe the measured jet cross-sections. The jet cross-section dσ_ep/dη* for jets with Et* > 5 GeV is shown in figure 2 and the values are listed in table 3. The data are compared to the HERWIG DG model and to the direct photon contribution to this model. The direct photon processes alone significantly underestimate the jet cross-section at low Q^2, but the data are described by HERWIG if the resolved photon component is included and suppressed with increasing Q^2 according to the DG model. The relative contribution to the jet cross-section from resolved photon processes increases towards the proton (+η*) direction. In order to study in more detail the Q^2 evolution of the photon parton densities, we factor out the Q^2 dependence contained in the flux of photons and calculate a γ*p jet cross-section in each Q^2 range using

σ_γ*p(Q^2) = σ_ep / F_γ|e.

We use the Weizsäcker-Williams approximation [27,28] to calculate the flux of photons, F_γ|e:

F_γ|e = α/(2π) ∫ dy ∫ dQ^2 (1/Q^2) [ (1 + (1−y)^2)/y − 2(1−y)/y · Q^2_min/Q^2 ].

The flux is integrated over 0.3 < y < 0.6, and Q^2_max and Q^2_min are the upper and lower edges of the Q^2 range. For photoproduction,

Q^2_min = m_e^2 y^2/(1−y),

where m_e is the electron mass. This factorisation of the cross-section remains a reasonable approximation for high p_t jet production if p_t^2 ≫ Q^2 [7], a condition which is satisfied by the majority of our data 1. The numerical values of the flux factors used in each Q^2 range are listed in table 1. The Q^2 dependence of the inclusive γ*p jet cross-section at fixed jet Et* is shown in figure 3. For Et* < 10 GeV there is a significant decrease of the jet cross-section with increasing Q^2. Also shown for comparison are the predictions from LEPTO and ARIADNE. We expect such DIS models to be valid when Q^2 ≳ Et*^2, where the photon cannot be resolved. This region corresponds to the 2-3 highest Q^2 ranges in figures 3a and 3b. It can be seen that both models give an adequate description of the data in these ranges. However, neither model can describe the data when Q^2 < Et*^2 and the virtual photon can be resolved. The predictions of these models also differ significantly in this region. Although ARIADNE predicts a jet cross-section which decreases with increasing Q^2 in a similar manner to the data and is able to describe the data for Q^2 ≳ 4 GeV^2, it is unable to describe the data at all Et* and all Q^2.
The prediction from ARIADNE is sensitive to the parameters which limit the phase space for QCD radiation 2, and therefore to the fraction of "resolved-like" events produced by the model, but we found no choice of these parameters which enabled the model to describe the data. The same data compared to a series of models which include a partonic structure for the virtual photon, as implemented in the HERWIG and RAPGAP generators, are shown in figure 4. In each case, the sum of the direct and resolved contributions is shown. The dot-dashed curve shows the prediction from HERWIG assuming the GRV-G HO structure function for the photon with no Q^2 suppression. The resulting γ*p jet cross-section is almost independent of Q^2, in contrast to the data, which show, except for the highest Et* jets, a significant decrease of the jet cross-section with Q^2. The jet cross-section predicted by this model at Q^2 = 0 GeV^2 is slightly larger than that predicted for Q^2 > 0 GeV^2 because a soft underlying event is included at Q^2 = 0 GeV^2. Also shown are the predictions from HERWIG and from RAPGAP using the DG model for the virtual photon structure. The models are in good agreement with each other, and describe the data well except for jets with 4 < Et* < 5 GeV when 9 < Q^2 < 49 GeV^2, where they underestimate the measured cross-section. We note that in the DG model the photon parton density functions all vanish for Q^2 > p_t^2, which is approximately the case in this region. The data are best described by the RAPGAP model using the SaS-2D parameterization of the virtual photon. In contrast to the DG model, the photon parton densities do not vanish when Q^2 > p_t^2 in this parameterization. The inclusive jet cross-section can therefore be understood if a partonic structure is ascribed to the virtual photon. Moreover, the observed Q^2 evolution of the jet cross-sections can be explained by the suppression of the parton densities with increasing photon virtuality, as predicted by QCD inspired models. For Q^2 ≳ p_t^2, the photon is effectively structureless.

Photon Remnant
The jet algorithm used in this analysis assigns particles to a photon remnant. The fraction of the incident photon's energy which is reconstructed in the photon remnant jet is given by

f = Σ_i E_i* / (E_e* − E′_e*),

where E_i* is the energy of a particle assigned to the photon remnant and E_e* and E′_e* are the energies of the incident and scattered electron, respectively, in the γ*p frame. Figure 5 shows the uncorrected distribution of f in the data, as a function of Q^2, for events with at least one jet with Et* > 5 GeV and −2.5 < η* < −0.5. At Q^2 = 0 GeV^2, where resolved photon processes dominate the cross-section, most of the events with jets also contain a photon remnant with a significant fraction of the incident photon's energy. Conversely, at the highest Q^2 values, where the direct processes dominate, f is peaked at zero. Also shown are the predictions from the HERWIG DG model and from LEPTO after detector simulation. It can be seen that the distribution of f from LEPTO is peaked at zero for all Q^2, as expected for a model which includes only direct processes. At low Q^2, the data agree with the HERWIG DG model, and at high Q^2 they agree with the LEPTO prediction. The evolution of f with Q^2 is consistent with the picture of a resolved photon contribution which is suppressed with increasing virtuality.
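The suppression picture summarized above is easy to make quantitative with the DG factor L given earlier (with ω^2 = 1 GeV^2 as used in the text), under which the quark densities of the virtual photon scale as L and the gluon density as L^2. The probing scale p_t^2 = 25 GeV^2 in the example below is an arbitrary choice for illustration.

# The Drees-Godbole suppression factor L defined earlier, clipped at zero
# since the DG densities vanish for Q^2 > pt^2. pt^2 = 25 GeV^2 is an
# arbitrary example scale.

import math

OMEGA2 = 1.0  # GeV^2, as quoted in the text

def dg_factor(pt2, q2, gluon=False):
    """Suppression of the real-photon parton densities at virtuality Q^2."""
    L = math.log((pt2 + OMEGA2) / (q2 + OMEGA2)) / math.log((pt2 + OMEGA2) / OMEGA2)
    L = max(L, 0.0)
    return L * L if gluon else L

if __name__ == "__main__":
    for q2 in (0.0, 2.5, 9.0, 30.0):
        print(q2, round(dg_factor(25.0, q2), 3),
              round(dg_factor(25.0, q2, gluon=True), 3))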
Models in which the photon only couples directly to the partons of the hard scattering process fail to describe the data in the region Et*^2 ≳ Q^2, where the virtual photon can be resolved. Models which include a resolved component of the photon, suppressed with Q^2, are in good agreement with the data. The best description of the data was obtained with a model which includes direct, VDM and perturbative contributions to the virtual photon structure. It has been established that the energy assigned to the photon remnant in events with high Et* jets is on average large at low Q^2 and decreases with increasing Q^2, consistent with the picture of a resolved photon component which is suppressed with its increasing virtuality.

Figure 1: The differential jet cross-section dσ_ep/dEt* for jets with −2.5 < η* < −0.5 and 0.3 < y < 0.6. The inner error bars indicate the statistical errors, the total error bars show the quadratic sum of the statistical and systematic errors, and the shaded band represents the correlated error from the uncertainty in the Liquid Argon energy scale. Not shown is the error from the uncertainty in the luminosity determination, which leads to a 3% normalisation error for the data with 0.65 < Q^2 < 9 GeV^2 and a 1.5% normalisation error elsewhere. The data are compared to the HERWIG DG model (solid line) and to the direct contribution to this model (dashed line).

Figure 2: The differential jet cross-section dσ_ep/dη* for jets with Et* > 5 GeV and 0.3 < y < 0.6. The incident proton direction is to the right. The inner error bars indicate the statistical errors, the total error bars show the quadratic sum of the statistical and systematic errors, and the shaded band represents the correlated error from the uncertainty in the Liquid Argon energy scale. Not shown is the error from the uncertainty in the luminosity determination, which leads to a 3% normalisation error for the data with 0.65 < Q^2 < 9 GeV^2 and a 1.5% normalisation error elsewhere. The data are compared to the HERWIG DG model (solid line) and to the direct contribution to this model (dashed line).

Figure 3: The inclusive γ*p jet cross-section σ_γ*p(Q^2) for jets with −2.5 < η* < −0.5 and 0.3 < y < 0.6. The inner error bars indicate the statistical errors and the total error bars show the quadratic sum of the statistical and systematic errors. Not shown are the normalisation error from the uncertainty in the Liquid Argon energy scale, which is 15% at low Et* and increases to 25% at high Et*, and the normalisation error from the uncertainty in the luminosity determination, which is 3% for the data with 0.65 < Q^2 < 9 GeV^2 and 1.5% elsewhere. The data are compared to LEPTO (solid line) and ARIADNE (dashed line).

Figure 4: The inclusive γ*p jet cross-section σ_γ*p(Q^2) for jets with −2.5 < η* < −0.5 and 0.3 < y < 0.6. The inner error bars indicate the statistical errors and the total error bar shows the quadratic sum of the statistical and systematic errors. Not shown are the normalisation error from the uncertainty in the Liquid Argon energy scale, which is 15% at low Et* and increases to 25% at high Et*, and the normalisation error from the uncertainty in the luminosity determination, which is 3% for the data with 0.65 < Q^2 < 9 GeV^2 and 1.5% elsewhere.
The data are compared to HERWIG with no suppression of the photon structure function with Q^2 (dot-dashed line), the HERWIG DG model (dashed line), the RAPGAP DG model (dotted line) and RAPGAP with the SaS-2D photon structure function (solid line).

Table 2: The inclusive differential jet cross-section dσ_ep/dEt* for jets with −2.5 < η* < −0.5 in the γ*p centre of mass frame, measured in the range 0.3 < y < 0.6 for nine different Q^2 ranges. The statistical, positive systematic, negative systematic and normalisation errors are given. In addition, the uncertainty in the luminosity determination leads to a 3% normalisation error for the data with 0.65 < Q^2 < 9 GeV^2 and a 1.5% normalisation error elsewhere.

Table 3: The inclusive differential jet cross-section dσ_ep/dη* (in nb) for jets with Et* > 5 GeV in the γ*p centre of mass frame, measured in the range 0.3 < y < 0.6 for nine different Q^2 ranges. The statistical, positive systematic, negative systematic and normalisation errors are given. In addition, the uncertainty in the luminosity determination leads to a 3% normalisation error for the data with 0.65 < Q^2 < 9 GeV^2 and a 1.5% normalisation error elsewhere.
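As a numerical illustration of the flux factors of table 1, the Weizsäcker-Williams flux quoted earlier can be integrated over 0.3 < y < 0.6 and a given Q^2 range. This is an independent sketch using the standard flux formula, not the published computation; the grid sizes are arbitrary.

# Integrated Weizsaecker-Williams photon flux, assuming the standard form
#   f(y, Q^2) = alpha/(2 pi Q^2) [ (1+(1-y)^2)/y - 2(1-y)/y * Q2min/Q^2 ],
# with Q2min = me^2 y^2/(1-y), midpoint rule, logarithmic in Q^2.

import math

ALPHA = 1.0 / 137.036
ME2 = (0.511e-3) ** 2   # electron mass squared, GeV^2

def ww_flux(q2_lo, q2_hi, y_lo=0.3, y_hi=0.6, n=400):
    total, dy = 0.0, (y_hi - y_lo) / n
    for i in range(n):
        y = y_lo + (i + 0.5) * dy
        q2min = ME2 * y * y / (1.0 - y)
        lo = max(q2_lo, q2min)      # kinematic lower bound for photoproduction
        if lo >= q2_hi:
            continue
        dlnq2 = (math.log(q2_hi) - math.log(lo)) / n
        for j in range(n):
            q2 = math.exp(math.log(lo) + (j + 0.5) * dlnq2)
            bracket = (1 + (1 - y) ** 2) / y - 2 * (1 - y) / y * q2min / q2
            total += ALPHA / (2 * math.pi) * bracket * dlnq2 * dy  # dQ^2/Q^2 = dlnQ^2
    return total

if __name__ == "__main__":
    print(ww_flux(0.0, 1e-2))   # photoproduction-like range
    print(ww_flux(9.0, 20.0))   # a DIS range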
2014-10-01T00:00:00.000Z
1997-09-16T00:00:00.000
{ "year": 1997, "sha1": "69a849e5c87ac2257ed20acc9de24cd42ef2a005", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ex/9709017", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3e6194a8a52b30668cbd1a8c3859fff5cefaa8e4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
199168385
pes2o/s2orc
v3-fos-license
The use of bentonite of Bener Meriah Aceh to improve the mechanical properties of Polypropylene-Montmorillonite Nanocomposite

Research on the application of bentonite from Bener Meriah, Aceh, to improve the mechanical properties of polypropylene-montmorillonite nanocomposite has been conducted. Bentonite was isolated into nano-sized montmorillonite and used as a filler of polypropylene-montmorillonite nanocomposites, with the addition of PP-g-MA as a compatibilizer and octadecylamine as a modifier of MMT. The results showed that the bentonite of Bener Meriah, Aceh, contained montmorillonite with a content of 70.6%. Based on the results of the mechanical properties tests, it was discovered that montmorillonite isolated from the bentonite of Bener Meriah could improve the mechanical properties of the PP-MMT nanocomposite at a PP/PP-g-MA/MMT composition ratio of 85/10/5.

Introduction
Bentonite is an abundant natural resource in Indonesia, found for example in Java, Sumatra and Sulawesi, with reserves of more than 380 million tons, but it has not been optimally utilized. In Aceh, natural bentonite can be found in the districts of North Aceh, Bener Meriah, Sabang, Central Aceh and Simeulue [1,2,3], with reserves reaching 2,618,224,030.20 tons [4]. To date, only a few studies have been conducted on Aceh bentonite [5], and the bentonite used was only that found in North Aceh. Therefore, processing of bentonite from other districts in Aceh is required. The bentonite used in this research is from Bener Meriah, an area in Aceh Province that has a bentonite deposit with a thickness of 1 m over an area of 20 ha and a content reaching 520,000 tons [4], which has not been utilized. Bentonite is a natural clay whose main component is the mineral montmorillonite (85%), with the chemical formula Mx(Al4−xMgx)Si8O20(OH)4·nH2O. Montmorillonite (MMT) is a phyllosilicate mineral that has the ability to expand and can be intercalated and exfoliated; it is therefore widely used as a nanocomposite filler to enhance the properties of the nanocomposite [6]. When exfoliation occurs, the mechanical and rheological properties of the nanocomposite increase dramatically when compared with the pure polymer [7]. Several studies on the addition of MMT to polypropylene (PP) nanocomposites have been conducted and indicated that MMT can improve several nanocomposite properties, such as mechanical properties [8,9,10,11,12], thermal properties [13], fire retardancy [14] and the degree of degradation [16]. It is expected that MMT from Bener Meriah can also be used to improve the mechanical properties of nanocomposites. Nanocomposites can be obtained by mixing the silicate layers of MMT with PP by the melt intercalation method. The mixing of silicate layers from MMT in PP can be improved by using functional oligomers as compatibilizers. Several studies have reported using polypropylene-graft-maleic anhydride (PP-g-MA) as a compatibilizer [15]. MMT is also modified using long organic alkyl chains, giving a modified organo-silicate layer (OMLS) or organo-clay. The organo-clay changes the hydrophilic character of MMT into a hydrophobic one, allowing the MMT interface to interact with several different polymer matrices. The organic compounds commonly used to modify MMT are alkylammonium salts. The research was conducted in several stages: characterization of the bentonite, isolation of the MMT nanoparticles, and preparation and characterization of the PP-MMT nanocomposites.

Methods
2.2.1. Isolation of nano montmorillonite from natural bentonite of Bener Meriah. Montmorillonite nanoparticles were obtained by preparing 1 kg of bentonite sample, sieving it through a 100 mesh sieve and drying it in an oven at 105 °C for 4 hours. Subsequently, the sample was fractionated. Fractionation was done by sedimentation: 40 grams of the 100 mesh bentonite was weighed and 2 L of distilled water was added to form a suspension. The bentonite suspension was treated with ultrasonic waves for 15 minutes at 750 W at room temperature. The suspension was then left in a flat place, away from all vibrations. The precipitate that formed within 15 minutes was removed by pouring the suspension into another container and retaining the remaining suspension. The precipitate formed over the next 3 days was filtered off and the filtrate was retained. The suspended fraction in the filtrate was stirred again, the filtrate was left for a week, and the precipitate that formed was collected. This precipitate was dried in an oven at 105 °C for 3 hours, then crushed and sieved using a 200 mesh sieve. This fraction was stored in a desiccator. The identification of the fraction (montmorillonite) was carried out using FT-IR, XRD and SEM [16].

Preparation of polypropylene-montmorillonite nanocomposite (PP-MMT). The variations in the composition of PP, PP-g-MA and montmorillonite used to prepare the PP-MMT nanocomposites are given in Table 1. The materials, in the various compositions of PP, PP-g-MA and montmorillonite, were compounded in a Haake Rheomix 3000 internal laboratory mixer with high-intensity rotors, operating at 180 °C and 65 rpm for 10 minutes. Samples for mechanical tests were made from the nanocomposites produced.

Characterization. Samples of bentonite and nano montmorillonite were characterized by X-ray fluorescence (XRF; PANalytical Axios Advance). The percentage of montmorillonite was calculated using the Meyer equation [17]. The functional groups of the bentonite, nano montmorillonite and nanocomposite specimens were analyzed using a Perkin Elmer Spectrum 100 FTIR at wave numbers 400 to 4000 cm^-1. The X-ray diffraction patterns of the bentonite, nano montmorillonite and nanocomposite were recorded using a Shimadzu 6000 XRD, operating with Cu Kα radiation generated at 40 kV and 30 mA. The samples were scanned in the range of 3° to 50° (2θ) with a scan rate of 5° (2θ)/minute. Surface morphologies of the nano montmorillonite and nanocomposite were analyzed by scanning electron microscopy (SEM; Zeiss EVO MA 10) at 20 kV. Mechanical tests were done using a Lloyd LR/10KN universal testing machine at room temperature and a speed of 50 mm min^-1, according to the ASTM D638 and ASTM D256 standards. Nanoparticle size was analyzed with a Particle Size Analyzer (PSA; Coulter LS 100 Q).

Result and discussion
3.1. XRF analysis. XRF analysis was chosen as the method for the quantitative assessment of the composition. The results of the characterization of the bentonite of Bener Meriah, Aceh, by XRF can be seen in Table 2. Table 2 shows the chemical composition of the bentonite from Bener Meriah; the amount of Na2O is greater than that of CaO, which indicates that the bentonite from Bener Meriah is Na-bentonite, as described in [18].

3.2. Meyer test. The content of montmorillonite in the Bener Meriah bentonite was analyzed using the Meyer (1972) equation, and the result was 70.6%.
The percentage of montmorillonite differs from region to region, and is also suspected to be influenced by the process of bentonite formation [18]. 3.3. XRD analysis The characterization of the samples by X-ray diffraction (XRD) aimed to verify the minerals present in the Bener Meriah bentonite and to assess the exfoliation and intercalation of the PP-MMT nanocomposite. The XRD patterns of the Bener Meriah bentonite can be seen in Figure 2 [16,21]. In addition, the peaks at 2θ = 5.86° and 16.29° in Figure 2 show that the Bener Meriah bentonite is a Na-bentonite [22]. The XRD pattern of the montmorillonite isolated from the Bener Meriah bentonite shows only peaks at 2θ = 5.47°, 14.54°, 25.25° and 35.04° [23]. The XRD pattern of the PP-MMT nanocomposite at the PP/PP-g-MA/MMT composition ratio of 85/10/5 shows a peak at 2θ = 2.4° to 2.7°, indicating that PP chains have inserted into the interlayers of the MMT and hence that an intercalated or exfoliated polymer structure has been achieved [24]. In addition, the angular shift of the MMT peak also indicates the occurrence of intercalation and exfoliation in the silicate layers. 3.4. SEM analysis The Scanning Electron Microscopy (SEM) technique was used to explore the surface morphologies of the samples. Figure 3(A) shows the surface morphology of the montmorillonite isolated from the Bener Meriah bentonite. The figure shows that the MMT from Bener Meriah has a structure of layered pores, randomly distributed and of different sizes [25,26]. The existence of these layered pores is what allows MMT to be used as a nanocomposite filler to enhance the properties of the nanocomposite. 3.5. PSA analysis The montmorillonite isolated from the bentonite was processed into nanoparticles by precipitation [16]. The particle size of the montmorillonite was analyzed using a Particle Size Analyzer. The average particle size of the Bener Meriah montmorillonite was 67.8 nm, as can be seen in Figure 4. This montmorillonite was then used to prepare the nanocomposite. Conclusion It can be concluded that the Bener Meriah bentonite is a Na-bentonite with a montmorillonite content of 70.6%. The bentonite was processed into nano-sized montmorillonite and used as a filler in polypropylene-montmorillonite nanocomposites with the addition of PP-g-MA as a compatibilizer and octadecylamine as a modifier of the MMT. The results indicate that exfoliation and intercalation of PP in the MMT occur, producing compatible nanocomposites. Based on the mechanical properties tests, the isolated montmorillonite improved the mechanical properties of the PP-MMT nanocomposite, with the best results at a PP/PP-g-MA/MMT composition ratio of 85/10/5.
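The interlayer spacings behind the 2θ positions quoted in the XRD section follow directly from Bragg's law, nλ = 2d sin θ. A minimal sketch is given below, assuming the standard Cu Kα wavelength of 1.5406 Å (consistent with the Cu Kα source described in the Characterization section) and first-order reflections; the peak labels are ours, for illustration only.

```python
import math

CU_KALPHA = 1.5406  # Cu K-alpha wavelength in angstroms (standard value)

def d_spacing(two_theta_deg: float, wavelength: float = CU_KALPHA) -> float:
    """Interplanar spacing from Bragg's law, n*lambda = 2*d*sin(theta), n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# low-angle 2-theta peaks reported in the text
for label, two_theta in [("raw bentonite", 5.86),
                         ("isolated MMT", 5.47),
                         ("PP-MMT nanocomposite", 2.4)]:
    print(f"{label}: 2theta = {two_theta:5.2f} deg -> d = {d_spacing(two_theta):5.2f} A")
```

Under these assumptions, the basal spacing expands from roughly 15-16 Å in the clay to well above 30 Å in the nanocomposite, which is the quantitative signature of PP chains entering the MMT interlayers.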
Video mediastinoscopy-assisted superior mediastinal dissection in the treatment of thyroid carcinoma with mediastinal lymphadenopathy: preliminary results Background Mediastinal lymph node metastases (MLNM) are not rare in thyroid cancer, but their treatment has not been extensively studied. This study aimed to explore the preliminary application of video mediastinoscopy-assisted superior mediastinal dissection in the diagnosis and treatment of thyroid carcinoma with mediastinal lymphadenopathy. Materials and methods We retrospectively reviewed the clinicopathologic data and short-term outcomes of thyroid cancer patients with suspicious MLNM treated with video mediastinoscopy-assisted mediastinal dissection at our institution from 2017 to 2020. Results Nineteen patients were included: 14 with medullary thyroid carcinoma and five with papillary thyroid carcinoma. Superior mediastinal nodes were positive in nine (64.3%) patients with medullary thyroid carcinoma and in four (80.0%) patients with papillary carcinoma. No fatal bleeding occurred. There were three cases of temporary recurrent laryngeal nerve (RLN) palsy postoperatively, one of which was bilateral. Four patients had temporary hypocalcemia requiring supplementation, one had a chyle fistula, and one developed a wound infection after the procedure. Postoperative serum molecular markers decreased in all patients. One patient died of cancer while the other 18 patients remained disease-free, with a median follow-up of 33 months. Conclusion Video mediastinoscopy-assisted superior mediastinal dissection can be performed relatively safely in patients with suspicious MLNM. This diagnostic and therapeutic approach may help control locoregional recurrences. Introduction The incidence of thyroid cancer (TC) has been continuously increasing worldwide during the past decades [1,2]. Common TC categories, such as papillary thyroid carcinoma (PTC) and medullary thyroid carcinoma (MTC), tend to develop regional lymphatic metastasis [3], which is an important factor in predicting the structural recurrence of PTC [4] and is associated with a poorer prognosis in MTC [5]. Surgical dissection is the first choice of treatment [6]. Lymph nodes involved in thyroid carcinoma can be classified into three regions: the central, lateral, and mediastinal compartments [7]. The central neck is the most commonly involved region and is defined inferiorly by the superior sternal border [8]. Lymphatic tissue in this region is in continuity with the superior mediastinum; mediastinal lymph node metastases (MLNM) from TC are therefore not uncommon. According to the literature, the reported incidence of MLNM ranges from 0.7 to 48.1% [9,10]. Patients with mediastinal metastases have a poorer prognosis [11], and surgical extirpation is the preferred treatment for MLNM of thyroid cancer whenever possible. Currently, there are two surgical approaches to treat MLNM. The transcervical approach is an extension of central compartment dissection and is indicated for LNs located superior to the innominate artery.
Lower MLNM, however, requires a more extensive operation. Sometimes a partial or complete median sternotomy or thoracotomy is mandatory, which can potentially increase the risk of complications [12]. Ultrasound-guided fine needle aspiration cannot easily be performed on mediastinal lymph nodes because of interference from the bony structures of the chest wall. Consequently, it is difficult to confirm an enlarged mediastinal lymph node pathologically before surgery, which leads to a diagnostic dilemma for physicians. To minimize operative trauma in TC patients with suspected mediastinal metastasis, we explored a transcervical approach of video mediastinoscopy-assisted superior mediastinal dissection (VMSASMD), which, to the best of our knowledge, has not been investigated in the previous literature. The goal of this study was to review a series of cases as a preliminary communication demonstrating this technique. Materials and methods A retrospective review was performed of patients with thyroid carcinomas who underwent transcervical VMSASMD in the setting of suspicious mediastinal lymph nodes. Patients were treated between March 2017 and October 2020 at the Peking University Cancer Hospital. Demographic data, histology, incidence of mediastinal nodal metastasis, postoperative complications, and follow-up were reviewed. A Wilcoxon signed-rank test was used to compare pre- and post-operative biomarkers. Data were analyzed using SPSS 22.0. All patients were required to have suspicious MLNM demonstrated on preoperative contrast-enhanced CT. CT features suggestive of metastasis included the presence of calcifications, central necrosis or cystic changes, and lymph nodes showing heterogeneous cortical enhancement or greater enhancement than the adjacent muscle [13]. Six patients underwent preoperative functional imaging, including 99mTc-methoxyisobutylisonitrile (99mTc-MIBI) single-photon emission computed tomography/computed tomography (SPECT/CT) or fluorine-18-deoxyglucose (FDG) positron emission tomography (PET), all with positive results. All patients underwent preoperative laryngoscopy and signed informed consent forms. A multidisciplinary discussion was conducted by the head and neck surgeon, thoracic surgeon and radiologist. Indications for VMSASMD were: (1) PTC or MTC with suspected upper MLNM not amenable to removal through the transcervical approach; and (2) no major vascular involvement on imaging investigations. We routinely informed the patient of the possible complications of surgery and of the possibility that the postoperative pathology might be negative. Other options, such as sternotomy, were also offered to the patient. Close cooperation between the head and neck surgeon and the thoracic surgeon during surgery was important. All patients underwent dissection of the pretracheal and paratracheal (level VI) lymphatic tissue through a transcervical incision above the sternal notch. If an intact or residual thyroid gland was present, a total or completion thyroidectomy was performed, and if the lateral neck was clinically involved, it was dissected concurrently. The bilateral recurrent laryngeal nerves (RLNs) were routinely exposed with the assistance of intraoperative neuromonitoring (IONM, NIM-Response 3.0, Medtronic, Jacksonville, Florida, USA). Standard open surgical instrumentation was used for the cervical surgery. Some of the superior mediastinal LNs, especially on the left side, were removed together with the central neck specimen, superior to the innominate vein.
The suprainnominate artery lymph nodes, also known as the level VII LNs, were likewise resected through the open approach. Mediastinal dissection was performed by senior thoracic surgeons familiar with mediastinoscopic biopsy and sternotomy, using video mediastinoscopy (Karl Storz, Tuttlingen, Germany). Equipment for immediate sternotomy or thoracotomy was also prepared in case of intraoperative conversion. The surgeon stood on the cranial side of the patient and the video monitor was placed on the caudal side. Through the routine cervical thyroid incision, the thymus was separated from the trachea, and the index finger followed the trachea and broke through the pretracheal fascia (Fig. 1). The scope was then introduced with the blades closed. The right pulmonary artery was separated, the scope blades were spread, the right and left tracheobronchial angles were identified (Fig. 2A), and the axis of the scope was twisted to the left. The left recurrent laryngeal nerve was identified and exposed (Fig. 2B). A thorough dissection of the adipose tissue and lymph nodes in station 4L was performed while preserving the function of the left recurrent nerve. Routine dissection of the subcarinal space was not performed unless lymph nodes in that area were suspicious. The right compartment is the largest compartment; the scope was angled from the tracheal axis to the right, and the lymph nodes were dissected away from the innominate artery and vein, the superior vena cava, and the right parietal pleura (Fig. 2C). The mediastinal lymph node specimens were divided and labeled according to the criteria used for lung cancer [14]. Results Nineteen patients with thyroid cancer were included, of whom 14 had MTC and 5 had PTC; all of the MTC cases were sporadic. Four (21.1%) patients were undergoing initial treatment, including three with MTC and one with PTC, while the others underwent re-operation. The median age of the patients was 39 years (range 15-65 years). Eight patients were women (42.1%) and 11 were men (57.9%). Patient demographics, tumor stages, and treatments are listed in Table 1. All patients underwent VMSASMD successfully without intraoperative conversion to sternotomy or thoracotomy. No major vessel injury occurred during superior mediastinal dissection. The mean operation time was 206 min (SD, 58 min; range 100-320 min). The mean blood loss was 65 ml (SD, 30 ml; range 20-100 ml) and no blood transfusion was required. Preoperative laryngoscopy revealed unilateral vocal cord paralysis in two patients with MTC, in whom the recurrent laryngeal nerve (RLN) on the affected side was invaded by tumor and was resected. Of the remaining patients, all of whom had normal pre-resection intraoperative electromyographic (EMG) signals, two developed temporary unilateral RLN palsy and recovered within 1 to 4 months. One patient developed bilateral RLN palsy during surgery and required a prophylactic tracheotomy; the tube was removed 1 week later and voice quality normalized within 3 months (Table 2). One patient with recurrent MTC had permanent hypoparathyroidism before surgery and showed no significant change in parathyroid hormone (PTH) levels after surgery (5.2 pg/ml vs. 5.1 pg/ml). Four patients experienced temporary hypoparathyroidism, three of them asymptomatic; their PTH levels were all restored 6 months after the operation. The median total drainage volume was 357 ml (interquartile range 175-740 ml; range 25-9160 ml).
One patient developed a chyle fistula (CF) after the procedure and was referred for surgery (thoracoscopic ligation of the thoracic duct) after conservative treatment failed. One patient had a postoperative wound infection, which healed after debridement and antibiotic treatment. The median hospitalization time was 10 days (interquartile range 9-14 days; range 7-38 days). The median follow-up time was 33 months (range 3-47 months). One patient, with papillary thyroid carcinoma and lung metastasis, died of cancer 16 months postoperatively, while the others are currently alive. At the last follow-up, clinical examination and radiographic studies were negative for recurrent tumors in all patients (Fig. 3). In the MTC patients, the median preoperative serum calcitonin (CT) level was 1444.0 pg/ml (interquartile range 814.4-2000 pg/ml; range 162.4 pg/ml to > 2000 pg/ml). Because the upper limit of the calcitonin assay in this institute's laboratory was 2000 pg/ml, higher values were censored at that level; the postoperative serum calcitonin levels (range 0.5 pg/ml to > 2000 pg/ml) decreased significantly compared with the preoperative values (p = 0.001). Only one patient had serum calcitonin levels beyond the upper limit both before and after surgery; when her blood test was performed in another hospital, the serum calcitonin level was > 20,000 pg/ml preoperatively and 4300 pg/ml postoperatively. For these patients, calcitonin levels remained stable postoperatively, with only one patient reaching the normal range. All five patients with PTC received radioiodine therapy before or after VMSASMD. Postoperative serum thyroglobulin (Tg) levels decreased in all living patients to < 2 ng/ml, and serum thyroglobulin antibody (Tg-Ab) levels were normal. Discussion Lymph node metastases are common in thyroid cancer patients, with a high incidence of both occult and overt metastases [15,16]. Most studies have focused on central and lateral lymph node metastases, and neither the incidence of MLNM nor the appropriate extent of mediastinal lymph node dissection is clearly defined. Although MTC and PTC have different pathological origins and prognoses, the surgical techniques for managing the primary tumor and regional metastases are the same, and these are the primary focus of our research. The majority of our cohort had MTC, which has a worse prognosis and no established adjuvant therapy and thus requires more extensive surgery; sometimes elective dissection may even be recommended. Most experts agree that sternotomy with mediastinal dissection should be reserved for PTC and MTC patients with imaging evidence of mediastinal disease [4,17]. However, radiological evaluation has low accuracy for detecting mediastinal nodal disease. In a study of 94 patients with highly suspected MLNM who underwent mediastinal lymph node dissection, 13 (13.8%) patients were pathologically negative [10]. Ducic et al. [18] performed transcervical elective superior mediastinal dissection in selected patients with papillary, medullary, and anaplastic thyroid carcinomas and found that 19 of 31 (61.3%) were positive even without overt mediastinal adenopathy on preoperative evaluation. Sugenoya et al. [9] found mediastinal lymph node metastases in 10 of 21 patients (48%) with advanced differentiated thyroid carcinoma after mediastinal dissection through a partial midline sternotomy. In particular, for previously untreated MTC with pretherapeutic basal calcitonin levels greater than 500 pg/ml, mediastinal dissection has been strongly recommended [19].
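The pre/post biomarker comparison reported above (Wilcoxon signed-rank test, p = 0.001) is straightforward to reproduce with standard statistical libraries. A minimal sketch follows; the paired calcitonin values are hypothetical illustrations, not the study's patient data, whose analysis was run in SPSS 22.0.

```python
# Hypothetical paired pre/post calcitonin values (pg/ml), for illustration only.
from scipy import stats

pre_ct  = [1444.0, 814.4, 2000.0, 162.4, 980.0, 1750.0, 620.0, 1300.0]
post_ct = [ 310.0, 150.2,  890.0,  40.1, 210.0,  760.0, 120.0,  450.0]

# Two-sided Wilcoxon signed-rank test on the paired differences
stat, p = stats.wilcoxon(pre_ct, post_ct)
print(f"W = {stat:.1f}, p = {p:.4f}")
```

With every hypothetical pair decreasing, the exact test statistic is 0 and the p-value is small despite the small sample, which mirrors the direction of the reported result.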
Because of the difficulty of biopsying mediastinal lymph nodes, there will always be some patients with negative postoperative pathological results; this proportion was 31.6% in the present study. Clearly, it is not worthwhile for patients without metastasis to undergo such an operation, which can cause excessive trauma. There are two main approaches to mediastinal lymph node dissection: the transcervical and the transsternal procedure. The transcervical approach is more convenient but has a limited surgical field. Sternotomy and partial sternotomy are most frequently used for mediastinal dissection since they offer greater exposure, but they are also more surgically invasive [12,20,21]. Video mediastinoscopy is a minimally invasive strategy that has seldom been used in the treatment of MLNM from thyroid cancer. Mediastinoscopy was initially used in the mediastinal staging of lung cancer [22]. Video mediastinoscopy (VMS) enables the surgeon to operate bimanually as in open surgery. The superior mediastinum is entered through a transcervical incision, and visualization of the area caudal to the subcarinal lymph nodes is facilitated and displayed on the video screen [23,24]. There is a lack of studies describing the terminology and classification of mediastinal lymph node dissection; we therefore adopted the mediastinal compartments used in thoracic surgery. We found that level 2R was the most commonly involved compartment, probably because the inferior border of the central compartment is defined as the innominate artery on the right and the corresponding axial plane on the left [25], while the boundary between levels 2R and 2L is the left margin of the trachea. As a result, the extent of level 2R is larger than that of 2L, and the latter tends to be contiguous with the left central neck compartment and to be removed together with the central neck dissection specimen. In our experience, lymph nodes can be easily identified and resected without compromising the adjacent tissues under VMS, mainly because lymph nodes with thyroid cancer metastasis usually have a smooth capsule and are well demarcated from the surrounding tissues. However, extreme caution should be exercised when grasping the lymph node tissue, and gentle traction must be used while dissecting the surrounding structures to avoid fatal bleeding [26]. The postoperative complication rate was relatively high and some patients had prolonged hospitalization, but no avoidable permanent sequelae developed. Among the complications, one patient developed bilateral RLN palsy and required tracheostomy. Bilateral RLN injury is a rare complication, reported to occur in one out of 1000 cases following total thyroidectomy in a specialized thyroid unit [27]. A possible reason in our cohort is that the surgeons were inexperienced in the first few cases of bilateral mediastinal dissection. A muscle relaxant was used during the mediastinal procedure to avoid bucking or movement of the patient; hence, IONM was temporarily not applied. With technical improvements, the incidence of RLN injury was subsequently reduced. One patient developed a chyle leak, which is not a rare complication of neck dissection. In this case, it was not known whether the thoracic duct injury occurred in the neck or in the chest, so we chose more active surgical management to prevent fatal mediastinal infection. Considering the high proportion of revision surgeries (15/19, 78.9%) in our cohort, the safety of VMSASMD was acceptable.
A study of mediastinal lymph node dissection for thyroid carcinoma through a sternotomy or partial sternotomy approach revealed a postoperative complication rate of 38.2% (13/34), significantly higher than that of the transcervical approach (28.4%, 25/88) [21]. Mediastinal complications associated with the sternotomy approach, such as pleural effusion, mediastinal infection, and superior vena cava rupture, were not observed in our study. The short follow-up period is a limitation of this study, especially for indolent tumors like PTC and MTC. Calcitonin (CT) is a sensitive and specific marker for the persistence or recurrence of MTC and has been used for many years to determine the success of operations. Previous reports have shown that the decrease in CT is often not ideal in patients undergoing reoperation. Moley et al. [28] performed 35 repeat neck explorations and microdissections in 32 patients and found that in 10 cases the CT levels did not decrease. Even in experienced hands, reoperation on selected patients yields biochemical cure rates of only 30-40% [29], and the rate may be lower in patients with suspected mediastinal metastasis. Nevertheless, many patients with persistently high CT levels after surgery continue to live without evidence of disease for many years because of the indolent pattern of the tumor. The serum CT levels of the MTC patients in our cohort all decreased after surgery, but only one of them, who underwent initial surgery, achieved normalization of CT levels. None of the MTC patients had radiologic recurrence, but longer follow-up is needed for further research. The prognosis of PTC patients with MLNM has rarely been studied. Moritani et al. [11] investigated the impact of mediastinal metastases on the prognosis of PTC based on a mean 10.5-year follow-up of 488 patients and found significant differences in disease-free survival (DFS) between patients with and without mediastinal metastases. The one patient in our study who died of cancer, one of the four PTC patients with MLNM, had pulmonary metastases before surgery. This may call into question the value of palliative mediastinal lymph node dissection in patients with distant metastasis, but the number of cases is too small to provide meaningful statistical results. Conclusions Video mediastinoscopy-assisted superior mediastinal dissection can be performed safely in patients with MLNM without sternotomy, especially when malignancy is uncertain. This diagnostic and therapeutic approach may help control locoregional recurrences, and further studies are necessary to determine the impact of MLNM on the long-term prognosis of thyroid cancer patients.
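The small-sample caveat raised in the Discussion can be made concrete: exact binomial confidence intervals around the node-positivity proportions reported in the Results are wide. The counts below are taken from the paper; the choice of a 95% Clopper-Pearson interval is ours, for illustration.

```python
# Exact (Clopper-Pearson) 95% CIs for the reported node-positivity
# proportions: 9/14 for MTC and 4/5 for PTC.
from scipy.stats import binomtest

for label, k, n in [("MTC", 9, 14), ("PTC", 4, 5)]:
    ci = binomtest(k, n).proportion_ci(confidence_level=0.95, method="exact")
    print(f"{label}: {k}/{n} = {k/n:.1%}, 95% CI ({ci.low:.1%}, {ci.high:.1%})")
```

Both intervals span several tens of percentage points, underlining why the authors caution against drawing statistical conclusions from this cohort.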
Reducing stigma and improving access to care for people with mental health conditions in the community: protocol for a multi-site feasibility intervention study (Indigo-Local) Background Stigma and discrimination towards people with mental health conditions by their communities are common worldwide. This can result in a range of negative outcomes for affected persons, including poor access to health care. However, evidence is still patchy from low- and middle-income countries (LMICs) on affordable, community-based interventions to reduce mental health-related stigma and to improve access to mental health care. Methods This study aims to conduct a feasibility (proof-of-principle) pilot study that involves developing, implementing and evaluating a community-based, multi-component, public awareness-raising intervention (titled Indigo-Local), designed to reduce stigma and discrimination and to increase referrals of people with mental health conditions for assessment and treatment. It is being piloted in five LMICs – China, Ethiopia, India, Nepal and Tunisia – and includes several key components: a stakeholder group workshop; a stepped training programme (using a 'Training of Trainers' approach) of community health workers (or similar cadres of workers) and service users that includes repeated supervision and booster sessions; awareness-raising activities in the community; and a media campaign. Social contact and service user involvement are instrumental to all components. The intervention is being evaluated through a mixed-methods pre-post study design that involves quantitative assessment of stigma outcomes measuring knowledge, attitudes and (discriminatory) behaviour; quantitative evaluation of mental health service utilization rates (where feasible in sites); qualitative exploration of the potential effectiveness and impact of the Indigo-Local intervention; a process evaluation; implementation evaluation; and an evaluation of implementation costs. Discussion The outcome of this study will be contextually adapted, evidence-based interventions to reduce mental health-related stigma in local communities in five LMICs to achieve improved access to healthcare. We will have replicable models of how to involve people with lived experience as an integral part of the intervention and will produce knowledge of how intervention content and implementation strategies vary across settings. The interventions and their delivery will be refined to be acceptable, feasible and ready for larger-scale implementation and evaluation. This study thereby has the potential to make an important contribution to the evidence base on what works to reduce mental health-related stigma and discrimination and improve access to health care.
Background People with mental health conditions are often stigmatised and discriminated against in their local communities across the globe [1]. This stigma has far-reaching consequences and has even been described by affected people as worse than the mental illness itself [2,3]. It can have a range of negative impacts in terms of social exclusion and wellbeing, reduced employment opportunities and poverty, and relationship difficulties [4], as well as poor access to health care and reduced health care seeking behaviours [5][6][7][8]. To address these problems, an increasing number of small-scale and short-term stigma reduction interventions have been published over recent years [9][10][11][12][13], with several systematic reviews examining their effectiveness [14][15][16][17][18][19][20][21][22][23]. Overall, these reviews have demonstrated that a number of education-based interventions (addressing myths and misconceptions) and social contact-based interventions (involving direct or indirect interactions with people with the stigmatised condition) produce small to moderate effects on stigma reduction in the short to medium term.
Only a small percentage of these have been published from low- and middle-income countries (LMICs) [21], though one of the newer systematic reviews on the topic [14] found that effective mental health stigma reduction interventions in LMICs had increased in quantity and quality over recent years. The same review reported that research was limited to a small number of LMICs, that there was a lack of robust research designs, a high number of short-term interventions with short follow-up, and nominal use of local expertise in developing interventions or in their cultural adaptation. Furthermore, the authors found minimal mention of social contact interventions despite the strong existing evidence for them, concluding that more research and further translation/application of research findings are needed to address these issues [14]. There has also been a paucity of research published in LMICs evaluating the effectiveness of interventions aimed at reducing stigma and discrimination in the local community [14,19,24-26]. Even though community awareness-raising is commonly included in programmes working with marginalised or stigmatised groups, there is a significant lack of evidence about whether awareness-raising strategies alone are effective in reducing stigma in the community, particularly with regard to changes beyond knowledge, covering the essential areas of attitudes and behaviour. Changing attitudes and behaviour is recognised to be a complex process, and interventions focusing on increasing knowledge through education or teaching alone are not likely to be effective in changing behaviours. There is evidence that social contact interventions are one of the most effective ways to facilitate behaviour change, such as reduced discriminatory actions by community members [14]. Help-seeking behaviours can also be negatively affected by stigma. Causal attributions of mental health conditions vary considerably between cultures, and in most settings a range of external and internal influences intermix and contribute to differences in understanding. In some low- and middle-income settings, for example, people may hold traditional beliefs alongside 'Western' medical models of mental health conditions. Such explanatory models can have a major impact on help-seeking behaviour because people may 'shop around' for health care and may seek mental health support from traditional or religious healers in the first instance [27]. To make use of mental health services, alongside other access issues such as affordability and geographical location, populations need to be made aware of them in the first place, perceive them to be a (potential) solution, and overcome the barrier of negative societal attitudes towards people with mental health conditions [28]. Previous work has shown increased mental health service utilisation following an awareness-raising programme in a low-resource setting in South-East Nigeria [29]. Between 2011 and 2013, Amaudo Itumbauzo, a civil society organisation working in mental health in South-East Nigeria, developed and implemented a mental health awareness-raising intervention [29]. The programme attempted to change community knowledge and attitudes towards people with mental health conditions and to increase utilisation of its Community Mental Health Programme, which works within three States in South-East Nigeria to integrate mental health into health services at the local government level.
The intervention was a refined version of an earlier programme [30] and was implemented in partnership with CBM (an international non-governmental organisation (NGO)). The programme was shown to significantly increase attendance at primary care clinics under the Amaudo Community Mental Health Programme. The Indigo-Local study described here builds on the Amaudo programme and extends it to other settings. The study is part of the larger Indigo Partnership programme, which involves developing and piloting a range of mental-health-related, culturally adapted, multi-level stigma reduction interventions across a variety of target populations in seven sites across five LMICs in Africa and Asia [31]. The Indigo Partnership arose out of the Indigo Network, an international network of researchers committed to the promotion of mental health by reducing stigma and discrimination related to mental illness [32]. Since the previous Amaudo programme [29] did not include a specific role for contact interventions with people living with mental health conditions, the Indigo-Local study developed an intervention that added this component to awareness-raising through media and information-sharing by professionals. The Indigo-Local intervention therefore contains the elements previously used in the Amaudo programme [29], but deliberately adds an element of social-contact service user testimony, because of the clear evidence that has emerged since then of the impact of the personal testimony of people with lived experience of mental health conditions in changing attitudes to mental health conditions [33]. In addition, the Indigo-Local intervention incorporates a media campaign that follows recent understanding of effective means of sharing information in the community [12][13][34][35]. Furthermore, the Indigo-Local study focuses on the ability of the intervention to reduce stigma and discrimination, using broader stigma measures that capture knowledge, attitudes and behaviour, alongside service utilisation rates as a secondary outcome. The aim of the Indigo-Local study is therefore to conduct a feasibility (proof-of-principle) pilot study that involves developing, implementing and evaluating a community-based, multi-component public awareness-raising intervention designed to reduce stigma and discrimination and increase referrals of people with mental health conditions for assessment and treatment in all seven of the Indigo Partnership sites. Study design and objectives The Indigo-Local feasibility pilot study aims to: 1. Develop a community-based public awareness programme intervention (Indigo-Local) that involves training community health workers (or similar cadres of workers) and mental health service users, alongside a media campaign, designed to: i) reduce stigma and discrimination, and ii) increase referrals of people with mental health conditions for assessment and treatment. 2. Implement and pilot the Indigo-Local intervention in a small feasibility (proof-of-principle) platform activity using a pre-post mixed-methods study design in seven sites in five LMICs, to evaluate procedures for a subsequent fully-powered study comparing the clinical and cost-effectiveness of Indigo-Local in: i) reducing stigma and discrimination amongst trained community health workers (or similar cadres of workers) and service users, and ii) increasing mental health service uptake. Setting The Indigo-Local feasibility pilot study is being carried out in seven sites in five LMICs [31,32], i.e.
two sites in China (Beijing and Guangzhou), two sites in India (Bengaluru and Delhi National Capital Region), and one site each in Ethiopia, Nepal and Tunisia. See Table 1 for further details about the study setting/location for each of the seven sites. The study sites have been selected based on accessibility, appropriateness and feasibility, and where possible entail a distinct region or neighbourhood. The Indigo-Local intervention is being implemented in community settings within the seven study sites, such as public spaces or community facilities. The training elements of the intervention are being conducted within health, community, private or work spaces as appropriate, depending on the local context. For ethical reasons, mental health services need to be in place in the settings in which the Indigo-Local intervention is implemented, given the likely stimulation of, and anticipated increase in, help-seeking. Participants A wide range of stakeholders will be involved in the Indigo-Local feasibility pilot study in each of the seven sites. This may include local key stakeholders such as health service leaders and/or members of service user organisations, community health workers or similar cadres of workers, (mental) health and/or site staff, and service users and their caregivers. Table 2 shows an overview of the study activities that each of the participant groups is involved in. All participants of the Indigo-Local feasibility pilot study will be at least 18 years of age and must freely consent to participate. We will review mental capacity to consent where a concern is raised, but will seek to respect the preference of the service user in all cases. For all groups, sampling further aims to achieve adequate sample variability with regard to the gender and age group of participants. Further details about participant eligibility are outlined below in the section on the key components of the Indigo-Local intervention. The following groups of people are excluded from the study, where relevant: health workers who do not have appropriate government credentialing for their cadre, or who have any known professionalism infractions or legal infractions revoking professional licensure. This is screened by the site research teams when approaching potential participants, and involves, for example, only recruiting participants from legitimate and licensed health care facilities, where staff are required to meet appropriate regulations and professional standards. We also exclude anybody who is at risk of a psychiatric emergency, who may not be able to provide consent, or who may not be able to perform the intervention and research activities. Recruitment All participants of the study will be sampled purposively by each of the site teams. Participants will be identified and approached either by the implementing partners in the country sites or by the local health service leaders or similar key stakeholders, to engage them to participate in the study. Where possible, contact regarding the study will be made by an impartial third-party individual (i.e. not the participants' clinician [for service users] or staff managers [for health workers], but instead, for example, a recruitment officer, research assistant, PhD student, or clinic administrator, depending on site resources). Sample sizes Sample sizes will vary between the seven study sites, depending on feasibility and the local resources available, as well as the size of the site; see Table 1 for further details.
We plan to recruit a minimum of ten community health workers (or similar cadres of workers, depending on the local context) and service users in total for training in each of the seven sites. If possible, a minimum of 15-20% of the total number of participants recruited for training should be service users in each site (with the remainder community health workers or similar cadres of workers). All trained community health workers and service users should ideally be involved in the quantitative evaluation of the Indigo-Local intervention, and a sub-set of them will take part in the qualitative evaluation per site (depending on site feasibility). In addition, two to three people will receive the 'Training of Trainers' training in each of the seven study sites, and between five and 20 participants will take part in the stakeholder group workshop per site (depending on local feasibility). Since the Indigo-Local study is being conducted on a proof-of-principle feasibility basis, sample sizes for the quantitative evaluation elements are not guided by power calculations, but by a minimum of 30-50 participants. The intention is not to formally test for pre-post differences in the sample; we will instead examine the effect size and direction of change, which could guide the sample size for a future full-scale study. Further evaluation data will be collected through qualitative means, for which the sample sizes outlined are appropriate. Indigo-Local intervention Principles guiding the Indigo-Local intervention See Box 1 for the principles guiding the Indigo-Local intervention as its 'essential ingredients', based on the Amaudo Mental Health Awareness Programme in South-East Nigeria [29] and other work since then [14,36,37]. Key components of the Indigo-Local intervention The key components of the Indigo-Local intervention are outlined below. Each of these key components will be carried out in each of the seven study sites. Training of Trainers The plan for future Indigo-Local interventions is for the 'Training of Trainers' (ToT) to be conducted as a five-day residential course, whereby master trainer(s) are trained to train people to conduct the community health worker / service user training. This ToT training should include a direct (e.g. a service user provides a 'lived testimony' in person) or indirect (e.g. showing a video of a person talking about their experiences) contact element with service user(s). However, since this is a feasibility study with small sample sizes and since the teams in each site are mental health stigma experts with prior knowledge of the topic, in this study an online ToT programme is being carried out, in which the Indigo-Local research leads train site teams to conduct the community health worker / service user training in around one day, through a series of online training videos and seminars. A minimum of two to three people should take part as recipients of the ToT training in each site. These participants are expected to have mental health knowledge, since the training they are being trained to deliver focuses on the methods for delivering messages about mental health conditions and mental health services, rather than teaching much about mental health conditions themselves. Recipients of the ToT are taught to train the community health workers (or similar cadres of workers) and service users to share mental-health-related messages in community forums (e.g.
community meetings), for example to give advice about the location and availability of mental health services (including opening times), referral methods, follow-up and monitoring of service users in the community, and the costs involved. The training also includes a brief overview of, and materials for understanding, effective implementation strategies for the intervention. Stakeholder group workshop A stakeholder group workshop lasting between half a day and one full day is being conducted in each of the study sites. In each site between five and 20 participants take part, including relevant local stakeholders such as health service leaders, members of service user organizations, local community groups or NGOs, community workers, health staff, service users, traditional healers, religious leaders etc. Local health service leaders are selected based on the following characteristics: they should hold a leadership role at their institution within health services in the site, ideally within mental health services (or have a good working knowledge of mental health issues). Any other local stakeholders should be people or groups who advocate for and represent the interests and will of mental health service users in the community, or who engage or support people seeking mental health care. The aims of the stakeholder group workshop are to: (a) bring all key stakeholder groups together to establish the project team, build relationships, and ensure buy-in from the beginning; (b) advise on the local context, training needs and the local media landscape; (c) review, refine and adapt the training materials and translate them into the local language (where needed/appropriate): for consistency and fidelity, the material templates have been developed centrally (based on the materials used in the Nigeria study, provided by Amaudo [29]), which allows for the sharing of evidence-based practice, though these materials are being adapted by each of the sites to cover local cultural beliefs and specific issues related to the area of intervention; (d) plan and define the media strategy and clarify its messages; (e) help in planning the training, including identifying which cadres of workers to train: it is crucial that this is done carefully to maximise the efficacy and retention of those trained, and it involves defining in advance what is expected of the trainees post-training (e.g. to hold community forums, to identify and refer patients in their community etc.); and (f) help in planning the implementation of the intervention, including refining details of the intervention to match local services, resources and needs, and deciding on the most appropriate way(s) to raise awareness in the community. The stakeholder group workshop builds on detailed formative work already completed in the study sites as part of the Indigo Partnership [31,32]. Training of community health workers and service users Community health workers (or similar cadres of workers) and service users are being trained over a minimum of two days (for resource-limited settings) up to an ideal maximum of five days in each of the sites. Training may be conducted over successive days or in separate blocks over a few weeks, depending on feasibility and the local context within sites. At least ten participants in total per site will be trained, within or near their local communities.
Community health workers or similar cadres of workers are selected for training based on the following characteristics: they should be well-respected members of the local community; should know their communities well and be intimately familiar with community cultural perspectives; should be familiar with community education and mobilisation; and should, if possible, be part of existing cadres of personnel, for instance Accredited Social Health Activists (ASHAs), female community health volunteers (FCHVs), government officers, faith-based group leaders etc. Careful choice of such workers was found to be crucial for good results, coordination and sustainability during the previous Amaudo programme in Nigeria [29]. Eligible mental health service users to be trained alongside community health workers (or similar cadres of workers) can include any person seeking care from and using a mental health service. We expect to involve people with a range of diagnoses, from common mental illness (depression, anxiety) to more severe mental illness (bipolar disorder, psychosis) or harmful substance use. The service users included as recipients of the training should be able and willing to discuss, and feel safe discussing, their own experience of living with a mental health condition as well as their own mental health service use. Ideally this should be somebody from the local community, though service users from elsewhere can be involved if necessary (recognising that, for some, speaking in their own community may pose greater challenges or risks). In sites where this is deemed to be appropriate and beneficial, service users' caregivers may also be involved in the training. The training is facilitated within or near the local communities by the recipients of the ToT, who train the community health workers (or similar cadres of workers) and service users (and possibly their caregivers, where appropriate). Community health workers and service users should ideally be trained together to reinforce the social contact element of the training (in that case, both groups will likely need to be briefed before and debriefed after the training), but if this is considered not to be possible or not good practice in a site (e.g. because of power dynamics, social hierarchies etc.), the two groups can also be trained separately. In addition, sites identify service users from the community (or who at least share the same language/culture) who could contribute to and co-facilitate the training by providing a 'lived testimony'. If such direct in-person contact is not possible during the training, the social contact element can also be delivered through indirect contact, for example via video or online materials developed previously (e.g. Time to Change Global [36-38] or other locally relevant materials). The training content covers mental health and stigma; awareness-raising, i.e. how to spread messages about mental health (services) in the selected community; and how to conduct outreach and referrals (for which the pathways will be contextualised by sites). See Table 3 for further details on the training content. Sites have the flexibility to culturally adapt it and complement it with contextually relevant information from other sources. Implementation of intervention The trained community health workers (or similar cadres of workers) and service users then implement an intervention (i.e.
locally contextualised awareness-raising activities/engagement) in the local community within each of the sites. This can be embedded within their usual role and community engagement activities. The exact awareness-raising activities vary across sites depending on the local context, but may include community contact activities, speaking to community groups (e.g. at faith locations, women's groups, youth groups etc.), or speaking at events or locations such as markets. Supervision meetings / booster trainings Supervision meetings for the trained community health workers (or similar cadres of workers) and service users will take place every two to three months, with brief booster trainings after three to six months and after six to 12 months (if feasible in sites). Process data, for example on participants' level of activity with regard to mental health awareness-raising, can be collected as part of these sessions. Ideally these supervision meetings and booster trainings will be conducted by the same people who conducted the initial training. Local awareness-raising media campaign A media campaign is being conducted over a minimum one-month period (ideally longer), starting at the same time as the training of the community health workers and service users. The format and messages of the media campaign depend on what is feasible and appropriate within each of the sites, but may include posters, flyers, newspaper articles, social media (WhatsApp, Facebook, Instagram, Twitter etc.), and announcements or jingles on local radio or television. At least two different media outlets should be used in each site; see Table 1 for further details for each of the study sites. The media campaign is being developed by the local site teams according to the local context. The content of the campaign is framed and phrased so as to help increase public knowledge and improve attitudes and awareness around mental health conditions, and to inform the community about the availability of mental health services. The messages are linked to services and to the content of the training activities (e.g. myth-busting, information about available services etc.). Evaluation of Indigo-Local intervention The evaluation of the Indigo-Local intervention will be conducted as a feasibility (proof-of-principle) pilot study using a mixed-methods design. This will involve quantitative evaluation of stigma outcomes; quantitative evaluation of mental health service utilization rates (where feasible in sites); a qualitative evaluation; a process evaluation; an implementation evaluation; and an evaluation of implementation costs. These aspects are each described further below. An overview of these evaluations, along with the time points for their assessment, is provided in Table 4 (adapted from the SPIRIT flowchart; a populated SPIRIT checklist is provided as an additional file [40][41][42]). Quantitative evaluation of stigma outcomes This involves pre- vs. post-assessment of quantitative scales measuring stigma and discrimination (in terms of knowledge, attitudes and (intended/expected) behaviour) amongst the community health workers (or similar cadres of workers) and service users who receive the training, using the following quantitative questionnaires: • Changes in knowledge about mental health conditions: The 'Mental Health Knowledge Schedule' (MAKS) [43] will be completed by the trained community health workers and service users.
The MAKS has 12 items, each scored on a five-point Likert scale, with higher scores indicating higher levels of knowledge. • Changes in (intended/experienced) behaviour: 1. The 'Reported and Intended Behaviour Scale' (RIBS) [44] will be used to assess changes in intended behaviour among the trained community health workers. The RIBS contains eight items across two sub-scales, rated either as a 'yes/no' response or on a Likert scale, with a higher total score indicating greater willingness to interact with a person with lived experience of a mental health condition. 2. The shortened version of the 'Discrimination and Stigma Scale' (DISCUS) [45] will be used to assess changes in mental health service users' experience of stigma and discrimination. The DISCUS has 11 items, rated on a four-point Likert scale, with higher scores indicating higher levels of discrimination. Service users who take part in the Indigo-Local training component will complete the DISCUS. • Stress: The shortened 2-item version of the Stigma Stress Scale [46], for completion by the service users who take part in the training. Higher scores indicate higher levels of stress due to stigma, with total scores ranging between -6 and 6. • Changes in attitudes towards people with mental health conditions: The Social Distance Scale (SDS) [47,48], for completion by the trained community health workers (or similar cadres of workers) and service users. The SDS has 12 items, each rated on a six-point Likert scale, with higher scores indicating greater social distance. This scale is optional rather than obligatory for sites. All scales have been validated and used in earlier studies [43][44][45][46][47][48]. They have already been adapted and translated locally by the site teams as part of the formative work within the Indigo Partnership [31,32]. All scales will be completed at several time points (see Table 4). As a minimum, these data will be collected immediately before (Time 1) and after (Time 2) the community health worker / service user training. If feasible in sites, at least one further follow-up point will be included, ideally at three or six months (Time 3). Further follow-up assessment time points (e.g. at the time of the booster training sessions at six or 12 months) are optional, depending on feasibility within sites (Time 4). Quantitative evaluation of mental health service utilization rates Where feasible in sites, this will be conducted to test the effect of the Indigo-Local intervention on mental health service utilization rates. In sites where this is feasible and appropriate, routinely collected quantitative data will be used to assess (at site level) the following (or similar/related/proxy) outcomes: the total number of 'new referrals' to mental health services by the community health workers who participated in the training (e.g. by comparison with the year before the intervention); the total uptake of mental health services, including the total number of service users seen by mental health services (and % change) and new referrals to mental health services (and % change); and contact coverage (defined as service utilization taken from the programme records divided by the total population in need of services taken from prevalence surveys of the disorder), where feasible, i.e. where adequate data are available in the scientific literature for the site about the number of people who require mental health services (to act as the denominator of contact coverage) [49].
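A minimal sketch of the contact-coverage calculation defined above is given below; the utilisation count, prevalence and catchment population are illustrative assumptions, not site data.

```python
# Contact coverage as defined in the protocol:
# service utilisation / estimated population in need.
# All numbers below are illustrative assumptions.
def contact_coverage(users_seen: int, prevalence: float,
                     catchment_population: int) -> float:
    """Proportion of the population in need reached by services."""
    population_in_need = prevalence * catchment_population
    return users_seen / population_in_need

# e.g. 120 service users seen, an assumed 4% prevalence of the target
# condition, and a catchment of 50,000 people -> 6% contact coverage
print(f"{contact_coverage(120, 0.04, 50_000):.1%}")
```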
If feasible, routine data should be collected (retrospectively) on a monthly basis for one year before the Indigo-Local intervention is implemented, and then on a monthly basis for a minimum of one year after the intervention is implemented (to assess the long-term impact of the intervention and also the impact of the supervision meetings/booster trainings). Where feasible, data will be collected on previous referrals of patients, as well as on referral pathways / how referrals are made, for example referral by community mental health workers, self-referral following the media campaign etc.

Qualitative evaluation
A qualitative evaluation will be conducted to complement the quantitative findings, i.e. to obtain in-depth qualitative data from community health workers and service users on the potential effectiveness of the Indigo-Local intervention in terms of stigma (knowledge, attitudes, behaviour) reduction, mental health service utilization rates (including referral rates), and the impact of the intervention amongst participants who received the training to deliver the intervention. The following will be explored qualitatively: (1) ways to improve the training; (2) changes in stigma, including possible explanations for changes in the quantitative outcomes/lack thereof, based on the directions of change observed; (3) information around possible changes in mental health service utilization rates; (4) other outcomes not covered by the quantitative measures, including any possible negative, unintended consequences. This will be done through focus groups and/or semi-structured interviews, ideally immediately after the training (i.e. Time 2) and/or at the end of the intervention period (i.e. Time 5); the data collection approach will be selected based on feasibility and appropriateness in each study site.

Process evaluation
In addition, a process evaluation will be conducted at site-level, to record the exact implementation details of the Indigo-Local intervention in each of the sites. For this, process indicators (e.g. information on how many times the community health workers / service users are involved in awareness-raising activities and the types of activities, attendance details of training etc.) will be collected using a specially-developed Excel file.

Implementation evaluation
Implementation of the Indigo-Local intervention will be evaluated at site-level with members of the seven site research teams. Semi-structured interviews will be carried out by the Indigo Partnership project coordination team with the research teams in each of the implementing sites at a minimum of one time point post-intervention. These interviews will collect information on the site teams' implementation experiences and perceptions of the facilitators and barriers to implementation. These interviews will be framed around an established implementation strategy framework, the updated Consolidated Framework for Implementation Research (CFIR) [50,51]. Data for this will be analysed descriptively. Patterns in these data will be explored across and within sites, based on data on what types of implementation strategies were used and how many strategies were reported to be used. Data on implementation facilitators/barriers will be synthesised narratively, guided by content analysis and thematic analysis principles.
Evaluation of implementation costs
A cost analysis will be undertaken that will estimate the quantity of resource inputs and costs associated with intervention implementation activities across the seven study sites. This will draw on data supplied by local site leads, who will complete a costing pro-forma designed specifically for the Indigo pilot evaluation. This asks for quantitative information on staff time inputs, local pay rates and financial expenditures recorded against key implementation activities. The design of the pro-forma has been informed by an activity-based costing approach to assessing the cost implications of implementing health programmes, as outlined by Cidav et al [52]. Estimates of total implementation costs and costs related to broad categories of implementation activity will be presented by study site. Costs will be presented both in local currency values and in US dollar purchasing power parity (PPP) adjusted values, using appropriate PPP conversion factors published by the World Bank [53].

Data management
REDCap [54] will be used for entry of quantitative data, with response fields for all items (including respondents' sociodemographic characteristics, site characteristics, and outcome variables). In each site a member of the local research team is identified who is responsible for local data collection and data entry. The coordinating team at King's College London will then export data from REDCap for data checking and cleaning.

Data analyses
The suitability of the measures will be examined, for instance for their distribution, and ceiling and floor effects. This is in line with the aims of this being a feasibility (proof-of-principle) pilot study. For the quantitative data analyses, descriptive summaries such as total scores and simple counts will be computed, which will then be compared at the different time points, as well as the % change before and after the intervention is implemented (using chi-square tests). Primary and secondary outcomes will be analysed using mixed effects linear, logistic or Poisson regression models (depending on the data type), accounting for clustering due to repeated observations at three time points (Times 1, 2 and 3) in each site. Regression results will be pooled across countries using random effects meta-analysis, with heterogeneity of regression coefficients being summarised using the I² statistic [55]. All data analyses will be conducted with the use of STATA 17. For the qualitative analyses, a focused framework analysis based on the themes in the topic guide will be carried out; some thematic analysis principles will also be applied, with further bottom-up codes generated by sites where applicable and with site teams identifying select key illustrative quotes to enrich the data. The focus groups and/or semi-structured interviews will be audio-recorded and transcribed verbatim before being translated into English (where appropriate) and then analysed.
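As one concrete illustration of the pooling step described above, the following sketch implements a basic DerSimonian-Laird random-effects meta-analysis of per-country regression coefficients, together with the I² heterogeneity statistic. It is a simplified stand-in for the analysis named in the protocol (which will actually be run in STATA 17); all coefficient and standard error values below are hypothetical.

```python
import math

def random_effects_pool(coefs, ses):
    """DerSimonian-Laird random-effects pooling of regression coefficients.

    coefs: per-country coefficient estimates; ses: their standard errors.
    Returns (pooled estimate, pooled SE, I^2 as a percentage).
    """
    k = len(coefs)
    w = [1.0 / se**2 for se in ses]                            # fixed-effect weights
    fixed = sum(wi * b for wi, b in zip(w, coefs)) / sum(w)
    q = sum(wi * (b - fixed) ** 2 for wi, b in zip(w, coefs))  # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                         # between-study variance
    w_star = [1.0 / (se**2 + tau2) for se in ses]              # random-effects weights
    pooled = sum(wi * b for wi, b in zip(w_star, coefs)) / sum(w_star)
    pooled_se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0   # heterogeneity
    return pooled, pooled_se, i2

# Hypothetical pre-post coefficients from seven sites.
coefs = [0.42, 0.31, 0.55, 0.18, 0.47, 0.29, 0.38]
ses = [0.15, 0.12, 0.20, 0.14, 0.18, 0.11, 0.16]
est, se, i2 = random_effects_pool(coefs, ses)
print(f"pooled = {est:.3f} (SE {se:.3f}), I^2 = {i2:.1f}%")
```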
Conclusions
Indigo-Local is a multi-site feasibility (proof-of-platform) pilot study, aiming to develop, implement and evaluate a community-based awareness-raising intervention designed to reduce mental-health-related stigma and improve access to mental health services in seven sites in five LMICs in Africa and Asia. The intervention includes several key components: a stakeholder group workshop; a stepped training programme (using a ToT approach) of community health workers (or similar cadres of workers) and service users that includes repeated supervision and booster sessions; awareness-raising activities in the community; and a media campaign. The outcome of this study will therefore be contextually adapted, evidence-based interventions to reduce mental health-related stigma in local communities to achieve improved access to mental health care. We will have replicable models of how to involve people with lived experience as an integral part of the intervention and will produce knowledge of how intervention content and implementation strategies vary across settings. The interventions and their delivery will have been refined to be acceptable, feasible and ready for larger-scale implementation and evaluation. This study thereby has the potential to make an important contribution to the evidence base on what works to reduce mental-health-related stigma in local communities in LMICs.
Calmodulin-dependent Protein Kinase IV Regulates Hematopoietic Stem Cell Maintenance

The hematopoietic stem cell (HSC) gives rise to all mature, terminally differentiated cells of the blood. Here we show that calmodulin-dependent protein kinase IV (CaMKIV) is present in c-Kit+ ScaI+ Lin−/low hematopoietic progenitor cells (KLS cells) and that its absence results in hematopoietic failure, characterized by a diminished KLS cell population and by an inability of these cells to reconstitute blood cells upon serial transplantation. KLS cell failure in the absence of CaMKIV is correlated with increased apoptosis and proliferation of these cells in vivo and in vitro. In turn, these cell biological defects are correlated with decreases in CREB serine 133 phosphorylation as well as in CREB-binding protein (CBP) and Bcl-2 levels. Re-expression of CaMKIV in Camk4−/− KLS cells results in the rescue of the proliferation defects in vitro as well as in the restoration of CBP and Bcl-2 to wild type levels. These studies show that CaMKIV is a regulator of HSC homeostasis.

To evaluate the physiological roles of CaMKIV, two independent C57BL/6J × 129Sv lines of Camk4−/− mice were generated using different targeting strategies (7,13). Both lines of Camk4−/− mice revealed deficits in brain (4,13,14) and T cell function (1). Furthermore, targeted expression of a kinase-inactive CaMKIV in mice results in defective thymocyte survival and activation (2). Although the precise cascade of events in which CaMKIV participates remains enigmatic, neurons (13,14) and memory T cells (1) null for Camk4 show a marked decrease in CREB Ser133 phosphorylation (phospho-CREB), indicating that CREB-mediated transcription may contribute to the observed phenotypes. In addition, CaMKIV has been shown to phosphorylate CBP at Ser301, thereby enhancing CREB-CBP-mediated transcription (10). Such findings have led to the hypothesis that a CaMK cascade, of which CaMKIV is a component, is a part of the pathway by which Ca2+ regulates transcription mediated by CREB and CBP (15). In this report, we investigated whether CaMKIV is involved in early hematopoietic development and found that the absence of CaMKIV results in a reduction in the number of c-Kit+ ScaI+ Lin−/low cells (KLS cells), a cell population that includes long-term and short-term hematopoietic stem cells as well as other multipotent progenitor cells (16). Specifically, we found that the Camk4 gene is expressed in KLS cells and that CaMKIV is required for KLS cells to repopulate the bone marrow in transplantation assays. Furthermore, Camk4−/− KLS cells display enhanced proliferation as well as increased apoptosis, in vivo and in vitro, compared with wild type (WT) cells and have decreased levels of phospho-CREB (pCREB), CBP, Bcl-2 mRNA and Bcl-2 protein. Re-expression of CaMKIV in Camk4−/− KLS cells restores Bcl-2 and CBP levels and rescues the proliferation defects. Thus, our data reveal a novel role for CaMKIV in the maintenance of hematopoietic homeostasis and suggest that this role involves suppression of inappropriate KLS cell proliferation.

EXPERIMENTAL PROCEDURES
Mouse Strains-The Camk4−/− mouse was generated using a CaMKIV target vector construct that deletes the first two exons of CaMKIV and the two known transcription initiation sites (7). Genotyping was performed as described previously (7).
In the mixed genetic background of C57BL/6J × 129Sv, ~50% of the Camk4−/− pups showed growth retardation and died within the first 3 weeks of postnatal life. The remaining mice grew to adulthood but were infertile. Because these severe defects did not occur in the other line of Camk4−/− mice generated by Ho et al. (13), we initiated a breeding program for nine generations to stabilize the genetic background. This resulted in the loss of the fertility and premature death phenotypes, but the brain and T cell phenotypes were maintained. All mice used in the present study were fertile, grossly asymptomatic and lived a normal life span. All animals were housed and maintained in the Levine Science Research Center Animal Facility located at Duke University under a 12-h light, 12-h dark cycle. Food and water were provided ad libitum, and all care was given in compliance with National Institutes of Health and institutional guidelines on the use of laboratory and experimental animals.

Bone Marrow Histology-Femurs isolated from 8-week-old mice were fixed in fresh 4% paraformaldehyde for 48 h, washed in 70% ethanol, and decalcified for 72 h. Glycol methacrylate infiltration and embedding were performed using the JB-4 embedding kit (Polysciences, Warrington, PA). Two-μm sections were prepared and stained with hematoxylin and eosin.

White Blood Cell Differentials-Blood was extracted for analysis by cardiac puncture. Blood cell counts were performed using automated analysis on a System 9000 Automated Cell Counter (Serono-Baker Diagnostics, Allentown, PA).

Colony Forming Assays-Colony forming assays were performed by plating 1 × 10^4 unfractionated bone marrow cells in quadruplicate on Methocult methylcellulose medium (Stem Cell Technologies, Vancouver, British Columbia, Canada) and grown at 37°C in 5% CO2. The evaluation of colony forming units was performed after 2 weeks in culture as per the manufacturer's protocol.

Isolation of Hematopoietic Stem Cells-Isolation of HSCs from bone marrow cells was performed using a FACSVantage (BD Biosciences) as described (16). In particular, HSCs were sorted based on positive expression of c-Kit and Sca-1 (c-Kit+, Sca-1+) and low/negative expression of the lineage markers (Lin−/low).

Stem Cell Transplantation-Bone marrow transplants were performed using the congenic strain B6.SJL-Ptprca Pep3b/BoyJ (CD45.1, Jackson ImmunoResearch Laboratories, West Grove, PA) as the recipient. Recipient mice, older than 10 weeks of age, were irradiated by exposing them to a single dose of 9.5 Gy from a 137Cs source. The following day, c-Kit+, Sca-1+, Lin−/low hematopoietic stem cells (KLS cells) were isolated from 3-week-old wild type and Camk4−/− (both CD45.2) donor mice. About 1000 sorted KLS cells from one donor were injected into the retro-orbital sinus of five to six irradiated recipients, and the experiment was repeated with at least six WT and six Camk4−/− donors. The following day, bone marrow cells from three recipient mice per donor cell genotype were isolated and analyzed for the presence of the CD45.2 marker to ensure "proper homing" of the donor KLS cells. To measure repopulation, peripheral blood was obtained from the retro-orbital vein every 3 weeks (17). The blood cells were labeled with CD45.1 FITC, CD45.2 PE, and either Mac-1 and Gr-1 for myeloid lineage or CD3 and B220 for lymphoid lineage analyses (16,18).

Serial Bone Marrow Transplants-The primary recipient mice were sacrificed at 3.5 months.
New CD45.1 recipient mice (n = 5/group) were irradiated (9.5 Gy in a single dose) and transplanted with 4 × 10^6 mononuclear bone marrow cells from sacrificed, individual primary recipient mice by injection via the retro-orbital sinus. Bone marrow cells from each primary recipient were injected into five secondary recipients. Peripheral blood from secondary recipients was analyzed by flow cytometry every 3 weeks (17).

Bromodeoxyuridine (BrdUrd) Analysis-For in vivo BrdUrd labeling assays, WT and Camk4−/− mice were fed with 0.5 mg/ml of BrdUrd (Sigma) dissolved in drinking water for 4 days. KLS cells isolated from these mice were fixed in 70% ethanol at −20°C, permeabilized, stained with BrdUrd-PE antibody according to the manufacturer's protocol (Pharmingen), and analyzed by fluorescence-activated cell sorting (FACS) for the presence of BrdUrd-PE-positive cells. For Ki-67 labeling, approximately 10,000 freshly sorted KLS cells were fixed in 80% ethanol for 12 h at −20°C, permeabilized with 0.1% Triton X-100 (Sigma), and stained with FITC-labeled Ki-67 antibody (Pharmingen) for 30 min. The cells were then washed and subjected to FACS analysis for the presence of Ki-67-FITC-positive cells.

AnnexinV Apoptosis Assay-Approximately 10,000 freshly isolated KLS cells were incubated with AnnexinV-FITC and propidium iodide according to the manufacturer's instructions (Pharmingen). Stained cells were analyzed by flow cytometry within 30 min.

Immunocytochemistry-Freshly isolated KLS cells were collected onto slides by cytospin, either immediately after isolation or after stimulation with 2 μM ionomycin or 3 μM forskolin (both from EMD Biosciences, La Jolla, CA) for 10 min at 37°C. The cells were fixed in 4% paraformaldehyde for 30 min and permeabilized using 0.5% Nonidet P-40 for 10 min. Following overnight incubation at 4°C with either anti-CREB NT (rabbit polyclonal, Upstate, Charlottesville, VA), anti-phospho-CREB (against Ser133, rabbit polyclonal, Upstate), anti-CBP (A-22, rabbit polyclonal, Santa Cruz Biotechnology, Santa Cruz, CA), anti-Bcl-2 (mouse monoclonal, Pharmingen), or anti-p21cip1 (C-19G, goat polyclonal, Santa Cruz Biotechnology), the slides were incubated with the appropriate fluorescent secondary antibody. Digital confocal images were taken of all samples with the same settings and analyzed using Metamorph® software to quantify the intensity of the fluorescence; n > 50 for each condition.

Real-time RT-PCR Analysis-Total RNA was prepared from ~10,000 freshly isolated HSCs using the RNAqueous-Micro kit (Ambion, Austin, TX), according to the manufacturer's instructions. First strand cDNA was prepared using SuperScript III reverse transcriptase (Invitrogen), according to the manufacturer's directions. Quantitative real-time PCR-based gene expression analysis was performed using IQ SYBR Green Supermix with the respective primers, and the reactions were performed using a LightCycler (Roche Applied Science). The sequences of all the primers used in this study are available upon request.

Murine Stem Cell Virus (MSCV)-CaMKIV Add-back Experiments-CaMKIV-WT or CaMKIV-K71M cDNA was cloned into MSCV-IRES-GFP vectors, and high titer control and recombinant viruses were made by pseudotyping with vesicular stomatitis virus glycoprotein. Approximately 30,000 WT or Camk4−/− KLS cells were allowed to proliferate overnight at 37°C in X-vivo-15 (Cambrex, Walkersville, MD) media supplemented with 2% fetal bovine serum, 30 ng/ml stem cell factor, 30 ng/ml Flt-3 ligand, and 50 μM 2-mercaptoethanol.
The cells were infected with the appropriate MSCV virus at an MOI of 5:1 and were harvested 3 days after infection. Expression of CaMKIV-WT or CaMKIV-K71M was confirmed by RT-PCR using specific primers against CaMKIV. For in vitro cell proliferation assays, GFP+ MSCV-infected KLS cells were FACS sorted at 15 cells per well into Terasaki plates. The cells were grown in X-vivo-15 (Cambrex, Walkersville, MD) media supplemented with 5% FBS, 30 ng/ml stem cell factor, 30 ng/ml Flt-3 ligand, and 50 μM 2-mercaptoethanol for 6 days. The proliferation rate of the KLS cells was estimated by counting the number of cells in each well at the indicated time points. For immunocytochemistry, GFP+ virus-infected KLS cells were cytospun onto slides, fixed, and stained with the respective antibodies as mentioned before. Protocols are available upon request.

RESULTS
Loss of CaMKIV Results in Diminished Bone Marrow Cellularity and Number of HSCs-Initial histological analysis of bone sections from adult Camk4−/− mice revealed a decrease in bone marrow cellularity (Fig. 1, A and B), raising the possibility that CaMKIV could play a role in hematopoiesis. To explore this idea, we analyzed peripheral blood samples drawn from Camk4−/− mice and found a 44% decrease in total white blood cells (p < 0.005), a 43% decrease in cells of the myeloid lineage (neutrophils, monocytes, and eosinophils; p < 0.01), and a 53% decrease in lymphoid cells (T and B cells; p < 0.002) compared with WT (Fig. 1C). To evaluate whether hematopoietic progenitor activity is compromised in the absence of CaMKIV, we performed colony forming assays on bone marrow cells isolated from WT and Camk4−/− mice. Bone marrow cells from Camk4−/− mice formed fewer granulocyte-monocyte and pre-B cell colonies compared with the WT (data not shown), suggesting that CaMKIV might regulate progenitor cell development. Long term and short term HSCs and the multipotent progenitors primarily reside in the bone marrow, where they differentiate into committed progenitors in the myeloid and lymphoid lineages, which further mature before release into the peripheral blood. Thus, a reduction in the number of peripheral blood cells could result either from a primary defect in HSC self-renewal/maintenance or in differentiation of these cells. To distinguish between these two possibilities, we first examined the frequency of KLS cells in WT and Camk4−/− mice by FACS. Fig. 1D shows that there is a 2-fold reduction in the number of KLS cells (12,000 per mouse on average) in Camk4−/− mice compared with WT (26,000 per mouse on average). These results raised the possibility that CaMKIV could participate in the maintenance of the KLS cell population in the bone marrow. To determine whether the absence of CaMKIV resulted in a higher number of KLS cells dying by apoptosis (19), freshly isolated KLS cells from Camk4−/− and WT mice were stained for AnnexinV and propidium iodide (PI). While the AnnexinV-positive:PI-negative population only includes intact cells that are in the early stages of apoptosis, the AnnexinV-positive:PI-positive population includes cells that are necrotic or at advanced stages of apoptosis as well as cells killed or damaged during isolation. FACS analysis revealed three times more AnnexinV-positive:PI-negative (early apoptotic) cells in the freshly isolated Camk4−/− KLS cell population compared with WT (Fig. 1E).
Thus, the lower number of KLS cells in Camk4−/− mice and the predisposition of these cells to apoptosis support the idea that CaMKIV has a role in maintaining KLS cell homeostasis by regulating their survival.

Camk4−/− KLS Cells Are Compromised in Their Long Term Reconstitution Ability following Bone Marrow Transplantation-To investigate whether the role for CaMKIV in KLS cells is cell autonomous and whether CaMKIV deficiency compromises long term HSC function, we performed in vivo bone marrow reconstitution assays by injecting irradiated CD45.1 recipient mice with ~1000 KLS cells isolated from CD45.2 WT or Camk4−/− donor mice (20-22). We chose to transplant KLS cells rather than whole bone marrow, as the latter could skew the results due to the lower frequency of KLS cells present in Camk4−/− mice. We first analyzed bone marrow cells from recipient mice 19 h after transplantation and confirmed that donor-derived KLS cells from WT and Camk4−/− mice equivalently "home" to the bone marrow of the host mice (Fig. 2Ai). Next, recipient reconstitution was determined by FACS analysis of peripheral blood samples drawn every 3 weeks (Fig. 2, Aii and B). Mice transplanted with WT cells displayed normal reconstitution patterns at 3, 6, 9, and 12 weeks after transplant (16). In contrast, KLS cells from Camk4−/− mice led to significantly enhanced peripheral blood reconstitution between 3 and 6 weeks after transplant (Fig. 2, Aii and B). However, by 9 weeks the percentage of donor-derived CD45.2 Camk4−/− cells was markedly reduced in the peripheral blood of recipient mice relative to WT cells (Fig. 2, Aii and B). Next, to determine whether the Camk4−/− donor KLS cells that remained in the bone marrow at 12 weeks post-transplantation were still functional, we performed secondary bone marrow transplantation assays. Approximately 4 × 10^6 total bone marrow cells (containing 6000 CD45.2 WT-derived KLS cells or 200 CD45.2 Camk4−/−-derived KLS cells) from recipient mice that had been transplanted 12 weeks previously with WT or Camk4−/− KLS cells were serially transplanted into new sub-lethally irradiated recipients. FACS analysis of blood samples drawn every 3 weeks showed no significant recipient reconstitution in mice transplanted with Camk4−/− cells, whereas reconstitution of WT cells occurred normally (Fig. 2C). Previous studies have shown that even as few as 1-10 viable long term HSCs are capable of reconstituting irradiated recipient bone marrow upon transplantation (18,23). Cumulatively, these transplant data suggest that in contrast to the behavior of WT KLS cells, Camk4−/− KLS cells might inappropriately undergo a burst of engraftment followed by premature exhaustion, resulting in a loss of long term repopulating ability.

Figure 2 legend (fragment): ...(n = 4) and Camk4−/− (n = 8) donor KLS cells at 3, 6, 9, and 12 weeks after transplantation by all the recipient mice in a representative reconstitution experiment (n = 2). Each line represents an individual irradiated recipient mouse. C, bone marrow cells from each primary recipient were transplanted into five irradiated secondary recipients, and the percentage of cells from donor mice reconstituting at 3, 6, 9, and 12 weeks after transplantation was analyzed. Each line represents the mean ± S.E. for the indicated genotype (n = 6).

Absence of CaMKIV Results in Enhanced in Vivo Proliferation of KLS Cells-To examine KLS cell proliferation in vivo, we performed in vivo labeling of WT and Camk4−/− mice with BrdUrd for 4 days. The KLS cells were then isolated, fixed, stained with anti-BrdUrd antibody, and analyzed by FACS.
While only 9% of the WT KLS cells were positive for BrdUrd incorporation, about 30% of Camk4−/− KLS cells stained positive for BrdUrd (Fig. 3A), indicating that a higher number of mutant cells are proliferating. We also confirmed that Camk4−/− KLS cells have a greater proliferation index by Ki-67 labeling of freshly isolated KLS cells (data not shown). These results indicate that CaMKIV might act to suppress excessive proliferation by KLS cells in the bone marrow, thereby regulating HSC homeostasis. If the role of CaMKIV in KLS cells is to suppress inappropriate cell proliferation, then re-expression of CaMKIV in Camk4−/− KLS cells should rescue their hyperproliferative phenotype. To test this idea we introduced either wild type CaMKIV (CaMKIV-WT) or a kinase-inactive CaMKIV (CaMKIV-K71M) into WT and Camk4−/− KLS cells using the MSCV-based vector that also encodes GFP under the control of an IRES downstream of the cloned gene. KLS cells infected with the viruses were sorted based on GFP expression, and the expression of CaMKIV-WT and CaMKIV-K71M mRNAs was confirmed by RT-PCR (Fig. 3B). Equal numbers of GFP-positive WT and Camk4−/− KLS cells were then plated on Terasaki plates in media supplemented with 5% fetal bovine serum, 30 ng/ml stem cell factor and 30 ng/ml Flt-3 ligand. Proliferation was followed by counting the number of cells at 2-day intervals. On days 2 and 4 of in vitro growth, Camk4−/− KLS cells infected with MSCV-control virus proliferated at a higher rate (2-fold higher) than WT KLS cells (Fig. 3C). In addition, Camk4−/− cells are exhausted to a greater extent than the WT cells between days 4 and 6. Remarkably, re-expression of CaMKIV rescues both the hyperproliferation as well as the rapid exhaustion phenotypes characteristic of Camk4−/− KLS cells, and these cells behave in a similar fashion to WT cells infected with the control virus (Fig. 3C). CaMKIV activity is required for the rescue of the proliferation defects in the Camk4−/− KLS cells, as the kinase-inactive mutant (K71M) was not able to alter proliferation of the mutant cells (Fig. 3C). In fact, not only did CaMKIV-K71M fail to reduce proliferation to WT levels, but hyperproliferation was exacerbated in the mutant cells, such that at 4 days of in vitro growth there were more Camk4−/− cells expressing CaMKIV-K71M than Camk4−/− cells infected with the control virus. These data indicate that CaMKIV can regulate HSC proliferation in vitro and suggest that the enhanced engraftment observed in vivo is due to the overproliferation of Camk4−/− KLS cells. Cumulatively these findings indicate that CaMKIV activity is important for maintaining HSCs in a relatively quiescent state.

Lower Levels of Phospho-CREB, CBP, and Bcl-2 mRNA and Protein Levels in Camk4−/− KLS Cells-What is the signaling pathway by which CaMKIV functions to maintain hematopoietic homeostasis? CaMKIV can phosphorylate CREB on Ser133 (pCREB) in response to transient increases in intracellular calcium (24). Since decreased levels of pCREB have been found in neurons (13,14) and memory T cells (1) of Camk4−/− mice, we examined whether decreased Ca2+-induced pCREB was also observed in Camk4−/− KLS cells. As shown in Fig. 4, Ai and Aii, pCREB was reduced 2.5-fold in KLS cells deficient in CaMKIV as determined by immunofluorescence. In addition, whereas pCREB in WT cells was increased substantially following ionomycin treatment, very little increase was noted in similarly treated Camk4−/− cells (supplemental Fig. 1).
As total CREB levels were similar in WT and Camk4−/− mice (Fig. 4, Ai and Aii), these results indicate that the Ca2+ signaling pathway leading to pCREB must be active in KLS cells in vivo and suggest that a defect in CaMKIV/pCREB-mediated transcription may compromise the functions of these cells. Phosphorylation of CREB on Ser133 is required to recruit the CREB-binding proteins CBP or p300 to transcription complexes, which is in turn required for transcriptional activation of CRE-containing promoters (1,9,25,26). In addition to phosphorylating CREB, CaMKIV has also been reported to phosphorylate CBP on Ser301 (10), which positively regulates its function as a transcriptional co-activator. Although antibodies specific to CBP-pSer301 that can be used in immunocytochemistry are unavailable, we did use a CBP polyclonal antibody to evaluate whether or not CBP levels might be altered in Camk4−/− KLS cells. Surprisingly, CBP is significantly reduced, by 2.4-fold, in the Camk4−/− KLS cells compared with WT cells (Fig. 4, Ai and Aii). These data raise the possibility that phosphorylation of CREB and/or CBP by CaMKIV might play a role in maintaining CBP levels in these cells and support the idea that a Ca2+-dependent CaMKIV/CREB/CBP signaling cascade is active in HSCs. If a CaMKIV signaling cascade functions through CREB and CBP to regulate transcription in HSCs, what target gene or genes might be activated to suppress proliferation as well as enhance survival of KLS stem cells? To explore possible mechanisms by which CaMKIV might regulate KLS proliferation and homeostasis, we compared the mRNA levels of several pro- and anti-apoptotic genes as well as the cyclin-dependent kinase inhibitor p21cip1 in WT and Camk4−/− KLS cells (Fig. 4C). The absence of p21cip1 has previously been shown to result in hematopoietic stem cell exhaustion upon serial bone marrow transplantation (17). Our results reveal that, among the 14 mRNAs evaluated, only the Bcl-2 mRNA is differentially expressed between WT and Camk4−/− KLS cells, and as shown in Fig. 4, B and C, this 1.9-fold decrease in the Camk4−/− KLS cells is statistically significant. Several studies have shown that transcription of the pro-survival gene Bcl-2 requires pCREB and can be stimulated by Ca2+ (27-29). Moreover, in addition to its role in cell survival, Bcl-2 has been reported to play a role in maintaining cellular quiescence (30,31). We also examined Bcl-2 protein levels in freshly isolated WT and Camk4−/− KLS cells by immunocytochemistry. Bcl-2 protein levels are 2.7-fold lower in Camk4−/− KLS cells compared with the WT cells (Fig. 4, Ai and Aii), while protein levels of the cyclin-dependent kinase inhibitor p21cip1 are similar in WT and Camk4−/− KLS cells (Fig. 4, Ai and Aii). Collectively, our data indicate that in freshly isolated KLS cells, there is a positive correlation between the presence of CaMKIV, phosphorylation of CREB, and levels of CBP, Bcl-2 mRNA, and Bcl-2 protein. Since freshly isolated Camk4−/− KLS cells show a 2.5-fold reduction in pCREB levels compared with WT cells, we wondered whether culturing these cells in the presence of stimuli that activate Ca2+-independent CREB kinases would increase pCREB in Camk4−/− cells. Consistent with this idea, we could increase pCREB to the same level in freshly isolated Camk4−/− and WT KLS cells following incubation with forskolin (supplemental Fig. 1),
demonstrating that the cAMP-dependent protein kinase pathway is intact and leads to CREB phosphorylation in both cell types.

Re-expression of CaMKIV Results in Restoration of WT Levels of CBP and Bcl-2 in Camk4−/− KLS Cells-We hypothesized that if the loss of CaMKIV in KLS cells is specifically responsible for decreased pCREB, CBP, and Bcl-2, then re-expression of CaMKIV in freshly isolated Camk4−/− KLS cells might reverse these defects by reconstituting the Ca2+-dependent signaling pathway. As shown in Fig. 5, A and B, respectively, Bcl-2 and CBP levels were 3.5- and 2-fold lower in Camk4−/− KLS cells infected with MSCV-control virus, compared with control virus-infected WT cells. Re-expression of CaMKIV-WT, but not kinase-inactive CaMKIV-K71M, quantitatively restores WT levels of CBP and Bcl-2 in the Camk4−/− KLS cells (Fig. 5, A and B). Interestingly, pCREB levels were only slightly reduced in Camk4−/− KLS cells infected with the control virus compared with WT cells, and introduction of either CaMKIV-WT or CaMKIV-K71M resulted in only a slight but nonsignificant increase in pCREB levels in these cells (Fig. 5, A and B). We suspect that normalization of CREB phosphorylation in both cell types is due to serum-induced activation of CREB kinases other than CaMKIV, as illustrated by the forskolin experiments above (supplemental Fig. 1). These results also show that pCREB may be necessary but is not sufficient to restore Bcl-2 gene expression in the absence of CaMKIV, and they support the idea that an important component of the action of CaMKIV is an effect on CBP (9,10). At any rate, when taken together, our data support a role for a Ca2+/CaMKIV/pCREB/CBP pathway in the regulation of Bcl-2 gene expression in KLS cells and strengthen our idea that this pathway may be important for promoting maintenance of the HSC pool in mouse bone marrow by preventing inappropriate proliferation of KLS cells.

DISCUSSION
Self-renewing hematopoietic stem cells replenish billions of mature myeloid and lymphoid cells in the blood on a daily basis, a process that is vital for sustaining life. Understanding how the molecular regulation of HSC self-renewal is achieved is crucial for the improvement of HSC-based transplantation therapies. We have uncovered a novel role for CaMKIV in the maintenance of HSC homeostasis. The CaMKIV gene is expressed in KLS cells (Fig. 3B, lanes 1 and 2), and its absence results in lower numbers of these cells in the bone marrow of null animals. We also find that the absence of CaMKIV results in increased proliferation of KLS cells. When Camk4−/− KLS cells are challenged with expansion signals in vivo (bone marrow transplantation) or in vitro (growth factor-containing medium), they undergo premature proliferation followed by exhaustion. At least in culture, these altered proliferation defects could be rescued by re-expression of CaMKIV in the Camk4−/− KLS cells, implying that, even acutely, this protein kinase serves as an inhibitor of inappropriate proliferation of the HSCs. The number of self-renewing HSCs in the bone marrow is regulated, at least in part, by elimination of excess HSCs generated through inappropriate proliferation by apoptosis (32). Since the absence of CaMKIV results in increased proliferation of HSCs, perhaps due to their inability to maintain quiescence, as well as in predisposition of the cycling cells to exhaustion via apoptosis, we suggest that CaMKIV plays a role in HSC homeostasis.
The Camk4−/− mice utilized in this study are asymptomatic and live a normal life span when housed in a clean environment. The difference in the number of KLS cells between WT and Camk4−/− mice is 2-fold under homeostatic conditions, which may not be sufficient to cause overt immunological abnormalities. However, when challenged with stress or expansion signals such as those presented by bone marrow transplantation, these differences became much more important (Fig. 2, A-C). Similar results, in which relatively small differences in KLS cell number under homeostasis become amplified upon challenge, have been observed in Bcl-2 transgenic, p21cip1−/− and p18ink4c−/− mice (17,32,33). Additionally, the zinc finger transcriptional repressor Gfi-1 was recently shown to be a regulator of HSC proliferation (34,35). Although HSCs from both Camk4−/− and Gfi-1−/− mice show higher proliferation, unlike the Camk4−/− mice, Gfi-1−/− mice have a higher number of KLS cells in their bone marrow under homeostatic conditions (34,35). However, similar to Camk4−/− KLS cells, Gfi-1−/− HSCs fail to reconstitute the bone marrow upon serial transplantation, due to their exhaustion in response to the expansion stimulus provided by transplantation (34). Freshly isolated Camk4−/− KLS cells have significantly lower levels of pCREB, CBP, Bcl-2 mRNA, and Bcl-2 protein compared with WT cells. CBP and Bcl-2 levels could be restored by re-expression of CaMKIV, whereas culturing cells in the presence of serum and growth factors increased pCREB levels in KLS cells whether or not CaMKIV was present. These results suggest that pCREB may be necessary but is not sufficient for maintaining CBP and Bcl-2 in the absence of CaMKIV. pCREB is known to drive transcription from the CRE of the Bcl-2 gene promoter (28,29), and CBP is the most important transcriptional co-activator for CREB. Thus, while it is likely that CaMKIV may also regulate other transcription factors present in KLS cells, our existing evidence strongly suggests a role for a Ca2+/CaMKIV/pCREB/CBP pathway in the regulation of Bcl-2 expression in KLS cells. Consistent with the idea of a role for a CaMKIV/CBP/Bcl-2 pathway in the regulation of the HSC population, targeted overexpression of Bcl-2 in HSCs in vivo results in increased number, quiescence, and self-renewal of HSCs (32), precisely the opposite phenotypes that we report herein to arise due to the absence of CaMKIV. Thus, in the Bcl-2 transgenic mouse, although a higher percentage of HSCs is quiescent, the steady-state number of HSCs is actually increased as a result of a failure of these cells to be cleared by apoptosis (due to enhanced Bcl-2 expression) (32). Mice haplo-insufficient for CBP also exhibit HSC exhaustion (36,37). Although silencing of the CBP gene results in early embryonic lethality, mice heterozygous for the CBP null mutation survive and display multiple severe phenotypes (37). Interestingly, CBP haplo-insufficient mice and CaMKIV null mice share phenotypic consequences in the brain and hematopoietic systems, although these two types of genetically altered mice do not phenocopy each other (36-38). One example of a difference between the two mouse lines is that the CBP+/− mice show an age-dependent decrease in bone marrow cellularity and a decrease in numbers of KLS hematopoietic stem cells as well as numbers of myeloid and B cell colony forming progenitors in the bone marrow (36).
Second, HSCs from CBP+/− mice show no reconstitution only after tertiary bone marrow transplantation (37). Finally, unlike the case in Camk4−/− mice, the CBP+/− hematological defects only appear as the mice become older (36,37). Regardless, both CaMKIV and a full complement of CBP seem to be required for maintaining HSC pools in the bone marrow. Although our data indicate that CaMKIV may regulate the levels of CBP, it is not clear how this is achieved. Preliminary studies in cerebellar granule cells, which express both CBP and CaMKIV, show that CBP levels are also reduced in the absence of CaMKIV. However, expression of CaMKIV or incubation of these cells with proteasome inhibitors results in restoration of the CBP levels in Camk4−/− cells (data not shown). These results indicate that CaMKIV might regulate the stability of CBP in cells that express both proteins. Relevant to this idea, CBP levels can be regulated by proteolysis, as CBP polyubiquitination and degradation have been reported to occur in neurons undergoing degeneration in Huntington disease (39). Based on this collective evidence, we suggest that a Ca2+/CaMKIV/CREB/CBP signaling pathway is critical for the maintenance of HSC homeostasis and that one target for this pathway is likely to be the Bcl-2 gene (Fig. 5C). Unquestionably, additional genes regulated by this pathway collectively contribute to the regulation of HSC self-renewal by CaMKIV, and we are actively pursuing their identity. Nevertheless, our data argue that decreased levels of Bcl-2, a protein with dual roles in the maintenance of cell survival and cell quiescence (31), may be an important contributing factor in the inability of the HSCs of the Camk4−/− mice to maintain quiescence and in the inappropriate proliferation of these cells when challenged with an expansion signal. In addition, the decrease in Bcl-2 in these proliferating HSCs might result in increased susceptibility of this cell population to apoptosis, and together these events result in the eventual exhaustion of the hematopoietic stem cells.
Exploring OCR Capabilities of GPT-4V(ision): A Quantitative and In-depth Evaluation

This paper presents a comprehensive evaluation of the Optical Character Recognition (OCR) capabilities of the recently released GPT-4V(ision), a Large Multimodal Model (LMM). We assess the model's performance across a range of OCR tasks, including scene text recognition, handwritten text recognition, handwritten mathematical expression recognition, table structure recognition, and information extraction from visually-rich documents. The evaluation reveals that GPT-4V performs well in recognizing and understanding Latin content, but struggles with multilingual scenarios and complex tasks. Specifically, it showed limitations when dealing with non-Latin languages and complex tasks such as handwritten mathematical expression recognition, table structure recognition, and end-to-end semantic entity recognition and pair extraction from document images. Based on these observations, we affirm the necessity and continued research value of specialized OCR models. In general, despite its versatility in handling diverse OCR tasks, GPT-4V does not outperform existing state-of-the-art OCR models. How to fully utilize pre-trained general-purpose LMMs such as GPT-4V for OCR downstream tasks remains an open problem. The study offers a critical reference for future research in OCR with LMMs. Evaluation pipeline and results are available at https://github.com/SCUT-DLVCLab/GPT-4V_OCR.

Particularly, the recent release of GPT-4V(ision) [14] presents a significant breakthrough in the domain of LMMs. Researchers across diverse fields are eager to comprehend the capabilities of GPT-4V, with those in the Optical Character Recognition (OCR) domain displaying particular curiosity about its potential to address OCR tasks. While the official report qualitatively demonstrates GPT-4V's abilities in several OCR-related tasks (including text recognition, expression recognition, and document understanding), quantitative assessment and in-depth analysis are urgently needed; these will provide valuable insights and essential references for future research.

The evaluation results suggest that GPT-4V does not match the performance of specialized OCR models. Specifically, GPT-4V demonstrates superior performance on Latin content but encounters limitations when dealing with other languages. Furthermore, GPT-4V struggles in complex scenarios for tasks such as HMER, TSR, and VIE.

Based on the experimental results, we try to address an important question: do specialized models still hold research value in the OCR field? Given the three critical drawbacks of GPT-4V, namely, limited performance in multilingual and complex scenarios, high inference costs, and challenges in updating, we argue that existing LMMs struggle to simultaneously handle various OCR tasks [79]. Therefore, we affirm the continued research value of specialized models in the OCR field. However, it is still crucial to leverage the potential of LMMs like GPT-4V for future OCR research. There may be three potential directions worth investigating, including semantic understanding enhancement, downstream task finetuning, and auto/semi-auto data construction.
Experiments
We evaluate GPT-4V on the following OCR tasks: scene text recognition, handwritten text recognition, handwritten mathematical expression recognition, table structure recognition, and information extraction from visually-rich documents. The evaluation process was conducted within the web-based dialogue interface of GPT-4V, in which we directly uploaded the image and prompt, and then extracted relevant answers from the generated responses. The prompts for each task were meticulously designed. Additionally, to prevent interference from contextual information, we used a separate dialogue window for each image. Due to the conversation limits (50 conversations per 3 hours) of GPT-4V, we conducted sampling on datasets with a large number of samples.

Scene text recognition
Dataset
We focus on both word-level text recognition and end-to-end text spotting. For word-level text recognition, we employ CUTE80 [66], SCUT-CTW1500 [67], Total-Text [68], and WordArt [69] in English, and ReCTS [70] in Chinese. We randomly select 50 images from each dataset above for evaluation. The datasets are downloaded from [2].
• CUTE80 comprises 80 images specifically curated for the purpose of evaluating curved text.
• SCUT-CTW1500 is a comprehensive curved text dataset encompassing a total of 1500 images.
• Total-Text has 1,555 scene images, which were collected with curved text in mind.
• WordArt consists of 6316 artistic text images, which primarily feature challenging artistic text.
• ReCTS is a large-scale dataset of 25,000 images, which mainly focuses on reading Chinese text on signboards.
In the end-to-end text spotting task, we use MLT19 [71] to evaluate the multilingual capabilities of GPT-4V. For each language, we randomly select 20 images from the training set. Additionally, to investigate the impact of image resolution on recognition results, we select 20 English images from the aforementioned subset and resize their long sides to 128, 256, 512, 1024, and 2048 pixels, respectively.
• MLT19 is a dataset for Multi-Lingual scene Text (MLT) detection and recognition, which consists of 20,000 images containing text from 10 languages.

Prompt
For word-level English text recognition, we use the following prompt: "What is the scene text in the image?", while for ReCTS in Chinese, we translate the prompt into Chinese, resulting in: "图片中的场景文字是什么?" The prompt for end-to-end text spotting is: "What are all the scene text in the image? Do not translate."

Metric
For the evaluation of word-level recognition, we employ word accuracy ignoring case and symbols (WAICS) [80] as the metric. In the task of end-to-end text spotting, the predictions of GPT-4V and the ground truths (GT) are split on spaces and then evaluated using precision and recall. Precision represents the ratio of correctly identified words to those generated by GPT-4V, while recall is the ratio of correctly identified words to the total number of GT words. We also compute the F1 score as follows: F1 = 2 × Precision × Recall / (Precision + Recall).

[2] https://github.com/Yuliang-Liu/MultimodalOCR
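A minimal sketch of this evaluation protocol is given below: predictions and ground truth are split on whitespace and matched as multisets, after which precision, recall, and F1 follow from the counts. The normalization step (lower-casing and stripping symbols, in the spirit of WAICS) and all names here are our own illustration of the procedure, not the official evaluation code.

```python
import string
from collections import Counter

def normalize(word: str) -> str:
    """Lower-case and strip symbols, as in word accuracy ignoring case and symbols."""
    return "".join(ch for ch in word.lower() if ch not in string.punctuation)

def spotting_scores(prediction: str, ground_truth: str):
    """Split both strings on whitespace and match words as multisets."""
    pred = Counter(normalize(w) for w in prediction.split())
    gt = Counter(normalize(w) for w in ground_truth.split())
    correct = sum((pred & gt).values())        # multiset intersection
    precision = correct / max(sum(pred.values()), 1)
    recall = correct / max(sum(gt.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = spotting_scores("HOTEL Grand opne", "Grand HOTEL open")
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```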
Results and analysis
The results are shown in Table 1, Table 2 and Table 3, respectively. We visualize some examples in Figure 1. Based on the results, we draw the following insights:
(1) There is a substantial accuracy disparity between the recognition of English and Chinese text. As shown in Table 1, the performance of English text recognition is commendable. Conversely, the accuracy of Chinese text recognition is zero (ReCTS). We speculate that this may be due to the lack of Chinese scene text images as training data in GPT-4V.
(2) GPT-4V exhibits a strong ability to recognize Latin characters, surpassing its performance in other languages. As shown in Table 2, it can be observed that GPT-4V performs significantly better in English, French, German, and Italian, compared to non-Latin alphabet languages. This suggests noticeable limitations in GPT-4V's multilingual OCR capabilities.
(3) GPT-4V supports input images with different resolutions. As shown in Table 3, there is a positive correlation between the input image resolution and the recognition performance. This suggests that, unlike previous LMMs that resize images to a fixed size, GPT-4V supports input images with variable resolutions. Meanwhile, we hypothesize that the image encoder of GPT-4V employs a fixed patch size; therefore, increasing the resolution of the input image leads to a longer sequence, which helps the model to capture more information.

Table 1. Results of word-level scene text recognition. The SOTA of CUTE80 and WordArt are achieved by [80] and [81], respectively. [82] reported the SOTA on SCUT-CTW1500 and Total-Text. The SOTA of ReCTS can be found at [3].
Table 2. Results of MLT19. The SOTA of end-to-end text spotting in MLT19 can be found at [4].

[3] https://rrc.cvc.uab.es/?ch=12&com=evaluation&task=2
[4] https://rrc.cvc.uab.es/?ch=15&com=evaluation&task=4

Figure 1. Illustration of word-level scene text recognition. In the answers of GPT-4V, we highlight the characters that match the GT in green and characters that do not match in red. GPT-4V can recognize curved, slanted, and artistic English text, while common-style Chinese text cannot be recognized.

Handwritten text recognition
Dataset
To evaluate GPT-4V's capability in handwritten text recognition, we employ two commonly used handwritten datasets: IAM [72] (in English) and CASIA-HWDB [73] (in Chinese). We randomly sample 50 pages and 50 text lines from each of the test sets of IAM and CASIA-HWDB for evaluation.
• IAM comprises 1,539 pages and 13,353 lines of handwritten English text.
• CASIA-HWDB is an offline handwritten Chinese dataset, which contains about 5,090 pages and 1.35 million character samples of 7,356 classes (7,185 Chinese characters and 171 symbols).

Prompt
For IAM, we use the prompt "Recognize the text in the image." as input. And for CASIA-HWDB, we use the Chinese prompt "请直接告诉我,图片中的文字都是什么?", which means "Please tell me directly, what are all the text in the image?"

Metric
Two metrics are used for evaluation on handwritten English text: Word Error Rate (WER) and Character Error Rate (CER) [83]. To evaluate the performance on handwritten Chinese text, we use the AR and CR metrics [36].
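Since CER (and, at the word level, WER) reduces to an edit-distance computation, a compact reference implementation is sketched below. This is a generic illustration of the metrics under the usual Levenshtein definition, not the exact evaluation scripts used in the cited benchmarks.

```python
def levenshtein(ref, hyp) -> int:
    """Minimum number of insertions, deletions, and substitutions turning ref into hyp."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance over reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance over word sequences."""
    ref_words, hyp_words = reference.split(), hypothesis.split()
    return levenshtein(ref_words, hyp_words) / max(len(ref_words), 1)

print(f"CER = {cer('hello world', 'helo world'):.3f}")            # 1 edit / 11 chars
print(f"WER = {wer('the quick brown fox', 'the quick brown'):.3f}")  # 1 edit / 4 words
```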
Results and analysis
The results are shown in Tables 4 and 5.
(1) There is also a significant performance gap between English and Chinese handwritten text. This phenomenon is consistent with the findings in Section 2.1, which collectively suggests that GPT-4V performs well in English text recognition while facing notable challenges in Chinese.
(2) GPT-4V exhibits significant hallucinations in Chinese text recognition. As shown in Figure 2 (c) and (d), the responses generated by GPT-4V demonstrate a high degree of fluency in both grammar and semantics. However, they substantially deviate from the textual content of the ground truth (GT), appearing to produce nonsensical information in a seemingly earnest manner.

Table 4. Results of IAM. The SOTA of page-level IAM in the WER and CER metrics are achieved by [84] and [85], respectively. And the line-level SOTA is achieved by [86].

Table 5. Results of CASIA-HWDB. The SOTA of page-level CASIA-HWDB in the AR and CR metrics are achieved by [87] and [88], respectively. And the line-level SOTA is achieved by [36].

Method | Page-level (AR / CR) | Line-level (AR / CR)
GPT-4V | 0.97% / 36.54% | −3.45% / 11.85%
Supervised-SOTA | 96.83% / 96.99% | 97.70% / 97.91%

Figure 2. Illustration of handwritten text recognition. In the responses of GPT-4V, we highlight characters that match the GT in green and characters that do not match in red. For English text, GPT-4V demonstrates excellent performance. In contrast, for Chinese text, GPT-4V generates a passage of text that is semantically coherent, but it is not associated with the ground truth text (GT).

Handwritten mathematical expression recognition
Dataset
For this task, we employ two representative datasets: CROHME2014 [74] and HME100K [42]. We randomly select 50 images from the test sets of each of these two datasets for evaluation.
• CROHME2014 is a classical online dataset for handwritten mathematical expression recognition, which comprises 9,820 samples of mathematical expressions.
• HME100K is a large-scale handwritten mathematical expression recognition dataset, which contains 100k images from ten thousand writers and is mainly captured by cameras.

Prompt
In this task, we use "This is an image of a handwritten mathematical expression. Please recognize the expression above as LaTeX." as the prompt.

Metric
The metrics we employed include the correct rates at the expression level, and with at most one to three errors [74].

Results and analysis
The results are shown in Table 6. Based on the analysis of the failed cases, we draw the following findings.
(1) GPT-4V appears to be limited when dealing with camera-captured and poor handwriting scenarios. As shown in Table 6, the performance on HME100K (which features camera-captured images and poor handwriting) significantly drops compared to CROHME2014. As shown in Figure 3, (a) and (c) are examples from CROHME2014, while (b) and (d) are from HME100K; GPT-4V performs well on the former, but poorly on the latter.
(2) GPT-4V exhibits certain challenges in fine-grained character recognition. Among the failed cases, we observed instances where GPT-4V occasionally missed small-scale characters. Two examples are shown in Figure 3 (e) and (f). For these two examples, GPT-4V has omitted a superscript and a subscript, respectively. This finding aligns with the evaluation results of Liu et al. [79] on other multimodal models, suggesting that GPT-4V may also suffer from certain fine-grained perceptual issues.

Figure 3. Illustration of handwritten mathematical expression recognition. In each example, the left side displays the input image, while the right side shows the image rendered from the LaTeX sequence output by GPT-4V. In the answer of GPT-4V, we highlight elements that match the GT in green and elements that do not match in red. The symbol _ in red represents the missing elements in the output.

Table structure recognition
Dataset
The datasets we used for this task include SciTSR [75] and WTW [76]. We randomly select 50 tables from each of the test sets of SciTSR and WTW for evaluation. Following [53], we crop table regions from the original images for evaluation.

Metric
To evaluate the performance of GPT-4V in table structure recognition, we use the TEDS-S metric [48], which is a variation of Tree-Edit-Distance-based Similarity (TEDS) [48] that disregards the textual content of the cells and only evaluates the accuracy of the table structure prediction.
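For reference, TEDS scores a predicted table against the ground truth by computing the tree edit distance between the two HTML table trees, normalized by the size of the larger tree; TEDS-S applies the same formula while ignoring cell content. Written out (following [48]):

```latex
% TEDS between a predicted tree T_a and a ground-truth tree T_b, following [48];
% TEDS-S uses the same definition but disregards the textual content of cells.
\mathrm{TEDS}(T_a, T_b) = 1 - \frac{\mathrm{EditDist}(T_a, T_b)}{\max\left(|T_a|,\, |T_b|\right)}
```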
Results and analysis
The results are shown in Table 7. We draw two important findings from the results:
(1) GPT-4V struggles with complex tables. GPT-4V demonstrates outstanding performance when handling tables with structured layouts and consistent text distributions, such as Figure 4 (a). However, when dealing with other types of tables, including those with numerous empty cells, uneven text distribution, skewing, rotation, or densely packed arrangements, its performance noticeably declines.
(2) Content omission issues are observed in GPT-4V when processing lengthy tables. Despite emphasizing the requirement of "do not omit anything" in the prompt, we still observed some instances of content omission in the responses, particularly in the case of large tables. A typical example is shown in Figure 4 (e): the table image in Figure 4 (c) contains many rows, but GPT-4V only reconstructs three of them.

Table 7. The TEDS-S of SciTSR and WTW. The SOTA of SciTSR and WTW are both achieved by [52].

Information extraction from visually-rich document
Dataset
• The FUNSD dataset is a commonly used form understanding benchmark, which contains 199 scanned form-like documents with noisy images.
• The XFUND dataset is a multilingual extension of FUNSD that covers seven languages (Chinese, Japanese, French, Italian, German, Spanish, and Portuguese).
We evaluate GPT-4V on the Semantic Entity Recognition (SER) and the end-to-end Pair Extraction tasks. The SER task requires the model to identify the category of each text segment; the categories are predefined as header, question, answer, and other in FUNSD and XFUND. The end-to-end pair extraction task asks the model to extract all the key-value pairs in the given document image. We use the full test set (both FUNSD and XFUND-zh contain 50 samples) for performance evaluation.

Prompt
For FUNSD, we use the following prompt for SER:
Please read the text in this image and return the information in the following JSON format (note xxx is placeholder, if the information is not available in the image, put "N/A" instead). "header": [xxx, ...], "key": [xxx, ...], "value": [xxx, ...]
It is important to highlight that we redefined the official entity types of "question" and "answer" as "key" and "value" to maintain consistency with the Pair Extraction task.
For end-to-end Pair Extraction, we use the following prompt:
You are a document understanding AI, who reads the contents in the given document image and tells the information that the user needs. Respond with the original content in the document image, do not reformat. No extra explanation is needed. Extract all the key-value pairs from the document image.

Metric
For the SER task, we employ the entity-level F1-score [60] for performance evaluation. Additionally, Normalized Edit Distance (NED) is also calculated, as is done in other end-to-end VIE methods [65]. However, due to limitations in GPT-4V's ability to generate precise bounding boxes for entities, we aligned predictions with ground truth using the principle of minimum edit distance.
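A simplified sketch of this alignment-based scoring is given below: each predicted string is greedily matched to the unused ground-truth string with the smallest normalized edit distance, and the mean similarity over ground-truth entries is reported. This is our own illustration of the principle described above, not the exact matching procedure used in the evaluation.

```python
def levenshtein(a: str, b: str) -> int:
    """Standard edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def ned(a: str, b: str) -> float:
    """Normalized edit distance in [0, 1]."""
    return levenshtein(a, b) / max(len(a), len(b), 1)

def align_and_score(predictions, ground_truths):
    """Greedily pair each prediction with the closest unused ground truth
    (minimum NED), then return the mean similarity 1 - NED over GT entries;
    unmatched GT entries contribute a similarity of 0."""
    remaining = list(ground_truths)
    total = 0.0
    for pred in predictions:
        if not remaining:
            break
        best = min(remaining, key=lambda gt: ned(pred, gt))
        total += 1.0 - ned(pred, best)
        remaining.remove(best)
    return total / max(len(ground_truths), 1)

score = align_and_score(["DATE: 1998", "Totl"], ["Date: 1998", "Total", "Brand"])
print(f"mean similarity = {score:.2f}")
```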
Table 8. SER results on FUNSD and XFUND-zh. The SOTA on FUNSD is provided in [65].

(3) The long cycle and complex process of updating make it difficult to promptly address minor issues. Considering the aforementioned shortcomings and the limited OCR capabilities of some other LMMs [79], we believe that existing LMMs struggle to simultaneously excel at the various OCR tasks. We therefore contend that specialized models in the field of OCR continue to hold significant value for research.

How can we fully leverage the potential of LMMs like GPT-4V in the OCR domain? These are some possible strategies.

(1) Semantic understanding enhancement: a significant characteristic of LMMs lies in their outstanding semantic capabilities after extensive training on large-scale data. Since semantic understanding is a crucial factor in document comprehension and related tasks, harnessing the semantic potential of LMMs can greatly enhance performance on these tasks.

(2) Downstream-task fine-tuning: another approach that fully leverages the prior knowledge of LMMs is fine-tuning, especially in scenarios with limited data. Fine-tuning allows the model to adapt to specific tasks or domains, thus improving performance [89].

(3) Auto/semi-auto data construction: using LMMs for automatic or semi-automatic data annotation and generation would substantially reduce the cost of manual labeling, which is an effective strategy for tackling the difficulties of data acquisition [90].

Limitations There are three main limitations to our work. First, the test samples in our evaluation are small-scale (mostly 50 samples per dataset) due to the conversation limit (50 conversations per 3 hours) of GPT-4V; this could potentially limit the generalizability of the results. Second, our assessment primarily focuses on mainstream OCR tasks and does not include other OCR-related tasks, so the findings might not cover the full spectrum of GPT-4V's OCR capabilities. Third, only the zero-shot capacity of GPT-4V in OCR was evaluated, without exploring few-shot scenarios; as a result, the potential benefits of further adapting the model to specific tasks are not addressed. Few-shot scenarios, with techniques such as in-context learning [91], are worth exploring in the future.

Conclusion In this paper, we present a comprehensive evaluation of the OCR capabilities of GPT-4V through a variety of experiments. For the first time, we offer not only qualitative demonstrations but also a quantitative performance analysis of GPT-4V across a wide spectrum of tasks. These tasks encompass scene text recognition, handwritten text recognition, handwritten mathematical expression recognition, table structure recognition, and information extraction from visually rich documents.
Our findings, grounded in meticulous experimental results, provide an in-depth analysis of the strengths and limitations of GPT-4V. Although the model shows a strong ability to accurately recognize Latin content and supports input images of variable resolutions, it displays notable struggles with multilingual and complex scenarios. Additionally, the high inference cost and the challenges associated with continuous updating pose significant barriers to the real-world deployment of GPT-4V. We therefore contend that specialized models in the field of OCR continue to hold significant value for research. Despite these limitations, GPT-4V and other existing general LMMs could still significantly contribute to the development of the OCR field in several ways, including enhancing semantic understanding, fine-tuning for downstream tasks, and facilitating auto/semi-auto data construction.

In summary, this paper presents a first-of-its-kind, in-depth quantitative evaluation of GPT-4V's performance on OCR tasks. We will continuously update the evaluation results in the future, and we hope the findings in this paper provide valuable insights and strategies for researchers and practitioners working on OCR tasks with large multimodal models.

Figure 1. Illustration of word-level scene text recognition. In the answers of GPT-4V, we highlight the characters that match the GT in green and the characters that do not match in red. GPT-4V can recognize curved, slanted, and artistic English text, while common-style Chinese text cannot be recognized.

Figure 2. Illustration of handwritten text recognition. (a), (b), (c), and (d) are samples of page-level IAM, line-level IAM, page-level CASIA-HWDB, and line-level CASIA-HWDB, respectively. In the responses of GPT-4V, we highlight characters that match the GT in green and characters that do not match in red. For English text, GPT-4V demonstrates excellent performance. In contrast, for Chinese text, GPT-4V generates a passage of text that is semantically coherent but not associated with the ground-truth text (GT).

Figure 4. Illustration of table structure recognition. (a) and (c) are two input images; (b) and (d) are the corresponding visualized images of GPT-4V's html-style output sequences. (e) is the output sequence for (c), in which the elements with which GPT-4V indicates the omitted content are highlighted in red.

Figure 5. Illustration of error cases in the SER task. The text content enclosed within the red boxes is incorrectly identified as header entities.

Figure 6. Additional visualization results for the SER task.

Figure 7. Illustration of error cases in the Pair Extraction task. The text content enclosed within the red boxes is incorrectly identified as entity pairs.

Table 3. Impact of image resolution on recognition performance on the MLT19 English subset.

Table 6. Results of handwritten mathematical expression recognition. The SOTA on CROHME2014 and HME100K is achieved by [46] in both cases.
• SciTSR is a dedicated dataset created for the task of table structure recognition in scientific papers. It consists of 12,000 training samples and 3,000 test samples.

• WTW's images are collected in the wild. The dataset is split into training/testing sets with 10,970 and 3,611 samples, respectively.

Prompt For both SciTSR and WTW, we use the prompt "Please read the table in this image and return a html-style reconstructed table in text, do not omit anything." as input.
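For illustration, the sketch below shows how such a prompt-plus-image query could be issued programmatically through the OpenAI Python SDK (v1, late 2023). This is an assumed equivalent only: the evaluation in this paper ran under GPT-4V's conversational limits rather than through this API, and the model name, file path, and token budget here are placeholders.

```python
# Hypothetical programmatic version of the table-structure query.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("table.png", "rb") as f:  # placeholder image path
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",   # vision-capable model as of late 2023
    max_tokens=2048,                # generous budget for long html tables
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Please read the table in this image and return a "
                     "html-style reconstructed table in text, "
                     "do not omit anything."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)  # html-style table sequence
```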
A NoSQL Geo-Data Solution for the Consumption of Services on the Web

Web applications and portals are strategic gateways for delivering tools, data, computational infrastructures and services over the Internet. Software and data interoperability is the key factor in enabling the integration of knowledge and the sharing of common objectives. Web applications make ever more use of big spatial-data ecosystems that usually involve cross-border data flows and rely on the open Internet. Demand for web-GIS-based applications, in particular, has shown steady growth over the last few years, indicative of a scenario in which spatial-data infrastructures will increasingly be consumed by mobile and web applications. The management and analysis of large and growing volumes of geo-data is challenging the scientific community, with no clear long-term solutions. The INNO project's objective is to improve, develop and apply innovative, state-of-the-art technologies to efficiently query, render and expose spatially enabled data on the web. The proposed solution is based on a NoSQL database infrastructure, the set-up of an efficient, innovative communication protocol, and the use of a light web-GIS client library to view the results. Two specific goals are recognized as being of paramount importance: to improve the consumption of spatial data on the web, and to build regional capacities on Global Earth Observation (GEO) by proposing new standards and approaches.

I. INTRODUCTION

Over the last few years, GIS (Geographic Information System) based technologies have seen a drastic increase in the production of GIS applications and projects in a vast variety of contexts. Many traditional standalone applications are being converted into client/server solutions exposed through mobile and web applications. This indicates a scenario in which geo-data infrastructures will be consumed ever more in the future, with the massive use of web services [1], [2]. Reports on the international market and research (e.g. Research and Markets, http://www.marketsandmarkets.com) highlight that the GIS market is expected to grow at a rate higher than 10% over the next years. Analysts state that such expansion is due to increased demand from all sectors, both private and public. At the same time, geo-data is steadily growing in volume and spatial resolution. Due to the rapid growth in the volume of demand served from mobile devices and web-based tools, large numbers of geo-distributed data centres today benefit from modern cloud infrastructures.

A geo-database is a database optimized to store and query spatial tables (layers). Layers are objects defined in a geometric space and are usually referred to a geographic reference system. Currently, the WGS84 (World Geodetic System 1984), with revisions as recent as 2004, is the most widely used reference system worldwide. Geo-databases allow the representation of most geometric objects, such as points, lines, polygons and 3D elements.

The experience gained during many collaboration initiatives [3], such as the Global Earth Observation System of Systems (GEOSS), highlights a dire need to increase software and data interoperability for the sharing of information and knowledge between repositories from different sources. So far, many open standards and web interoperability services have been considered, such as the Web Map Service (WMS), the Web Feature Service (WFS), and the Web Map Tile Service (WMTS), supported by the Open Geospatial Consortium (OGC) [4].
WMS is a standard protocol for serving geo-referenced map images over the Internet; the images are generated by a map-rendering engine using data from a GIS data source. WFS is an interface that allows geographical features to be obtained across the web using platform-independent calls. While the WMS interface, or online mapping portals like Google Maps, return an image, the WFS interface provides the geometric and alphanumeric data of the geographical layer, which end-users can edit or analyze. When exploiting WFS, a web client obtains fine-grained information in GML (Geography Markup Language), a specialized XML format for geospatial data that describes both geometry and attributes. WMTS is a standard protocol for serving pre-rendered, geo-referenced map tiles over the Internet. This service targets situations where short response times are necessary: WMS or WFS are not practical when dealing with massively parallel, CPU-intensive use cases. As a matter of fact, producing an image response through a WMS service can require several CPU seconds, depending on various factors. To overcome this CPU-intensive, on-the-fly rendering problem, pre-rendered map tiles can be used (e.g. Google Maps), and several schemes have been created to manage such tiles. No existing web service solves all problems and, although their usefulness is widely recognized, the use of interoperability standards (e.g. OGC web services) is still limited.

II. OBJECTIVES AND METHODS

Scalability and flexibility of web applications, data accessibility and security are open issues tightly bound to technological development [5]. Such needs have been addressed by the INNO project through the development of a suite of tools and in-house solutions to exploit large geo-data sets made available through a Storage Data Infrastructure (SDI). These tools are loosely coupled components that address specific needs such as data management and accessibility. A service-oriented architecture has been optimised for deploying, storing, managing and querying GIS-based data with a NoSQL approach.

The solution we propose was developed and positively tested for the management of geo-data, and is based on a modified version of the OGC WMTS implementation. It is built on the use of vector tiles with different degrees of resolution at different zoom levels. This ensures scalability, and data can be replicated depending on the available resources. Each tile is very light and is stored within the NoSQL database as a JSON document. Such documents hold the geometry of a geographical layer at a degree of resolution appropriate to each zoom level (Fig. 1). This guarantees flexibility and fast response times on the one hand and, on the other, maintains the fine-grained information level of a WFS.

JSON documents are used to represent all objects (map tiles, i.e. geometry and attributes) and their mutual relationships. The use of a document model enhances flexibility, so that application objects can be changed without having to migrate the database schema. Another advantage of a flexible, document-based data model is that it is well suited to representing real-world items in the way you want to represent them. JSON documents support nested structures, as well as fields representing relationships between items, which enables developers to realistically represent objects in the application. The use of a NoSQL approach implies that appropriate algorithms must be implemented to allow efficient access to the data.
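To make the document model concrete, the sketch below shows what a single vector-tile JSON document might look like. The paper does not publish its exact schema, so every field and layer name here is an illustrative assumption; the tile indices follow the standard slippy-map numbering adopted in the test case described below.

```python
# Hypothetical vector-tile document for the NoSQL store; field and
# layer names are illustrative assumptions, not the INNO project's schema.
import json
import math

def lonlat_to_tile(lon: float, lat: float, zoom: int):
    # Standard slippy-map tile numbering (OpenStreetMap convention).
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

x, y = lonlat_to_tile(9.1, 39.2, 10)  # a point in southern Sardinia, zoom 10
tile = {
    "layer": "municipalities_sardinia",
    "zoom": 10,
    "x": x,
    "y": y,
    "features": [{
        # WKT geometry (more compact than GeoJSON), simplified for zoom 10.
        "geometry": "POLYGON((9.05 39.15,9.15 39.15,9.15 39.25,9.05 39.15))",
        "attributes": {"name": "Cagliari"},
    }],
}
doc_id = f"{tile['layer']}/{tile['zoom']}/{x}/{y}"  # document key in the store
print(doc_id, len(json.dumps(tile)), "bytes")
```

Keying each document by layer, zoom and tile indices lets the client fetch exactly the tiles covering its viewport with simple key lookups, which is what makes the key-value model of the NoSQL store a good fit for this access pattern.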
The communication protocol between the back-end and the front-end is designed to exchange the least amount of information possible: only JSON documents (numeric vector maps) of small size are transferred. These documents are then processed and rendered (e.g. transformed into images) by the front-end. Connectors and interfaces have been developed for transparent access to the data. Client applications can exploit specialized API functions to obtain the required data. These elements enable the user to remotely query the database, which responds with the documents the application requests.

III. BACK-END AND CLIENT

A suite of modules has been developed to manage the server-side and client-side aspects. Creating the NoSQL database implies a pre-processing phase that, in our case, takes place in a PostgreSQL/PostGIS [6] environment. For each zoom level, vector tiles are created. In detail, a server-side procedure processes the geographical layer to create a NoSQL instance. The geographical layer is subdivided into tiles with varying precision at different zoom levels. Usually, 18 zoom levels are created for each geographical layer (the higher the zoom level, the greater the number of files to be created); at the 18th zoom level, a geographical area of 1° by 1° generates about one million files. To limit the number of files at each zoom level, macro tiles can also be created. At lower zoom levels, vector data are simplified in order to limit both the transfer of data and the rendering done by the client. The simplification needs to be calibrated so as to create tiles with enough detail: if the data is over-simplified, the image rendered on the client side appears too coarse.

On the back-end, an Extract, Transform and Load (ETL) procedure has been developed that works as follows (a sketch of the simplification step is given at the end of this section):

• Insert the layers (e.g. in shapefile format) into a PostgreSQL/PostGIS DBMS. In order to be loaded, a layer must contain valid geometry elements (e.g. in Well Known Text (WKT) format). This is required by the functions that transform the data from a PostGIS table into JSON documents (http://json.org/). WKT was chosen in place of GeoJSON because it is more compact.

• Process the data: the processing takes place in the PostGIS engine, because NoSQL database engines are still unable to adequately manage and manipulate GIS data. The procedures create JSON documents for each zoom level and for each layer to be included in the NoSQL database.

• Apply a simplification algorithm to create the different zoom levels.

• Deploy the JSON documents for all layers within the NoSQL database.

• Create the indexes in the database.

A first version of the client/server communication protocol necessary for exploiting the geo-database infrastructure has been developed. It enables querying of and access to the geographical layers, and allows retrieval of the vector tiles to be rendered and themed by the client.

In Fig. 2 and Fig. 3, we show the application of the simplification algorithm to the layer "Municipalities of Sardinia" at the 10th zoom level. As the figures show, this step requires a calibration phase in order to create documents that, at each zoom level, carry enough information to depict the real world; once calibrated, the rendered image represents the real world more realistically.
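The per-zoom simplification step referenced in the ETL procedure above might be sketched as follows. This is an illustrative reconstruction, assuming the `psycopg2` driver and a hypothetical `municipalities(name, geom)` table; the pixel-based tolerance schedule is an assumption, not the project's calibrated values.

```python
# Per-zoom simplification of a PostGIS layer into JSON-ready records.
import psycopg2

conn = psycopg2.connect("dbname=gis")  # placeholder connection string

# One simplification tolerance per zoom level: roughly one screen pixel
# (256-pixel tiles) expressed in degrees, so lower zooms are coarser.
TOLERANCE = {z: 360.0 / (256 * 2 ** z) for z in range(4, 19)}

docs = []
with conn, conn.cursor() as cur:
    for zoom, tol in sorted(TOLERANCE.items()):
        # ST_SimplifyPreserveTopology reduces vertices while keeping the
        # polygons valid; WKT output matches the compact format chosen above.
        cur.execute(
            "SELECT name, ST_AsText(ST_SimplifyPreserveTopology(geom, %s)) "
            "FROM municipalities",
            (tol,),
        )
        for name, wkt in cur:
            docs.append({"zoom": zoom, "name": name, "geometry": wkt})

# `docs` would then be split into slippy-map tiles, bulk-loaded into the
# NoSQL store, and indexed, per the last two ETL steps above.
```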
IV. THE TEST CASE

The use of a NoSQL database engine implies that the data structure needs to be designed specifically to meet the requirements of the application. In this sense, a NoSQL database is an application-oriented infrastructure, while an SQL database can be considered general-purpose and can serve many applications. The inner logic of a NoSQL implementation must be defined to satisfy a key-value relationship type; this approach does not provide explicit links between pieces of information.

A careful analysis of various NoSQL engines was carried out, paying particular attention to spatial-extension functionalities. MongoDB (www.mongodb.com), Couchbase (www.couchbase.com) and other NoSQL technologies were tested. Couchbase was chosen as it allows the creation of R-tree indexes; R-trees are data structures used for spatial access. With the R-tree model, Couchbase provides various functions to deal with spatial features (e.g. finding the polygons within a bounding box, or the lines that intersect a bounding box). Other NoSQL engines, such as MongoDB (one of the most popular NoSQL database systems), can also manage spatial data, but their geospatial implementations are more primitive and offer only limited spatial analysis capabilities and functions. Couchbase is a leading distributed NoSQL database, which supports key applications and is available as an enterprise-level software package.

The INNO tools were tested with various layers of different geometry provided by the following data centres:

• The Sardinian Geo-portal (http://www.sardegnageoportale.it/): it provides high-resolution geo-data of Sardinia (Italy). Data are exposed via web services (mostly WMS) and the infrastructure meets the requirements of the INSPIRE directives.

• OpenStreetMap: it provides worldwide data such as buildings, railways, roads, waterways, land use, natural elements, locations and points of interest. For each layer, a descriptive table is also provided.

Tiles are generated using the "Slippy Map Tilenames" definition (http://wiki.openstreetmap.org/wiki/Slippy_map_tilenames), although in our implementation no images are produced, only JSON documents. Each zoom level corresponds to a directory. The zoom parameter is an integer between 0 (zoomed out) and 18 (zoomed in); 18 is normally the maximum, although some tile servers go beyond that. In Table I, we show, for the layer "Municipalities of Sardinia" and for each zoom level (from 4 to 18), the number of tiles, the maximum size of the JSON documents, and the processing time required to process a geographical area of 1° × 1° within the island of Sardinia (centre of the Mediterranean Sea).

As with WFS, rendering is accomplished directly on the web client. Tile generation, by contrast, is performed each time a new layer is inserted into the NoSQL database. The Leaflet (http://leafletjs.com/) open-source JavaScript library has been chosen to visualize the data on the web client. This library is particularly optimized for mobile, user-friendly interactive maps; it works efficiently across all major web and mobile platforms, making use of HTML5 and CSS3 on modern browsers while remaining usable on older ones. In Fig. 2, we show a visualization of the communal borders of Sardinia stored in the NoSQL database.

V. CONCLUSIONS

This paper aims at improving existing interoperability methods for sharing data on the web.
In this regard, the INNO project aims at improving the scalability of big-geo-data infrastructures and at providing customized spatial services and tools to enhance capabilities for geospatial data creation, querying, analysis and visualization. The experiments we conducted show that our approach is valid and could be applied to many real situations.

ACKNOWLEDGMENT

The research work was supported by "Regione Autonoma della Sardegna" and "Sardegna Ricerche".

Pierluigi Cau holds a full-time position at the Center for Advanced Studies and Research in Sardinia (CRS4). He has been working in the Environmental Sciences program of the Energy and Environment sector since 2000. His research focuses on computational Geographical Information Systems and the development of innovative web ICT tools for the management of GIS data. He has organized international workshops and conferences, taught advanced courses on hydrology and GIS at universities, and tutored several interns and early-stage researchers.

Simone Manca has been with CRS4 since 2000. He works as an expert software engineer in the Distributed Computing Group. He currently deals with software development for high-performance computing infrastructures, virtualization and distributed storage. In the past, he worked in the biomedical field, developing interfaces and applications with health-informatics standards such as DICOM and HL7. He is also experienced in the environmental and geographical-information-systems fields, having developed decision-support tools for planners, integrations of numerical models, and web interfaces.